Team Alchemy

Members
Overall Objectives
Scientific Foundations
Software
New Results
Contracts and Grants with Industry
Other Grants and Activities
Dissemination
Bibliography

Section: New Results

Alternative computing models/Spatial computing

Compound circuits

Participants : Hugues Berry, Sylvain Girbal, Olivier Temam, Sami Yehia.

Besides parallelization, the other "spatial" scalability path is customization. Customization, which is very popular in embedded systems, has many assets: custom circuits are cheaper, faster and more power-efficient than processors. They can also speed up tasks which are by nature sequential (not parallel), so they are complementary to, not an alternative to, parallelism. Their main limitation is flexibility. As a result, we have investigated techniques which improve the flexibility of custom circuits while retaining the best possible performance, area and power properties. The first technique, which relied on collapsing processor instructions into circuits [153], was developed as part of the PhD of Sami Yehia, who went on to work at ARM research to apply such approaches to embedded processors, and later to Thales TRT. More recently, we developed together a novel bottom-up approach showing how to efficiently combine any number of custom circuits into a far more flexible compound circuit [47], without sacrificing the performance, area and power benefits of custom circuits. That approach was recently patented jointly with Thales.
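To give a flavour of the intuition behind operator sharing (this is an illustration of the general idea, not the combination algorithm of [47], which is more involved), the Python sketch below merges several circuit datapaths stage by stage: each stage of the compound circuit carries the maximum count of each operator over all input circuits, so any one of the original circuits can be configured onto the shared hardware. The list-of-stages circuit encoding and the stage-by-stage alignment are simplifying assumptions made for this sketch.

    from collections import Counter

    def compound(circuits):
        """Merge circuits (lists of per-stage operator lists) into one
        compound datapath: each stage keeps the max count of each operator."""
        depth = max(len(c) for c in circuits)
        merged = []
        for s in range(depth):
            stage = Counter()
            for c in circuits:
                if s < len(c):
                    stage |= Counter(c[s])   # multiset union = per-operator max
            merged.append(stage)
        return merged

    # Two toy custom circuits, described as stages of operators:
    a = [["mul", "mul"], ["add"]]
    b = [["mul"], ["add", "add"]]
    c = compound([a, b])
    print(c)                                 # [Counter({'mul': 2}), Counter({'add': 2})]
    print(sum(sum(s.values()) for s in c))   # 4 operators instead of 6 for two separate circuits

The saving grows with the number and similarity of the combined circuits, which is one intuition for why a compound circuit can remain close to custom circuits in area and power.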

ANNs as accelerators

Participant : Olivier Temam.

We make the case for considering a hardware ANN as a flexible yet energy-efficient, high-performance and defect-resilient accelerator, ideally positioned to tackle upcoming technology, application and programming challenges. For now, we focus this study on one type of algorithm, classifiers, which are commonly used in many RM applications. We present a hardware accelerator design for ANNs geared towards robustness and high performance. We show that transistor density has reached a level where it is now possible to spatially expand in hardware an ANN capable of handling medium-sized applications. Spatial expansion has multiple benefits in terms of robustness, energy efficiency, performance and scalability over previous time-multiplexed designs.

We synthesized our design at 90nm and showed that such a spatially expanded ANN accelerator achieves orders-of-magnitude reductions in energy, and similar improvements in performance, with respect to the same task executed on a modern processor at the same technology node, at a fraction of the on-chip area, justifying scaling down just one core in order to reap the energy and performance benefits.
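As a back-of-the-envelope illustration of why spatial expansion pays off (a simplifying cycle-count model of our own for this report, not figures from the synthesized design), compare a time-multiplexed layer, where one shared multiply-accumulate unit serves every synapse in turn, with a spatially expanded layer, where each synapse has its own multiplier and each neuron its own adder tree:

    import math

    def time_multiplexed_cycles(n_neurons, n_inputs, n_macs=1):
        # one shared MAC unit processes all synapses sequentially
        return math.ceil(n_neurons * n_inputs / n_macs)

    def spatially_expanded_cycles(n_inputs):
        # one multiplier per synapse plus a log-depth adder tree per neuron;
        # all neurons fire in parallel, so latency is independent of n_neurons
        return 1 + math.ceil(math.log2(n_inputs))

    # a medium-sized classifier layer: 90 neurons, 256 inputs each
    print(time_multiplexed_cycles(90, 256))   # 23040 cycles
    print(spatially_expanded_cycles(256))     # 9 cycles

The latency gap also suggests why a spatial design can meet the same throughput at a much lower clock frequency, which is one intuition for the energy savings.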

Bio-Inspired Computing

Systems biology of the role of glial cells in brain cell communications

Participants : Hugues Berry, Eshel Ben Jacob, Maurizio DePitta, Vladislav Volman, Mati Goldberg.

The 20th century witnessed the crystallization of the neuron as the fundamental building block responsible for higher brain functions. Yet, neurons are not the most numerous cells in the brain. In fact, up to 90% of brain cells may be glial cells rather than neurons, and glial cells, astrocytes in particular, are increasingly recognized as active partners in brain cell communications. This work is a long-term collaboration with Eshel Ben Jacob, the Maguy-Glass Chair in Physics of Complex Systems, School of Physics and Astronomy, Tel Aviv University, Israel.

As a first step, we derived and investigated a concise mathematical model for glutamate-induced astrocytic intracellular Ca2+ dynamics that captures the essential biochemical features of the regulatory pathway of inositol 1,4,5-trisphosphate (IP3) [12]. Compared with previous similar models, our three-variable model includes a more realistic description of the IP3 production and degradation pathways, lumping their essential nonlinearities together within a concise formulation. Using bifurcation analysis and time simulations, we demonstrate the existence of new putative dynamical features. The cross-couplings between the IP3 and Ca2+ pathways endow the system with self-consistent oscillatory properties and favor mixed frequency-amplitude encoding modes over pure amplitude-modulation ones. This article has been selected for the Faculty of 1000 Biology: http://www.f1000biology.com/article/id/1163674/evaluation. Our ongoing work investigates the biophysical mechanisms of calcium wave propagation in astrocyte populations and the astrocytic regulation of synaptic transmission between neurons.
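For readers who wish to experiment with this class of models, the sketch below integrates a generic three-variable (Ca2+, IP3-receptor gate h, IP3) astrocyte model of the Li-Rinzel family, extended with a simple glutamate-driven IP3 production/degradation balance. The rate laws and parameter values are illustrative textbook choices, not the fitted formulation of [12].

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative Li-Rinzel-type parameters (micromolar, seconds); NOT those of [12].
    v1, v2, v3 = 6.0, 0.11, 0.9          # IP3R channel, leak, SERCA pump rates
    k3, d1, d2, d3, d5 = 0.1, 0.13, 1.049, 0.9434, 0.08234
    a2, c0, c1 = 0.2, 2.0, 0.185         # h kinetics, total Ca, ER/cytosol volume ratio
    v_ip3, k_deg = 0.5, 0.14             # glutamate-driven IP3 production, degradation

    def rhs(t, y, glu):
        c, h, ip3 = y
        c_er = (c0 - c) / c1                          # closed-cell ER calcium
        m = ip3 / (ip3 + d1)                          # IP3 binding gate
        n = c / (c + d5)                              # Ca2+ activation gate
        j_chan = v1 * (m * n * h) ** 3 * (c_er - c)   # IP3R channel flux
        j_leak = v2 * (c_er - c)
        j_pump = v3 * c ** 2 / (c ** 2 + k3 ** 2)     # SERCA pump
        q2 = d2 * (ip3 + d1) / (ip3 + d3)
        dh = a2 * (q2 * (1 - h) - c * h)              # slow IP3R inactivation
        dip3 = v_ip3 * glu / (glu + 1.0) - k_deg * ip3
        return [j_chan + j_leak - j_pump, dh, dip3]

    sol = solve_ivp(rhs, (0.0, 300.0), [0.1, 0.9, 0.2], args=(1.0,), max_step=0.05)
    print(sol.y[0].min(), sol.y[0].max())             # Ca2+ oscillation envelope

Varying the glutamate input glu and plotting sol.y[0] against sol.t is enough to observe the transition between steady states and oscillatory regimes that bifurcation analysis characterizes systematically.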

AMYBIA : Aggregating MYriads of Bio-Inspired Agents

Participants : Hugues Berry, Nazim Fates, Bernard Girau.

In the framework of the ARC AMYBIA, we are searching for innovative schemes of decentralised and massively distributed computing. We mainly aim at contributing at three levels. At the modelling level, we think that biology provides us with complex and efficient models of such massively distributed behaviours. We start our study by addressing the decentralised gathering problem with the help of an original model of aggregation based on the behaviour of social amoebae. At the simulation level, our research mainly relies on achieving large-scale simulations and on obtaining large statistical samples. Mastering these simulations is a major scientific issue, especially considering the imposed constraints: distributed computations, parsimonious computing time and memory requirements. Furthermore, it raises further questions, such as how to handle asynchronism, randomness and statistical analysis. At the hardware level, the challenge is to constantly confront our models with the actual constraints of a true practice of distributed computing. The main idea is to consider the hardware as a kind of sanity check. Hence, we intend to implement and validate our distributed models on massively parallel computing devices. In return, we expect that the analysis of the scientific issues raised by these implementations will influence the definition of the models themselves.

As a first step, we have recently proposed a bio-inspired system based on the so-called Greenberg-Hastings cellular automaton (GHCA) to achieve decentralized and robust gathering of mobile agents scattered on a surface, or of computing tasks scattered on a massively distributed computing medium. As usual with such models, the GHCA has mainly been studied on a homogeneous and regular lattice. However, in the context of massively distributed computing, one also needs to consider unreliable elements and defect-based noise. A first analysis showed that in this case, phase transitions can govern the behaviour of the system. Our next goal was to broaden the knowledge of stochastic reaction-diffusion media by investigating how such systems behave when various types of noise are introduced. Hence, in [29], we study the GHCA when noise and topological irregularities of the grid are taken into account. Decreasing the probability of excitation qualitatively changes the behaviour of the system from an active to an extinct steady state. Simulations show that this change occurs near a critical threshold; it is identified as a nonequilibrium phase transition which belongs to the directed-percolation universality class. We test the robustness of the phenomenon by introducing persistent defects in the topology: directed-percolation behaviour is conserved. Using experimental and analytical tools, we suggest that the critical threshold varies as the inverse of the average number of neighbours per cell. This inverse proportionality law paves the way for obtaining generic laws (even approximate ones) to predict the position of the critical threshold in various simulation conditions.
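The minimal Python sketch below reproduces the qualitative phenomenon on a regular grid (the defect-injection and threshold-scaling experiments of [29] are not shown): a stochastic Greenberg-Hastings automaton whose cells rest, get excited with probability p when at least one neighbour is excited, then pass through a refractory state. Below a critical p, activity dies out; above it, activity persists. The grid size, the von Neumann neighbourhood, the single refractory step and the two sample values of p are simplifying assumptions.

    import numpy as np

    def ghca_step(grid, p, rng):
        # states: 0 = excitable (rest), 1 = excited, 2 = refractory
        excited = grid == 1
        nbr = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
               np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
        new = np.zeros_like(grid)                    # refractory -> excitable
        new[excited] = 2                             # excited -> refractory
        fire = (grid == 0) & nbr & (rng.random(grid.shape) < p)
        new[fire] = 1                                # stochastic excitation
        return new

    rng = np.random.default_rng(0)
    for p in (0.4, 0.8):                             # two excitation probabilities
        grid = (rng.random((128, 128)) < 0.05).astype(int)  # sparse initial excitation
        for _ in range(500):
            grid = ghca_step(grid, p, rng)
        print(p, (grid == 1).mean())                 # surviving activity density

Measuring the asymptotic density of excited cells while sweeping p locates the critical threshold; repeating the sweep on degraded or irregular neighbourhoods is how the inverse-proportionality law above can be probed numerically.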

The Impact of Network Topology on Self-Organizing Maps

Participants : Hugues Berry, Fei Jiang, Marc Schoenauer.

The connectivity structure of complex networks (i.e. their topology) is a crucial determinant of their information-transfer properties. Hence, the computation made by complex neural networks, i.e. neural networks with a complex connectivity structure, could as well depend on their topology. For instance, recent studies have shown that introducing a small-world topology in a multilayer perceptron increases its performance. However, other studies have inspected the performance of Hopfield or Echo State networks with small-world or scale-free topologies and reported more mixed results.

In [38], we study instances of complex neural networks, i.e. neural networks with complex topologies. We use Self-Organizing Map (SOM) neural networks, whose neighborhood relationships are defined by a complex network, to classify handwritten digits. We show that topology has a small impact on performance and on robustness to neuron failures, at least at long learning times. Performance may however be increased (by almost 10%) by evolutionary optimization of the network topology. In our experimental conditions, the evolved networks are more random than their parents, but display a more heterogeneous degree distribution. On the limited experiments presented here, it thus seems that the performance of the network is only weakly controlled by its topology. Interestingly, though, these slight differences can nevertheless be exploited by evolutionary algorithms: after evolution, the networks are more random than the initial small-world topology population. Their more heterogeneous connectivity distribution may indicate a tendency to evolve toward scale-free topologies. Unfortunately, this assumption can only be tested with large networks, for which the shape of the connectivity distribution can be unambiguously determined, but whose artificial evolution, for computation-cost reasons, could not be carried out. Similarly, future work will have to address other classical computation problems for neural networks before we can draw any general conclusion.
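A minimal sketch of the core ingredient, assuming numpy and networkx: a SOM whose neighbourhood function is driven by shortest-path distances on an arbitrary graph (here a Watts-Strogatz small-world network) rather than on a regular grid. The toy 2-D data, learning schedule and network parameters are placeholders; the experiments in [38] use handwritten digits and evolutionary optimization of the topology.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n = 64
    G = nx.connected_watts_strogatz_graph(n, k=4, p=0.1, seed=0)  # small-world topology
    # graph-distance matrix used as the SOM neighbourhood metric
    D = np.array([[nx.shortest_path_length(G, i, j) for j in range(n)]
                  for i in range(n)], dtype=float)

    W = rng.random((n, 2))                           # neuron weight vectors
    for t in range(2000):
        x = rng.random(2)                            # toy 2-D input sample
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        sigma = 3.0 * (0.05 / 3.0) ** (t / 2000)     # shrinking neighbourhood radius
        eta = 0.5 * (0.01 / 0.5) ** (t / 2000)       # decaying learning rate
        h = np.exp(-D[bmu] ** 2 / (2 * sigma ** 2))  # graph-distance kernel
        W += eta * h[:, None] * (x - W)

    print(W.min(axis=0), W.max(axis=0))              # map spread over the input square

Swapping G for any other graph (scale-free, rewired, evolved) changes only the distance matrix D, which is what makes this setup convenient for comparing topologies.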

Cortical Microarchitecture: Computing by Abstractions

Participants : Hugues Berry, Olivier Temam, Mikko Lipasti, Atif Hashmi.

Recent advances in the neuroscientific understanding of the brain are bringing about a tantalizing opportunity for building synthetic machines that perform computation in ways that differ radically from traditional Von Neumann machines. These brain-like architectures, which are premised on our understanding of how the human neocortex computes, are highly fault-tolerant, averaging results over large numbers of potentially faulty components, yet manage to solve very difficult problems more reliably than traditional algorithms. A key principle of operation for these architectures is automatic abstraction: independent features are extracted from highly disordered inputs and are used to create abstract, invariant representations of the external entities expressed in the inputs. This feature extraction is applied hierarchically, leading to increasing levels of abstraction at higher layers in the hierarchy.

In collaboration with Mikko Lipasti, University of Wisconsin at Madison, WI, USA, we introduce in [36] a behavioral model of this process, using biologically-plausible neuron-level behavior and structure, and illustrate it with an image recognition task. We also introduce a computationally-effective higher-order model, one that represents the behavior of hundreds of neurons in a cortical column using just two perceptrons, and show that it is capable of this same task. These models are a first step towards developing a comprehensive and biologically-plausible understanding of the computational algorithms and microarchitecture of computing systems that mimic the human neocortex.

Biological neural networks as bio-inspiration sources for future architectures

Participants : Hugues Berry, Olivier Temam.

Beyond a certain number of individual components, it is not even clear whether it will be possible to decompose tasks in such a way that they can take advantage of such a large number of computing resources. Searching for a solution to this problem has progressively led us to biological neural networks. Indeed, biological neural networks (as opposed to artificial neural networks, ANNs) are well-known examples of systems capable of complex information-processing tasks using a large number of self-organized, but slow and unreliable, components. And the complexity of the tasks typically processed by biological neurons is well beyond what is classically implemented with ANNs.

Emulating the workings of biological neural networks may at first seem far-fetched. However, the SIA (Semiconductor Industry Association), in its 2005 roadmap, addresses for the first time “biologically inspired architecture implementations” [138] as emerging research architectures, and focuses on biological neural networks as interesting scalable designs for information processing. More importantly, the computer science community is beginning to realize that biologists have made tremendous progress in understanding how certain complex information-processing tasks are implemented by biological neural networks.

One of the key emerging features of biological neural networks is that they process information by abstracting it, and then only manipulate such higher abstractions. As a result, each new input (e.g., for image processing) can be analyzed using these learned abstractions directly, thus avoiding rerunning a lengthy set of elementary computations. More precisely, Poggio et al. [131] at MIT have shown how combinations of neurons implementing simple operations such as MAX or SUM can automatically create such abstractions for image processing, and some computer science researchers in the image-processing domain have started to take advantage of these findings.
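As a concrete (and heavily simplified) illustration of this MAX/SUM scheme in the spirit of Poggio's HMAX model, the sketch below composes a SUM-like template-matching layer with a MAX-pooling layer; stacking such pairs yields increasingly abstract, position-tolerant features. The templates and pooling size are arbitrary placeholders, not the tuned features of [131].

    import numpy as np

    def s_layer(image, templates):
        # SUM units: each unit correlates a small template at one position
        h, w = image.shape
        t = templates.shape[1]
        out = np.empty((len(templates), h - t + 1, w - t + 1))
        for k, tpl in enumerate(templates):
            for i in range(h - t + 1):
                for j in range(w - t + 1):
                    out[k, i, j] = np.sum(image[i:i + t, j:j + t] * tpl)
        return out

    def c_layer(s_maps, pool=2):
        # MAX units: local max-pooling buys invariance to small shifts
        k, h, w = s_maps.shape
        h, w = h // pool * pool, w // pool * pool
        return (s_maps[:, :h, :w]
                .reshape(k, h // pool, pool, w // pool, pool)
                .max(axis=(2, 4)))

    image = np.random.default_rng(0).random((16, 16))
    edges = np.stack([np.array([[1., -1.], [1., -1.]]),    # vertical-edge template
                      np.array([[1., 1.], [-1., -1.]])])   # horizontal-edge template
    print(c_layer(s_layer(image, edges)).shape)            # (2, 7, 7)

Once such feature maps are learned, a new image is matched against them directly, which is the sense in which abstractions avoid rerunning elementary computations from scratch.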

We are starting to investigate the information-processing capabilities of this abstraction programming method [70]. While image processing is our first application as well, we plan to later look at a more diverse set of example applications.

Spatial complexity of reversible computing

Participants : Mouad Bahi, Christine Eisenbeis.

Especially since the work of Bennett on the reversibility of computation and on how to make a computation reversible, the relationship between reversibility, energy, computation and space complexity has gained interest in many domains of computer science. This direction could help us understand the physical limitations of processor performance. We have chosen to start by studying the space complexity of a DAG computation, defined as the maximum number of registers needed for performing the computation in both directions. This criterion is closely related to our more classical criterion of “register saturation”. We have defined heuristics for computing this number and have performed systematic experiments on all possible graphs of a given size. The first experiments tend to show that for a graph of size n, no more than n/2 additional registers are needed to perform the computation in both directions compared to the forward direction alone. This latter number can be considered as the “garbage” of the computation. More work is needed to prove or disprove this result formally and to understand the hypotheses under which it is valid [63]. In this work, all operations in the DAG are assumed to be reversible. See also [19].
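For intuition, here is a brute-force sketch of the forward-direction half of this measure for very small DAGs: enumerate topological orders and keep the schedule minimizing the peak number of simultaneously live values. The heuristics of [63] are far more efficient and additionally account for the values that must be kept so the backward pass can run; that reversible part is omitted here.

    from itertools import permutations

    def min_registers(n, edges):
        """Minimum, over all topological orders of a DAG on nodes 0..n-1,
        of the peak number of simultaneously live values (forward direction).
        A value is live from its definition until its last consumer runs."""
        succs = {u: {w for x, w in edges if x == u} for u in range(n)}
        best = n
        for order in permutations(range(n)):
            pos = {v: i for i, v in enumerate(order)}
            if any(pos[u] > pos[v] for u, v in edges):
                continue                       # not a topological order
            live, peak = set(), 0
            for v in order:
                live.add(v)
                peak = max(peak, len(live))
                for u in {x for x, w in edges if w == v}:   # operands of v
                    if all(pos[w] <= pos[v] for w in succs[u]):
                        live.discard(u)        # last use: register can be freed
            best = min(best, peak)
        return best

    # diamond DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
    print(min_registers(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))   # 3

In the reversible setting a value may not simply be discarded at its last use, since erasing it destroys information; comparing the peak with and without freeing is one way to visualize where the n/2 "garbage" bound comes from.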

