The brain processes information on a multitude of timescales, a property the MeM-Scales project will explore.
Artificial intelligence is widely considered the key enabling computational technology for innovation in the coming years. The internet of things already makes extensive use of deep-learning paradigms to offer services such as web search and audiovisual recognition, while the emerging internet of everything (IoE) will manage and deliver services that process data from billions of networked sensors.
CEA-Leti announced its participation in the EU’s new MeM-Scales project, which aims to develop a class of algorithms, devices, and circuits that will mimic the multi-timescale processing of biological neural systems.
The results will be used to build neuromorphic computing systems that can efficiently process real-world sensory signals and natural-time–series data in real time and to demonstrate the concepts with a practical laboratory prototype. Targeted applications include high-dimensional distributed environmental monitoring, implantable microchips for medical diagnosis, wearable electronics, and human-computer interfaces.
To interact with the real world, brains process and perceive sensory signals on multiple timescales, Elisa Vianello, edge AI program director at CEA-Leti, pointed out in an interview with EE Times Europe.
“Memory of this interaction forms in timescales ranging from milliseconds (short-term memory) to months and years (long-term structural changes),” said Vianello. “To design systems that interact with the real world, neuromorphic circuits need to mimic the multi-timescale processing of the brain. Therefore, these circuits are the critical elements in the processing pipeline.”
In a standard neural network (NN) model, input data is first sent to the input neurons and is then passed through hidden layers of other neurons via connections called synapses. The data is transformed at each step, and the output from one layer is used as input for the next layer.
The data eventually arrives at the final output layer, which provides the prediction — for example, a category classification or a numerical value in a regression. There is no real-time element here; the input data is all transmitted at the same time, passes through each hidden layer in order, and is output all at once.
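The static, all-at-once nature of such a network can be made concrete with a minimal sketch. Everything here is illustrative: the layer sizes, random weights, and ReLU activation are assumptions, not part of any specific system described in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical layer sizes: 4 inputs, 8 hidden units, 3 output classes.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward(x):
    hidden = relu(x @ W1)   # hidden layer transforms the input
    logits = hidden @ W2    # output layer produces the prediction
    return logits

x = rng.standard_normal(4)  # the entire input arrives at once
print(forward(x).shape)     # one prediction, produced in a single pass
```

Note that time never appears: the input is a fixed vector, and the output is computed in one shot, which is exactly the limitation the next paragraphs address.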
But what if your input data doesn’t all arrive at the same time cleanly — what if it’s time-series or time-related data in some other way, such as real-time input from sensors on a self-driving car? What if the same is true of your results — and what if the results are also time-based, such as instructions given to a self-driving car on when to turn and when to increase or decrease speed?
Spiking neural networks (SNNs) are a solution to this problem (Figure 1). They can accept time-based inputs and produce time-based outputs. Instead of ordered layers, they have more complex internal structures for passing data between neurons, such as loops or multidirectional connections. Because they are more complicated, they require different types of training and learning algorithms, such as backpropagation-like approaches modified to handle spike behavior.
In general, SNNs are neural-network paradigms that model the biological neuron by emulating the natural signals of the nervous system (spikes) and the mechanisms by which natural neurons process those spikes. The peculiarity of SNNs lies in the internal way they process information, i.e., as a sequence of spikes (impulses).
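The spiking idea can be sketched with the simplest common neuron model, the leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest, integrates incoming current, and emits a spike when it crosses a threshold. All parameter values below are illustrative, not taken from the article.

```python
def lif_run(input_current, tau=10.0, threshold=1.0, dt=1.0):
    """Run a leaky integrate-and-fire neuron over a current trace."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += dt * (-v / tau + i_t)  # leak toward rest + integrate input
        if v >= threshold:          # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0.0                 # reset membrane after spiking
        else:
            spikes.append(0)
    return spikes

# A constant drive produces a regular spike train; unlike the static
# feedforward pass, the output here is a pattern unfolding in time.
train = lif_run([0.15] * 50)
print(train)
```

The information is carried by *when* the neuron fires, which is what makes SNNs a natural fit for streaming, time-series inputs.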
Simulating a neural system
Processing on multiple timescales is inspired by neural processing in the nervous system, which occurs naturally on timescales ranging from milliseconds (axonal transmission) to seconds (spoken sentences) and at much longer intervals (motor learning).
“The most complicated part is that there are still many unknowns about how brains exactly work,” said Vianello. “There is a great deal of understanding, but we still have a long way to go to understand the exact encoding, processing, and decoding in the brain. One of the understandings we have is that the brain processes information in a multitude of timescales. And this is the very property which we would like to exploit in the MeM-Scales project.”
The MeM-Scales project aims to elevate neuromorphic computing in microprocessors, with close interactions among experts in nanoelectronic device engineering, circuit and microprocessor design, manufacturing technology, and computer science.
“The idea is not to raise the computational envelope but to co-develop a novel class of devices, circuits, and algorithms that reproduce multi-timescale processing of biological neural systems,” said Vianello. “These systems can process real-world sensory signals efficiently and in real time, without increasing the computational envelope.”
The goal of the project is to develop devices, circuits, and algorithms to enable both learning and inference at the edge. Vianello pointed out that the project is focused on event-based, or spiking, neural network or neuromorphic applications in which diverse timescales are present and required. “So we mostly aim at streaming applications in which the input/sensor data is sampled in a near-continuous stream and information of the past has to be stored across multiple time horizons,” said Vianello.
A spiking neural network aims to simulate natural neural networks more accurately. In addition to the synaptic and neuronal states, such a network also incorporates the concept of time into its operational model.
Analog circuits and resistive memories are noisy. The simplest way to address this, said Vianello, is to average over space (populations of neurons) or over time (a calculated mean rate); averaging allows precision to be recovered.
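The averaging argument is just the statistics of the mean: if each noisy read is an unbiased sample around the true value, averaging N reads shrinks the error roughly as 1/sqrt(N). A small sketch, with made-up weight and noise values:

```python
import random

random.seed(42)
TRUE_WEIGHT, NOISE = 0.5, 0.2  # illustrative values, not from the article

def noisy_read():
    """Model one read of a noisy analog synapse as Gaussian noise."""
    return random.gauss(TRUE_WEIGHT, NOISE)

single = noisy_read()                                    # one noisy sample
averaged = sum(noisy_read() for _ in range(1000)) / 1000  # average over 1,000 reads

print("single-read error:  ", abs(single - TRUE_WEIGHT))
print("averaged-read error:", abs(averaged - TRUE_WEIGHT))
```

The same arithmetic works whether the 1,000 samples come from a population of devices (space) or from repeated reads of one device (time).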
Another possibility is to exploit resistive memory variability to build stochastic synapses that keep track of and store two variables: the mean and the variance (i.e., the error bar) of the associated probability distribution, said Vianello. “Stochastic synapses will enable the design of Bayesian models; [these] are particularly adapted for the ‘small data’ world, which has a lot of uncertainty,” she said. “We recently proposed a machine-learning technique that exploits resistive memory variability.” Vianello and her fellow researchers described the work in a recent Nature Electronics article.2
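The idea of a synapse that stores a mean and a variance, rather than a single fixed weight, can be sketched as a weight that is *sampled* on every read. This is only a conceptual illustration of a stochastic synapse, not the method of the cited Nature Electronics paper; the class and values are hypothetical.

```python
import random

random.seed(1)

class StochasticSynapse:
    """A synaptic weight represented as a probability distribution."""

    def __init__(self, mean, variance):
        self.mean = mean
        self.variance = variance  # the "error bar" on the weight

    def sample(self):
        # Each read draws a weight from N(mean, variance).
        return random.gauss(self.mean, self.variance ** 0.5)

syn = StochasticSynapse(mean=0.3, variance=0.01)
samples = [syn.sample() for _ in range(10000)]
est_mean = sum(samples) / len(samples)
print(round(est_mean, 2))
```

Sampling weights from distributions is the basic ingredient of Bayesian neural models, which is why device variability, normally a defect, becomes a resource here.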
Vianello explained that we can imagine a series of spatial and temporal filters distributed with a variety of time and space constants as processing elements in the brain. These elements can be passive RC filters that filter and integrate information, as well as active elements that introduce non-linearity. “Low-power neuromorphic systems make use of sparsity in time, only consuming energy when an ‘event’ arrives at the input,” she said. “These systems are not clocked and are asynchronous and event-based.” Therefore, the systems “make use of the physics of the substrate to implement time constants (e.g., RC circuits).”
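The RC filter Vianello mentions has a simple discrete-time form: a leaky integrator y[t] = alpha * y[t-1] + (1 - alpha) * x[t] with alpha = exp(-dt / tau). A sketch with two illustrative time constants shows how tau sets the timescale over which past input is retained:

```python
import math

def rc_filter(signal, tau, dt=1.0):
    """First-order RC low-pass filter as a discrete leaky integrator."""
    alpha = math.exp(-dt / tau)
    y, out = 0.0, []
    for x in signal:
        y = alpha * y + (1.0 - alpha) * x
        out.append(y)
    return out

step = [1.0] * 100                  # a step input
fast = rc_filter(step, tau=2.0)     # short time constant: settles quickly
slow = rc_filter(step, tau=50.0)    # long time constant: long memory
print(round(fast[10], 3), round(slow[10], 3))
```

A bank of such filters with a spread of tau values is one way to picture the "series of spatial and temporal filters" in the quote; the implementation challenge the next line raises is realizing that spread of time constants in hardware.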
One challenge for the implementation, Vianello added, is “the large range of time constants.”
The MeM-Scales goal is to develop compact non-conventional memories and devices with controllable retention time. “In other words, we want to exploit the physics of the memories and devices — not just use them as conventional digital elements but [use them] to implement neural dynamics,” said Vianello.
Vianello also told us that, as part of the project, the researchers proposed exploiting the drift behavior of phase-change-memory (PCM) devices to implement eligibility traces (ETs) covering behavioral timescales.3 The new solution improves area efficiency by 10× compared with existing solutions and allows synapses to remember their past activity. Both are crucial for enabling next-generation on-chip learning.
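An eligibility trace is a per-synapse memory that is bumped by each spike and then fades, so that a reward arriving later can still credit recent activity. The cited PCM-trace work maps this fading onto the slow resistivity drift of phase-change devices; the sketch below uses a plain exponential decay purely for illustration, with made-up values.

```python
def eligibility_trace(spike_train, decay=0.9):
    """Accumulate a decaying trace of past presynaptic spikes."""
    e, trace = 0.0, []
    for s in spike_train:
        e = decay * e + s  # fade the old trace, then add any new spike
        trace.append(e)
    return trace

spikes = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
trace = eligibility_trace(spikes)
# The trace "remembers" both spikes with fading strength.
print([round(v, 2) for v in trace])
```

The point of using device physics (drift) rather than digital counters is that the fading happens for free, without clocked updates, which is what makes long behavioral timescales affordable on-chip.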
The technology developed in the MeM-Scales project will enable new solutions for the internet of things. “Today, the only solutions available for automatic learning and complex data interpretation are based on a cloud-computing paradigm where locally extracted sensor data is transmitted by edge devices to remote servers,” said Vianello. In the future IoT, however, much of the computing load will be offloaded from central servers and delegated to small controllers and smart sensors directly where their services are needed.
One major application domain where this applies is the autonomous navigation and movement of vehicles such as robots, drones, and even cars. Here, one could take advantage of a heterogeneous collection of video cameras, radar sensors, and potentially LiDAR sensors as well.
“Another major application target domain is sensor-based health-care and lifestyle systems such as smart patches, smart wristbands, smart glasses, and even smart shoes,” said Vianello. Here, too, “we can make use of sensory fusion by combining a heterogeneous set of sensors for collecting information such as ECG, EMG, bio-impedance streams, and potentially also brain signals through EEG sensors and neuro-probes.”
Artificial neural networks, the software and/or hardware systems that mimic the functioning of neurons in the human brain, are at the heart of theoretical and practical developments in artificial intelligence. There is still much to be learned about how a biological brain works, but researchers and scientists, taking a multidisciplinary approach, are increasingly able to understand how cognitive processes take place and, through innovative technological insights, are succeeding in producing systems with a high level of emulation.
1. Tavanaei et al., “Deep Learning in Spiking Neural Networks,” revised 2019. arxiv.org/abs/1804.08150
2. Dalgaty et al., “In situ learning using intrinsic memristor variability via Markov chain Monte Carlo sampling,” Nature Electronics 4, 151–161 (2021). go.nature.com/3ofw3wP
3. Demirag et al., “PCM-trace: Scalable Synaptic Eligibility Traces with Resistivity Drift of Phase-Change Materials,” 2021. arxiv.org/abs/2102.07260
This article was originally published on EE Times Europe.
Maurizio Di Paolo Emilio holds a Ph.D. in Physics and is a telecommunication engineer and journalist. He has worked on various international projects in the field of gravitational wave research. He collaborates with research institutions to design data acquisition and control systems for space applications. He is the author of several books published by Springer, as well as numerous scientific and technical publications on electronics design.