Advances in Neuromorphic Computing

By Nitin Dahad, EE Times

Research efforts start to bear fruit

LONDON — Recent years have seen a tremendous amount of research into brain-inspired computing, aimed at tackling the explosion in compute and memory requirements driven by growing demand for artificial intelligence and machine learning in just about everything.

That research is now starting to bear fruit, with at least one neuromorphic computing chip developer, BrainChip, planning to detail its chip architecture next month.

Earlier this year, Barbara de Salvo, CEA-Leti’s chief scientist, explained that the semiconductor industry could take its cue from biology to address the power requirements that traditional computing architectures now struggle to meet. She outlined the characteristics of a brain synapse, which combines memory and computing in a single structure and can form the basis for brain-inspired, non-von Neumann computer architectures. One recent trend in neuromorphic computing is to encode neuron values as pulses, or spikes.

And then there’s the European Human Brain Project’s neuromorphic computing effort, which has been constructing two large-scale, unique neuromorphic machines and prototyping next-generation neuromorphic chips. It recently published a paper on its first full-scale simulations of a cortical microcircuit model of 80,000 neurons and 300 million synapses on SpiNNaker hardware, demonstrating the platform’s usability for computational neuroscience applications.

Markus Diesmann

Professor Markus Diesmann, co-author of the paper and head of the computational and systems neuroscience department at the Jülich Research Center in Germany, said, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

He adds, “It is presently unclear which computer architecture is best-suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach.”

The microcircuit model is simulated on a machine consisting of six SpiNN-5 SpiNNaker boards, using a total of 217 chips and 1,934 ARM9 cores. Each board holds 48 chips and each chip 18 cores, giving a total of 288 chips and 5,174 cores available for use. On each chip, two cores are reserved for loading, retrieving results, and simulation control. Of the remaining cores, only 1,934 are used, as that is all that is required to simulate the number of neurons in the network, with 80 neurons on each neuron core.

“This is the first time such a detailed simulation of the cortex has been run on SpiNNaker or on any neuromorphic platform,” said Steve Furber, another co-author and professor of computer engineering at the University of Manchester, U.K. “The simulation described in this study used just six boards — 1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

One company hoping to be the first entrant to market with a commercial spiking neural-network chip, or neuromorphic system-on-chip (NSoC), is BrainChip, which is listed on the Australian Securities Exchange but has most of its 30 staff in Orange County, Calif., and Toulouse, France.

The company acquired Toulouse-based Spikenet in September 2016. Spikenet specialized in software-based spiking neural networks (SNNs) that could learn visual patterns in real time without intensive training and from very few image samples. It emulated SNNs in software running on x86 platforms, whereas BrainChip was originally founded to put the neuron into silicon itself.

Spikenet had customers in security and gaming (with a number of casinos in Las Vegas), so this enabled BrainChip to acquire a revenue stream and a complementary product that could benefit from the integration of its spiking neuron adaptive processor.

Artificial neural networks can be classified into two broad categories: convolutional neural networks (CNNs), also known as deep learning, and spiking neural networks (SNNs), also known as neuromorphic computing because they model neuron function. The base operation of a CNN is a mathematical one.

Bob Beachler

“In a CNN, what you are doing is a linear algebra matrix multiplication, and a deep neural network is just a bunch of filters trying to extract salient features that can be put together to help the machine or system recognize certain objects — such as visual, financial, or cybersecurity data,” said Bob Beachler, BrainChip’s senior vice president of marketing and business development, in an interview with EE Times.
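The matrix multiplication Beachler describes can be sketched in a few lines of NumPy. This is an illustrative reduction of a convolutional layer, with made-up shapes: each filter is a row of weights, each flattened image patch a column of pixels, and the product scores how strongly each filter's feature appears in each patch.

```python
import numpy as np

rng = np.random.default_rng(0)

filters = rng.standard_normal((8, 9))    # 8 filters, each a flattened 3x3 kernel
patches = rng.standard_normal((9, 100))  # 100 flattened 3x3 image patches

# The core arithmetic of a convolutional layer: one linear-algebra
# matrix multiplication, producing one response per filter per patch.
feature_maps = filters @ patches
print(feature_maps.shape)  # (8, 100)
```

Deeper networks repeat this step layer after layer, which is why CNN accelerators are, at heart, matrix-multiplication engines.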

“In spiking neural networks, instead of doing the matrix multiplication, our base functionality is an actual neuron that we model as a series of synapses, where the connections are either inhibited or reinforced,” Beachler added. “The neuron itself, which is an integration function, is basically counting the number of spikes — a piece of data that is sent across a synapse. The way that it gets trained is, as opposed to setting weights in the CNN, in SNNs, we either reinforce or inhibit the synapse. That’s one way it trains. The other is that we set the threshold within the neuron itself, which are modifiable functions.”
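The neuron Beachler describes can be reduced to a toy integrate-and-fire model. In this sketch, weights of +1 and -1 stand in for reinforced and inhibited synapses, and the firing threshold is the second trainable quantity he mentions; the names and values are illustrative, not BrainChip's implementation.

```python
def spiking_neuron(spikes, synapses, threshold):
    """Toy integrate-and-fire neuron.

    spikes:   0/1 per input line for this time step
    synapses: +1 reinforced, -1 inhibited, 0 disconnected
    threshold: trainable firing level
    """
    # Integration: count incoming spikes, weighted by synapse state.
    potential = sum(s * w for s, w in zip(spikes, synapses))
    # Fire (emit a spike) only if the count crosses the threshold.
    return 1 if potential >= threshold else 0

synapses = [1, 1, -1, 1]  # three reinforced synapses, one inhibited
print(spiking_neuron([1, 1, 0, 1], synapses, threshold=2))  # 1 (fires)
print(spiking_neuron([1, 1, 1, 0], synapses, threshold=2))  # 0 (inhibited input cancels a spike)
```

Training in this scheme means flipping synapses between reinforced and inhibited, or adjusting the threshold, rather than tuning floating-point weights as in a CNN.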

Beachler added, “It trains in a feed-forward approach, so it is unsupervised training, not sitting down and memorizing pre-labelled datasets. They’re seeing the real world and hearing the real world, and it is unsupervised pattern recognition.”


Spiking neural networks vs convolutional neural networks.

According to Beachler, BrainChip has proven its technology on x86 running in software emulation mode, used FPGAs to accelerate its spiking neural networks, and is now developing its Akida NSoC. It is expecting to announce the chip architecture in September but is already disclosing details to its customers under NDA.

However, to get the market developing around its SNNs, the company this month launched its Akida development environment, a machine-learning framework for creating, training, and testing SNNs, supporting the development of edge and enterprise systems on the company’s Akida NSoC.

The development environment includes its execution engine, data-to-spike converters, and a model zoo of pre-created SNN models. The framework leverages the Python scripting language and its associated tools and libraries, including Jupyter notebooks, NumPy, and Matplotlib.

Akida execution engine

The Akida execution engine contains a software simulation of the Akida neuron, synapses, and the multiple supported training methodologies. Through API calls in a Python script, users can specify their neural-network topologies, training method, and datasets for execution.

Based on the structure of the Akida neuron, the execution engine supports multiple training methods, including unsupervised training and unsupervised training with a labelled final layer.

Spiking neural networks work on spike patterns. The development environment natively accepts spiking data created by dynamic vision sensors (DVS), but many other types of data can be used with SNNs. Embedded in the Akida execution engine are data-to-spike converters, which convert common data formats such as image information (pixels) into the spikes an SNN requires. The development environment will initially ship with a pixel-to-spike converter, to be followed by converters for audio and for big-data applications in cybersecurity, financial information, and IoT. Users can also create their own proprietary data-to-spike converters for use within the development environment.
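One common way such a pixel-to-spike conversion can work is rate coding, where a brighter pixel fires more often across a window of time steps. The sketch below is a generic illustration of that idea, not BrainChip's converter; the function name and parameters are invented for the example.

```python
import numpy as np

def pixels_to_spikes(image, steps=10, seed=0):
    """Rate-code an image into spike trains.

    image: 2-D array of pixel intensities in [0, 255].
    Returns a (steps, H, W) array of 0/1 spikes, where each pixel's
    probability of firing at each step is proportional to its brightness.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(image, dtype=float) / 255.0  # brightness -> firing probability
    return (rng.random((steps,) + p.shape) < p).astype(np.uint8)

spikes = pixels_to_spikes([[0, 255], [128, 64]], steps=100)
print(spikes.mean(axis=0))  # per-pixel firing rates, roughly [[0, 1], [0.5, 0.25]]
```

A downstream SNN then sees only these spike trains; the original pixel values are carried entirely in the timing and frequency of the spikes.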

The development environment includes pre-created SNN models. Currently available models include a multi-layer perceptron implementation for MNIST in DVS format, a seven-layer network optimized for the CIFAR-10 dataset, and a 22-layer network optimized for the ImageNet dataset. These models can serve as the basis for users to modify or to create their own custom SNN models.

Beachler said that BrainChip is primarily targeting the embedded vision space, where machine learning is most widely applied, for tasks such as object classification. “We are seeing that in a number of different marketplaces: automotive for ADAS and autonomous vehicles, drones, machine vision.” The company is also targeting cybersecurity and financial technology because of Akida’s ability to find patterns in unsupervised training mode and in the analysis of large datasets.

Asked about technologies for the neuromorphic SoC that it plans to announce in September, Beachler told us, “We’re going to be doing this in a purely digital process, standard CMOS, and in whatever feature size we decide, whether it’s 28 nm or 14 nm. We’re not doing anything esoteric, not doing phase-change memories, memristors, or anything like that. We’re a firm believer in using a pure digital logic process.”

— Nitin Dahad is a European correspondent for EE Times.
