Neuromorphic Computers: Promise and Pitfalls

Article by Sunny Bains

Neuromorphic processors could eventually overcome the challenges encountered in building intelligent machines.

How much intelligence can we build into machines? We’ve come a long way in recent decades, but assistants like Siri and Alexa, image-recognition apps, and even recommendation engines tell us as much about how far we have to go as how far we have come. For deep-learning tasks such as medical diagnosis, where there is an abundance of computing power and plenty of well-tagged data, the progress has been astonishing.

But what about the rest?

Our aspirations for AI go well beyond data science. We want wearable biosensor systems that warn us instantly of anomalies that could indicate a serious health event, self-driving cars that can react in real time to the most challenging driving conditions, and autonomous robots that can act with minimal direction. How do we get there?

For now, it may seem like at least our mobile devices understand us (after a fashion), but they really don’t. They simply relay our voices to remote servers, where they are decoded into words, then into requests, and finally into actions or answers. This is a computation-intensive process.

A 2013 projection showed that “people searching by voice for three minutes a day using speech recognition would double the datacenters’ computation demands, which would be very expensive using conventional CPUs.” This led Norman Jouppi and his colleagues at Google to develop the Tensor Processing Unit (TPU), hardware optimized for matrix multiplication. It offers the promise of improving the power efficiency of neural-network queries by 95 percent.
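To make the idea concrete, here is a minimal sketch, with sizes and a quantization scheme invented for illustration rather than taken from Google’s design: neural-network inference is dominated by matrix multiplication, and reducing values to 8-bit integers lets hardware perform it with cheap integer multiply-accumulate units.

```python
# Hedged sketch: inference as matrix multiplication, done in int8.
# Layer sizes and the quantization scheme are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)           # one query's input features
W = rng.standard_normal((512, 256))    # one layer's weights

def quantize(a):
    """Map floats onto int8 with a single scale factor."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

xq, sx = quantize(x)
Wq, sW = quantize(W)

exact = W @ x                                             # float matmul
approx = (Wq.astype(np.int32) @ xq.astype(np.int32)) * (sx * sW)
print("max abs error:", np.abs(approx - exact).max())     # small relative to
                                                          # the outputs, and far
                                                          # cheaper in silicon
```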

Those processors are already making a big impact in server-based applications, but the approach concentrates all the memory and processing at the datacenter. It therefore puts huge reliance on the communication infrastructure and is potentially wasteful, as time and energy are spent communicating information—much of it irrelevant.

Enter edge computing: How can we meet the speed, power, area, and weight requirements of mobile and stand-alone applications?

Selectively copying biology
Neural networks used for machine intelligence are inspired by biology (see Figure 1). Hence, neuromorphic engineers try to imitate that same biology to create hardware that will run them optimally. That approach gives engineers a large menu to choose from.

Figure 1: Neurons: Biology to Technology — The goal of neuromorphic engineering is to learn as many lessons from biology as possible in an attempt to achieve the low power and high functionality of the brain. The design choices engineers make in implementing neural processing, memory and communication will determine how efficiently an artificial brain performs a given task.

One strategy stops partitioning chip architectures into processor and memory, instead breaking the chip into neurons, each of which performs both functions.
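A leaky integrate-and-fire neuron, sketched below in deliberately simplified Python with made-up parameter values, shows the idea: the membrane potential is the memory, and thresholding is the processing, with no separate storage to shuttle data to and from.

```python
# Sketch: a leaky integrate-and-fire neuron. Its membrane potential is the
# memory; thresholding is the processing. Parameter values are illustrative.

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One time step: leak, integrate, and fire if the threshold is crossed."""
    v = leak * v + input_current   # state update: memory and compute together
    if v >= threshold:
        return 0.0, True           # reset the potential, emit a spike
    return v, False

v = 0.0
for t, i_in in enumerate([0.3, 0.3, 0.3, 0.3, 0.0, 0.3]):
    v, spiked = lif_step(v, i_in)
    print(f"t={t} v={v:.2f} spike={spiked}")
```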

Next, dense many-to-many connectivity between neurons is preferred, since it makes networks more powerful. Sensor-processor pipelines that maintain the geometry of incoming signals (such as images) can also be helpful, because they allow efficient neighbor-to-neighbor interactions during processing, as in the human retina. And keeping signal values in the analog domain is beneficial because everything can be processed simultaneously rather than split into complicated operations on individual bits.
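The retina analogy can be sketched in a few lines; the numbers below are invented, and np.roll’s wraparound at the array ends is an artifact of the toy example. Because the array keeps the signal’s geometry, each cell needs only its immediate neighbors to sharpen an edge.

```python
# Toy retina-style lateral inhibition: each cell subtracts part of its
# neighbors' activity, enhancing edges without any global communication.
import numpy as np

signal = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)   # a step edge
inhibited = signal - 0.5 * (np.roll(signal, 1) + np.roll(signal, -1))
print(inhibited)   # flat regions go to zero; the response peaks at the edge
```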

Finally, relating the timing of communications to neural behavior—not an arbitrary clock—means signals inherently contain more information: brain-like spikes that arrive together are often related to the same events.

This menu of choices is why the term neuromorphic engineering, or computing, is a bit slippery. The term was coined by Caltech professor Carver Mead in the late 1980s, and Mead’s projects and others over the following decades focused particularly on the benefits of analog computation. An example is a system that replicates a set of fly’s-eye motion-detector circuits: analog signals are detected by multiple receptors, then propagate sideways through nearest-neighbor interactions. The system was very fast and very low power (Reid Harrison demonstrated just a few microwatts in the late 1990s), and showed the advantages of both retaining signal geometry and processing in analog.
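For flavor, here is a discrete-time caricature of such a motion detector, in the spirit of a Reichardt correlator; the real circuits are analog and continuous, and everything below (the delay, the pulse, the readout) is invented for illustration.

```python
# Sketch of a Reichardt-style elementary motion detector: each of two
# neighboring receptors is correlated with a delayed copy of its partner,
# and the difference of the two correlations signs the direction of motion.
import numpy as np

def emd(left, right, delay=1):
    """Correlate each receptor with its neighbor's delayed signal."""
    d_left = np.concatenate([np.zeros(delay), left[:-delay]])
    d_right = np.concatenate([np.zeros(delay), right[:-delay]])
    return np.sum(d_left * right - d_right * left)   # sign encodes direction

# A bright pulse moving left-to-right: the right receptor sees it one step later.
pulse = np.array([0, 1, 0, 0, 0], dtype=float)
left, right = pulse, np.roll(pulse, 1)
print(emd(left, right))   # positive: motion from left to right
print(emd(right, left))   # negative: motion in the opposite direction
```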

Another characteristic of “pure” neuromorphic systems is their frequent use of address-event representation, or AER. This communication scheme has many of the advantages of conventional networks while providing many-to-many communication and retaining spike timing.

Using AER, a neuron sends out a spike whenever it needs to, depending on its learning, its behavior and the input it has just received. The spike is transmitted to all other neurons in the network, but only the neurons that should receive it do; all others ignore it. What makes this network different is that the gaps between spikes are relatively large, so spikes that arrive from different neurons at roughly the same time can be considered correlated (see Figure 2).

Figure 2: Address-Event Representation — Address-event representation is one way of maintaining the timing of spikes that neurons use to communicate—without each neuron directly wired to hundreds or thousands of others. It works so long as the likelihood of a spike is sufficiently low so that there is little contention among spikes arriving simultaneously at the encoder.
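A minimal sketch of the scheme, with addresses and connectivity invented for illustration: each spike becomes a (time, address) pair on a shared bus, and a routing table replaces the point-to-point wiring.

```python
# Toy address-event representation: spikes are (timestamp, address) events;
# a fanout table stands in for the hard wiring between neurons.

events = [   # (timestamp_us, source_neuron_address)
    (100, 7), (102, 12),   # two spikes close in time: likely the same cause
    (950, 7),
]

fanout = {7: [3, 4, 5], 12: [4, 9]}   # the "wiring": who listens to whom

def route(events, fanout):
    """Deliver each event to every neuron wired to hear that address."""
    inbox = {}
    for t, addr in events:
        for target in fanout.get(addr, []):
            inbox.setdefault(target, []).append((t, addr))
    return inbox

inbox = route(events, fanout)
print(inbox[4])   # neuron 4 hears both sources; the events at 100 and 102 us
                  # are close enough in time to be treated as correlated
```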

Many neuromorphic systems use AER, notably those from France’s Prophesee and the Swiss firm aiCTX (AI Cortex), both of which focus on sensory processing. The approach is at once elegant and practical, providing the benefits of hard-wired connections between neurons without all the wiring. It also means information from an incoming signal can simply flow through the processor in real time, with irrelevant information discarded and the remainder processed further along the neural pipeline.

Trade-offs
However attractive this “classic” neuromorphic technology is, downsides remain: all that power and speed doesn’t come without a penalty. Digital electronics are more power-hungry than analog precisely because their devices are constantly correcting errors, forcing each signal to either one or zero. Analog circuitry does no such correction, so errors and drift caused by changes in temperature, variability in devices and other factors don’t disappear. The result may not be exactly wrong, but it can be shifted or distorted.
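A toy simulation makes the point; the noise level and stage count below are arbitrary. Digital logic spends energy snapping each stage’s output back to 0 or 1, so errors cannot accumulate, while the analog path lets them add up.

```python
# Sketch: digital restoration vs. analog drift through many circuit stages.
# Noise level and stage count are illustrative.
import numpy as np

rng = np.random.default_rng(1)
stages, noise = 100, 0.02

digital = analog = 1.0
for _ in range(stages):
    digital = round(digital + rng.normal(0, noise))  # restore to 0/1 each stage
    analog = analog + rng.normal(0, noise)           # errors simply accumulate

print(f"digital after {stages} stages: {digital}")
print(f"analog  after {stages} stages: {analog:.3f}")   # drifted away from 1.0
```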

Making matters worse is the reality that electronics manufacturing is far from perfect. Thankfully, that matters less with digital technology, since circuits can be tested and, if they fail, discarded.

Still, a set of neural weights that performs perfectly in one analog system might work very poorly in another. If you are trying to learn centrally and then clone that behavior onto many different machines, there’s a price to pay: either accepting lower reliability or, more likely, designing in extra redundancy to overcome these problems.
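The cloning problem can be illustrated with a toy model; the 5 percent device mismatch assumed below is an invented figure, not a measured one.

```python
# Sketch: centrally learned weights meet slightly different analog devices.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(64)
W = rng.standard_normal((8, 64))            # weights learned centrally

reference = W @ x                            # the chip the weights were tuned on
mismatch = rng.normal(1.0, 0.05, W.shape)    # each device is slightly "off"
clone = (W * mismatch) @ x                   # same weights, different silicon

err = np.abs(clone - reference) / np.abs(reference)
print(f"median output error: {np.median(err):.1%}")
```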

Luckily, this is not an insurmountable problem, but it is worth noting as we move toward emerging technologies such as memristors. These elegant devices lend themselves to neural networks because they are memories that can be embedded within the core neural circuitry, allowing analog neurons to become even more compact and lower power.
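The attraction is easy to sketch. In a memristor crossbar, conductances store the weights while Ohm’s and Kirchhoff’s laws perform the multiply-accumulate; the values below are illustrative.

```python
# Toy memristor crossbar: stored conductances are the weights (memory);
# currents summing on each output line are the computation (processing).
import numpy as np

G = np.array([[0.2, 0.5, 0.1],    # conductances (siemens): the stored weights
              [0.4, 0.1, 0.3]])
v = np.array([1.0, 0.5, 0.2])     # input voltages applied to the crossbar

i = G @ v   # Ohm's law (I = G*V) plus Kirchhoff's current law, in one step
print(i)    # output currents = weights . inputs, computed in place, in analog
```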

Another approach involves minimizing (or eliminating) analog circuitry while implementing architectural lessons from nature, especially distributed, well-connected neurons: making systems only as neuromorphic as is expedient (see Figure 3).

Figure 3: What makes a neuromorphic system?
The line between what is and isn’t neuromorphic is blurry. Where engineers make compromises depends on the application and the criteria for success. If absolute repeatability is critical and power is not, less-neuromorphic solutions are preferred. Where speed, power, area and weight are the main concerns, and fuzzier behavior is more acceptable, neuromorphic computing becomes practical.

IBM’s TrueNorth, for instance, achieves very low power despite being digital since logic-based devices can be made using nanometer manufacturing processes; analog circuitry is much more difficult to implement at these small sizes.

Intel’s Loihi chip moves a step closer to analog in that it is asynchronous: each neuron is allowed to fire at its own rate. Synchronization happens through a set of neighbor interactions, so that no process begins a new time step until the others have finished, rather than through clock-driven operation.
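As a loose software analogy (Loihi’s actual handshakes are implemented in hardware, and the timings below are invented), a barrier plays the role of those neighbor interactions: every “neuron” proceeds at its own pace within a step, but none starts the next step early.

```python
# Sketch: asynchronous "neurons" kept in step by a barrier handshake
# rather than a global clock. Timings are invented for illustration.
import threading, random, time

barrier = threading.Barrier(3)   # all three "neurons" must finish a step

def neuron(name):
    for step in range(2):
        time.sleep(random.uniform(0.01, 0.05))  # fire/update at its own rate
        print(f"{name} done with step {step}")
        barrier.wait()           # no one starts step+1 until all have finished

threads = [threading.Thread(target=neuron, args=(f"n{i}",)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```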

Virtuous circle
All this may make it appear that analog neuromorphic systems are dead ends, but that’s not necessarily the case. Loihi and TrueNorth are both general-purpose chips, intended for generic learning tasks.

But biology is not generic; it optimizes for specific tasks. In the short term, where speed, power, area and weight really matter, engineers too will want to optimize. They will ultimately choose the most efficient design, even if the chip is more expensive.

If some niche applications for neuromorphic computing prove successful, such as keyword spotting or sensory processing, then that could create a virtuous circle of investment, development, innovation and optimization. The end result could be a viable industry with its own version of Moore’s Law, one that is more tightly connected with the needs of cognitive and intelligence tasks.

Which leads to a logical conclusion: Neuromorphic processors could eventually overcome the bottlenecks currently faced in trying to build sophisticated intelligent machines. Among the goals are machines that can reason, recognize objects and situations, and then adapt on the fly to different types of sensory data, much as human brains do.

That goal will require a deep understanding of biological processes if neuromorphic computing is to become a practical reality.

–Sunny Bains, who teaches at University College London, is the author of Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies.
