Do AI and Neuromorphic Computing Compete?

Article By : Sally Ward-Foxton

Both AI and neuromorphic computing run neural networks, but it doesn’t necessarily follow that they will go head-to-head.

At first glance, the new breed of neuromorphic chips has several things in common with the similarly cutting-edge field of AI accelerators. Both are designed to process artificial neural networks, both offer performance improvements over CPUs, and both claim to be more power efficient.

That’s where the similarity ends, though: Neuromorphic chips are designed only for special neural networks called spiking neural networks, and their structure is fundamentally different from anything seen in traditional computing (nothing so conventional as multiply-accumulate units). It is perhaps too soon to say what the market for these devices will look like, as new applications and technologies continue to emerge.
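
To make the distinction concrete, here is a minimal sketch of the kind of computation a spiking network performs: a single leaky integrate-and-fire neuron, the textbook spiking model. The parameter values and the random input are illustrative only, not those of any particular chip.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input over time, emit a spike
    whenever the membrane potential crosses threshold, then reset."""
    v = v_reset
    spikes = []
    for i in input_current:
        v += (dt / tau) * (-v + i)   # leak toward rest, integrate input
        if v >= v_threshold:
            spikes.append(1)         # event: information travels as spikes
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Note: no multiply-accumulate over dense weight matrices here;
# computation is driven by discrete events unfolding over time.
rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0.5, 2.0, size=50)))
```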

EE Times asked CEOs at leading AI accelerator companies whether the technologies are truly complementary or whether there is some overlap.

The big question is: Will these computing paradigms end up competing with each other further down the line?

Different niche
Intel doesn’t think so. The chip giant is a leader both in neuromorphic computing research, with its Loihi chip, and in AI acceleration, with its range of data center CPUs plus its acquisition of AI accelerator company Habana Labs.

Mike Davies, director of Intel’s neuromorphic computing lab, doesn’t see neuromorphic computing as directly comparable to conventional AI accelerators such as those developed by Habana Labs. “Neuromorphic computing is useful for a different regime, a different niche in computing than large data, supervised learning problems,” Davies said.

Intel’s Mike Davies

Today’s AI accelerators are designed for deep learning, which uses large amounts of data to train large networks. This requires huge I/O bandwidth and memory bandwidth.

“Neuromorphic models are very different to that,” Davies added. “They’re processing individual data samples… where real-world data is arriving to the chip and it needs to be processed right then and there with the lowest latency and the lowest power possible.

“What’s different on the edge side, compared to even edge deep learning AI chips, is that we’re also looking at models that adapt and can actually learn in real time based on those individual data samples that are arriving, which the deep learning paradigm does not support very well.”
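
A hedged illustration of the contrast Davies draws: deep learning typically updates a model from large batches, while the regime he describes adapts on each sample as it arrives. The toy update rule below is generic online gradient descent on a linear model, chosen for familiarity; it is not Loihi’s on-chip learning rule, which is spike-based.

```python
import numpy as np

def batch_update(w, X, y, lr=0.1):
    """Deep-learning style: one gradient step computed over a whole batch."""
    return w - lr * X.T @ (X @ w - y) / len(y)

def online_update(w, x, y, lr=0.1):
    """Per-sample adaptation: update immediately from a single example."""
    return w - lr * (x @ w - y) * x

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
for x_i, y_i in zip(X, y):          # data "arrives" one sample at a time
    w = online_update(w, x_i, y_i)  # learn right then and there
print(np.round(w, 2))               # drifts toward true_w without batching

wb = np.zeros(3)
for _ in range(100):                # the batch version revisits all data
    wb = batch_update(wb, X, y)
print(np.round(wb, 2))
```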

Intel’s neuromorphic computing lab recently presented its biggest system yet, Pohoiki Springs, which integrates 768 Loihi chips to provide the equivalent of 100 million neurons (Image: Intel)

In other words, Intel’s view is that we are talking about two different computing approaches for totally different types of neural networks.

Similar vision

On the AI accelerator front, Kalray CEO Eric Baissus said he sees some similarities between the company’s massively parallel processor array (MPPA) architecture and some of the emerging neuromorphic approaches.

“Neuromorphic computing is very interesting — this new way of thinking is very close to our vision,” Baissus said. “The brain has a lot of functions in parallel doing their own calculations, and then you consolidate this little by little, which is very close to the way our architecture has been designed.”

Kalray CEO Eric Baissus

Kalray’s latest chip, Coolidge, can be used to accelerate AI in data center and automotive applications. While Coolidge isn’t a pure AI accelerator (it has wider applications across edge computing), the MPPA architecture lends itself to AI acceleration, and the company demonstrated AI use cases on Coolidge at CES 2020.

“I believe that we will see interesting [neuromorphic] products. I’m not uncomfortable with that because first, I think that our technology is very close to this type of approach,” Baissus said. “I believe that the market is so big that you will see applications for a lot of different types of architectures.”

Economic horizon
Mark Lippett, CEO of XMOS, said commercial adoption of neuromorphic computing remains years away, especially for its target markets, which cover the IoT and consumer devices.

“I think it’s too far away for us to worry — it’s not on our economic horizon any time soon,” he said.

Mark Lippett, XMOS CEO

XMOS’ Xcore.ai chip integrates the company’s IP into an AI accelerator for voice interface applications that require AI for keyword detection or dictionary functions. It fits into a new category, crossover processors, combining the performance of an application processor with the ease of use, low power consumption and real-time operation of a microcontroller.

Lippett also said any technology that involves a change in the way people think about programming systems will face challenges when it comes to market. Even the Xcore, though based on a RISC architecture, met that kind of resistance.

“The key observation that we’ve made is you really need to be very close to a pre-existing, familiar programming model in order for rapid adoption to take place. So, I think that’s the challenge, to make the benefits of those technologies available from a cost perspective, but also accessible to the skills of the existing community, otherwise adoption will be very slow,” he added. “I’m comfortable that we’re not competing [with neuromorphic computing] in the very near term, but clearly it’s an exciting technology and one to watch.”

Inspired by nature
Orr Danon, Hailo’s CEO, noted that software is a potential concern.

“I spent quite a few years in the software area, and I’m always concerned about ideas which look good on the hardware level, but are not feasible in real life development scenarios,” he said. “I’m not saying this will be the case, but it’s a major concern.”

Hailo CEO Orr Danon

Hailo recently completed a $60 million series B funding round, part of which will be spent on the continuing development of software for its Hailo-8 edge AI accelerator chip. The architecture mixes compute, memory and control blocks, and relies on software to allocate adjacent blocks to different layers of a neural network, depending on their individual requirements.
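
The allocation problem described here can be sketched roughly as follows. This is a hypothetical toy allocator, not Hailo’s actual compiler, which is proprietary: it simply assigns each layer a contiguous run of adjacent resource blocks sized to its demand. The layer names and numbers are invented for illustration.

```python
# Hypothetical sketch of mapping network layers onto adjacent hardware
# blocks; Hailo's real tooling is certainly more sophisticated than this.
def allocate_blocks(layer_demands, total_blocks):
    """Assign each layer a run of adjacent blocks sized to its demand;
    returns {layer: (first_block, last_block)} or raises if capacity runs out."""
    allocation, cursor = {}, 0
    for layer, demand in layer_demands.items():
        if cursor + demand > total_blocks:
            raise ValueError(f"not enough blocks for layer {layer!r}")
        allocation[layer] = (cursor, cursor + demand - 1)
        cursor += demand  # next layer starts where this one ends: adjacency
    return allocation

demands = {"conv1": 4, "conv2": 6, "fc": 2}   # invented per-layer demands
print(allocate_blocks(demands, total_blocks=16))
# {'conv1': (0, 3), 'conv2': (4, 9), 'fc': (10, 11)}
```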

“We shouldn’t glorify the tool rather than the purpose,” he said. “We don’t want to build airplanes based on feathers. It’s good to be inspired [by nature], but the question is, What value do you get, and are you actually solving the right part of the problem?”

Restricting a new computing architecture to mimicking a particular part of the brain can result in bottlenecks, Danon said, which he suspects may be more significant than performance benefits. “On the other hand, I think if you want to bring innovation and significant improvement, then you have to take bold approaches,” he said. “In this sense, I like the neuromorphic approaches, but they really have to prove the previous points before they can fulfil their promise.”

Edge vs. cloud
Nigel Toon, CEO of Graphcore, said the company’s AI accelerators are designed for different verticals. “We don’t see these [neuromorphic] companies necessarily as directly competing,” he said. “To the extent they are out there in the market, they’re typically very much at the edge, trying to carve out low-power, close-to-the-sensor applications… It may be that a neural network built in a more conventional way would be better for those applications, but they are certainly not directly competitive with [Graphcore], as we build larger, cloud-based training and large-scale deployment inference systems.”

Nigel Toon, Graphcore CEO

As for the challenges neuromorphic computing faces, Toon sounded a familiar refrain. “Software, software and software!” he said. “The challenge in building any processor is software. Far too often people build interesting processors and then afterwards try and work out how they’re going to program it. The reality is you need to work out how you’re going to program it and then build a processor that will support that efficiently. You’ve got to understand the software programming approach and build a processor to do that.”

Graphcore’s approach began with the recognition that machine learning models are essentially large, high-dimensional graphs in which the vertices are compute and the edges are communication between compute elements. The company first developed software to describe graphs at the compute level, then designed a processor to crunch them. The high dimensionality of these graphs is hard to handle in normal memory space, where only two pieces of data can reside next to each other. The data ends up becoming very sparse, with pieces of the problem spread out in memory. Hence, Graphcore put large amounts of memory, and memory bandwidth, inside its intelligence processing unit chip.
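
A minimal sketch of the graph view Toon describes: vertices are compute operations, edges are data movement between them. The example network below is invented for illustration; the point is that the dense adjacency matrix of such a graph is almost entirely zeros, so neighbors in the graph are not neighbors in flat memory.

```python
# Toy compute graph: vertices are operations, edges are communication.
import numpy as np

ops = ["input", "conv", "relu", "pool", "fc", "softmax"]
edges = [("input", "conv"), ("conv", "relu"), ("relu", "pool"),
         ("pool", "fc"), ("fc", "softmax")]

# Dense adjacency matrix of this graph: overwhelmingly sparse when
# laid out in ordinary linear memory.
idx = {op: i for i, op in enumerate(ops)}
adj = np.zeros((len(ops), len(ops)), dtype=int)
for src, dst in edges:
    adj[idx[src], idx[dst]] = 1

print(adj)
print(f"nonzero: {adj.sum()} of {adj.size} entries")  # 5 of 36
```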

Graphcore’s Intelligence Processing Unit

“Ultimately, what we’ll find is neuromorphic is interesting as a research [project]. But it’s probably not the way you build efficient processors in silicon,” Toon said, citing molecular computing as a future paradigm that may find synergies with neuromorphic approaches now in development. Molecular computing is an emerging field that uses biological molecules such as DNA to compute, with the aim of approaching the Landauer limit, the theoretical minimum energy required to erase one bit of information.
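
For reference, Landauer’s principle puts that minimum at:

```latex
% Landauer's principle: minimum energy to erase one bit at temperature T
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J} \quad (T = 300\ \mathrm{K})
```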

“But [the semiconductor industry has] invested trillions of dollars to build computers using silicon, and we know how to do that and how to build that in volume,” Toon said. “Probably the best path is to work out how we build silicon computers that use different architectures to build these new machine intelligence approaches.”
