MRAM, RRAM, 3D or not 3D?
GRENOBLE, France — Addressing the “memory wall” and pushing for new architectures that enable highly efficient computing for rapidly growing artificial intelligence (AI) applications are key areas of focus for Leti, the French technology research institute of CEA Tech.
Speaking to EE Times at Leti’s annual innovation conference here, Leti CEO Emmanuel Sabonnadière said there needs to be a highly integrated and holistic approach to moving AI from software and the cloud into an embedded chip at the edge.
“We really need something at the edge, with a different architecture that is more than just CMOS but is structurally integrated into the system, and that enables autonomy from the cloud. For autonomous vehicles, for example, you need independence from the cloud as much as possible,” Sabonnadière said.
He pointed to Qualcomm’s bid for NXP as a key indicator of the drive toward more computing at the edge. “Why do you think Qualcomm is buying NXP? It’s for the sensing, and to put digital behind the sensing.”
To address the computing architecture paradigm, Sabonnadière said that he hopes for breakthroughs from Leti’s collaboration with professor Subhasish Mitra’s team in Stanford University’s departments of electrical engineering and computer science. Mitra’s work, in development for quite some time — and funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, Semiconductor Research Corp., STARnet SONIC, and member companies of the Stanford SystemX Alliance — focuses on a new processing-in-memory architecture for abundant-data applications with dense interconnections.
“We have a deep conviction that this is a way forward to address ‘more-than-Moore’ challenges and have asked professor Mitra to create a demonstrator,” said Sabonnadière, talking about the need to validate in silicon.
At the conference, Mitra said a computing nanosystem architecture using advanced 3D integration is necessary for the coming superstorm of abundant data, where computational demands exceed processing capability.
“We have to process the data to create the decisions, but there’s so much ‘dark’ data that we just can’t process,” Mitra said. “Look at Facebook, for example: it took 256 Tesla P100 GPUs to train ImageNet in one hour, a job that would previously have taken days.”
So what are the current options for improving computing performance? One is a better logic switch, but there are few experimental demonstrations of one. A second is to use design “tricks,” Mitra said, such as multicore processors, accelerators, or power-management techniques. But there are only so many tricks, he added, and once they are implemented, the designs become so complex that verification becomes difficult. Then there is what Mitra calls the “memory wall.”
“A single common thread among all types of abundant-data applications is the memory wall: systems need to access memory constantly,” Mitra said.
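The effect is easy to quantify with a back-of-envelope roofline estimate. The sketch below is illustrative only; the peak-compute and bandwidth figures are assumptions, not numbers from Leti or Stanford. It shows how a memory-bound operation such as a vector add is capped by memory bandwidth long before the processor’s arithmetic units are saturated.

```python
# Back-of-envelope "roofline" estimate of the memory wall.
# All hardware numbers below are illustrative assumptions.

PEAK_FLOPS = 10e12   # assumed peak compute: 10 TFLOP/s
MEM_BW     = 100e9   # assumed memory bandwidth: 100 GB/s

def attainable_flops(flops_per_byte):
    """Attainable throughput = min(compute roof, bandwidth * arithmetic intensity)."""
    return min(PEAK_FLOPS, MEM_BW * flops_per_byte)

# Vector add c[i] = a[i] + b[i] with 4-byte floats:
# 1 FLOP per element, 12 bytes moved (read a, read b, write c).
vec_add_intensity = 1 / 12
print(f"vector add: {attainable_flops(vec_add_intensity) / 1e9:.1f} GFLOP/s "
      f"on a {PEAK_FLOPS / 1e12:.0f} TFLOP/s machine")
# -> roughly 8 GFLOP/s: the operation is limited by memory traffic,
#    leaving more than 99% of the arithmetic units idle.
```

Bringing memory physically closer to the logic, as the Leti–Stanford work aims to do, effectively raises the bandwidth term and lifts that ceiling.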
This, Mitra said, is where the concept of computation immersed in memory comes in; it is the focus of the collaboration with Leti in which Sabonnadière hopes for a breakthrough. The approach brings computation close to the data using advanced 3D integration. The chip combines resistive random-access memory (RRAM) with carbon nanotube transistors, which Mitra says are the only logic technology that can surpass CMOS.
The RRAM and carbon nanotube layers are built vertically over one another, creating a dense 3D computer architecture with interleaved layers of logic and memory. By inserting ultra-dense wires between these layers, the 3D architecture should relieve the communication bottleneck.
Mitra likens this to the challenge of getting from San Francisco to Berkeley in California: with just three bridges across the bay, traffic jams are inevitable. With more bridges, or in the case of the 3D architecture he proposes, multiple nanoscale interlayer vias, the bottleneck can be addressed.
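The analogy translates directly into a toy calculation: if transfer time is bytes moved divided by the number of parallel links times per-link bandwidth, then multiplying the links (bridges, or interlayer vias) shrinks the jam proportionally. A minimal sketch, with entirely hypothetical link counts and speeds:

```python
# Toy model of the bridge analogy: transfer time vs. number of parallel links.
# Link counts and per-link bandwidths are hypothetical, for illustration only.

def transfer_time(total_bytes, links, bytes_per_sec_per_link):
    """Time to move a payload across `links` equally loaded channels."""
    return total_bytes / (links * bytes_per_sec_per_link)

payload = 1e9  # 1 GB of data to move between logic and memory

# Conventional chip: on the order of a thousand off-chip connections ("three bridges").
few_links = transfer_time(payload, links=1_000, bytes_per_sec_per_link=1e6)

# 3D stack: orders of magnitude more interlayer vias ("more bridges").
many_links = transfer_time(payload, links=1_000_000, bytes_per_sec_per_link=1e6)

print(f"few links:  {few_links:.3f} s")   # 1.000 s
print(f"many links: {many_links:.3f} s")  # 0.001 s
```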
Breakthroughs in Memory and Software 2.0
Barbara De Salvo, chief scientist at Leti, said the industry is not putting enough emphasis on new memory technologies, which are still generally regarded as niche.
“In memory, the industry is still taking a conventional approach,” De Salvo said. “Technologies like resistive RAM, magnetic RAM, and phase-change memory are still not being fully exploited. But they could bring huge breakthroughs in terms of enabling novel architectures.”
De Salvo added that the use of deep learning and AI in software also has the potential to yield major breakthroughs in computing in the years to come.
“I’m talking about a new concept that uses deep learning and machine learning to develop software,” she said. “Software is one of the most expensive parts of a system. By using deep learning to generate the software, some tasks that previously took six months can be done in a matter of days.”
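De Salvo did not describe a specific toolchain, but the idea can be shown in miniature: rather than hand-coding a rule, the program learns it from examples. In the sketch below, a few lines of gradient descent recover the Fahrenheit-to-Celsius formula purely from sample data; it is a deliberately tiny stand-in for deep learning, not a description of Leti’s methods.

```python
# "Software 2.0" in miniature: learn a conversion rule from examples
# instead of writing it by hand. A toy stand-in for deep learning.

# Training data: (fahrenheit, celsius) pairs.
samples = [(32.0, 0.0), (212.0, 100.0), (98.6, 37.0), (-40.0, -40.0)]

w, b = 0.0, 0.0   # model: celsius ~ w * fahrenheit + b
lr = 5e-5         # learning rate, chosen small enough for stable convergence

for _ in range(200_000):  # plain gradient descent on mean squared error
    grad_w = grad_b = 0.0
    for f, c in samples:
        err = (w * f + b) - c
        grad_w += 2 * err * f
        grad_b += 2 * err
    w -= lr * grad_w / len(samples)
    b -= lr * grad_b / len(samples)

print(f"learned: w = {w:.4f}, b = {b:.2f}")  # w ~ 0.5556 (5/9), b ~ -17.78 (-160/9)
```

No one wrote the conversion formula into the program; it emerged from the data, which is the essence of the concept De Salvo described.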
— Nitin Dahad is a European correspondent for EE Times.