AI Replaces Moore

Article By : Rick Merritt

New path discussed at Applied Materials' Semicon West symposium

SAN FRANCISCO — Moore’s Law is dead, long live AI. That’s the semiconductor industry’s new rallying cry, sounded at a daylong symposium sponsored by Applied Materials at Semicon West.

“The time of the node train is coming to an end. There needs to be greater collaboration from materials to devices — hardware, software and systems” in new avenues, said Steve Ghanayem, former head of Applied’s transistor and interconnect group, now scouting for acquisitions and alliances to take the company in directions beyond Moore’s Law.

Moore’s Law is not entirely dead, of course. The race to smaller chips continues — for a few.

In a keynote, CEO Gary Dickerson said Applied will soon announce new transistor materials that reduce leakage current by three orders of magnitude. For chip makers, the news is nearly as big as Intel’s move to high-k metal gates in 2007. But such advances are now relevant only to an increasingly small group of designs and companies.

It can cost $100 million to tape out a 7nm chip, and the time from tape-out to first silicon is stretching to four months, speakers here said. “That’s a check few people can write — as a startup, I can’t afford to write a $100 million check,” said Kurt Busch, chief executive of Syntiant, a designer of an in-memory processor for AI.

“I’m getting less enthusiastic about the latest nodes. They are good for Qualcomm, but that doesn’t apply to everyone else,” said Dileep Bhandarkar, a server processor architect who recently left Qualcomm.

“I think this is what the end of Moore’s Law looks like,” said Berkeley professor emeritus David Patterson, noting that transistor costs are flat at TSMC and that Intel is struggling to produce 10nm chips. “Ninety-five percent of architects think the future is about special-purpose processors,” said Patterson, who had a hand in helping Google design its TPU.

Yan Borodovsky, a veteran lithographer recently retired from Intel, blessed the passing of the torch from Moore’s Law to AI as a new guiding light.

“I think something beyond today’s von Neumann architectures will be helped by something more than Moore. For example, memristor crossbars may become a fundamental component for neuromorphic computing…the world beyond Moore’s Law may be about how many kinds of synapses you can put in a given area and how complex they are,” he said, taking a stab at an AI law.

Applied is preparing new transistor materials to lower leakage. (Chart: Applied)

Wanted: supercomputing for embedded systems

A $2 trillion business in decision support that includes AI is being built on top of the $1.5 trillion IT business, said John Kelly III, IBM’s evangelist for what he called a new cognitive era.

“I was around for the early days of Moore’s Law, but a couple things are happening now that will truly change the world and it’s all around artificial intelligence…This will spawn 50 or more years of tech innovations and will power our semiconductor industry forward,” Kelly said.

The 13-megawatt Summit system IBM recently delivered to U.S. government researchers is the first supercomputer specifically geared to handle AI jobs thanks in part to its use of Nvidia GPUs. “You will never see another traditional supercomputer — they will be blended with AI in the future of computing…but we can’t keep building 13MW systems,” Kelly said.

Indeed, one of the grand challenges of machine learning is driving inference jobs and ultimately training tasks to power-constrained processors on the edge of the network. It’s a moonshot target given giants like Baidu, Facebook and Google take weeks to train models using racks of GPUs today.

“We will see some training on the edge within five years with the first several layers of a neural network processed in a data center and the last few at the edge — that’s inevitable,” said Busch of Syntiant.

AI will serve as a performance driver across many industries. AI processing for a single high-def video stream at 30 frames/second requires 9.4 tera-operations/second, and a self-driving car will need many such cameras, said Bill Dally, chief scientist at Nvidia, in a keynote.
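As a back-of-the-envelope check on those figures, the short sketch below (plain Python; the eight-camera count is an illustrative assumption, not a vehicle spec from the talk) works out the per-frame budget and scales the per-stream rate to a multi-camera car.

```python
# Back-of-the-envelope math behind Dally's figure: 9.4 tera-ops/s for one
# high-def stream at 30 frames/s. The camera count is an illustrative
# assumption, not a number from the keynote.
TOPS_PER_STREAM = 9.4          # tera-operations per second, per video stream
FRAMES_PER_SECOND = 30

ops_per_frame = TOPS_PER_STREAM * 1e12 / FRAMES_PER_SECOND
print(f"~{ops_per_frame / 1e9:.0f} giga-ops per frame")       # ~313

assumed_cameras = 8            # hypothetical count for a self-driving car
print(f"~{TOPS_PER_STREAM * assumed_cameras:.0f} TOPS for "
      f"{assumed_cameras} cameras")                            # ~75
```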

IBM’s Kelly preached the gospel of cognitive computing. (Images: EE Times)


New agenda needed for materials to algorithms

As AI sets aggressive new performance targets, it is also suggesting new technology directions to reach them. They span work in new materials, processes, circuits, architectures, packaging and algorithms.

In short, rethink everything for AI.

“We have been thinking about MRAM or ReRAM as flash replacement…But AI is shining a new light on crossbar architectures using emerging memories and different materials for more linear analog scaling — something like a programmable memristor,” said Jason Woo, an electrical engineering professor at UCLA.

Woo’s researchers have been exploring arrays of three-terminal analog memories with integrated logic functions. It’s the kind of programmable element that startups such as Syntiant and Mythic, as well as IBM researchers, want for AI accelerators based on in-memory computing.
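For readers new to the idea, here is a minimal numerical sketch of what a crossbar computes — a NumPy toy, not a model of any device from Woo’s lab or the startups named above. Each cell’s conductance stores a weight, row voltages carry the inputs, and the summed column currents deliver a matrix-vector product in place.

```python
import numpy as np

# Toy sketch of an analog crossbar multiply-accumulate (illustrative only).
# Each cell's conductance G[i, j] stores a weight; driving voltage v[i] onto
# row i makes column j carry current sum_i v[i] * G[i, j] -- Ohm's law does
# the multiplies, Kirchhoff's current law does the adds.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances = stored weights
v = rng.uniform(0.0, 1.0, size=4)        # input voltages on the rows

column_currents = v @ G                  # matrix-vector product, "in memory"
print(column_currents)
```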

“Given the parallelism of AI workloads, there’s a great opportunity in packaging. We shouldn’t limit ourselves to single-die silicon. Packaging has great potential to overcome the brick wall we hit in Dennard scaling,” said Gary Lauterbach, CTO of Cerebras Systems, one of many startups designing full-reticle chips to deliver the training speeds data centers crave.

Many of the latest data center chips use 2.5D stacks of logic and memory. Meanwhile, TSMC is rolling out so many versions of wafer-level fan-out packages for smartphones and other devices, engineers need a decoder ring to keep up with the acronyms. Still, it’s not enough for AI.

“I haven’t seen any multichip technologies I am happy with in terms of cost and performance. The best I’ve seen is Intel’s EMIB, but that’s not available to everyone,” said Bhandarkar.

Dally gave a quick rundown of work to simplify the algorithms by reducing the size of neural networks and the precision of their matrix math. Using mixed-precision math, supercomputer veteran Jack Dongarra was able to deliver an exaflop of AI-style performance on the Summit system, he said.

Nvidia researchers showed promise for floating point operations at as little as two bits. The Imec research institute is exploring a single-bit approach.
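One way to picture the mixed-precision trick is the toy sketch below, which stores matrices in half precision but accumulates products in single precision — an illustration of the general approach, not the code Dongarra ran on Summit.

```python
import numpy as np

# Illustrative mixed-precision matrix math: inputs stored in float16,
# products accumulated in float32. Sizes and data are arbitrary.
rng = np.random.default_rng(1)
a = rng.standard_normal((64, 64)).astype(np.float16)
b = rng.standard_normal((64, 64)).astype(np.float16)

c_mixed = a.astype(np.float32) @ b.astype(np.float32)   # wide accumulation
c_ref = a.astype(np.float64) @ b.astype(np.float64)     # reference result
print("max abs error:", np.abs(c_mixed - c_ref).max())
```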

Neural nets themselves can be radically simplified to require less compute muscle, Dally added. Accuracy does not take an unacceptable hit even when using just 10% of a neural net’s weights and 30% of its activations, he said. SqueezeNet is an example of this kind of work, targeting embedded AI.
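A minimal sketch of the pruning idea Dally described — magnitude pruning that keeps only the largest 10% of weights, with a random matrix standing in for a trained layer:

```python
import numpy as np

# Magnitude pruning sketch: keep only the largest 10% of weights by absolute
# value and zero the rest. A random matrix stands in for a trained layer.
rng = np.random.default_rng(2)
weights = rng.standard_normal((256, 256))

keep_fraction = 0.10
threshold = np.quantile(np.abs(weights), 1.0 - keep_fraction)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

print("nonzero fraction:", np.count_nonzero(pruned) / pruned.size)  # ~0.10
```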

Nvidia’s Dally said neural nets need to lose some weights. (Image: EE Times)

Quantum is a long shot as a backup

Scary as it is to step off the known path, it can also be a good thing. “It’s a very exciting time to be a computer architect. Now that Moore’s Law has run its course, we have to be really clever,” said Dally.

Applied’s Dickerson had a more philosophical take. “The thing I’ve come to realize in the last few years is what I don’t know,” he said.

If it all fails miserably, IBM’s Kelly held out the potential of quantum computing. IBM has a working 50-qubit system in the lab.

“Between 50 and 100 qubits a system will do computations in seconds that today’s computers could never do…. Beyond AI, it’s the most important thing I’ve seen in my life — it’s a game changer,” he said.

Others cautioned that much fundamental research is still ahead in how to build and use a quantum system.

“We know how to build deep learning systems, but we don’t understand how they work…and we’re still in the Edisonian stage of trying different techniques. Quantum is the opposite. We understand the math and physics, but we don’t know how to build a quantum system,” said Conrad James, a principal member of technical staff at Sandia National Laboratories.

With Moore’s Law winding down and quantum computing a long way off, the semiconductor industry has few options for a single guiding light. Long live AI.

— Rick Merritt, Silicon Valley Bureau Chief, EE Times
