Acquisitions, Markets and Startups Galore
The AI accelerator hardware segment has been quite an exciting place to be this year. The landscape is continuing to evolve with startups popping up in every vertical market to take on the incumbents. While it may be too early to assess whether we’ve seen a true giant-killer just yet, the picture gets more and more interesting as the various novel architectures are unveiled.
Here are our top 5 pivotal events in the AI accelerator hardware sector that will continue to reverberate into 2020 and beyond.
AI Comes to Tiny Edge Devices
In July, the inaugural meeting of the TinyML group marked the emergence of a nascent industry segment – software and hardware for AI in ultra-low power devices. The TinyML group describes its focus as machine learning (ML) approaches that consume 1mW or less, which is the threshold for always-on applications in smartphones.
While there are a few startups working on ultra-low power AI accelerators (GreenWaves, Eta Compute, Esperanto and others), there are also companies working on adapting ML to existing microcontroller hardware (Xnor and Picovoice, for example). Google also has a dedicated team working on TensorFlow Lite, a version of TensorFlow slimmed down for resource-constrained environments.
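Much of what makes ML feasible on microcontroller-class hardware comes down to shrinking models: one widely used technique (employed by frameworks such as TensorFlow Lite) is 8-bit affine quantization, which replaces 32-bit floating-point weights with single-byte integers. A minimal sketch in plain Python — the scale and zero-point values below are illustrative, not taken from any real model:

```python
def quantize(x, scale, zero_point):
    """Affine int8 quantization: map a float to an integer in [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float value."""
    return (q - zero_point) * scale

# Illustrative parameters: represent floats in roughly [-1, 1].
scale, zero_point = 1.0 / 127, 0

x = 0.5
q = quantize(x, scale, zero_point)          # one byte instead of four
x_approx = dequantize(q, scale, zero_point) # close to 0.5, small rounding error
print(q, x_approx)
```

The appeal for sub-milliwatt devices is that integer multiply-accumulate operations are far cheaper in silicon area and energy than floating-point ones, and the model's memory footprint drops by roughly 4x.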
We’ll see this segment really start to take off in 2020.
Groq Reveals its Hand
One of the industry’s most hotly anticipated startups, Groq, revealed some of the details of its architecture this autumn. With co-founders who worked on Google’s TPU, and having raised $67 million in funding, all eyes were on the secretive startup when it pulled out of an appearance at the AI Hardware Summit at the last minute.
Following that event, EETimes interviewed the senior leadership team, who detailed the chip’s software-defined hardware architecture and what the company calls “predictable performance” for machine learning inference.
The company officially exited stealth mode at SC ’19 where it publicly revealed its gargantuan chip, capable of 1 POPS (1000 TOPS).
Microsoft Offers Graphcore
For the first time this November, a major cloud service provider offered customers the opportunity to run their workloads on AI accelerator hardware designed and built by one of this segment’s many startups. Microsoft said that it had been working with Graphcore for two years to develop systems around Graphcore’s IPU chip, which it now offers to customers as part of Azure.
Microsoft noted that Azure’s Graphcore hardware is reserved for customers “pushing the boundaries of machine learning,” and certainly, the performance advantage Graphcore’s accelerator offers seems most pronounced for newer, more experimental neural network types. So it’s probably safe to say that Graphcore’s IPU won’t be powering shopping website recommendations or chatbots any time soon.
MLPerf Inference Scores
Also in November, the first round of benchmark scores for AI inference was released by the industry’s well-known AI benchmarking organisation, MLPerf. Given the landscape of chip giants and startups all claiming some performance advantage in this sector, the results finally began to reveal who is winning the race. GPU giant Nvidia took most of the prizes, but close on its heels was Israeli startup Habana Labs. Which brings us to…
Intel Acquires Habana Labs
Four weeks later, the news emerged that Habana was in serious talks about being acquired by Intel. Intel ended up paying $2 billion for Habana Labs, causing tongues to wag regarding the fate of Nervana, Intel’s previous data-centre AI chip acquisition. Nervana had unveiled its training and inference chips just two weeks previously, and many read the Habana deal as a sign that all was not well with Nervana’s hardware performance.
This acquisition was pivotal because it serves to show just how important the AI acceleration vertical is to the chip giants (Intel expects to generate over $3.5 billion in AI-driven revenue in 2019, so it cannot afford to get its offering wrong).