Nvidia-Arm Deal: What Does It Mean for AI Compute?

Article by Sally Ward-Foxton

Unless you’ve been living under a rock this autumn, you’ve surely heard by now that Nvidia is set to acquire Arm for US$40 billion in the biggest-ever deal in semiconductor history. Nvidia has gone from strength to strength, with its valuation recently overtaking Intel’s, perhaps spurring this latest, most ambitious move. There has been a lot written about the business and market implications of this potential merger (will Arm licensees accept Nvidia’s pledge to remain neutral, or will they migrate in their droves to RISC-V?), but what does this mean for AI compute?

Nvidia is far and away the leader in specialized artificial-intelligence compute chips. Its GPU technology is hugely successful in the data center, high-performance computing (HPC), and edge server/edge box markets. While it has offerings for robots, automotive, and the internet of things, limitations of the technology mean it is necessarily focused on higher-power applications.

Arm’s offering fills this hole. The U.K. company has the most popular CPU architecture in the world, with a huge customer base in smartphones and IoT devices, which Nvidia will no doubt leverage. As for areas of potential technology overlap, Arm has dedicated AI accelerator IP (Ethos) for low-power systems-on-chip and GPU intellectual property (Mali) for smartphones.

Assuming the sizable regulatory hurdles do not prove insurmountable in the next 18 months or so, and the deal goes through, what will Jensen Huang, engineering CEO extraordinaire, make from this virtual candy store of state-of-the-art IP? Here are a few ideas for what this deal could mean for the future of AI compute.
Nvidia’s Jensen Huang (Image: Nvidia)

Data Center

Nvidia’s data center and HPC AI accelerator business overtook the company’s graphics card business as of last quarter, and this sector will be Nvidia’s primary focus in the months and years to come. Nvidia has said it will build an AI supercomputer at Arm HQ in the U.K. to demonstrate Arm CPUs alongside Nvidia GPUs at massive scale. Arm already has a small but growing presence in the data center and HPC market (Amazon’s Graviton CPU for the data center, Fujitsu’s A64FX CPU in the Fugaku supercomputer, and the Ampere Altra power-efficient data center CPU), and Huang could be planning to use Nvidia’s customer base to accelerate that presence further. I think the combination of the most power-efficient CPU architecture with the leading GPU AI training chips will be too good to pass up. As data centers demand more compute within tight power budgets, we will also see closer integration of CPU, GPU, and networking IP (from Mellanox, Nvidia’s previous purchase) for servers.

Far Edge

With everything from mobile phones to toasters set to gain AI capabilities, this is a huge market that remains untapped for Nvidia. The trouble is, these markets aren’t really a good fit for power-hungry standalone GPU chips. Nvidia has already said that it plans to offer its GPU IP for license going forward. The result will be third-party SoCs that contain Nvidia AI accelerator blocks alongside Arm CPU and MCU cores, for everything from smartphones to smart-home appliances and sensor nodes. How will Nvidia-powered SoCs meet the power and performance demands of these varied niches? Competing with AI accelerator IP designed specifically for low-power markets (Arm Ethos and others) will be a tough sell without other advantages such as price. Could Nvidia offer its GPU IP as part of an Nvidia-Arm license bundle in order to become the dominant AI accelerator in IoT devices? That’s one possibility.
Nvidia’s Jetson Nano family, until now the company’s lowest power offering (Image: Nvidia)
Nvidia has tried to license its IP in the past, without much success. Any move in this direction would mean the company effectively competing with its own customers, a huge no-no. However, if the legalities are carefully structured (see: Qualcomm), it is perfectly possible.

Future devices

Further into the future, will Nvidia use the additional profits from its new cash cow to research and develop alternative, interesting, ambitious, novel types of compute for the huge and varied AI accelerator market? I certainly hope so, and I can’t wait to see what Jensen Huang and his team are planning.

This article was first published on EE Times Europe.
