Nvidia’s GPUs now account for 97.4% of the infrastructure-as-a-service (IaaS) instance types with dedicated accelerators deployed by the top four cloud service providers. By contrast, Intel’s processors power 92.8% of compute instance types, according to one of the first reports from Liftr Cloud Insights’ component tracking service.

AMD’s overall processor share of instance types is just 4.2%. Cloud services tend to keep older instance types in production as long as possible, so we expect AMD to increase its share with deployments of its second-generation Epyc processor, code-named Rome, in the second half of this year.

Among dedicated accelerators, AMD GPUs currently have only a 1.0% share of instance types, the same share as Xilinx’s Virtex UltraScale+ FPGAs. AMD will have to up its game in deep-learning software to make significant headway against the Nvidia juggernaut and its much deeper, more mature software capabilities.

Intel’s Arria 10 FPGA accounts for only 0.6% of dedicated accelerator instance types. Xilinx and Intel must combat the same Nvidia capabilities that AMD is facing, but FPGAs face additional challenges in data center development and verification tools.

Last October, Xilinx introduced its Alveo add-in boards and SDAccel software development environment to address these challenges. In early April, Intel responded by launching its Agilex FPGAs for the data center and Quartus Prime design software.

We would never underestimate Intel, but the company does seem unfocused at the moment, with too many AI-acceleration products ranging from Xeon CPUs to dedicated neural-network processors. While Agilex looks like a good response to Xilinx’s Alveo, we believe Xilinx has an edge while Intel figures out which product line it wants customers to focus on for AI inferencing.


Nvidia so far is keeping rival GPUs and FPGAs to tiny fractions of the cloud market (Source: Liftr Cloud Insights)

The top four cloud service providers sometimes don’t specify the processor deployed in an instance type. Liftr Cloud Insights has pushed the share of instance types that do not specify a processor (but are still known to be x86-64) down to 2.8% of the total.

AWS’s Graviton processor is the only Arm processor currently deployed at the top four clouds. Graviton (and hence Arm) accounts for only 0.2% of overall compute instance types in these clouds. AWS has not grown its Graviton footprint since late March.

Microsoft’s Azure team has shown its Project Olympus server motherboards using Marvell (formerly Cavium) ThunderX2 Arm-based server processors at several events over the past year. If Microsoft is going to offer Arm-based compute instances in Azure, it will likely do so in the second half of 2019, as it also deploys AMD Epyc Rome and Intel Xeon Cascade Lake CPUs.

Alibaba Cloud seems unlikely to deploy an Arm chip in China that is manufactured outside of China, and so will probably stick to x86 servers worldwide. Google Cloud is unlikely to deploy Arm-based instances in 2019. We believe it is likely that other top-ten cloud service providers will deploy either ThunderX2 or Ampere’s eMAG Arm-based server processor before the end of 2019.

Liftr Cloud Insights is a devops-based cloud industry analysis service. Our first monthly production scan of the top four public clouds’ IaaS deployments occurred in late March. We delivered our first monthly Liftr Cloud Components Tracker report in May, after the second scan. A sample version of the first report can be downloaded from the Liftr Cloud Insights website.
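To make the methodology concrete: the shares quoted throughout this article are shares of *instance types* offered in each cloud’s catalog, not of deployed server counts. A minimal sketch of that calculation, using an illustrative record format and made-up catalog entries (not Liftr’s actual schema or data), might look like this:

```python
from collections import Counter

# Illustrative instance-type records, as a scan of a cloud catalog
# might produce them. Field names and values are hypothetical.
instance_types = [
    {"name": "type-a", "cpu_vendor": "Intel"},
    {"name": "type-b", "cpu_vendor": "Intel"},
    {"name": "type-c", "cpu_vendor": "AMD"},
    {"name": "type-d", "cpu_vendor": "Arm"},
]

def vendor_share(records, key="cpu_vendor"):
    """Return each vendor's share of instance types, as a percentage.

    Note: each instance *type* counts once, regardless of how many
    instances of that type customers actually run.
    """
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {vendor: 100.0 * n / total for vendor, n in counts.items()}

print(vendor_share(instance_types))
# {'Intel': 50.0, 'AMD': 25.0, 'Arm': 25.0}
```

This is also why the article notes that clouds keep older instance types in production as long as possible: a vendor’s share moves only when types are added or retired, so catalog churn, not instance volume, drives the month-to-month numbers.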


Both AMD and Arm are expected to grow their tiny CPU shares with new deployments late this year. (Source: Liftr Cloud Insights)

  • Paul Teich, principal analyst for Liftr Cloud Insights, has been a cloud computing analyst since 2012, focused on hardware and software infrastructure. He is a senior member of both the ACM and IEEE and has been granted 12 U.S. patents.