Graphcore cloud offering available with American AI specialist Cirrascale...
Graphcore Mk2 IPUs are now available to run customer workloads through specialist AI cloud provider Cirrascale. This is the first publicly available Mk2 IPU-POD technology, though Graphcore’s Mk1 product was available to selected customers as part of Microsoft’s Azure cloud offering. Graphcore hopes the new cloud offering will help customers scale from experimentation, proof of concept and pilot projects to larger production systems.
US cloud compute provider Cirrascale specializes in compute for AI applications in autonomous vehicle research and infrastructure, medical imaging, and natural language processing. The company's existing GPU-based offerings are focused on autonomous driving, though at the time of writing, Cirrascale's Graphcore hardware is advertised predominantly for financial services and healthcare applications. Cirrascale currently operates six data centers in the US, with more planned.
The Cirrascale cluster, dubbed “Graphcloud,” offers two Graphcore instance types. IPU-POD16 instances deliver 4 petaFLOPS from 16 IPUs (four M2000 systems), and IPU-POD64 instances deliver 16 petaFLOPS from 64 IPUs (16 M2000 systems). Each IPU-POD16 instance is supported by a Dell R6525 host server with dual-socket AMD EPYC2 CPUs; each IPU-POD64 is supported by four of the same. Each IPU-POD16 offers 14.4GB of in-processor memory and 512GB of streaming memory as 8 x 64GB DIMMs (the IPU-POD64’s memory is scaled by 4x).
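The per-IPU and IPU-POD64 figures follow directly from the quoted IPU-POD16 specs, as a quick back-of-the-envelope check shows:

```python
# Quoted IPU-POD16 specs from the article
pod16_ipus = 16
pod16_pflops = 4            # petaFLOPS
pod16_stream_gb = 8 * 64    # eight 64GB DIMMs = 512GB streaming memory

# Derived figures (the IPU-POD64 scales the IPU-POD16 by 4x)
tflops_per_ipu = pod16_pflops * 1000 / pod16_ipus
pod64_pflops = pod16_pflops * 4
pod64_stream_gb = pod16_stream_gb * 4

print(tflops_per_ipu)   # 250.0 TFLOPS per Mk2 IPU
print(pod64_pflops)     # 16 petaFLOPS
print(pod64_stream_gb)  # 2048 GB streaming memory
```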
One of the first companies to use Graphcloud, UK-based Healx, has developed an AI platform for drug discovery. The company has been using Graphcloud for a month, having ported its existing Mk1 code to the Mk2 hardware, which gave it “a huge performance advantage,” according to Dan O’Donovan, Technical Lead, Machine Learning Engineering at Healx.
Graphcore also announced an academic program aimed at grad students, researchers and professors who want to use Graphcore systems for research or teaching. The program will give free access to IPU-powered systems (based on the Dell DSS8440 server with 16 Mk1 IPUs on eight C2 PCIe cards). Graphcore offers free support for these systems as well.
Prior to the official launch of Graphcore’s academic program, researchers using IPUs for their work have already demonstrated several applications.
Researchers from the University of California, Berkeley, working with Google Brain, published research examining ways to make training deep neural networks more efficient. By training individual layers of the network separately, they reduced both the required memory footprint and the communication overhead compared to model parallelism.
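The layer-by-layer idea can be illustrated with a toy sketch (plain NumPy, not the paper's actual method): each layer is trained against its own local readout and then frozen, so only one layer's weights and gradients need to be live in memory at a time, and no gradients flow between layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.normal(size=(200, 8))
y = X[:, :1] ** 2 + X[:, 1:2]

def train_layer(h, y, width, steps=300, lr=0.02):
    """Train one hidden layer with a LOCAL objective: a ReLU layer plus
    a linear readout, optimized by plain gradient descent on MSE.
    Gradients never reach earlier (frozen) layers."""
    n, d = h.shape
    W = rng.normal(scale=0.1, size=(d, width))
    v = np.zeros((width, 1))
    for _ in range(steps):
        z = h @ W
        a = np.maximum(z, 0.0)                 # ReLU activations
        err = a @ v - y                        # local MSE residual
        grad_v = a.T @ err / n
        grad_W = h.T @ ((err @ v.T) * (z > 0)) / n
        v -= lr * grad_v
        W -= lr * grad_W
    return W, v

# Greedy layer-wise training: each layer sees only the frozen
# activations of the previous one.
h = X
for _ in range(3):
    W, v = train_layer(h, y, width=16)
    h = np.maximum(h @ W, 0.0)                 # freeze layer, feed forward

final_loss = float(np.mean((h @ v - y) ** 2))
print(round(final_loss, 4))
```

The memory saving comes from never materializing the full backward pass: each local optimization touches one layer's parameters and activations, which is also why less inter-device communication is needed than in model parallelism.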
Researchers at Imperial College London used Graphcore hardware in their computer vision work. Their recent paper shows how a technique called Gaussian Belief Propagation can be combined with the IPU to solve the classical computer vision problem of bundle adjustment.
And at the University of Bristol, researchers examined how the Graphcore IPU could be used to manage experimental data from the Large Hadron Collider at CERN. Their paper compares various neural network-based particle physics workloads implemented on IPUs, GPUs and CPUs.