Nvidia has teamed up with the Tokyo Institute of Technology to build what could be Japan's fastest AI supercomputer, Tsubame 3.0.

Built on Nvidia’s accelerated computing platform, Tsubame 3.0 is expected to deliver more than twice the performance of its predecessor, Tsubame 2.5. It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double-precision performance, according to Nvidia. That performance would place it among the world’s 10 fastest systems on the latest TOP500 list, released in November.

The system is expected to deliver more than 47 petaflops of AI horsepower. Operated together, Tsubame 3.0 and Tsubame 2.5 will deliver a combined 64.3 petaflops, making the pair Japan’s highest-performing AI supercomputer, according to Tokyo Tech.

Figure 1: Excelling in AI computation, Tsubame 3.0 will be Japan’s highest performing AI supercomputer when operated concurrently with its predecessor. (Source: Nvidia)

Once up and running this summer, Tsubame 3.0 will be used for education and high-technology research at Tokyo Tech, and will be accessible to outside researchers in the private sector. It will also serve as an information infrastructure centre for select Japanese universities.

“Artificial intelligence is rapidly becoming a key application for supercomputing,” said Ian Buck, vice president and general manager of Accelerated Computing at Nvidia. “Nvidia’s GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can drive life-changing advances in such fields as healthcare, energy and transportation.”