Nvidia Touts Computing at Both Ends of the Scale

Article By : Sally Ward-Foxton

Nvidia CEO Jensen Huang promotes GPU-driven applications from the atomic level to a digital twin of Earth.

In his keynote at the company’s fall conference, Nvidia CEO Jensen Huang announced dozens of new technologies, presented an overview of where accelerated computing is headed and offered some tantalizing glimpses of potential world-changing applications.

The common thread winding through Nvidia’s announcements was the metaverse – the virtual world – as the company presented technologies for avatar generation, cybersecurity, computational science and digital twins.

Jensen Huang (Source: Nvidia)

“The internet is like a digital overlay on the world,” Huang said. “The overlay is largely 2D information, text, voice, images, video. But that’s about to change. We now have the technology to create new 3D virtual worlds, or model our physical world.”

Avatar expansion
Unlike previous pandemic keynotes delivered from Huang's kitchen, this one came from the metaverse, or at least from a digital twin of that kitchen, which quickly transformed itself into different scenes.

Among the demos was “Toy Jensen,” an avatar of Huang, complete with leather jacket. The avatar interpreted and answered questions on several scientific topics and provided reasonable, if brief, answers. It spoke in a slightly eerie facsimile of Huang’s voice – eerie only because it was able to mimic the real Huang quite closely, despite ever-so-slightly disjointed speech.

Toy Jensen illustrated the chipmaker's technologies for the creation of avatars, including speech AI, computer vision, natural language processing, recommendation and simulation. Nvidia Omniverse Avatar is designed to create AI-powered assistants in the form of interactive characters with ray-traced 3D graphics capable of seeing, conversing and understanding a speaker's intent.

Nvidia GTC Toy Jensen Avatar
The “Toy Jensen” avatar understood and answered scientific questions in a voice eerily similar to real Jensen Huang’s. (Source: Nvidia)

Avatars can be either cartoonish, like Toy Jensen, or realistic. Another Nvidia demo showed a video conference call with the speaker’s eyes animated to maintain eye contact with the viewer along with simultaneous translation into other languages. The example also included corresponding mouth movement and voice imitation.

The ability to create unique voices derives from Riva, Nvidia's speech AI software, which can imitate a speaker using only 30 minutes of sample speech while accounting for accent, vocabulary, context, pitch and pronunciation. Natural language understanding is powered by Megatron 530B, Nvidia's huge AI model that can both understand and generate language: answering questions on many subjects, summarizing information and translating between languages.

A third avatar demo, which Nvidia calls Project Tokkio, is designed as a virtual assistant for customer service kiosks. One example showed an avatar replacing a waiter, recommending foods, interpreting queries and taking orders, all while maintaining appropriate eye contact with users.

Digital twins
Avatars are among Nvidia's offerings for a category called "Omniverse," a virtual world and simulation platform. Huang said he envisions the metaverse being larger than the real world. Consumers could, for example, purchase a 2D representation of an item in the metaverse, similar to how digital books and music are purchased today. Nvidia wants to expand that model to homes, furniture, cars and art.

Omniverse can also be used to create digital twins of real-world systems, predicting how they behave under certain conditions. The platform can also generate synthetic training data for autonomous vehicle and robotics AIs via a tool called Omniverse Replicator.

Nvidia GTC keynote Omniverse Replicator
The metaverse can be used to create training data for autonomous vehicles and robots via Omniverse Replicator. (Source: Nvidia)
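The appeal of simulation-generated training data is that labels come for free: because the simulator places every object, it knows the ground truth exactly, with no human annotation. The sketch below illustrates that idea in miniature with NumPy; it is a toy stand-in, not the Omniverse Replicator API.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_scene(size=64):
    """Render a toy 'scene': a bright square object on a noisy background.
    The bounding-box label is exact because the renderer placed the object."""
    img = rng.normal(0.1, 0.02, (size, size))   # noisy background
    w = int(rng.integers(8, 16))                # random object size
    x0 = int(rng.integers(0, size - w))         # random object position
    y0 = int(rng.integers(0, size - w))
    img[y0:y0 + w, x0:x0 + w] += 1.0            # draw the object
    bbox = (x0, y0, w, w)                       # ground-truth label, for free
    return img, bbox

# A labeled dataset with zero annotation cost
dataset = [render_scene() for _ in range(100)]
images, labels = zip(*dataset)
```

Randomizing the scene parameters (here, object size and position; in a real pipeline, lighting, textures and camera pose) is what lets models trained on synthetic data transfer to the real world.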

Huang also highlighted digital twins of an entire city and a Siemens Energy heat recovery steam generator. In the latter, the physics of a steam generator was added to Omniverse, enabling the twin to predict parts corrosion. Siemens expects to reduce unplanned downtime by 70 percent.

Another example was a digital twin of a city built by Ericsson to visualize 5G antenna signal quality at various locations while simulating beam-forming. The twin was tuned to accurately reflect building materials, which affect signal propagation. Nvidia also built an RF extension for Omniverse used in this application.

Nvidia GTC Keynote Ericsson digital twin
A digital twin built by Ericsson helps visualize 5G antenna effectiveness and beam-forming. (Source: Nvidia)

Biology revolution
Huang's keynote also featured high-performance computing applications. "The software revolution of deep learning is coming to science," Huang declared.

Nvidia predicts a million-fold leap in performance for computational science applications based on three connected advances. The first is reinventing the full stack, from the chip and system through acceleration libraries to applications, yielding a 50-fold performance boost. The second is a fundamental change in software development: code written via deep learning is highly parallel, making it even more conducive to GPU acceleration and scalable to multi-GPU and multi-node systems, enabling 5,000-fold performance gains. Third, software written by AI can predict results up to 100,000 times faster than software written by humans.

The approach "has completely burst open the way we solve problems, and the problems that are solvable," Huang said, resulting in overall performance gains on the order of 250 million-fold.

Nvidia also unveiled its physics machine learning framework, Modulus (formerly known as SimNet). The platform can train neural networks using governing physics equations along with observed or simulated data.

Applying AI to science requires obeying the laws of physics, and researchers are now creating models that can learn and obey those immutable laws. Physics ML can be used for drug discovery and climate science, among other applications.
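The core idea of physics ML is to make the governing equation itself part of the training loss: the model is penalized wherever its output violates the physics. The toy below shows that residual-minimization idea on a simple ODE, fitting a polynomial rather than a neural network; it is a sketch of the concept, not the Modulus API.

```python
import numpy as np

# Toy physics-constrained fit: find u(x) = sum_k c_k x^k on [0, 1]
# that obeys the governing equation
#     du/dx = -u,  u(0) = 1      (exact solution: exp(-x))
# by minimizing the equation residual at collocation points. A physics-ML
# framework applies the same residual-loss idea to neural networks at scale.

degree = 8
x = np.linspace(0.0, 1.0, 50)                  # collocation points
k = np.arange(degree + 1)

X = x[:, None] ** k                            # u(x)  = X  @ c
dX = k * x[:, None] ** np.maximum(k - 1, 0)    # u'(x) = dX @ c

# Least-squares system: physics residual u' + u = 0 at every point,
# plus a heavily weighted row enforcing the boundary condition u(0) = 1.
A = np.vstack([dX + X, 100.0 * X[:1]])
b = np.concatenate([np.zeros_like(x), [100.0]])
c = np.linalg.lstsq(A, b, rcond=None)[0]

err = np.max(np.abs(X @ c - np.exp(-x)))
print(f"max error vs exact solution: {err:.1e}")
```

Note that no solution data was supplied, only the equation and a boundary condition; in frameworks like Modulus the same residual loss can also be blended with observed or simulated data, as the article describes.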

For drug discovery, recent advancements in protein folding AI have unlocked the three-dimensional structures of proteins. AI can also be used to generate millions of potentially effective chemicals to interact with proteins, and to simulate molecular interactions. Both would accelerate costly drug discovery and development.

“The future of drug discovery is computational [from] end to end, modelling the disease pathway, the genes involved, the drug target interactions, the off-target interactions,” Huang said. “With the confluence of million-X acceleration and ML for protein and chemical structure prediction and physics ML approaches, we are witnessing the dawn of the biology revolution.”

Climate change
Physics ML also can be used to simulate the Earth’s climate decades into the future, helping climate scientists to predict the regional impact of climate change while recommending responses.

“Predicting climate change so we can develop strategies to mitigate and adapt is arguably one of the greatest challenges facing society today,” Huang said. “We don’t currently have the ability to accurately predict the climate decades out. Although much is known about the physics, the scale of the simulation is daunting.”

Climate simulation is far harder than weather simulation, which relies on atmospheric data and whose results can be validated every few days. Long-term predictions must model the physics of the atmosphere, ocean, ice and land while accounting for human activities, along with the interactions among all those variables.

Resolution of 1 to 10 meters is required to incorporate effects like atmospheric clouds that reflect solar radiation back into space. Accounting for those effects is critical for accurate long-term climate predictions, but resolution as much as 100,000 times higher than that of current models requires supercomputer performance beyond what is currently available.
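A back-of-envelope calculation shows why that resolution gap is so daunting. Assuming an explicit 3D solver (an assumption about the numerics, not a figure from the keynote), refining the grid by a factor r multiplies the cell count by r cubed, and the CFL stability condition forces roughly r times more timesteps, so compute cost grows as roughly r to the fourth power.

```python
# Back-of-envelope scaling (assumes an explicit 3-D solver; not a keynote
# figure): refining the grid by r multiplies cells by r**3 and, via the
# CFL stability condition, requires ~r times more timesteps, so total
# compute cost grows as ~r**4.
r = 100_000                  # resolution increase cited above
cost_factor = r ** 4         # on the order of 1e20
print(f"~{cost_factor:.0e}x more compute than today's models")
```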

With that in mind, Huang said Nvidia will build a giant supercomputer, called E2 for “Earth 2,” serving as a digital twin of Earth that can simulate and predict climate change at the scale of the planet. “All the technologies we’ve invented up to this moment are needed to make Earth 2 possible. I can’t imagine a greater and more important use,” Huang said.

This article was originally published on EE Times.

Sally Ward-Foxton covers AI technology and related issues for EETimes.com and all aspects of the European industry for EE Times Europe magazine. Sally has spent more than 15 years writing about the electronics industry from London, UK. She has written for Electronic Design, ECN, Electronic Specifier: Design, Components in Electronics, and many more. She holds a master's degree in Electrical and Electronic Engineering from the University of Cambridge.

 
