Great Wall Motors' WEY Mocha SUV uses a camera-based in-cabin sensing system built on Ambarella's AI vision processor.
At the recent Shanghai Auto Show, Ambarella said that automotive manufacturer Great Wall Motors’ (GWM) new WEY Mocha flagship sports utility vehicle (SUV) uses a camera-based in-cabin sensing system built on the Ambarella CV25AQ CVflow artificial intelligence (AI) vision processor.
The system is integral to the new SUV, the first model from GWM’s “Coffee Intelligence” driving platform. Launched in 2020, this platform provides an AI system that advances automotive technology through intelligent cockpit systems, intelligent drive, and intelligent automotive electronic and electrical architecture. Senya Pertsel, senior director of automotive marketing at Ambarella, told embedded.com, “The in-cabin sensing data from our CV25AQ CVflow AI vision processor gets fed to the Coffee Intelligence system for autonomous driving.”
The CV25AQ-based system can support a variety of simultaneous, multi-camera channel combinations for recording and/or in-cabin sensing, with the entire system meeting Euro NCAP 2025 standards. Additionally, it performs reliable visual processing under complex lighting conditions and plays a key role in GWM’s intelligent drive process.
“Ambarella and GWM have a strong history of successful collaboration, with several generations of vision systems already in production for a variety of car models,” said Fermi Wang, CEO of Ambarella. “Our companies worked together even more closely to develop this new Ambarella CVflow-based multi-channel AI vision system.”
Ambarella’s AEC-Q100-qualified CV25AQ SoC combines image processing, 6MP30 video encoding/decoding, and CVflow computer vision processing in a single, low-power design. The CVflow architecture provides the deep neural network (DNN) processing required by intelligent automotive cameras. Fabricated in an advanced 10nm process technology, the CV25AQ achieves a combination of low power and high performance in both human vision and computer vision applications. It is well suited to multi-channel digital video recorders, single- or dual-channel electronic mirrors with recording capabilities, and driver/in-cabin monitoring cameras.
The CV25AQ’s CVflow architecture provides computer vision processing at 6MP resolution, enabling image recognition over long distances with high accuracy. The SoC also offers efficient encoding in both the AVC and HEVC video formats, delivering high-resolution video at very low bit rates. Its image signal processor (ISP) provides good imaging in low-light conditions, while its high dynamic range (HDR) processing extracts maximum image detail in high-contrast scenes.
It also includes a suite of cybersecurity features such as secure boot with TrustZone and secure memory, true random number generator (TRNG), one-time programmable memory (OTP), DRAM scrambling and virtualization, and a programmable secure level for each peripheral interface. To help customers easily port their own neural networks onto the CV25AQ SoC, Ambarella’s software development kit offers a complete set of tools.
The automotive market is a key target for Ambarella’s CV range of vision systems-on-chip (SoCs). In its most recent earnings announcement, Wang said the company expected to have shipped 300,000 CV SoCs into the automotive market by the end of April 2021.
Louis Gerhardy, senior director of corporate development at Ambarella, said to embedded.com, “I can tell you that 15%-20% of our total annual revenue was from automotive in our fiscal 2021 that ended on January 31st. Furthermore, we’ve indicated that by the end of this month [April 2021], we expect to have cumulatively shipped approximately 2 million CV SoCs, with at least 300,000 going into the automotive market. Additionally, we anticipate that our automotive revenue will increase at least 20% during our current fiscal Q1 2022 quarter that ends on April 30th.”
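As a quick sanity check on Gerhardy’s figures (taking the quoted totals of roughly 2 million cumulative CV SoC shipments and at least 300,000 automotive units at face value), the automotive share of CV SoC shipments works out to about 15%:

```python
# Figures quoted by Ambarella as of the end of April 2021.
total_cv_socs = 2_000_000    # ~2 million cumulative CV SoC shipments
automotive_socs = 300_000    # at least 300,000 shipped into automotive

automotive_share = automotive_socs / total_cv_socs
print(f"Automotive share of CV SoC shipments: {automotive_share:.0%}")
# → Automotive share of CV SoC shipments: 15%
```

Since the 300,000 is a floor (“at least”), the actual automotive share may be somewhat higher.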
The GWM Coffee Intelligence system is based on Qualcomm Technologies’ Snapdragon Ride platform. The Coffee Intelligence system supports multiple high-resolution cameras and offers users L2+ and L3 intelligent driving capabilities with multi-source heterogeneous sensors. Paired with an upgraded solution featuring two standard high-computing-power platforms, this intelligent driving system can deliver computing power of 700+ TOPS and reserve sufficient hardware capability and computational redundancy for L4/L5 and more complex full-scenario autonomous driving.
This article was originally published on Embedded.
Nitin Dahad is a correspondent for EE Times, EE Times Europe and also Editor-in-Chief of embedded.com. With 35 years in the electronics industry, he’s had many different roles: from engineer to journalist, and from entrepreneur to startup mentor and government advisor. He was part of the startup team that launched 32-bit microprocessor company ARC International in the US in the late 1990s and took it public, and co-founder of The Chilli, which influenced much of the tech startup scene in the early 2000s. He’s also worked with many of the big names – including National Semiconductor, GEC Plessey Semiconductors, Dialog Semiconductor and Marconi Instruments.