DeepScale on Robo-Car: Fuse Raw Data

Article by Junko Yoshida

DeepScale has developed a perception system for ADAS and autonomous vehicles. It offers pre-trained AI algorithms that work on raw data, not object lists, from multiple sensor types, and accelerates sensor fusion on an embedded processor such as a Snapdragon.

MADISON, Wis. — How does a robo-car perceive the world around it — in real time — safely and accurately? If you think this is a solved problem, think again.

In an exclusive interview with EE Times, DeepScale (Mountain View, Calif.) has disclosed its unique approach to a “perception system” the startup is building for ADAS and highly automated vehicles.

DeepScale is developing perception technology that ingests raw data, not object data, and accelerates sensor fusion on an embedded processor.

“A good chunk of research on deep neural networks (DNNs) today is based on tweaks or modifications of off-the-shelf DNNs,” observed Forrest Iandola, DeepScale’s CEO. In contrast, over at DeepScale, “We’re starting from scratch in developing our own DNN by using raw data — coming from not just image sensors but also radars and lidars,” he explained.

Early fusion vs. late fusion
Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), called DeepScale’s approach “very contemporary,” representing “the latest thinking in applying AI to automated driving.”

How does the DeepScale approach — using raw data to train the neural network — differ from other sensor-fusion methodologies? 

First off, “Today, most sensor fusion applications fuse the object data, not the raw data,” Magney stressed. Further, in most cases, smart sensors produce object data within the sensor itself, while other sensors send raw data to the main processor — where objects are produced before they are ingested into the fusion engine, he explained. Magney called such an approach “late fusion.”
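To make that distinction concrete, here is a minimal sketch of the late-fusion pattern Magney describes. The class and detector names are hypothetical, used only for illustration; the point is that the fusion engine only ever sees object lists, because each sensor pipeline has already discarded its raw data.

```python
# Hypothetical late-fusion sketch: each sensor produces an object list,
# and only those lists reach the fusion engine.
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    label: str            # e.g. "car", "pedestrian"
    position: tuple       # (x, y) in the vehicle frame, meters
    confidence: float

def camera_detector(frame) -> List[DetectedObject]:
    """Per-sensor perception: returns objects, not raw pixels."""
    ...

def radar_detector(scan) -> List[DetectedObject]:
    """A 'smart' radar emits tracked objects rather than raw returns."""
    ...

def fuse_object_lists(camera_objs: List[DetectedObject],
                      radar_objs: List[DetectedObject]) -> List[DetectedObject]:
    """The fusion engine works purely on object lists; the raw data that
    produced them is no longer available at this stage."""
    fused = []
    for cam in camera_objs:
        # Naive association: boost confidence if a radar object sits within
        # 2 m of the camera object's estimated position.
        match = any(abs(cam.position[0] - r.position[0]) < 2.0 and
                    abs(cam.position[1] - r.position[1]) < 2.0
                    for r in radar_objs)
        fused.append(DetectedObject(cam.label, cam.position,
                                    min(1.0, cam.confidence + (0.2 if match else 0.0))))
    return fused
```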

Late Fusion: Traditional approach to sensor fusion (Source: DeepScale)

Clearly, Iandola sees an inherent issue in late fusion.

It poses problems in fusing object data with raw data, he said, especially when the sensor fusion is tasked with handling multiple types of sensory data. “Think about the 3D point cloud created by a lidar,” he said. “While you’re reconstructing the 3D point cloud in your sensor, you are also receiving data coming from cameras at a much different frame rate.”
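A small illustration of the frame-rate mismatch Iandola describes, using hypothetical numbers: a 30-fps camera and a 10-Hz lidar never line up exactly, so any fusion stage first has to decide which samples belong together in time.

```python
# Hypothetical rates: 30-fps camera vs. 10-Hz lidar over one second.
camera_timestamps = [i / 30.0 for i in range(30)]   # camera frame times (s)
lidar_timestamps  = [i / 10.0 for i in range(10)]   # lidar sweep times (s)

def nearest_camera_frame(lidar_t, cam_ts):
    """Pair each lidar sweep with the closest camera frame in time."""
    return min(cam_ts, key=lambda t: abs(t - lidar_t))

pairs = [(lt, nearest_camera_frame(lt, camera_timestamps)) for lt in lidar_timestamps]
for lidar_t, cam_t in pairs[:3]:
    print(f"lidar sweep at {lidar_t:.3f}s  <->  camera frame at {cam_t:.3f}s")
```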

In the process of creating objects, raw data that might have been relevant to other sensors can be lost. Think about the moment when the sun shines directly into the vehicle camera’s lens, or when snow covers the radar, Iandola suggested. Or when the sensors’ data don’t agree with one another. In such cases, fusing object lists becomes challenging.

DeepScale’s approach: Deep Neural Network Sensor Fusion (Source: DeepScale)

“That’s why we believe we must do raw data fusion early, not late, and do it closer to the sensors,” he said. “We think early fusion can help mitigate some of those problems.”
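The early-fusion idea can be sketched in a few lines. The network below is not DeepScale’s actual model (which is not public); it is a minimal, hypothetical example showing raw camera pixels and rasterized lidar and radar grids entering a single DNN as input channels, so the data are fused before any object list exists.

```python
# Minimal early-fusion sketch (hypothetical architecture, PyTorch).
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # 3 camera channels (RGB) + 1 lidar occupancy channel + 1 radar channel,
        # all resampled onto a common grid before entering the network.
        self.backbone = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, camera, lidar, radar):
        x = torch.cat([camera, lidar, radar], dim=1)  # fuse raw data at the input
        feats = self.backbone(x).flatten(1)
        return self.head(feats)

# Example: one batch of raw sensor tensors on a shared 64x64 grid.
net = EarlyFusionNet()
out = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 4])
```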


