Vehicle Perception Engine Makes Autonomous Driving Safer

Article By : Nitin Dahad

The software-based autonomous vehicle perception engine upscales raw data from camera, lidar and radar sensors to provide what the company claims is a more accurate 3D model than object fusion-based platforms.

LONDON — Israeli startup Vayavision has launched a software-based autonomous vehicle environmental perception engine which upscales raw data from camera, lidar and radar sensors to provide what it claims is a more accurate 3D model than object fusion-based platforms.

The company’s CEO, Ronny Cohen, told EETimes that today’s object-led fusion of sensor data is not reliable and can lead to objects being missed. Roads are full of unexpected objects that are absent from training data sets, even when those sets are captured while travelling millions of kilometers.

Cohen said most current generation autonomous driving solutions are based on object fusion, in which each sensor independently detects objects and the system then reconciles which detections are correct. This can produce inaccurate detections and a high rate of false alarms, and ultimately accidents.

It’s thought that more advanced perception solutions like raw data fusion could help better model the 3D environment. Cohen said cameras don’t see depth, and distance sensors — such as lidars and radars — are usually low resolution. Vayavision's VAYADrive 2.0 takes the raw data, upsamples the sparse samples from the distance sensors, and assigns distance information to every pixel in the high-resolution camera image. This, the company said, allows autonomous vehicles to receive crucial information on an object’s size and shape, to separate every small obstacle on the road, and to accurately define the shapes of vehicles, humans, and other objects.
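Vayavision's patented upsampling algorithms are proprietary, but the general idea of assigning a depth value to every camera pixel from sparse distance samples can be sketched as follows. This is a minimal illustration only, assuming the lidar returns have already been projected into the camera's pixel coordinates; a simple nearest-neighbor fill stands in for the company's actual method.

```python
def upsample_depth(width, height, sparse_samples):
    """Assign a depth to every pixel of a width x height camera image.

    sparse_samples: list of (x, y, depth) tuples, i.e. sparse lidar
    returns already projected into pixel coordinates (an assumption
    made for this sketch).
    """
    dense = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Take the depth of the nearest lidar sample (squared
            # Euclidean distance in pixel space).
            nearest = min(
                sparse_samples,
                key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2,
            )
            dense[y][x] = nearest[2]
    return dense

# Example: a 4x4 image with two lidar returns at opposite corners.
depth_map = upsample_depth(4, 4, [(0, 0, 5.0), (3, 3, 12.0)])
```

Each pixel in the resulting map inherits the depth of its closest lidar hit, giving the dense per-pixel distance image the article describes; a production system would use far more sophisticated interpolation and temporal filtering.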

The VAYADrive 2.0 software solution combines artificial intelligence (AI), analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware, and is compatible with a wide range of cameras, lidars and radars. This provides an accurate 3D environmental model of the area around the self-driving vehicle. The company said it breaks new ground in several categories of AV environmental perception: raw data fusion, object detection, classification, SLAM, and movement tracking, providing crucial information about dynamic driving environments.

Cohen emphasized that the key to the company’s solution is its unique set of patented algorithms, which upscale and generate high-resolution images from sparse data.

“It is not extrapolation. Our secret is to upscale the data, analyze the scene frame by frame, and build an accurate 3D model of the environment around the car,” Cohen said.

Vayavision's raw data fusion with upsampling. (Source: Vayavision)

He added that VAYADrive 2.0 increases the safety and affordability of self-driving vehicles and provides OEMs and Tier 1s with the level of autonomy required for the mass distribution of autonomous vehicles. Its raw data fusion architecture offers automotive players a viable alternative to the object fusion models common in the market, increasing detection accuracy and decreasing the high rate of false alarms.

Vayavision is already in talks with several Tier 1s, including top German players, Cohen said. The company, formed in 2016, employs 24 people and raised an $8 million seed round last year led by Viola Ventures, Mizmaa Ventures, and OurCrowd, with strategic investment from Mitsubishi UFJ Capital Co. Ltd and LG Electronics.

Cohen said that at the Consumer Electronics Show last week he kicked off discussions with investors to initiate a $20 million series A financing round.
