Sensor Fusion Maps the Road to Full Autonomy

By Anne-Françoise Pelé

Autonomous vehicles (AVs) need sensors such as camera, radar, and LiDAR units to see their surroundings. AVs also need computing power and artificial intelligence (AI) to analyze multidimensional and sometimes multisource data streams and provide the vehicle with a holistic, unified view of its environment in real time. But while sensor fusion maps the road to full autonomy, many technical challenges remain.

In a presentation at AutoSens Brussels 2020, Norbert Druml, concept engineer at Infineon Technologies Austria, shared the ambition of the €51 million European research project dubbed Prystine (Programmable Systems for Intelligence in Automobiles). Druml showcased some of the key results achieved so far in fail-operational sensing, control, and AI-controlled vehicle demonstrators beyond Level 3 autonomy.

Prystine's consortium comprises 60 partners from 14 European and non-European countries, including car manufacturers such as BMW, Ford, and Maserati; semiconductor companies such as Infineon Technologies and NXP Semiconductors; technology partners; and research institutes.
EU-funded Prystine (Source: Prystine)
Fail-operational behavior

Vehicles will gradually acquire more autonomous functionalities, and the driver will focus less on driving and more on monitoring the intelligent systems to which the driving task is delegated. At Level 3, the driver is expected to take over the driving task in case of a system failure or when the automated driving function reaches its operational limits. At Levels 4 and 5, however, the driver cannot be relied upon to intervene in a timely and appropriate way; the automation must handle safety-critical situations on its own. In this regard, fail-operational behavior is critical in the sense, predict, and act stages of the automation chain.
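To make the fail-operational idea concrete, here is a minimal sketch, in Python, of a supervisor that degrades gracefully instead of switching everything off when a sensor drops out. The mode names, the sensor-health structure, and the two-of-three threshold are illustrative assumptions, not part of the Prystine design.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    NOMINAL = auto()    # full sensor set available
    DEGRADED = auto()   # backup path: reduced speed, head for a safe stop
    SAFE_STOP = auto()  # too little sensing left to keep driving


@dataclass
class SensorStatus:
    camera_ok: bool
    radar_ok: bool
    lidar_ok: bool


def select_mode(status: SensorStatus) -> Mode:
    """Pick an operating mode from current sensor health.

    The thresholds are illustrative: with two of three modalities healthy,
    the vehicle keeps driving on a backup path at reduced speed instead of
    switching off entirely.
    """
    healthy = sum([status.camera_ok, status.radar_ok, status.lidar_ok])
    if healthy == 3:
        return Mode.NOMINAL
    if healthy == 2:
        return Mode.DEGRADED
    return Mode.SAFE_STOP


# Example: the camera drops out in heavy rain; radar and LiDAR still work.
print(select_mode(SensorStatus(camera_ok=False, radar_ok=True, lidar_ok=True)))
# -> Mode.DEGRADED
```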
Norbert Druml
One of Prystine's main objectives is the implementation of FUSION (Fail-operational Urban Surround Perception), which is based on robust radar and LiDAR sensor fusion, along with control functions, to enable safe automated driving in rural and urban environments "and in scenarios where sensors start to fail due to adverse weather conditions," said Druml.

The objective is to move from fail-safe to fail-operational behavior "to really improve the safety of all components being integrated into future cars," said Druml. "This covers components like safety controllers, sensors, radar, LiDAR, cameras, and the computing platforms that come with processing power." The fail-operational system envisioned by the Prystine partners will not switch off the whole functionality when a fault is detected; instead, said Druml, "it will activate the backup system that is capable of supporting some functionality and driving the car at reduced speed to the next pit stop."

To realize Prystine's FUSION, the research focus has been on the development of four clusters of AI algorithms, described below.

Detection of vulnerable road users

In the European Union, 22% of road fatalities are pedestrians and 8% are cyclists. This cluster addresses the perception of vulnerable road users (pedestrians, cyclists, children, and disabled and aged populations) by fusing data from radar, LiDAR, and camera sensors. The SuperSight solution has been developed to eliminate blind spots so that vulnerable road users are seen before they enter the driver's natural field of view. Partners claim SuperSight also provides automatic safety alerts, which reduce road accidents and improve driver proactivity. The solution uses 360° video processing with surround-view cameras mounted on the vehicle.
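As a rough illustration of how detections from radar, LiDAR, and cameras might be combined for vulnerable-road-user perception, the sketch below implements a simple late-fusion voting scheme in Python. It is not the SuperSight algorithm: the Detection structure, the distance gate, and the two-sensor confirmation rule are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    """A single object hypothesis from one sensor, in vehicle coordinates."""
    sensor: str        # "camera", "radar", or "lidar"
    x: float           # metres ahead of the vehicle
    y: float           # metres to the left (+) / right (-)
    label: str         # e.g. "pedestrian", "cyclist"
    confidence: float  # 0..1


def fuse_vru_detections(detections: List[Detection],
                        gate: float = 1.5,
                        min_sensors: int = 2) -> List[dict]:
    """Late fusion by simple spatial voting.

    Detections from different sensors that fall within `gate` metres of each
    other are treated as the same object; an object is confirmed only if at
    least `min_sensors` modalities agree.
    """
    confirmed = []
    used = set()
    for i, a in enumerate(detections):
        if i in used:
            continue
        cluster = [a]
        for j, b in enumerate(detections[i + 1:], start=i + 1):
            if j in used:
                continue
            if ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 < gate:
                cluster.append(b)
                used.add(j)
        sensors = {d.sensor for d in cluster}
        if len(sensors) >= min_sensors:
            confirmed.append({
                "label": max(cluster, key=lambda d: d.confidence).label,
                "x": sum(d.x for d in cluster) / len(cluster),
                "y": sum(d.y for d in cluster) / len(cluster),
                "sensors": sorted(sensors),
            })
    return confirmed


dets = [
    Detection("camera", 12.1, -0.4, "pedestrian", 0.81),
    Detection("lidar", 12.3, -0.5, "pedestrian", 0.74),
    Detection("radar", 35.0, 2.0, "vehicle", 0.60),  # single sensor: not confirmed
]
print(fuse_vru_detections(dets))
```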
Prystine Project (Image: Prystine)
Traffic management

In the transition from autonomous-driving Levels 2 and 3 to Level 4, vehicles must cope with more complex traffic conditions and road networks, especially in urban environments. Prystine partners are working on a traffic-management solution that fuses traffic data from traffic controllers, floating car data, and automatic plate-recognition cameras. "We fuse this data and provide traffic predictions to the cars and to the road users," said Druml. "This provides a field of view way beyond the actual car field of view, and the car can optimize its trajectory and path planning." For instance, he said, the car may adjust its trajectory and speed to make a run of green lights and optimize time and energy consumption.

Suspension control

The consortium evaluated various sensor technologies (laser triangulation, radar, and ultrasonic imaging) that can scan road-surface conditions so that a vehicle can react to the predicted condition by changing the damping coefficient or the vertical position of the suspension system. "Algorithms analyze the geometry of the road ahead and adapt the suspension of the car in such a way that the user does not feel holes and bumps on the road," said Druml. "The convenience of driving the car is vastly improved."

Vehicle control and trajectory planning

This cluster of algorithms targets use cases such as collision detection, collision avoidance, lane changes, emergency stops, overtaking, back maneuvering of heavy-duty trucks and full-size trailers, and start/stop safety, said Druml. The cluster is deployed in a demonstrator with three levels of complexity.

In the first level, called the shared-control scenario, the driver is supported by an AI-based co-driver that "continuously analyzes the trajectory of the car. If a safety-critical situation is detected, the AI-based co-driver supports the driver and hopefully is able to resolve the critical situation in a safe manner."

The next level of complexity, called layered control, "switches smoothly between different automation levels," Druml said. For instance, the vehicle can switch from "a supervised city control to a city chauffeur, and this is done by continuously monitoring not only the driving scenario and situation around the car but also by analyzing the driver status and the complexity of the maneuvers." (A simple mode-arbitration sketch appears after this section.)

The third level of complexity is a fully AI-controlled vehicle. "Here, we fuse the sensor data coming from radars, LiDARs, and cameras and take into account cloud-based information — in particular, traffic state and traffic prediction information — in order to improve the AI-based solution toward automated driving," said Druml.

The three-year Prystine project will end in April 2021, but "the idea of this project" will continue, he said. "We [partners] assembled to get some funding in order to speed up our development and research activities."

Progress, opportunities, challenges

The frenzied, competitive pressure to accelerate the advent of AVs has yielded significant progress in sensor fusion algorithm development. But how far along are we with AV sensor fusion? How are market players approaching it?
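As a rough, hypothetical illustration of the layered-control idea Druml described, the following Python sketch arbitrates between a city-chauffeur mode, a supervised city mode, and handing control back to the driver, based on driver attentiveness and maneuver complexity. The mode names, scores, and thresholds are placeholders invented for this example, not Prystine's actual logic.

```python
from enum import Enum, auto


class DrivingMode(Enum):
    DRIVER_ONLY = auto()      # automation only monitors and warns
    SUPERVISED_CITY = auto()  # automation drives, attentive driver supervises
    CITY_CHAUFFEUR = auto()   # automation drives without supervision


def arbitrate(driver_attentive: bool,
              maneuver_complexity: float,
              scene_confidence: float) -> DrivingMode:
    """Choose an automation level from driver state and scene difficulty.

    `maneuver_complexity` and `scene_confidence` are illustrative 0..1 scores
    that a real system would derive from the fused sensor view and the planned
    trajectory; all thresholds here are placeholders.
    """
    if scene_confidence < 0.5 or maneuver_complexity > 0.8:
        # Too hard for the automation: hand control back to the driver
        # (a real system would trigger a minimum-risk maneuver if needed).
        return DrivingMode.DRIVER_ONLY
    if maneuver_complexity <= 0.3:
        return DrivingMode.CITY_CHAUFFEUR
    # Mid-complexity maneuvers: automation drives only with an attentive supervisor.
    return DrivingMode.SUPERVISED_CITY if driver_attentive else DrivingMode.DRIVER_ONLY


# Example: easy urban scene, high perception confidence, distracted driver.
print(arbitrate(driver_attentive=False, maneuver_complexity=0.2, scene_confidence=0.9))
# -> DrivingMode.CITY_CHAUFFEUR
```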
Pierrick Boulay
EE Times Europe posed these questions to Pierrick Boulay, technology and market analyst at Yole Développement (Lyon, France).

"The E/E [electrical/electronic] architecture in cars is evolving from a distributed architecture to a domain-centralized architecture," said Boulay. "There will be steps in between." One initial step was taken by carmaker Audi with the zFAS domain controller in 2016, he noted. All sensor data, including the signals from the 3D cameras, the long-range radar, LiDAR, and ultrasonic sensors, is continuously fed into and processed by that module. "With this type of domain controller, it is easier to realize data fusion," said Boulay. Tesla took a similar approach with its Autopilot hardware, which "gathers the data from all the embedded sensors but also controls the audio and RF, as well as the navigation systems."

As described in the Prystine project, one key to unlocking autonomy is to fuse and interpret the data coming from a variety of sensors so that the system can see and understand the vehicle's surroundings as a human driver can. AI will be increasingly implemented, and all companies developing algorithms to analyze such volumes of heterogeneous data are expected to find many opportunities, said Boulay. The same is true for companies manufacturing the chips that process the data.

The need for computing power increases consistently with the level of autonomy. Robotic cars, for instance, are already beyond 250 tera-operations per second (TOPS), while Tesla's Full Self-Driving computer approaches 70 TOPS. Can the computing power increase indefinitely? At what level can it be considered sufficient for full autonomy? "Some companies will reach full autonomy with optimized systems needing moderate computing power, while others could require twice as much," said Boulay. "Only time will tell where the limit will be."

Other challenges relate to the power consumption of sensors and computing, especially in electric vehicles. "Processing huge amounts of data can have an impact on the range of an electric vehicle," Boulay said. "As the range is a major concern for customers, such autonomous systems will have to be energy-efficient."

A further challenge for sensor fusion is the ability to fuse data in different dimensional spaces, i.e., 2D and 3D. "That is a key question that OEMs and Tier Ones will have to answer," said Boulay. "The LiDAR will be able to set the scene in 3D, and cameras and radar will be used to fine-tune this scene to bring color to the image, velocity to the objects. This will be quite complex to achieve, and this is what will make the difference between the leaders and the laggards."

This article was first published on EE Times Europe.
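To illustrate the 2D/3D fusion point Boulay raises, the following Python sketch projects 3D LiDAR points into a 2D camera image with a pinhole model, which is the usual first step when overlaying LiDAR geometry on camera pixels. The calibration matrices and toy points are made-up values for the example; a real system would use per-sensor calibrated intrinsics and extrinsics.

```python
import numpy as np


def project_lidar_to_image(points_lidar: np.ndarray,
                           T_cam_from_lidar: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    """Project 3D LiDAR points into 2D pixel coordinates.

    points_lidar     : (N, 3) points in the LiDAR frame, in metres
    T_cam_from_lidar : (4, 4) rigid transform from the LiDAR to the camera frame
    K                : (3, 3) camera intrinsic matrix
    Returns an (M, 2) array of pixel coordinates for points in front of the camera.
    """
    # Homogeneous coordinates, then move the points into the camera frame.
    ones = np.ones((points_lidar.shape[0], 1))
    pts_h = np.hstack([points_lidar, ones])             # (N, 4)
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]     # (N, 3)

    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection: apply intrinsics, then divide by depth.
    pixels_h = (K @ pts_cam.T).T
    return pixels_h[:, :2] / pixels_h[:, 2:3]


# Toy example with made-up calibration: frames assumed to coincide (z forward).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
points = np.array([[0.0, 0.0, 10.0], [1.0, -0.5, 20.0]])
print(project_lidar_to_image(points, T, K))
```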
