A standardized architecture for autonomous vehicles to capture and process data will enable driverless vehicles to reach the market efficiently, effectively, and safely.
The automotive industry has come a long way. And the technology for building self-driving cars and autonomous vehicles is no longer science fiction. I’ll admit that it’s certainly tempting to think of the world of tomorrow as being full of space-age vehicles, all gracefully traversing the globe in carefully orchestrated precision. The only thing that I hope doesn’t come to pass is that people in the future are all forced to wear the same shiny silver jumpsuit, like in the movies. Who came up with that idea?
Reality is, of course, far more diverse. And that certainly applies to the technology environment surrounding developers of tomorrow’s autonomous vehicles, especially at the proof-of-concept stage. In addition to a unique and demanding development environment, engineers will be surrounded by a kaleidoscopic variety of custom-built on-premises and cloud applications, all of which somehow need to communicate with each other seamlessly. Bringing that concept to life requires a highly autonomous industrial Internet of Things (IIoT) system.
Drivers, start your engines!
A large number of manufacturers are diving headfirst into building autonomous vehicles (AVs). As developers move through the proof-of-concept phase, they’ll have to negotiate the occasional roadblock along the way.
First of all, a system for autonomous vehicles must be able to do three main things: sense the environment, process data about that environment, and then act on that information within the environment. And that’s essentially a cycle or a loop that happens over and over again. But the amount of data being generated and the speed at which it needs to be processed can quickly become overwhelming.
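That sense–process–act cycle can be sketched in a few lines of code. This is a minimal illustration only; every function and field name here is hypothetical and stands in for what would be a large perception and planning stack in a real vehicle.

```python
# Illustrative sketch of the sense -> process -> act loop described above.
# All names here are hypothetical, not a real AV API.
import random

def sense():
    """Stand-in for reading the sensor package (lidar, radar, etc.)."""
    return {"obstacle_distance_m": random.uniform(5.0, 50.0)}

def process(observation):
    """Stand-in for perception and planning: turn raw data into a decision."""
    if observation["obstacle_distance_m"] < 10.0:
        return "brake"
    return "maintain_speed"

def act(decision):
    """Stand-in for sending a command to the actuators."""
    print(f"command: {decision}")

def control_loop(iterations):
    """The cycle that happens over and over again."""
    for _ in range(iterations):
        observation = sense()
        decision = process(observation)
        act(decision)

control_loop(iterations=3)
```

In a real system this loop runs continuously at a fixed rate, and the volume of sensor data flowing through the `process` step is exactly where the scale problem described above appears.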
Common challenges on the systems side of AVs
To break it down a little: a self-driving car must have a sensor package that observes the environment, whether it supports simple driver-assist technology or a highly or fully autonomous vehicle system. The operating environment dictates the level of fidelity required and how much data will be collected from lidar sensors, radar sensors, actuators, and other points of input. We call that sensor fusion, or data fusion, because it really only works when all these components can share data with each other and agree on the accuracy of the conclusions.
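One classic, textbook way two sensors can "agree" on a conclusion is inverse-variance weighting: each estimate is weighted by how much the sensor is trusted. The sketch below is illustrative only; the sensor names and numbers are made up and are not drawn from any real vehicle.

```python
# Minimal sketch of sensor fusion: two noisy estimates of the same
# distance are combined, weighting the lower-variance (more trusted)
# sensor more heavily. Values are illustrative.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is more certain than either input
    return fused, fused_var

# Hypothetical readings: lidar is precise, radar is noisier.
lidar_m, lidar_var = 24.8, 0.04
radar_m, radar_var = 25.6, 0.36

dist, var = fuse(lidar_m, lidar_var, radar_m, radar_var)
print(f"fused distance: {dist:.2f} m (variance {var:.3f})")
```

The fused estimate lands between the two readings, pulled toward the more reliable sensor, and its variance is lower than either input's, which is the whole point of fusing in the first place.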
And then there’s the thinking part, where the system has to use artificial intelligence (AI) to resolve questions such as: Okay, what do I do with this information? Am I going to turn left? Am I going to go straight? Am I going to turn right? What’s going on in the environment? The system also needs to analyze transient factors, like people, bicycles, or cars, and then make decisions and plan. And of course, as the car takes physical action that in turn changes the environment, the cycle starts all over again.
So the challenge is really about high-level connectivity: the system is only as good as the speed and the quality at which data is captured and processed. And then when external connections are added — like connecting to the cloud and connecting to other systems — they become part of the connectivity solution. The result is a complex distributed system with many components, all in a very tight package.
The concept of the layered databus
Massive scalability is the core premise of every highly autonomous system. And this truism especially applies in the world of autonomous cars, because even the best developer teams can get blindsided by the jump in complexity between a system running under controlled test conditions and a system that’s truly ready to go to market. Going out into the market and functioning, with all the press scrutiny and new test cases the general public will demand, typically adds a whole new layer of mission-critical requirements that no one has accounted for so far.
The layered databus is a concept and term developed by the Industrial Internet Consortium (IIC), an organization that catalyzes and coordinates the priorities and enabling technologies of the industrial Internet. The layered databus was developed to allow developer teams to identify different planes of either control or information within a system. In addition to complete control of the environment, teams are also able to specify the quality of service (QoS) that determines how data must flow between applications for different use cases, including reliability, bandwidth, and latency.
This layered databus concept allows developers to use the same standard across the ecosystem. It also lets them set different conditions and rules for how data is managed in different parts of the system, providing a standardized way to communicate between systems without adding new protocols, gateways, or other bridges. The layered databus naturally allows teams to define different conditions for data use in order to make the system reliable and repeatable.
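To make the per-topic QoS idea concrete, here is a toy in-process sketch: applications publish and subscribe to named topics, and each topic carries its own quality-of-service contract (reliability, history depth). This is illustrative pseudocode in Python; it is not the DDS or RTI Connext API, and the topic names are invented.

```python
# Toy sketch of the layered-databus idea: each topic declares how its
# data must flow. Hypothetical API, not a real middleware library.
from collections import deque

class Databus:
    def __init__(self):
        self._topics = {}

    def create_topic(self, name, reliability="best_effort", history_depth=1):
        """Each topic carries its own QoS contract."""
        self._topics[name] = {
            "qos": {"reliability": reliability, "history_depth": history_depth},
            "history": deque(maxlen=history_depth),
            "subscribers": [],
        }

    def subscribe(self, name, callback):
        topic = self._topics[name]
        topic["subscribers"].append(callback)
        # Late joiners receive retained samples, up to the history depth.
        for sample in topic["history"]:
            callback(sample)

    def publish(self, name, sample):
        topic = self._topics[name]
        topic["history"].append(sample)
        for callback in topic["subscribers"]:
            callback(sample)

bus = Databus()
# High-rate sensor data: latest-value semantics, loss is tolerable.
bus.create_topic("lidar/points", reliability="best_effort", history_depth=1)
# Commands: must arrive, and a short history is kept for late joiners.
bus.create_topic("vehicle/commands", reliability="reliable", history_depth=10)

received = []
bus.subscribe("vehicle/commands", received.append)
bus.publish("vehicle/commands", {"cmd": "slow_down"})
print(received)  # [{'cmd': 'slow_down'}]
```

The point of the design is that both topics speak the same standard, while each sets its own rules for how its data is managed, so no extra protocols or gateways are needed between the sensor layer and the command layer.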
There is debate around where we are as an industry in terms of autonomous vehicle development and when we’ll start to see level 4 and 5 autonomous vehicles on our roads. Though that timeline will often differ depending on whom you speak to, the one thing developers can all agree on is that high-level connectivity is the core ingredient necessary to capture and process the data and to address the many complexities of these systems. A layered databus architecture provides standardized communication within these systems and gives developers the tools to enable driverless vehicles to reach the market efficiently, effectively, and safely.
Bob Leigh is the senior market development director, autonomous systems, at RTI. He brings over 15 years of experience developing new markets and building technology companies to his role.