But then, what’s the playbook for vision chip companies competing against Mobileye?
“Better vision algorithms” and “a more open system” are the two answers given by Alberto Broggi, general manager of VisLab (Parma, Italy). Ambarella, best known for its high-end video compression and image processing chips, is now moving into the autonomous car market via its 2015 acquisition of VisLab, an automotive vision firm with expertise in autonomous vehicles, including a 15,000-km test drive from Parma to Shanghai in 2010.
Ambarella, although relatively new to the auto market, sees a solid chance to get a foot in the door. Besides its expertise in high-resolution image processing under high-dynamic-range and low-light conditions on a low-power SoC, Ambarella says it is now integrating traditional computer vision with the deep-learning neural-network capabilities it gained from VisLab.
Phil Magney, founder & principal advisor at Vision Systems Intelligence (VSI), observed, “Mobileye is the best at image recognition because their hardware and software are so tightly integrated (this is the same reason Apple typically works better than others). Most vision processor alternatives don’t have dedicated vision algorithms—they typically port third-party vision algorithms to their instruction sets.”
In Magney’s opinion, “It is possible that Mobileye could lose its grip on image recognition, since there is a lot of innovation on the image side applying neural networks.”
And then, there’s the prevailing argument against Mobileye: its “black box” solution.
Magney said, “Some OEMs and Tier 1s want to go deeper into the value chain to have more control over the applications. Mobileye solutions are still black box in this regard, meaning that Mobileye is providing the whole stack on the perception side.”
Luca De Ambroggi, principal analyst, automotive semiconductors at IHS Markit, agrees.
For Mobileye’s rivals, obviously, the race comes down to “performance, including accurate and reliable software and algorithms where Mobileye is very strong,” said De Ambroggi. If you cannot compete just on performance, flexible solutions are the other card you can play, he added. By flexible solutions, he means “more ‘open’ stack to allow OEM to differentiate and add their own value.”
On the other hand, Yole Développement’s Cambou doesn’t necessarily believe that “open” stack is the answer.
He said, “First, let’s acknowledge that Mobileye has set the standard of video based ADAS and in particular Automatic Emergency Braking (AEB).” Tesla, Volvo, Ford, Mazda, GM, Renault and the world’s leading automaker, Volkswagen, have been the main beneficiaries, explained Cambou.
“There was initially big reluctance from the big tier one companies to partner with Mobileye since its approach was relatively closed (think Apple),” Cambou acknowledged. “However, the robustness of its technology did translate into large success, not just for Mobileye itself but also for companies such as TRW, Autoliv, Magna and more recently Valeo and Delphi.”
In Cambou’s opinion, the automotive ecosystem Mobileye has been able to build has given the Israeli company an immeasurable lead. The vision SoC business for the automotive industry “is now turning into a big boys’ game.”
Cambou said Mobileye is outdistancing other vision SoC companies further by trying to solve problems in the next chapter of autonomous driving. As observed in a press briefing by Mobileye’s co-founder and CTO during CES, “Mobileye is no longer focusing on the sensing side (camera and hardware), which will be the part handled by Tier 1s. Mobileye wishes to become the platform for the real-time mapping part in cooperation with the mapping companies (Here, Zenrin, TomTom, Google, Baidu…), while the driving part itself will be handled by car manufacturers and ECU providers,” Cambou explained.
To-do list for Vision SoC designers
When Mobileye and its rivals square off, there are certain things vision SoC designers must do. In software, said Cambou, you need to be first and foremost “an image analysis specialist fully up to date with the latest Convolutional Neural Network (CNN) approach.” Second, you must “master real-time video handling / image data analytics.”
As far as hardware goes, Cambou added, you must “have access to best-in-class digital technology node.” He added, “I am talking 7nm FinFET.” Then, you must “be able to master all SoC integration levels, and then have access to the best image processing IPs (GPU, CPU, MPCPU...).”
According to Ceva’s Wertheizer, that’s where smartphone apps-processor experience might shine. Although they may not be experts in vision algorithms, “the benefit of having been smartphone guys is that they know how to build complex SoCs in a short design cycle,” he noted.
Hardest of all, for SoC companies challenging Mobileye, is the complex, often intertwined automotive ecosystem. Cambou advised, “Be ready to invest massive amounts of money. This is a big boys’ game.” He added, “Partner with automakers and [enter] multiple technology partnership agreements on all levels—including camera, IP, Maps.”
Figure 1: An already-complex web of automated-driving partnerships has been forged among the largest players (Source: Yole Développement)
Renesas, MediaTek and Ambarella all say that they will sample their own automotive vision SoCs—competing with Mobileye fair and square—in 2017.
Exactly how each of these SoCs might look remains unknown until they’re announced. Most likely, coming out this year will be “vision SoCs” for ADAS, rather than “sensor fusion chips”—in a strict sense—aimed at Level 4 and Level 5 autonomous driving.
But vision-chip companies are all saying that, in image analysis, they will offer both traditional computer-vision-based HOG (histogram of oriented gradients) and newer deep-learning-based CNNs. Asked why the dual track, VisLab’s Broggi said, “We are worried about corner cases. Deep learning can’t answer all the questions yet.”
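For readers unfamiliar with the acronym, HOG builds a descriptor from local histograms of gradient orientation, which a classical classifier then scores. A minimal NumPy sketch of that core step (illustrative only; not Ambarella’s or VisLab’s actual pipeline):

```python
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Per-cell histograms of oriented gradients (unsigned, 0-180 degrees)."""
    img = image.astype(np.float64)
    # Central-difference gradients (borders left at zero)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    # Quantize each pixel's orientation into one of `bins` buckets
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()   # magnitude-weighted votes
    return hist
```

In a full detector these cell histograms are block-normalized and fed to a linear classifier such as an SVM; the CNN track replaces this hand-designed descriptor with learned features, which is where the corner-case worry Broggi mentions comes in.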
Remaining questions on sensor fusion
As for upcoming fusion chips, Mobileye has confirmed that a sensor fusion chip alone won’t enable autonomous driving. As Linley Group’s Demler pointed out, “Mobileye needs a partner to supply a high-performance general-purpose processor to handle the driving-policy execution.”
The EyeQ5 processor will handle camera/lidar fusion to tell the computer what it sees, but the driving-policy computer needs to decide what to do with that information, Demler said. “Driving policy execution will require AI algorithms. I wouldn’t necessarily call it Mobileye’s secret sauce, since they’re developing it in collaboration with Intel. Other companies are going about it in different ways.” He added, “AImotive is one example, combining their software stack with an FPGA accelerator and general-purpose Intel processors.”
Given that Mobileye’s strength has always been software, especially image processing, the key focus of EyeQ5’s 360-degree coverage is still on cameras, observed Ceva CEO Wertheizer. EyeQ5, for example, is designed to handle imaging data from more than 16 cameras: it provides 16 virtual MIPI channels, and more than 16 sensors can be supported by multiplexing several physical sensors onto a single virtual channel.
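The multiplexing can be pictured as mapping physical sensors onto a fixed pool of virtual channels. A toy round-robin sketch (the 16-channel figure comes from the article; the mapping policy is a hypothetical illustration, not Mobileye’s scheme):

```python
def assign_virtual_channels(n_sensors, n_channels=16):
    """Round-robin mapping of physical sensors onto MIPI virtual channels.

    Returns {channel: [sensor ids]}. A channel carries more than one
    sensor only when n_sensors exceeds n_channels, which is how a
    16-channel interface can serve a larger sensor set.
    """
    mapping = {ch: [] for ch in range(n_channels)}
    for s in range(n_sensors):
        mapping[s % n_channels].append(s)
    return mapping
```

For 20 sensors, sensors 16–19 share channels 0–3 with sensors 0–3, while channels 4–15 each carry a single sensor; the receiver demultiplexes by virtual-channel ID.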
But for fusing other sensory data, Wertheizer isn’t so sure Mobileye has nailed down all the software needed to process and analyze radar and lidar data.
EyeQ5 has hardware cores such as the PMA (Programmable Macro Array) and VMP (Vector Microcode Processor) designed in. These allow EyeQ5 to run deep neural networks efficiently, providing low-level support for any dense-resolution sensor (cameras, next-generation lidars and radars), as Mobileye told us earlier. But Wertheizer wonders whether EyeQ5 comes with unique software of its own to process sensory data coming from radars and lidars.
“In the end, it might not matter,” said Wertheizer, “because EyeQ5 treats those other sensory data as redundancy to image processing.”
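Wertheizer’s redundancy point can be illustrated with a toy camera-primary decision rule, in which radar and lidar only confirm or cross-check what the vision stack reports (a hypothetical model for illustration, not Mobileye’s actual fusion logic):

```python
def fuse_redundant(camera_hit, radar_hit, lidar_hit):
    """Camera-primary fusion: vision drives the decision; radar and
    lidar act as redundant confirmation rather than equal peers."""
    if camera_hit and (radar_hit or lidar_hit):
        return "confirmed"    # an independent modality agrees
    if camera_hit:
        return "camera-only"  # act on vision, at lower confidence
    if radar_hit or lidar_hit:
        return "cross-check"  # vision missed it; re-examine the scene
    return "clear"
```

Under this framing, deep radar/lidar processing matters less, because those sensors never override the image pipeline; they only raise or lower confidence in it.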
This article first appeared on EE Times U.S.