The 5G rollout is a big story for 2020. One of the drivers of 5G will be improved automotive applications at the edge, the near edge, and in the fog.
Junko Yoshida has been comprehensively covering key aspects of automotive sensors and autonomous vehicles. As she notes:
But there’s a complication: whatever sensors perceive at the edge won’t stay at the edge. Captured sensory data must be processed inside a vehicle to be interpreted by machines. This requires massive processing power inside the vehicle brain. It demands updated in-vehicle networks fed by a fatter pipe with very little latency. In the end, it takes a sensor village of enabling machines to make safe and sound decisions.
There is another “pipe” to consider: the one to the centralized AI brains in the cloud or at the edge. It will leverage the big bandwidth promised by 5G radio connections. Of course, the vehicles, whether autonomous or merely the very driver-supportive variety, also need to talk to each other over 5G connections. Things are moving fast for connected cars. As I write this, Qualcomm has announced a large-scale capacity test in Shanghai to demonstrate cellular vehicle-to-everything (C-V2X).
Qualcomm kicks off large-scale C-V2X capacity test in Shanghai (source: Qualcomm)
5G networks are projected to enable more than just mission-critical decision-making. This new high-bandwidth mobile connection is set to enable offloading complex computing, such as AI, to the cloud. It’s not just automotive that will drive this. Other untethered verticals like mobile and IoT will push more data through the higher bandwidths available with 5G.
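To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every latency and compute figure in it is an illustrative assumption, not a measured number; the point is only to show how link delay and compute time combine when deciding where inference should run.

```python
# Back-of-the-envelope latency budget for AI inference.
# All numbers below are illustrative assumptions, not measured figures.

def round_trip_ms(network_ms_one_way: float, compute_ms: float) -> float:
    """Total time to ship a request, run inference, and return a result."""
    return 2 * network_ms_one_way + compute_ms

# Assumed one-way network latencies (ms).
in_vehicle_link = 0.5    # in-vehicle network to the local "vehicle brain"
edge_5g_link = 5.0       # 5G hop to a near-edge ("fog") node
cloud_link = 40.0        # 5G plus backhaul to a central data center

# Assumed inference times (ms): bigger machines finish the math sooner.
vehicle_compute = 30.0
edge_compute = 10.0
cloud_compute = 5.0

for name, link, compute in [
    ("in-vehicle", in_vehicle_link, vehicle_compute),
    ("near-edge / fog", edge_5g_link, edge_compute),
    ("central cloud", cloud_link, cloud_compute),
]:
    print(f"{name:16s}: {round_trip_ms(link, compute):6.1f} ms")
```

Under these made-up numbers the fog node wins, which is exactly the argument for pushing some of the AI out of the central data center.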
Lately, we have been bombarded with news that AI is about to explode. The AMD acquisition of Xilinx has dominated the tech news cycle. With the deal recently confirmed, CEO Lisa Su framed the acquisition for the financial community as a move to gain traction in the data center.
Is AMD looking deep into the data center, or at more AI on the edge? The answer hinges on definitions that are easy to confuse. If you want a detailed explanation of the terms and where the various computing appliances sit in the network, take a look at Nitin Dahad’s edge AI article.
The most thought-provoking label for edge or near-edge hardware is “fog.” What decisions will be made in the fog?
Xilinx’s role in cloud computing
But let’s set aside the meteorological terms for computing hardware. It is worth diving into what role Xilinx technology – the FPGA – plays in cloud computing.
Alveo U50 data center accelerator card (source: Xilinx)
The first microprocessor company to buy one of the two leading FPGA companies was Intel, which acquired Altera. Of course, Intel is heavily invested in the data center and was looking to advance the technology and improve its position there. Intel provides more detail on the role of the FPGA through its line of accelerator cards.
Naturally, Xilinx is also in this business with its Alveo data center accelerator.
The main point of the FPGA is that it is field programmable: its logic can be reconfigured after deployment, which means the accelerator can be updated as models are retrained. In other words, some machine learning can take place in the field. Naturally, this can also be done purely in code running on a microprocessor (or, more typically, a GPU), but dedicated hardware is faster, so the FPGA is preferred for adapting algorithms and continuously improving data processing.
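As a rough illustration of that field-programmability idea, here is a small Python sketch. The names (FpgaBackend, CpuBackend, reprogram, dispatch) are hypothetical stand-ins, not any vendor runtime API; the FPGA backend simply mimics the same math in software so the update-after-deployment pattern can be shown.

```python
from typing import Protocol, Sequence

class InferenceBackend(Protocol):
    """Anything that can score a feature vector."""
    def infer(self, features: Sequence[float]) -> float: ...

class CpuBackend:
    """Pure-software fallback: a toy linear model evaluated on the host CPU."""
    def __init__(self, weights: Sequence[float]) -> None:
        self.weights = list(weights)

    def infer(self, features: Sequence[float]) -> float:
        return sum(w * x for w, x in zip(self.weights, features))

class FpgaBackend:
    """Stand-in for an FPGA accelerator card (an Alveo-class device, say).
    A real system would call the vendor runtime; this one just mimics the
    same math so the dispatch and update logic can be exercised."""
    def __init__(self, weights: Sequence[float]) -> None:
        self.weights = list(weights)          # stands in for the loaded design

    def reprogram(self, new_weights: Sequence[float]) -> None:
        # Field-programmability: swap in a retrained model after deployment.
        self.weights = list(new_weights)

    def infer(self, features: Sequence[float]) -> float:
        return sum(w * x for w, x in zip(self.weights, features))

def dispatch(backend: InferenceBackend, features: Sequence[float]) -> float:
    """Route a request to whichever backend the deployment provides."""
    return backend.infer(features)

if __name__ == "__main__":
    weights = [0.2, -0.5, 1.0]
    fpga, cpu = FpgaBackend(weights), CpuBackend(weights)
    sample = [1.0, 2.0, 3.0]
    print("FPGA result:", dispatch(fpga, sample))
    print("CPU  result:", dispatch(cpu, sample))
    # After retraining in the cloud, push updated parameters to the card.
    fpga.reprogram([0.1, -0.4, 0.9])
    print("FPGA result (updated):", dispatch(fpga, sample))
```

In a real deployment the reprogram step would load a new bitstream or updated parameters through the accelerator’s runtime, but the control flow the host sees is much the same.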
Shift from mobile to HPC
Whether deep in the data center or closer to the user, what we are talking about is high performance computing (HPC). The last major shift in the semiconductor industry was to mobile, at a time when demand in the PC market was fading. Is the industry now looking to the other end of the spectrum? You bet. That shouldn’t come as a big revelation to anyone who has thought about the many tens of thousands of hours of content uploaded to YouTube for every hour that passes on this earth, and the servers required to store and distribute it all.
But the more specific impact on semiconductor supply was highlighted in a recent release from the world’s largest contract chip manufacturer. As I first saw in a story reported by Daniel Nenni, TSMC’s numbers show a clear shift toward chips supplied to the HPC vertical. The third-quarter operating summary indicates smartphone revenue slowing slightly while high performance computing ramps quickly: HPC was 29% of TSMC’s revenue in Q3 2019 and has now jumped to 37%. The full TSMC report is available online.
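A quick bit of arithmetic shows why that share shift matters. In the sketch below only the 29% and 37% shares come from the operating summaries; the total revenue figures are placeholders invented purely to illustrate the calculation.

```python
# Illustrative arithmetic only: the revenue totals below are placeholders,
# not TSMC's reported figures. Only the 29% and 37% platform shares come
# from the operating summaries mentioned above.

hpc_share_q3_2019 = 0.29
hpc_share_q3_2020 = 0.37

# Hypothetical total quarterly revenues (same units), assuming ~20% overall growth.
total_q3_2019 = 100.0
total_q3_2020 = 120.0

hpc_q3_2019 = hpc_share_q3_2019 * total_q3_2019
hpc_q3_2020 = hpc_share_q3_2020 * total_q3_2020

print(f"Implied HPC segment growth: {hpc_q3_2020 / hpc_q3_2019 - 1:.0%}")
# Even with flat total revenue, the share shift alone implies ~28% segment
# growth; with the total growing as well, the HPC segment grows faster still.
```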
It makes one wonder. A new mobile technology, 5G cellular, is actually expected to drive semiconductor revenue share out of the mobile space and into high performance computing. As long as more silicon is consumed, I’m sure everyone will be happy.
Circling back to cars, I wonder how much control we want to hand off to big brother, regardless of whether the reduced latency comes from the edge or from a more centralized server. How will edge AI be programmed to handle situations akin to the Trolley Dilemma? I wonder if morality algorithms might come into play, weighing the lives in your vehicle against some other situation unfolding on the road ahead. Especially if those decisions are made in “the fog.”