Moving AI to the edge is going to require an entirely new category of processor that is inexpensive and flexible.
Artificial intelligence (AI) has existed in the public consciousness for decades. The (mostly) sentient machines playing the villains in Hollywood movies have never been realistic depictions of the technology, but they have left an impression, nonetheless. AI has proved as exciting to the layman as it is to the expert.
Usually based in remote data centers, AI is capable of collecting and examining immense volumes of data, generating insights based on analytical algorithms. With varying degrees of autonomy, these capabilities have been put to use streamlining decision-making processes.
While AI is often thought of as a product in its own right, it is increasingly intersecting with other parallel trends. Chief among these is the Internet of Things (IoT), which enables previously isolated machines to “talk” to one another and, at the same time, generate data that makes new modes of operation possible.
There is a clear point of convergence here, and it has found expression as the artificial intelligence of things, or AIoT. The vision of the AIoT is an intelligent network of devices capable of gathering and analyzing data remotely and, crucially, translating that data into insights and actions locally, enabling a wide variety of use cases that simply weren’t realistic before.
Inverting the cloud model
Perhaps the most revolutionary element of this vision is the inversion of the cloud model we have all become accustomed to. Shifting services, internal solutions, and networks to a cloud infrastructure has been a convenient and popular solution to digital problems to date — but with an AIoT model, reasoning about the data takes place locally, on the device.
There is good reason for adopting this model. With Statista forecasting that there will be more than 75 billion IoT devices by 2025 — five times the number in 2015 — it must be acknowledged that cloud infrastructure and connectivity cannot scale in the same way, and that more intelligence must be deployed at the edge.
AIoT’s use of on-device processing power avoids the network bandwidth, compute scalability, latency, and security issues of the past. By reducing the strain on the network as a whole, and avoiding expensive and resource-heavy data centers, the AIoT is able to distribute the workload in a way that promises to greatly improve performance.
There is a caveat to this vision, however. Simply putting today’s high-end CPUs onto endpoint devices is not viable. These CPUs are too power-hungry and expensive for this to be a commercially scalable model. So, the question is: how do you get the necessary processing power on to endpoint devices in a cost-effective and power-efficient way?
If the AIoT is to achieve explosive growth in the coming years, this question must be answered. It also represents a significant market opportunity.
Delivering on the requirements
Achieving the necessary processing power without the cost and energy requirements of current high-end CPUs is an unenviable challenge.
Any new breed of processor designed for the AIoT has to have a lower price point than existing solutions. But the challenges are numerous. Existing CPUs rely heavily on third-party hardware/software — with all of the associated licensing costs. As such, new processors will need new architectures that remove the need for this third-party IP. Moreover, component requirements and costs in the overall system must be minimized.
At the same time, these cost-saving measures cannot come at the expense of performance. While some AIoT applications will not require the full heft of processing power that can be delivered by a server in a data center, the requirements of delivering even basic AI and decision-making functionality are still exceptionally high. This means new approaches to the processing function are required, with sophisticated algorithms picking up much of the heavy lifting and allowing the hardware itself to remain relatively lightweight.
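One concrete example of algorithms shouldering the load so that hardware can stay lightweight is weight quantization, a technique widely used in edge AI: storing neural-network weights as 8-bit integers rather than 32-bit floats cuts memory and compute cost roughly fourfold, at the price of a small, bounded rounding error. The sketch below is purely illustrative of the idea, not any particular product's scheme:

```python
def quantize_int8(weights):
    """Symmetric quantization: map float weights onto the int8 range [-127, 127].

    Each weight is stored as a small integer plus one shared scale factor,
    so the rounding error per weight is at most half the scale.
    """
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]

# A toy layer's weights: 4 bytes each as float32, 1 byte each once quantized.
weights = [0.5, -1.0, 0.25]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The trade-off is exactly the one the article describes: a cleverer numerical representation, chosen in software, stands in for raw floating-point horsepower in silicon.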
However, perhaps the biggest challenge is creating a class of processors that can be applied to a multitude of applications. That’s because the AIoT market is not monolithic: standards and requirements vary wildly across hundreds of markets and thousands of market segments.
We all know the costs associated with delivering a new CPU. It is simply not realistic to expect vendors to produce thousands of variations of a platform to serve all of those different needs. If you produce a chip that serves only one purpose or one application, then the problem hasn’t been solved.
Instead, AIoT processors need incredible flexibility, enabling programmable trade-offs between compute classes (AI, DSP, control, and I/O) that can be defined by the product designer rather than the chip vendor.
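The kind of designer-defined trade-off described above can be sketched as a budget of processing resources that the product designer, not the chip vendor, divides among the four compute classes. Everything here is hypothetical — the core count, the Partition class, and both example products are invented for illustration and correspond to no vendor's actual API:

```python
from dataclasses import dataclass

CORE_POOL = 16  # total logical cores on a hypothetical AIoT processor

@dataclass(frozen=True)
class Partition:
    """A designer-chosen split of the core pool across compute classes."""
    ai: int       # cores running neural-network inference
    dsp: int      # cores running signal processing
    control: int  # cores running application logic
    io: int       # cores driving software-defined peripherals

    def validate(self):
        total = self.ai + self.dsp + self.control + self.io
        if total > CORE_POOL:
            raise ValueError(
                f"partition needs {total} cores, only {CORE_POOL} available")
        return self

# A voice-assistant product might weight AI and DSP heavily...
voice_product = Partition(ai=6, dsp=6, control=2, io=2).validate()

# ...while a sensor gateway built on the same silicon weights I/O instead.
gateway_product = Partition(ai=2, dsp=2, control=4, io=8).validate()
```

The point of the sketch is that one chip serves both products: the silicon stays fixed while the balance between compute classes is a software decision made per application.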
Making the AIoT real
If these requirements can be met, the impact could be enormous. A fully functioning and commercially viable AIoT promises to revolutionize everything from the smart home to connected healthcare, the automotive industry, Industry 4.0 and smart cities.
Take the smart home. With AIoT capabilities, it will be possible to offer total control over every facet of your surroundings without even needing to divert your attention, let alone navigate one of the plethora of applications we have today.
While that may seem trivial, the benefits in other application areas could be more significant. An AIoT-enabled healthcare market could open up a world of far greater preventative medicine. Devices in the home could track heart rate and breathing, identify problems early, raise alarms, and automatically deliver real-time data to healthcare professionals, who could then provide the right treatment and care exactly when it is required.
Ultimately this sort of technology can fundamentally enhance our quality of life with new levels of convenience and efficiency from the most trivial parts of our day (such as finding a parking space) to the most vital (such as our safety and health). But none of this will be possible without a new generation of electronics that is able to diffuse the power of AI to the endpoint devices we have all around us.
It is a huge challenge, but one that could herald a genuine intelligence revolution.
Mark Lippett is the CEO of XMOS