5 Trends to Watch in Embedded Vision and Edge AI

Article By : Jeff Bier

What is the state of innovation in embedded vision?

While deep learning remains a dominant force, deep neural networks alone don’t make a product.

Presented as a virtual event in May, the Embedded Vision Summit examined the latest developments in practical computer vision and AI edge processing. In my role as the summit’s general chair, I reviewed more than 300 great session proposals for the conference. Here are the trends I’m seeing in the embedded-vision space.

Deep-learning dominance

First, surprising no one, deep learning continues to be a dominant force in the field. It has radically changed what’s possible with computer vision. It has made development more data-driven than code-driven, and it’s changed the tools and techniques we use. But data is a pain. Where do you get it? How much of it do you need? How do you get more of it? How do you know you have the right kind of data?

Complex vision pipelines

Second, despite the deep-learning revolution, product developers are increasingly realizing that deep neural networks (DNNs) do not, by themselves, constitute a product. Real-world products require a complex vision pipeline, often including camera and image processing, DSP, Kalman filters, classical computer vision, and maybe even multiple DNNs, all combined in just the right way to get the results you need.
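To make the pipeline idea concrete, here’s a minimal, self-contained C++ sketch of one classical ingredient, a one-dimensional Kalman filter, smoothing the noisy per-frame position estimates you might get from a DNN detector. The detector output is simulated with random noise here; the variable names and noise levels are illustrative assumptions, not anything drawn from a specific product.

// Minimal sketch: smoothing noisy DNN detections with a 1-D Kalman filter.
// The "detector" below is a random-noise stand-in for a real DNN.
#include <cstdio>
#include <random>

// One-dimensional Kalman filter tracking, say, an object's x position.
struct Kalman1D {
    double x = 0.0;   // state estimate (position, pixels)
    double p = 1.0;   // estimate variance
    double q;         // process noise variance
    double r;         // measurement noise variance

    Kalman1D(double process_noise, double measurement_noise)
        : q(process_noise), r(measurement_noise) {}

    double update(double z) {
        p += q;                  // predict: uncertainty grows between frames
        double k = p / (p + r);  // Kalman gain
        x += k * (z - x);        // correct the estimate with measurement z
        p *= (1.0 - k);          // shrink uncertainty after the correction
        return x;
    }
};

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 4.0);

    Kalman1D filter(0.01, 16.0);
    double true_pos = 100.0;  // pretend the object sits at x = 100 px

    for (int frame = 0; frame < 10; ++frame) {
        double detection = true_pos + noise(rng);  // noisy "DNN" output
        double smoothed = filter.update(detection);
        std::printf("frame %d: raw %.1f -> smoothed %.1f\n",
                    frame, detection, smoothed);
    }
    return 0;
}

In a real pipeline, the same predict-correct loop runs per tracked object, typically in higher dimensions (position plus velocity), sandwiched between the image-processing front end and the application logic.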

Democratized development


The third trend is democratization. It’s easier than ever to develop an embedded-vision application: thanks to a proliferation of tools and libraries, you don’t have to develop your algorithm from scratch in assembly or C. A great example is Edge Impulse, which offers easy-to-use software tools that let developers quickly build AI models and deploy them on low-cost microcontrollers, all with very little coding required.
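To give a feel for what that democratization looks like in practice, below is a hedged C++ sketch of an inference loop in the style of Edge Impulse’s exported SDK. Identifiers such as run_classifier, signal_t, ei_impulse_result_t, and the EI_CLASSIFIER_* constants follow the general shape of that SDK, but exact names and signatures vary by project and version, so treat this as an illustration rather than copy-paste code.

// Hedged sketch of an Edge Impulse-style inference loop; identifiers follow
// the general shape of the exported SDK but may differ in a generated project.
#include <cstdio>
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Feature buffer, assumed to be filled from your sensor or camera.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull feature data on demand.
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;  // wraps the raw feature buffer for the classifier
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return 1;  // inference failed
    }

    // Print the score for each trained class.
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.3f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}

The point is less the specific API than the division of labor: the model, its DSP preprocessing, and the inference runtime all arrive as generated code, so the developer’s job shrinks to feeding in sensor data and reading out classifications.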

Also, we’re starting to see suppliers stepping up to support the whole pipeline (Lattice and Qualcomm are good examples here). It’s not hard to imagine a future in which a semiconductor company that has great tools for one component of the pipeline — DNNs, for example — but nothing for the other critical pieces will lose market share to competitors that offer more complete solutions.

Rise of practical systems

Fourth is what I’d call the maturation of the field: We’re moving past the “wow, that’s so cool” stage and are asking how we deploy this technology in ways that are commercially viable and maintainable.

Containerization is a great example. The approach has been a best practice in cloud development for over a decade, but we’re starting to see it used to speed development in practical embedded systems, including vision and AI systems (which bring their own challenges, with potentially frequent over-the-air model updates).

Similarly, the specters of security and privacy rear their heads. How do we design systems that are secure against hackers and protect user privacy? Relatedly, how do we meet functional safety requirements — indeed, how do we even test for such things? These are issues that don’t come up in science fair projects but do arise when you’re shipping real products to serious customers.

Processors aplenty

Fifth is, honestly, an embarrassment of processor riches. A year or two ago, I observed that we were in a Cambrian explosion of processors for AI. Today, if anything, that trend has accelerated and spread: It seems like everybody who makes a processor — whether it’s a one-dollar MCU or a big, multicore, multi-gigahertz, on-premises server processor — is targeting edge-AI and vision applications.

That said, it’s a big space, and processor companies often target different zones in terms of performance, price, and power. For system developers, it’s great to have choices, but choosing can be challenging, especially when you weigh not just technical factors such as performance and power consumption but also critical issues such as price, business considerations, and supply-chain risk.

If there’s a megatrend here, it’s this: We’re living in a golden era of innovation in embedded vision. There’s never been a better time to build vision-based products.

This article was originally published on EE Times.

Jeff Bier is president of consulting firm BDTI, founder of the Edge AI and Vision Alliance, and general chair of the Embedded Vision Summit.
