Synaptics Expands Into Edge AI

By Gina Roos

Once best known for interface products such as fingerprint sensors and touchpads, Synaptics is now expanding its portfolio into edge-AI processors.

At one time, Synaptics Inc. was best known for its interface products, including fingerprint sensors, touchpads, and display drivers for PCs and mobile phones. Today, propelled by a string of acquisitions in recent years, the company is making a big push into consumer IoT as well as computer-vision and artificial-intelligence solutions at the edge. Synaptics sees opportunities in computer vision across all of its markets and recently launched edge-AI processors that target real-time computer-vision and multimedia applications.

The company’s recent AI roadmap spans from enhancing the image quality of high-resolution cameras using the high-end VS680 multi-TOPS processor to serving battery-powered devices at a lower resolution with the ultra-low–power Katana Edge AI system-on-chip (SoC).

Last year, Synaptics introduced the Smart Edge AI platform, consisting of the VideoSmart VS600 family of edge-computing video SoCs with a secure AI framework. The SoCs combine a CPU, NPU, and GPU and are designed specifically for smart displays, smart cameras, video sound cards, set-top boxes, voice-enabled devices, and computer-vision IoT products.

The platform uses the company’s Synaptics Neural Network Acceleration and Processing (SyNAP) technology, a full-stack solution for running deep-learning models on-device to enable advanced features. With inferencing done on the device, it sidesteps the privacy, security, and latency issues of sending data to the cloud.
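SyNAP’s runtime APIs are proprietary and not detailed here, but the on-device inference pattern it enables looks much the same in open runtimes. Below is a minimal sketch using TensorFlow Lite as a stand-in; the model file and input data are placeholders, not Synaptics artifacts.

```python
# Generic on-device inference sketch. TensorFlow Lite stands in for
# illustration; SyNAP's actual runtime API is proprietary and not shown.
import numpy as np
import tensorflow as tf

# Placeholder model file: in practice a vendor toolchain compiles the
# trained network into a device-ready binary.
interpreter = tf.lite.Interpreter(model_path="person_detect.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A camera frame resized to the model's expected input shape
# (zeros here as placeholder data).
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # inference runs entirely on the device
scores = interpreter.get_tensor(out["index"])
print(scores)  # only results leave the chip, never the raw frame
```

Because the raw frame is consumed and discarded on the device, only the inference result, such as a detection score, ever needs to leave the chip.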

The VS600 SoCs include an integrated MIPI-CSI camera serial interface with an advanced image-signal–processing engine for edge-based computer-vision inference. They also use the company’s far-field voice and customizable wake-word technology for edge-based voice processing, along with the SyKURE security framework.

At the other end of the spectrum is an ultra-low–power platform for battery-operated devices. Built on a multicore processor architecture optimized for ultra-low power and low latency in voice, audio, and vision applications, the Katana Edge AI platform features proprietary neural-network and domain-specific processing cores, on-chip memory, and multiple architectural techniques for power savings. Katana Edge AI can be combined with the company’s wireless connectivity offerings for system-level modules and solutions.

An example of how Synaptics manages security on the VS680 edge SoC (Source: Synaptics)

“There is a ton of applications where plugging in is just not viable, so there is an interest in battery power, whether it is in the field or industrial, and particularly at home,” said Patrick Worfolk, senior vice president and chief technology officer at Synaptics. “With this particular Katana platform, we’re targeting very low power.”

Typical applications for the Katana SoC for battery-powered devices include people or object recognition and counting; visual, voice, or sound detection; asset or inventory tracking; and environmental sensing.

The Katana platform also requires software optimization techniques coupled with the silicon, which is where the company’s recently announced partnership with Eta Compute comes into play. The Katana SoC will be co-optimized with Eta Compute’s Tensai Flow software, and the companies will work together to offer application-specific kits that will include pre-trained machine-learning models and reference designs.

Users will also be able to train the models with their own datasets using frameworks such as TensorFlow, Caffe, and ONNX.
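The article doesn’t detail the Tensai Flow workflow itself, but the generic retrain-and-convert flow it wraps is familiar from open tooling. The sketch below uses TensorFlow’s post-training int8 quantization, a common step for fitting a model onto a small NPU; the model path and calibration data are placeholders.

```python
# Generic retrain-then-convert flow shown with TensorFlow Lite tooling;
# Eta Compute's Tensai Flow toolchain itself is not shown here.
import tensorflow as tf

# Hypothetical: a model retrained on the user's own dataset.
converter = tf.lite.TFLiteConverter.from_saved_model("my_retrained_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization

def representative_data():
    # A few calibration samples from the user's dataset (placeholder).
    for _ in range(100):
        yield [tf.random.uniform([1, 96, 96, 1])]

converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())  # int8 model suited to a small edge NPU
```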

Synaptics eased its path into consumer IoT via two acquisitions in 2017: Conexant Systems LLC and Marvell Technology Group’s Multimedia Business Unit. Conexant gave the company access to advanced voice- and audio-processing solutions for the smart home, including far-field voice technology for trigger-word detection and keyword spotting, while Marvell’s Multimedia Business Unit delivered extensive IP for advanced processing technology for video and audio applications, particularly digital personal assistants, as well as the smart home.

With the Conexant acquisition, Synaptics gained a portfolio of audio products, providing the right architecture to do keyword spotting at the edge — and that is done through neural networks, said Worfolk. The multimedia team carved out from Marvell Technology was developing video processors, and those devices are used in streaming media as well as in smart displays, he added.

“As these smart display products integrate cameras, you run into all the same challenges around performance and privacy, and there’s more and more drive to do those algorithms in the edge device,” said Worfolk. “The natural structures for those types of algorithms today with the best performance are AI algorithms.”

In 2020, Synaptics bolstered its IoT position with the acquisition of Broadcom’s wireless IoT business, adding Wi-Fi, Bluetooth, and GNSS/GPS technologies for applications including home automation, smart displays and speakers, media streamers, IP cameras, and automotive. By pairing its edge SoCs with that wireless technology, the company can open up opportunities beyond the consumer IoT market.

Synaptics also acquired DisplayLink Corp., adding universal docking solutions and video-compression technology to its portfolio. The company plans to combine the video-compression technology with its existing and upcoming video-interface products and wireless solutions.

Built on edge processing

Processing at the edge is not new to Synaptics. All processing of the raw data in its embedded sensing chips, including fingerprint products and touch controllers, happens on-chip because of concerns around power, latency, and security.

Synaptics’ edge-AI ambitions actually predate its interface products: the company was founded in 1986 to research neural networks, pivoted into other technologies, and has now come full circle to develop edge-AI processors for computer-vision and multimedia applications.

“We were founded to do neural network chips over 30 years ago; back then, all the chips were analog AI chips, and it was challenging to scale well,” said Worfolk. “In fact, the company went off in a slightly different direction after the initial founding and started doing pattern recognition, which is a classic AI problem. We’ve been doing AI for a long time, but we have recently migrated to deep learning, and these deep neural networks have really taken over by storm.

“With the breakthroughs in AI in the last decade, more and more of these traditional algorithms have mapped over to AI algorithms, enabling performance advantages when you do the processing at the edge,” he added.

Worfolk said the nature of the company’s products and the vertical markets it serves are driving the need for AI-based algorithms.

“We’ve entered the AI space vertically through our existing markets, and then with those products, we’re expanding into neighboring markets,” he said. “This is quite different from many of the startups you’ve seen in this space who have some sort of novel concept about some kind of AI processing and are developing a chip that they want to go broadly across multiple markets.”

Next steps

In the early days, Synaptics developed its own algorithms for its chips; the keyword-spotting and trigger-word algorithms for voice digital assistants, for example, are core in-house algorithms. But Synaptics wanted to open up its silicon so that third parties could run their own deep neural networks and other algorithms on its chips, and that requires a tool suite, which is not easy to build.

The company entered into a partnership with Eta Compute to develop the software tools to train a deep neural network and compile it “so it can run on our silicon, and we could move a little faster and open up our chips to third parties,” said Worfolk.

There are other challenges in a market where innovation is happening at such a fast pace, which often can lead to performance tradeoffs.

“The field as a whole is very immature, and in that sense, it is moving very quickly; there are new announcements about new types of neural networks every single week, and a lot of the work has been done through academic or big research groups that are trying to push the boundaries of performance,” said Worfolk. “But there is a big gap between academic research and what can actually run on a small device. Although we are seeing algorithms that are able to perform on new levels that we’ve never seen before on the vision or the audio side, they take more and more compute.”

This often translates into tradeoffs between efficiency and flexibility. Typically, the first piece of silicon for a particular target market favors flexibility over raw efficiency, and as the neural networks that “run on that silicon mature and become more stable to produce the desired functionality, we can look at a second-generation chip,” which is optimized for that performance, said Worfolk.

“It’s what makes it so exciting for us, because there’s always something new and interesting, but it is a challenge from a business perspective,” he said.

The company’s strategy is to build the silicon and then create an example application or demo that makes it easier to discuss with customers. “It also generates ideas around what you can do with the chip and makes sure that we understand what it takes to bring this chip to product,” said Worfolk.

At the Embedded Vision Summit, Worfolk demonstrated the Katana chip in a battery-powered people counter used to track usage in office conference rooms.

At the Embedded Vision Summit, Synaptics demonstrated the Katana Edge AI SoC in a people-counting application like this one. (Source: Synaptics)

“This system is not just the Synaptics Katana chip; it also includes cameras, motion sensors, and wireless communications, which are also part of our portfolio,” he said. “If you want to run on, for example, four AA batteries for two years, Katana is a platform that, under appropriate use cases and conditions, could operate on that kind of time frame.”
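Worfolk’s “four AA batteries for two years” target implies a strikingly small average power budget, which is easy to sanity-check. The cell capacity below is a typical alkaline figure assumed for illustration, not a Synaptics spec.

```python
# Back-of-envelope power budget for "four AA batteries, two years".
# Capacity per alkaline AA is an assumed typical value (~2.5 Ah at 1.5 V),
# not a Synaptics specification.
cells = 4
wh_per_cell = 1.5 * 2.5          # volts * amp-hours = ~3.75 Wh per cell
budget_wh = cells * wh_per_cell  # ~15 Wh total

hours = 2 * 365 * 24             # two years = 17,520 hours
avg_power_mw = budget_wh / hours * 1000
print(f"Average power budget: {avg_power_mw:.2f} mW")  # ~0.86 mW
```

A sub-milliwatt average budget explains the system design Worfolk describes: the motion sensor wakes the chip, a brief inference burst runs, and everything returns to sleep.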

Worfolk also showcased the more powerful VS680 multimedia SoC in an AI-based scaling application. The demo showed the chip performing super-resolution enhancement, upscaling full HD to 4K with a neural-network–based upscaler that produces crisper, sharper images than a traditional hardware-scaling algorithm can deliver.
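Synaptics has not published the VS680 upscaler’s architecture, but a common neural approach to 2x super resolution is sub-pixel convolution (ESPCN-style), sketched below in TensorFlow purely as an illustration of the technique.

```python
# Illustrative ESPCN-style 2x super-resolution network (FHD -> 4K).
# This is a common sub-pixel-convolution design, not the VS680's
# actual (unpublished) upscaler.
import tensorflow as tf

def make_sr2x(channels=3):
    inp = tf.keras.Input(shape=(1080, 1920, channels))
    x = tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # Predict 2x2 sub-pixels per input pixel, then rearrange them
    # into a frame twice the height and width (3840x2160).
    x = tf.keras.layers.Conv2D(channels * 4, 3, padding="same")(x)
    out = tf.nn.depth_to_space(x, block_size=2)
    return tf.keras.Model(inp, out)

model = make_sr2x()
print(model.output_shape)  # (None, 2160, 3840, 3)
```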

“There is a lot of specsmanship in silicon, but at the end of the day, you want to know if your deep neural network runs efficiently or effectively on the device or not,” he said. “So the goal of the presentation [was] to discuss sample applications that can run on Synaptics devices.”

So how do you select the example application? “As we do our MRD [market requirements document] and PRD [product requirements document] for the chip, there are particular applications that we target, those that we view as flagship applications for the piece of silicon, and that drives the demo,” said Worfolk. “We want something that is sufficiently challenging to show off the competitive advantages, but we also want a demo that has broad interest and is representative of what customers might want to do.”

An example is the people counter for the conference room, which shows off the chip’s ability to run an object-detection network at relatively low power, Worfolk said.

The Katana SoC is based on the company’s custom NPU architecture and proprietary DSPs. It also includes an off-the-shelf CPU and DSP that make it easy for customers to run their algorithms on the chip, he said. This is coupled with Eta Compute’s tool chain, which simplifies porting their networks.

Super resolution using the VS680 AI scaler (Source: Synaptics)

“We gain a great deal of efficiency for the particular networks we have in mind by architecting our own MPU solutions, so we have all the right features in our silicon, and then we add the NPU that is tailored for what we expect some of those verticals will need and make it broad and flexible enough to go into adjacent markets,” said Worfolk.

“I suspect that many companies have fairly similar architectures, so it really is a matter of sizing the compute engines, the memory, and then the interfaces to the sensors,” he said. “When you know what neural network you want to run and the resources it requires, you can make sure that you don’t run into any bottlenecks. So there is a bunch of co-engineering between the very bottom, [which is] the hardware design; the top, [which is] the neural-network–model architecture; and then the tools that map that neural-network–model architecture all the way down to running on the hardware. By considering all those together, that is where you can have a real competitive advantage.”
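The sizing arithmetic Worfolk alludes to is straightforward in principle: per-layer multiply-accumulate counts and weight footprints reveal whether the compute engines and memory are matched to the target network. Below is a toy estimator, not a Synaptics tool; the example layer dimensions and throughput figure are illustrative assumptions.

```python
# Toy sizing arithmetic for one conv layer: multiply-accumulate count
# and weight memory, the kind of numbers used to match an NPU's
# compute engines and on-chip memory to a target network.
def conv_cost(h_out, w_out, c_in, c_out, k):
    macs = h_out * w_out * c_out * k * k * c_in   # MACs per inference
    weights_bytes = c_out * c_in * k * k          # assuming int8 weights
    return macs, weights_bytes

# Example: a 3x3 conv over a 96x96x32 feature map producing 64 channels.
macs, weights = conv_cost(96, 96, 32, 64, 3)
print(f"{macs / 1e6:.1f} MMACs, {weights / 1024:.1f} KiB of weights")
# ~169.9 MMACs: at an assumed 1 GMAC/s of NPU throughput, this single
# layer takes ~0.17 s per inference, flagging a potential bottleneck.
```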

A lot of companies don’t have their own AI teams, so they are learning about AI even as they look to integrate it into their products, Worfolk said. To support such companies, Synaptics has partners who can either train or optimize the models or support the tools.

He views partnerships as a way to fill the company’s gaps, particularly in the IoT space, where there is a broad range of applications and customers who do not have expertise in AI.

This article was originally published on Electronic Products.
