In a double announcement, Lattice Semiconductor unveiled a new control FPGA with enhanced security features, and updated its SensAI stack to improve artificial intelligence (AI) performance on its low power FPGAs by a factor of ten.
This is the company’s first silicon release since new CEO Jim Anderson took over last August, when most of the senior leadership team also changed. At the company’s financial analyst day in New York today, Anderson spoke of “remodelling” the company, completely revamping almost every element of the business, according to Patrick Moorhead, president and principal analyst at Moor Insights & Strategy.
“I took the characterization as a [combination of] a completely new leadership team with focusing on lowest power FPGAs, getting away from areas like HDMI IP, plus more financial discipline and a development change to reuse more IP for efficiency,” Moorhead said.
At the same event, Lattice R&D lead Steve Douglass teased the company’s next-generation FPGA platform. This will include a new, more efficient architecture built on a lower-power Samsung 28nm FD-SOI process, increased DSP capabilities and 5x the on-chip memory. The new design nevertheless retains a lower-risk, less complex four-input look-up table (LUT4) architecture.
Devices built on the new architecture will be sampling in 2020.
With the silicon announced today, Lattice has chosen to address a number of security threats caused by unauthorised firmware access.
“Our customers in a whole variety of areas are starting to worry a lot about security of their hardware,” said Gordon Hands, Lattice’s director of product marketing. “Traditionally, people have thought of computer security as being a software issue; they were worried about viruses, now people are starting to think about it at the hardware level.”
Component firmware cyberattacks are certainly on the rise. Hands referred to the hacking of Jeep vehicles, which led parent company Fiat Chrysler to recall almost 1.4 million vehicles, and the Mirai botnet, which hijacked an army of internet of things (IoT) devices and used them to launch a huge distributed denial of service (DDoS) attack. Security risks such as equipment hijacking, design theft, data corruption and theft, counterfeiting and overbuilding still loom large, he said.
The new Lattice device, designated MachXO3D, can be used as a hardware root of trust (RoT), a device that can always be trusted to operate as expected. RoT functions, such as verifying the device’s own code and configuration, must be implemented in secure hardware. By checking the security of each stage of power-up, RoT devices form the first link in a chain of trust that protects the entire system.
Since Lattice’s MachXO3 family is widely used to implement system control functions in server boards, telecommunications equipment, and industrial equipment, devices in this family are often the first component to be powered up, and the last to power down.
“A lot of critical infrastructure boards have a control PLD that manages resets, manages the sequencing of the power supplies as the system powers up, then it also manages system shutdown. This makes them the ideal place to put the RoT capability,” said Hands. “The MachXO3D is the first small control-oriented FPGA that’s been developed to be compliant with [NIST’s Platform Firmware Resiliency] guidelines. It’s compliant itself, as a chip, and it also enables our customers to build systems that are compliant with the guidelines.”
The MachXO3D has functionality to protect non-volatile memory through access control, cryptographically detect and prevent booting from malicious code, and recover to the latest trusted firmware in case of corruption. Ports can be dynamically reconfigured to minimize the attack surface, and the instruction set and security scheme can be changed dynamically.
Additional security functions, such as the use of transport keys and secure erase packets, extend security throughout the supply chain and to the product’s end of life.
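As a rough illustration of the boot-time checking described above, the sketch below verifies a firmware image against a trusted reference before allowing it to run, and falls back to a known-good image on failure. It is a toy model only: the key, MAC scheme and function names are assumptions for illustration, not Lattice’s actual mechanism (real RoT silicon typically uses asymmetric signatures held in protected hardware).

```python
import hashlib
import hmac

# Hypothetical device key provisioned at manufacture (assumption: a real
# hardware root of trust would not expose a shared secret like this).
TRUSTED_KEY = b"device-secret-provisioned-at-manufacture"

def sign_firmware(image: bytes) -> bytes:
    """Produce a MAC over the firmware image (done once, at build time)."""
    return hmac.new(TRUSTED_KEY, image, hashlib.sha256).digest()

def verify_and_boot(image: bytes, mac: bytes, golden_image: bytes) -> bytes:
    """Boot `image` only if its MAC checks out; otherwise fall back to the
    last known-good (golden) image, mirroring the recovery behaviour the
    article describes."""
    expected = hmac.new(TRUSTED_KEY, image, hashlib.sha256).digest()
    if hmac.compare_digest(expected, mac):
        return image        # chain of trust holds: run this firmware
    return golden_image     # corruption detected: recover trusted firmware
```

In a chain of trust, each stage would repeat this check on the next stage’s code before handing over control.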
AI Stack Update
Lattice has also updated its SensAI hardware and software stack. Launched a year ago, SensAI is designed for running AI and machine learning on either the iCE40 UltraPlus FPGA, for operation down to 1mW in a 5.5mm² package, or the ECP5, typically consuming around 1W in 100mm² of board area.
Lattice’s AI offering is focused on low-power applications, such as small IoT/edge devices, that need machine learning functionality to process images or image streams. Compared to MCUs, MPUs, GPUs and other types of processors, Lattice FPGAs offer moderate performance in the 1mW–1W power consumption range, a combination very much in demand from applications that need small, low-cost solutions.
The SensAI stack includes IP cores to accelerate convolutional neural networks (CNNs), software tools and compilers linking popular AI frameworks to the company’s FPGA software, and reference designs. It is primarily used for image recognition and computer vision applications.
The updated stack improves performance by a factor of ten: processing a given image with a given network takes a tenth of the time it did with the previous stack.
“Some customers are choosing to process higher frames per second, since it can process more images,” said Hands. “Some are fine with frame rate but want more resolution. Some are fine with everything as it is but can use a smaller chip. Some people want to turn the clock speed down and use less power.”
Lattice used several techniques to get this substantial performance increase.
The first version of SensAI used 16-bit fixed-point numbers; the new version moves to 8-bit fixed point. That effectively doubles both the number of multipliers available on the chip and the usable on-chip memory, since twice as many intermediate data points can be stored.
“Why didn’t we do this before? Well, we also changed how the tool flow works,” said Hands.
Neural networks are typically trained on 32-bit floating-point data, then quantized to 16-bit when moved to the implementation phase. In the first version of SensAI, the error associated with quantizing to 16-bit was acceptable, said Hands, but quantizing to 8-bit increased the error too much.
“To solve that, we used different techniques for how to use the machine learning framework in the training phase, that means that we can train assuming 8-bit fixed point,” he said. “Then when we come out of the training framework into our compiler, you don’t need that quantization. We went from 16-bit to 8-bit, and we did it in a way that doesn’t impact error.”
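The precision trade-off described above can be illustrated with a toy fixed-point quantizer. The Q-format, bit widths and sample weights below are assumptions chosen for illustration; the article does not disclose Lattice’s actual number format or training flow.

```python
def quantize(x, frac_bits=6, word_bits=8):
    """Quantize a float to signed fixed point: `word_bits` total bits,
    `frac_bits` of them fractional (Q1.6 here -- an assumed format)."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))  # round to nearest, then saturate
    return q / scale                        # back to the represented real value

# Illustrative weights in [-1, 1), as a trained layer might hold.
weights = [0.731, -0.205, 0.044, -0.918]
q8  = [quantize(w, frac_bits=6,  word_bits=8)  for w in weights]
q16 = [quantize(w, frac_bits=14, word_bits=16) for w in weights]

# Worst-case rounding error: bounded by half a step, i.e. 1/128 for Q1.6.
err8  = max(abs(w - q) for w, q in zip(weights, q8))
err16 = max(abs(w - q) for w, q in zip(weights, q16))
```

The 8-bit error is roughly 256 times larger than the 16-bit error, which is why simply re-quantizing a float-trained network to 8 bits degrades accuracy; training with 8-bit quantization in the loop, as Hands describes, lets the network adapt to that coarser grid.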
Lattice has also changed the way the compiler sequences certain calculations. Some of the most common layers used in CNNs (Maxpool, ReLU and Convolution layers) can now be processed together in a single step, which reduces the number of intermediate data points that need to be stored by a factor of four.
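A minimal sketch of that kind of fusion, in Python for illustration only (Lattice’s compiler internals are not public): the fused version computes convolution, ReLU and max-pooling one output at a time, so only a pooling window’s worth of intermediate values is ever held, where the unfused version materialises a full buffer after every layer.

```python
def conv1d(x, k):
    """Valid-mode 1-D convolution (correlation form, for simplicity)."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n)) for i in range(len(x) - n + 1)]

def relu(v):
    return [max(0.0, a) for a in v]

def maxpool(v, size=2):
    return [max(v[i:i + size]) for i in range(0, len(v) - size + 1, size)]

def unfused(x, k):
    # Each stage materialises a full intermediate buffer.
    return maxpool(relu(conv1d(x, k)))

def fused(x, k, pool=2):
    # Compute conv -> ReLU -> pool per output window; only `pool`
    # intermediate values exist at any moment.
    n, out = len(k), []
    for i in range(0, len(x) - n + 1 - (pool - 1), pool):
        window = [max(0.0, sum(x[i + p + j] * k[j] for j in range(n)))
                  for p in range(pool)]
        out.append(max(window))
    return out
```

Both paths produce identical results; the saving is purely in intermediate storage, which is the resource the compiler change targets.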
The new version of SensAI incorporates a number of reference designs, including a power optimized human presence detection system and a performance optimized person counting system. There is also an ecosystem of partners available to help with FPGA design, machine learning design, or both, said Hands.