The TinyML Platform works with the high-performance, Linux-capable Sapphire RISC-V processor, delivering accelerated AI capability to the Sapphire core.
Efinix has released its TinyML Platform for artificial intelligence (AI) acceleration on its innovative range of FPGAs. Comprising a model profiler and a graphical user interface for accelerator selection, the TinyML Platform works with the high-performance, Linux-capable Sapphire RISC-V processor and delivers accelerated AI capability to the quad-core-capable Sapphire core.
“We are seeing an increasing trend to drive AI workloads to the far edge where they have immediate access to raw data in an environment where it is still contextually relevant. Providing sufficient compute for these AI algorithms in power- and space-constrained environments is a huge challenge,” said Mark Oliver, Efinix VP of Marketing. “Our TinyML Platform harnesses the potential of our high-performance, embedded RISC-V core combined with the efficiency of the Efinix FPGA architecture and delivers them intuitively to the designer, speeding time to market and lowering the barrier to AI adoption at the edge.”
The TinyML Platform leverages the RISC-V custom instruction capability to deliver a scalable library of accelerated instructions tailored to the requirements of the chosen TensorFlow Lite model. Instructions vary in complexity and size and can be selected through a graphical user interface to deliver the resource/acceleration tradeoff needed for the end user’s application.
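To make the resource/acceleration tradeoff concrete, the sketch below models the kind of choice such a selection GUI presents: each candidate accelerated instruction consumes FPGA logic resources (LUTs) and buys some speedup, and the designer picks a set that fits the budget. This is purely illustrative — the operator names, LUT costs, and speedup figures are hypothetical and are not Efinix data, and the greedy heuristic is a stand-in for whatever the actual tool does.

```python
# Illustrative sketch only: a toy model of choosing which TensorFlow Lite
# operators to accelerate with custom instructions under an FPGA LUT budget.
# All numbers and names below are invented for the example.

# Candidate accelerated instructions: (name, LUT cost, relative speedup gain)
CANDIDATES = [
    ("conv2d_mac", 4000, 9.0),
    ("depthwise_conv", 2500, 5.0),
    ("fully_connected", 1500, 3.0),
    ("add_relu", 500, 1.5),
]

def select_accelerators(lut_budget):
    """Greedily pick accelerators by speedup-per-LUT until the budget is spent."""
    chosen = []
    remaining = lut_budget
    # Best gain-per-LUT first; stable sort keeps ties in listed order.
    for name, cost, gain in sorted(CANDIDATES, key=lambda c: c[2] / c[1], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen, lut_budget - remaining  # selected ops and LUTs consumed

print(select_accelerators(5000))
```

A small FPGA target might only afford the lightweight operators, while a larger budget admits the big convolution engine — the same tradeoff the platform's GUI exposes interactively.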
Designers can further shorten time to market by using the Edge Vision SoC Framework as the starting point for AI model implementations. Both the Edge Vision SoC Framework and the TinyML Platform are available to the open-source community on the Efinix GitHub page.