The ARM library contains low-level building blocks for imaging, vision and machine learning.
At Mobile World Congress, ARM demonstrated a free library of popular machine learning and computer vision routines optimised to run on its CPUs and GPUs. The library's low-level building blocks for imaging, vision and machine learning will be available as open source software by the end of March.
The library covers common functions for machine learning frameworks, including neural networks, colour manipulation, feature detection, image reshaping and General Matrix-to-Matrix Multiplication (GEMM), which lies at the heart of implementing convolutional neural networks on maths-capable processors.
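To see why GEMM is central to convolutional networks, note that a convolution can be rewritten as a single matrix multiply by unrolling image patches into columns (the "im2col" trick). The sketch below is illustrative NumPy, not the ARM Compute Library's API; the `im2col` helper and the toy image are assumptions for the example.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a 2-D image into a column (valid padding, stride 1)."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, oh * ow))
    idx = 0
    for i in range(oh):
        for j in range(ow):
            cols[:, idx] = x[i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols, (oh, ow)

# A single-channel convolution expressed as one GEMM call:
img = np.arange(25.0).reshape(5, 5)   # toy 5x5 image
kernel = np.ones((3, 3)) / 9.0        # 3x3 box (averaging) filter
cols, (oh, ow) = im2col(img, 3, 3)
out = (kernel.ravel() @ cols).reshape(oh, ow)   # GEMM: (1 x 9) @ (9 x 9)
```

With many filters and channels the left-hand matrix grows, but the whole layer remains one GEMM, which is why a well-tuned GEMM routine dominates CNN performance.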
Figure 1: The "show-and-tell" on the booth was an application running on a standard mobile phone that attempts to estimate the calorific content of food in a picture, such as popcorn, chocolate or seeds.
The demonstration was prepared by ThunderView, a division of Chinese software developer Thunder Software Technology, otherwise known as ThunderSoft, which is developing the calorie counting application.
The demo operates by cutting the background away from the foodstuff in the image and then estimating the volume of the foodstuff. It uses image recognition on a trained neural network to decide what that foodstuff is, and then consults an on-device look-up table to find the per-volume calorific content of the identified food. Because everything is done on the smartphone, data bandwidth and latency from communicating with the cloud are not an issue, although battery life on the mobile phone might be.
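The pipeline described above can be sketched as follows. This is a hypothetical outline of the dataflow, not ThunderSoft's code: the function names, the stub return values and the calorie table are all assumptions made for illustration.

```python
# Illustrative per-volume calorific content (kcal per ml); values are placeholders.
CALORIES_PER_ML = {
    "popcorn": 0.4,
    "chocolate": 5.3,
    "seeds": 4.0,
}

def segment_food(image):
    """Cut the background away, keeping only the foodstuff pixels (stub)."""
    return image                 # real code would use vision routines here

def estimate_volume_ml(food_pixels):
    """Estimate the volume of the segmented foodstuff in millilitres (stub)."""
    return 100.0                 # real code would infer scale and depth

def classify_food(food_pixels):
    """Identify the foodstuff with a trained neural network (stub)."""
    return "popcorn"             # real code would run on-device inference

def estimate_calories(image):
    food = segment_food(image)
    volume = estimate_volume_ml(food)
    label = classify_food(food)
    # On-device look-up table: no cloud round trip needed.
    return label, volume * CALORIES_PER_ML[label]

label, kcal = estimate_calories(image=None)
```

Every stage runs locally, which is what makes the bandwidth and latency of a cloud connection irrelevant to the demo.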
A few questions arise here. The background cut-away routine seemed to underestimate the size of some items, such as the seeds, and it is unclear how accurately the volume of the material in shot can be determined. But as a demonstration of the ARM Compute Library, it served its purpose.