Facebook Shows Smartphone AI

Article By : Rick Merritt

Facebook demoed neural network inference in smartphones at its @Scale event, where a researcher described work on DNA storage.

SAN JOSE, Calif. — Facebook is using OpenGL to deploy machine-learning-based visual effects to smartphones. The open API is delivering solid performance across iOS and Android phones; however, a lead developer called for a move to the more modern Vulkan or Metal APIs to ease mobile graphics programming.

That was one of several news nuggets from @Scale, the social network’s event targeting software engineers. In other developments, an exhibitor showed a copper alternative to solder, a startup demoed its 16-lens camera, and an academic described progress using DNA for computer storage.

Facebook runs the event in various cities to spawn a collaborative ecosystem using open-source software to solve some of the biggest issues plaguing big data centers.

At one booth, the company showed image recognition and special-effects filters running on smartphone cameras at rates from 30 to 45 frames/second using OpenGL-based inference code that it developed in-house. By contrast, Qualcomm’s new neural-networking SDK for Snapdragon delivers lower frame rates.
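As a rough illustration of what those frame rates imply, everything the camera pipeline does each frame — capture, inference, and effect rendering — must fit inside the per-frame time budget. A minimal sketch of that arithmetic (the rates below are simply the figures quoted above, not Facebook's internal performance data):

```python
def frame_budget_ms(fps: float) -> float:
    """Per-frame time budget in milliseconds at a given frame rate."""
    return 1000.0 / fps

# At the demoed rates, inference plus rendering must finish
# within this many milliseconds on every frame.
for fps in (30, 45):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# 30 fps -> 33.3 ms per frame
# 45 fps -> 22.2 ms per frame
```

So sustaining 45 frames/s leaves only about 22 ms per frame for the entire on-device neural-network pass, which is why the choice of graphics API matters for this workload.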

Facebook expects to deploy generations of OpenGL-based inference code on smartphones for at least two or three years. It first showed machine-learning inference on handsets at an event in April.

Developer Fabio Riccardi shows Facebook’s inference software running on his iPhone. (Images: EE Times)

OpenGL is widely used in handsets, but it is a relatively old and hard-to-program API. The newer Khronos Vulkan or Apple Metal APIs deliver higher performance and ease of programming, but so far, they are only used on a few high-end phones.

Although Facebook is not using the Qualcomm neural-net SDK for its smartphone AI service, the company encouraged the more than 3,000 developers attending the event to see a talk given on it.

“Being able to scale and run on the [consumer] device is really important,” said Jay Parikh, head of engineering and infrastructure at Facebook, noting that the SDK gave Snapdragon chips a five-fold boost on some machine-learning tasks.

Separately, Facebook announced that it now updates its live code about every two hours with tens to hundreds of changes. Google used the event to talk about its language translation services, as well as another system it runs that contains a whopping two billion lines of code.


“We are trying to build a community to share best practices,” said Parikh of @Scale.
