Cadence has rolled out the Q7, a new member of its Tensilica Vision DSP family, aimed at vision and AI applications in mobile, robotics, drones, AR/VR and cars.
Vision and AI applications in smartphones are evolving as rambunctiously as AI in drones, AR/VR (augmented & virtual reality), robotics and surveillance markets.
System designers are no longer just talking about adding face detection or face recognition. They’ve been there and done that. Increasingly listed as “must-have” new features in vision and AI apps are depth sensing, image stitching, de-warping, eye-tracking, HDR (high-dynamic range) processing and simultaneous localization and mapping (SLAM).
Against this backdrop of ever-expanding vision and AI applications, Cadence Design Systems, Inc., this week rolled out its Q7, a new member of its Tensilica Vision DSP product family.
Cadence has been on a roll with Tensilica. DSPs in the family have already scored big design wins from high-profile chip designers such as HiSilicon and MediaTek. With the new Q7, the team focused on beefing up computational power and further improving the instruction set architecture (ISA).
The new Q7 DSP delivers up to 1.82 tera operations per second (TOPS). The company claims it provides “up to 2X greater AI and floating-point performance in the same area compared to its predecessor, the Vision Q6 DSP.”
Pulin Desai, product marketing director for the Tensilica Vision DSP product line, told EE Times that the advantage of the Tensilica Vision DSPs lies in their ability to create "custom-like" optimum instruction sets for a specific market.
"We now have a large number of instruction sets, each tailored to a different vision/AI market segment," he said.
Desai also stressed that the Vision Q7 is Tensilica's sixth-generation vision and AI DSP. The team has continued to improve DSP performance while making the Q7 "a superset of Q6." The result is that the Q7 preserves customers' existing software investment, assuring a smooth migration from Vision Q6 or P6 DSPs.
Notably, there is no universal AI solution in the sprawling market.
Consider AR Map, a Google Maps feature recently announced at the Google I/O conference. With an AR mode on Google Maps, for example, users can not only see their route but also determine which direction to start walking.
The industry expects more applications to emerge that require a device to localize (find the position of an object or sensor relative to its surroundings) while simultaneously mapping the layout of those surroundings. This makes so-called SLAM an important feature, built on a range of algorithms that localize and map at the same time.
As Desai explained, blocks necessary for SLAM are based on classical computer vision approaches, and are typically implemented on CPUs or GPUs.
However, SLAM "heavily relies on a host of linear algebra and matrix operations," Desai noted, which makes it computationally demanding.
He believes this is where DSPs such as Tensilica's Q7 could come into play, partly because SLAM often needs real-time response. Battery-operated small robots and drones, meanwhile, must be frugal with power and can't accommodate GPUs that consume too much of it. "You can use DSP to offload the heavy computational task of SLAM from CPUs or GPUs," Desai noted.
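To make the linear-algebra point concrete, here is a minimal, illustrative sketch (not Cadence code; all names are my own) of the kind of dense matrix math a SLAM pipeline iterates over constantly: the least-squares rigid-alignment solve (the Kabsch/Procrustes step) at the heart of ICP-style scan matching.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) aligning
    2-D point set `src` onto `dst` -- the Kabsch/Procrustes solve used in
    the inner loop of ICP-style scan matching in many SLAM pipelines."""
    src_c = src - src.mean(axis=0)          # center both point clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)             # SVD yields the optimal rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# A toy "scan": rotate and shift some landmarks, then recover the motion.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -1.2])
src = np.random.default_rng(0).random((50, 2))
dst = src @ R_true.T + t_true
R, t = estimate_rigid_transform(src, dst)
```

A full SLAM stack repeats solves like this (plus bundle adjustment and feature matching) at frame rate, which is why offloading the matrix work from the CPU matters.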
Mike Demler, senior analyst at The Linley Group, said Tensilica’s new Q7 is the first DSP he's come across optimized for SLAM.
So, what separates Tensilica’s Vision DSP family (including its previous DSP core offerings such as P6 and Q6) from its competitors?
Demler noted, “At this point, the DSP-IP business is pretty much down to just Cadence and Ceva. Synopsys includes a DSP in their embedded-vision cores, but they don’t offer standalone DSPs.”
He believes that there are more similarities than differences in offerings by Cadence and Ceva for computer-vision and AI. However, “the biggest differences are in the target applications,” he pointed out.
On one hand, Cadence has Vision P5, P6, Q6, Q7 and HiFi-DSPs and the DNA 100. On the other, Ceva has the XM series, TeakLite, and NeuPro. “Both offer DSPs for wireless modems, but Ceva has focused more on keeping at the leading edge of that market,” said Demler. “So, bottom line – strengths depend on your application. This new Q7 is the first I’ve heard of to be optimized for SLAM.”
Perhaps the biggest selling point for Tensilica's Vision DSP family is its flexibility, allowing a building-block approach to AI and vision applications. It lets customers design multi-core solutions with Q7 DSPs, or simply combine a Q7 DSP with Tensilica's DNA 100 processor (a deep neural-network accelerator IP that Cadence announced last fall) to expand both vision and AI experiences.
Tensilica DSP in the automotive market?
It’s been well-publicized that Tensilica Vision P6 is designed into HiSilicon’s Kirin 970 mobile app processor. MediaTek also uses Vision P6 inside an AI processing unit (APU) on the company’s P60 mobile SoC.
But beyond the smartphone apps processor market, how’s Tensilica doing in automotive?
Tensilica’s Desai gave two examples: GEO Semiconductor (San Jose) and Vayyar Imaging. GEO Semiconductor uses the Vision P5 DSP in a rearview-camera video processor. “Intelligence inside the chip allows the rearview camera to detect what it is seeing,” explained Desai.
Vayyar Imaging selected a Tensilica Vision DSP for its advanced millimeter-wave 3D radar imager. Vayyar says its SoC covers imaging and radar bands from 3 GHz to 81 GHz with 72 transmitters and 72 receivers on a single chip. That enables the sensor to differentiate between objects and people, determine location while mapping large areas, and create an accurate 3D image of the environment, according to Vayyar.
Demler said that because Q7 specifically targets SLAM, he expects the new DSP to enjoy an advantage for applications in AR, robotics, and some automotive.
The Vision Q7 DSP supports AI applications developed in the Caffe, TensorFlow and TensorFlow Lite frameworks through the Tensilica Xtensa Neural Network Compiler (XNNC). Desai explained that it maps neural networks into highly optimized, high-performance executable code for the Vision Q7 DSP.
The Q7 also supports the Android Neural Networks API (NNAPI) for on-device AI acceleration in Android-powered devices. The software environment also features complete and optimized support for more than 1,700 OpenCV-based vision library functions, which the company says is key to enabling fast, high-level migration of existing vision applications.
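XNNC's internals aren't public, but one step that neural-network compilers targeting fixed-point DSPs commonly perform is weight quantization. The sketch below (illustrative only; the function names are my own, not part of any Cadence tool) shows symmetric int8 quantization, which maps float32 weights onto the 8-bit integer range that DSP MAC units execute efficiently.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float32 weights into
    [-127, 127] with a single scale factor, the kind of transform a
    fixed-point DSP back end typically requires."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checking."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(dequantize(q, scale) - w).max())
```

The reconstruction error is bounded by half a quantization step (scale / 2), which is why networks usually retain most of their accuracy after this transform.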
Noting the company’s ambition for further inroads into automotive, Cadence stressed that its development tools and libraries are designed to help SoC vendors achieve ISO 26262 Automotive Safety Integrity Level D (ASIL D) certification.