NXP Seeks ‘Edge’ vs. Intel, Cavium

Article By : Junko Yoshida, EE Times

TOKYO — As the lines begin to blur between cloud and edge computing, NXP Semiconductors is racing to offer the highest-performance SoC in its Layerscape family.

The new chip, the LX2160A, can take over heavy-duty computing that would otherwise be done at data centers in the cloud, enabling the middle of the network — typically, service operators — to run network virtualization and high-performance network applications on equipment such as base stations.

Toby Foster, senior product manager for NXP, told us that his team developed the new high-performance chip with three goals in mind: first, to enable new types of virtualization in the network; second, to reach new levels of integration and performance at low power with next-generation I/Os; and third, to double the scale of virtual network functions and crypto compared with NXP’s previous Layerscape SoC, the LS2088A, while maintaining low power consumption.

Specifically, the LX2160A features 16 high-performance ARM Cortex-A72 cores running at more than 2 GHz within a 20- to 30-W power envelope. It supports both 100-Gbit/s Ethernet and the PCIe Gen4 interconnect standard.

Why edge computing?
The industry, including NXP, tends to view edge processing as the driver for the next phase of networking, computing and IoT infrastructure growth. 
By moving workloads from the cloud to the edge, operators will see lower latency while gaining resiliency and bandwidth reliability, explained Foster.

Bob Wheeler, principal analyst responsible for networking at the Linley Group, told us, “In some cases, such as content delivery networks, the transition from the cloud to the edge is already happening.” He predicted, “Mobile edge computing will primarily happen in conjunction with 5G rollouts starting in 2019.”

 

Race to virtualize the network (Source: NXP)

The race to virtualize the network is converging on the middle from two directions: the x86 camp is moving down from the data center, while the ARM camp is moving up from the edge.

Asked about NXP’s competition in this race, Wheeler said, “Key competitors include Cavium’s Octeon TX in the ARM camp and Intel’s Xeon D from the x86 side.” Although NXP’s LX2 promises “superior power efficiency and more modern network interfaces (25/50/100G Ethernet),” no one will get to see how it performs until the chip starts sampling in the first quarter of next year; NXP’s competitors are already in production, he noted.

But what sort of processing do we mean when we talk about offloading work to the middle of the network? NXP’s Foster explained, “Take an example of cameras installed in base stations. The base stations can stream video (captured by such cameras) back to the data center for processing — doing facial recognition of missing children, for instance. But the base stations can also do certain processing locally, and then send only the metadata back to the data center.”
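
To make that division of labor concrete, here is a minimal Python sketch of the pattern Foster describes. It is not NXP software, and every name in it (detect_faces, send_to_datacenter) is a hypothetical placeholder; the point is simply that the heavy per-frame compute stays at the base station, while only a small metadata record travels back to the data center.

    import json
    import time
    from typing import Any

    def detect_faces(frame: Any) -> list:
        # Stand-in for on-device inference; a real base station would run a
        # vision model here rather than return a canned result.
        return [{"id": "watchlist-match", "score": 0.97}]

    def send_to_datacenter(payload: bytes) -> None:
        # Stand-in for the uplink; only kilobytes of metadata leave the site.
        print("uplinking", len(payload), "bytes of metadata")

    def process_frame(frame: Any) -> None:
        matches = detect_faces(frame)   # heavy compute stays at the edge
        if matches:                     # raw video never leaves the base station
            metadata = {"timestamp": time.time(), "matches": matches}
            send_to_datacenter(json.dumps(metadata).encode())

    if __name__ == "__main__":
        process_frame(frame=None)       # a real caller would pass a camera frame

In this scheme, a metadata record of a few hundred bytes replaces a continuous video stream on the backhaul, which is the bandwidth and latency argument behind edge processing.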

As Joe Byrne, a senior strategic marketing manager for NXP’s Digital Networking Group, noted, the lines that used to separate edge and cloud computing are blurring. This also means that an SoC used in the middle of the network needs to be equipped with the right class of performance to offer secure virtualized services.

Beyond its 16 Cortex-A72 cores and a rich set of I/Os, Foster noted, the LX2160A features an 8-MB platform cache to hold packets on-chip. “This is much more economical than adding two DDR memory controllers,” which can be costly.

What’s inside the LX2160A (Source: NXP)

Competition
The biggest difference between the NXP and Intel solutions is the level of integration. Intel’s Xeon D-1548 SoC requires two separate companion chips — one for Ethernet, another for security acceleration — whereas NXP integrates both functions on a single chip.

Although both NXP and Cavium belong to the ARM camp, Foster said that NXP’s SoC delivers higher performance at lower power than Cavium’s, thanks to its use of a FinFET process.

— Junko Yoshida, Chief International Correspondent, EE Times

 

