Intel's exascale supercomputer project may be the first to use one of three chip-stacking techniques Intel disclosed on its roadmap.
SAN FRANCISCO — Intel gave a first glimpse of three packaging technologies on its roadmap at a gathering on the sidelines of Semicon West here. The most interesting of the three may debut in the exascale supercomputer Intel is building for the U.S. Department of Energy.
The trio of techniques aims to give Intel’s processors an edge at a time when advances in conventional silicon scaling are slowing and getting more expensive. They arrive as rival TSMC expands its portfolio of chip stacks and two consortia hope to set standards in the area.
MDIO is the next generation of Intel’s AIB, a physical interface for stacking chiplets that Intel released last year as part of a DARPA program. Intel claims MDIO is on par with advances rival TSMC announced last month. It will use the interface in chip stacks starting sometime in 2020 but has not decided whether it will make the spec open.
The most interesting of the three new techniques is Co-EMIB. A combination of Intel’s latest 2D and 3D stacking techniques, it likely will see its first use as a way to link CPU and GPU cores in the Aurora supercomputer Intel and Cray won a $500 million contract to deliver before the end of 2021.
Prototype Co-EMIB wafers and devices shown here stacked 18 small die on one large one using the Foveros 3D technique Intel announced in December. Two of the devices were then connected using four of its Embedded Multi-die Interconnect Bridge (EMIB) links at 45- and 55-micron bump pitches.
Intel has shipped as many as a million devices using EMIB in Stratix 10 FPGAs and Kaby Lake G, an integrated CPU/GPU module. Next year it will ship Lakefield, an integrated notebook processor slated to be its first chip using Foveros.
Currently, Intel suffers transport delays making the face-to-face Foveros stacks because the process is split between a front-end line in Oregon and a back-end line in Arizona. Once it moves the process to one location, turnaround times should be about two weeks.
The third new option is so far just a research project. Omni-Directional Interconnect (ODI) is a 70-micron-thick vertical link for delivering power to a chip.
Chip stacks are widely seen as one of the most significant routes to delivering larger, faster devices. Intel rival TSMC has been using various forms of them for years for everything from smartphone SoCs to high-end FPGAs, GPUs and communications ASICs.
Defining “an Ethernet for chiplets” is the most important goal of DARPA’s CHIPS project, its program manager said last year. Separately, the Open Compute Project launched an effort recently to define open standards for chiplets, but they are still in an early stage.
Intel’s news shows it has a broad portfolio of techniques in development. None of them appear to move the industry closer to a standard, but they will advance Intel’s products.
For example, Intel aims to deliver two full-reticle Cascade Lake processors in a single package. “It will not be uncommon in a year or so to see 2x reticle die in a package without compromising power or latency,” said Ram Viswanath, a vice president of Intel’s assembly and test group.
The new techniques will help shrink packaging interconnects down from 50 microns today and drive up their density to tens of thousands of I/Os per mm² from a couple hundred today, he said.
Intel sees several hurdles ahead. “Somewhere between 20-35 microns we will need to transition from solder to non-solder-based interconnects,” said Ravi Mahajan, an Intel fellow.
An even larger challenge is yield, as low as 20% on some chip stacks. Intel designed a new module for its homegrown chip tester that better determines how individual die perform in a module, pushing yields above 70% for an eight-chip stack.
“We can make products others can’t thanks to our known-good-die capabilities,” said Babak Sabi, a 35-year Intel veteran who runs the company’s packaging division.