Cloud computing and big data analysis will require ever more powerful computing systems. We can only imagine what these high-performance computers might look like by 2035.
Today, our phones already run any app or mobile game and stream video smoothly, and our laptops comfortably support us at work and at home. Will we still need more powerful computer chips 10 to 20 years from now? Yes: the need for high-performance computing will only continue to grow.
In 2035, we will still be producing massive amounts of data, without deleting any of it. Think of the pictures and videos posted on social media – in whatever form – and of the huge volumes of data processed by companies such as Google, Facebook, and Amazon. Wearables and ingestibles will continuously monitor our health and combine that data with our genetic profile. Add to this the large amounts of data generated by emerging IoT applications such as autonomous cars, smart buildings, and smart cities. Most of this data will be processed and stored in the cloud, which can only be sustained with ever more capable computing and memory solutions.
Another clear driver is big data analysis. Applications such as drug and materials discovery, weather forecasting, and nuclear simulations will continue to demand ever more powerful computers to handle their ever-expanding data sets. Today, these applications run on supercomputers, in which hundreds of thousands of classical processors work in parallel on different parts of a single large problem. A drawback of these supercomputers is their gigantic power consumption: a typical machine can draw 15 to 20 megawatts.
For both cloud computing and supercomputing, we will need solutions that bring computing to a higher level of performance at the lowest possible energy consumption. At Imec, we investigate many avenues and try to provide realistic projections to guide industrial adoption of the proposed solutions.
New drivers for innovation
For more than 50 years, the road towards ever more capable computing has been guided by Moore’s Law, which describes the continuous reduction in the size and cost of the transistor. Every two years, the industry introduced a new technology node with more transistors per chip area, leading to ever better logic and memory chips. The cost reduction per transistor was mainly enabled by the shrinking device footprint.
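As a rough illustration of that cadence, the compounding implied by a two-year doubling period can be sketched in a few lines of Python (the function name and the chosen periods are ours, for illustration only):

```python
# Back-of-the-envelope arithmetic for Moore's Law:
# transistor density doubling roughly every two years.
def density_growth(years, doubling_period=2):
    """Factor by which transistor density grows over `years`."""
    return 2 ** (years / doubling_period)

print(density_growth(10))  # one decade: 2**5 = 32x
print(density_growth(50))  # five decades: 2**25, roughly 33.5 million x
```

Five decades of this cadence compound to a factor of about 33 million, which is why even a modest slowdown in the node cadence has such a visible effect on the industry roadmap.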
Nowadays, new technology nodes follow less frequently. As it becomes increasingly difficult to reduce the price per transistor through simple area scaling, other technology drivers are gaining prominence. The ability to deliver a given logic (or memory) function at the lowest possible power is growing in importance: increased performance and reduced power consumption are becoming the main drivers for innovation.
High-performance chips in a multitude of flavors
In the future, classical device scaling will no longer be the only instrument for higher performance. There is a clear trend towards increased device diversification and circuit customization. In the past, one and the same transistor architecture was used to enable all functionalities on a chip. Today, five to seven transistor options co-exist in the same technology node, each with its own threshold and performance levels. This enables applications ranging from ultralow power in the IoT space, to mobile, to high-performance computing.
In the high-performance realm, we expect chip-level diversification and an increased use of multiple small chips in 2.5D and 3D packaging. As a result, more targeted CPUs will become available, leading to the development of even more custom-made chips. We will even see different devices integrated either on the same chip or on multiple chips working together, enabled by system-technology co-optimization.
Our expectation is that in the 2035 timeframe, technology nodes could include not only silicon-based transistors, but also other materials and possibly ‘beyond-CMOS’ devices that are co-integrated with classical CMOS-based solutions. The alternative devices could be used alongside CMOS for specific functions. Imec, for example, is developing majority gates based on spintronic devices. These devices promise a decrease in power consumption by up to two orders of magnitude – but only for specific logic functions. We are also developing devices that have 2D materials in their conduction channel, which could be implemented for extreme device scaling or used as transistors in the back-end.
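To make the majority-gate idea concrete, here is a small, technology-agnostic sketch (not Imec’s spintronic implementation) of the three-input majority function, and of the standard trick of pinning one input to turn it into an AND or an OR gate:

```python
def maj(a, b, c):
    """Three-input majority gate: output is 1 if at least two inputs are 1."""
    return int(a + b + c >= 2)

# Pinning one input specializes the majority gate into familiar Boolean gates:
def and_gate(a, b):
    return maj(a, b, 0)  # MAJ(a, b, 0) == a AND b

def or_gate(a, b):
    return maj(a, b, 1)  # MAJ(a, b, 1) == a OR b

# Exhaustive check over all input combinations:
for a in (0, 1):
    for b in (0, 1):
        assert and_gate(a, b) == (a & b)
        assert or_gate(a, b) == (a | b)
```

This is why a physical device that natively computes majority is attractive: together with inversion it forms a complete logic family, so a single energy-efficient primitive can cover many logic functions.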
On the eve of quantum computing
Some applications are too complex to be solved with classical computing paradigms. Quantum computing can come to the rescue. In a quantum computer, information is manipulated in a fundamentally different way than in a classical computer. Traditional computers operate with bits – which are either zero or one – and operations on these bits are performed sequentially. Quantum computers operate with qubits, which can be in a superposition of zero and one: upon measurement, a qubit yields zero with a certain probability and one with a certain probability.
Add to this entanglement – qubits that are correlated and act in concert – and the number of states in a quantum register grows exponentially with the number of qubits: n qubits span 2^n states. Operations can be performed on all these states simultaneously, resulting in an immense capacity for parallelization. Quantum computing thus promises to tackle those big problems that are too difficult to solve on a classical computer.
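The exponential growth of the state space is easy to see in a simple classical simulation (our own illustration, using NumPy): an n-qubit register is described by a vector of 2^n complex amplitudes, and the squared magnitudes of those amplitudes are the measurement probabilities.

```python
import numpy as np

def uniform_superposition(n):
    """State vector with equal amplitude on all 2**n basis states of n qubits."""
    dim = 2 ** n
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

state = uniform_superposition(10)
print(len(state))                  # 1024 amplitudes for just 10 qubits
print(np.sum(np.abs(state) ** 2))  # measurement probabilities sum to 1
```

Doubling the register from 10 to 20 qubits takes the state vector from about a thousand to about a million amplitudes, which is exactly why simulating even modest quantum registers quickly overwhelms classical machines.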
Quantum computing can go beyond the capabilities of classical computing, but it will not be the ‘holy grail’ that solves every problem. It will only be useful for certain applications – for example, problems that take a very large number of variables as input.
An example of a possible application for quantum computing is materials research – for instance, the search for superconducting materials that can replace copper in the rotors of windmills. Today’s windmills contain tons of copper in the windings of the coils in their generators. This copper contributes significantly to the weight of the windmill head, limiting how far the span of the blades can grow. In Europe, supercomputing time is already being spent on finding new superconducting materials that can replace copper.
This search could be significantly advanced if a quantum processor could be used as a building block of these supercomputing systems. Besides materials discovery, there are many other useful applications, including weather and climate modelling, space exploration, fundamental science, the modelling of economic or societal phenomena (where complex differential equations need to be solved), machine learning, and the development of personalized medicine.
My expectation is that by 2035, we will already see processors with a few thousand qubits, allowing us to run some algorithms and some small applications. In that timeframe, we will see materials discovery being done on a quantum computer. Ultimately, we will need to embed the growing power of quantum processing into existing computing paradigms to enable the required ‘quantum leap’ in performance...
- Iuliana Radu is program director at Imec, where she leads the beyond-CMOS and quantum computing activities. Prior to joining the logic program at Imec in 2013, she was a Marie Curie and FWO fellow at KU Leuven and Imec. Her work at Imec and KU Leuven includes devices using the metal-to-insulator transition, ionic and electronic transport in functional oxides, and devices with graphene and other 2D materials. Iuliana received a PhD in physics from MIT in 2009, where she worked on the fractional quantum Hall effect and searched for non-abelian quasiparticles.