Image sensors cover an expanded field of applications, ranging from smartphones to machine vision and automotive. Soon the day will come when artificial intelligence (AI) — both AI sensing and AI processing — takes place very close to the source of sensor data, if not directly at the sensor level.
The CMOS image sensor (CIS) market follows a continuous growth trend. According to market research firm Yole Développement, CIS market revenue reached $19.3 billion in 2019, neared $21 billion in 2020, and is expected to reach $27 billion in 2025. It will also represent 5.1% of global semiconductor sales in 2021, “becoming a very significant sub-segment,” said Pierre Cambou, principal analyst at Yole Développement, in a session at SEMI’s recent MEMS & Imaging Sensors Forum, part of the Technology Unites Global Summit.
Looking at the different applications, CMOS image sensors started in webcams and mobile phones, low-end but high-volume markets, and have gradually reached other segments such as automotive, broadcast, medical, and industrial. In 2019, for reference, the mobile segment represented almost 70% of CIS revenue, while consumer photography accounted for about 5%, computing (i.e., PCs and tablets) for almost 8%, automotive for 7%, and industry for around 3%. Last year, Cambou indicated, all these segments grew by about 25%, except for consumer photography.
The next disruption
Looking back at the history of image sensors and photography, Cambou identified three major disruptions. The first was the advent of photography itself, an era that ended in the 1970s, when digital photography emerged and became a consumer application. By the 2010s, mobile photography had become dominant, thanks to CMOS image sensor technology.
What’s next? “The CIS industry has already survived three disruptions, and the next one will be sensing,” said Cambou. “We are expecting a new disruption powered by sensing applications serving mainly robotics and virtual reality applications.”
The enabling technology for CIS has been wafer stacking, and it is opening up new opportunities for embedded computing, Cambou said. For instance, Sony’s Xperia XZ camera used a triple-stack sensor with through-silicon vias (TSVs), including a 32nm DRAM wafer “in order to provide 1,000 frames per second.” In 2020, Sony introduced an in-pixel hybrid-stack sensor. “Instead of the TSV, there are copper-to-copper connections. This enabling technology will be very important in the future, especially for embedded computing.”
The next wave of innovation in imaging will come from AI. While some of the innovation currently takes place either in the cloud or in the central APU, the trend is to bring computing closer to the sensor and embed intelligence within or near the sensor. “There are two paths for artificial intelligence,” Cambou said. “One path is to go in the cloud, using 5G typically. That’s happening with voice-AI devices. The other path is to have the computing program on board the image sensor. This is more for high-privacy and critical applications like automotive.”
For robotic vehicles, the sensor data flow is limited by downstream computing power. While previous generations were in the range of several hundred TOPS, the latest robotic vehicles are in the range of a thousand TOPS, yet sensor data flow has grown only modestly, a pattern Yole calls “More than Moore’s law.” Computing power increases with the square of the data, “so there will never be enough performance; there will be a demand for performance on the computing side,” said Cambou. “For the sensing [part], it’s going to be difficult to bring more cameras. Embedded computing will avoid compute saturation.” Real-time and critical applications will require embedded computing, either on the device or in the image sensor.
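The quadratic relationship Cambou describes can be illustrated with a toy calculation. The scaling constant `k` below is hypothetical, chosen only so the numbers land near the TOPS figures quoted above; it is not a figure from Yole:

```python
def required_compute_tops(data_rate_gbps, k=2.5):
    """Illustrative 'More than Moore' scaling: assume downstream compute
    demand grows with the square of the sensor data rate.
    k is a hypothetical calibration constant, not a published figure."""
    return k * data_rate_gbps ** 2

# Doubling the sensor data flow quadruples the compute demand:
for rate in (10, 20, 40):
    print(rate, "Gb/s ->", required_compute_tops(rate), "TOPS")
```

Under this assumed scaling, going from 10 to 20 Gb/s of sensor data already pushes compute demand from a few hundred to a thousand TOPS, which is why adding more cameras quickly saturates the central processor.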
The addition of more cameras—and, with them, more data—means “the computing power explodes,” Cambou once told us. One solution is improving data quality. “If you really want to solve autonomy, you will need more diversity quickly,” the analyst said. “You will use LiDARs, thermal cameras, and hyperspectral cameras. I think car companies should also consider event-based cameras.”
New technologies are indeed emerging, including neuromorphic and quantum sensing.
In early 2020, Prophesee and Sony presented a co-developed stacked event-based vision sensor that detects changes in the luminance of each pixel asynchronously and outputs data, including coordinates and time, only for the pixels where a change is detected. This enables high-efficiency, high-speed, low-latency data output, the companies claimed. At the time of the announcement, Cambou told us this partnership opened “a Pandora’s box” with new stacking potentialities. “We are at the beginning of understanding what we can stack. It is fantastic because we have opened the triple stack with the memory. And the next step, we know what it is. It’s AI.”
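The output format described above, coordinates and a timestamp only for pixels whose luminance changed, can be mimicked with a frame-based sketch. This illustrates the data format only, not Prophesee’s or Sony’s actual asynchronous pixel circuit; the log-luminance contrast threshold is an assumed parameter:

```python
import numpy as np

def generate_events(prev_frame, curr_frame, t, threshold=0.2):
    """Emit (x, y, t, polarity) tuples only for pixels whose log-luminance
    change exceeds the contrast threshold. Real event sensors do this
    asynchronously per pixel; this sketch only mimics the output format."""
    log_prev = np.log1p(prev_frame.astype(np.float64))
    log_curr = np.log1p(curr_frame.astype(np.float64))
    diff = log_curr - log_prev
    ys, xs = np.where(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]

# A static scene with one pixel that brightens: only that pixel emits an event.
prev = np.full((4, 4), 10.0)
curr = prev.copy()
curr[1, 2] = 30.0
events = generate_events(prev, curr, t=0.001)
print(events)  # [(2, 1, 0.001, 1)]
```

The sparsity is the point: a static background produces no output at all, which is where the efficiency and latency advantages come from.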
In his presentation, Cambou also cited ETH Zurich’s UltimateSLAM, a visual-inertial odometry pipeline that combines events, images, and IMU to yield robust and accurate state estimation in HDR and high-speed scenarios. Another example is Gigajot Technology’s Quanta Image Sensors (QIS). QIS are single-photon image sensors with photon counting capabilities, and the California-based startup claims dynamic scenes can be reconstructed from a burst of frames at a photon level of 1 photon per pixel per frame.
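Gigajot’s photon-counting claim rests on standard Poisson statistics: if each 1-bit frame records whether at least one photon arrived, then P(bit = 1) = 1 − exp(−λ), and the hit rate over a burst of frames can be inverted to estimate the per-pixel flux λ. A minimal simulation of that idea (the burst length and scene values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct_intensity(binary_frames):
    """Estimate per-pixel photon flux from a burst of 1-bit frames.
    Each frame records whether >= 1 photon arrived (Poisson arrivals),
    so P(bit=1) = 1 - exp(-lam); the estimator inverts the hit rate."""
    rate = binary_frames.mean(axis=0)
    rate = np.clip(rate, 0, 1 - 1e-6)  # avoid log(0) at saturated pixels
    return -np.log1p(-rate)

# Simulate a burst over a 2x2 scene with flux near 1 photon/pixel/frame.
lam_true = np.array([[0.5, 1.0], [1.5, 2.0]])
frames = rng.poisson(lam_true, size=(5000, 2, 2)) > 0  # 1-bit "jots"
lam_est = reconstruct_intensity(frames.astype(float))
print(np.round(lam_est, 2))
```

With enough frames the estimate converges to the true flux, which is the sense in which a dynamic scene can be reconstructed from a burst of single-photon frames.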
“Since the More than Moore’s law means that computing power will never be enough, there will be some innovation scenarios leading to the new generation of image sensors, which will benefit from the embedded computing trend,” he concluded.
This article was originally published on EE Times Europe.

Anne-Françoise Pelé is editor-in-chief of eetimes.eu and EE Times Europe.