Artificial intelligence (AI) is one of the great hopes for the future of humanity, and we are already using it without even realizing it. According to some scientists, it could also represent a great danger, capable of supplanting humans as the dominant species. Humans cannot travel through time, for example, but if AI were somehow able to untangle certain quantum knots, a person might one day find himself arguing with his own alter ego, with consequences hard to imagine.
I wanted to involve Kris Ardis, executive director, Microcontrollers and Security BU, and David Dwelley, vice president and chief technology officer at Maxim Integrated, in this talk about AI.
So what prevents AI, which can already drive cars, diagnose diseases, and recognize objects and people in photos, from taking the next step and becoming truly intelligent? Creativity and common sense may be two of the missing ingredients.
What is AI?
The aim of AI is to create simulated human intelligence in machines: systems that “think” like humans and imitate the way a person acts. The fundamental characteristic of AI is its ability to analyze data and maximize the chances of achieving a set goal.
“The science fiction movies will tell us that AI are these giant computers in some secretive location and they’re taking over the world,” said Dwelley. He added, “That’s not what AI is. I suppose it makes great science fiction, but it’s not the way the technology industry sees AI. And really, from IoT-focused AI, it really is just a specialized algorithm that does really sophisticated pattern matching. It looks for patterns and recognizes them and then classifies them. It’s also a class of semiconductor devices that have specific hardware to accelerate that algorithm. Obviously, Maxim is in the business of making chips, and we make hardware accelerators specially made to run those sorts of algorithms.”
In terms of intellectual abilities, the functioning of an AI system is mainly built on four functional levels:
Comprehension: By simulating the cognitive ability to correlate data and events, AI can recognize text, images, tables, videos, and voice, and extract information from them.
Reasoning: Systems can connect the multiple pieces of information collected, through precise mathematical algorithms and in an automated way.
Learning and Interaction: “The aspect of pattern recognition is actually inspired by how we think and how our brain works,” said Ardis. He added, “You train the neurons in your brain to recognize something, and you do the same thing with the neural network.”
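Ardis’s analogy can be made concrete with a toy example. The sketch below (my own illustration, not Maxim code) trains a single artificial “neuron” with the classic perceptron learning rule until it recognizes a simple pattern, the logical AND of two inputs:

```python
# Minimal sketch: training one artificial "neuron" (a perceptron)
# to recognize a pattern -- here, the logical AND of two inputs.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights and a bias with the classic perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            # Fire (output 1) if the weighted sum crosses the threshold.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # AND truth table
w, b = train_perceptron(samples, labels)
print([predict(w, b, x1, x2) for x1, x2 in samples])  # [0, 0, 0, 1]
```

Real networks stack thousands of such units in layers, but the training loop is the same idea: show examples, measure the error, and nudge the weights.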
Should we be afraid of AI? “If you watch too many scary movies, then maybe, but really, I don’t think it’s something to be afraid of. It’s just another kind of algorithm. If there is anything scary that comes from it, it’s what people choose to do with the algorithms,” said Dwelley.
Ardis added, “One important thing to keep in mind is that these things are trained for a task that comes with data and a set of expectations, i.e., expected results. So you could train anything to do the wrong thing. Is this AI? Or is that the person?” He continued, “Yes, in every fear scenario that comes to mind, I always end up thinking that that is the person. That’s not really the technology.”
“We are experimenting with using AI inside our own manufacturing flows, which are today very traditional, very human-driven algorithms. And we are running experiments now in our factories to see if we can use machine learning to improve or optimize, just to learn faster, to get improved results more quickly, because the system can watch many, many variables at once. It can draw a number of parallel conclusions at once and ultimately get a faster path to a better result. So we are experimenting with that both internally inside Maxim, as well as externally as products that we sell,” said Dwelley.
Security and Applications
AI is, at its core, a very capable pattern recognition system. If a cyberattack threatens an IT system, the number one challenge is to recognize that the attack is happening.
In general, AI and machine learning can be used in cyber defense to build valid “security assessment” tools. Unfortunately, the same tools are equally useful to attackers, for the massive collection of information about victims and the planning of a well-crafted attack.
AI training draws on billions of data points from structured and unstructured sources, such as blogs and news reports. Through deep learning and machine learning techniques, AI improves its ability to “recognize” cybersecurity threats and cyber risk.
“This is exactly the kind of thing a competent pattern recognition system can watch for. In essence, it recognizes when something is happening to it. A cyberattack is something that happens to the IT system. An artificial intelligence or machine learning algorithm is just as likely to notice that attack and be able to alert the system so that it can then take some other action to protect itself from the cyberattack,” said Dwelley.
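The idea Dwelley describes, recognizing that something unusual is happening to a system, can be sketched in a few lines. The example below is a deliberately simple statistical detector (my own illustration, not a Maxim product): it learns a baseline of normal traffic and flags observations that deviate too far from it. Production systems use far richer features and trained models, but the pattern is the same.

```python
import statistics

def fit_baseline(normal_samples):
    """Learn what "normal" looks like: the mean and spread of the metric."""
    mu = statistics.mean(normal_samples)
    sigma = statistics.stdev(normal_samples)
    return mu, sigma

def is_anomalous(value, mu, sigma, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    return abs(value - mu) > threshold * sigma

# Requests per second observed during normal operation.
normal_traffic = [98, 102, 100, 97, 103, 101, 99, 100, 102, 98]
mu, sigma = fit_baseline(normal_traffic)

print(is_anomalous(101, mu, sigma))  # typical load -> False
print(is_anomalous(450, mu, sigma))  # possible flood attack -> True
```

Once the detector raises a flag, the system can rate-limit, isolate, or alert, which is the “other action” Dwelley refers to.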
The learning algorithms that are used for pattern recognition tasks are much more widespread than most people realize.
“The postal service has been using it for decades to sort mail,” said Ardis. He added, “These are very primitive machine learning algorithms. But they’ve been in use since the 1990s, I imagine, or maybe even before. And so, the mail arrives at the letterbox through a machine learning algorithm. This has been going on for a long time. Google works with machine learning. Every time you type a search in Google’s search box, as practically the whole world does, there are machine learning algorithms behind that search, and they decide which results to return to you.” One of the reasons Google has been so successful is that its results are generally correct, or at least generally relevant.
“This is done with machine learning. Yet, without a machine learning algorithm, this would not work. The most obvious modern example is when you talk to your Alexa device. When you give it the word, machine learning algorithms recognize what you say, which is just looking for a pattern, and then they activate. There is a lot of AI working behind the scenes in everyday life today that many people are not aware of,” said Dwelley.
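Stripped to its essence, “looking for a pattern” in a signal can be as simple as sliding a template across the incoming samples and scoring the overlap at each position. The toy below illustrates the principle only (real wake-word engines run trained neural networks on audio spectrograms, not raw correlation):

```python
def correlate(signal, template):
    """Slide the template over the signal; a high score means a match."""
    scores = []
    for i in range(len(signal) - len(template) + 1):
        window = signal[i:i + len(template)]
        scores.append(sum(a * b for a, b in zip(window, template)))
    return scores

# A crude "wake word" template and an incoming sample stream.
template = [1, 3, 1]
signal = [0, 0, 1, 3, 1, 0, 0]

scores = correlate(signal, template)
best = max(range(len(scores)), key=scores.__getitem__)
print(best, scores[best])  # prints: 2 11 -- the template is found at offset 2
```

A neural network effectively learns many such templates at once, along with how tolerant to be of noise and variation.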
AI eliminates time-consuming research and offers precise risk analysis, reducing the time it takes for analysts to make important decisions and offer a well-organized response.
Among AI applications for everyday life, the field of medicine is another area where this discipline is fundamental. Machine learning and AI are used to detect a disease’s potential symptoms, analyze imaging examinations, and make diagnoses based on pathological reports. But that’s not all: AI applied to chatbots makes it possible to find patterns in patients’ symptoms, just as facial recognition technologies make a major contribution to the identification of rare genetic diseases.
The infrastructure that enables AI has evolved to support an increasingly wide range of activities and business processes. Traditional processors, architectures, and platforms often take a long time to evaluate the range of options in a neural network (a deep learning algorithm). The most advanced processors and graphics processing units (GPUs) have partially solved this problem, offering superior computational capability and speed, but this is still not enough to support the most advanced AI solutions.
“Keyword detection is actually a relatively simple task. It can be done with a low power Arm Cortex-M4 microcontroller,” said Ardis. “It is not a big challenge. But if you want to do great things like an industrial security camera or classify a whole range of sounds, there historically hasn’t been anything that can really do it on a battery or at an affordable cost. The same thing with, for example, facial identification or counting people. There haven’t been outstanding solutions that are cheap and low-consumption. So I think one of the things we are certainly trying to address with our new MAX78000 IC is to provide a cost-effective point in terms of chip capability to really allow these things on the market.”
He added, “Machine learning engineers are used to using infinite power to implement their neural networks. This doesn’t fit with battery operation, where you count bytes and microwatts, for example.”
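Counting bytes, as Ardis puts it, is in practice largely about quantization: replacing 32-bit floating-point weights with 8-bit integers so that a network fits in a microcontroller’s memory and can run on cheap integer arithmetic. The sketch below shows the simplest symmetric scheme, as a generic illustration only (not the MAX78000 tool flow):

```python
def quantize_int8(weights):
    """Map float weights to int8 values using one shared symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0  # one float scale for the whole tensor
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, to measure the rounding cost."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.06, 0.9999, -0.31]
q, scale = quantize_int8(weights)

# Each weight now needs 1 byte instead of 4: a 4x memory saving,
# at the cost of a small rounding error per weight.
approx = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q)
print(f"max reconstruction error: {max_err:.4f}")
```

The rounding error is bounded by half the scale step, which is why well-trained networks usually lose little accuracy when quantized this way.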
To realize the full potential of AI, we need to think beyond the technology itself. It is estimated that AI can contribute to GDP growth with a significant impact on all industries. The key factor is to develop and acquire the necessary technical skills.
AI is one of the technologies that will change our lives. This is why engineers working in this field must develop critical thinking that can make them true protagonists of the future.
“I have always thought that part of the value of the Internet of Things was to make intelligence invisible. I think that AI is another tool to really do it more effectively. AI is a powerful tool to make all these things around us more invisibly intelligent,” said Ardis.
“AI IoT makes all our lives better. It makes all our lives easier. It does all these repetitive, boring tasks that humans don’t like to do,” added Dwelley.
The use and development of AI should support a new economic model capable of increasing people’s well-being while remaining sustainable for the planet by optimizing energy consumption. Industry will not be the only field to see the wonders of AI; physics will also benefit from it.
Some issues at the frontiers of physics, such as combining quantum mechanics and general relativity, are incredibly complex. As the physicist Al-Khalili said, “The next Einstein will not be a human being.” AI is beginning to show innovative, intuitive, and creative thinking skills. One day it will be able to face such problems, and perhaps it will be able to imagine mathematical patterns beyond the capability of the human brain. Perhaps time travel? In all honesty, I can say that I am not dying to see the future.