Blog: Do We Have AI Clarity Now?

Article by: Russell James

With several ‘AI winters’ since the late 1960s, do we have greater clarity on AI and its potential or is it still in the realms of science fiction?

There is a lot of confusion around artificial intelligence (AI) – what it is and isn’t, how it works, what works and what doesn’t, its dangers, its potential for bias and generally the chasm between hype and reality.

In recent months, several articles, from both mainstream media and specialists, have debated whether we are heading for a new AI winter. The term describes a downturn in interest in, and funding for, AI, and there have arguably been two significant AI winters so far. This article looks at whether AI is going through another downturn or whether we are now better able to understand the strengths and weaknesses of current AI technology. That means looking at AI past and present, and at how the cyclical nature of AI hype and disillusionment may not be as clear-cut as it seems.

What is AI?

AI is a highly overloaded term. Past usage of the term aside, in today’s world AI essentially means adding a limited, and often singular, type of “intelligence” to a product: products become smarter at certain focussed tasks. This is often achieved using neural networks, although the term is also being used to give new life to older, more classical computer vision or imaging solutions. Machine learning (ML) is another frequently misused term; it is the process by which these neural networks “learn” the topology and weights needed to achieve the desired result.
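By way of illustration, the short sketch below (in Python with NumPy, and entirely a toy example rather than anything taken from a particular product) shows what “learning the weights” amounts to: a single artificial neuron repeatedly nudging its weights until it reproduces a simple target function, here a logical AND.

import numpy as np

# Illustrative only: a single artificial neuron "learning" the weights for a
# logical AND. The task, learning rate and implementation are toy choices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the four input patterns
y = np.array([0.0, 0.0, 0.0, 1.0])                           # desired result: 1 only for (1, 1)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights to be learned
b = 0.0                  # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10_000):
    p = sigmoid(X @ w + b)                      # forward pass: neuron output per pattern
    grad = p - y                                # gradient of the cross-entropy cost w.r.t. the weighted sum
    w -= learning_rate * (X.T @ grad) / len(y)  # nudge the weights downhill
    b -= learning_rate * grad.mean()

print(np.round(sigmoid(X @ w + b), 2))          # ends up close to [0, 0, 0, 1]

Real networks have millions of weights and far richer structures, but the loop of forward pass, error measurement and weight adjustment is essentially the same.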

Neural networks are of course nothing new. How neurons might work was first theorised in 1943 by Warren McCulloch and Walter Pitts, who used a simple electrical circuit to model a simple neural network. In 1958, Frank Rosenblatt proposed the perceptron, modelled on the McCulloch-Pitts neuron. In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models called “Adaline” (ADAptive LINear Element) and “Madaline” (Multiple ADAptive LINear Elements). Adaline was developed to recognize binary patterns so that it could predict the next bit in a bitstream from a phone line. Madaline was used to eliminate echoes on phone lines and is still in commercial use today.

1969 saw the advent of the first AI winter, after Marvin Minsky argued persuasively that the single-layer perceptron approach could not be effectively extended to multi-layered networks without prohibitively long compute times.

In the 1980s, that AI winter came to an end, with the Hopfield network from John Hopfield and the advent of backpropagation and gradient descent techniques to efficiently adjust weights towards the minimum of the cost function.
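To give a feel for what backpropagation and gradient descent actually do, here is a minimal sketch of a two-layer network trained on XOR, with the chain rule applied layer by layer to push the weights towards a minimum of a mean-squared-error cost. The task, layer sizes, seed and learning rate are all arbitrary choices for illustration.

import numpy as np

# Illustrative only: backpropagation through a tiny two-layer network, using
# gradient descent to move the weights towards a minimum of the cost function.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])      # XOR: not solvable by a single layer

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer, 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    cost = np.mean((out - y) ** 2)              # mean squared error

    # Backward pass: the chain rule applied layer by layer
    d_out = 2 * (out - y) / len(y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid);  b1 -= lr * d_hid.sum(axis=0)

print(round(float(cost), 4), np.round(out.ravel(), 2))  # cost shrinks; outputs typically approach [0, 1, 1, 0]

A single-layer perceptron cannot represent XOR at all, which is precisely why being able to train multi-layer networks efficiently mattered so much.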

One of the first convolutional neural networks came about in 1994 and set neural network development firmly on the path to deep learning. However, the early 2000s were a slow time for neural network development and could be considered the second AI winter. In 2010, Dan Claudiu Ciresan and Jürgen Schmidhuber published one of the first implementations of a GPU-based neural network. This opened up AI processing to the massively parallel compute offered by GPUs which, whilst not perfect, was much better than what CPUs alone could provide. In 2012, AlexNet was released, and thus began the most recent phase of furious activity on neural networks and their usage in real-world environments and applications.

The AI of today is not capable of general intelligence or complex contextual understanding; rather the term AGI (artificial general intelligence) is often used to describe this type of capability. AI is not the answer to everything and, in many cases, more established solutions perform better. Often, a mix of the two gives a more well-rounded solution to a given problem.

How does AI work?

The main thrust of AI development has been in the form of neural network processing, particularly deep neural networks. There are many guides, articles and books on how neural networks work. The most common networks today are essentially pattern-matching tools with a categorized probability output vector, e.g. a 60% probability that this is a flower and a 20% probability that it is a tree. As network architectures grow in complexity, multiple stages perform different elements of the desired task, while temporal feedback enables a level of contextual interpretation.
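As a simple sketch of that categorized probability output vector, the snippet below converts made-up final-layer scores into probabilities using a softmax; the class names and numbers are invented for the example.

import numpy as np

# Illustrative only: turning a network's raw final-layer scores into the
# categorized probability vector described above. The class names and scores
# are invented for the example.
class_names = ["flower", "tree", "car", "person"]
scores = np.array([1.8, 0.7, 0.0, 0.0])       # made-up "logits" from a classifier

probs = np.exp(scores - scores.max())         # softmax: exponentiate...
probs /= probs.sum()                          # ...then normalise to sum to 1

for name, p in zip(class_names, probs):
    print(f"{p:6.1%}  {name}")                # roughly 60% flower, 20% tree, 10% car, 10% person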

Dangers of AI: still science fiction?

The dangers of some rogue artificial intelligence taking over the world are firmly in the realms of science fiction right now. The real dangers of AI today lie in several areas. First, the non-deterministic way in which neural networks behave during training can lead to unintended inference results once they are exposed to the real world.

“Ghosts in the machine” is an apt, if more foreboding, analogy for this danger. There have been many experiments on how to fool object or image recognition networks, such as by changing single pixels or applying randomized patterns to objects, often with catastrophic results. Another danger is bias in the training datasets, and potentially in the network architecture itself. These embed inherent flaws in the network that are not discovered until it is exposed to wider real-world data.
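To make the first of these dangers more concrete, the sketch below stands in a toy linear classifier for a deep network and shows how a small, targeted perturbation, far below the level a person would notice, flips its decision. Everything in it (the weights, the “image”, the labels) is invented for illustration; real single-pixel and patterned-object attacks exploit the same sensitivity in much larger networks.

import numpy as np

# Illustrative only: a toy linear "image" classifier whose decision is flipped
# by a barely visible, targeted perturbation. The weights, image and labels
# here are invented.
rng = np.random.default_rng(0)
w = rng.normal(size=64)            # weights of a toy classifier over an 8x8 "image"
x = rng.normal(size=64)            # a "clean" flattened input image

def label(image):
    return "flower" if image @ w > 0 else "not a flower"

# Nudge every pixel by +/-eps in the direction that pushes the score across the
# decision boundary (the fast-gradient-sign idea; for a linear model the
# gradient is just the weight vector itself).
eps = 1.01 * abs(x @ w) / np.abs(w).sum()      # just enough to cross the boundary
x_adv = x - eps * np.sign(w) * np.sign(x @ w)

print("clean image:    ", label(x))
print("perturbed image:", label(x_adv), " (max change per pixel:", round(float(eps), 3), ")")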

To combat these problems and reduce these unintended consequences, further work is needed: awareness of bias in training datasets, continued development of network architectures, a more robust training process and more frequent checking for erroneous results in the execution pipelines.

Hype and reality

There is an enormous amount of hype around AI. Given the scale and breadth of what AI could potentially do, this hype is inevitable. However, it must be tempered with a healthy dose of reality. The world is complex, and interpreting it even more so. The complexity of the network architectures needed to deliver some of these promised applications is significant, and this realization is now being felt more keenly. Autonomous driving is a great example of this realism in action. Early promises were to have Level 4 and 5 cars by mid-2020, whereas the reality now is that this is going to be much further out: the current focus is on Level 2+ or maybe Level 3.

The hype, in this case, has been for a transition from a traditional automotive industry to a mobility-based one. This would turn the entire business model on its head, by moving from ownership to a mini-transaction or subscription service, with autonomous driving as one of the key features. The underlying technology needs to catch up and enable at least Level 4, before many (but not all) of the current concept mobility solutions can be realized.

Staying with automotive a little longer, the Hollywood vision of autonomous driving is a reality that is most likely 30-40 years out. In the next 10 years, it’s much more likely we’ll see a more limited version based on reasonable coverage of machine-to-machine and machine-to-human driving scenarios.

An additional set of difficulties lies not so much in the technology itself as in the integration of that technology into current infrastructure and into liability and legal frameworks. Some automotive OEMs at the forefront of ADAS and autonomous driving have taken the approach of adding new features and functions to their vehicles and working out these difficulties later; more a case of seeking forgiveness than asking permission.

Hindsight will no doubt decide whether this approach is judged to be “right” or not. For now, though, these OEMs are yielding impressive results.

Are we really in an AI winter?

AI is the technology that refuses to quit, hence this cycle of hype and disillusionment. The difference from other areas is that research and exploration of AI are constantly opening up new areas of application, showing that we are only at the very beginning of the technology maturity curve.

One could view this cyclical variation as mere noise on what will become massive technological growth. As we come to understand the capabilities and complexities of what state-of-the-art AI can achieve, interest will naturally wax and wane as the baby steps of progress are overhyped.

However, what is clear is that with continued R&D investment AI has the potential to solve many currently intractable problems. Are we in an AI winter? It’s probably more of an AI autumn, but it doesn’t matter as much as it once would have. This is because there is much more clarity on what AI can currently achieve and a good line of sight to the incredible heights it could reach.
