At the AAAI-84 conference, more than 30 years ago, there was a panel discussion on The Dark Ages of AI. Drew McDermott said in the resulting article in AI Magazine (vol. 6, no. 3, 1985):
In spite of all the commercial hustle and bustle around AI these days there’s a mood […] of deep unease among AI researchers […]. This unease is due to the worry that perhaps expectations about AI are too high, and that this will eventually end in disaster.
I have a real feeling of déjà vu. We have been there before, in the late 1960s, when the failed promises of instant high-quality machine translation caused what was dubbed the AI Winter, with all research funding being stopped. AI recovered, and twenty years later the same thing happened. And now again.
The situation has changed somewhat, as we now have an abundance of data and processing power, but still no intelligence. The algorithms are the same old ones we had before, only run at a larger scale. Artificial neural networks are faster, and with Deep Learning they have become easier to use. We are replacing human engineering work (feature selection, etc.) with clever algorithms and letting the machine find things out by itself.
The problem is that this works fine in some situations, and is then suddenly treated as a magic bullet. Everybody now has to do Deep Learning, as it has become the new marketing buzzword. AI is everywhere, but also nowhere, really. We do not know anything more about the world than we did before, but we can get fast computers to recognise patterns in data. No intelligence there. And if you try to calm down expectations, you are not ‘disruptive’ enough.
It is only a question of time until it all comes crashing down, and AI will be blamed for empty promises. We’ve been there before.
This all fits into the overall pattern of human behaviour. In Europe we are forgetting the lessons of the early 20th century, and nationalism is on the rise. It might seem over the top to compare the current situation to 1930s Germany, but the point is that at the time people didn’t know what was happening. We now have the benefit of knowing where it can lead, and it would be foolish to ignore the warning signs.
In AI as in society: history always repeats itself, and if you forget the past, you are condemned to repeat it. (George Santayana)
Note: thanks to @DiegoKuonen for tweeting the reference to the AAAI-84 paper.