Arguable Intelligence

In which AI is only A, and not really I

March 28, 2017
AI, machine learning

I work in the field of AI. I studied it at university, I read books about it in my spare time, I implement toy systems, and I try to keep up to date with current developments. And it infuriates me…

Artificial Intelligence has moved from an academic and applied subject to a marketing buzzword. Every washing machine is now “powered by AI”, so these days AI means everything and nothing. Let’s look at the traditional definitions:

Hard AI
The cognitive processes of human beings are implemented in software. A computer thereby functions essentially like a human being, with the brain equivalent to a microprocessor.
Soft AI
The cognitive behaviour of a human being is imitated by computer software. There is no claim that this models human cognition, only that the outcomes are the same.

This dichotomy was current when I learned about AI. But a new element has arisen in the past couple of decades with the availability of large amounts of data in all sorts of domains: machine learning (ML). In ML you train a mathematical model with data, which then allows you to classify data items the system has not seen before, or to map one item (e.g. an English phrase) to another (e.g. an equivalent German phrase). The difference is that no human cognition is involved.
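The ML recipe just described can be sketched in a few lines. This is a deliberately toy, hypothetical nearest-centroid classifier over 2-D points, not any particular library's API: we fit a model's parameters (here, one mean point per label) to labelled examples, then classify an item the system has never seen.

```python
def train(examples):
    """Fit the model: compute one centroid (mean point) per label."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def classify(model, point):
    """Assign the label of the nearest centroid (squared distance)."""
    x, y = point
    return min(model,
               key=lambda label: (model[label][0] - x) ** 2
                               + (model[label][1] - y) ** 2)

# Labelled training data (entirely made up for illustration).
examples = [((0, 0), "bad"), ((1, 0), "bad"),
            ((9, 9), "good"), ((10, 8), "good")]
model = train(examples)
print(classify(model, (8, 9)))  # → good  (a point not in the training data)
```

Note that the trained model is nothing but a handful of numbers; the "understanding" is entirely in the eye of the human who labelled the examples.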

AI refers directly to cognition, hence the ‘I’ in the name, but ML does not. It is an automaton with no understanding of what it is doing. In early AI systems, domain knowledge was usually modelled in a representation system: a logical language, a semantic network, or something similar. Knowledge representation was, and is, an important aspect of AI, as you cannot have AI without knowledge. In an ML system, knowledge is encoded in model parameters: numerical values that have no meaning outside the model.
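The contrast can be made concrete with a sketch (illustrative only, not a real system): a fragment of a semantic network, where each piece of knowledge is an explicit, inspectable fact, set against a list of trained parameters, which say nothing to a human reader.

```python
# Symbolic knowledge representation: each fact is a readable triple.
semantic_network = [
    ("canary", "is-a", "bird"),
    ("bird", "has", "wings"),
    ("bird", "can", "fly"),
]

def knows(network, subject, relation, obj):
    """A human can point at the exact fact the system holds."""
    return (subject, relation, obj) in network

# ML-style "knowledge": just numbers (invented here for illustration);
# there is nothing to point at and ask "why?".
model_parameters = [0.73, -1.12, 0.05, 2.41]

print(knows(semantic_network, "bird", "can", "fly"))  # → True
```

In the symbolic fragment you can trace every answer back to a stated fact; in the parameter list, no individual number corresponds to anything a person would recognise as a piece of knowledge.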

One thing that motivates me to work in AI is that we learn how humans process the world. We find out what is important, how we make decisions, how we evaluate what is better or worse. In ML we just feed a black box examples that we have marked up as good or bad, and we get back a black box that can distinguish between these categories with more or less acceptable accuracy. What have we learned about the process of making that distinction?


To reiterate: machine learning is not artificial intelligence. It is statistical engineering. I’m not denying that it is useful, but calling it AI is plain wrong, and will undoubtedly lead to a new AI winter once the bubble bursts.