
It has been roughly 70 years since Alan Turing, a mathematician engaged both in theorising about digital computers and in constructing them, considered the possibility of creating machines that ‘think’. In 1950 he proposed a test to validate whether or not they demonstrated artificial intelligence, or AI. Several have argued that AI in the ‘strong’ sense implied by the Turing test can never be achieved. For example, John Searle, in his ‘Chinese Room’ thought experiment of 1980, drew on the Turing test to argue against the reductionist view that human thought can be reduced to merely syntactic machine operations.
Funding for AI research has mostly continued apace, and has lately been massive, though it has been interrupted by two ‘AI winters’ (see the timeline below). The first, in the 1970s, followed researchers’ failure to create intelligent systems by programming them as symbolic ‘reasoning’ systems. The second, in the 1980s, followed their failure to encode human reasoning as sets of rules for computers to follow: the so-called ‘expert systems’.
Since the beginning of this century, machine learning has risen to prominence as an AI technology. It is a collection of techniques whereby algorithms are exposed to large amounts of data and, as a result, acquire capabilities in relation to new data without being explicitly programmed to do so. Many machine learning techniques date from the last century, but Big Data, increased processing power and algorithmic advances have empowered them this century. There has been significant progress in image and speech recognition, game-playing, and machine translation.
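To make that pattern concrete, here is a minimal sketch in Python using scikit-learn (my own choice of library and dataset, not anything named above): the program is shown labelled examples of handwritten digits rather than explicit rules about digit shapes, and is then scored on images it has never seen.

```python
# Minimal sketch of the machine-learning pattern described above.
# The model is given labelled examples, not hand-coded rules, and
# then makes predictions about data it has not seen before.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # small dataset of handwritten-digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)  # no explicit rules about digits
model.fit(X_train, y_train)                # 'capabilities' acquired from data alone

print("accuracy on unseen data:", model.score(X_test, y_test))
```

Whatever the model has acquired here, it is a statistical sensitivity to patterns in pixel values, which is exactly the kind of capability the rest of this article contrasts with symbolic intelligence.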
However, I argue in The Scent of Data, and in a recent talk on AI for Sceptics, that the capabilities of machine learning systems are quite unlike those of the human intellect. We humans are processors and inventors of symbolic systems for reasoning and communication, par excellence. Consider, for example, the symbolic meaning in Blake’s painting of Newton (above), and the symbolic reasoning in the mathematics and physical theories in which Newton is engaged. The capabilities of machine learning include none of those things; indeed, they miss them by gigantic gaps. Rather, machine learning techniques are better compared to olfaction (smell) applied to data than to intelligence applied to symbols. And an alien, inhuman ability to recognise the ‘smell’ of data, at that.
We don’t (and can’t) understand how machine learning instances operate in any symbolic (as opposed to reductive) sense. Equally, we don’t know what structures and processes in our brains enable us to process symbols in intelligent ways: to abstract, communicate and reason through symbols, whether they be words or mathematical variables, and to do so across domains and problems. Moreover, we have no convincing path for progress from the first type of system, machine learning, to the second, the human brain.
It seems, in other words, that machine learning – notwithstanding genuine progress – is another dead end with respect to intelligence: the third AI winter will soon be upon us. There is too much money behind machine learning for that winter to arrive in 2018, but it won’t be long before the limited nature of its advances sinks in.
That leads to the point of this article, which is to pose related questions under the banner ‘intelligence is analogue’ (IA). What evidence is there that our symbol-processing faculties are effectively reducible to algorithms running on digital computers? There is 70 years of evidence against that view, but it persists. Moreover, can we do better than contested arguments such as Searle’s to refute the vision of AI?
I share AI proponents’ fascination with it. We are drawn to AI partly by the seductive nature of the idea (propounded in art, fiction, TV and cinema), and partly by an analogy: digital computers already ‘do arithmetic’, ‘do logic’ and ‘process text’ as we do, so the task of AI research is merely to combine their fundamental operations in ever more sophisticated ways. But that analogy is tendentious, metaphorical and vague. Digital computers can also ‘do’ nuclear physics – by running calculations from the relevant equations – yet we don’t feel tempted to say that nuclear fission is occurring in our laptops.
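As a toy illustration of that last point (the constants and the roughly 0.2 atomic-mass-unit mass defect are commonly quoted values of my own choosing, not figures from this article): a few lines of code can ‘do’ nuclear physics by evaluating E = mc² for a uranium-235 fission, and yet nothing nuclear happens in the machine that runs them.

```python
# A computer 'doing' nuclear physics: evaluating E = mc^2 for the
# approximate mass defect of a uranium-235 fission. No fission occurs
# in the laptop that executes this calculation.
C = 299_792_458.0        # speed of light, m/s
AMU = 1.660_539e-27      # one atomic mass unit, kg

mass_defect_kg = 0.2 * AMU                    # ~0.2 u lost per fission (approx.)
energy_joules = mass_defect_kg * C ** 2       # E = mc^2
energy_mev = energy_joules / 1.602_177e-13    # 1 MeV = 1.602e-13 J

print(f"Energy released per fission: ~{energy_mev:.0f} MeV")  # roughly 186 MeV
```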
Which takes us to a counterpart question, one that has been posed for centuries: given that thinking is an analogue process, how can we come to understand it as something produced by the components of the human brain? No one would claim that we are anything other than a very long way from answering that question.
But could we come closer, if only we chose to apply serious effort, to answering the first question: the question of IA? Will the third AI winter last forever?