The discipline of artificial intelligence (AI) has been around for about 70 years. Its goal, a software analogue of human intelligence, still doesn’t exist despite advances. Every example of what is sometimes taken to be AI is in fact a case of the ‘robotic fallacy’1. This fallacy is to mistake an instance of seemingly intelligent behaviour for the existence of an underlying faculty of intelligence. That is, one sees or hears an AI program or robot say or do something which, if it were human, would be associated with a general level of intelligence. And one tends to assume it also has – or could come to possess – that intelligence. But in fact the behaviour falls within what, by human standards, is a very narrow and customised domain. There is little, if anything, else that the AI can offer in the way of apparently intelligent actions. And it’s not a question of waiting a little while until researchers have worked out how to attain AI. There is a vast chasm they need to cross. And that chasm, it will be argued here, exists in part because of a failure to recognise the nature of symbolic systems.
To make things more concrete, here is pseudocode for a type of ‘AI’ program2 typified by Alexa, Siri and other virtual assistants or bots:
// Pattern-match the utterance to a hand-written (greeting, name) template
if ( human says "Hi, I'm Fred" )
{
    say( "Hello, Fred" );
    say( "What would you like to play or do?" );
}
Technology called machine learning turns the human’s spoken words into text and performs ‘natural language processing’ to break them down to fit a template of (greeting, name). The first of the AI’s responses follows that programmatic template. It uses the fact that, syntactically speaking, the human uttered a greeting and supplied a name. So far, so human-like in its effect. But not remotely intelligent: at no point has any meaning been processed by the AI, nor has any reasoning been applied; only pattern-matching, i.e. fitting data to a computational template. The AI’s subsequent response is human-scripted. It is merely a prompt for further statements from its human interlocutor, which will in turn be pattern-matched to one of the relatively small number of templates available to the AI. Which is fine if you want to play songs, set alarms etc.; but no use at all if you want to talk about Brexit, about washing up, or about your baby’s funny exclamation that morning. Unless, of course, you think a Wikipedia extract related by keyword will suffice – another case of pattern-matching, without any processing of meaning.
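To make the template idea concrete, here is a minimal sketch in Python of this kind of pattern-matching; the regular expression, the template and the scripted responses are illustrative only, not the actual implementation of any assistant:

import re

# A single hand-written template of the form (greeting, name).
GREETING_TEMPLATE = re.compile(r"^(hi|hello|hey),?\s+i'?m\s+(?P<name>\w+)$", re.IGNORECASE)

def respond(utterance):
    # Fit the utterance to the template and emit scripted responses.
    match = GREETING_TEMPLATE.match(utterance.strip())
    if match:
        name = match.group('name')                 # a syntactic capture; no meaning is processed
        return ['Hello, ' + name, 'What would you like to play or do?']
    return ["Sorry, I don't know that one."]       # anything outside the narrow domain

print(respond("Hi, I'm Fred"))

The first response follows directly from the captured name; the second is scripted. At no point is anything resembling meaning processed.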
The robotic fallacy arises from a false induction: “If it can understand my spoken words and answer ‘Hello, Fred’, then what else is possible?” A useful analogy for the invalidity of this induction is that of someone who climbs a tree and then declares that they are closer to the Moon. That huge distance to the goal, with no idea of how to cross it using the techniques employed so far, is the ‘AI chasm’.
The robotic fallacy appears frequently in media reports of AI. It clouds our understanding of what software is capable of, and of what little it will plausibly be capable of for hundreds of years. Always assuming, that is, that civilisation survives to reach the next century despite the climate breakdown to which digital activity contributes massively. The fallacy also reinforces the hold that digital technology has over us, with its chimerical promise of purely technological solutions to what are in fact mostly human problems (political, ethical, sociological, …) – a fantasy that serves mainly the imperatives of surveillance states and the profit-making interests of corporations whose products we buy and whose advertising we are supposed to respond to.
The remainder of this article invokes a philosophical approach, involving thought experiments, to help us dispel the fantasy of AI. As Iris Murdoch said of moral philosophy – and this applies to analytical philosophy in general – “part of [its] role is to improve ourselves, to get us in a position where we can liberate ourselves from fantasy and see things as they really are.”
Intelligence
There is no single agreed-upon definition of human intelligence. All definitions do agree on one aspect of it, however: that it is a generalised, adaptive faculty, whether that is expressed in terms of overcoming unseen obstacles, reasoning, planning or solving problems beyond any particular domain or experience. The generality and adaptivity apply with respect to differences in the domain of discourse (now we’re talking about Brexit, now we’re talking about babies), and differences in the task (now I’m doing the washing up, now I’m writing code).
Even a human who possesses no great acumen in any particular domain is nonetheless capable of language – of conversing variously and with unbounded capacity about things as diverse as Brexit and babies, whether orally or using text or signing. A being capable of language is thereby intelligent in this sense to some degree. As Descartes wrote, in Discourse on the Method in 1637:
“if there were any such machines that bore a resemblance to our bodies and imitated our actions as far as this is practically feasible, we would always have two very certain means of recognizing that they were not at all true men. The first is that they could never use words or other signs, or put them together as we do in order to declare our thoughts to others. For one can well conceive of a machine that utters words, and even that utters words appropriate to the bodily actions that will cause some change in its organs…. But it could not arrange its words differently so as to respond to the sense of all that will be said in its presence, as even the dullest man can do.”
This faculty to “arrange … words differently so as to respond to the sense of all that will be said in its presence” is one criterion for human-like intelligence. It is the underlying basis for the Turing test, which Alan Turing devised to establish whether a machine’s conversational responses could be distinguished from a human’s.
Terms have been coined to distinguish what we have achieved so far with computers from what human intelligence is capable of. We read of ‘strong’ (human-like) vs ‘weak’ AI, and of ‘AGI’ or artificial general intelligence, i.e. intelligence akin to that of humans, as opposed to today’s narrow AI.
Whatever the terminology, there are in fact no digital systems that remotely approximate human intelligence, even after about 70 years of massive effort (see AI timeline). Rather, there are digital systems, fundamentally different in their capabilities from our brains (or minds), that happen to meet our own behaviour in a small set of cases where comparison is possible. Those notably include:
1. taking part in games such as Go, poker and chess
2. turning speech into text
3. converting text from one language to another
4. labelling images
5. synthesising images and videos
These statements specifically avoid the anthropomorphic terms ‘playing’, ‘understanding’, ‘translation’, ‘recognition’ or ‘art’. None of those phenomena take place in the systems we have built thus far – let alone the ‘mind reading’, ’emotion detection’, ‘prediction’ or other tendentious terms that sometimes appear in articles.
Recent advances in the areas 1-5 are mainly due to the suite of technologies called machine learning. Before describing what machine learning is, consider the following examples of its limitations, taken from a recent paper Natural Adversarial Examples by Dan Hendrycks et al.3
ResNet-50 is a state-of-the-art image classifier that uses a machine learning technique known as residual learning, a form of deep learning. The figure shows each image’s classification in red above it. ResNet-50 gets the classification of these images completely wrong, and does so while estimating its mathematical ‘confidence’ in the result at 99%. This is despite the existence of many more images that it classifies accurately. The key point here is that the error or failure modes of machine learning are not human-like. They are, as is argued below, unknowable except post hoc.
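For the curious, here is a minimal sketch of how such a classifier is typically invoked, using the PyTorch and torchvision libraries; the image file name is hypothetical and the mapping from the class index to a human-readable label is omitted:

import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Load a ResNet-50 whose weights were pre-trained on ImageNet.
model = models.resnet50(pretrained=True)
model.eval()

# Standard ImageNet preprocessing: the image becomes nothing but a tensor of numbers.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("dragonfly.jpg")            # hypothetical input image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
    probs = torch.softmax(logits, dim=1)       # the reported 'confidence' is a softmax output
    confidence, class_index = probs.max(dim=1)

print(class_index.item(), round(confidence.item(), 2))

Note that the ‘confidence’ is a purely mathematical quantity: it can be close to 1.0 even when the label is, by human lights, absurd.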
There is an increasing body of work on so-called adversarial examples: images whose contents are clear to the human eye but which AI labels incorrectly – despite a host of examples where the classifier agrees with humans.
This is an example of the robotic fallacy par excellence. “Look, it can pick out objects in images just like us!” But it can’t. It doesn’t.
Machine Learning
Over the last twenty years or so, systems based on work in statistics dating from about 300 years ago (Bayes’ theorem) or from the 1940s (artificial neural networks) have come to the fore under the collective heading of machine learning. Machine learning is a set of techniques applied to big data in an attempt to detect patterns in it and to incorporate those patterns in the processing of new data. In that sense, these techniques ‘learn’ from previous data. The goal of machine learning is to process subsequent data more effectively.
Over that period, two factors have increased the efficacy of machine learning in domains 1-5 by orders of magnitude:
- the increase in the amount of human text, images, videos and other ‘big’ data that has been aggregated, especially with the rise of social networks
- the increase in available computing power, with tens of thousands of processing units now routinely applied to a single problem.
Machine learning is very different from earlier approaches to AI (see AI timeline). Those include ‘symbolic’ AI, where intelligent human procedures such as planning were emulated in code; and expert systems, where human expertise was emulated in codified rules. Both failed, to the extent that they were followed by so-called AI ‘winters’ (rapid drops in funding). The hopes that their proponents had raised were dashed against real-world measures of performance.
We can split machine learning into two broad categories:
- Plain old statistics. This includes Bayesian analysis and clustering algorithms, for example. These are, or are based on, classical probability theory. Mathematical measures of learning ‘progress’ guide the statistical development.
- Connectionist statistics. This includes artificial neural networks – assemblies of connected computational nodes, typically arranged in layers – although other architectures exist. Some nodes accept inputs, others produce outputs, and the rest are internal. The figure shows just one internal or ‘hidden’ layer, but in general there are multiple such layers in between. The governing principle of connectionist statistics is to adapt the numerical ‘strength’ of the connections between nodes in adjacent layers as successive data is passed through them. The values of those connections are adjusted so as to optimise a mathematical objective, sometimes framed as a notional ‘energy’ of the system. A minimal sketch in code follows this list.
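Here is that sketch: a tiny two-layer connectionist system trained by gradient descent on a toy task. The layer sizes, learning rate and task are illustrative choices, not taken from any particular system:

import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 2 inputs -> 4 hidden nodes -> 1 output.
# Its 'knowledge' consists of nothing but these numerical connection strengths.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training data (the XOR patterns), presented purely as numbers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for step in range(20000):
    # Forward pass: numbers in, numbers out.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    error = output - y                          # the quantity being minimised

    # Backward pass: adjust the connection strengths by gradient descent.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(np.round(output, 2))                      # typically close to [0, 1, 1, 0]

After training, the ‘knowledge’ is just the contents of W1 and W2: a set of numbers with no symbolic reading attached to them.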
Among connectionist statistical techniques, a well-known form is ‘deep learning’. This means processing the data through successively higher-order features, e.g. from pixels in an image to lines, and from lines to shapes. It entails processing that is typically carried out on an artificial neural network with many more nodes and layers than was computationally feasible before the advent of massive computing power. In fact, some results of deep learning have become exclusively the domain of wealthy entities such as corporations: university teams do not have access to the requisite computing power.
In machine learning, humans decide how to extract ‘features’ from the data to be provided as input. These are numbers, and numbers4 alone are processed. Those numbers might be, for example, the values of each pixel in an image, or higher-order data extracted from the image such as numbers describing the edges formed by contrasting colours.
At no point in machine learning are symbols processed. As with all forms of computer software devised so far, machine learning processes only numbers. AI research tends to be lax about the term ‘symbol’, which means a sign with meaning. Newell and Simon, in a seminal paper5, describe computers as instances of “physical symbol systems”:
“A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure).”
Herein lies a problem that has led to the AI chasm: AI research has not seen beyond mere computational execution. In fact, signs within an execution are never symbols because they are never part of a language in the human (as opposed to computer programming) sense. Signs become symbols by entering into a language between two or more interlocutors who share what the philosopher Wittgenstein called a form of life – an activity with common ground. Even so-called ‘symbolic’ AI is no such thing. It is a form of AI that manipulates meaningless tokens (variables) according to the operation of code which was written, meaningfully, by a human.
As Wittgenstein observed in his Philosophical Investigations, one cannot in general conjure symbols, i.e. meaning, from formal (machine-like) manipulations of tokens alone. He wrote, “To imagine a language means to imagine a form of life.” Words as symbols become such only through their embedding in life: in the forms of ‘language game’ in which they are used. He wrote, “If a lion could speak, we could not understand him.” The lion’s symbols, presumably carrying meaning among lions who share a particular form of life, would not carry meaning for us, in so far as we do not share the lion’s form of life. One can argue as to whether this applies to activities we share with lions such as sleeping and eating, but Wittgenstein’s point is that we could not exchange symbols with meaning if the grounds of meaning are absent. In the film Arrival (2016), we are supposed to believe that humans and alien life forms can learn to communicate merely by displaying signs to one another without any physical contact and without sharing any artefacts. On the contrary, communication would be possible only to a very limited extent in the absence of common ground.
The root of the robotic fallacy
Returning to the robotic fallacy, how is it that machines can exemplify seemingly intelligent behaviour in some cases, without in fact possessing a general faculty for processing symbols? And what is it that they lack? Let us work through some examples.
Face detection
Consider a case in which we present a million images such as the faces on the left to a machine learning algorithm, providing pixels or other numerical features. After training we present another image (the sequence of bits on the right) and the program classifies the data in that image as a face. That classification is based purely on statistical relationships between numerical data. It has no direct relationship to how we humans understand faces, which is manifested in the roles that faces play in our lives. The program uses precisely no information about what we understand as faces, which have shapes, tactile properties and capabilities for certain movements, and are involved in eating, sneezing, kissing, crying etc. – all of which we humans might ordinarily use to spot a face even in very unusual circumstances.
To see the contrast, imagine that, for all the variety within a million human faces, the machine learning program in fact hinged on the presence of at least one pixel of a certain value (hue) – a shade of grey, say, in every single one of the training images – in what we (not the machine) would understand as the iris, for example. This would not happen in practice: machine learning tends to take into account all the data presented to it. But in principle it’s the same thing: ‘face detection’ is no more than statistical pattern-matching of data values.
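A deliberately caricatured sketch of that thought experiment, in which a hypothetical ‘magic’ pixel value does all the work, might look like this:

import numpy as np

MAGIC_GREY = 137   # hypothetical pixel value that happened to occur in every training face

def is_face(image):
    # A caricature 'face detector': it checks only for the magic pixel value.
    # Real classifiers weigh all of the input data, but the decision is still a
    # numerical relationship, not an understanding of eyes, noses or kissing.
    return bool((image == MAGIC_GREY).any())

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))
face[30, 30] = MAGIC_GREY               # a pixel that we, not the machine, would locate in the iris
print(is_face(face))                    # True: 'classified' as a face
print(is_face(np.zeros((64, 64))))      # False: a face without the magic pixel is misclassified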
Due to this conflation of data and world, there are two immediate drawbacks which apply to machine learning in general:
- Unknowable error modes. Consider the case in which we show the program an image of a human face, identifiable by any human as such, but without the exact pixel value that, unbeknown to us, it uses as a criterion for facial classification. The program misclassifies the image, claiming it is not an instance of a face. And we have no idea why. We do not see the absence of the magic pixel, just as we do not see the failure to meet a far more complex numerical relationship in an actual case.
- No intelligible rationale. It’s not possible to make intelligible how a machine learning program arrived at a classification in a particular case. It determined a very specific statistical relationship between the image data and its training set. We can observe the numerical parameters within the trained program – the connection strengths, in the case of deep learning – but there is nothing more to be said. There is no “well, there were two almond-shaped eyes arranged symmetrically, and a nose with nostrils then mouth at right-angles to one another below them.” Some researchers claim to be able to provide automated commentary on machine learning classification, but all attempts result in unintelligible statistical relationships at some level of the description.
What is remarkable about the adversarial examples above – the misclassification of a dragonfly as a manhole cover, for example – is not so much the incorrectly labelled examples themselves, but that the classifiers label so many images correctly. These systems do not employ human-like understanding – for example, of the fact that dragonflies have wings and the circumstances in which one would expect to find them, or the shape, size and purpose of manhole covers and the circumstances in which one might find those.
To believe that one could build a reliable classifier in the absence of all of that information would be a kind of magical thinking.
Evenness
Now for an example using numbers alone. In one sense this clears away much of the complexity of the physical world. What it doesn’t do, however, is take us away from Wittgenstein’s idea that symbols can be understood only as embedded in a form of life6. Mathematics is something taught to us in schools and practised in many contexts.
Consider the series of integers:
2, 4, 6, 8, 10, 12, 14, …
After some education, a child can spot immediately that these are the even numbers, and could continue the series indefinitely. The child has learned through forms of life (counting, drawing pictures of numbers as dots that do or do not form rectangles of side two, etc.) the meaning of these numbers and can understand that they exemplify the symbolic relationship “multiples of two”. To use the language of Logic, “multiples of two” is evenness as an intensional relationship or concept, whereas the series gives only instances. Unlike machine learning, children require relatively little exposure to “data” – perhaps scores or hundreds of instances – to learn such a relationship.
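The distinction can be put in standard logical notation (added here purely for illustration; it is not a quotation from the sources cited):

\text{intensional: } \mathrm{Even}(n) \iff \exists k \in \mathbb{Z}.\ n = 2k
\qquad
\text{extensional: } \{2, 4, 6, 8, 10, 12, 14, \ldots\}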
Let us consider the thought experiment of creating a connectionist statistical program that could ‘learn’ this relationship, without encoding divisibility as a prior capability7. We feed this program many millions of even numbers labelled ‘even’, and odd numbers labelled ‘odd’. Now we enter a number it has not seen before. Will it print ‘even’ or ‘odd’?
How could we know, without presenting the number to it? What could possibly be detected as correctly representing ‘divisibility by 2’, about a statistically generated set of numerically weighted connections between deep learning nodes?
All that we could ask about this program is whether it is correct in some sense. Well, how often does it mislabel numbers? Let’s suppose we try. And it correctly labels a hundred numbers. Are we to conclude that it is ‘intelligent’ – has it grasped evenness? No? What about a million tests, then? It’s ‘learned evenness’, surely, if it gets those correct! Now, will it always be correct? Again, how could we know – by inspecting the weights between its connected nodes and somehow divining the symbolic relationship there? No. Even though the underlying artificial neural network has an architecture (e.g. it’s organised as layers), there is no architecture in the state of a trained machine learning system: no analytical representation that we could use as an alternative to the system of weights between nodes, to reason about them. There might be symbolic analogues in the early stages – a pixel is a dot in an image, a line is a segment within a picture, etc. – but the ultimate state of the system is not symbolic. It’s a complex set of numerical relationships. Even if there was a function that could map the state of an artificial neural network to a logical, symbolic relationship that it represents, we would have no method – no architecture – for finding it. The crux is that a machine whose operation is statistical is fundamentally different from one with an architecture (cogs, processing units, …). The latter has analytical properties and can be understood at a symbolic level. The former does not.
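To make the thought experiment concrete, here is a minimal sketch of the set-up, assuming a small scikit-learn multi-layer perceptron fed the raw integer values (not digit strings, as per the footnote); the architecture and training parameters are illustrative, and nothing here is a claim about how any particular run will behave:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Training data: integers presented as raw numerical values, labelled 0 ('even') or 1 ('odd').
# Divisibility is not encoded anywhere as a prior capability.
train_numbers = rng.integers(0, 1_000_000, size=50_000)
X_train = train_numbers.reshape(-1, 1).astype(float)
y_train = train_numbers % 2

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50)
clf.fit(X_train, y_train)

# Now present numbers it has not seen before. Will it say 'even' or 'odd'?
for n in [15679, 222222, 987654321]:
    label = clf.predict(np.array([[float(n)]]))[0]
    print(n, 'odd' if label == 1 else 'even')   # we have no way of knowing in advance

Inspecting clf after training yields only arrays of numerical weights; there is nowhere in them from which ‘divisibility by 2’ could be read off.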
What about the brain, you say. It’s a neural network and we could say the same thing! Now we’re back to the AI chasm. As it happens, the brain is not the same, because it is capable of language and can reason symbolically about states of affairs, irrespective of how its faculties are encoded within its neural circuitry.
Equally you might say: so what if the machine learning system is wrong on some occasions? Surely that’s allowed! Humans get arithmetic wrong sometimes. But when humans make mistakes, it’s fundamentally different. Sharing, as we do, the practices of carrying out calculations and performing arithmetical reasoning, we would be entitled to be puzzled by the mistake and say something like: But I asked about 15679 – did you hear me say ‘even’? And they would say: Oh yes, of course! Or: doh, somehow I switched to thinking about odd numbers!
After getting many examples correct, they wouldn’t say: ‘Show me why it’s not even.’ If they did, we would conclude that they had never understood evenness in the first place; they must have been merely guessing correctly beforehand.
Evenness, the symbolic relationship, is not a statistical property of numbers per se. And this is where we have little, if any, idea of how to take machine learning forward: to human-like symbol processing, in any domain.
At least two things are missing:
- certain innate symbol processing structures which humans seem to be born with
- recognition that our symbol processing faculties do not take place in a vacuum – or in a sea of nothing but data – but are embedded in forms of life, for which humans are equipped with sensory-motor skills.
Flora, fauna, artificia
Let us examine Wittgenstein’s notions of forms of life and language games more closely.
AI researchers often talk about the need to model causality. What do they mean by that? In The Golden Bough (1890), the Scottish anthropologist James Frazer writes:
“Thus we see that in sympathetic magic one event is supposed to be followed necessarily and invariably by another, without the intervention of any spiritual or personal agency. This is, in fact, the modern conception of physical causation; the conception, indeed, is misapplied, but it is there none the less. Here, then, we have another mode in which primitive man seeks to bend nature to his wishes.”
Never mind, for the purposes of this document, the condescending reference to “primitive man”. We are not ourselves free of magical thinking. There is a pointed reference, above, to magical thinking among some in the 21st century West with respect to the capabilities of machine learning. And Blockchain, anyone? The point – which Wittgenstein made in relation to the Golden Bough – is that causation appears in many forms in many language games. “The cat caused me to swerve.” “He caused him great distress.” “Our ceremony caused the rain to fall.” “The weight of the lorry caused the bridge to break.” “It was Germany’s invasion of Poland that caused Britain to enter the war.”
A symbol, “cause” – or, if you like, the notion of causality – has meanings which are related but different in each of these cases. Some can be broken down to the laws of Physics, and thus have an obvious objective basis. Others do so to a lesser extent, and some not at all, even though the interlocutors act as though causation took place. Wittgenstein would say that the grammar of the word differs across the examples. To make this clearer, consider that our criteria for what counts as a mistake differ across these cases. “I put it to you, your honour, that the defendant swerved because he was rowing with his partner and lost control of the wheel. The cat just happened to be there.” “What he did was regrettable but I upset myself: I could have ignored it.” “No, it was public pressure that caused Britain to enter the war.” It’s not the case that one of those forms of causation is superior to the others – the only one that ‘really’ counts.
Let’s posit artificial entities – robots and purely software constructs – that engage in language games as distinct forms of life. Let’s call them artificia – by contrast with flora and fauna (the latter includes us, of course). What are the preconditions for us to exchange meaningful symbols with artificia in language games? What are the preconditions for artificia to exchange meaningful symbols among themselves?
But this already happens, doesn’t it? I’ve seen humans “talking to robots” and even robots talking among themselves, on TV! No: while humans might have uttered and heard symbols as though they were in a language game with these robots, the machines processed only data and only seemed to be taking part in a language game. They did not know the relevant grammars of the symbols involved. The article The Scent of Data argues that machine learning programs are analogous to dogs with an ability to ‘smell’ data. We offer them a ‘handkerchief’ with a data ‘scent’ upon it, which we want the dog to go and find instances of. In fact we offer the dog many such handkerchiefs, and are pleased when it returns with samples of what we are after. However, the handkerchief has many other data ‘smells’ upon it, none of which are recognisable to us but which the dog is able to detect. Although it looks like a dog – an animal whose capabilities and foibles we are very familiar with – it isn’t. It is an alien, statistical, unintelligent creature whose capabilities and foibles we actually know nothing about beyond the trivia of statistical bounds.
A language game requires the following components:
- Interlocutors
- Symbols
- Utterances as sequences of symbols
- Actions, including but not restricted to utterances
- Objects
- Common ground: the objects and actions visible between the interlocutors
In a language game, each symbol has a grammar, which determines which combinations of symbols (utterances) it can occur in, and in what circumstances (or how it relates to them) with respect to the common ground. All the evidence is that we are not born with the grammar of the particular symbols we come to acquire, but with a faculty for acquiring grammars, particularised to the form of life (and its language games) that we are born into. That grammar has to be learned, and cannot be learned except with reference to the common ground between instances of those forms of life.
The problem with AI research as it stands is, firstly, that its fixation on “symbol manipulation” – the software processing of tokens abstracted from data – is quite wrong with respect to this picture. It ignores the co-dependent embedding of symbols, objects and actions that are relative to a form of life. Secondly, it has no answer to the question of how the grammars of those symbols are made possible by an innate faculty for grammar acquisition.
Suppose we could make a faculty for grammar acquisition, in software. Then what? Unless we also constrained the forms of life of these new artificia to tie them to ours, we wouldn’t be able to understand them. “If a lion could speak…” OK, how about two artificia as interlocutors. Surely they could understand one another?
Maybe. What is their form of life? What their common ground? Why should we think they would be relatively stable species, like those found in nature? They could evolve wildly and differently across their populations, and cease to be able to understand one another.
Yes, but
The brain is a kind of machine.
It processes sensory input in the form of electrical signals, turns them into meaningful perceptions, thoughts and symbols, which it processes further.
But it differs from the systems we’ve been talking about – including “artificial neural networks” – in important ways. What little we know includes:
- The brain contains hundreds of types of neuron, each with complex chemistry. By contrast, artificial neural networks contain only one type of node, with very simple numerical input-output relationships
- It contains a large amount of structure; there are 100+ cortical regions (that we can identify), for example
- It is very highly parallel in its operation, but with no apparent central coordination
- Its operation is analogue, with no digital processing that we are aware of.
Machine learning is extensional, i.e. based on instances, as opposed to intensional – again to use the language of Logic. It proceeds from data, not innate symbolic structures. Humans have an innate ability to process symbols, whereas much machine learning research attempts to dispense with that – or, as in domain-specific programs such as AlphaGo, incorporates some domain-specific algorithms. In a rather technical paper8, Raquel G. Alhama and Willem Zuidema start with a psychological experiment by Gary Marcus et al. on some innate symbolic capabilities of seven-month-old babies, and survey some of the machine learning approaches proposed in response. None succeeds in capturing the simple relationships that infants are capable of understanding.
It is a paradox how the brain produces symbolic faculties from its extremely elaborate neuronal make-up. At first sight, the argument made above, that artificial neural networks cannot be taken to possess an architecture for symbol processing, can also be raised against the brain: show me where evenness is, in the neuronal structure!
But it’s not the same. The particular system of numerical weights formed between the nodes in an artificial neural network when it is trained has no architecture that we could derive even in principle. In the brain, on the other hand, there must be functional units that we have not yet comprehended. Maybe the brain can more correctly be described as an organic nexus of expressions in the lambda calculus, for example, than a nexus of weights.
And this leads to a research question. Could we train (mutate, evolve) an artificial neural net over metamorphoses of its own structure, so as to achieve meaningful symbol processing? That is, could we mutate the functions of the nodes, and the structures by which they are organised, as inputs are passed through the network millions – trillions? – of times? The overarching principle or mechanism by which the mutations would be evaluated is hardly clear. And this would be to beg the question of the second item that we have identified as missing from current approaches to AI: the forms of life in which symbols’ meanings are embedded. It also would be to beg the question of whether we should expend any more fossil fuel on AI – which will be addressed in the conclusion.
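Purely to fix ideas, here is a toy sketch of what ‘mutating the functions of the nodes and the structures by which they are organised’ might look like; the genome representation, mutation operators and fitness score are invented for illustration, and nothing about this sketch suggests it would yield symbol processing:

import copy
import random

random.seed(0)
NODE_FUNCTIONS = ['relu', 'tanh', 'sigmoid']

def random_genome():
    # A genome describes a network as a list of (layer size, node function) pairs.
    return [(random.randint(2, 16), random.choice(NODE_FUNCTIONS))
            for _ in range(random.randint(1, 4))]

def mutate(genome):
    # Mutate both the structure (add a layer) and the node functions.
    child = copy.deepcopy(genome)
    if random.random() < 0.5 and len(child) < 6:
        child.append((random.randint(2, 16), random.choice(NODE_FUNCTIONS)))
    idx = random.randrange(len(child))
    size, _ = child[idx]
    child[idx] = (size, random.choice(NODE_FUNCTIONS))
    return child

def fitness(genome):
    # Placeholder: in reality one would train the network the genome describes and
    # measure its performance on some task; a dummy score stands in for that here.
    return -abs(sum(size for size, _ in genome) - 20)

population = [random_genome() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=fitness))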
Or, to sweep away all of the above, is intelligence intrinsically analogue, not digital?
Conclusion
This article has argued that:
- Approaches to AI to date are lacking in two respects, if human-like intelligence is to be achieved. Firstly, they are missing an account of how to emulate (or create) the brain’s innate faculties; secondly, they suppose that symbol processing can be said to take place in the computational environment of data alone, whereas in fact symbols can be understood only in terms of the forms of life and language games in which they inhere, to use Wittgenstein’s terms.
- The robotic fallacy is where we mistake an instance of seemingly intelligent behaviour for the existence of an underlying intelligent faculty. There are plenty of machine-based examples of the former, none of the latter – as evidenced by machines’ failure to play any of our language games, i.e. to exchange meaningful symbols with us in an adaptive way.
- The robotic fallacy is intimately linked to the tendency to believe, from an instance of seemingly intelligent behaviour, that we are somehow close to achieving intelligence as a general faculty. We are not. We are facing the AI chasm.
The robotic fallacy is offered as a tool for thinking about claims made regarding AI. The appeal to Wittgenstein’s philosophy is offered as a way of broadening how we see AI research progressing. In fact there were two main phases to Wittgenstein’s philosophical development. In his Tractatus Logico-Philosophicus, he put forward the thesis that “the world consists of facts, not of things”. He considered then that one needed only to elaborate a scheme of formal logic which, operating upon those facts, would describe the world completely. But he later abandoned that position entirely with the arguments of the Philosophical Investigations. The world, which is tractable to us only through symbols, is not isomorphic to data and cannot be inferred from it. Data will always be missing elements of symbolic form. Not only is human language not formal logic, nor can it be learned through data alone. The human brain contains, in some sense, our symbolic forms. We need to identify how it realises those forms, and how they are learnt as a function of language games as a whole. Otherwise, even if we could create intelligent artificia through computational evolution, we almost certainly wouldn’t be able to understand them.
Whatever we think of progress towards AI so far, this article has not mentioned the many problems that machine learning, inappropriately applied, raises for social issues including justice and welfare. Automated decision-making in areas such as employment, potential criminality and creditworthiness raises issues of social justice and potential discrimination. How, from a legal perspective, can one establish whether an uninterpretable system with unknown error modes is making biased or incorrect decisions, except by subjecting it to audit tests? What tests will suffice, and how often should one apply them to an evolving system? See The Scent of Data for a little more about these issues.
We end on a topic mentioned in the introduction: the climate breakdown. Machine learning requires very large amounts of computing power, which requires correspondingly large amounts of electricity. See, for example, this recent article in MIT Technology Review. Given that we need to curtail carbon emissions by 45% within five to ten years if we are to restrict global heating to 1.5 degrees Celsius – and avoid the catastrophe of a larger increase – is machine learning worth it? For whom? Because if the answer is mainly profits for Big Tech then we should find a way to cease current levels of investment in it, now. And concentrate, as Iris Murdoch said, on liberating ourselves from fantasy. And getting on with the job of saving the planet for our children.
Footnotes
1. The author invented this term after not finding a statement of the fallacy elsewhere. Is there one?
2. We’ll drop the quotes around AI hereafter but they are always implied.
3. Natural Adversarial Examples. Hendrycks, Dan; Zhao, Kevin; Basart, Steven; Steinhardt, Jacob; Song, Dawn. arXiv:1907.07174. https://arxiv.org/pdf/1907.07174.pdf
4. This includes characters such as letters of the alphabet, which are also represented in machines as numbers.
5. A. Newell and H. Simon, “Computer Science as Empirical Inquiry: Symbols and Search”, Communications of the Association for Computing Machinery, Vol. 19, No. 3, 1976, pp. 113-126.
6. Immanuel Kant would have disagreed. He wrote that ‘5 + 7 = 12’ is both synthetic and a priori: a substantive (non-tautological) relationship which is nonetheless known independently of any possible experience.
7. On a technical note, these are integers, not strings of digits (characters); the existence of 0, 2, 4, 6 or 8 at the end of a digit string could almost certainly be machine-learnt – without, thereby, any symbolic relationship of divisibility being involved.
8. Raquel G. Alhama and Willem Zuidema, “A review of computational models of basic rule learning: The neural-symbolic debate and beyond”.