
Large language models (LLMs) such as ChatGPT, and the many that have come since, such as DeepSeek, have given AI research a huge impetus. They seem to divide people, however, into two camps: those who think LLMs bring us close to a degree of intelligence comparable to our own (citing, for example, their ability to solve maths problems), and those, like me, who think they have done no such thing – for all the shiny examples of “reasoning” that their advocates call upon.
I read the critiques, but I never feel that they truly get to the bottom of the fundamental limitations, not only of large language models but of digital systems more generally. In a 1980 article, the philosopher John Searle described a thought experiment concerning the limits of AI, known as the Chinese Room Argument. It has not stood the test of time particularly well; it is certainly not a “gotcha” that has stopped AI researchers in their tracks. I decided to write a thought experiment of my own to address the limitations of large language models in particular, and of AI more generally. I wasn’t thinking of the Chinese Room when I wrote it, but I too arrived at the metaphor of a room. It’s called The Baby Room. Someone has already compared it to another classic thought experiment, the Brain in a Vat. It’s not that either: The Baby Room addresses a different problem altogether.
In The Baby Room, I make an epistemological comparison between digital systems and nature. You can read the article here.