In his famous paper Minds, Brains, and Programs (1980), John Searle formulated his answers to the key questions that arise when we ask whether machines can think.
Weak AI vs. Strong AI
At the outset, Searle draws an important distinction between weak and strong AI. On the weak AI view, the computer is simply a powerful tool: useful for practical applications and, in the study of the mind, for formulating and testing hypotheses, but nothing more than a purely computational instrument. Strong AI, by contrast, claims that an appropriately programmed computer is not just a tool for studying the mind but is itself a mind, in the sense that it can literally be said to understand and to have cognitive states. In other words, strong AI holds that a computer running the right programs can be credited with beliefs, intentions, and thought in the human sense.
Let’s now turn to Searle’s central thought experiment, the Chinese Room, as he describes it:
“Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that ‘formal’ means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch.” (1, p. 417)
In this scenario, the person inside the room is producing outputs in Chinese that are indistinguishable from those of a native speaker. They follow instructions and apply formal rules but understand nothing of the language. The person, and by analogy the computer, does not comprehend the content—it is simply manipulating symbols.
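To see how purely formal this procedure is, consider a minimal sketch (in Python, with invented symbols and rules) of such a rule book: the operator looks up the incoming shapes and returns whatever shapes the rules prescribe, while nothing in the program represents what any symbol means.

```python
# A toy "Chinese Room": responses are produced by matching input symbols
# against a rule book keyed purely on their shapes. Nothing in the program
# represents what any symbol means. (Illustrative only; the rules and
# symbols here are invented for the sketch.)

RULE_BOOK = {
    "你好吗": "我很好",            # rule: on receiving these shapes, return these shapes
    "你叫什么名字": "我叫小明",
    "今天天气怎么样": "今天天气很好",
}

def room_operator(incoming_symbols: str) -> str:
    """Return whatever symbols the rule book prescribes for this input shape."""
    return RULE_BOOK.get(incoming_symbols, "对不起")  # default shapes for unknown input

print(room_operator("你好吗"))  # prints 我很好 -- yet nothing here "understands" Chinese
```

The outputs may look like answers from a fluent speaker, but the function is doing nothing beyond shape matching.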
Searle writes:
“I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.” (1, p. 419)
This thought experiment illustrates that even when we have correct inputs and outputs—i.e., seemingly meaningful Chinese texts—there is no real understanding taking place. The person in the room, who is following the program, clearly has no comprehension of Chinese. Yet they might pass a Turing Test and fool native speakers into thinking they are fluent.
This leads us to an important distinction between mental and non-mental states; much of the debate dissolves once that line is clearly drawn. As Chalmers noted more than four decades after Searle's paper, it would be valuable to establish benchmarks for different aspects of consciousness, such as self-awareness, affective experience, attention, or unconscious processing. Such benchmarks, however, remain elusive.
Syntax Without Semantics
The distinction matters in practice: systems that genuinely have conscious, intentional states would arguably have to be granted moral status, whereas we do not, for example, attribute intentionality to a water bottle or a coffee machine.
Searle extends the thought experiment by imagining a robot:
“Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking—anything you like. The robot would, for example, have a television camera attached to it that enabled it to ‘see’; it would have arms and legs that enabled it to ‘act’; and all of this would be controlled by its computer ‘brain.’” (1, p. 420)
This robot may have perceptual capacities and may interact with the external world through movement, sight, and touch. However, it still does not know or understand anything. It merely follows instructions, manipulating formal symbols without assigning any meaning to them. The confusion arises when we observe machines or robots that behave and communicate in a human-like way—but do they really possess mental states?
Searle argues: no. And strictly speaking, these systems do not even perform true symbol manipulation, because the symbols used by the machine do not stand for anything—they are meaningless. As Searle succinctly puts it:
“They have only a syntax but no semantics.” (1, p. 422)
Searle’s answer to the question of whether machines can produce mental phenomena is clearly negative. However, with the development of large language models (LLMs), this answer becomes more complex.
While LLMs are, in one sense, still "stochastic parrots", to borrow the term coined by Emily Bender and her colleagues, they operate at a scale and depth that complicates Searle's analysis. These models are built from simple statistical operations, yet through their vast parameter space and deep, layered architecture something unexpected occurs: new qualities emerge that cannot be found in any single component.
As Mitchell writes:
“Complex Systems science—especially the study of dynamics and chaos—have shown that complex behavior and unpredictability can arise in a system even if the underlying rules are extremely simple and completely deterministic. […] The study of complexity has shown that when a system’s components have the right kind of interactions, its global behavior—the system’s capacity to process information, to make decisions, to evolve and learn—can be powerfully different from that of its individual components.” (4)
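A standard illustration of this point, and one Mitchell herself discusses, is the logistic map: a one-line, fully deterministic update rule whose trajectories nonetheless become effectively unpredictable. The sketch below (an illustrative aside, not part of Mitchell's text) shows two trajectories that start almost identically and soon look unrelated.

```python
# The logistic map: x_{t+1} = r * x_t * (1 - x_t). A single deterministic
# rule, yet for r = 4 two trajectories that begin almost identically
# diverge rapidly -- sensitive dependence on initial conditions.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 40) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)  # initial values differ by one part in two million
for t in (0, 10, 20, 30, 40):
    print(t, round(a[t], 6), round(b[t], 6))  # the two runs soon bear no resemblance
```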
This opens a philosophical space for rethinking what it means for a machine to “understand,” even in the absence of classical intentionality. While Searle may have successfully dismantled the idea of computers as minds in his time, LLMs suggest we now face a different kind of question: can understanding emerge from a system that is, at its core, a statistical structure—but whose complexity gives rise to behaviors that appear thoughtful, responsive, and even creative?
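What "a statistical structure at its core" means can be made concrete in the simplest possible case: a word-bigram generator that merely counts which word follows which in a tiny corpus and samples continuations from those counts. Real LLMs differ enormously in scale and architecture, but the sketch below (an illustrative toy, not a description of any actual model) shows how locally fluent-looking output can arise from nothing more than next-token statistics.

```python
import random
from collections import defaultdict

# A word-bigram "parrot": it knows only which word tends to follow which,
# yet it can emit text that looks locally fluent. (Toy corpus; illustrative only.)
corpus = "the room takes symbols in and the room gives symbols out and the man understands nothing".split()

follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def babble(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample the next word from observed counts
    return " ".join(words)

print(babble("the"))
```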
Bibliography
- Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–457.
- Chalmers, D. J. (2023). Could a Large Language Model Be Conscious? Boston Review. https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/
- Chalmers, D. J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. W. W. Norton & Company.
- Mitchell, M. (2009). Complexity: A Guided Tour. Oxford University Press.
- Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux.
- Boden, M. A. (2006). Mind as Machine: A History of Cognitive Science (2 vols.). Oxford University Press.
Image: The Chinese Room Thought Experiment – Illustrated
Credit: Image generated by ChatGPT (OpenAI)