Philosophy of mind of John Searle
In large part, Searle was driven to the study of mind by his study of language. As indicated above, his analysis of speech acts always involved reference to mental concepts. Since mental states are essentially involved in issuing speech acts, Searle realized that his analysis of language could not be complete unless it included a clear understanding of those states.
Intentionality and consciousness
An important feature of the majority of mental states is that they have an “intentional” structure: they are intrinsically about, or directed toward, something. (Intentionality in this sense is distinct from the ordinary quality of being intended, as when one intends to do something.) Thus, believing is necessarily believing that something is the case; desiring is necessarily desiring something; intending is necessarily intending to do something. Not all mental states are intentional, however: pain, for example, is not, and neither are many states of anxiety, elation, and depression.
Speech acts are intentional in a derived sense, insofar as they express intrinsically intentional mental states: an assertion, for example, expresses a belief, and a promise expresses an intention, each directed at a propositional content. According to Searle, the derived intentionality of language accounts for the apparently mysterious capacity of words, phrases, and sentences to refer not only to things in the world but also to things that are purely imaginary or fictional.
Although not all mental states are intentional, all of them, in Searle’s view, are conscious, or at least capable in principle of being conscious. Indeed, Searle maintains that the notion of a mental state that could not, even in principle, be brought to consciousness is incoherent. He argues that, because consciousness is an intrinsically biological phenomenon, it is impossible in principle to build a computer (or any other nonbiological machine) that is conscious. This thesis runs counter to much contemporary cognitive science and specifically contradicts the central claim of “strong” artificial intelligence (AI): that consciousness, thought, or intelligence can be realized artificially in machines that exactly mimic the computational processes presumably underlying human mental states.
The Chinese room argument
In a now classic paper published in 1980, “Minds, Brains, and Programs,” Searle developed a provocative argument to show that artificial intelligence is indeed artificial. Imagine that a person who knows nothing of the Chinese language is sitting alone in a room. In that room are several boxes containing cards on which Chinese characters of varying complexity are printed, as well as a manual that matches strings of Chinese characters with strings that constitute appropriate responses. On one side of the room is a slot through which speakers of Chinese may insert questions or other messages in Chinese, and on the other is a slot through which the person in the room may issue replies. The person in the room, using the manual, acts as a kind of computer program, transforming one string of symbols introduced as “input” into another string of symbols issued as “output.” Searle claims that even if the person in the room is a good processor of messages, so that his responses always make perfect sense to Chinese speakers, he still does not understand the meanings of the characters he is manipulating. Thus, contrary to strong AI, real understanding cannot be a matter of mere symbol manipulation. Like the person in the room, computers simulate intelligence but do not exhibit it.
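The procedure Searle describes amounts to nothing more than table lookup over uninterpreted strings. The sketch below, in Python, is only an illustration of that point: the “manual” is a hypothetical dictionary whose entries are placeholder strings invented for this example, not anything drawn from Searle’s paper. The program pairs incoming symbols with outgoing symbols by shape alone; nothing in it represents what the characters mean.

```python
# A toy rendering of the Chinese room as pure symbol manipulation.
# The "manual" below is a hypothetical lookup table with placeholder
# entries; the program matches character strings by shape alone and
# copies out the paired response, without representing their meaning.

MANUAL = {
    "你好吗？": "我很好，谢谢。",          # placeholder question/answer pair
    "今天天气怎么样？": "今天天气很好。",   # another placeholder pair
}

def room(message: str) -> str:
    # Look the incoming string up and return whatever the manual pairs
    # with it; if the string is not listed, return a stock reply.
    return MANUAL.get(message, "请再说一遍。")

if __name__ == "__main__":
    print(room("你好吗？"))   # prints the canned reply to "How are you?"
```

However large such a table were made, the same observation would hold: success at producing appropriate output is compatible with a complete absence of understanding, which is precisely Searle’s charge against strong AI.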
The Chinese room argument has generated an enormous critical literature. According to the “systems reply,” the occupant of the room is analogous not to a computer but only to a computer’s central processing unit (CPU). That he does not understand Chinese is beside the point, for he is only one part of the system that responds appropriately to Chinese messages. What does understand Chinese, on this view, is the system as a whole, including the manual, any instructions for using it, and any intermediate means of symbol manipulation. Searle’s reply is that the other parts of the system can be dispensed with. Suppose the person in the room simply memorizes the characters, the manual, and the instructions, so that he can respond to Chinese messages entirely on his own. He still would not know what the Chinese characters mean.
Another objection, known as the “robot reply,” claims that a robot equipped with a computer, sensors, and the ability to move about and manipulate things in its environment would be capable of learning Chinese in much the same way that human children acquire their first languages. Searle rejects this criticism as well, claiming that the “sensory” input the computer receives would itself consist only of symbols, which a person or a machine could manipulate appropriately without any understanding of their meaning.
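Searle’s point against the robot reply can be pictured in the same way. In the sketch below, which continues the illustration above, the numeric “sensor readings” are invented placeholders standing in for camera input: to the program they are simply further uninterpreted symbols, handled by the same kind of lookup, with nothing connecting them to anything in the world.

```python
# Hypothetical "sensor" codes treated as just more symbols: the tuples
# below are invented placeholders for camera readings, and the program
# again matches them by shape alone, with no link to light or colour.

SENSOR_MANUAL = {
    (255, 0, 0): "红色",   # a reading the table pairs with the string for "red"
    (0, 255, 0): "绿色",   # a reading paired with the string for "green"
}

def describe(reading: tuple) -> str:
    # Match the tuple against the table and copy out the paired string;
    # unknown readings get a stock reply.
    return SENSOR_MANUAL.get(reading, "未知")

print(describe((255, 0, 0)))   # prints the symbol the table pairs with this code
```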
Mind and body
Searle’s view that mental states are inherently biological implies that the perennial mind-body problem—the problem of explaining how it is possible for minds and bodies to interact—is fundamentally misconceived. Minds and bodies are not radically different kinds of substance, as the 17th-century French philosopher René Descartes maintained, and minds certainly do not belong to any realm that is separate from the physical world. This is not to say that mental states are “reducible” to physical states, so that all talk of the mental can be eliminated in favour of talk of the physical. Rather, they are intrinsic features of certain very complex kinds of biological system. Because mental states are biological, they can cause and be caused by physical changes in human bodies. Moreover, reference to them is essential to any adequate explanation of human behaviour.