Another frequent objection to theories like CRTT, originally voiced by Wittgenstein and Ryle, is that they merely reproduce the problems they are supposed to solve, since they invariably posit processes—such as following rules or comparing one thing with another—that seem to require the very kind of intelligence that the theory is supposed to explain. Put another way, computational theories seem committed to the existence in the mind of “homunculi,” or “little men,” to carry out the processes they postulate.
This objection might be a problem for a theory such as Freud’s, which posits entities such as the superego and processes such as the unconscious repression of desires. It is not a problem for CRTT, however, because the central idea behind the theory’s development is Turing’s characterization of computation in terms of the purely mechanical steps of a Turing machine. These steps, such as moving left or right one cell at a time, are so simple and “stupid” that they can obviously be executed without the need for any intelligence at all.
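To make vivid how little intelligence any single step requires, here is a minimal sketch of a Turing machine simulator in Python; the particular machine, which simply appends a 1 to a block of 1s, and its transition table are invented for illustration and are not drawn from Turing’s own examples.

```python
# Minimal Turing machine simulator (an illustrative sketch; the machine
# below, which appends a 1 to a block of 1s, is an invented toy example).
# Each step is purely mechanical: read the current cell, write a symbol,
# move one cell left or right, and change state. No step requires
# anything one would call intelligence.

from collections import defaultdict

def run(tape, table, state="start", halt="halt"):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" marks a blank cell
    head = 0
    while state != halt:
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Transition table: (state, symbol read) -> (symbol written, move, next state)
table = {
    ("start", "1"): ("1", "R", "start"),  # scan rightward past the 1s
    ("start", "_"): ("1", "R", "halt"),   # hit a blank: write one more 1, halt
}

print(run("111", table))  # prints "1111"
```

The point of the sketch is only that each transition is a brute table lookup; whatever computational power the machine has comes from chaining such lookups together, not from cleverness in any one of them.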
Artifactuality and artificial intelligence (AI)
It is frequently said that people cannot be computers because whereas computers are “programmed” to do only what the programmer tells them to do, people can do whatever they like. However, this is decreasingly true of increasingly clever machines, which often come up with specific solutions to problems that might never have occurred to their programmers (there is no reason why good chess programmers themselves need to be good chess players). Moreover, there is every reason to think that, at some level, human beings are indeed “programmed,” in the sense of being structured in specific ways by their physical constitutions. The American linguist Noam Chomsky, for example, has stressed the very specific ways in which the brains of human beings are innately structured to acquire, upon exposure to relevant data, only a small subset of all the logically possible languages with which the data are compatible.
Searle’s “Chinese room”
In a widely reprinted paper, “Minds, Brains, and Programs” (1980), Searle claimed that mental processes cannot possibly consist of the execution of computer programs of any sort, since it is always possible for a person to follow the instructions of the program without undergoing the target mental process. He offered the thought experiment, since known as the Chinese room argument, of a man who is isolated in a room in which he produces Chinese sentences as “output” in response to Chinese sentences he receives as “input” by following the rules of a program for engaging in a Chinese conversation—e.g., by using a simple conversation manual. Such a person could arguably pass a Chinese-language Turing test for intelligence without having the remotest understanding of the Chinese sentences he is manipulating. Searle concluded that understanding Chinese cannot be a matter of performing computations on Chinese sentences, and mental processes in general cannot be reduced to computation.
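As a deliberately trivial sketch of the purely syntactic rule following Searle has in mind, consider the following Python fragment; the sentences and replies in the rule book are invented placeholders, and the occupant of the room does nothing more than this kind of shape matching.

```python
# A toy "conversation manual": pure input-output rules of the sort the
# room's occupant follows. The sentences are invented placeholders; the
# point is that producing a reply requires only matching the shapes of
# symbols, never understanding them.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "Very nice."
}

def room(sentence: str) -> str:
    # Look up the incoming squiggles; hand back the listed squoggles.
    return RULE_BOOK.get(sentence, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # prints 我很好，谢谢。
```

Nothing in such a table even begins to distinguish a sentence’s grammar from its meaning, which is precisely the thinness that critics of the argument seize upon.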
Critics of Searle have claimed that his thought experiment suffers from a number of problems that make it a poor argument against CRTT. The chief difficulty, according to them, is that CRTT is not committed to the behaviourist Turing test for intelligence, so it need not ascribe intelligence to a device that merely presents output in response to input in the way that Searle describes. In particular, as a functionalist theory, CRTT can reasonably require that the device involve far more internal processing than a simple Chinese conversation manual would require. There would also have to be programs for Chinese grammar and for the systematic translation of Chinese words and sentences into the particular codes (or languages of thought) used in all of the operations of the machine that are essential to understanding Chinese—e.g., those involved in perception, memory, reasoning, and decision making. In order for Searle’s example to be a serious problem for CRTT, according to the theory’s proponents, the man in the room would have to be following programs for the full array of the processes that CRTT proposes to model. Moreover, the representations in the various subsystems would arguably have to stand in the kinds of relation to external phenomena proposed by the externalist theories of intentionality mentioned above. (Searle is right to worry about where meaning comes from but wrong to ignore the various proposals in the field.)
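To suggest how much more structure that demand involves, here is a schematic skeleton, in the same spirit, of the sort of multi-subsystem architecture CRTT envisions; every component is an invented placeholder, not a real model of any of these capacities.

```python
# A schematic (and entirely invented) skeleton of the richer architecture
# critics say CRTT actually requires: input is parsed and translated into
# an internal code (a "language of thought"), and that code interacts with
# memory and reasoning subsystems before any reply is produced.

class ChineseUnderstander:
    def __init__(self):
        self.memory = []  # record of the conversation so far

    def parse(self, sentence: str):
        # Placeholder: map Chinese surface syntax to an internal representation.
        return ("MENTALESE", sentence)

    def reason(self, representation):
        # Placeholder: inference over the internal code plus stored memory.
        self.memory.append(representation)
        return ("REPLY-PLAN", representation)

    def generate(self, plan) -> str:
        # Placeholder: translate the internal plan back into Chinese.
        return f"<reply to {plan[1][1]}>"

    def converse(self, sentence: str) -> str:
        return self.generate(self.reason(self.parse(sentence)))

agent = ChineseUnderstander()
print(agent.converse("你好吗？"))  # a reply produced via parsing, memory, and inference
```

On this picture a reply is the end product of parsing, inference, and memory, not a single table lookup, which is why proponents deny that the conversation manual alone models what CRTT claims.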
Defenders of CRTT argue that, once one begins to imagine all of this complexity, it is clear that CRTT is capable of distinguishing between the mental abilities of the system as a whole and the abilities of the man in the room. The man is functioning merely as the system’s “central processing unit”—the particular subsystem that determines what specific actions to perform when. Such a small part of the entire system does not need to have the language-understanding properties of the whole system, any more than Queen Victoria needs to have all of the properties of her realm.
Searle’s thought experiment is sometimes confused with a quite different problem that was raised earlier by the American philosopher Ned Block. This objection, which also (but only coincidentally) involves reference to China, applies not just to CRTT but to almost any functionalist theory of the mind.