The computational-representational theory of thought (CRTT)
The idea that thinking and mental processes in general can be treated as computational processes emerged gradually in the work of the computer scientists Allen Newell and Herbert Simon and the philosophers Hilary Putnam, Gilbert Harman, and especially Jerry Fodor. Fodor was the most explicit and influential advocate of the computational-representational theory of thought, or CRTT—the idea that thinking consists of the manipulation of electronic tokens of sentences in a “language of thought.” Whatever the ultimate merits or difficulties of this view, Fodor rightly perceived that something like CRTT, also called the “computer model of the mind,” is presupposed in an extremely wide range of research in contemporary cognitive psychology, linguistics, artificial intelligence, and philosophy of mind.
Of course, given the nascent state of many of these disciplines, CRTT is far from being a finished theory. It is rather a research program, like the proposal in early chemistry that the chemical elements consist of some kind of atoms. Just as early chemists did not have a clue about the complexities that would eventually emerge concerning the nature of these atoms, so cognitive scientists probably have no more than very general ideas about the character of the computations and representations that human thought actually involves. But, as in the case of atomic theory, CRTT seems to be steering research in promising directions.
The computational account of rationality
The chief inspiration for CRTT was the development of formal logic, the modern systematization of deductive reasoning (see above Deduction). This systematization made at least deductive validity purely a matter of the existence of a formal derivation (a step-by-step transition from premises to conclusion) defined solely in terms of the form—the syntax, or spelling—of the sentences involved. The work of Turing showed how such formal derivations could be executed mechanically by a Turing machine, a hypothetical computing device that operates by moving forward and backward on an indefinitely long tape and scanning cells on which it prints and erases symbols in some finite alphabet. Turing’s demonstrations of the power of these machines strongly supported his claim (now called the Church-Turing thesis) that anything that can be computed at all can be computed by a Turing machine. This idea, of course, led directly to the development of modern computers, as well as to the more general research programs of artificial intelligence and cognitive science. The hope of CRTT was that all reasoning—deductive, inductive, abductive, and practical—could be reduced to this kind of mechanical computation (though it was naturally assumed that the actual architecture of the brain is not the same as the architecture of a Turing machine).
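For concreteness, such a machine can be sketched in a few lines of code. The following Python sketch is merely illustrative: the function name, the transition-table format, and the sample program are assumptions made for the example, not anything given in Turing’s own formulation.

    # A minimal Turing machine simulator. The transition table maps
    # (state, scanned symbol) to (new state, symbol to write, head movement).
    def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
        cells = dict(enumerate(tape))        # a sparse, "indefinitely long" tape
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write              # print (or erase) a symbol
            head += move                     # move backward (-1) or forward (+1)
        return "".join(cells[i] for i in sorted(cells))

    # Example: a two-symbol machine that flips every bit and halts at the first blank.
    flip_bits = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_turing_machine(flip_bits, "0110"))   # prints "1001_"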
Note that CRTT is not the claim that any existing computer is, or has, a mind. Rather, it is the claim that having a mind consists of being a certain sort of computer—or, more plausibly, an elaborate assembly of many computers, each of which subserves a specific mental capacity (perception, memory, language processing, decision making, motor control, and so on). All of these computers are united in a complex “computational architecture” in which the output of one subsystem serves as the input to another. In his influential book The Modularity of Mind (1983), Fodor went so far as to postulate separate “modules” for perception and language processing that are “informationally encapsulated.” Although the outputs of perceptual modules serve as inputs to systems of belief fixation, the internal processes of each module are insulated from one another and from the person’s background beliefs, which explains, for example, why visual illusions persist even for people who realize that they are illusions. Proponents of CRTT believe that eventually it will be possible to characterize the nature of various mental phenomena, such as perception and belief, in terms of this sort of architecture. Supposing that there are subsystems for perception, belief formation, and decision making, belief in general might be defined as “the output of the belief-formation system that serves as the input to the decision-making system” (beliefs are, after all, just those states on which a person rationally acts, given his desires).
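The notion of informational encapsulation can be put schematically. In the following toy Python sketch (the module, the sentences, and the illusion chosen are assumptions made purely for illustration), the visual module takes only the stimulus as input, so the agent’s background beliefs cannot alter its output, even though that output is weighed against those beliefs once it reaches the belief-fixation system.

    # A toy illustration of informational encapsulation; nothing here is a
    # claim about how perception is actually implemented.
    def visual_module(stimulus):
        # Encapsulated: computes its output from the stimulus alone.
        if stimulus == "Mueller-Lyer figure":
            return "the upper line looks longer"
        return "it looks as if " + stimulus

    def fix_beliefs(percept, background):
        # Unencapsulated belief fixation: weighs the percept against
        # everything else the agent knows and may decline to endorse it.
        if (percept == "the upper line looks longer"
                and "the two lines are equal in length" in background):
            return background                # the illusion is not believed...
        return background | {percept}

    background = {"the two lines are equal in length"}
    print(visual_module("Mueller-Lyer figure"))   # ...but it still looks that way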
For example, a person’s memory that grass grows fast might be regarded as a state involving the existence of an electronic token of the sentence “Grass grows fast” in a certain location in the person’s brain. This sentence, together with a stored token of the sentence “My lawn is grass,” might be subject to computational processes of deductive, inductive, and abductive reasoning, yielding the sentence “My lawn will grow fast.” This sentence in turn might serve as input to the person’s decision-making system, where, one may suppose, there exists the desire that his lawn not be overgrown—i.e., a state involving a certain computational relation to an electronic token of the sentence “My lawn should not be overgrown.” Finally, this sentence and the previous one might be combined in standard patterns of decision theory to cause his body to move in such a way that he winds up dragging the lawn mower from the garage. (Of course, these same computational states may also cause any number of other nonrational effects—e.g., dreading, cursing, or experiencing a shot of adrenaline at the prospect of the labour involved.)
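The sequence just described can be dramatized in a toy program. The following Python sketch is a drastic simplification offered only to exhibit the flow from stored sentences through inference and desire to action; the rule formats and function names are invented for the example, and nothing in CRTT suggests that the brain’s computations are anything like this crude.

    # Stored sentence tokens standing in the "belief" and "desire" relations.
    beliefs = {"Grass grows fast", "My lawn is grass"}
    desires = {"My lawn should not be overgrown"}

    # A crude inference step: derive a new sentence from two stored ones.
    if {"Grass grows fast", "My lawn is grass"} <= beliefs:
        beliefs.add("My lawn will grow fast")

    # A crude decision step: choose the action that serves an otherwise
    # frustrated desire, given what is believed.
    def decide(beliefs, desires):
        if ("My lawn will grow fast" in beliefs
                and "My lawn should not be overgrown" in desires):
            return "drag the lawn mower from the garage"
        return "do nothing"

    print(decide(beliefs, desires))   # drag the lawn mower from the garage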
Although CRTT offers the promise of a theory of thought, it is important to appreciate just how far current research is from any actual fulfillment of that promise. In the 1960s the philosopher Hubert Dreyfus rightly ridiculed the naive optimism of early work in the area. Although it is not clear that he provided any argument in principle against its eventual success, it is worth noting that the position of contemporary theorists is not much better than that of Descartes, who observed that, although it is possible for machines to emulate this or that specific bit of intelligent behaviour, no machine could display the “universal reason” exhibited in the common sense of normal human beings. People seem to be able to integrate information from arbitrary domains to reach plausible overall conclusions, as when juries draw upon diverse information to render a verdict about whether the prosecution has established its case “beyond a reasonable doubt.” Indeed, despite his own commitment to CRTT as a necessary feature of any adequate theory of the mind, even Fodor doubts that CRTT is by itself sufficient for such a theory.