Radical behaviourism
While acknowledging that people—and many animals—do appear to act intelligently, eliminativists thought that they could account for this fact in nonmentalistic terms. For virtually the entire first half of the 20th century, they pursued a research program that culminated in B.F. Skinner’s (1904–90) doctrine of “radical behaviourism,” according to which apparently intelligent regularities in the behaviour of humans and many animals can be explained in purely physical terms—specifically, in terms of “conditioned” physical responses produced by patterns of physical stimulation and reinforcement (see also behaviourism; conditioning).
Radical behaviourism is now largely only of historical interest, partly because its main tenets were refuted by the behaviourists’ own wonderfully careful experiments. (Indeed, one of the most significant contributions of behaviourism was to raise the level of experimental rigour in psychology.) In the favoured experimental paradigm of a rigid maze, even lowly rats displayed a variety of navigational skills that defied explanation in terms of conditioning, requiring instead the postulation of entities such as “mental maps” and “curiosity drives.” The American psychologist Karl S. Lashley (1890–1958) pointed out that there were, in principle, limits on the kinds of serially ordered behaviour that could be learned on behaviourist assumptions. And in a famously devastating review of Skinner’s Verbal Behavior (1957), published in 1959, the American linguist Noam Chomsky demonstrated the hopelessness of Skinner’s efforts to provide a behaviouristic account of human language learning and use.
Since the demise of radical behaviourism, eliminativist proposals have continued to surface from time to time. One form of eliminativism, developed in the 1980s and known as “radical connectionism,” was a kind of behaviourism “taken inside”: instead of thinking of conditioning in terms of external stimuli and responses, one thinks of it in terms of the firing of assemblages of neurons. Each neuron is connected to a multitude of other neurons, and each connection carries a specific probability that the downstream neuron will fire when the first one does. Learning consists of the alteration of these firing probabilities over time in response to further sensory input.
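To make the picture concrete, learning of this sort can be sketched as the gradual adjustment of the probabilities with which one unit’s firing triggers another’s. The following is a minimal toy sketch in Python; the class name, the Hebbian-style update rule, and all parameter values are illustrative assumptions rather than a reconstruction of any particular connectionist model of the 1980s.

import random

# Toy network of units ("neurons") linked by connection strengths interpreted as
# firing probabilities; learning is the gradual adjustment of those probabilities.
class ToyNetwork:
    def __init__(self, n_units, seed=0):
        rng = random.Random(seed)
        # prob[i][j]: probability that unit j fires when unit i has just fired
        self.prob = [[rng.uniform(0.1, 0.9) for _ in range(n_units)]
                     for _ in range(n_units)]
        self.n = n_units

    def step(self, active):
        # Given the set of currently firing units, return the units that fire next.
        nxt = set()
        for j in range(self.n):
            for i in active:
                if random.random() < self.prob[i][j]:
                    nxt.add(j)
                    break
        return nxt

    def learn(self, active, nxt, rate=0.05):
        # Nudge firing probabilities toward the pattern that actually occurred.
        for i in active:
            for j in range(self.n):
                target = 1.0 if j in nxt else 0.0
                self.prob[i][j] += rate * (target - self.prob[i][j])

net = ToyNetwork(n_units=4)
stimulus = {0, 1}                      # "sensory input": units 0 and 1 fire
for _ in range(100):                   # repeated exposure reshapes the probabilities
    response = net.step(stimulus)
    net.learn(stimulus, response)

On such a picture nothing like a belief or a desire need be mentioned; all that changes over time is the table of firing probabilities.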
Few theorists of this sort really adopted any thoroughgoing eliminativism, however. Rather, they tended to adopt positions somewhat intermediate between reductionism and eliminativism. These views can be roughly characterized as “irreferentialist.”
Irreferentialism
Wittgenstein
It has been noted how, in relation to introspection, Wittgenstein resisted the tendency of philosophers to view people’s inner mental lives on the familiar model of material objects. This is of a piece with his more general criticism of philosophical theories, which he believed tended to impose an overly referential conception of meaning on the complexities of ordinary language. He proposed instead that the meaning of a word be thought of as its use, or its role in the various “language games” of which ordinary talk consists. Once this is done, one will see that there is no reason to suppose, for example, that talk of mental images must refer to peculiar objects in a mysterious mental realm. Rather, terms like thought, sensation, and understanding should be understood on the model of an expression like the average American family, which of course does not refer to any actual family but to a ratio. This general approach to mental terms might be called irreferentialism. It does not deny that many ordinary mental claims are true; it simply denies that the terms in them refer to any real objects, states, or processes. As Wittgenstein put the point in his Philosophische Untersuchungen (1953; Philosophical Investigations), “If I speak of a fiction, it is of a grammatical fiction.”
Of course, in the case of the average American family, it is quite easy to paraphrase away the appearance of reference to some actual family. But how are the apparent references to mental phenomena to be paraphrased away? What is the literal truth underlying the richly reified façon de parler of mental talk?
Although Wittgenstein resisted general accounts of the meanings of words, insisting that the task of the philosopher was simply to describe the ordinary ways in which words are used, he did think that “an inner process stands in need of an outward criterion”—by which he seemed to mean a behavioral criterion. However, for Wittgenstein a given type of mental state need not be manifested by any particular outward behaviour: one person may express his grief by wailing, another by somber silence. This approach has persisted into the present day among philosophers such as Daniel Dennett, who think that the application of mental terms cannot depart very far from the behavioral basis on which they are learned, even though the terms might not be definable on that basis.
Ryle and analytical behaviourism
Some irreferentialist philosophers thought that something more systematic and substantial could be said, and they advocated a program for actually defining the mental in behavioral terms. Partly influenced by Wittgenstein, the British philosopher Gilbert Ryle (1900–76) tried to “exorcize” what he called the “ghost in the machine” by showing that mental terms function in language as abbreviations of dispositions to overt bodily behaviour, rather in the way that the term solubility, as applied to salt, might be said to refer to the disposition of salt to dissolve when placed in water in normal circumstances. For example, the belief that God exists might be regarded as a disposition to answer “yes” to the question “Does God exist?”
A particularly influential proposal of this sort was the Turing test for intelligence, originally developed by Alan Turing (1912–54), the British logician who first conceived of the modern computer. According to Turing, a machine should count as intelligent if its teletyped answers to teletyped questions cannot be distinguished from the teletyped answers of a normal human being. Other, more sophisticated behavioral analyses were proposed by philosophers such as Ryle and by psychologists such as Clark L. Hull (1884–1952).
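The test itself is simply a protocol, and it can be sketched as such. In the Python outline below (an illustration only; both respondent functions are hypothetical stand-ins, and in the real test the “human” channel would be a person typing at a terminal), an interrogator exchanges typed questions with two unseen respondents and must then say which one is the machine.

import random

def machine_respondent(question):
    return "That is an interesting question."       # stand-in for a conversational program

def human_respondent(question):
    return "Let me think about that for a moment."  # stand-in for a person's typed replies

def run_test(questions, interrogator_guess):
    # Randomly assign the respondents to channels "A" and "B" so that nothing
    # but the typed answers can reveal which channel is the machine.
    channels = {"A": machine_respondent, "B": human_respondent}
    if random.random() < 0.5:
        channels = {"A": human_respondent, "B": machine_respondent}
    transcript = [(q, channels["A"](q), channels["B"](q)) for q in questions]
    guess = interrogator_guess(transcript)           # interrogator names "A" or "B"
    machine_channel = "A" if channels["A"] is machine_respondent else "B"
    return guess == machine_channel                  # True if the machine was unmasked

# A machine passes to the extent that interrogators can do no better than chance,
# as this randomly guessing interrogator illustrates.
print(run_test(["Do you enjoy poetry?"], lambda transcript: random.choice(["A", "B"])))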
This approach to mental vocabulary, which came to be called “analytical behaviourism,” did not meet with great success. It is not hard to think of cases of creatures who might act exactly as though they were in pain, for example, but who actually were not: consider expert actors or brainless human bodies wired to be remotely controlled. Indeed, one thing such examples show is that mental states are simply not so closely tied to behaviour; typically, they issue in behaviour only in combination with other mental states. Thus, beliefs issue in behaviour only in conjunction with desires and attention, which in turn issue in behaviour only in conjunction with beliefs. It is precisely because an actor has different motivations from a normal person that he can behave as though he is in pain without actually being so. And it is because a person believes that he should be stoical that he can be in excruciating pain but not behave as though he is.
It is important to note that the Turing test is a particularly poor behaviourist test; the restriction to teletyped interactions means that one must ignore how the machine would respond in other sorts of ways to other sorts of stimuli. But intelligence arguably requires not only the ability to converse but the ability to integrate the content of language into the rest of one’s psychology—for example, to recognize objects and to engage in practical reasoning, modifying one’s behaviour in the light of changes in one’s beliefs and preferences. Indeed, it is important to distinguish the Turing test from the much more serious and deeper ideas that Turing proposed about the construction of a computer; these ideas involved an account not merely of a system’s behaviour but of how that behaviour might be produced internally. Ironically enough, Turing’s proposals about machines were instances not of behaviourism but of precisely the kind of view of internal processes that behaviourists were eager to avoid.
Functionalism
The fact that mental terms seem to be applied in ensembles led a number of philosophers to think about technical ways of defining an entire set of terms together. Perhaps, they thought, words like belief, desire, thought, and intention could be defined in the way a physicist might simultaneously define mass, force, and energy in terms of each other and in relation to other terms. The American philosopher David Lewis (1941–2001) invoked a technique, called “ramsification” (named for the British philosopher Frank Ramsey [1903–30]), whereby a set of new terms could be defined by reference to their relations to each other and to other old terms already understood. A model for such definitions was provided by an idea that the American philosopher Hilary Putnam had noted with regard to the standard states of a computer. Each state in the set is defined in terms of what the machine does when it receives a given input; specifically, the machine produces a certain output and passes into another of the states in the same set. The states can then be defined together in terms of the overall pattern of transitions produced in this way.
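The point can be illustrated with a toy state machine. In the Python sketch below (a minimal illustration; the two-state “vending machine” transition table is an assumption made for the example, not Putnam’s own case), the states S1 and S2 have no intrinsic description at all: each is individuated entirely by which output it yields and which state it passes into for each input, which is precisely what a joint functional definition requires.

# Transition table: (state, input) -> (output, next state)
TABLE = {
    ("S1", "nickel"): ("wait", "S2"),
    ("S1", "dime"):   ("dispense", "S1"),
    ("S2", "nickel"): ("dispense", "S1"),
    ("S2", "dime"):   ("dispense and return nickel", "S1"),
}

def run(inputs, state="S1"):
    # S1 and S2 are defined only by the roles the table assigns them.
    outputs = []
    for symbol in inputs:
        output, state = TABLE[(state, symbol)]
        outputs.append(output)
    return outputs

print(run(["nickel", "nickel", "dime"]))   # ['wait', 'dispense', 'dispense']

Roughly, to ramsify a psychological theory is to do the analogous thing for mental terms: replace them with variables and say that there exist states related to inputs, to outputs, and to one another in just the pattern the theory describes.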
States of computers are not the only things that can be so defined; almost any reasonably complex entity that has parts that function in specific ways will do as well. For example, a carburetor in an internal-combustion engine can be defined in terms of how it regulates the flow of gasoline and oxygen into the cylinders where the mixture is ignited, causing the pistons to move. Such analogies between mental states and the functional parts of complex machines provided the inspiration for functionalist approaches to understanding mental states, which dominated discussions in the philosophy of mind from the 1960s.
Functionalism seemed an attractive approach for a number of reasons: (1) as just noted, it allows for the definition of many mental terms at once, avoiding the problems created by the piecemeal definitions of analytical behaviourism; (2) it frees reductionism from a chauvinistic commitment to the particular ways in which human minds happen to be embodied, allowing mental states to be “multiply realized” in any number of substances and bodies, including machines, extraterrestrials, and perhaps even angels and ghosts (in this way, functionalism is also compatible with the denial of type identities and the endorsement of token identities); and, most important, (3) it allows philosophers of mind to recognize a complex psychological level of explanation, one that may not be straightforwardly reducible to a physical level, without denying that every psychological embodiment is in fact physical. Functionalism thus vindicated the reasonable insistence that psychology not be replaced by physics while avoiding the postulation of any mysterious nonphysical entities as psychology’s subject matter.
However, as will emerge in the discussion that follows, these very attractions brought with them a number of risks. One worry was whether the apparent detachment of functional mental properties from physical properties would render mental properties explanatorily inert. In a number of influential articles, the American philosopher Jaegwon Kim argued for an “exclusion principle” according to which, if a functional property is in fact different from the physical properties that are causally sufficient to explain everything that happens, then it is explanatorily superfluous, just as angels pushing the planets along their orbits would be superfluous, given that gravitation already accounts for those orbits. Whether something like the exclusion principle is correct would seem to depend upon exactly what relation functional properties bear to their various physical realizations. Although this relation is obviously a good deal more intimate than that between the hypothetical angels and the planets, it is unclear how intimate the relation needs to be in order to ensure that functional properties play some useful explanatory role.
It is important to appreciate the many different ways in which a functionalist approach can be deployed, depending on the specific kind of functionalist account of the mind one thinks is constitutive of the meaning of mental terms. Some philosophers—e.g., Lewis and the Australian philosopher Frank Jackson—think that the account is provided simply by common “folk” beliefs, or beliefs that almost everyone holds and takes everyone else to hold (e.g., in the case of the mental, the beliefs that people scratch itches, that they assert what they think, and that they avoid pain). Others—e.g., Sidney Shoemaker—think that one should engage in philosophical analysis of possible cases (“analytical functionalism”); and still others—e.g., William Lycan and Georges Rey—look to empirical psychological theory (“psychofunctionalism”). Although most philosophers construe such functional talk realistically, as referring to actual states of the brain, some (e.g., Dennett) interpret it irreferentially—indeed, as merely an instrument for predicting people’s behaviour or as an “intentional stance” that one may (or equally may not) take toward humans, animals, or computers and about whose truth there is no genuine “fact of the matter.” In each case, definitions vary according to whether they are derived from an account of the whole system at once (“holistic” functionalism) or from an account of specific subparts of the system (“molecular” functionalism) and according to whether the terms to be defined must refer only to observable behaviour or may refer also to specific features of human bodies and their environments (“short-armed” versus “long-armed” functionalism). Thus, there may be functional definitions of states of specific subsystems of the mind, such as those involved in sensory reception (hearing, vision, touch) or in capacities such as language, memory, problem solving, mathematics, and interpersonal empathy. The most influential form of functionalism is based on the analogy with computers, which, of course, were independently developed to solve problems that require intelligence. See also functionalism.