The word consciousness is used in a variety of ways that need to be distinguished. Sometimes the word means merely any human mental activity at all (as when one talks about the “history of consciousness”), and sometimes it means merely being awake (as in “As the anesthetic wore off, the animal regained consciousness”). The most philosophically troublesome usage concerns phenomena with which people seem to be “directly acquainted”—as the British philosopher Bertrand Russell (1872–1970) described them—each in his own case. Each person seems to have direct, immediate knowledge of his own conscious sensations and of the contents of his propositional attitudes—what he consciously thinks, believes, desires, hopes, fears, and so on. In common philosophical parlance, a person is said to have “incorrigible” (or uncorrectable) access to his own mental states. For many people, the existence of these conscious states in their own case is more obvious and undeniable than anything else in the world. Indeed, the French mathematician and philosopher René Descartes (1596–1650) regarded his immediate conscious thoughts as the basis of all of the rest of his knowledge. Views that emphasize this first-person immediacy of conscious states have consequently come to be called “Cartesian.”
It turns out to be surprisingly difficult to say much about consciousness that is not highly controversial. Initial efforts in the 19th century to approach psychology with the rigour of other experimental sciences led researchers to engage in careful introspection of their own mental states. Although there emerged some interesting results regarding the relation of certain sensory states to external stimulation—for example, laws proposed by Gustav Theodor Fechner (1801–87) relating the perceived loudness of a sound to its physical amplitude—much of the research dissolved into vagaries and complexities of experience that varied greatly over different individuals and about which interesting generalizations were not forthcoming.
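Fechner’s best-known proposal, now usually called Fechner’s law (or the Weber–Fechner law), gives one of the few robust generalizations early psychophysics produced. In its standard formulation, perceived intensity grows as the logarithm of physical stimulus intensity above the threshold of sensation:

```latex
S = k \log \frac{I}{I_0}
```

where \(S\) is the perceived (apparent) intensity, \(I\) is the physical intensity of the stimulus, \(I_0\) is the threshold intensity below which nothing is sensed, and \(k\) is a constant that depends on the sense modality. Thus doubling the physical amplitude of a sound does not double its apparent loudness; it adds only a fixed increment.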
It is worth pausing over some of the difficulties of introspection and the consequent pitfalls of thinking of conscious processes as the central subject matter of psychology. While it can seem natural to think that all mental phenomena are accessible to consciousness, close attention to the full range of cases suggests otherwise. The Austrian-born British philosopher Ludwig Wittgenstein (1889–1951) was particularly adept at calling attention to the rich and subtle variety of ordinary mental states and to how little they lend themselves to the model of an introspectively observed object. In a typical passage from his later writings (Zettel, §§484–504), he asked:
Is it hair-splitting to say: —joy, enjoyment, delight, are not sensations? —Let us at least ask ourselves: How much analogy is there between delight and what we call “sensation”? “I feel great joy” —Where? —that sounds like nonsense. And yet one does say “I feel a joyful agitation in my breast.” —But why is joy not localized? Is it because it is distributed over the whole body? … Love is not a feeling. Love is put to the test, pain not. One does not say: “That was not true pain, or it would not have gone off so quickly.”
In a related vein, the American linguist Ray Jackendoff proposed that one is never directly conscious of abstract ideas, such as goodness and justice—they are not items in the stream of consciousness. At best, one is aware of the perceptual qualities one might associate with such ideas—for example, an image of someone acting in a kindly way. While it can seem that there is something right in such suggestions, it also seems to be immensely difficult to determine exactly what the truth might be on the basis of introspection alone.
In the late 20th century, the validity and reliability of introspection were subject to much experimental study. In an influential review of the literature on “self-attribution,” the American psychologists Richard Nisbett and Timothy Wilson discussed a wide range of experiments that showed that people are often demonstrably mistaken about their own psychological processes. For example, in problem-solving tasks, people are often sensitive to crucial clues of which they are quite unaware, and they often provide patently confabulated accounts of the problem-solving methods they actually employ. Nisbett and Wilson speculated that in many cases introspection may not involve privileged access to one’s own mental states but rather the imposition upon oneself of popular theories about what mental states a person in one’s situation is likely to have. This possibility should be considered seriously when evaluating many of the traditional claims about the alleged incorrigibility of people’s access to their own minds.
In any event, it is important to note that not all mental phenomena are conscious. Indeed, the existence of unconscious mental states has been recognized in the West since the time of the ancient Greeks. Obvious examples include the beliefs, long-range plans, and desires that a person is not consciously thinking about at a particular time, as well as things that have “slipped one’s mind,” though they must in some way still be there, since one can be reminded of them. Plato thought that the kinds of a priori reasoning typically used in mathematics and geometry involve the “recollection” (anamnesis) of temporarily forgotten thoughts from a previous life. Modern followers of Sigmund Freud (1856–1939) have argued that a great many ordinary parapraxes (or “Freudian slips”) are the result of deeply repressed unconscious thoughts and desires. And, as noted above, many experiments reveal myriad ways in which people are unaware of, and sometimes demonstrably mistaken about, the character of their mental processes, which are therefore unconscious at least at the time they occur.
Partly out of frustration with introspectionism, psychologists during the first half of the 20th century tended to ignore consciousness entirely and instead study only “objective behaviour” (see below Radical behaviourism). In the last decades of the century, psychologists began to turn their attention once again to consciousness and introspection, but their methods differed radically from those of early introspectionists, in ways that can be understood against the background of other issues.
One might wonder what makes an unconscious mental process “mental” at all. If a person does not have immediate knowledge of it, why is it not merely part of the purely physical machinery of the brain? Why bring in mentality at all? Accessibility to consciousness, however, is not the only criterion for determining whether a given state or process is mental. One alternative criterion is that mental states and processes enter into the rationality of the systems of which they are a part.
Rationality
There are standardly thought to be four sorts of rationality, each presenting different theoretical problems. Deductive, inductive, and abductive reasoning have to do with increasing the likelihood of truth, and practical reason has to do with trying to base one’s actions (or “practice”) in part on truth and in part on what one wants or values.
Deduction
Deduction is the sort of rationality that is the central concern of traditional logic. It involves deductively valid arguments, or arguments in which, if the premises are true, then the conclusion must also be true. In a deductively valid argument, it is impossible for the premises to be true and the conclusion false. Some standard examples are:
(1) All human beings are mortal; all women are human beings; therefore, all women are mortal.
(2) Some angels are archangels; all archangels are divine; therefore, some angels are divine.
These simple arguments (deductive arguments can be infinitely more complex) illustrate two important features of deductive reasoning: it need not be about real things, and it can be applied to any subject matter whatsoever—i.e., it is universal.
One of the significant achievements of philosophy in the 20th century was the development of rigorous ways of characterizing such arguments in terms of the logical form of the sentences they comprise. Techniques of formal logic (also called symbolic logic) were developed for a very large class of arguments involving words such as and, or, not, some, all, and, in modal logic, possibly (or possible) and necessarily (or necessary). (See below The computational account of rationality.)
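One thing a formal treatment buys is that validity can be checked mechanically. For the propositional fragment of logic (arguments built from and, or, and not, without the quantifiers some and all), an argument is valid just in case no assignment of truth values makes every premise true and the conclusion false. The sketch below, in Python purely for illustration, checks two classic argument forms this way, encoding “if p then q” as “not p, or q”:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is deductively valid iff no assignment of truth
    values makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# Modus ponens: from "if p then q" and "p", infer "q" -- valid.
mp = is_valid([lambda e: (not e["p"]) or e["q"], lambda e: e["p"]],
              lambda e: e["q"], ["p", "q"])
print(mp)   # True

# Affirming the consequent: from "if p then q" and "q", infer "p" -- invalid.
ac = is_valid([lambda e: (not e["p"]) or e["q"], lambda e: e["q"]],
              lambda e: e["p"], ["p", "q"])
print(ac)   # False
```

The brute-force survey of assignments works only for this propositional fragment; once some and all are admitted, as in the syllogisms above, no such exhaustive check over a finite table is available, and the full apparatus of predicate logic is needed.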
Although deduction marks a kind of ideal of reason, in which the truth of the conclusion is absolutely guaranteed by the truth of the premises, people’s lives depend upon making do with much less. There are two forms of such nondeductive reasoning: induction and abduction.
Induction
Induction consists essentially of statistical reasoning, in which the truth of the premises makes the conclusion likely to be true, even though it could still be false. For example, from the fact that every death cap mushroom (Amanita phalloides) anybody has ever sampled has been poisonous, it would be reasonable to conclude that all death cap mushrooms are poisonous, even though it is logically possible that there is one such mushroom that is not poisonous. Such inferences are indispensable, given that it is seldom possible to sample all the members of a given class of things. In a good statistical inference, one takes a sufficiently large and representative sample. The field of formal statistics explores myriad refinements of arguments of this sort.
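The logic of such an inference can be made concrete with a small statistical sketch. The population, sample size, and proportion below are invented purely for illustration; the point is that a sufficiently large random sample supports, but never strictly guarantees, a generalization about the whole class:

```python
import math
import random

random.seed(1)  # fixed seed so the illustration is repeatable

# Hypothetical population: 10,000 mushrooms, 97% of them poisonous
# (figures invented for illustration).
population = [True] * 9700 + [False] * 300
random.shuffle(population)

sample = random.sample(population, 500)   # a reasonably large random sample
p_hat = sum(sample) / len(sample)         # observed proportion poisonous

# Normal-approximation 95% confidence interval for the true proportion.
se = math.sqrt(p_hat * (1 - p_hat) / len(sample))
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimated proportion poisonous: {p_hat:.3f} "
      f"(95% CI {low:.3f}-{high:.3f})")
```

The conclusion remains fallible: even a wide sample is logically compatible with an unsampled exception, which is exactly what distinguishes induction from deduction.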
Abduction
Another sort of nondeductive rationality that is indispensable to at least much of the higher intelligence displayed by human beings is reasoning to a conclusion that essentially contains terms not included in the premises. This typically occurs when someone gets a good idea about how to explain some data in terms of a hypothesis that mentions phenomena that have not been observed in the data itself. A familiar example is that of the detective who infers the identity of a certain criminal from the evidence at the scene of the crime. Sherlock Holmes erroneously calls such reasoning “deduction”; it is more properly called abduction, or “inference to the best explanation.” Abduction is also typically exercised by juries when they decide whether the prosecution has established the guilt of the defendant “beyond a reasonable doubt.” Most spectacularly, it is the form of reasoning that seems to be involved in the great leaps of imagination that have taken place in the history of scientific thought, as when Isaac Newton (1642–1727) proposed the theory of universal gravitation as an explanation of the motions of planets, projectiles, and tides.
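One standard way of modeling inference to the best explanation is Bayesian: each candidate hypothesis is weighed by how probable it was beforehand (its prior) and by how well it would explain the evidence (its likelihood). The sketch below applies this to the detective example; the hypothesis names and all the probabilities are invented purely for illustration:

```python
# Candidate explanations of the evidence at the crime scene,
# with illustrative prior probabilities (figures invented).
priors = {"butler": 0.10, "gardener": 0.30, "stranger": 0.60}

# P(evidence | hypothesis): how well each hypothesis explains the evidence.
likelihoods = {"butler": 0.90, "gardener": 0.20, "stranger": 0.05}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 3))  # butler 0.5
```

Note the abductive character of the result: the butler starts out the least likely suspect, yet he best explains the evidence, and the inference leaps to a conclusion that goes beyond anything strictly contained in the premises.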
Practical reason
All the forms of rationality so far considered involve proceeding from one belief to another. But sometimes people proceed from belief to action. Here desire as well as belief is relevant, since successful rational action is action that satisfies one’s desires. Suppose, for example, that a person desires to have cheese for dinner and believes that cheese can be had from the shop down the street. Other things being equal—that is, he has no other more pressing desires and no beliefs about some awful risk he would take by going to the shop—the “rational” thing for him to do would be to go to the shop and buy some cheese. Indeed, if this desire and this belief were offered as the “reason” why the person went to the shop and bought some cheese, one would consider it a satisfactory explanation of his behaviour.
Although this example is trivial, it illustrates a form of reasoning that is appealed to in the explanation of countless actions people perform every day. Much of life is, of course, more complex than this, in part because one often has to choose between competing preferences and estimate how likely it is that one can actually satisfy them in the circumstances one takes oneself to be in. Often one must resort to what has come to be called cost-benefit analysis—trying to do that which is most likely to secure what one prefers most overall with as little cost as possible. At any rate, engaging in cost-benefit analysis seems to be one way of behaving rationally. The ways in which people can be practically rational are the subject of formal decision theory, which was developed in considerable detail in the 20th century in psychology and in other social sciences, especially economics.
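The core of such a cost-benefit calculation is the decision-theoretic rule of maximizing expected utility: weight the utility of each possible outcome of an action by its probability, sum, and choose the action with the highest total. A minimal sketch, with invented probabilities and utilities for the cheese example:

```python
# Each action maps to its possible outcomes as (probability, utility)
# pairs; all figures are invented for illustration.
actions = {
    "go to the shop": [
        (0.9, 10),   # most likely: get the cheese (high utility)
        (0.1, -2),   # small risk: shop is closed (a wasted trip)
    ],
    "stay home": [
        (1.0, 0),    # certain: no cheese, but no cost either
    ],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over an action's outcomes."""
    return sum(prob * utility for prob, utility in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # go to the shop
```

Formal decision theory refines this bare schema considerably (risk aversion, utilities inferred from choices, and so on), but the rationality it models is of just this means-end kind: belief and desire jointly determining action.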
None of the foregoing should be taken to suggest that people are always rational. Many people report being “weak-willed,” failing to perform what they deem to be the best or most rational act, as when they fail to diet despite their better judgment. In the case of many other actions, however, rationality seems to be simply irrelevant: jumping up and down in glee, kicking a machine that fails to work, or merely tapping one’s fingers impatiently are actions that do not seem to be performed for any particular reason. The claim here is only that rationality forms one important basis for thinking that something has genuine mental states.