rationality

rationality, the use of knowledge to attain goals.

Models of Rationality

Rationality has a normative dimension, namely how an agent ought to reason in order to attain some goal, and a descriptive or psychological dimension, namely how human beings do reason.

Normative models from logic, mathematics, and artificial intelligence set benchmarks against which psychologists and behavioral economists can compare human judgment and decision making. These comparisons provide answers to the question “In which ways are humans rational or irrational?”

Formal logic, for example, consists of rules for deriving new true propositions (conclusions) from existing ones (premises). A common departure from formal logic is the fallacy of affirming the consequent, or leaping from “p implies q” to “q implies p,” for example, going from “If a person becomes a heroin addict, the person first smoked cannabis” to “If a person smokes cannabis, the person will become a heroin addict.”
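
In schematic form, with the conclusion below the line, the valid pattern (modus ponens) differs from the fallacy only in which statement accompanies the conditional:

\[
\frac{p \rightarrow q \qquad p}{\therefore\, q} \;\;\text{(valid: modus ponens)}
\qquad\qquad
\frac{p \rightarrow q \qquad q}{\therefore\, p} \;\;\text{(invalid: affirming the consequent)}
\]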

Probability theory allows one to quantify the likelihood of an uncertain outcome. It may be estimated as the number of actual occurrences of that outcome divided by the number of opportunities for it to have taken place. Humans instead often base their subjective likelihood on the availability heuristic: the more available an image or anecdote is in memory, the likelier they judge it to be. Thus, people overestimate the likelihood of events that get intense media coverage, such as plane crashes and rampage shootings, and underestimate those that don’t, such as car crashes and everyday homicides.
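
As a formula, this frequency-based estimate of the probability of an outcome A is simply

\[
P(A) \approx \frac{\text{number of occurrences of } A}{\text{number of opportunities for } A \text{ to occur}},
\]

whereas the availability heuristic substitutes ease of recall for the count in the numerator.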

Bayes’s rule shows how to adjust one’s degree of confidence in a hypothesis depending on the strength of evidence. It says that a rational agent should give credence to a hypothesis to the extent that it’s credible a priori, it’s consistent with the evidence, and the evidence is uncommon across the board. More technically, it allows one to calculate the probability of a hypothesis given the data (the posterior probability, or credence in the hypothesis in the light of the evidence) from three numbers. The first is the prior probability of the hypothesis—how credible it is before one examines the evidence. (For example, the prior probability that a patient has a disease, before knowing anything about that patient’s symptoms or test results, would be the base rate for the disease in the population.) That is then multiplied by the likelihood that one would obtain those data if the hypothesis were true (in the case of a disease, that could be the sensitivity or true positive rate of a test). This product is then divided by the marginal probability of the data—that is, how often the data occur overall, regardless of whether the hypothesis is true or false (for a disease, the relative frequency of all positive test results, true and false).
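
In symbols, writing H for the hypothesis and D for the data, the three numbers combine as

\[
P(H \mid D) = \frac{P(H)\,P(D \mid H)}{P(D)}, \qquad
P(D) = P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H),
\]

where P(H) is the prior, P(D | H) the likelihood, and P(D) the marginal probability of the data.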

People often violate Bayes’s rule by neglecting the base rate of some state of affairs, which is relevant to estimating its prior credence. For example, when told that 1 percent of women in the population have breast cancer (the base rate) and that a test for the disease gives a true positive result 90 percent of the time (when a woman has the disease) and a false positive result 9 percent of the time (when she doesn’t), most people estimate the probability that a woman with a positive result has the disease (the posterior probability) as 80 to 90 percent. The correct answer, according to Bayes’s rule, is 9 percent. The error arises from neglecting the low base rate (1 percent), which implies that most positive results will be false positives.
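
The arithmetic is easy to verify. A minimal sketch in Python, plugging in the numbers from the example:

    # Bayes's rule applied to the breast-cancer example.
    prior = 0.01        # base rate: 1 percent of women have the disease
    sensitivity = 0.90  # true positive rate: P(positive | disease)
    false_pos = 0.09    # false positive rate: P(positive | no disease)

    # Marginal probability of a positive result (true and false positives).
    p_positive = sensitivity * prior + false_pos * (1 - prior)

    # Posterior: probability of disease given a positive result.
    posterior = sensitivity * prior / p_positive
    print(f"P(disease | positive) = {posterior:.3f}")  # 0.092, i.e., about 9 percent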

The theory of rational choice advises those deciding among risky alternatives on how to keep their decisions consistent with one another and with their values. It says one should choose the option with the greatest expected utility: the sum of the values of all the possible outcomes of that choice, each weighted by its probability. People may flout it by taking steps to avoid an imaginable outcome while ignoring its probability, as when they buy costly extended warranties for appliances that break so seldom that they pay more for the warranties than they would, over the long run, for the repairs.
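
A toy comparison makes the warranty example concrete; the price, repair cost, and failure probability below are invented purely for illustration:

    # Expected-cost comparison for an extended warranty (hypothetical numbers).
    warranty_price = 150.00  # up-front cost of the warranty
    repair_cost = 300.00     # cost of a repair without the warranty
    p_breakdown = 0.05       # chance the appliance fails during the coverage period

    # Expected cost of skipping the warranty: the probability-weighted repair bill.
    expected_repair = p_breakdown * repair_cost  # 0.05 * 300 = 15.00

    print(f"warranty: ${warranty_price:.2f}, expected repairs: ${expected_repair:.2f}")
    # With these numbers the warranty costs ten times the expected repair bill,
    # so the expected-utility choice is to skip it.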

Game theory tells a rational agent how to make a choice when the outcome depends on the choices of other rational agents. One of its counterintuitive conclusions is that a community of actors can make choices that are rational for each one of them but irrational for the community, as when shepherds who aim to fatten their sheep overgraze the commons, or motorists who aim to save time jam a freeway.
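
A minimal numerical sketch of the overgrazing case (with invented payoffs) shows how each shepherd’s rational choice sums to a collective loss:

    # Tragedy of the commons with hypothetical payoffs: each added sheep earns
    # its owner +10 but degrades the shared pasture by 3 for every shepherd.
    N = 10                   # number of shepherds
    gain_to_owner = 10.0
    cost_per_shepherd = 3.0

    # One shepherd's net payoff for adding a sheep: +10 - 3 = +7,
    # so adding a sheep is rational for each individual, whatever the others do.
    individual_net = gain_to_owner - cost_per_shepherd

    # But when all N shepherds add a sheep, each absorbs the cost N times over:
    # +10 - 10 * 3 = -20 apiece.
    net_when_all_add = gain_to_owner - N * cost_per_shepherd

    print(individual_net, net_when_all_add)  # 7.0 -20.0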

One more example: principles of causal inference indicate that the soundest way to establish whether A causes B is to manipulate A while holding all other factors constant. Yet people commonly fail to rule out such confounding factors and prematurely leap from correlation to causation, as in the joke about the man who gorged on bean stew washed down by a cup of tea and lay moaning that the tea made him sick.
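
A short simulation (with invented probabilities) shows how a hidden common cause produces just such a misleading correlation:

    import random

    # Confounding: a hidden common cause C drives both A and B, so A and B
    # correlate even though neither causes the other.
    def observe(n=100_000):
        data = []
        for _ in range(n):
            c = random.random() < 0.5                  # hidden common cause
            a = random.random() < (0.9 if c else 0.1)  # A tracks C
            b = random.random() < (0.9 if c else 0.1)  # B tracks C too
            data.append((a, b))
        return data

    def p_b_given_a(data, a_value):
        outcomes = [b for a, b in data if a == a_value]
        return sum(outcomes) / len(outcomes)

    obs = observe()
    # Observationally, B looks far likelier when A is true (both track C):
    print(p_b_given_a(obs, True), p_b_given_a(obs, False))  # ~0.82 vs ~0.18
    # Manipulating A directly (setting it independently of C) would erase
    # this association, revealing that A does not cause B.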

Why are people irrational?

So, why do people so often make irrational judgments and decisions? It’s not that we’re an inherently irrational species. Humans have discovered the laws of nature, explored the solar system, and decimated disease and hunger. And, of course, we established the normative benchmarks that allow us to assess rationality in the first place. Humans can be irrational for several reasons.

First, rationality is always bounded. No mortal has unlimited time, data, or computational power, and these costs must be traded off against the benefits of the optimal solution. It makes little sense to spend 30 minutes studying a map to calculate a shortcut that would save you 10 minutes in travel time. Instead, people often must rely on fallible shortcuts and rules of thumb. For example, when determining which of two cities has the larger population, guessing that it’s the one with a major-league football team yields the correct result most of the time.
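
That shortcut can be written as a one-cue decision rule; the cue function below is a hypothetical stand-in for whatever a person happens to recognize:

    # One-cue rule of thumb for guessing which of two cities is larger,
    # using a cheap cue instead of census data.
    def guess_larger(city_a, city_b, has_major_league_team):
        a_cue = has_major_league_team(city_a)
        b_cue = has_major_league_team(city_b)
        if a_cue and not b_cue:
            return city_a
        if b_cue and not a_cue:
            return city_b
        return city_a  # cue doesn't discriminate; pick arbitrarily

    # Usage with a hypothetical cue set:
    teams = {"Dallas", "Denver"}
    print(guess_larger("Dallas", "El Paso", lambda city: city in teams))  # Dallas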

Second, human rationality is optimized for natural contexts. People indeed have trouble applying formulas that are couched in abstract variables like p and q, whose power comes from the fact that any values can be plugged into them. But people can be adept at logical and probability problems that are couched in concrete examples or pertain to significant challenges in living. When asked how to enforce the rule “If a bar patron drinks beer, the patron must be over 21,” everyone knows one must check the age of beer drinkers and the beverage of teenagers; no one fallaciously “affirms the consequent” by checking the beverage of an adult. And when a diagnosis problem is reframed from abstract probabilities (“What is the likelihood that the woman has cancer?”) to frequencies (“How many women out of a thousand with this test result have cancer?”), people intuitively apply Bayes’s rule and answer correctly.
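
With the frequencies made explicit, the arithmetic of the cancer problem becomes transparent:

\[
10 \times 0.9 = 9 \;\text{true positives}, \qquad
990 \times 0.09 \approx 89 \;\text{false positives}, \qquad
\frac{9}{9 + 89} \approx 9\%.
\]

Nine women with cancer out of roughly 98 positive results is about 9 percent, the Bayesian answer.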

Third, rationality is always deployed in pursuit of a goal, and that goal is not always objective truth. It may be to win an argument, to persuade others of a conclusion that would benefit oneself (motivated reasoning), or to prove the wisdom and nobility of one’s own coalition and the stupidity and evil of the opposing one (myside bias). Many manifestations of public irrationality, such as conspiracy theories, fake news, and science denial, may be tactics to express loyalty to or avoid ostracism from one’s tribe or political faction.

Fourth, many of our rational beliefs are not grounded in arguments or data that we verify ourselves but in our trust in institutions established to pursue truth, such as science, journalism, and government agencies. People may reject the consensus of these institutions if they sense that those institutions are doctrinaire, politicized, or intolerant of dissent.

Many commentators have despaired about the future of rationality given the rise in political polarization and the ease of disseminating falsehoods through social media. Yet this pessimism may itself be a product of the availability heuristic, driven by conspicuous coverage of the most politicized examples. People, for example, are divided over vaccines but not over antibiotics, dentistry, or splints for fractures. Nor is irrationality anything new: beliefs in human and animal sacrifice, miracles, necromancy, sorcery, bloodletting, and omens in eclipses and other natural events have been common throughout history. Progress in spreading rationality, driven by scientific and data-based reasoning, is not automatic but is propelled by the fact that rationality is the only means by which goals can be consistently attained.

Steven Pinker