Nonverbal fallacies
Among Aristotle’s nonverbal fallacies, what is known as the fallacy of accident amounts, in the simplest cases, to a confusion between different senses of verbs for being. Because Aristotle’s handling of these verbs differs from contemporary treatments, his discussion of this fallacy has no direct counterpart in modern logic. One of his examples is the fallacious inference from (1) “Coriscus is different from Socrates” (i.e., Coriscus is not Socrates) and (2) “Socrates is a man” to (3) “Coriscus is different from a man” (i.e., Coriscus is not a man). The modern diagnosis of this fallacy is that the sense of “is” in (1) differs from the sense of “is” in (2): in (1) it is an “is” of identity, whereas in (2) it is an “is” of predication. Aristotle’s explanation is that the same things cannot always be said of both a predicate and the thing of which it is predicated—in other words, predication is not transitive.
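In a modern formalization the equivocation becomes visible; the predicate letter Man and the names c and s (for Coriscus and Socrates) are illustrative choices, not Aristotle’s notation:

\[
\neg(c = s),\ \ \mathrm{Man}(s)\ \ \not\vdash\ \ \neg\mathrm{Man}(c)
\]

The first premise denies an identity, the conclusion denies a predication, and no rule of modern logic licenses the step from one to the other.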
What is known as the fallacy of secundum quid is a confusion between unqualified and qualified forms of a sentence. The fallacy with the quaint title “ignorance of refutation” is best understood from a modern point of view as a mistake concerning precisely what is to be proved or disproved in an argument.
Nonfallacial mistakes in reasoning and related errors
Some of the most common mistakes in reasoning are not usually discussed under the heading of fallacies. Some of them depend upon a confusion about the respective scope of different terms, which often amounts to a confusion about their logical priority. The phrase “farm machine or vehicle,” for example, can mean either “farm (machine or vehicle)” or “(farm machine) or vehicle.” In natural language, scope mistakes sometimes take the form of a confusion regarding what is the head, or antecedent, of an anaphoric pronoun. For example, the statement “The winner of the Oscar for best performance by an actress was Katharine Hepburn, but I thought that she was Ingrid Bergman” can mean either “The winner of the Oscar for best performance by an actress was Katharine Hepburn, but I thought that the winner of the Oscar for best performance by an actress was Ingrid Bergman” or “The winner of the Oscar for best performance by an actress was Katharine Hepburn, but I thought that Katharine Hepburn was Ingrid Bergman.”
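In the first of these examples, if the modifier “farm” is treated, purely for illustration, as a conjoined predicate, the two readings differ only in bracketing:

\[
\mathrm{farm}(x) \wedge \bigl(\mathrm{machine}(x) \vee \mathrm{vehicle}(x)\bigr)
\quad\text{versus}\quad
\bigl(\mathrm{farm}(x) \wedge \mathrm{machine}(x)\bigr) \vee \mathrm{vehicle}(x)
\]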
A philosophically important scope distinction, known as the distinction between statements de dicto (Latin: “from saying”) and statements de re (“from the thing”), is illustrated in the following example. The sentence “The president of the United States is a powerful person” can mean either “Whoever is the president of the United States is a powerful person” or “The person who in fact is the president of the United States is a powerful person.” In general, a referring expression (“the president of the United States”) in its de dicto reading picks out whoever or whatever may satisfy a certain condition, while in its de re reading it picks out the person or thing that in fact satisfies that condition. Thus, there can be mistakes in reasoning based on a confusion between a de dicto reading and a de re reading. A related mistake is to assume that the two readings correspond to two irreducible meanings of the expression in question, rather than to the form of the sentence in which the expression is contained.
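The contrast is often displayed by means of quantifier scope and a necessity-style operator □ (“in any relevant circumstance”); the formalization below ignores the uniqueness implied by “the,” and its predicate letters are illustrative:

\[
\textit{de dicto:}\quad \Box\,\forall x\,\bigl(\mathrm{Pres}(x) \rightarrow \mathrm{Powerful}(x)\bigr)
\qquad
\textit{de re:}\quad \exists x\,\bigl(\mathrm{Pres}(x) \wedge \Box\,\mathrm{Powerful}(x)\bigr)
\]

On the first reading, whoever holds the office is powerful; on the second, the person who actually holds it is said to be powerful.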
Several of the traditional fallacies are not mistakes in logical reasoning but rather mistakes in the process of knowledge seeking through questioning (i.e., in an interrogative game). For example, the fallacy of many questions—illustrated by questions such as “Have you stopped beating your wife?”—consists of asking a question whose presupposition has not been established. It can be considered a violation of the definitory rules of an interrogative game. The fallacy known as begging the question—in Latin, petitio principii—originally meant answering the “big” or principal question that an entire inquiry is supposed to answer by means of answers to several “small” questions. It can be considered a violation of the strategic rules of an interrogative game. Later, however, begging the question came to mean circular reasoning, or circulus in probando.
Some of the modes of reasoning traditionally listed in inventories of fallacies are not necessarily mistaken, though they can easily lead to misuses. For example, ad hominem reasoning literally means reasoning by reference to a person rather than by reference to the argument itself. It has been variously characterized as using certain admissions of, or facts about, a person against him in an argument. Ad hominem arguments based on admissions are routinely and legitimately used in adversarial systems of law in the examination and cross-examination of witnesses. (In the United States, persons who are arrested are typically informed that “anything you say can and will be used against you in a court of law.”) In a different walk of life, Socrates engaged in a kind of philosophical conversation in which he put questions to others and then used their answers to refute opinions they had earlier expressed. Ad hominem arguments based on facts about a person can be acceptable in a courtroom setting, as when a cross-examining attorney uses facts about a witness’s eyesight or veracity to discredit his testimony. This kind of ad hominem criticism becomes fallacious, however, when it is strictly irrelevant to the conclusion the arguer wishes to establish or refute.
Some so-called fallacies are not mistakes in reasoning but rather illicit rhetorical ploys, such as appeals to pity (traditionally called the fallacy of ad misericordiam), to authority (ad verecundiam), or to popular opinion (ad populum).
Modes of human reasoning that are (or seem) fallacious have been studied in cognitive psychology. Especially interesting work in this area was done by two Israeli-born psychologists, Amos Tversky and Daniel Kahneman, who developed a theory according to which human reasoners are inherently prone to making certain kinds of cognitive mistakes. These mistakes include the conjunction fallacy, in which added information increases the perceived likelihood of a statement, though the laws of probability dictate that adding information to a statement can only reduce (or at best leave unchanged) the probability that it is true. In another alleged fallacy, sometimes called the “juror’s fallacy,” the reasoner fails to take into account what are known as base-rate probabilities. For example, assume that an eyewitness to a hit-and-run accident is 80 percent sure that the taxicab involved was green. Should a jury simply assume that the probability that the taxicab was green is 80 percent, or should it also take into account the fact that only 15 percent of all taxicabs in the city are green? Despite great interest in such alleged cognitive fallacies, it is still controversial whether they really are mistakes.
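In the taxicab example, a Bayesian calculation shows what taking the base rate into account amounts to (assuming, as is standard in this example, that the witness is equally reliable in identifying cabs of either color):

\[
P(\text{green} \mid \text{witness says green})
= \frac{0.80 \times 0.15}{0.80 \times 0.15 + 0.20 \times 0.85}
= \frac{0.12}{0.29}
\approx 0.41
\]

Despite the witness’s 80 percent reliability, the posterior probability that the cab was green is only about 41 percent, because green cabs are rare.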
Strategies of deductive reasoning
As compared with definitory rules, strategic rules of reasoning have received relatively scant attention from logicians and philosophers. Indeed, most of the detailed work on strategies of logical reasoning has taken place in the field of computer science. From a logical vantage point, an instructive observation was offered by the Dutch logician-philosopher Evert W. Beth in 1955 and independently (in a slightly different form) by the Finnish philosopher Jaakko Hintikka. Both pointed out that certain proof methods, which Beth called tableau methods, can be interpreted as frustrated attempts to construct a countermodel to the intended conclusion. For example, in order to show that a certain formula F logically implies another formula G, one tries to construct, step by step, a model (i.e., an assignment of values to the relevant names and predicates) in which F is true but G is false. If this attempt is frustrated in all possible directions, one can conclude that G is a logical consequence of F.
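The following minimal sketch in Python illustrates the countermodel-construction idea for propositional logic only (quantifiers, and hence instantiation steps, are omitted); the formula representation and the function names are assumptions made for this example:

```python
# A minimal propositional tableau prover. Formulas are nested tuples:
# ("atom", "p"), ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g).

def entails(premise, conclusion):
    """True if `premise` logically implies `conclusion`: we try to build
    a countermodel making the premise true and the conclusion false, and
    entailment holds if every branch of the attempt closes."""
    return closes([premise, ("not", conclusion)], set(), set())

def closes(formulas, true_atoms, false_atoms):
    """True if every branch of the tableau rooted here is contradictory."""
    if not formulas:
        return False  # open branch: a countermodel exists, so no entailment
    f, rest = formulas[0], formulas[1:]
    if f[0] == "atom":
        if f[1] in false_atoms:
            return True  # contradiction closes the branch
        return closes(rest, true_atoms | {f[1]}, false_atoms)
    if f[0] == "not":
        g = f[1]
        if g[0] == "atom":
            if g[1] in true_atoms:
                return True  # contradiction closes the branch
            return closes(rest, true_atoms, false_atoms | {g[1]})
        if g[0] == "not":  # double negation
            return closes([g[1]] + rest, true_atoms, false_atoms)
        if g[0] == "and":  # ~(f & g): split into ~f and ~g branches
            return (closes([("not", g[1])] + rest, true_atoms, false_atoms)
                    and closes([("not", g[2])] + rest, true_atoms, false_atoms))
        if g[0] == "or":  # ~(f | g): both ~f and ~g on this branch
            return closes([("not", g[1]), ("not", g[2])] + rest,
                          true_atoms, false_atoms)
        if g[0] == "implies":  # ~(f -> g): f and ~g on this branch
            return closes([g[1], ("not", g[2])] + rest, true_atoms, false_atoms)
    if f[0] == "and":  # both conjuncts on this branch
        return closes([f[1], f[2]] + rest, true_atoms, false_atoms)
    if f[0] == "or":  # the construction splits into two branches
        return (closes([f[1]] + rest, true_atoms, false_atoms)
                and closes([f[2]] + rest, true_atoms, false_atoms))
    if f[0] == "implies":  # f -> g: split into ~f and g branches
        return (closes([("not", f[1])] + rest, true_atoms, false_atoms)
                and closes([f[2]] + rest, true_atoms, false_atoms))
    raise ValueError("unknown connective: %r" % (f,))

# (p & q) implies p, but p does not imply (p & q):
print(entails(("and", ("atom", "p"), ("atom", "q")), ("atom", "p")))  # True
print(entails(("atom", "p"), ("and", ("atom", "p"), ("atom", "q"))))  # False
```

The `or` case shows the characteristic splitting of the construction into alternative branches, and an open (unclosed) branch corresponds to a successful countermodel.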
The number of steps required to show that the countermodel construction is frustrated in all directions depends on the formula to be proved. Because this number cannot be predicted mechanically (i.e., by means of a recursive function) from the structures of F and G, the logician must anticipate and direct the course of the construction process by other means (see decision problem). In other words, he must somehow envisage what the state of the attempted countermodel will be after future construction steps.
Such a construction process involves two kinds of steps pertaining to the objects in the model. New objects are introduced by a rule known as existential instantiation. If the model to be constructed must satisfy, or render true, an existential statement (e.g., “there is at least one mammal”), one may introduce a new object to instantiate it (“a is a mammal”). Such a step of reasoning is analogous to what a judge does when he says, “We know that someone committed this crime. Let us call the perpetrator John Doe.” In another kind of step, known as universal instantiation, a universal statement to be satisfied by the model (e.g., “everything is a mammal”) is applied to objects already introduced (“Moby Dick is a mammal”).
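In schematic form (the inference-rule notation below is a common convention, not the text’s own), with a required to be a name new to the construction and b any name already introduced:

\[
\frac{\exists x\, M(x)}{M(a)}\ \ (a\ \text{new})
\qquad\qquad
\frac{\forall x\, M(x)}{M(b)}\ \ (b\ \text{already introduced})
\]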
There are difficulties in anticipating the results of steps of either kind. If the number of existential instantiations required in the proof is known, the question of whether G follows from F can be decided in a finite number of steps. In some proofs, however, universal instantiations are required in such large numbers as the proof proceeds that even the most powerful computers cannot produce them fast enough. Thus, efficient deductive strategies must specify which objects to introduce by existential instantiation and must also limit the class of universal instantiations that need to be carried out.
Constructions of countermodels also involve the application of rules that apply to the propositional connectives ~, &, ∨, and ⊃ (“not,” “and,” “or,” and “if…then,” respectively). Such rules have the effect of splitting the attempted construction into several alternative constructions. Thus, the strategic question as to which universal instantiations are needed can often be answered more easily after the construction has proceeded beyond the point at which splitting occurs. Methods of automated theorem-proving that allow such delayed instantiation have been developed. This delay involves temporarily replacing bound variables (variables within the scope of an existential or universal quantifying expression, as in “some x is ...” and “any x is ...”) by uninterpreted “dummy” symbols. The problem of finding the right instantiations then becomes a problem of solving sets of functional equations with dummies as unknowns. Such problems are known as unification problems, and algorithms for solving them have been developed by computer scientists.
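As an illustration of what a unification problem involves, the following Python sketch implements a simple syntactic unification algorithm in the style of J.A. Robinson; the term representation (variables as strings beginning with “?”, constants and compound terms as tuples) is an assumption made for this example only:

```python
# Terms: a variable is a string starting with "?"; a constant is a
# one-element tuple like ("a",); a compound term is a tuple whose first
# element is the function symbol, e.g. ("f", "?x", ("g", "?y")).

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings recorded in the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier of s and t, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(?x, g(?y)) unifies with f(a, g(b)) under {?x: a, ?y: b}:
print(unify(("f", "?x", ("g", "?y")), ("f", ("a",), ("g", ("b",)))))
```

Solving such an equation amounts to finding values for the “dummy” variables that make the delayed instantiations come out right.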
Another strategic device is the introduction of a formula of the form A ∨ ~A, in which A may be any formula whatever; the rule licensing this step may be called tautology introduction. Although the rule is trivial (because the formula A ∨ ~A is true in every model), it can be used to shorten a proof considerably, for, if A is chosen appropriately, the presence of either A or ~A may enable the reasoner to introduce suitable new individuals more rapidly than would otherwise be possible. For example, if A is “everybody has a father,” the presence of A enables the reasoner to introduce a new individual for each existing one—viz., his father. The negation of A, ~A, is “it is not the case that everybody has a father,” which is equivalent to “someone does not have a father”; this enables one to introduce such an individual by existential instantiation. The tautology-introduction rule, or one of several essentially equivalent rules, is the main vehicle for shortening proofs.
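In a common sequent-style notation (not the text’s own), tautology introduction amounts to reasoning by cases: a conclusion C follows from premises Γ provided that it follows both when A is assumed and when ~A is assumed:

\[
\frac{\Gamma,\ A \vdash C \qquad \Gamma,\ \neg A \vdash C}{\Gamma \vdash C}
\]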
Strategies of ampliative reasoning
Reasoning outside deductive logic is not necessarily truth-preserving even when it is formally correct. Such reasoning can add to the information that a reasoner has at his disposal and is therefore called ampliative. Ampliative reasoning can be studied by modeling knowledge-seeking as a process involving a sequence of questions and answers, interspersed with logical inference steps. In this kind of process, the notions of question and answer are understood broadly. Thus, the source of an “answer” can be the memory of a human being or a database stored on a computer, and a “question” can be an experiment or observation in natural science. One rule of such a process is that a question may be asked only if its presupposition has been established.
Interrogative reasoning can be compared to the reasoning used in a jury trial. An important difference, however, is that in a jury trial the tasks of the reasoner have been divided between several parties. The counsels, for example, ask questions but do not draw inferences. Answers are provided by witnesses and by physical evidence. It is the task of the jury to draw inferences, though the opposing counsels in their closing arguments may urge the jury to follow one line of reasoning rather than another. The rules of evidence regulate the questions that may be asked. The role of the judge is to enforce these rules.
It turns out that, assuming the inquirer can trust the answers he receives, optimal interrogative strategies are closely similar to optimal strategies of logical inference, in the sense that the best choice of the presupposition of the next question is the same as the best choice of the premise of the next logical inference. This relationship enables one to extend some of the principles of deductive strategy to ampliative reasoning.
In general, a reasoner will have to be prepared to disregard (at least provisionally) some of the answers he receives. One of the crucial strategic questions then becomes which answers to “bracket,” or provisionally reject, and when to do so. Typically, bracketing decisions concerning a given answer become easier to make after the consequences of the answer have been examined further. Bracketing decisions often also depend on one’s knowledge of the answerer. Good strategies of interrogative reasoning may therefore involve asking questions about the answerer, even when the answers thereby provided do not directly advance the questioner’s knowledge-seeking goals.
Any process of reasoning can be evaluated with respect to two different goals. On the one hand, a reasoner usually wants to obtain new information—the more, the better. On the other hand, he also wants the information he obtains to be correct or reliable—the more reliable, the better. Normally, the same inquiry must serve both purposes. Insofar as the two quests can be separated, one can speak of the “context of discovery” and the “context of justification.” Until roughly the mid-20th century, philosophers generally thought that precise logical rules could be given only for contexts of justification. It is in fact hard to formulate any step-by-step rules for the acquisition of new information. However, when reasoning is studied strategically, there is no obstacle in principle to evaluating inferences rationally by reference to the strategies they instantiate.
Since the same reasoning process usually serves both discovery and justification and since any thorough evaluation of reasoning must take into account the strategies that govern the entire process, ultimately the context of discovery and the context of justification cannot be studied independently of each other. The conception of the goal of scientific inference as new information, rather than justification, was emphasized by the Austrian-born philosopher Sir Karl Popper.
Nonmonotonic reasoning
It is possible to treat ampliative reasoning as a process of deductive inference rather than as a process of question and answer. However, such deductive approaches must differ from ordinary deductive reasoning in one important respect. Ordinary deductive reasoning is “monotonic” in the sense that, if a proposition P can be inferred from a set of premises B, and if B is a subset of A, then P can be inferred from A. In other words, in monotonic reasoning, an inference never has to be canceled in light of further inferences. However, because the information provided by ampliative inferences is new, some of it may need to be rejected as incorrect on the basis of later inferences. The nonmonotonicity of ampliative reasoning thus derives from the fact that it incorporates self-correcting principles.
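In symbols, with ⊢ denoting inferability, monotonicity is the condition:

\[
\text{if } B \vdash P \text{ and } B \subseteq A,\ \text{then } A \vdash P
\]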
Probabilistic reasoning is also nonmonotonic, since any inference of probability less than 1 can fail. Other frequently occurring types of nonmonotonic reasoning can be thought of as based partly on tacit assumptions that may be difficult or even impossible to spell out. (The traditional term for an inference that relies on partially suppressed premises is enthymeme.) One example is what the American computer scientist John McCarthy called reasoning by circumscription. The unspoken assumption in this case is that the premises contain all the relevant information; exceptional circumstances, in which the premises may be true in an unexpected way that allows the conclusion to be false, are ruled out. The same idea can also be expressed by saying that the intended models of the premises—the scenarios in which the premises are all true—are the “minimal” or “simplest” ones. Many rules of inference by circumscription have been formulated.
Reasoning by circumscription thus turns on giving minimal models a preferential status. This idea has been generalized by considering arbitrary preference relations between models of sets of premises. A model M is said to preferentially satisfy a set of premises if and only if M is minimal, according to the given preference relation, among the models that satisfy the premises in the usual sense. A set of premises preferentially entails a proposition if and only if that proposition is true in all the models that preferentially satisfy the premises.
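In symbols, writing Γ for the set of premises, ≺ for the preference relation, and |≈ for preferential entailment (a common but not universal notation):

\[
\Gamma \mathrel{|\!\approx} B \quad\text{if and only if}\quad M \models B\ \text{for every model } M \text{ that is } \prec\text{-minimal among the models of } \Gamma
\]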
Another variant of nonmonotonic reasoning is known as default reasoning. A default inference rule authorizes an inference to a conclusion that is compatible with all the premises, even when one of the premises may have exceptions. For example, in the argument “Tweety is a bird; birds fly; therefore, Tweety flies,” the second premise has exceptions, since not all birds fly. Although the premises in such arguments do not guarantee the truth of the conclusion, rules can nevertheless be given for default inferences, and a semantics can be developed for them; one suitable semantics is a form of preferential-model semantics.
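A toy sketch of the Tweety example in Python, assuming a simple knowledge base of facts and a set of individuals known to be exceptions to the default; all names here are illustrative:

```python
def flies(individual, facts, exceptions):
    """Apply the default 'birds fly' unless it is blocked by an exception."""
    if ("bird", individual) not in facts:
        return None      # the default rule is not applicable at all
    if individual in exceptions:
        return False     # the default conclusion is defeated
    return True          # the default conclusion stands (for now)

facts = {("bird", "Tweety"), ("bird", "Opus")}
exceptions = {"Opus"}    # Opus is a penguin, say

print(flies("Tweety", facts, exceptions))  # True: default applies
print(flies("Opus", facts, exceptions))    # False: default defeated
```

The nonmonotonic character of the inference shows up in the fact that adding “Tweety” to the exception set would force the earlier conclusion about Tweety to be withdrawn.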
Default logics must be distinguished from what are called “defeasible” logics, even though the two are closely related. In default reasoning, the rule yields a unique output (the conclusion) that might be defeated by further reasoning. In defeasible reasoning, the inferences themselves can be blocked or defeated. In this case, according to the American logician Donald Nute,
there are in principle propositions which, if the person who makes a defeasible inference were to come to believe them, would or should lead her to reject the inference and no longer consider the beliefs on which the inference was based as adequate reasons for making the conclusion.
Nonmonotonic logics are sometimes conceived of as alternatives to traditional or classical logic. Such claims, however, may be premature. Many varieties of nonmonotonic logic can be construed as extensions, rather than rivals, of the traditional logic. However, nonmonotonic logics may prove useful not only in applications but in logical theory itself. Even when nonmonotonic reasoning merely represents reasoning from partly tacit assumptions, the crucial assumptions may be difficult or impossible to formulate by means of received logical concepts. Furthermore, in logics that are not axiomatizable, it may be necessary to introduce new axioms and rules of inference experimentally, in such a way that they can nevertheless be defeated by their consequences or by model-theoretic considerations. Such a procedure would presumably fall within the scope of nonmonotonic reasoning.