applied ethics

applied ethics, the application of normative ethical theories—i.e., philosophical theories regarding criteria for determining what is morally right or wrong, good or bad—to practical problems.


From Plato (428/427–348/347 bce) onward, Western moral philosophers have concerned themselves with practical questions, including suicide, the exposure of infants, the treatment of women, and the proper behaviour of public officials. Christian philosophers, notably St. Augustine (354–430) and St. Thomas Aquinas (1224/25–74), examined with great care such matters as when a war is just, whether it is ever right to tell a lie, and whether a Christian woman does wrong by committing suicide to save herself from rape. The British philosopher Thomas Hobbes (1588–1679) had an eminently practical purpose in writing his masterpiece, Leviathan (1651), which justified the absolute authority of the sovereign on the grounds that only such authority could prevent an anarchic “war of all against all.” The Scottish Enlightenment philosopher David Hume (1711–76) wrote about the ethics of suicide. The British utilitarians, including Jeremy Bentham (1748–1832) and John Stuart Mill (1806–73), were very much concerned with practical problems; indeed, they considered social reform to be the aim of their philosophy. Thus, Bentham wrote on electoral and prison reform and animal rights, and Mill discussed the power of the state to interfere with the liberty of its citizens, the status of women, capital punishment, and the right of one state to invade another to prevent it from committing atrocities against its own people.

Nevertheless, during the first six decades of the 20th century, moral philosophers largely neglected applied ethics—something that now seems all but incredible, considering the traumatic events through which most of them lived. The most notable exception, the British philosopher Bertrand Russell (1872–1970), seems to have regarded his writings on ethical topics as largely separate from his philosophical work and did not attempt to develop his ethical views in any systematic or rigorous fashion.

The prevailing view of this period was that moral philosophy is quite separate from “moralizing,” a task best left to preachers. What was not generally considered was whether moral philosophers could, without merely preaching, make an effective contribution to discussions of practical issues involving difficult ethical questions. The value of such work began to be widely recognized only during the 1960s, when the American civil rights movement and subsequently the Vietnam War and the growth of student political activism (see, for example, Students for a Democratic Society) started to draw philosophers into discussions of the ethical issues of equality, justice, war, and civil disobedience.

Applied ethics soon became part of the philosophy curriculum of most universities in many countries. Here it is not possible to do more than briefly mention some of the major areas of applied ethics and point to the issues that they raise.

Equality

Since much of the early impetus for the 20th-century revival of applied ethics came from the American civil rights movement, topics such as equality, human rights, and justice were prominent from the beginning. The initial focus, especially in the United States, was on racial and gender equality. Since there was a consensus that outright discrimination against women and members of racial minority groups (notably African Americans) is wrong, the centre of attention soon shifted to reverse discrimination: Is it acceptable to favour women and members of racial minority groups for jobs and enrollment in universities and colleges because they have been discriminated against in the past? (See affirmative action.)

Inequality between the sexes was another early focus of discussion. Does equality here mean ending as far as possible all differences between the traditional gender roles, or could there be equal status for different roles? There was a lively debate—both between feminists and their opponents and, on a different level, between feminists themselves—about what a society without sexual inequality would be like. Feminist philosophers were also involved in debates about abortion and about new methods of reproduction. These topics will be covered separately below (see Abortion, euthanasia, and the value of human life and Bioethics).

Until the late 20th century, most philosophical discussions of justice and equality were limited in scope to a single society. For example, even the theory of justice advanced by the American philosopher John Rawls, which was widely interpreted as providing a philosophical foundation for egalitarian liberalism, had nothing to say about the distribution of wealth between societies. In the 1990s philosophers began to think about the moral implications of the vast inequality in wealth between the leading industrialized countries and the countries of the developing world, some of which were afflicted with widespread famine and disease. What obligations, if any, do the citizens of affluent countries have to those who are starving? In Living High and Letting Die: Our Illusion of Innocence (1996), the American philosopher Peter Unger made a strong case for the view that any person of reasonable means who neglects to send money to organizations that work to reduce global poverty is thereby doing something very seriously wrong. The German-born philosopher Thomas Pogge, in World Poverty and Human Rights: Cosmopolitan Responsibilities and Reforms (2002), argued that affluent countries are responsible for increasing the poverty of developing countries and thus for causing millions of deaths annually. In one of his late works, The Law of Peoples (1999), Rawls himself turned to the relations between societies, though his conclusions were more conservative than those of Unger and Pogge.

Animals

There is one issue related to equality in which philosophers have led, rather than followed, a social movement. In the early 1970s a group of young Oxford-based philosophers began to question the assumption that the moral status of nonhuman animals is automatically inferior to that of humans—as well as the conclusion usually drawn from it, that it is morally permissible for humans to use nonhuman animals as food, even in circumstances where they could nourish themselves well and efficiently without doing so. The publication in 1972 of Animals, Men and Morals: An Inquiry into the Maltreatment of Non-humans, edited by Rosalind and Stanley Godlovitch and John Harris, was followed three years later by Peter Singer’s Animal Liberation and then by a flood of articles and books that established the issue as a part of applied ethics. At the same time, these writings provided a philosophical basis for the animal rights movement, which had a considerable effect on attitudes and practices toward animals in many countries.

Most philosophical work on the issue of animal rights advocated radical changes in the ways in which humans treat animals. Some philosophers, however, defended the status quo, or at least something close to it. In The Animals Issue: Moral Theory in Practice (1992), the British philosopher Peter Carruthers argued that humans have moral obligations only to those beings who can participate in a hypothetical social contract. The obvious difficulty with such an approach is that it proves too much: if humanity has no obligations to animals, then it also has no obligations to the minority of humans with severe intellectual disabilities or to future generations of humans, since they too cannot reciprocate. Another British philosopher, Roger Scruton, supported both animal welfare and the right of humans to use animals, at least in circumstances that entailed some benefit to the animals in question. Thus, in Animal Rights and Wrongs (2000) he supported foxhunting, because it encourages humans to protect the habitat in which foxes live, but condemned modern “factory” farms, because they do not provide even a minimally acceptable life for the animals raised in them.

Environmental ethics

Environmental issues raise a host of difficult ethical questions, including the ancient question of the nature of intrinsic value. Although many philosophers in the past have agreed that human experiences have intrinsic value—and the utilitarians at least have always accepted that the pleasures and pains of nonhuman animals are of some intrinsic significance—such views do not show why it is so bad if dodoes become extinct or a rainforest is cut down. Are these things to be regretted only because of the experiences that would be lost to humans or other sentient beings? Or is there more to it than that? From the late 20th century, some philosophers defended the view that trees, rivers, species (considered apart from the individual animals of which they consist), and perhaps even ecosystems as a whole have a value independent of the instrumental value they may have for humans or nonhuman animals. There is, however, no agreement on what the basis for this value should be.

Concern for the environment also raises the question of obligations to future generations. How much do human beings living now owe to those not yet born? For those who hold an ethics based on social contract theory (i.e., an ethics that grounds moral rights and duties in a hypothetical agreement with other members of society) or for ethical egoists (i.e., those who hold that morally correct actions are those that advance or protect one’s self-interest), the answer would seem to be: nothing. Although humans existing in the present can benefit those existing in the future, the latter are unable to reciprocate. Most other ethical theories, however, do give some weight to the interests of future generations. Utilitarians, for example, would not think that the fact that members of future generations do not yet exist is any reason for giving less consideration to their interests than to the interests of present generations—provided that one can be certain that future generations will exist and will have interests that will be affected by what one does. In the case of, say, the storage of radioactive waste or the emission of gases that contribute to climate change, it seems clear that what present generations do will indeed affect the interests of generations to come. Most philosophers agree that these are important moral issues. Climate change in particular has been conceived of as a question of global equity: How much of a scarce resource (the capacity of the atmosphere safely to absorb waste gases produced by human activity) may each country use? Are industrialized countries justified in using far more of this resource, on a per capita basis, than developing countries, considering that the human costs of climate change will fall more heavily on developing countries because they cannot afford the measures needed to mitigate them?

These questions become even more complex when one considers that the size of future generations can be affected by government population policies and by other less-formal attitudes toward population growth and family size. The notion of overpopulation conceals a philosophical issue that was ingeniously explored in Reasons and Persons (1984), by the British philosopher Derek Parfit. What is optimum population? Is it the population size at which the average level of welfare will be as high as possible? Or is it the size at which the total amount of welfare is as great as possible? There were decisive objections to the average view, but the total view also had counterintuitive consequences. The total view entails that a vastly overpopulated world, one in which the average level of welfare is so low as to make life barely worth living, is morally preferable to a less-populated world in which the average level of welfare is high, provided that the number of people in the overpopulated world is so great as to make the total amount of welfare in that world greater than in the less-populated world. Parfit referred to this implication as the “Repugnant Conclusion.” Much thought was given to finding alternatives that did not carry the counterintuitive consequences of the average and total views. But the alternatives suggested had their own difficulties, and the question remained one of the most baffling conundrums in applied ethics. (See also environmentalism.)

War and peace

The Vietnam War ensured that discussions of the justness of war and the legitimacy of conscription and civil disobedience were prominent in early writings in applied ethics. There was considerable support for civil disobedience against unjust aggression and against unjust laws even in a democracy.

With the end of conscription in the United States and of the war itself two years later (1975), philosophers turned their attention to the problem of nuclear weapons. One central question was whether the strategy of nuclear deterrence could be morally acceptable, given that it treats civilian populations as potential nuclear targets. In the 1990s the massacres of civilians in the former Yugoslavia and in Rwanda raised the issue mentioned above in connection with Mill: the right of one or more countries to intervene in the internal affairs of another country solely because it is engaged in crimes against its own citizens. This issue was taken up within discussions of broader questions dealing with human rights, including the question of whether the insistence that all countries respect human rights is an expression of a universal human value or merely a form of Western “cultural imperialism.”

Abortion, euthanasia, and the value of human life

A number of ethical questions are concerned with the endpoints of the human life span. The question of whether abortion or the use of human embryos as sources of stem cells can be morally justified was exhaustively discussed in popular contexts, where the answer was often taken to depend directly on the answer to the further question: “When does human life begin?” Many philosophers argued that the latter question was the wrong one to ask, since no conclusion of a specifically moral character follows directly from the scientific fact that human life begins at conception or at some other time. A better approach, according to these philosophers, is to ask what it is that makes killing a human being wrong and then to consider whether these characteristics, whatever they might be, apply to the earliest stages of human life. Although there was no generally agreed-upon answer, some philosophers presented surprisingly strong arguments to the effect that not only the embryo and the fetus but even the newborn infant has no right to life. This position was defended by the British philosopher Jonathan Glover in Causing Death and Saving Lives (1977) and in more detail by the Canadian-born philosopher Michael Tooley in Abortion and Infanticide (1983).

Such views were hotly contested, especially by those who claimed that all human life, irrespective of its characteristics, is sacrosanct. The task for those who defended the sanctity of human life was to explain why human life, no matter what its characteristics, is specially worthy of protection. Explanation could no doubt be provided in terms of traditional Christian doctrines such as that all humans are made in the image of God or that all humans have an immortal soul. In the philosophical debate, however, opponents of abortion and embryo research eschewed religious arguments of this kind, though without finding a convincing secular alternative.

Somewhat similar issues were raised by the practice of euthanasia when it is nonvoluntary, as in the case of severely disabled newborn infants (see below Bioethics). Voluntary euthanasia, on the other hand, could be defended on the distinct ground that the state should not interfere with the free, informed choices of its citizens in matters that do not cause harm to others. (The same argument was often invoked in defense of the pro-choice position in the abortion controversy. But it was much weaker in that case, because it presupposed what it needed to prove: namely, that the fetus does not count as a person—or at least not as a person to the extent that the pregnant woman does.) Critics of voluntary euthanasia emphasized practical matters such as the difficulty of maintaining adequate safeguards; their chief objection was that the practice would lead via a “slippery slope” to nonvoluntary euthanasia and eventually to the involuntary killing of those whom the state considers socially undesirable.

Bioethics

Ethical issues raised by abortion and euthanasia are part of the subject matter of bioethics, which deals with the ethical dimensions of new developments in medicine and the biological sciences. Inherently interdisciplinary in scope, the field benefits from the contributions of professionals outside philosophy, including physicians, lawyers, scientists, and theologians. From the late 20th century, centres for research in bioethics were established in many countries, and medical schools added the discussion of ethical issues in medicine to their curricula. Governments sought guidance in setting public policy in particularly controversial areas of bioethics by appointing special committees to provide ethical advice.

Several key themes run through the subjects covered by bioethics. One is whether the quality of a human life can be a reason for ending it or for deciding not to take steps to prolong it. Since medical science can now keep alive severely disabled infants who would otherwise die soon after birth, pediatricians are regularly faced with this question. A major controversy erupted in the United States in 1982 when a doctor agreed to follow the wishes of the parents of an infant with Down syndrome by not carrying out the surgery necessary to save the baby’s life. The doctor’s decision was upheld by the Supreme Court of Indiana, and the baby died before an appeal could be made to the U.S. Supreme Court. The ensuing discussion and the rules subsequently promulgated by the administration of Pres. Ronald Reagan made it less likely that in the United States an infant with Down syndrome would be denied medically feasible lifesaving surgery, but other countries treated such cases differently. Moreover, in virtually every country, including the United States, there were situations in which doctors decided, on quality-of-life grounds, not to sustain the life of an infant with extremely poor prospects.

Even those who defended the doctrine of the sanctity of all human life did not always insist that doctors use extraordinary means to prolong it. But the distinction between ordinary and extraordinary means, like that between acts and omissions, was problematic. Critics asserted that the wishes of the patient or, if these cannot be ascertained, the quality of the patient’s life provides a more relevant basis for a decision than the nature of the means to be used.

Another central theme is that of patient autonomy. This issue arose not only in connection with voluntary euthanasia but also in the area of human experimentation. It was generally agreed that patients must give informed consent to any experimental procedures performed on them. But how much information should they be given? The problem was particularly acute in the case of randomized controlled trials, which require that patients agree to courses of treatment that may consist entirely of placebos. When experiments were carried out using human subjects in developing countries, the difficulties and the potential for unethical practices became greater still.

The allocation of medical resources became a life-and-death issue in the early 1960s, when hospitals in the United States first obtained dialysis machines and had to choose which of their patients suffering from kidney disease would be allowed to use them. Some bioethicists argued that the decision should be made on a “first come, first served” basis, whereas others thought it obvious that younger patients or patients with dependents should be given preference. Although dialysis machines are no longer so scarce, the availability of various other exotic, expensive lifesaving techniques is limited; hence, the search for rational principles of distribution continues. This problem was particularly complicated in the United States, where access to such techniques often depended on the business decisions of private health insurance firms.

Further advances in biology and medicine gave rise to new issues in bioethics, some of which received considerable public attention. In 1978 the birth of the first human being to be conceived outside a human body initiated a debate about the morality of in vitro fertilization. This soon led to questions about the freezing of human embryos and about what should be done with them if the parents should die. Controversies also arose about the practice of surrogate motherhood, in which a woman is impregnated with the sperm of a male member of an infertile couple (or in some cases with an embryo fertilized in vitro) and then surrenders the resulting baby, usually performing this service for a fee. Is this different from selling a baby? If so, how? If a woman who has agreed to act as a surrogate mother changes her mind and decides to keep the baby, should she be allowed to do so?

From the late 1990s, one of the most controversial issues in bioethics was cloning. The first successful cloning of a mammal, Dolly the sheep, in 1996 conjured up in the public imagination alarming visions of armies of identical human clones, and many legislatures hastened to prohibit the reproductive cloning of human beings. But the public’s reaction resulted more from ignorance and distaste than from reflection, which the popular news media did little to encourage. Some bioethicists suggested that in a free society there are no good reasons—apart from the risk that a cloned human may suffer from genetic abnormalities—for cloning to be prohibited. Others viewed cloning as a violation of human dignity, because it would mean that human beings could be designed by other humans. This objection was forcefully stated by the bioethicist Leon Kass, who appealed to what he called, in the title of a 1997 essay, “The Wisdom of Repugnance.”

The culmination of such advances in techniques for influencing human reproduction will be the mastery of genetic engineering. Already in the late 20th century, some couples in the United States paid substantial sums for eggs from women with outstanding test scores at elite colleges. (Payment for eggs or sperm was illegal in most other countries.) Prenatal testing for genetic defects was also common, especially in older pregnant women, many of whom terminated the pregnancy when a defect was discovered. Some genetic testing can now be done in embryos in vitro, before implantation. As more genetic tests become available—not only for defects but perhaps eventually for robust health, desirable personality traits, attractive physical characteristics, or intellectual abilities that are under strong genetic influence—humanity will face the question posed by the title of Jonathan Glover’s probing book What Sort of People Should There Be? (1984). Perhaps this will be the most challenging issue for ethics in the remainder of the 21st century.

Peter Singer
The Editors of Encyclopaedia Britannica