Immunization against viral diseases
With the exception of smallpox, it was not until well into the 20th century that effective viral vaccines became available. In fact, it was not until the 1930s that much began to be known about viruses. The two developments that contributed most to the rapid growth in knowledge after that time were the introduction of tissue culture as a means of growing viruses in the laboratory and the availability of the electron microscope. Once a virus could be cultivated with comparative ease in the laboratory, researchers could study it with care and develop methods for producing one of the two requirements for a safe and effective vaccine: either a live virus so attenuated, or weakened, that it could not produce the disease for which it was responsible in its normally virulent form, or a killed virus that retained the faculty of inducing a protective antibody response in the vaccinated individual.
The first of the viral vaccines to result from these advances was for yellow fever, developed by microbiologist Max Theiler in the late 1930s. About 1945 the first relatively effective vaccine was produced for influenza; in 1954 American physician Jonas Salk introduced a vaccine for polio; and in 1960 an oral polio vaccine, developed by virologist Albert Sabin, came into wide use.
The vaccines went far toward bringing under control three of the major diseases of the time, although in the case of influenza a major complication was the disturbing proclivity of the virus to change its character from one epidemic to another. Even so, sufficient progress was made to reduce the chances that a pandemic such as the influenza pandemic of 1918–19, which killed an estimated 25 million people, would occur again. Medical centres were equipped to monitor outbreaks of influenza worldwide in order to establish the identity of the responsible viruses and, if necessary, take steps to produce appropriate vaccines.
During the 1960s effective vaccines came into use for measles and rubella (German measles). Both evoked a certain amount of controversy. In the case of measles, it was contended in the Western world that measles acquired in childhood is not a particularly hazardous malady and that the naturally acquired disease evokes permanent immunity in the vast majority of cases. The original vaccine, on the other hand, induced a certain number of adverse reactions, and the duration of the immunity it conferred was uncertain. In 1968 an improved measles vaccine was developed. By 2000 measles had been eliminated from the United States. Subsequent lapses in vaccination, however, resulted in its reemergence.
The situation with rubella vaccination was different. Rubella is a fundamentally mild affliction, and the only cause for anxiety is its proclivity to induce congenital deformities if a pregnant woman acquires the disease. Once an effective vaccine was available, the problem was the extent to which it should be used. Ultimately, consensus was reached that all girls who had not already had the disease should be vaccinated at about 12 years of age. In the United States children are routinely immunized against measles, mumps, and rubella at the age of 15 months.
The immune response
With advances in cell biology in the second half of the 20th century came a more profound understanding of both normal and abnormal conditions in the body. Electron microscopy enabled observers to peer more deeply into the structures of the cell, and chemical investigations revealed clues to their functions in the cell’s intricate metabolism. The overriding importance of the nuclear genetic material DNA (deoxyribonucleic acid) in regulating the cell’s protein and enzyme production lines became evident. A clearer comprehension also emerged of the ways in which the cells of the body defend themselves by modifying their chemical activities to produce antibodies against injurious agents.
Well into the 20th century, immunity referred mostly to the means by which an animal resists invasion by a parasite or microorganism. About the mid-20th century, however, there arose a growing realization that immunity and immunology cover a much wider field and are concerned with mechanisms for preserving the integrity of the individual. The introduction of organ transplantation, with its dreaded complication of tissue rejection, brought this broader concept of immunology to the fore.
At the same time, research workers and clinicians began to appreciate the far-reaching implications of immunity in relation to endocrinology, genetics, tumour biology, and the biology of a number of other maladies. The so-called autoimmune diseases were found to be caused by an aberrant series of immune responses by which the body’s own cells are attacked. Suspicion grew that a number of major disorders, such as diabetes, rheumatoid arthritis, and multiple sclerosis, were associated with similar mechanisms.
In some conditions viruses were found to invade the genetic material of cells and distort their metabolic processes. Such viruses may lie dormant for many years before becoming active. This was discovered to be the underlying cause of certain cancers, such as primary hepatocellular carcinoma (associated with hepatitis B virus) and adult T-cell leukemia (caused by human T-cell lymphotropic virus type I, or HTLV-I). Acquired immunodeficiency syndrome (AIDS) was found to be caused by human immunodeficiency virus (HIV), which has a long latent period and then attacks helper T cells, the white blood cells that coordinate the body's immune responses. The result is that the affected person cannot mount an effective immune response to infections or malignancies.
Endocrinology
At the beginning of the 20th century, endocrinology was in its infancy. Indeed, it was not until 1905 that Ernest Starling, a pupil of British physiologist Edward Sharpey-Schafer, introduced the term hormone for the internal secretions of the endocrine glands. In 1891 English physician George Redmayne Murray achieved the first success in treating myxedema (the common form of hypothyroidism) with an extract of the thyroid gland. Three years later, Sharpey-Schafer and George Oliver identified in extracts of the adrenal glands a substance that raised blood pressure. In 1901 Jokichi Takamine, a Japanese chemist working in the United States, isolated this active principle, known as epinephrine (adrenaline).
Insulin
For more than 30 years, some of the greatest minds in physiology sought the cause of diabetes mellitus. In 1889 German physicians Joseph von Mering and Oskar Minkowski showed that removal of the pancreas in dogs produced the disease. In 1901 American pathologist Eugene Lindsay Opie described degenerative changes in the clumps of cells in the pancreas known as the islets of Langerhans, thus confirming the association between failure in the functioning of these cells and diabetes. Sharpey-Schafer concluded that the islets of Langerhans secrete a substance that controls the metabolism of carbohydrate. The outstanding event of the early years of the 20th century in endocrinology was the discovery of that substance, insulin.
In 1921 Romanian physiologist Nicolas C. Paulescu reported the discovery of a substance called pancrein (later thought to have been insulin) in pancreatic extracts from dogs. Paulescu found that diabetic dogs given an injection of unpurified pancrein experienced a temporary decrease in blood glucose levels. Also in 1921, working independently of Paulescu, Canadian physician Frederick Banting and American-born Canadian physician Charles H. Best isolated insulin. They then worked with Canadian chemist James B. Collip and Scottish physiologist J.J.R. Macleod to purify the substance. The following year a 14-year-old boy with severe diabetes became the first person to be treated successfully with the pancreatic extracts. Almost overnight the lot of the diabetic patient changed from a sentence of near-certain death to the prospect not only of survival but of a long and healthy life.
Insulin subsequently became available in a variety of forms, but synthesis on a commercial scale was not achieved easily, and for decades the only source of the hormone was the pancreas of animals. Moreover, one of its practical disadvantages was that it had to be given by injection. Consequently, an intense search was conducted for some alternative substance that would be active when taken by mouth. Various preparations, known as oral hypoglycemic agents, appeared that were effective to a certain extent in controlling diabetes, but evidence indicated that these were of value only in relatively mild cases of the disease. By the early 1980s, however, strains of bacteria had been genetically modified with recombinant DNA technology to produce human insulin, freeing manufacture of the hormone from its dependence on animal sources. Human insulin was available in a form that acted quickly but transiently (short-acting insulin) and in a biochemically modified form designed to prolong its action for up to 24 hours (long-acting insulin). Another type of insulin acted rapidly, with the hormone beginning to lower blood glucose within 10 to 30 minutes of administration; such rapid-acting insulin was made available in an inhalable form in 2014.
Cortisone
Another major advance in endocrinology came from the Mayo Clinic, in Rochester, Minnesota. In 1949 Philip Showalter Hench and colleagues announced that a substance isolated from the cortex of the adrenal gland had a dramatic effect upon rheumatoid arthritis. This was compound E, or cortisone, as it came to be known, which had been isolated by Edward C. Kendall in 1935. Cortisone and its many derivatives proved to be potent anti-inflammatory agents. Although not a cure for rheumatoid arthritis, cortisone could temporarily control acute exacerbations of the disease, and it provided relief in other conditions, such as acute rheumatic fever, certain kidney diseases, certain serious diseases of the skin, and some allergic conditions, including acute exacerbations of asthma. Of even greater long-term importance was its value as a research tool.
Sex hormones
Not the least of the advances in endocrinology was the increasing knowledge and understanding of the sex hormones. The accumulation of this knowledge was critical to dealing with the issue of birth control. After an initial stage of hesitancy, the contraceptive pill, with its basic rationale of preventing ovulation, was accepted by the vast majority of family-planning organizations and many gynecologists as the most satisfactory method of contraception. Its risks, practical and theoretical, introduced a note of caution, but this was not sufficient to detract from the wide appeal of its effectiveness and ease of use.
Vitamins
In the field of nutrition, the outstanding advance of the 20th century was the discovery and appreciation of the importance to health of the “accessory food factors,” or vitamins. Various workers had shown that animals did not thrive on a synthetic diet containing all the correct amounts of protein, fat, and carbohydrate, and they suggested that there must be some unknown ingredients in natural food that were essential for growth and the maintenance of health. But little progress was made in this field until the classic experiments of English biochemist Frederick Gowland Hopkins were published in 1912. These were so conclusive that there could be no doubt that what he called “accessory substances” were essential for health and growth.
The name vitamine was suggested for these substances by Polish biochemist Casimir Funk in the belief that they were amines, certain compounds derived from ammonia. In due course, when it was realized that they were not amines, the term was altered to vitamin.
Once the concept of vitamins was established on a firm scientific basis, it was not long before their identity began to be revealed. Soon there was a long series of vitamins, best known by the letters of the alphabet by which they were designated when their chemical identities were still unknown. Once diets were supplemented with foods containing particular vitamins, deficiency diseases such as rickets (due to deficiency of vitamin D) and scurvy (due to lack of vitamin C, or ascorbic acid) practically disappeared from Western countries, while deficiency diseases such as beriberi (caused by lack of vitamin B1, or thiamine), which were endemic in Eastern countries, either disappeared or could be remedied with the greatest of ease.
The isolation of vitamin B12, or cyanocobalamin, was of particular interest because it almost rounded off the fascinating story of how pernicious anemia was brought under control. Throughout the first two decades of the century, the diagnosis of pernicious anemia, like that of diabetes mellitus, was nearly equivalent to a death sentence. Unlike the more common form of so-called secondary anemia, it did not respond to the administration of suitable iron salts, and no other form of treatment was effective; hence the grimly appropriate name pernicious anemia.
In the early 1920s, George Richards Minot, an investigator at Harvard University, became interested in work being done by American pathologist George H. Whipple on the beneficial effects of raw beef liver in severe experimental anemia. With a Harvard colleague, William P. Murphy, he decided to investigate the effect of raw liver in patients with pernicious anemia, and in 1926 they were able to announce that this form of therapy was successful. The validity of their findings was amply confirmed, and the fear of pernicious anemia came to an end.
Many years were to pass before the rationale of liver therapy in pernicious anemia was fully understood. In 1948, however, almost simultaneously in the United States and Britain, the active principle, cyanocobalamin, was isolated from liver, and this vitamin became the standard treatment for pernicious anemia.