Understanding of clouds, fog, and dew
Most of the names given to clouds (cirrus, cumulus, stratus, nimbus, and their combinations) were coined in 1803 by the English meteorologist Luke Howard. Howard’s effort was not simply taxonomic; he recognized that clouds reflect in their shapes and changing forms “the general causes which effect all the variations of the atmosphere.”
After Guericke’s experiments it was widely believed that water vapour condenses into cloud as soon as the air containing it cools to the dew point. That this is not necessarily so was proved by Paul-Jean Coulier of France from experiments reported in 1875. Coulier found that the sudden expansion of air in glass flasks failed to produce an artificial cloud if the air in the system was filtered through cotton wool. He concluded that dust in the air was essential to the formation of cloud in the flask.
From about the mid-1820s, efforts were made to classify precipitation in terms of the causes behind the lowering of temperature. In 1841 the American astronomer-meteorologist Elias Loomis recognized the following causes: warm air coming into contact with cold earth or water, responsible for fog; mixing of warm and cold currents, which commonly results in light rains; and sudden transport of air into high regions, as by flow up a mountain slope or by warm currents riding over an opposing current of cold air, which may produce heavy rains.
Observation and study of storms
Storms, particularly tropical revolving storms, were subjects of much interest. As early as 1697 some of the more spectacular features of revolving storms were recorded in William Dampier’s New Voyage Round the World. On July 4, 1687, Dampier’s ship survived the passage of what he called a “tuffoon” off the coast of China. The captain’s vivid account of this experience clearly describes the calm central eye of the storm and the passage of winds from opposite directions as the storm moved past. In 1828 Heinrich Wilhelm Dove, a Prussian meteorologist, recognized that tropical revolving storms are traveling systems with strong winds moving counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. The whirlwind character of these storms was independently established by the American meteorologist William C. Redfield in the case of the September hurricane that struck New England in 1821. He noted that in central Connecticut the trees had been toppled toward the northwest, whereas some 80 kilometres westward they had fallen in the opposite direction. Redfield identified the belt between the Equator and the tropics as the region in which hurricanes are generated, and he recognized how the tracks of these storms tend to veer eastward when they enter the belt of westerly winds at about latitude 30° N. In 1849 Sir William Reid, a British meteorologist and military engineer, studied the revolving storms that occur south of the Equator in the Indian Ocean and confirmed that they have reversed rotations and curvatures of path compared with those of the Northern Hemisphere. Capt. Henry Piddington subsequently investigated revolving storms affecting the Bay of Bengal and Arabian Sea, and in 1855 he named these cyclones in his Sailor’s Horn-book for the Laws of Storms in all Parts of the World.
Beginning in 1835, James Pollard Espy, an American meteorologist, began extensive studies of storms from which he developed a theory to explain their sources of energy. Radially convergent winds, he believed, cause the air to rise in their area of collision. Upward movement of moist air is attended by condensation and precipitation. Latent heat released through the change of vapour to cloud or water causes further expansion and rising of the air. The higher the moist air rises the more the equilibrium of the system is disturbed, and this equilibrium cannot be restored until moist air at the surface ceases to flow toward the ascending column.
That radially convergent winds are not necessary to the rising of large air masses was demonstrated by Loomis in the case of a great storm that passed across the northeastern United States in December 1836. From his studies of wind patterns, changes of temperature, and changes in barometric pressure, he concluded that a cold northwest wind had displaced a wind blowing from the southeast by flowing under it. The southeast wind made its escape by ascending from Earth’s surface. Loomis had recognized what today would be called a frontal surface.
Weather and climate
Modern meteorology began when the daily weather map was developed as a device for analysis and forecasting, and the instrument that made this kind of map possible was the electromagnetic telegraph. In the United States the first telegraph line was strung in 1844 between Washington, D.C., and Baltimore. Concurrently with the expansion of telegraphic networks, the physicist Joseph Henry arranged for telegraph companies to be supplied with meteorological instruments in exchange for telegraphing current weather data to the Smithsonian Institution. Some 500 stations had joined this cooperative effort by 1860. The Civil War temporarily prevented further expansion, but, meanwhile, a disaster of a different order had accelerated development of synoptic meteorology in Europe. On Nov. 14, 1854, an unexpected storm wrecked British and French warships off Balaklava on the Crimean Peninsula. Had word of the approaching storm been telegraphed to this port in the Black Sea, the ships might have been saved. This mischance led in 1856 to the establishment of a national storm-warning service in France. In 1863 the Paris Observatory began publishing the first weather maps in modern format.
The first national weather service in the United States began operations in 1871 as an agency of the Department of War. The initial objective was to provide storm warnings for the Gulf and Atlantic coasts and the Great Lakes. In 1877 forecasts of temperature changes and precipitation averaged 74 percent in accuracy, as compared with 79 percent for cold-wave warnings. After 1878 daily weather maps were published.
Synoptic meteorology made possible the tracking of storm systems over wide areas. In 1868 the British meteorologist Alexander Buchan published a map showing the travels of a cyclonic depression across North America, the Atlantic, and into northern Europe. In the judgment of Sir Napier Shaw, Buchan’s study marks the entry of modern meteorology, with “the weather map as its main feature and forecasting its avowed object.”
In addition to weather maps, a variety of other kinds of maps showing regional variations in the components of weather and climate were produced. In 1817 Alexander von Humboldt published a map showing the distribution of mean annual temperatures over the greater part of the Northern Hemisphere. Humboldt was the first to use isothermal lines in mapping temperature. Buchan drew the first maps of mean monthly and annual pressure for the entire world. Published in 1869, these maps added much to knowledge of the general circulation of the atmosphere. In 1886 Léon-Philippe Teisserenc de Bort of France published maps showing mean annual cloudiness over Earth for each month and the year. The first world map of precipitation showing mean annual precipitation by isohyets was the work of Loomis in 1882. This work was further refined in 1899 by the maps of the British cartographer Andrew John Herbertson, which showed precipitation for each month of the year.
Although the 19th century was still an age of meteorological and climatological exploration, broad syntheses of older information nevertheless kept pace fairly well with the acquisition of new data. For example, Julius Hann’s massive Handbuch der Klimatologie (“Handbook of Climatology”), first issued in 1883, is mainly a compendium of works published in the Meteorologische Zeitschrift (“Journal of Meteorology”). The Handbuch was kept current in revised editions until 1911, and this work is still sometimes called the most skillfully written account of world climate.
The 20th century: modern trends and developments
Geologic sciences
The development of the geologic sciences in the 20th century has been influenced by two major “revolutions.” The first involves dramatic technological advances that have resulted in vastly improved instrumentation, the prime examples being the many types of highly sophisticated computerized devices. The second is centred on the development of the plate tectonics theory, which is the most profound and influential conceptual advance the Earth sciences have ever known.
Modern technological developments have affected all the different geologic disciplines. Their impact has been particularly notable in such activities as radiometric dating, experimental petrology, crystallography, chemical analysis of rocks and minerals, micropaleontology, and seismological exploration of Earth’s deep interior.
Radiometric dating
In 1905, shortly after the discovery of radioactivity, the American chemist Bertram Boltwood suggested that lead is one of the disintegration products of uranium, in which case the older a uranium-bearing mineral is, the greater its proportion of lead should be. Analyzing specimens whose relative geologic ages were known, Boltwood found that the ratio of lead to uranium did indeed increase with age. After estimating the rate of this radioactive change, he calculated that the absolute ages of his specimens ranged from 410 million to 2.2 billion years. Though his figures were too high by about 20 percent, their order of magnitude was enough to dispose of the short scale of geologic time proposed by Lord Kelvin.
Versions of the modern mass spectrometer were invented in the early 1920s and 1930s, and during World War II the device was improved substantially to help in the development of the atomic bomb. Soon after the war, Harold C. Urey and G.J. Wasserburg applied the mass spectrometer to the study of geochronology. This device separates the different isotopes of the same element and can measure the variations in these isotopic abundances to within one part in 10,000. By determining the amount of the parent and daughter isotopes present in a sample and by knowing their rate of radioactive decay (each radioisotope has its own decay constant), the isotopic age of the sample can be calculated. For dating minerals and rocks, investigators commonly use the following couplets of parent and daughter isotopes: thorium-232–lead-208, uranium-235–lead-207, samarium-147–neodymium-143, rubidium-87–strontium-87, potassium-40–argon-40, and argon-40–argon-39. The SHRIMP (Sensitive High Resolution Ion Microprobe) enables the accurate determination of the uranium-lead age of the mineral zircon, and this has revolutionized the understanding of the isotopic age of formation of zircon-bearing igneous granitic rocks. Another technological development is the ICP-MS (Inductively Coupled Plasma Mass Spectrometer), which is able to provide the isotopic age of the minerals zircon, titanite, rutile, and monazite. These minerals are common to many igneous and metamorphic rocks.
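The arithmetic behind such a measurement can be sketched from the standard decay equation: if a mineral started with none of the daughter isotope, its age is t = (1/λ) ln(1 + D/P), where D/P is the measured daughter-to-parent ratio and λ the decay constant. A minimal illustration, using the commonly cited decay constant for rubidium-87 and a made-up isotope ratio:

```python
import math

def isotopic_age(daughter_parent_ratio, decay_constant_per_year):
    """Age from the decay equation t = (1/lambda) * ln(1 + D/P),
    assuming no daughter isotope was present when the mineral formed."""
    return math.log(1.0 + daughter_parent_ratio) / decay_constant_per_year

# Rubidium-87 decay constant, approximately 1.42e-11 per year.
# The daughter/parent ratio below is an illustrative, invented value.
age = isotopic_age(0.01, 1.42e-11)
print(f"{age / 1e9:.2f} billion years")
```

In practice corrections are needed for daughter isotope already present at formation (handled with isochron methods), but the logarithmic relation above is the core of every parent-daughter couplet listed.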
Such techniques have had an enormous impact on scientific knowledge of Earth history because precise dates can now be obtained on rocks in all orogenic (mountain) belts ranging in age from the early Archean (about 4 billion years old) to the early Neogene (roughly 20 million years old). The oldest known rocks on Earth, estimated at 4.28 billion years old, are the faux amphibolite volcanic deposits of the Nuvvuagittuq greenstone belt in Quebec, Canada. A radiometric dating technique that measures the ratio of the rare-earth elements neodymium and samarium present in a rock sample was used to produce the estimate. Also, by extrapolating backward in time to a situation when there was no lead that had been produced by radiogenic processes, a figure of about 4.6 billion years is obtained for the minimum age of Earth. This figure is of the same order as ages obtained for certain meteorites and lunar rocks.
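The backward extrapolation mentioned above can be sketched numerically. Because uranium-235 decays much faster than uranium-238, the ratio of radiogenic lead-207 to lead-206 grows with time in a known way, and one can solve for the age at which a measured ratio is reached. The decay constants and present-day uranium abundance ratio below are standard published values; the target lead ratio is an illustrative figure near those measured in meteorites:

```python
import math

# Present-day decay constants (per year) for the two uranium isotopes
LAM_U235 = 9.8485e-10
LAM_U238 = 1.55125e-10
U238_PER_U235 = 137.88  # present-day abundance ratio of 238U to 235U

def radiogenic_pb_ratio(t):
    """Ratio of radiogenic 207Pb to 206Pb accumulated over t years."""
    return (math.exp(LAM_U235 * t) - 1.0) / (
        U238_PER_U235 * (math.exp(LAM_U238 * t) - 1.0))

def solve_age(target_ratio, lo=1e9, hi=6e9):
    """Bisection: the ratio rises monotonically with age, so find the
    age at which it matches the measured value."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if radiogenic_pb_ratio(mid) < target_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative meteoritic 207Pb/206Pb ratio; yields an age near 4.6 billion years
age = solve_age(0.63)
print(f"{age / 1e9:.2f} billion years")
```

This is the essence of the lead-isotope argument: meteoritic lead ratios, run backward through the two uranium decay clocks, converge on an age of roughly 4.6 billion years.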
Experimental study of rocks
Experimental petrology began with the work of Jacobus Henricus van ’t Hoff, one of the founders of physical chemistry. Between 1896 and 1908 he elucidated the complex sequence of chemical reactions attending the precipitation of salts (evaporites) from the evaporation of seawater. Van ’t Hoff’s aim was to explain the succession of mineral salts present in Permian rocks of Germany. His success at producing from aqueous solutions artificial minerals and rocks like those found in natural salt deposits stimulated studies of minerals crystallizing from silicate melts simulating the magmas from which igneous rocks have formed. Working at the Geophysical Laboratory of the Carnegie Institution of Washington, D.C., Norman L. Bowen conducted extensive phase-equilibrium studies of silicate systems, brought together in his Evolution of the Igneous Rocks (1928). Experimental petrology, both at the low-temperature range explored by van ’t Hoff and in the high ranges of temperature investigated by Bowen, continues to provide laboratory evidence for interpreting the chemical history of sedimentary and igneous rocks. Experimental petrology also provides valuable data on the stability limits of individual metamorphic minerals and of the reactions between different minerals in a wide variety of chemical systems. These experiments are carried out at elevated temperatures and pressures that simulate those operating in different levels of Earth’s crust. Thus, the metamorphic petrologist today can compare the minerals and mineral assemblages found in natural rocks with comparable examples produced in the laboratory, the pressure–temperature limits of which have been well defined by experimental petrology.
Another branch of experimental science relates to the deformation of rocks. In 1906 the American physicist P.W. Bridgman developed a technique for subjecting rock samples to high pressures similar to those deep in the Earth. Studies of the behaviour of rocks in the laboratory have shown that their strength increases with confining pressure but decreases with rise in temperature. Down to depths of a few kilometres the strength of rocks would be expected to increase. At greater depths the temperature effect should become dominant, and response to stress should result in flow rather than fracture of rocks. In 1959 two American geologists, Marion King Hubbert and William W. Rubey, demonstrated that fluids in the pores of rock may reduce internal friction and permit gliding over nearly horizontal planes of the large overthrust blocks associated with folded mountains. More recently the Norwegian petrologist Hans Ramberg performed many experiments with a large centrifuge that produced a negative gravity effect and thus was able to create structures simulating salt domes, which rise because of the relatively low density of the salt in comparison with that of surrounding rocks. With all these deformation experiments, it is necessary to scale down as precisely as possible variables such as the time and velocity of the experiment and the viscosity and temperature of the material from the natural to the laboratory conditions.
Crystallography
In the 19th century crystallographers were able to study only the external form of minerals, and it was not until 1895, when the German physicist Wilhelm Conrad Röntgen discovered X-rays, that it became possible to consider their internal structure. In 1912 another German physicist, Max von Laue, realized that X-rays were scattered and deflected at regular angles when they passed through a copper sulfate crystal, and so he produced the first X-ray diffraction pattern on a photographic film. A year later William Bragg of Britain and his son Lawrence perceived that such a pattern reflects the layers of atoms in the crystal structure, and they succeeded in determining for the first time the atomic crystal structure of the mineral halite (sodium chloride). These discoveries had a long-lasting influence on crystallography because they led to the development of the X-ray powder diffractometer, which is now widely used to identify minerals and to ascertain their crystal structure.
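The Braggs’ insight is captured in Bragg’s law, nλ = 2d sin θ: diffraction peaks appear only at angles where X-rays reflected from successive atomic layers, spaced d apart, interfere constructively. A short sketch predicting a diffraction angle for halite, using its well-known cubic cell edge of about 5.64 angstroms and the copper K-alpha wavelength typical of a powder diffractometer:

```python
import math

def bragg_two_theta(wavelength_angstrom, d_spacing_angstrom, order=1):
    """Diffraction angle 2-theta (degrees) from Bragg's law:
    n * lambda = 2 * d * sin(theta)."""
    sin_theta = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
    return 2.0 * math.degrees(math.asin(sin_theta))

# Halite: cubic cell edge ~5.64 angstroms, so the (200) layer spacing is ~2.82.
# Copper K-alpha radiation has a wavelength of ~1.5406 angstroms.
two_theta = bragg_two_theta(1.5406, 2.82)
print(f"2-theta = {two_theta:.1f} degrees")
```

Matching a set of such measured angles against tabulated spacings is exactly how the powder diffractometer identifies an unknown mineral.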
The chemical analysis of rocks and minerals
Advanced analytic chemical equipment has revolutionized the understanding of the composition of rocks and minerals. For example, the XRF (X-Ray Fluorescence) spectrometer can quantify the abundances of many major and trace elements in a rock sample down to parts-per-million concentrations. This geochemical method has been used to differentiate successive stages of igneous rocks in the plate-tectonic cycle. The metamorphic petrologist can use the bulk composition of a recrystallized rock to define the structure of the original rock, assuming that no structural change has occurred during the metamorphic process. In addition, the electron microprobe, which bombards a thin microscopic slice of a mineral in a sample with a beam of electrons, can determine the chemical composition of the mineral almost instantly. This method has wide applications in, for example, the fields of industrial mineralogy, materials science, igneous geochemistry, and metamorphic petrology.
Micropaleontology
Microscopic fossils, such as ostracods, foraminifera, and pollen grains, are common in sediments of the Mesozoic and Cenozoic eras (from about 251 million years ago to the present). Because the rock chips brought up in oil wells are so small, a high-resolution instrument known as a scanning electron microscope had to be developed to study the microfossils. The classification of microfossils of organisms that lived within relatively short time spans has enabled Mesozoic-Cenozoic sediments to be subdivided in remarkable detail. This technique also has had a major impact on the study of Precambrian life (i.e., organisms that existed more than 542 million years ago). Carbonaceous spheroids and filaments about 7–10 millimetres (0.3–0.4 inch) long are recorded in 3.5 billion-year-old sediments in the Pilbara region of northwestern Western Australia and in the lower Onverwacht Series of the Barberton belt in South Africa; these are the oldest reliable records of life on Earth.
Seismology and the structure of Earth
Earthquake study was institutionalized in 1880 with the formation of the Seismological Society of Japan under the leadership of the English geologist John Milne. Milne and his associates invented the first accurate seismographs, including the instrument later known as the Milne seismograph. Seismology has revealed much about the structure of Earth’s core, mantle, and crust. The English seismologist Richard Dixon Oldham’s studies of earthquake records in 1906 led to the discovery of Earth’s core. From studies of the Croatian quake of Oct. 8, 1909, the geophysicist Andrija Mohorovičić discovered the discontinuity (often called the Moho) that separates the crust from the underlying mantle.
Today there are more than 1,000 seismograph stations around the world, and their data are used to compile seismicity maps. These maps show that earthquake epicentres are aligned in narrow, continuous belts along the boundaries of lithospheric plates (see below). The earthquake foci outline the mid-oceanic ridges in the Atlantic, Pacific, and Indian oceans where the plates separate, while around the margins of the Pacific where the plates converge, they lie in a dipping plane, or Benioff zone, that defines the position of the subducting plate boundary to depths of about 700 kilometres.
Since 1950, additional information on the crust has been obtained from the analysis of artificial tremors produced by chemical explosions. These studies have shown that the Moho is present under all continents at an average depth of 35 kilometres and that the crust above it thickens under young mountain ranges to depths of 70 kilometres in the Andes and the Himalayas. In such investigations the reflections of the seismic waves generated from a series of “shot” points are also recorded, and this makes it possible to construct a profile of the subsurface structure. This is seismic reflection profiling, the main method of exploration used by the petroleum industry. During the late 1970s a new technique for generating seismic waves was invented: thumping and vibrating the surface of the ground with a gas-propelled piston from a large truck.
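The depth calculation underlying reflection profiling is simple in principle: a recorded echo gives the two-way travel time of a wave that went down to a reflector and back, so depth is velocity times time divided by two. A minimal sketch with illustrative numbers (the 6 km/s figure is a typical average crustal P-wave velocity, not a value from any particular survey):

```python
def reflector_depth_km(two_way_time_s, velocity_km_s):
    """Depth to a reflecting interface from two-way travel time:
    the wave travels down and back, so depth = velocity * time / 2."""
    return velocity_km_s * two_way_time_s / 2.0

# Illustrative values: a 2-second two-way time and an average crustal
# P-wave velocity of about 6 km/s place the reflector at 6 km depth.
depth = reflector_depth_km(2.0, 6.0)
print(f"reflector at {depth:.0f} km")
```

Real surveys must first estimate the velocity structure itself, typically from how arrival times vary with the distance between shot point and receiver, before travel times can be converted to depths in this way.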