
For ages a source of food and common salt, the sea is increasingly becoming a source of water, chemicals, and energy. In 1967 Key West, Fla., became the first U.S. city to be supplied solely by water from the sea, drawing its supplies from a plant that produces more than 2 million gallons of refined water daily. Magnesia was extracted from the Mediterranean in the late 19th century; at present nearly all the magnesium metal used in the United States is mined from the sea at Freeport, Texas. Many ambitious schemes for using tidal power have been devised, but the first major tidal hydroelectric project was not completed until 1967, when a dam and electrical generating equipment were installed across the Rance River in Brittany. The seafloor and the strata below the continental shelves are also sources of mineral wealth. Concretions of manganese oxide, evidently formed in the process of subaqueous weathering of volcanic rocks, have been found in dense concentrations with a total abundance of about 10¹¹ tons. In addition to manganese, these concretions contain copper, nickel, cobalt, zinc, and molybdenum. To date, oil and gas have been the most valuable products extracted from beneath the sea.

Ocean bathymetry

Modern bathymetric charts show that about 20 percent of the surfaces of the continents are submerged to form continental shelves. Altogether the shelves form an area about the size of Africa. Continental slopes, which slant down from the outer edges of the shelves to the abyssal plains of the seafloor, are nearly everywhere furrowed by submarine canyons. The depths to which these canyons have been cut below sea level seem to rule out the possibility that they are drowned valleys cut by ordinary streams. More likely, the canyons were eroded by turbidity currents, dense mixtures of mud and water that originate as mudslides in the heads of the canyons and pour down their bottoms.

Profiling of the Pacific basin prior to and during World War II resulted in the discovery of hundreds of isolated eminences rising 1,000 or more metres above the floor. Of particular interest were seamounts in the shape of truncated cones, whose flat tops lie between about 1.6 kilometres and a few hundred metres below the surface. Harry H. Hess interpreted these flat-topped seamounts (guyots) as volcanic mountains planed off by wave action before they subsided to their present depths. Subsequent drilling in guyots west of Hawaii confirmed this view; samples of rocks from the tops contained fossils of Cretaceous age representing reef-building organisms of the kind that inhabit shallow water.

Ocean circulation, currents, and waves

Early in the 20th century Vilhelm Bjerknes, a Norwegian meteorologist, and V. Walfrid Ekman, a Swedish physical oceanographer, investigated the dynamics of ocean circulation and developed theoretical principles that influenced subsequent studies of currents in the sea. Bjerknes showed that very small forces resulting from pressure differences caused by the nonuniform density of seawater can initiate and maintain fluid motion. Ekman analyzed the influence of winds and Earth’s rotation on currents. He theorized that in a homogeneous medium the frictional drag of winds blowing across the surface sets successively lower layers of water in motion. The deeper a current so produced, the lower its velocity and the greater its deflection by the Coriolis effect (an apparent force, due to Earth’s rotation, that deflects a moving body to the right in the Northern Hemisphere and to the left in the Southern Hemisphere), until at some critical depth the induced current moves in a direction opposite to that of the wind.
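Ekman's solution, now known as the Ekman spiral, can be sketched numerically. In the sketch below, the 0.1 m/s surface speed and the 100-metre depth of reversal (the "Ekman depth") are illustrative assumptions, not values from the text; only the 45° surface deflection and the full reversal at the critical depth come from the theory itself.

```python
import math

def ekman_spiral(z, surface_speed=0.1, ekman_depth=100.0):
    """Speed (m/s) and direction of the wind-driven current at depth z
    (metres, positive downward), per Ekman's classical solution for a
    homogeneous ocean in the Northern Hemisphere.

    Direction is given in degrees clockwise from the wind; the surface
    current is deflected 45 degrees to the right of the wind.
    """
    # Speed decays exponentially with depth...
    speed = surface_speed * math.exp(-math.pi * z / ekman_depth)
    # ...while the direction rotates steadily clockwise, reaching a full
    # reversal (180 degrees from the surface current) at the Ekman depth.
    direction = 45.0 + 180.0 * z / ekman_depth
    return speed, direction

surface = ekman_spiral(0.0)    # 45 degrees to the right of the wind
at_depth = ekman_spiral(100.0) # opposite the surface current, ~4% of its speed
```

At the Ekman depth the factor exp(−π) ≈ 0.043 has reduced the speed to about 4 percent of its surface value, which is why the reversed deep flow described above is so much weaker than the surface drift.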

Results of many investigations suggest that the forces that drive the ocean currents originate at the interface between water and air. The direct transfer of momentum from the atmosphere to the sea is doubtless the most important driving force for currents in the upper parts of the ocean. Next in importance are differential heating, evaporation, and precipitation across the air-sea boundary, altering the density of seawater and thus initiating movement of water masses with different densities. Studies of the properties and motion of water at depth have shown that strong currents also exist in the deep sea and that distinct types of water travel far from their geographic sources. For example, the highly saline water of the Mediterranean that flows through the Strait of Gibraltar has been traced over a large part of the Atlantic, where it forms a deepwater stratum that is circulated far beyond that ocean in currents around Antarctica.

Improvements in devices for determining the motion of seawater in three dimensions have led to the discovery of new currents and to the disclosure of unexpected complexities in the circulation of the oceans generally. In 1951 a huge countercurrent moving eastward across the Pacific was found below depths as shallow as 20 metres, and in the following year an analogous equatorial undercurrent was discovered in the Atlantic. In 1957 a deep countercurrent was detected beneath the Gulf Stream with the aid of subsurface floats emitting acoustic signals.

Since the 1970s Earth-orbiting satellites have yielded much information on the temperature distribution and thermal energy of ocean currents such as the Gulf Stream. Chemical analyses from the Geosecs program (Geochemical Ocean Sections Study) have made it possible to determine the circulation paths, speeds, and mixing rates of ocean currents.

Surface waves of the ocean are also exceedingly complex, at most places and times reflecting the coexistence and interference of several independent wave systems. During World War II, interest in forecasting wave characteristics was stimulated by the need for this critical information in the planning of amphibious operations. The oceanographers H.U. Sverdrup and Walter Heinrich Munk combined theory and empirical relationships in developing a method of forecasting “significant wave height”—the average height of the highest third of the waves in a wave train. Subsequently this method was improved to permit wave forecasters to predict optimal routes for mariners. Forecasting of the most destructive of all waves, tsunamis, or “tidal waves,” caused by submarine quakes and volcanic eruptions, is another recent development. Soon after 159 persons were killed in Hawaii by the tsunami of 1946, the U.S. Coast and Geodetic Survey established a seismic sea-wave warning system. Using a seismic network to locate the epicentres of submarine quakes, the installation predicts the arrival of tsunamis at points around the Pacific basin, often hours before the waves themselves arrive.
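The Sverdrup–Munk statistic is simple to state in code: rank the observed heights and average the top third. The nine wave heights below are invented purely for illustration.

```python
def significant_wave_height(heights):
    """Mean height of the highest one-third of the waves in a record,
    the Sverdrup-Munk 'significant wave height'."""
    ranked = sorted(heights, reverse=True)
    top_third = ranked[: max(1, len(ranked) // 3)]
    return sum(top_third) / len(top_third)

# Nine invented wave heights in metres; the statistic averages the top three.
record = [0.5, 0.8, 1.0, 1.2, 1.5, 1.7, 2.0, 2.4, 3.1]
h_sig = significant_wave_height(record)  # (3.1 + 2.4 + 2.0) / 3 = 2.5
```

The measure corresponds well with the wave height an experienced observer reports by eye, which is one reason it became the standard forecast quantity.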

Glacier motion and the high-latitude ice sheets

Beginning around 1948, principles and techniques in metallurgy and solid-state physics were brought to bear on the mechanics of glacial movements. Laboratory studies showed that glacial ice deforms like other crystalline solids (such as metals) at temperatures near the melting point. Continued stress produces permanent deformation. In addition to plastic deformation within a moving glacier, the glacier itself may slide over its bed by mechanisms involving pressure melting and refreezing and accelerated plastic flow around obstacles. The causes underlying changes in rate of glacial movement, in particular spectacular accelerations called surges, require further study. Surges involve massive transfer of ice from the upper to the lower parts of glaciers at rates of as much as 20 metres a day, in comparison with normal advances of a few metres a year.

As a result of numerous scientific expeditions into Greenland and Antarctica, the dimensions of the remaining great ice sheets are fairly well known from gravimetric and seismic surveys. In parts of both continents it has been determined that the base of the ice is below sea level, probably due at least in part to subsidence of the crust under the weight of the caps. In 1966 a borehole was drilled 1,390 metres to bedrock on the North Greenland ice sheet, and two years later a similar borehole of 2,162 metres was drilled through the Antarctic ice at Byrd Station. From the study of annual incremental layers and analyses of oxygen isotopes, the bottom layers of ice cored in Greenland were estimated to be more than 150,000 years old, compared with 100,000 years for the Antarctic core. With the advent of geochemical dating of rocks it has become evident that the Ice Age, which in the earlier part of the century was considered to have occurred during the Quaternary Period, actually began much earlier. In Antarctica, for example, potassium-argon age determinations of lava overlying glaciated surfaces and sedimentary deposits of glacial origin show that glaciers existed on this continent at least 10 million years ago.

The study of ice sheets has benefited much from data produced by advanced instruments, computers, and orbiting satellites. The shape of ice sheets can be determined by numerical modeling, their heat budget from thermodynamic calculations, and their thickness with radar techniques. Colour images from satellites show the temperature distribution across the polar regions, which can be compared with the distribution of land and sea ice.


Atmospheric sciences

Probes, satellites, and data transmission

Kites equipped with meteorographs were used as atmospheric probes in the late 1890s, and in 1907 the U.S. Weather Bureau recorded the ascent of a kite to 7,044 metres above Mount Weather, Virginia.

In the 1920s the radio replaced the telegraph and telephone as the principal instrument for transmitting weather data. By 1936 the radio meteorograph (radiosonde) had been developed, with the capability of sending signals on relative humidity, temperature, and barometric pressure from unmanned balloons. Experimentation with balloons up to altitudes of about 31 kilometres showed that columns of warm air may rise more than 1.6 kilometres above Earth’s surface and that the lower atmosphere is often stratified, with winds in the different layers blowing in different directions. During the 1930s airplanes began to be used for weather observations, and the years since 1945 have seen the development of rockets and weather satellites. TIROS (Television Infrared Observation Satellite), the world’s first weather satellite, was launched in 1960, and in 1964 the Nimbus satellite of the United States National Aeronautics and Space Administration (NASA) was rocketed into near-polar orbit.

There are two types of weather satellites: polar and geostationary. Polar satellites, such as Nimbus, orbit Earth at low altitudes of a few hundred kilometres, and, because of their progressive drift, they produce photographic coverage of the entire Earth every 24 hours. Geostationary satellites, first sent up in 1966, are stationed over the Equator at altitudes of about 35,000 kilometres and transmit data at regular intervals. Much information can be derived from the data collected by satellites. For example, wind speed and direction are measured from cloud trajectories, while temperature and moisture profiles of the atmosphere are calculated from infrared data.
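The roughly 35,000-kilometre altitude quoted for geostationary satellites follows from requiring the orbital period to equal one sidereal day. A quick check, using standard physical constants that are assumed here (they are not given in the text):

```python
import math

# Standard values (assumed, not taken from the article):
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24    # mass of Earth, kg
R_EARTH = 6.371e6     # mean radius of Earth, m
T_SIDEREAL = 86164.0  # one sidereal day, s

# Kepler's third law for a circular orbit: T^2 = 4*pi^2 * r^3 / (G*M),
# solved for the orbital radius r measured from Earth's centre.
r = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000.0  # ~35,800 km above the surface
```

A satellite at this altitude over the Equator completes one orbit per rotation of Earth and therefore hangs over a fixed point, which is what allows it to transmit imagery of the same region at regular intervals.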

Weather forecasting

Efforts at incorporating numerical data on weather into mathematical formulas that could then be used for forecasting were initiated early in the century at the Norwegian Geophysical Institute. Vilhelm Bjerknes and his associates at Bergen succeeded in devising equations relating the measurable components of weather, but their complexity precluded the rapid solutions needed for forecasting. Out of their efforts, however, came the polar front theory for the origin of cyclones and the now-familiar names of cold front, warm front, and stationary front for the leading edges of air masses (see climate: Atmospheric pressure and wind).

In 1922 the British mathematician Lewis Fry Richardson demonstrated that the complex equations of the Norwegian school could be reduced to long series of simple arithmetic operations. With no more than the desk calculators and slide rules then available, however, the solution of a problem in procedure only raised a new one in manpower. In 1946 the mathematician John von Neumann and his fellow workers at the Institute for Advanced Study, in Princeton, N.J., began work on an electronic device to do the computation faster than the weather developed. Four years later the von Neumann group could claim that, given adequate data, their computer could forecast the weather as well as a weatherman. Present-day numerical weather forecasting is achieved with the help of advanced computer analysis (see weather forecasting).

Cloud physics

Studies of cloud physics have shown that the nuclei around which water condenses vary widely in concentration and areal distribution, ranging from six per cubic centimetre over the oceans to more than 4 million per cubic centimetre in the polluted air of some cities. The droplets that condense on these foreign particles may be as small as 0.001 centimetre in diameter. Raindrops apparently may form directly from the coalescence of these droplets, as in the case of tropical rains, or, in the temperate zones, through the intermediary of ice crystals. According to the theory of Tor Bergeron and Walther Findeisen, vapour freezing onto ice crystals in the clouds enlarges the crystals until they fall. What finally reaches the ground depends on the temperature of the air below the cloud: snow if it is below freezing, rain if above.

Properties and structure of the atmosphere

Less than a year after the space age began with the launching of the Soviet Sputnik I in 1957, the U.S. satellite Explorer I was sent into orbit with a Geiger counter for measuring the intensity of cosmic radiation at different levels above the ground. At altitudes around 1,000 kilometres this instrument ceased to function due to saturation by charged particles. This and subsequent investigations showed that a zone of radiation encircles the world between about latitude 75° N and 75° S, with maximum intensities at 5,000 and 16,000 kilometres. Named after the American physicist James Van Allen, a leading investigator of this portion of Earth’s magnetosphere, these zones are responsive to events taking place on the Sun. The solar wind, a stream of atomic particles emanating from the Sun in all directions, seems to be responsible for the electrons entrapped in the Van Allen region as well as for the teardrop shape of the magnetosphere as a whole, with its tail pointing always away from the Sun.

In 1898 Léon Teisserenc de Bort, studying variations of temperature at high altitudes with the aid of balloons, discovered that the average decrease of temperature with height (about 5.5 °C per 1,000 metres of ascent) ceases at an elevation of about 11 kilometres, above which the temperature remains nearly constant at around −55 °C. He named the atmospheric zones below and above this boundary the troposphere and the stratosphere.
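Teisserenc de Bort's observation amounts to a piecewise-linear temperature profile. A minimal sketch, assuming an illustrative surface temperature of 5 °C chosen so that the profile levels off near the −55 °C he measured (the surface value is an assumption, not from the text):

```python
def temperature_at(height_m, surface_temp_c=5.0, lapse_per_m=5.5e-3,
                   tropopause_m=11000.0):
    """Temperature (degC) at a given height, per Teisserenc de Bort's
    balloon data: a steady decrease of about 5.5 degC per 1,000 m up to
    roughly 11 km, then a nearly constant value in the lower stratosphere.
    The 5 degC surface temperature is an illustrative assumption.
    """
    # Above the tropopause the decrease stops, so cap the height used
    # in the linear lapse-rate term.
    capped = min(height_m, tropopause_m)
    return surface_temp_c - lapse_per_m * capped

temperature_at(0.0)      # 5.0 degC at the surface
temperature_at(11000.0)  # -55.5 degC at the tropopause
temperature_at(20000.0)  # still -55.5 degC: near-isothermal stratosphere
```

The kink at 11 kilometres is precisely the feature that led him to distinguish the troposphere from the stratosphere.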

Toward the end of World War II the B-29 Superfortress came into use as the first large aircraft to cruise regularly at 10,000 metres. Heading westward from bases in the Pacific, these planes sometimes encountered unexpected head winds that slowed their flight by as much as 300 kilometres per hour. The jet streams, as these high-altitude winds were named, have been found to encircle Earth, following wavy courses and moving from west to east at velocities upward of 500 kilometres per hour. Aircraft have also proved useful in studies of the structure and dynamics of tropical hurricanes. Following the destruction wrought on the Atlantic coast of the United States in 1955 by hurricanes Connie and Diane, a national centre was established in Florida with the mission of locating and tracking hurricanes and, it is hoped, of learning how to predict their paths and dissipate their energy.