iron processing

iron processing, use of a smelting process to turn iron ore into a form from which products can be fashioned. Also included in this article is a discussion of the mining of iron and of its preparation for smelting.

Iron (Fe) is a relatively dense metal with a silvery white appearance and distinctive magnetic properties. It constitutes 5 percent by weight of the Earth’s crust, and it is the fourth most abundant element after oxygen, silicon, and aluminum. It melts at a temperature of 1,538° C (2,800° F).

Iron is allotropic—that is, it exists in different forms. Its crystal structure is either body-centred cubic (bcc) or face-centred cubic (fcc), depending on the temperature. In both crystallographic modifications, the basic configuration is a cube with iron atoms located at the corners. There is an extra atom in the centre of each cube in the bcc modification and in the centre of each face in the fcc. At room temperature, pure iron has a bcc structure referred to as alpha-ferrite; this persists until the temperature is raised to 912° C (1,674° F), when it transforms into an fcc arrangement known as austenite. With further heating, austenite remains until the temperature reaches 1,394° C (2,541° F), at which point the bcc structure reappears. This form of iron, called delta-ferrite, remains until the melting point is reached.
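These transition temperatures amount to a simple piecewise mapping from temperature to crystal structure. The short Python sketch below encodes that mapping for pure iron at atmospheric pressure; the function name and phase labels are illustrative conveniences, not drawn from any materials library.

```python
def iron_phase(temp_c: float) -> str:
    """Return the stable allotrope of pure iron at temp_c (degrees C, 1 atm)."""
    if temp_c < 912:           # body-centred cubic up to 912 C
        return "alpha-ferrite (bcc)"
    if temp_c < 1394:          # face-centred cubic from 912 to 1,394 C
        return "austenite (fcc)"
    if temp_c < 1538:          # bcc reappears just below the melting point
        return "delta-ferrite (bcc)"
    return "liquid"            # pure iron melts at 1,538 C

for t in (25, 1000, 1450, 1600):
    print(f"{t:>5} C -> {iron_phase(t)}")
```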

The pure metal is malleable and can be easily shaped by hammering, but apart from specialized electrical applications it is rarely used without adding other elements to improve its properties. Mostly it appears in iron-carbon alloys such as steels, which contain between 0.003 and about 2 percent carbon (the majority lying in the range of 0.01 to 1.2 percent), and cast irons with 2 to 4 percent carbon. At the carbon contents typical of steels, iron carbide (Fe3C), also known as cementite, is formed; this leads to the formation of pearlite, which under a microscope can be seen to consist of alternating laths of alpha-ferrite and cementite. Cementite is harder and stronger than ferrite but is much less malleable, so that vastly differing mechanical properties are obtained by varying the amount of carbon. At the higher carbon contents typical of cast irons, carbon may separate out as either cementite or graphite, depending on the manufacturing conditions. Again, a wide range of properties is obtained. This versatility of iron-carbon alloys leads to their widespread use in engineering and explains why iron is by far the most important of all the industrial metals.
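The carbon ranges quoted above can be read as a rough classification scheme, sketched below in Python. The function is hypothetical, and the cutoffs are the approximate figures from the text rather than sharp metallurgical boundaries.

```python
def classify_iron_carbon(carbon_pct: float) -> str:
    """Rough label for an iron-carbon alloy from its carbon content (weight %)."""
    if carbon_pct < 0.003:
        return "commercially pure iron"
    if carbon_pct <= 2.0:      # most steels lie between 0.01 and 1.2 percent
        return "steel"
    if carbon_pct <= 4.0:
        return "cast iron"
    return "outside the usual engineering range"

for c in (0.001, 0.4, 1.0, 3.2):
    print(f"{c:>5} % C -> {classify_iron_carbon(c)}")
```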

History

There is evidence that meteorites were used as a source of iron before 3000 BC, but extraction of the metal from ores dates from about 2000 BC. Production seems to have started in the copper-producing regions of Anatolia and Persia, where the use of iron compounds as fluxes to assist in melting may have accidentally caused metallic iron to accumulate on the bottoms of copper smelting furnaces. When iron making was properly established, two types of furnace came into use. Bowl furnaces were constructed by digging a small hole in the ground and arranging for air from a bellows to be introduced through a pipe or tuyere. Stone-built shaft furnaces, on the other hand, relied on natural draft, although they too sometimes used tuyeres. In both cases, smelting involved creating a bed of red-hot charcoal to which iron ore mixed with more charcoal was added. Chemical reduction of the ore then occurred, but, since primitive furnaces were incapable of reaching temperatures higher than 1,150° C (2,100° F), the normal product was a solid lump of metal known as a bloom. This may have weighed up to 5 kilograms (11 pounds) and consisted of almost pure iron with some entrapped slag and pieces of charcoal. The manufacture of iron artifacts then required a shaping operation, which involved heating blooms in a fire and hammering the red-hot metal to produce the desired objects. Iron made in this way is known as wrought iron. Sometimes too much charcoal seems to have been used, and iron-carbon alloys, which have lower melting points and can be cast into simple shapes, were made unintentionally. The applications of this cast iron were limited because of its brittleness, and in the early Iron Age only the Chinese seem to have exploited it. Elsewhere, wrought iron was the preferred material.

Although the Romans built furnaces with a pit into which slag could be run off, little change in iron-making methods occurred until medieval times. By the 15th century, many bloomeries used low shaft furnaces with water power to drive the bellows, and the bloom, which might weigh over 100 kilograms (220 pounds), was extracted through the top of the shaft. The final version of this kind of bloomery hearth was the Catalan forge, which survived in Spain until the 19th century. Another design, the high bloomery furnace, had a taller shaft and evolved into the 3-metre- (10-foot-) high Stückofen, which produced blooms so large they had to be removed through a front opening in the furnace.

The blast furnace appeared in Europe in the 15th century when it was realized that cast iron could be used to make one-piece guns with good pressure-retaining properties, but whether its introduction was due to Chinese influence or was an independent development is unknown. At first, the differences between a blast furnace and a Stückofen were slight. Both had square cross sections, and the main changes required for blast-furnace operation were an increase in the ratio of charcoal to ore in the charge and a taphole for the removal of liquid iron. The product of the blast furnace became known as pig iron from the method of casting, which involved running the liquid into a main channel connected at right angles to a number of shorter channels. The whole arrangement resembled a sow suckling her litter, and so the lengths of solid iron from the shorter channels were known as pigs.

Despite the military demand for cast iron, most civil applications required malleable iron, which until then had been made directly in a bloomery. The arrival of blast furnaces, however, opened up an alternative manufacturing route; this involved converting cast iron to wrought iron by a process known as fining. Pieces of cast iron were placed on a finery hearth, on which charcoal was being burned with a plentiful supply of air, so that carbon in the iron was removed by oxidation, leaving semisolid malleable iron behind. From the 15th century on, this two-stage process gradually replaced direct iron making, which nevertheless survived into the 19th century.

By the middle of the 16th century, blast furnaces were being operated more or less continuously in southeastern England. Increased iron production led to a scarcity of wood for charcoal and to its subsequent replacement by coal in the form of coke—a discovery that is usually credited to Abraham Darby in 1709. Because the higher strength of coke enabled it to support a bigger charge, much larger furnaces became possible, and weekly outputs of 5 to 10 tons of pig iron were achieved.

Next, the advent of the steam engine to drive blowing cylinders meant that the blast furnace could be provided with more air. This created the potential problem that pig iron production would far exceed the capacity of the finery process. Accelerating the conversion of pig iron to malleable iron was attempted by a number of inventors, but the most successful was the Englishman Henry Cort, who patented his puddling furnace in 1784. Cort used a coal-fired reverberatory furnace to melt a charge of pig iron to which iron oxide was added to make a slag. Agitating the resultant “puddle” of metal caused carbon to be removed by oxidation (together with silicon, phosphorus, and manganese). As a result, the melting point of the metal rose so that it became semisolid, although the slag remained quite fluid. The metal was then formed into balls and freed from as much slag as possible before being removed from the furnace and consolidated under a hammer. For a short time, puddling furnaces were able to provide enough iron to meet the demands for machinery, but once again blast-furnace capacity raced ahead as a result of the Scotsman James Beaumont Neilson’s invention in 1828 of the hot-blast stove for preheating blast air and the realization that a round furnace performed better than a square one.

The eventual decline in the use of wrought iron was brought about by a series of inventions that allowed furnaces to operate at temperatures high enough to melt iron. It was then possible to produce steel, which is a superior material. First, in 1856, Henry Bessemer patented his converter process for blowing air through molten pig iron, and in 1861 William Siemens took out a patent for his regenerative open-hearth furnace. In 1879 Sidney Gilchrist Thomas and Percy Gilchrist adapted the Bessemer converter for use with phosphoric pig iron; as a result, the basic Bessemer, or Thomas, process was widely adopted on the continent of Europe, where high-phosphorus iron ores were abundant. For about 100 years, the open-hearth and Bessemer-based processes were jointly responsible for most of the steel that was made, before they were replaced by the basic oxygen and electric-arc furnaces.

Apart from the injection of part of the fuel through tuyeres, the blast furnace has employed the same operating principles since the early 19th century. Furnace size has increased markedly, however, and one large modern furnace can supply a steelmaking plant with up to 10,000 tons of liquid iron per day.

Throughout the 20th century, many new iron-making processes were proposed, but it was not until the 1950s that potential substitutes for the blast furnace emerged. Direct reduction, in which iron ores are reduced at temperatures below the metal’s melting point, had its origin in such experiments as the Wiberg-Söderfors process introduced in Sweden in 1952 and the HyL process introduced in Mexico in 1957. Few of these techniques survived, and those that did were extensively modified. Another alternative iron-making method, smelting reduction, had its forerunners in the electric furnaces used to make liquid iron in Sweden and Norway in the 1920s. The technique grew to include methods based on oxygen steelmaking converters using coal as a source of additional energy, and in the 1980s it became the focus of extensive research and development activity in Europe, Japan, and the United States.