Most of the powerful abstract mathematical theories in use today originated in the 19th century, so any historical account of the period should be supplemented by reference to detailed treatments of these topics. Yet mathematics grew so much during this period that any account must necessarily be selective. Nonetheless, some broad features stand out. The growth of mathematics as a profession was accompanied by a sharpening division between mathematics and the physical sciences, and contact between the two subjects takes place today across a clear professional boundary. One result of this separation has been that mathematics, no longer able to rely on its scientific import for its validity, developed markedly higher standards of rigour. It was also freed to develop in directions that had little to do with applicability. Some of these pure creations have turned out to be surprisingly applicable, while the attention to rigour has led to a wholly novel conception of the nature of mathematics and logic. Moreover, many outstanding questions in mathematics yielded to the more conceptual approaches that came into vogue.
Projective geometry
The French Revolution provoked a radical rethinking of education in France, and mathematics was given a prominent role. The École Polytechnique was established in 1794 with the ambitious task of preparing all candidates for the specialist civil and military engineering schools of the republic. Mathematicians of the highest calibre were involved; the result was a rapid and sustained development of the subject. The inspiration for the École was that of Gaspard Monge, who believed strongly that mathematics should serve the scientific and technical needs of the state. To that end he devised a syllabus that promoted his own descriptive geometry, which was useful in the design of forts, gun emplacements, and machines and which was employed to great effect in the Napoleonic survey of Egyptian historical sites.
In Monge’s descriptive geometry, three-dimensional objects are described by their orthogonal projections onto a horizontal and a vertical plane, the plan and elevation of the object. A pupil of Monge, Jean-Victor Poncelet, was taken prisoner during Napoleon’s retreat from Moscow and sought to keep up his spirits while in jail in Saratov by thinking over the geometry he had learned. He dispensed with the restriction to orthogonal projections and decided to investigate what properties figures have in common with their shadows. There are several of these properties: a straight line casts a straight shadow, and a tangent to a curve casts a shadow that is tangent to the shadow of the curve. But some properties are lost: the lengths and angles of a figure bear no relation to the lengths and angles of its shadow. Poncelet felt that the properties that survive are worthy of study, and, by considering only those properties that a figure shares with all its shadows, Poncelet hoped to put truly geometric reasoning on a par with algebraic geometry.
In 1822 Poncelet published the Traité des propriétés projectives des figures (“Treatise on the Projective Properties of Figures”). From his standpoint every conic section is equivalent to a circle, so his treatise contained a unified treatment of the theory of conic sections. It also established several new results. Geometers who took up his work divided into two groups: those who accepted his terms and those who, finding them obscure, reformulated his ideas in the spirit of algebraic geometry. On the algebraic side it was taken up in Germany by August Ferdinand Möbius, who seems to have come to his ideas independently of Poncelet, and then by Julius Plücker. They showed how rich was the projective geometry of curves defined by algebraic equations and thereby gave an enormous boost to the algebraic study of curves, comparable to the original impetus provided by Descartes. Germany also produced synthetic projective geometers, notably Jakob Steiner (born in Switzerland but educated in Germany) and Karl Georg Christian von Staudt, who emphasized what can be understood about a figure from a careful consideration of all its transformations.
Within the debates about projective geometry emerged one of the few synthetic ideas to be discovered since the days of Euclid, that of duality. This associates with each point a line and with each line a point, in such a way that (1) three points lying on a line give rise to three lines meeting in a point and, conversely, three lines meeting in a point give rise to three points lying on a line and (2) if one starts with a point (or a line) and passes to the associated line (point) and then repeats the process, one returns to the original point (line). One way of using duality (presented by Poncelet) is to pick an arbitrary conic and then to associate with a point P lying outside the conic the line that joins the points R and S at which the tangents through P to the conic touch the conic. A second method is needed for points on or inside the conic. The feature of duality that makes it so exciting is that one can apply it mechanically to every proof in geometry, interchanging “point” and “line” and “collinear” and “concurrent” throughout, and so obtain a new result. Sometimes a result turns out to be equivalent to the original, sometimes to its converse, but at a single stroke the number of theorems was more or less doubled.
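A concrete way to see this correspondence (stated here in modern coordinates, which are not Poncelet’s) is to take the conic to be the circle x² + y² = r². A point P = (p, q) outside the circle is then associated with the line
px + qy = r²,
which is precisely the line through the two points at which the tangents from P touch the circle. Under this pairing, collinear points correspond to concurrent lines and vice versa, and applying the rule twice returns the original point or line, exactly as properties (1) and (2) require.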
Making the calculus rigorous
Monge’s educational ideas were opposed by Joseph-Louis Lagrange, who favoured a more traditional and theoretical diet of advanced calculus and rational mechanics (the application of the calculus to the study of the motion of solids and liquids). Eventually Lagrange won, and the vision of mathematics that was presented to the world was that of an autonomous subject that was also applicable to a broad range of phenomena by virtue of its great generality, a view that has persisted to the present day.
During the 1820s Augustin-Louis, Baron Cauchy, lectured at the École Polytechnique on the foundations of the calculus. Since its invention it had been generally agreed that the calculus gave correct answers, but no one had been able to give a satisfactory explanation of why this was so. Cauchy rejected Lagrange’s algebraic approach and proved that Lagrange’s basic assumption that every function has a power series expansion is in fact false. Newton had suggested a geometric or dynamic basis for calculus, but this ran the risk of introducing a vicious circle when the calculus was applied to mechanical or geometric problems. Cauchy proposed basing the calculus on a sophisticated and difficult interpretation of the idea of two points or numbers being arbitrarily close together. Although his students disliked the new approach, and Cauchy was ordered to teach material that the students could actually understand and use, his methods gradually became established and refined to form the core of the modern rigorous calculus, a subject now called mathematical analysis.
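The counterexample to Lagrange’s assumption that has become standard, and is usually traced to Cauchy himself, is the function that takes the value exp(−1/x²) when x is not zero and the value 0 at x = 0. Every derivative of this function at x = 0 is 0, so its power series at that point is identically zero even though the function itself is not; no power series can therefore represent it near 0.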
Traditionally, the calculus had been concerned with the two processes of differentiation and integration and the reciprocal relation that exists between them. Cauchy provided a novel underpinning by stressing the importance of the concept of continuity, which is more basic than either. He showed that, once the concepts of a continuous function and limit are defined, the concepts of a differentiable function and an integrable function can be defined in terms of them. Unfortunately, neither of these concepts is easy to grasp, and the much-needed degree of precision they bring to mathematics has proved difficult to appreciate. Roughly speaking, a function is continuous at a point in its domain if small changes in the input around the specified value produce only small changes in the output.
Thus, the familiar parabola y = x² is continuous around the point x = 0; as x varies by small amounts, so necessarily does y. On the other hand, the function that takes the value 0 when x is negative or zero and the value 1 when x is positive plainly has a discontinuous graph at the point x = 0, and it is indeed discontinuous there according to the definition. If x varies from 0 by any small positive amount, the value of the function jumps by the fixed amount 1, which is not an arbitrarily small amount.
Cauchy said that a function f(x) tends to a limiting value l as x tends to the value a whenever the difference f(x) − l becomes arbitrarily small as the difference x − a itself becomes arbitrarily small. He then showed that, if f(x) is continuous at a, the limiting value of the function as x tends to a is indeed f(a). The crucial feature of this definition is that it defines what it means for a variable quantity to tend to something entirely without reference to ideas of motion.
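In modern notation, which is not Cauchy’s own, the definition reads: f(x) tends to the limit l as x tends to a if for every positive number ε there is a positive number δ such that |f(x) − l| < ε whenever 0 < |x − a| < δ.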
Cauchy then said a function f(x) is differentiable at the point a if, as x tends to a (which it is never allowed to reach), the value of the quotient [f(x) − f(a)]/(x − a) tends to a limiting value, called the derivative of the function f(x) at a. To define the integral of a function f(x) between the values a and b, Cauchy went back to the primitive idea of the integral as the measure of the area under the graph of the function. He approximated this area by rectangles and said that if the sum of the areas of the rectangles tends to a limit as their number increases indefinitely and if this limiting value is the same however the rectangles are obtained, then the function is integrable. Its integral is the common limiting value. After he had defined the integral independently of the differential calculus, Cauchy had to prove that the processes of integrating and differentiating are mutually inverse. This he did, giving for the first time a rigorous foundation to all the elementary calculus of his day.
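A simple worked case illustrates the definition of the derivative. For f(x) = x², the quotient [f(x) − f(a)]/(x − a) = (x² − a²)/(x − a) = x + a whenever x ≠ a, and this tends to the limiting value 2a as x tends to a; the derivative of x² at a is therefore 2a.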
Fourier series
The other crucial figure of the time in France was Joseph, Baron Fourier. His major contribution, presented in The Analytical Theory of Heat (1822), was to the theory of heat diffusion in solid bodies. He proposed that any function could be written as an infinite sum of the trigonometric functions cosine and sine; for example,
f(x) = a₀/2 + a₁cos(x) + b₁sin(x) + a₂cos(2x) + b₂sin(2x) + ⋯.
Expressions of this kind had been written down earlier, but Fourier’s treatment was new in the degree of attention given to their convergence. He investigated the question “Given the function f(x), for what range of values of x does the expression above sum to a finite number?” It turned out that the answer depends on the coefficients aₙ and bₙ, and Fourier gave rules for obtaining them of the form
aₙ = (1/π)∫f(x)cos(nx)dx and bₙ = (1/π)∫f(x)sin(nx)dx,
the integrals being taken over a full period of the expansion, from −π to π.
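A standard example, chosen here only for illustration rather than taken from Fourier’s own memoir, shows how delicate the question of convergence is. For the function f(x) = x on the interval from −π to π, the rules above give
x = 2(sin(x) − (1/2)sin(2x) + (1/3)sin(3x) − ⋯),
a series that converges to x at every point strictly between −π and π but sums to 0 at the endpoints ±π, where the periodically continued function jumps.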
Had Fourier’s work been entirely correct, it would have brought all functions into the calculus, making possible the solution of many kinds of differential equations and greatly extending the theory of mathematical physics. But his arguments were unduly naive: after Cauchy it was not clear that the function f(x) sin (nx) was necessarily integrable. When Fourier’s ideas were finally published, they were eagerly taken up, but the more cautious mathematicians, notably the influential German Peter Gustav Lejeune Dirichlet, wanted to rederive Fourier’s conclusions in a more rigorous way. Fourier’s methodology was widely accepted, but questions about its validity in detail were to occupy mathematicians for the rest of the century.
Elliptic functions
The theory of functions of a complex variable was also being decisively reformulated. At the start of the 19th century, complex numbers were discussed from a quasi-philosophical standpoint by several French writers, notably Jean-Robert Argand. A consensus emerged that complex numbers should be thought of as pairs of real numbers, with suitable rules for their addition and multiplication so that the pair (0, 1) was a square root of −1 (i). The underlying meaning of such a number pair was given by its geometric interpretation either as a point in a plane or as a directed segment joining the coordinate origin to the point in question. (This representation is sometimes called the Argand diagram.) In 1827, while revising an earlier manuscript for publication, Cauchy showed how the problem of integrating functions of two variables can be illuminated by a theory of functions of a single complex variable, which he was then developing. But the decisive influence on the growth of the subject came from the theory of elliptic functions.
The study of elliptic functions originated in the 18th century, when many authors studied integrals of the form
∫ p(t)/√(q(t)) dt,
where p(t) and q(t) are polynomials in t and q(t) is of degree 3 or 4. Such integrals arise naturally, for example, as an expression for the length of an arc of an ellipse (whence the name). These integrals cannot be evaluated explicitly; they do not define a function that can be obtained from the rational and trigonometric functions, a difficulty that added to their interest. Elliptic integrals were intensively studied for many years by the French mathematician Adrien-Marie Legendre, who was able to calculate tables of values for such expressions as functions of their upper endpoint, x. But the topic was completely transformed in the late 1820s by the independent but closely overlapping discoveries of two young mathematicians, the Norwegian Niels Henrik Abel and the German Carl Jacobi. These men showed that if one allowed the variable x to be complex and the problem was inverted, so that the object of study became
u = ∫₀ˣ dt/√(q(t)),
considered as defining a function x of a variable u, then a remarkable new theory became apparent. The new function, for example, possessed a property that generalized the basic property of periodicity of the trigonometric functions sine and cosine: sin (x) = sin (x + 2π). Any function of the kind just described has two distinct periods, ω₁ and ω₂:
x(u + ω₁) = x(u) and x(u + ω₂) = x(u) for every value of u.
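The analogy with the sine function makes the appearance of periods plausible (the comparison is a standard one, not Abel’s or Jacobi’s notation). The integral u = ∫₀ˣ dt/√(1 − t²) defines u as a function of x, and inverting it gives x = sin(u), a function with the single period 2π; when the quadratic 1 − t² under the square root is replaced by a cubic or quartic q(t) and u is allowed to be complex, the inverted function x(u) turns out to have two independent periods instead of one.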
These new functions, the elliptic functions, aroused a considerable degree of interest. The analogy with trigonometric functions ran very deep (indeed, the trigonometric functions turned out to be special cases of elliptic functions), but their greatest influence was on the burgeoning general study of functions of a complex variable. The theory of elliptic functions became the paradigm of what could be discovered by allowing variables to be complex instead of real. But their natural generalization to functions defined by more complicated integrands, although it yielded partial results, resisted analysis until the second half of the 19th century.
The theory of numbers
While the theory of elliptic functions typifies the 19th century’s enthusiasm for pure mathematics, some contemporary mathematicians said that the simultaneous developments in number theory carried that enthusiasm to excess. Nonetheless, during the 19th century the algebraic theory of numbers grew from being a minority interest to its present central importance in pure mathematics. The earlier investigations of Pierre de Fermat had eventually drawn the attention of Leonhard Euler and Lagrange. Euler proved some of Fermat’s unproven claims and discovered many new and surprising facts; Lagrange not only supplied proofs of many remarks that Euler had merely conjectured but also worked them into something like a coherent theory. For example, it was known to Fermat that the numbers that can be written as the sum of two squares are the number 2, squares themselves, primes of the form 4n + 1, and products of these numbers. Thus, 29, which is 4 × 7 + 1, is 5² + 2², but 35, which is not of this form, cannot be written as the sum of two squares. Euler had proved this result and had gone on to consider similar cases, such as primes of the form x² + 2y² or x² + 3y². But it was left to Lagrange to provide a general theory covering all expressions of the form ax² + bxy + cy², quadratic forms, as they are called.
Lagrange’s theory of quadratic forms had made considerable use of the idea that a given quadratic form could often be simplified to another with the same properties but with smaller coefficients. To do this in practice, it was often necessary to consider whether a given integer left a remainder that was a square when it was divided by another given integer. (For example, 48 leaves a remainder of 4 upon division by 11, and 4 is a square.) Legendre discovered a remarkable connection between the question “Does the integer p leave a square remainder on division by q?” and the seemingly unrelated question “Does the integer q leave a square remainder upon division by p?” He saw, in fact, that when p and q are primes, both questions have the same answer unless both primes are of the form 4n − 1. Because this observation connects two questions in which the integers p and q play mutually opposite roles, it became known as the law of quadratic reciprocity. Legendre also gave an effective way of extending his law to cases when p and q are not prime.
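Legendre’s observation is easy to test numerically. The short program below is only an illustrative check in modern terms (nothing of the kind was available to Legendre); for each pair of small odd primes p and q it asks whether p leaves a square remainder on division by q and vice versa, and confirms that the two answers agree except when both primes are of the form 4n − 1.

```python
# Numerical check of the quadratic reciprocity pattern for small odd primes.

def leaves_square_remainder(a, p):
    """Does a leave a square remainder on division by p?"""
    return any((x * x - a) % p == 0 for x in range(1, p))

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23]
for i, p in enumerate(odd_primes):
    for q in odd_primes[i + 1:]:
        same_answer = leaves_square_remainder(p, q) == leaves_square_remainder(q, p)
        both_4n_minus_1 = (p % 4 == 3) and (q % 4 == 3)
        # The answers agree exactly when the primes are not both of the form 4n - 1.
        assert same_answer == (not both_4n_minus_1)
```

Every assertion passes, which is the pattern Legendre described and Gauss later proved.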
All this work set the scene for the emergence of Carl Friedrich Gauss, whose Disquisitiones Arithmeticae (1801) not only consummated what had gone before but also directed number theorists in new and deeper directions. He showed that Legendre’s proof of the law of quadratic reciprocity was fundamentally flawed and gave the first rigorous proof himself. His work suggested that there were profound connections between the original question and other branches of number theory, a fact that he perceived to be of signal importance for the subject. He extended Lagrange’s theory of quadratic forms by showing how two quadratic forms can be “multiplied” to obtain a third. Later mathematicians were to rework this into an important example of the theory of finite commutative groups. And in the long final section of his book, Gauss gave the theory that lay behind his first discovery as a mathematician: that a regular 17-sided figure can be constructed by ruler and compass alone.
The discovery that the regular “17-gon” is so constructible was the first such discovery since the Greeks, who had known only of the equilateral triangle, the square, the regular pentagon, the regular 15-sided figure, and the figures that can be obtained from these by successively bisecting all the sides. But what was of much greater significance than the discovery was the theory that underpinned it, the theory of what are now called algebraic numbers. It may be thought of as an analysis of how complicated a number may be while yet being amenable to an exact treatment.
The simplest numbers to understand and use are the integers and the rational numbers. The irrational numbers seem to pose problems. Famous among these is √2. It cannot be written as a finite or repeating decimal (because it is not rational), but it can be manipulated algebraically very easily. It is necessary only to replace every occurrence of (√2)² by 2. In this way expressions of the form m + n√2, where m and n are integers, can be handled arithmetically. These expressions have many properties akin to those of whole numbers, and mathematicians have even defined prime numbers of this form; therefore, they are called algebraic integers. In this case they are obtained by grafting onto the rational numbers a solution of the polynomial equation x² − 2 = 0. In general an algebraic integer is any solution, real or complex, of a polynomial equation with integer coefficients in which the coefficient of the highest power of the unknown is 1.
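The remark that one need only replace every occurrence of (√2)² by 2 can be made quite literal, as in the following sketch (a modern illustration; the class and its names are inventions for this example, not period notation). A number m + n√2 is represented by the pair of integers (m, n), and multiplication uses nothing beyond that single replacement rule.

```python
# Arithmetic with numbers of the form m + n*sqrt(2), stored as integer pairs.

class Sqrt2Number:
    def __init__(self, m, n):
        self.m, self.n = m, n  # represents m + n*sqrt(2)

    def __add__(self, other):
        return Sqrt2Number(self.m + other.m, self.n + other.n)

    def __mul__(self, other):
        # (m1 + n1*sqrt2)(m2 + n2*sqrt2) = (m1*m2 + 2*n1*n2) + (m1*n2 + n1*m2)*sqrt2,
        # using only the rule that (sqrt2)**2 is replaced by 2.
        return Sqrt2Number(self.m * other.m + 2 * self.n * other.n,
                           self.m * other.n + self.n * other.m)

    def __repr__(self):
        return f"{self.m} + {self.n}*sqrt(2)"

# Example: (1 + sqrt(2)) * (3 + 2*sqrt(2)) = 7 + 5*sqrt(2)
print(Sqrt2Number(1, 1) * Sqrt2Number(3, 2))
```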
Gauss’s theory of algebraic integers led to the question of determining when a polynomial of degree n with integer coefficients can be solved given the solvability of polynomial equations of lower degree but with coefficients that are algebraic integers. For example, Gauss regarded the coordinates of the 17 vertices of a regular 17-sided figure as complex numbers satisfying the equation x¹⁷ − 1 = 0 and thus as algebraic integers. One such integer is 1. He showed that the rest are obtained by solving a succession of four quadratic equations. Because solving a quadratic equation is equivalent to performing a construction with a ruler and a compass, as Descartes had shown long before, Gauss had shown how to construct the regular 17-gon.
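The first of the four quadratics can be exhibited explicitly; what follows is a modern reconstruction of the pattern rather than Gauss’s own notation. Writing ζ for any 17th root of unity other than 1, the 16 remaining roots can be separated into two sums of eight, η₀ and η₁, for which
η₀ + η₁ = −1 and η₀η₁ = −4,
so that η₀ and η₁ are the two roots of x² + x − 4 = 0; three further quadratics, each with coefficients built from the numbers already found, then lead down to the individual roots.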
Inspired by Gauss’s works on the theory of numbers, a growing school of mathematicians was drawn to the subject. Like Gauss, the German mathematician Ernst Eduard Kummer sought to generalize the law of quadratic reciprocity to deal with questions about third, fourth, and higher powers of numbers. He found that his work led him in an unexpected direction, toward a partial resolution of Fermat’s last theorem. In 1637 Fermat wrote in the margin of his copy of Diophantus’s Arithmetica the claim to have a proof that there are no solutions in positive integers to the equation xⁿ + yⁿ = zⁿ if n > 2. However, no proof was ever discovered among his notebooks.
Kummer’s approach was to develop the theory of algebraic integers. If it could be shown that the equation had no solution in suitable algebraic integers, then a fortiori there could be no solution in ordinary integers. He was eventually able to establish the truth of Fermat’s last theorem for a large class of prime exponents n (those satisfying some technical conditions needed to make the proof work). This was the first significant breakthrough in the study of the theorem. Together with the earlier work of the French mathematician Sophie Germain, it enabled mathematicians to establish Fermat’s last theorem for every value of n from 3 to 4,000,000. However, Kummer’s way around the difficulties he encountered further propelled the theory of algebraic integers into the realm of abstraction. It amounted to the suggestion that there should be yet other types of integers, but many found these ideas obscure.
In Germany Richard Dedekind patiently created a new approach, in which each new number (called an ideal) was defined by means of a suitable set of algebraic integers in such a way that it was the common divisor of the set of algebraic integers used to define it. Dedekind’s work was slow to gain approval, yet it illustrates several of the most profound features of modern mathematics. It was clear to Dedekind that the ideal algebraic integers were the work of the human mind. Their existence can be neither based on nor deduced from the existence of physical objects, analogies with natural processes, or some process of abstraction from more familiar things. A second feature of Dedekind’s work was its reliance on the idea of sets of objects, such as sets of numbers, even sets of sets. Dedekind’s work showed how basic the naive conception of a set could be. The third crucial feature of his work was its emphasis on the structural aspects of algebra. The presentation of number theory as a theory about objects that can be manipulated (in this case, added and multiplied) according to certain rules akin to those governing ordinary numbers was to be a paradigm of the more formal theories of the 20th century.