Elaboration and generalization
Euler and infinite series
The 17th-century techniques of differentiation, integration, and infinite processes were of enormous power and scope, and their use expanded in the next century. The output of Euler alone was enough to dwarf the combined discoveries of Newton, Leibniz, and the Bernoullis. Much of his work elaborated on theirs, developing the mechanics of heavenly bodies, fluids, and flexible and elastic media. For example, Euler studied the difficult problem of describing the motion of three masses under mutual gravitational attraction (now known as the three-body problem). Applied to the Sun-Moon-Earth system, Euler’s work greatly increased the accuracy of the lunar tables used in navigation—for which the British Board of Longitude awarded him a monetary prize. He also applied analysis to the bending of a thin elastic beam and to the design of sails.
Euler also took analysis in new directions. In 1734 he solved a problem in infinite series that had defeated his predecessors: the summation of the series 1/1² + 1/2² + 1/3² + 1/4² + ⋯. Euler found the sum to be π²/6 by the bold step of comparing the series with the sum of the reciprocals of the roots of the following infinite polynomial equation (obtained from the power series for the sine function): sin(√x)/√x = 1 − x/3! + x²/5! − x³/7! + ⋯ = 0. Euler was later able to generalize this result to find the values of the function ζ(s) = 1/1^s + 1/2^s + 1/3^s + ⋯ for all even natural numbers s.
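In modern notation, Euler’s comparison can be sketched as follows; the factorization of the sine series over its roots x = π², (2π)², (3π)², … is the step Euler took on faith, by analogy with finite polynomials.

```latex
% A compressed sketch of Euler's argument (modern notation, assuming the
% infinite-product factorization of sin over its roots).
\frac{\sin\sqrt{x}}{\sqrt{x}}
  = 1 - \frac{x}{3!} + \frac{x^{2}}{5!} - \cdots
  = \prod_{n=1}^{\infty}\left(1 - \frac{x}{n^{2}\pi^{2}}\right)
% Comparing the coefficient of x on both sides:
\frac{1}{3!} = \sum_{n=1}^{\infty}\frac{1}{n^{2}\pi^{2}}
\quad\Longrightarrow\quad
\sum_{n=1}^{\infty}\frac{1}{n^{2}} = \frac{\pi^{2}}{6}
```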
The function ζ(s), later known as the Riemann zeta function, is a concept that really belongs to the 19th century. Euler caught a glimpse of the future when he discovered the fundamental property of ζ(s) in his Introduction to Analysis of the Infinite (1748): the sum over the integers 1, 2, 3, 4, … equals a product over the prime numbers 2, 3, 5, 7, 11, 13, 17, …, namely ζ(s) = 1/1^s + 1/2^s + 1/3^s + 1/4^s + ⋯ = 1/[(1 − 2^(−s))(1 − 3^(−s))(1 − 5^(−s))(1 − 7^(−s))(1 − 11^(−s))⋯].
This startling formula was the first intimation that analysis—the theory of the continuous—could say something about the discrete and mysterious prime numbers. The zeta function unlocks many of the secrets of the primes—for example, that there are infinitely many of them. To see why, suppose there were only finitely many primes. Then the product for ζ(s) would have only finitely many terms and hence would have a finite value for s = 1. But for s = 1 the sum on the left would be the harmonic series, which Oresme showed to be infinite, thus producing a contradiction.
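A quick numerical illustration of the sum-product identity, just a finite check rather than part of Euler’s argument: truncating both sides at s = 2 gives values that approach π²/6 as more terms and more primes are included. The helper names below are chosen only for this sketch.

```python
import math

def zeta_sum(s, terms=100_000):
    """Truncated sum 1/1^s + 1/2^s + ... + 1/terms^s."""
    return sum(1 / n**s for n in range(1, terms + 1))

def zeta_product(s, primes=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47)):
    """Truncated Euler product over the listed primes: product of 1/(1 - p^-s)."""
    result = 1.0
    for p in primes:
        result *= 1 / (1 - p ** (-s))
    return result

print(zeta_sum(2))       # close to pi^2/6
print(zeta_product(2))   # also close to pi^2/6, approaching it from below
print(math.pi**2 / 6)    # 1.6449340668...
```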
Of course it was already known that there were infinitely many primes—this is a famous theorem of Euclid—but Euler’s proof gave deeper insight into the result. By the end of the 20th century, prime numbers had become the key to the security of most electronic transactions, with sensitive information being “hidden” in the process of multiplying large prime numbers (see cryptology). This demands an infinite supply of primes, to avoid repeating primes used in other transactions, so that the infinitude of primes has become one of the foundations of electronic commerce.
Complex exponentials
As a final example of Euler’s work, consider his famous formula for complex exponentials e^(iθ) = cos(θ) + i sin(θ), where i = √−1. Like his formula for ζ(2), which surprisingly relates π to the squares of the natural numbers, the formula for e^(iθ) relates all the most famous numbers—e, i, and π—in a miraculously simple way. Substituting π for θ in the formula gives e^(iπ) = −1, which is surely the most remarkable formula in mathematics.
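A brief numerical check of the formula, using Python’s standard cmath module; floating point makes the two sides agree only up to rounding error.

```python
import cmath, math

theta = 0.7  # any angle, in radians
lhs = cmath.exp(1j * theta)                      # e^(i*theta)
rhs = complex(math.cos(theta), math.sin(theta))  # cos(theta) + i*sin(theta)
print(abs(lhs - rhs))           # ~0, up to rounding error
print(cmath.exp(1j * math.pi))  # ~ -1, with a negligible imaginary part from rounding
```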
The formula for e^(iθ) appeared in Euler’s Introduction, where he proved it by comparing the Taylor series for the two sides. The formula is really a reworking of other formulas due to Newton’s contemporaries in England, Roger Cotes and Abraham de Moivre—and Euler may also have been influenced by discussions with his mentor Johann Bernoulli—but it definitively shows how the sine and cosine functions are just parts of the exponential function. This, too, was a glimpse of the future, where many a pair of real functions would be fused into a single “complex” function. Before explaining what this means, more needs to be said about the evolution of the function concept in the 18th century.
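In modern notation, the comparison amounts to splitting the exponential series into its real (even-power) and imaginary (odd-power) terms:

```latex
e^{i\theta} = \sum_{n=0}^{\infty}\frac{(i\theta)^{n}}{n!}
  = \left(1 - \frac{\theta^{2}}{2!} + \frac{\theta^{4}}{4!} - \cdots\right)
  + i\left(\theta - \frac{\theta^{3}}{3!} + \frac{\theta^{5}}{5!} - \cdots\right)
  = \cos\theta + i\sin\theta
```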
Functions
Calculus introduced mathematicians to many new functions by providing new ways to define them, such as with infinite series and with integrals. More generally, functions arose as solutions of ordinary differential equations (involving a function of one variable and its derivatives) and partial differential equations (involving a function of several variables and derivatives with respect to these variables). Many physical quantities depend on more than one variable, so the equations of mathematical physics typically involve partial derivatives.
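Two simple illustrations of such definitions, chosen here only for familiarity rather than taken from the passage: a function defined by an integral, and a function arising as the solution of an ordinary differential equation.

```latex
\ln x = \int_{1}^{x}\frac{dt}{t},
\qquad
\frac{dy}{dx} = y,\; y(0) = 1
\;\Longrightarrow\;
y = e^{x} = \sum_{n=0}^{\infty}\frac{x^{n}}{n!}
```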
In the 18th century the most fertile equation of this kind was the vibrating string equation, derived by the French mathematician Jean Le Rond d’Alembert in 1747 and relating the rates of change of quantities arising in the vibration of a taut violin string (see Musical origins). This led to the amazing conclusion that an arbitrary continuous function f(x) can be expressed, between 0 and 2π, as a sum of sine and cosine functions in a series (later called a Fourier series) of the form y = f(x) = a₀/2 + (a₁ cos(x) + b₁ sin(x)) + (a₂ cos(2x) + b₂ sin(2x)) + ⋯.
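In modern notation, d’Alembert’s string equation and the formulas for the coefficients of the series on the interval from 0 to 2π read as follows, where y(x, t) is the displacement of the string and c is the speed at which waves travel along it.

```latex
\frac{\partial^{2} y}{\partial t^{2}} = c^{2}\,\frac{\partial^{2} y}{\partial x^{2}},
\qquad
a_{n} = \frac{1}{\pi}\int_{0}^{2\pi} f(x)\cos(nx)\,dx,
\qquad
b_{n} = \frac{1}{\pi}\int_{0}^{2\pi} f(x)\sin(nx)\,dx
```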
But what is an arbitrary continuous function, and is it always correctly expressed by such a series? Indeed, does such a series necessarily represent a continuous function at all? The French mathematician Joseph Fourier addressed these questions in his The Analytical Theory of Heat (1822). Subsequent investigations turned up many surprises, leading to a better understanding not only of continuous functions but also of discontinuous functions, which do indeed occur as Fourier series. This in turn led to important generalizations of the concept of integral designed to integrate highly discontinuous functions—the Riemann integral of 1854 and the Lebesgue integral of 1902. (See Riemann integral and Measure theory.)
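As a small illustration of a discontinuous function arising from a Fourier series, the following sketch (standard library only; the function name is just for this example) sums the sine series whose limit is a square wave, jumping from +1 to −1 at x = π.

```python
import math

def square_wave_partial_sum(x, terms=50):
    """Partial sum of the Fourier series (4/pi) * sum over odd k of sin(kx)/k,
    which converges to +1 on (0, pi) and to -1 on (pi, 2*pi)."""
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, 2 * terms, 2))

for x in (0.5, 1.5, 2.5, 4.0, 5.5):
    print(f"x = {x}: partial sum = {square_wave_partial_sum(x):+.3f}")
# The printed values are close to +1 for x < pi and close to -1 for x > pi,
# even though every partial sum is itself a perfectly smooth function.
```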
Fluid flow
Evolution in a different direction began when the French mathematicians Alexis Clairaut in 1740 and d’Alembert in 1752 discovered equations for fluid flow. Their equations govern the velocity components u and v at a point (x, y) in a steady two-dimensional flow. Like a vibrating string, the motion of a fluid is rather arbitrary, although not completely—d’Alembert was surprised to notice that a combination of the velocity components, u + iv, was a differentiable function of x + iy. Like Euler, he had discovered a function of a complex variable, with u and v its real and imaginary parts, respectively.
This property of u + iv was rediscovered in France by Augustin-Louis Cauchy in 1827 and in Germany by Bernhard Riemann in 1851. By this time complex numbers had become an accepted part of mathematics, obeying the same algebraic rules as real numbers and having a clear geometric interpretation as points in the plane. Any complex function f(z) can be written in the form f(z) = f(x + iy) = u(x, y) + iv(x, y), where u and v are real-valued functions of x and y. Complex differentiable functions are those for which the limit f′(z) of (f(z + h) − f(z))/h exists as h tends to zero. However, unlike a real number, which can approach zero only along the real line, a complex number h resides in the plane, and infinitely many paths lead to zero. It turned out that, in order to give the same limit f′(z) as h tends to zero from any direction, u and v must satisfy the constraints imposed by the Clairaut and d’Alembert equations (see D’Alembert’s wave equation).
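These constraints are the equations now known as the Cauchy-Riemann equations; in modern notation they read:

```latex
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
```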
A way to visualize differentiability is to interpret the function f as a mapping from one plane to another. For f′(z) to exist, the function f must be “similarity preserving in the small,” or conformal, meaning that infinitesimal regions are faithfully mapped to regions of the same shape, though possibly rotated and magnified by some factor. This makes differentiable complex functions useful in actual mapping problems, and they were used for this purpose even before Cauchy and Riemann recognized their theoretical importance.
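A small numerical sketch of this “similarity preserving in the small” behaviour, using f(z) = z² purely as an example: two tiny displacements at a point are rotated and stretched by approximately the same complex factor f′(z), so the angle between them is preserved.

```python
import cmath

def f(z):
    return z * z              # an example analytic function; f'(z) = 2z

z0 = 1.0 + 2.0j               # base point
h1, h2 = 1e-6, 1e-6j          # two tiny displacements in different directions

image1 = f(z0 + h1) - f(z0)   # how f moves the displacement h1
image2 = f(z0 + h2) - f(z0)   # how f moves the displacement h2

print(image1 / h1)            # approximately 2*z0 = 2+4j
print(image2 / h2)            # approximately the same factor
# The angle between the two displacements equals the angle between their images:
print(cmath.phase(h2 / h1), cmath.phase(image2 / image1))  # both approximately pi/2
```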
Differentiability is a much more significant property for complex functions than for real functions. Cauchy discovered that, if a function’s first derivative exists, then all its derivatives exist, and therefore it can be represented by a power series in z—its Taylor series. Such a function is called analytic. In contrast to real differentiable functions, which are as “flexible” as string, complex differentiable functions are “rigid” in the sense that their behaviour on any region, however small, determines the entire function. This is because the values of the function over any region, no matter how small, determine all its derivatives, and hence its power series. Thus, it became feasible to study analytic functions via power series, a program attempted by the Italian-French mathematician Joseph-Louis Lagrange for real functions in the 18th century but first carried out successfully by the German mathematician Karl Weierstrass in the 19th century, after the appropriate subject matter of complex analytic functions had been discovered.
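The power series in question, written about a point a at which the function is analytic, is the familiar Taylor series:

```latex
f(z) = \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}\,(z-a)^{n},
\qquad |z-a| < r \ \text{(the radius of convergence)}
```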