Arithmetization of analysis

Before the 19th century, analysis rested on makeshift foundations of arithmetic and geometry, supporting the discrete and continuous sides of the subject, respectively. Mathematicians since the time of Eudoxus had doubted that “all is number,” and when in doubt they used geometry. This pragmatic compromise began to fall apart in 1799, when Gauss found himself obliged to use continuity in a result that seemed to be discrete—the fundamental theorem of algebra.

The theorem says that every nonconstant polynomial equation has a solution in the complex numbers. Gauss’s first proof fell short (although this was not immediately recognized) because it assumed as obvious a geometric result actually harder than the theorem itself. In 1816 Gauss attempted another proof, this time relying on a weaker assumption known as the intermediate value theorem: if f(x) is a continuous function of a real variable x and if f(a) < 0 and f(b) > 0, then there is a c between a and b such that f(c) = 0.
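
The intermediate value theorem also underlies a practical root-finding procedure: repeatedly halving an interval on which a continuous function changes sign traps a point where the function vanishes. The following Python sketch is an illustration of that idea, not Gauss’s argument; the example polynomial is an arbitrary choice.

```python
# Bisection: if f is continuous with f(a) < 0 < f(b), halving the interval
# repeatedly traps a point c with f(c) = 0, as the intermediate value
# theorem guarantees.

def bisect(f, a, b, tol=1e-12):
    """Find c in [a, b] with f(c) close to 0, given a sign change on [a, b]."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        # Keep the half-interval on which the sign still changes.
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2

# Example: x**3 - 2x - 5 is negative at 2 and positive at 3,
# so it must vanish somewhere between them.
root = bisect(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```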

The importance of proving the intermediate value theorem was recognized in 1817 by the Bohemian mathematician Bernhard Bolzano, who saw an opportunity to remove geometric assumptions from algebra. His attempted proof introduced essentially the modern condition for continuity of a function f at a point x: f(x + h) − f(x) can be made smaller than any given quantity, provided h can be made arbitrarily close to zero. Bolzano also relied on an assumption—the existence of a greatest lower bound: if a certain property M holds only for values greater than some quantity l, then there is a greatest quantity u such that M holds only for values greater than or equal to u. Bolzano could go no further than this, because in his time the notion of quantity was still too vague. Was it a number? Was it a line segment? And in any case how does one decide whether points on a line have a greatest lower bound?

The same problem was encountered by the German mathematician Richard Dedekind when teaching calculus, and he later described his frustration with appeals to geometric intuition:

For myself this feeling of dissatisfaction was so overpowering that I made a fixed resolve to keep meditating on the question till I should find a purely arithmetic and perfectly rigorous foundation for the principles of infinitesimal analysis.…I succeeded on November 24, 1858.

Dedekind eliminated geometry by going back to an idea of Eudoxus but taking it a step further. Eudoxus said, in effect, that a point on the line is uniquely determined by its position among the rationals. That is, two points are equal if the rationals less than them (and the rationals greater than them) are the same. Thus, each point creates a unique “cut” (L, U) in the rationals, a partition of the set of rationals into sets L and U with each member of L less than every member of U.

Dedekind’s small but crucial step was to dispense with the geometric points supposed to create the cuts. He defined the real numbers to be the cuts (L, U) just described—that is, as partitions of the rationals with each member of L less than every member of U. Cuts included representatives of all rational and irrational quantities previously considered, but now the existence of greatest lower bounds became provable and hence also the intermediate value theorem and all its consequences. In fact, all the basic theorems about limits and continuous functions followed from Dedekind’s definition—an outcome called the arithmetization of analysis. (See Sidebar: Infinitesimals.)
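
Dedekind’s definition can even be mimicked in a few lines of code. The sketch below is illustrative only (the choice of √2 as the example is an assumption, not from the text): it represents a real number by the membership test for the lower set L of its cut in the rationals.

```python
from fractions import Fraction

# A real number as a Dedekind cut (L, U), represented here by the
# membership test for the lower set L. This cut defines sqrt(2):
# L = { rational q : q < 0 or q*q < 2 }, U = all other rationals.

def in_lower_set(q: Fraction) -> bool:
    """True iff the rational q lies in L, i.e., below sqrt(2)."""
    return q < 0 or q * q < 2

# Every rational falls on exactly one side of the cut, yet no rational
# sits on the boundary itself: the cut *is* the irrational number.
assert in_lower_set(Fraction(14142, 10000))       # 1.4142 lies below sqrt(2)
assert not in_lower_set(Fraction(14143, 10000))   # 1.4143 lies above sqrt(2)
```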

The full program of arithmetization, based on a different but equivalent definition of the real numbers, is mainly due to Weierstrass in the 1870s. He relied on rigorous definitions of the real numbers and limits to justify the computations previously made with infinitesimals. Bolzano’s 1817 definition of continuity of a function f at a point x, mentioned above, came close to saying what it meant for the limit of f(x + h) to be f(x). The final touch of precision was added with Cauchy’s “epsilon-delta” definition of 1821: for each ε > 0 there is a δ > 0 such that |f(x + h) − f(x)| < ε for all |h| < δ.
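
The epsilon-delta definition can be exercised numerically. In the Python sketch below, the function f(x) = x², the point x = 3, and the particular choice δ = min(1, ε/7) are assumptions of the example; that δ works because |f(3 + h) − f(3)| = |h|·|6 + h| ≤ 7|h| whenever |h| < 1.

```python
# Checking the epsilon-delta definition of continuity for f(x) = x**2
# at the point x = 3 (an illustrative example).

def f(x):
    return x * x

def delta_for(eps):
    # Valid only for this f at x = 3: |f(3+h) - f(3)| <= 7|h| when |h| < 1.
    return min(1.0, eps / 7.0)

eps = 1e-6
d = delta_for(eps)
for k in range(1, 100):            # sample h values with 0 < |h| < delta
    h = d * k / 100
    assert abs(f(3 + h) - f(3)) < eps
    assert abs(f(3 - h) - f(3)) < eps
```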

Analysis in higher dimensions

While geometry was being purged from the foundations of analysis, its spirit was taking over the superstructure. The study of complex functions, or functions with two or more variables, became allied with the rich geometry of higher-dimensional spaces. Sometimes the geometry guided the development of concepts in analysis, and sometimes it was the reverse. A beautiful example of this interaction was the concept of a Riemann surface. The complex numbers can be viewed as a plane (see Fluid flow), so a function of a complex variable can be viewed as a function on the plane. Riemann’s insight was that other surfaces can also be provided with complex coordinates, and certain classes of functions belong to certain surfaces. For example, by mapping the plane stereographically onto the sphere, each point of the sphere except the north pole is given a complex coordinate, and it is natural to map the north pole to infinity, ∞. When this is done, all rational functions make sense on the sphere; for example, 1/z is defined for all points of the sphere by making the natural assumptions that 1/0 = ∞ and 1/∞ = 0. This leads to a remarkable geometric characterization of the class of rational complex functions: they are the differentiable functions on the sphere. One similarly finds that the elliptic functions (complex functions that are periodic in two directions) are the differentiable functions on the torus.
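
The conventions 1/0 = ∞ and 1/∞ = 0, together with stereographic projection, are easy to make concrete. In the Python sketch below, the sentinel INF and the orientation of the sphere (north pole at (0, 0, 1)) are modeling choices of the example.

```python
# The Riemann sphere: the complex plane plus one extra point at infinity.
INF = float("inf")  # sentinel for the point at infinity (a modeling choice)

def reciprocal(z):
    """1/z extended to the whole sphere by 1/0 = INF and 1/INF = 0."""
    if z == INF:
        return 0j
    if z == 0:
        return INF
    return 1 / z

def to_sphere(z):
    """Inverse stereographic projection from the north pole (0, 0, 1):
    the plane maps onto the unit sphere, with INF going to the pole."""
    if z == INF:
        return (0.0, 0.0, 1.0)
    x, y = z.real, z.imag
    s = x * x + y * y
    return (2 * x / (s + 1), 2 * y / (s + 1), (s - 1) / (s + 1))

assert reciprocal(0) == INF and reciprocal(INF) == 0j
assert to_sphere(0) == (0.0, 0.0, -1.0)   # the origin maps to the south pole
assert to_sphere(1) == (1.0, 0.0, 0.0)    # the unit circle maps to the equator
```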

Functions of three, four, … variables are naturally studied with reference to spaces of three, four, … dimensions, but these are not necessarily the ordinary Euclidean spaces. The idea of differentiable functions on the sphere or torus was generalized to differentiable functions on manifolds (topological spaces of arbitrary dimension). Riemann surfaces, for example, are two-dimensional manifolds.

Manifolds can be complicated, but it turned out that their geometry, and the nature of the functions on them, is largely controlled by their topology, the rather coarse properties invariant under one-to-one continuous mappings. In particular, Riemann observed that the topology of a Riemann surface is determined by its genus, the number of closed curves that can be drawn on the surface without splitting it into separate pieces. For example, the genus of a sphere is zero and the genus of a torus is one. Thus, a single integer controls whether the functions on the surface are rational, elliptic, or something else.

The topology of higher-dimensional manifolds is subtle, and it became a major field of 20th-century mathematics. The first inroads were made in 1895 by the French mathematician Henri Poincaré, who was drawn into topology from complex function theory and differential equations. The concepts of topology, by virtue of their coarse and qualitative nature, are capable of detecting order where the concepts of geometry and analysis can see only chaos. Poincaré found this to be the case in studying the three-body problem, and it continues with the intense study of chaotic dynamical systems.

The moral of these developments is perhaps the following: It may be possible and desirable to eliminate geometry from the foundations of analysis, but geometry still remains present as a higher-level concept. Continuity can be arithmetized, but the theory of continuity involves topology, which is part of geometry. Thus, the ancient complementarity between arithmetic and geometry remains the essence of analysis.

John Colin Stillwell

calculus, branch of mathematics concerned with the calculation of instantaneous rates of change (differential calculus) and the summation of infinitely many small factors to determine some whole (integral calculus). Two mathematicians, Isaac Newton of England and Gottfried Wilhelm Leibniz of Germany, share credit for having independently developed the calculus in the 17th century. Calculus is now the basic entry point for anyone wishing to study physics, chemistry, biology, economics, finance, or actuarial science. Calculus makes it possible to solve problems as diverse as tracking the position of a space shuttle or predicting the pressure building up behind a dam as the water rises. Computers have become a valuable tool for solving calculus problems that were once considered impossibly difficult.

Calculating curves and areas under curves

The roots of calculus lie in some of the oldest geometry problems on record. The Egyptian Rhind papyrus (c. 1650 bce) gives rules for finding the area of a circle and the volume of a truncated pyramid. Ancient Greek geometers investigated finding tangents to curves, the centre of gravity of plane and solid figures, and the volumes of objects formed by revolving various curves about a fixed axis.

By 1635 the Italian mathematician Bonaventura Cavalieri had supplemented the rigorous tools of Greek geometry with heuristic methods that used the idea of infinitely small segments of lines, areas, and volumes. In 1637 the French mathematician-philosopher René Descartes published his invention of analytic geometry for giving algebraic descriptions of geometric figures. Descartes’s method, in combination with an ancient idea of curves being generated by a moving point, allowed mathematicians such as Newton to describe motion algebraically. Suddenly geometers could go beyond the single cases and ad hoc methods of previous times. They could see patterns of results, and so conjecture new results, that the older geometric language had obscured.

For example, the Greek geometer Archimedes (287–212/211 bce) discovered as an isolated result that the area of a segment of a parabola is equal to a certain triangle. But with algebraic notation, in which a parabola is written as y = x², Cavalieri and other geometers soon noted that the area between this curve and the x-axis from 0 to a is a³/3 and that a similar rule holds for the curve y = x³—namely, that the corresponding area is a⁴/4. From here it was not difficult for them to guess that the general formula for the area under a curve y = xⁿ is aⁿ⁺¹/(n + 1).
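
The guess is easy to test numerically. The following Python sketch is an illustration only (a rectangle sum stands in for Cavalieri’s indivisibles): it approximates the area under y = xⁿ by thin rectangles and compares the result with aⁿ⁺¹/(n + 1).

```python
# Approximating the area under y = x**n on [0, a] by thin rectangles
# and comparing it with the guessed formula a**(n+1)/(n+1).

def riemann_area(n, a, steps=100_000):
    """Left-endpoint rectangle approximation to the area under x**n on [0, a]."""
    dx = a / steps
    return sum((k * dx) ** n for k in range(steps)) * dx

for n, a in [(2, 1.0), (3, 2.0)]:
    exact = a ** (n + 1) / (n + 1)    # a**3/3 = 1/3, then a**4/4 = 4
    assert abs(riemann_area(n, a) - exact) < 1e-3
```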

Calculating velocities and slopes

The problem of finding tangents to curves was closely related to an important problem that arose from the Italian scientist Galileo Galilei’s investigations of motion, that of finding the velocity at any instant of a particle moving according to some law. Galileo established that in t seconds a freely falling body falls a distance gt²/2, where g is a constant (later interpreted by Newton as the gravitational constant). With the definition of average velocity as the distance per time, the body’s average velocity over an interval from t to t + h is given by the expression [g(t + h)²/2 − gt²/2]/h. This simplifies to gt + gh/2 and is called the difference quotient of the function gt²/2. As h approaches 0, this formula approaches gt, which is interpreted as the instantaneous velocity of a falling body at time t.

This expression for motion is identical to that obtained for the slope of the tangent to the parabola f(t) = y = gt²/2 at the point t. In this geometric context, the expression gt + gh/2 (or its equivalent [f(t + h) − f(t)]/h) denotes the slope of a secant line connecting the point (t, f(t)) to the nearby point (t + h, f(t + h)) (see figure). In the limit, with smaller and smaller intervals h, the secant line approaches the tangent line and its slope at the point t.

Thus, the difference quotient can be interpreted as instantaneous velocity or as the slope of a tangent to a curve. It was the calculus that established this deep connection between geometry and physics—in the process transforming physics and giving a new impetus to the study of geometry.
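
The difference quotient described above can be checked directly. In the Python sketch below the value of g is illustrative; the quotient agrees with the algebraic simplification gt + gh/2 and approaches gt as h shrinks.

```python
# The difference quotient of the falling-body law g*t**2/2 equals
# g*t + g*h/2 exactly, and so approaches g*t as h shrinks toward 0.

g = 9.8  # metres per second squared (an illustrative value)

def distance(t):
    return g * t * t / 2

def difference_quotient(t, h):
    """Average velocity over the interval from t to t + h."""
    return (distance(t + h) - distance(t)) / h

t = 3.0
for h in [1.0, 0.1, 0.001]:
    # Matches the algebraic simplification g*t + g*h/2.
    assert abs(difference_quotient(t, h) - (g * t + g * h / 2)) < 1e-6
# With a tiny h the quotient is close to the instantaneous velocity g*t.
assert abs(difference_quotient(t, 1e-8) - g * t) < 1e-4
```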

Differentiation and integration

Independently, Newton and Leibniz established simple rules for finding the formula for the slope of the tangent to a curve at any point on it, given only a formula for the curve. The rate of change of a function f (denoted by f′) is known as its derivative. Finding the formula of the derivative function is called differentiation, and the rules for doing so form the basis of differential calculus. Depending on the context, derivatives may be interpreted as slopes of tangent lines, velocities of moving particles, or other quantities, and therein lies the great power of the differential calculus.

An important application of differential calculus is graphing a curve given its equation y = f(x). This involves, in particular, finding local maximum and minimum points on the graph, as well as points of inflection (where the curve changes from convex to concave, or vice versa). When examining a function used in a mathematical model, such geometric notions have physical interpretations that allow a scientist or engineer to quickly gain a feeling for the behaviour of a physical system.
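
A crude version of this curve sketching can be automated. The Python sketch below is illustrative only (the example curve f(x) = x³ − 3x, the search interval, and the finite-difference step are all assumptions): it locates maxima and minima as sign changes of f′ and inflection points as sign changes of f″.

```python
# Locating features of y = f(x) numerically: local maxima/minima where the
# first derivative changes sign, inflection points where the second does.

def f(x):
    return x**3 - 3*x  # example curve: local max at x = -1, local min at x = 1

def d(g, x, h=1e-5):
    """Central-difference approximation to g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

def sign_changes(g, lo, hi, steps=1000):
    """Midpoints of grid intervals on which g changes sign."""
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    found = []
    for x0, x1 in zip(xs, xs[1:]):
        if g(x0) == 0 or g(x0) * g(x1) < 0:
            found.append((x0 + x1) / 2)
    return found

critical = sign_changes(lambda x: d(f, x), -2, 2)                    # f' = 0
inflection = sign_changes(lambda x: d(lambda y: d(f, y), x), -2, 2)  # f'' = 0
assert len(critical) == 2 and len(inflection) == 1
```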

The other great discovery of Newton and Leibniz was that finding the derivatives of functions was, in a precise sense, the inverse of the problem of finding areas under curves—a principle now known as the fundamental theorem of calculus. Specifically, Newton discovered that if there exists a function F(t) that denotes the area under the curve y = f(x) from, say, 0 to t, then this function’s derivative will equal the original curve over that interval, F′(t) = f(t). Hence, to find the area under the curve y = x² from 0 to t, it is enough to find a function F so that F′(t) = t². The differential calculus shows that the most general such function is t³/3 + C, where C is an arbitrary constant. This is called the (indefinite) integral of the function y = x², and it is written as ∫x²dx. The initial symbol ∫ is an elongated S, which stands for sum, and dx indicates an infinitely small increment of the variable, or axis, over which the function is being summed. Leibniz introduced this because he thought of integration as finding the area under a curve by a summation of the areas of infinitely many infinitesimally thin rectangles between the x-axis and the curve. Newton and Leibniz discovered that integrating f(x) is equivalent to solving a differential equation—i.e., finding a function F(t) so that F′(t) = f(t). In physical terms, solving this equation can be interpreted as finding the distance F(t) traveled by an object whose velocity has a given expression f(t).
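
The fundamental theorem can be verified numerically. The Python sketch below (an illustration, using a midpoint-rectangle approximation) checks both directions for f(x) = x²: differentiating the area function recovers f, and the antiderivative t³/3 reproduces the area.

```python
# The fundamental theorem of calculus, checked numerically for f(x) = x**2:
# the area function F(t) (area under f from 0 to t) satisfies F'(t) = f(t).

def area(f, t, steps=10_000):
    """Midpoint-rectangle approximation to the area under f on [0, t]."""
    dx = t / steps
    return sum(f((k + 0.5) * dx) for k in range(steps)) * dx

def f(x):
    return x * x

t, h = 2.0, 1e-4
# Differentiating the area function (central difference) recovers f(t).
F_prime = (area(f, t + h) - area(f, t - h)) / (2 * h)
assert abs(F_prime - f(t)) < 1e-3
# And the antiderivative t**3/3 reproduces the area under the curve.
assert abs(area(f, t) - t**3 / 3) < 1e-6
```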

The branch of the calculus concerned with calculating integrals is the integral calculus, and among its many applications are finding work done by physical systems and calculating pressure behind a dam at a given depth.

John L. Berggren