
1.1. Rigour and the axiomatic method

There are all sorts of arguments used in technical disciplines to establish the truth of assertions. Sometimes people become convinced of the truth of arithmetical conjectures on the basis of a preponderance of numerical evidence. But these methods can lead to errors, and since at least the time of the Greeks it has been generally accepted that a true mathematical proof must be deductive. That is, it must proceed from accepted, or at least explicitly stated, assumptions via a series of small, incontrovertibly correct steps. Sometimes these steps themselves may be based on spatial or even mechanical intuition --- [upsenskii-mechanics] gives some examples of the latter. But such intuition too has fallen into disrepute in recent times, following the discovery of such pathologies as Peano's space-filling curve and Bolzano's and Weierstrass's everywhere continuous but nowhere differentiable functions.

Now it doesn't seem fair to dismiss all geometric reasoning as non-rigorous. Since the time of Descartes it has been known how to reduce geometry to arithmetic (understood in a wide sense); sometimes this makes things easier, but at times it can obscure the immediacy and simplicity of geometric arguments. [arnold-newtonhooke] detects a vivid contrast already between the two inventors of calculus: Newton's reasoning was strongly geometric, whereas Leibniz stressed algebraic manipulation. More recently, Minkowski has shown how quite elementary geometrical reasoning can lead to deep and important results in number theory. Nevertheless, geometric reasoning nowadays tends to be seen as a convenient and illuminating shorthand for a purely symbolic proof, as [littlewood-miscellany] says:

A heavy warning used to be given that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself). [...] pictorial arguments, while not so purely conventional, can be quite legitimate [...] in the sense that in translating into symbols no step occurs that is not both unequivocal and trivial.

(By contrast, some early proofs of the Fundamental Theorem of Algebra, for example, leave some doubt over whether the simple geometric observations they use do in fact fit easily into a symbolic framework.) But geometry is just one particularly important example of how everyday intuitions can lead us astray. There is always a danger of being misled by the superficial similarity of an abstract structure to some particular concrete one; for example, one might accidentally assume that multiplication in a ring must be commutative. Sometimes the results can be serendipitous, e.g. the use of complex numbers and Euler's successful manipulations of infinite series. But they can also lead to serious errors, e.g. the purported proof of Fermat's Last Theorem based on the assumption of unique factorization in arbitrary rings of algebraic integers.
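
A standard concrete instance of the point about rings: multiplication of $2 \times 2$ matrices satisfies all the ring axioms yet fails to commute, since

\[
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
=
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\qquad\text{whereas}\qquad
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
\]

Familiarity with numerical multiplication makes the tacit assumption of commutativity all too easy to slip into a proof.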

The pathologies of real analysis and the discovery of non-Euclidean geometries, among other innovations, stimulated a general determination among mathematicians to bring out unstated assumptions and either justify them or avoid them. Even the Euclidean canon was not immune from criticism: indeed, long before, Gauss had pointed out that the notion of 'betweenness' is often used in Euclid but never defined. This in turn led to the rise of the modern refinement of the axiomatic method, where a set of axioms is established and all conclusions must proceed from the axioms by steps which are purely logical and make no additional mathematical assumptions. Apart from the question of reliability, this turns out to be very useful, since the same set of axioms often admits various different realizations (a point made long before by Boole). Consequently the same reasoning can be applied in different situations, giving benefits of elegance and economy. One of the early classics of the axiomatic method was a treatise on geometry by [hilbert-fgeom], which made all assumptions explicit and analyzed the interdependence of the axioms in detail. Reputedly he once stated that it should be possible to replace the words 'point', 'line' and 'plane' by 'table', 'chair' and 'beer-mug' without invalidating any of the theorems of geometry.
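
A small illustration of this multiple-realization phenomenon: the usual group axioms

\[
\forall x\, y\, z.\; (x \cdot y) \cdot z = x \cdot (y \cdot z), \qquad
\forall x.\; e \cdot x = x, \qquad
\forall x.\; x^{-1} \cdot x = e
\]

are satisfied both by the integers under addition (reading $\cdot$ as $+$, $e$ as $0$ and $x^{-1}$ as $-x$) and by the invertible $n \times n$ real matrices under matrix multiplication (with $e$ the identity matrix). Any fact deduced purely from the axioms, for example the cancellation law that $x \cdot y = x \cdot z$ implies $y = z$, therefore holds automatically in both structures.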

Whatever the level of formalization of the deductive proof, it may be rather unenlightening to read. Where the deductive proof was arrived at via geometric intuition, that intuition is valuable for understanding the proof. In general, the whole process by which the theorem and its proof were discovered may be of paramount importance in grasping the theorem, the proof and their significance (e.g. applicability to physical science or relation to other parts of mathematics). It's well-known that one doesn't 'understand' a proof merely by having checked the correctness of each inference it contains. Some mathematicians prefer the intellectual purity of the deductive part, probably following the examples of such worthies as Newton and Gauss (the latter famously referred to other considerations as the scaffolding used to construct a beautiful building, which should be taken down when the building is finished). But generally, especially in introductory textbooks, there is a lot of discussion other than the purely deductive part. Trybulec distinguishes between the 'formal stratum' and the 'informal stratum', which are present in different proportions depending on the style of the author and the intended audience.

Also present in the Zeitgeist at about the turn of the century was a foundationalist tendency. Not satisfied merely with uncovering the hidden assumptions in mathematics, many tried to justify them on the basis of less dubitable principles. Various mathematicians showed how the reals could be understood as certain infinite sets of natural numbers, while [dedekind-was] showed how all the properties of the natural numbers could be deduced from a very small set of axioms, or even how the naturals themselves could be constructed in pure set theory, given an infinite set; Frege proposed another definition which he claimed was based on pure logic.

But Cantor's set theory marked a bold introduction of the actual infinite into mathematics. This had long been viewed with scepticism, Gauss's interdiction being well-known. Cantor distinguished carefully between 'consistent' and 'inconsistent' multiplicities, but many who used superficially similar methods of reasoning started to encounter worrying paradoxes. Some of these paradoxes, notably Russell's ('the set of all sets which are not members of themselves'), involved only very simple concepts like membership and predication. Foundational studies were plunged into a new crisis, even the most trivial intuitive ideas seeming to cause major problems.
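
For definiteness, Russell's paradox can be spelled out in a line: writing $R$ for the supposed set of all sets that are not members of themselves,

\[
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
(R \in R \iff R \notin R),
\]

a contradiction reached using nothing beyond membership and an apparently innocuous comprehension principle.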

In this atmosphere, it's not surprising that the complete formalization of mathematical proof should be attempted; in any case, it can be seen as the natural next step in the evolution of ever greater precision and rigour in mathematics. If basic concepts like 'set' and 'member' are problematical, how can natural language and intuitive proof ever be trusted? Moreover Zermelo, who independently discovered Russell's paradox at about the same time, had now isolated a few set construction principles which seemed enough for all the applications of set theory in mathematics, and which apparently did not lead to contradiction. But Zermelo's Aussonderungsaxiom relied on the notion of a 'definit' property, and attempts by others to make this precise inevitably focused on restricting the language in which that property was stated.
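
In modern notation, the separation (Aussonderung) schema licenses, for each property $\varphi$ expressible in the chosen language, only subsets of sets already to hand:

\[
\forall z\; \exists y\; \forall x\; (x \in y \iff x \in z \wedge \varphi(x)).
\]

Taking $\varphi(x)$ to be $x \notin x$ no longer reproduces Russell's contradiction; the argument now merely shows that the separated subset of $z$ is not itself a member of $z$. How much the schema actually asserts thus hinges on precisely which formulas $\varphi$ are admitted.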

