\documentstyle{book} \font\bt=cmbtt10 \textwidth=13.5cm \textheight=21cm \voffset=-3.5cm \hoffset=-2cm \title{Mizar-MSE Introduction to Logic} \author{Marcin Mostowski} \date{} \begin{document} \maketitle \begin{center} \LARGE{\bf Preface} \end{center} \vskip 1em It took me a relatively long time to write this thin booklet: seven years have passed since I started working on it in 1984. In spite of that, my essential objective has not changed during that time. It is (and was intended to be) an introduction to logic not oriented towards any definite specialization, and at the same time one which makes it possible to use the system Mizar-MSE in an elementary course in logic. This concise handbook was conceived neither as a technical reference manual for the system Mizar-MSE nor as an introduction to any computer-related branch of logic. It is simply an introduction to logic which makes use of the formal notation of the language Mizar-MSE. It additionally includes the necessary explanations pertaining to that language and to the most widespread version of the system Mizar-MSE, so as to make it possible for the Reader to solve exercises without consulting additional sources. Checking the correctness of other people's proofs is toilsome and uninteresting. To make matters worse, it requires relatively highly skilled personnel. By availing oneself of the system Mizar-MSE one can employ the computer to do that ungratifying job. The computer will do it in a matter of seconds. Moreover, the defects, if any, in the proof under consideration will be provided with a commentary (which we hope to be to the point). While the use of the present handbook does not require parallel work with the system Mizar-MSE, attempts to construct proofs and to test them with the system Mizar-MSE will greatly facilitate the comprehension of the text of this handbook.
Moreover, it seems that it is not possible to understand the concept of proof without a number of experiments with the construction of proofs. Proof construction is exactly the key problem in this handbook. But before we construct a proof we have to formulate precisely our thesis and the assumptions we may make. Problems related to the translation of sentences formulated in natural language into the formalized language of the predicate calculus are quite often raised in the present book. The untypical treatment of the sentential calculus adopted here requires an explanation. In Poland, the basic concepts of that calculus are introduced in secondary schools. That is why the introduction contains merely a brief review of those concepts. In further chapters the sentential calculus is used gradually, in parallel with the discussion of successive techniques of proof-writing. While the proof systems for the sentential calculus are simpler than those for the predicate calculus, the need for proofs of tautologies of the sentential calculus is hard to appreciate. This is so not only because of theoretical considerations, such as the existence of the zero-one method for deciding whether a formula of the sentential calculus is a tautology. It was not accidental that the ancient Greeks, in looking for formal arguments useful in substantiation (justification), discovered syllogistics first and the sentential calculus only later. For a beginner in logic the choice of the formalism of the language Mizar-MSE should not be an obstacle: that formalism is much more readable than the standard mathematical notation. The present handbook is based on numerous courses in logic given by its author, mainly to students of the humanities, in the Bia\l ystok Branch of Warsaw University in 1982-1990. The scope of problems discussed here goes beyond the courses for students of the humanities.
Nevertheless almost the whole content of the book - except for the last chapter and the appendix - was used by this author for that purpose. When one selects one's material for a course in logic one finds it impossible to satisfy everyone: engineers and teachers, computer scientists and philosophers. This handbook is addressed rather to those interested in the social sciences and the humanities, which is reflected in the choice of both topics and examples. The last chapter serves as an illustration of applications of certain logical concepts and techniques to certain problems known otherwise, and at the same time points to certain important logical problems. Two parts of the present text go well beyond a typical course in logic, namely the chapter on the functioning of the module which verifies inferences (Checker) in Mizar-MSE, and the Appendix. Since this handbook was written above all with a view to the practical use of the system Mizar-MSE in a course in logic, some information on the way inferences are checked seemed indispensable. The Appendix plays a double role. First, it complements the information on the checking of inferences. Secondly, it serves as an introduction to the algorithmic processing of logical formulas. Those problems play an increasing role in both theory and practical applications. Besides the possibility of making proofs in a rigorous manner, this is the second important reason for formulating theorems in a formalized language. \newpage \begin{center} \LARGE{\bf Acknowledgments} \end{center} \vskip 1em The work on the present book was subsidized, in various periods, by: \begin{enumerate} \item The Polish Information Processing Society, \item The Philippe le Hodey Foundation, \item The RPBP III.\, 24 Grant. \end{enumerate} The form and the content of this book were largely influenced by my discussions and controversies with Andrzej Trybulec, the author of the language Mizar-MSE and the head of the Mizar research project.
It is to be noted that his approach to my vision of the teaching of logic was generally critical, to say the least, though by no means unfriendly. My thanks are also due to my colleagues Czes\l aw Byli\'nski and Stanis\l aw \.Zukowski, who have taught me much about Mizar and related subjects, to Roman Matuszewski, whose help in organizational matters was very valuable, to Anna Zalewska, who co-operated with me by holding similar courses (perhaps even without realizing that I was availing myself of some of her ideas), and to Barbara Kope\'c, whose translation made the chapters much better written than they had been in my Polish original. I am also obliged to the students L.\,Borys and D.\,Surowik for checking the Mizar texts on the computer. The final English version of this book was checked by Olgierd Wojtasiewicz, who also translated chapters 1, 19, 20, 21 and the appendix. I am very much obliged to him for sparing his time to help me. \vskip 2em \hspace*{20em} Marcin Mostowski \vskip 1em Warsaw, May 26, 1991 \newpage \begin{center} \LARGE{\bf Introduction} \end{center} \vskip 1em The subject matter of logic is conventionally defined as the formal theory of reasonings. Formal means that it is not interested in the content of reasonings but in their form alone. The question arises what that form is. For the time being we shall rest satisfied with a vaguely general answer: the form of a reasoning is a linguistic schema with which that reasoning is in agreement. It is to be hoped that after having become acquainted with the present handbook the Reader will better understand this problem. The sentential calculus is the simplest known practical formal theory. If from among the expressions of the English language we single out sentential connectives (in grammar termed conjunctions) such as {\bt and}, {\bt or}, {\bt if}, etc., and if we disregard the inner structure of the sentences in which those connectives do not occur, then we obtain a natural interpretation of the sentential calculus.
We shall now briefly describe the basic concepts pertaining to the sentential calculus. The sentential connectives are: \begin{itemize} \item negation, recorded as {\bt not} and corresponding to the expression {\it it is not the case that}, \item conjunction, recorded as {\bt \&} and corresponding to the expression {\it and}, \item disjunction, recorded as {\bt or} and corresponding to the expression {\it or}, \item implication, recorded as {\bt implies} and corresponding to the expression {\it if... then...}, \item equivalence, recorded as {\bt iff} and corresponding to the expression {\it if and only if}. \end{itemize} Negation is a sentential connective of one argument; the remaining connectives are of two arguments each. That is to say that to form a formula by means of negation we need exactly one formula, whereas the remaining sentential connectives form compound formulas by linking two formulas (which may be equiform). Sentential formulas are constructed by means of brackets and sentential connectives from atomic formulas, denoted by p, q, r, p', q', r', etc. Sentential formulas are formed as follows: either we write {\bt not} before a sentential formula, or we write a sentential connective of two arguments between two sentential formulas, the whole being preceded on the left by the bracket "(" and followed on the right by the bracket ")". Of course, atomic formulas are sentential formulas. For instance:\\ \hspace*{7em}{\tt p, {\bt not} p', ((p {\bt or} q) {\bt implies} r), (p {\bt or not} p)}.\\ are sentential formulas. On the contrary,\\ \hspace*{7em}{\tt {\bt not} p {\bt not}, p {\bt or} q {\bt implies} r, p {\bt or not} p, ((p, (p)}.\\ are not sentential formulas. The meaning of the sentential connectives is determined by how the logical value (truth or falsehood) of the compound sentence constructed by means of those connectives depends on the logical value of the simpler sentences of which it consists.
Since we have only two logical values, truth and falsehood, the meaning of the sentential connectives can be described by means of truth tables. Truth will be denoted by 1, falsehood by 0. Here are truth tables for the various connectives. \vskip 1em \hskip 2em negation \hskip 5em \begin{tabular}{c|c} {\tt p} & {\tt {\bt not} p} \\ \hline 1 & 0 \\ \hline 0 & 1 \end{tabular} \vskip 1em \hskip 2em conjunction \hskip 4em \begin{tabular}{c|c|c} {\tt p} & {\tt q} & {\tt (p {\bt \&} q)} \\ \hline 1 & 1 & 1 \\ \hline 1 & 0 & 0 \\ \hline 0 & 1 & 0 \\ \hline 0 & 0 & 0 \end{tabular} \vskip 1em \hskip 2em disjunction \hskip 4em \begin{tabular}{c|c|c} {\tt p} & {\tt q} & {\tt (p {\bt or} q)} \\ \hline 1 & 1 & 1 \\ \hline 1 & 0 & 1 \\ \hline 0 & 1 & 1 \\ \hline 0 & 0 & 0 \end{tabular} \vskip 1em \hskip 2em implication \hskip 4em \begin{tabular}{c|c|c} {\tt p} & {\tt q} & {\tt (p {\bt implies} q)} \\ \hline 1 & 1 & 1 \\ \hline 1 & 0 & 0 \\ \hline 0 & 1 & 1 \\ \hline 0 & 0 & 1 \end{tabular} \vskip 1em \hskip 2em equivalence \hskip 4em \begin{tabular}{c|c|c} {\tt p} & {\tt q} & {\tt (p {\bt iff} q)} \\ \hline 1 & 1 & 1 \\ \hline 1 & 0 & 0 \\ \hline 0 & 1 & 0 \\ \hline 0 & 0 & 1 \end{tabular} \vskip 2em In the first columns we write, line after line, all the possible distributions of the logical values of the simple sentences, and in the last column we write the logical values of the corresponding compound sentence. By making use of the above tables we can compute for any formula how its logical value depends on the values of the atomic sentences which occur in it.
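The computation just described is entirely mechanical, so it can be handed over to a computer. The following sketch in Python (a language chosen here purely for illustration; it has nothing to do with the system Mizar-MSE, and all the names in it are ours) encodes the truth tables as functions on the values 1 and 0:

```python
# Illustrative encoding of the five connectives as functions on the
# logical values 1 (truth) and 0 (falsehood); names ours, not Mizar-MSE.
from itertools import product

def NOT(p):        return 1 - p
def AND(p, q):     return p * q            # 1 only when both arguments are 1
def OR(p, q):      return max(p, q)        # 0 only when both arguments are 0
def IMPLIES(p, q): return max(1 - p, q)    # 0 only when p is 1 and q is 0
def IFF(p, q):     return 1 if p == q else 0

# Computing, line by line, the values of ((p or q) implies (p implies q)):
for p, q in product((1, 0), repeat=2):
    print(p, q, IMPLIES(OR(p, q), IMPLIES(p, q)))
```

The same loop, run over any composition of the functions above, reproduces the truth table of the corresponding compound formula line by line.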
This will be illustrated by two sentential formulae: \vskip 1em \nopagebreak \begin{tabular}{c|c|c|c|c} {\tt p} & {\tt q} & {\tt (p {\bt or} q)} & {\tt (p {\bt implies} q)} & {\tt ((p {\bt or} q) {\bt implies} (p {\bt implies} q))} \\ \hline 1 & 1 & 1 & 1 & 1 \\ \hline 1 & 0 & 1 & 0 & 0 \\ \hline 0 & 1 & 1 & 1 & 1 \\ \hline 0 & 0 & 0 & 1 & 1 \end{tabular} \vskip 1em Thus the formula {\tt ((p {\bt or} q) {\bt implies} (p {\bt implies} q))} is false only if the formula p is true and the formula q is false; in the remaining cases the formula is true. \vskip 1em \nopagebreak \begin{tabular}{c|c|c|c|c} {\tt p} & {\tt q} & {\tt (q {\bt or} p)} & {\tt (p {\bt or} q)} & {\tt ((q {\bt or} p) {\bt implies} (p {\bt or} q))} \\ \hline 1 & 1 & 1 & 1 & 1 \\ \hline 1 & 0 & 1 & 1 & 1 \\ \hline 0 & 1 & 1 & 1 & 1 \\ \hline 0 & 0 & 0 & 0 & 1 \end{tabular} \vskip 1em Thus the formula {\tt ((q {\bt or} p) {\bt implies} (p {\bt or} q))} is always true, regardless of the logical values of the atomic sentences of which it consists. Formulas which have that property are termed tautologies of the sentential calculus. \newpage {\large\bf REMARK.} Those sentential connectives whose meaning can be described by truth tables are extensional connectives (sometimes also called truth functions or truth connectives). Nonextensional connectives are termed intensional. The connective of one argument "it is necessary that ..." is the simplest example of an intensional connective. You will probably agree, dear Reader, that it is true that "it is necessary that 2 + 2 = 4", but it would be difficult to accept as true the sentence "it is necessary that you are reading a book at this moment": it is simply a false sentence, because you might be doing many other things at this moment. Thus, when constructing a compound sentence by preceding a true sentence with the phrase "it is necessary that", we can obtain a true sentence as well as a false one.
This proves the nonextensionality of the connective "it is necessary that ...". Intensional connectives will not be discussed in this book. \medskip {\large\bf REMARK 2.} Doubts are sometimes voiced on two points: (i) the correctness of reading certain connectives as the corresponding expressions, for instance, reading the disjunction as the word {\bt or}; (ii) the adequacy of the truth tables for certain connectives.\\ Both doubts are closely linked to one another: it is always the case that either both make sense or neither does. But the problem is serious. By maintaining both that the way of reading a given connective is correct and that its truth table is adequate, we assert that certain expressions in natural language have certain definite properties, and hence we state certain empirical truths about the world, truths which may prove false. Those problems have a vast literature. We shall confine ourselves here to the statement that, in a fairly broad scope, the way of reading the connectives adopted here is in agreement with the truth tables. \newpage \begin{center} \LARGE{\bf 1. Logical Relationship} \end{center} \vskip 1em Logical relationships between sentences are those which tell us how the validity of one sentence depends on the validity of other sentences. Moreover, only those relationships are considered logical which can be described by a reference to the grammatical structure of sentences. The most important example of a logical relationship is the relationship of logical consequence. For any two sentences $\alpha$ and $\beta$ the following relationship holds: if $\alpha$ is a logical consequence of $\beta$ then, if $\beta$ is true, so is $\alpha$. The above statement defines a condition necessary for $\alpha$ to be a logical consequence of $\beta$. If $\beta$ is true and $\alpha$ is false then the relationship of sentence $\alpha$ being the logical consequence of sentence $\beta$ does not hold.
On the other hand, the sentence "Warsaw is the capital of Poland" is not a logical consequence of the sentence "Paris is the capital of France", although if the latter is true then so is the former (since they are both true). Let us consider three sentences:\\ $\alpha$: John is a student,\\ $\beta$: John is a man,\\ $\gamma$: Somebody is a student.\\ Sentence $\beta$ follows from sentence $\alpha$ since if sentence $\alpha$ is true then sentence $\beta$ is also true. Similarly, sentence $\gamma$ follows from sentence $\alpha$. Nevertheless, only the latter can be considered as a case of logical consequence. By replacing the word "man" in sentence $\beta$ by another word of the same grammatical form, for instance "chimney-sweep", we would obtain a new sentence $\beta'$ not necessarily following from sentence $\alpha$. $\beta'$ does not follow from $\alpha$ because $\alpha$ is true and $\beta'$ is not. Note also that the same substitution of the word "chimney-sweep" for the word "man" leaves sentence $\alpha$ unchanged, since the word "man" does not appear in it. This demonstrates the fact that the following of sentence $\beta$ from sentence $\alpha$ is not solely a matter of their grammatical structure. Now let us replace the words "John" and "student" in sentences $\alpha$ and $\gamma$ by other words possessing the same grammatical form, for example "Paul" and "chimney-sweep". We shall obtain the following two sentences:\\ $\alpha'$: Paul is a chimney-sweep,\\ $\gamma'$: Somebody is a chimney-sweep.\\ Sentence $\gamma'$ follows from sentence $\alpha'$ since if sentence $\alpha'$ is true then so is sentence $\gamma'$.
Because "somebody" can be grammatically characterized as a personal pronoun of the male gender in the third person singular, nominative, then replacing it by another word possessing the same characteristics is not able to interfere with the relationship of one of these sentences following from the other. Thus we now have at our disposal the following criterion of logical consequence: {\it For any two sentences $\delta$ and $\epsilon$, sentence $\delta$ is a logical consequence of sentence $\epsilon$ if and only if for any substitution of words preserving the grammatical structure of sentences $\delta$ and $\epsilon$, if $\delta'$ is a sentence obtained from $\delta$ and $\epsilon'$ is a sentence obtained from $\epsilon$ by this substitution, then $\delta'$ follows from $\epsilon'$ (or, equivalently, $\epsilon'$ is true then so is $\delta'$).} \medskip {\large\bf REMARK.} There is a divergence between grammatical sentence structure investigated by linguists and the sentence structure considered by logicians. Nowadays we discern surface structure and deep structure of sentences. Deep structure of a sentence determines the content of the communication it contains; surface structure defines the form in which it is recorded. Logic is concerned only with the deep structure of sentences, while to linguistics both the deep and the surface structures are of interest. Therefore, when we investigate logical relationships between sentences, we can sometimes ascribe to them a grammatical structure other than the one that could be infered from the way they are recorded. Moreover, increasingly widespread attention is drawn by the fact that the same language may possess many nonequivalent grammatical descriptions. Which relationships are considered logical depends on the choice of grammar. Thus the above accepted definition of a logical relationship does not provide an absolute method of discerning whether a given relationship is logical or not. 
\medskip {\large\bf HISTORICAL\hskip 1em REMARK.} The idea of isolating logical relationships between sentences (that is, relationships between their validity, as determined by their grammatical structure) comes from Aristotle of Stagira (384 - 322 B.C.). He was the creator of the first logical theory describing logical relationships, later known as syllogistics. The Sentential Calculus is also a theory of this type; it was created by the Stoics in the third century B.C. The first distinct criterion distinguishing logical relationships was provided by the logicians of the Middle Ages. They introduced the division of words into categorematic and syncategorematic (or, in other words, denotational and codenotational). Categorematic were nouns, adjectives and verbs; adjectives, however, were not recognized as a separate category and were ranked either among verbs (in ancient times) or among nouns (in the Middle Ages). The word "is" (Latin "est") employed as in sentences $\alpha$, $\beta$ and $\gamma$ was considered to be a connective, and hence syncategorematic. Nowadays we distinguish such a use of the verb "to be" by saying that here it performs the role of an auxiliary verb. The mediaeval criterion distinguishing logical relationships can be summarized as follows: those truth relationships between sentences are logical which hold under any admissible substitution for the categorematic terms. We propose that the reader compare the mediaeval criterion with the one introduced in this chapter by isolating the categorematic and syncategorematic words in sentences $\alpha$, $\beta$ and $\gamma$, and subsequently investigating whether $\beta$ follows logically from $\alpha$ and $\gamma$ from $\alpha$ (according to the mediaeval criterion).
The mediaeval notions of syn- and categorematicity were revived by the contemporary American logician Quine, who proposed a new understanding of them as follows: {\it Categorematic are those expressions which can be interpreted by relating them to definite concrete or abstract objects of the external world. Syncategorematic are those expressions which (possibly together with other expressions) indicate certain grammatical constructs but do not have to correspond to any definite objects in the world.} \newpage \begin{center} \LARGE{\bf 2. Predicate Calculus} \end{center} \vskip 1em Sentential Calculus is too poor a theory to describe the logical dependencies between sentences. In an example as simple as the logical following of sentence $\gamma$ ("Somebody is a student.") from sentence $\alpha$ ("John is a student."), Sentential Calculus already appears to be inadequate. Predicate Calculus is a theory describing logical dependencies between sentences in which not only sentential connectives and simple sentences but also predicates, individual constants, variables, and the universal and existential quantifiers are distinguished. Individual constants (proper names) are, for example, "John", "Paul", "Warsaw", "Bia\l ystok", "Poland", "Europe", "Warsaw University", etc. Predicate symbols are, for example, "is a student", "is a man", "is a chimney-sweep", "sleeps", "is white", etc. It is more difficult to isolate expressions performing the role of variables and quantifiers in the English language.
Let us consider the following paraphrase\footnote {A paraphrase of a sentence is another sentence of the same language possessing the same meaning as the given sentence; having a different surface structure, it may have the same deep structure.} of sentence $\gamma$:\\ $\gamma'$: There exists somebody who is a student. In this sentence the words "somebody" and "who" represent the same object; it should therefore be acknowledged that they correspond to the same symbol, possibly serving two different functions; we say that these words represent an individual variable. For example, let x denote this variable. When both words are replaced by x we obtain the following sentence:\\ $\gamma''$: There exists x (such that) x is a student.\\ In order to conform to the stylistic rules of the English language we had to add the words "such that". The two-part phrase "there exists ... such that" is an existential quantifier (deriving its name from the scholastic Latin {\em existere} - to exist). Personal ("he", "she", "who") or impersonal ("this", "that", "it") pronouns may perform the role of variables. However, the phrases functioning as individual variables cannot always be identified in a given sentence. In such a case we must paraphrase the sentence under consideration so that their identification becomes possible. Similar difficulties may be encountered also with respect to the existential quantifier; in this case, too, an appropriate paraphrase may be helpful. Let us consider one more sentence:\\ $\delta$: Every man is mortal.\\ This can be paraphrased as\\ $\delta'$: For any individual, if he is a man then he is mortal.\\ This sentence is a little clumsy; nevertheless it states the same thing as sentence $\delta$. In $\delta'$, the words "individual" and "he"\footnote{\footnotesize In many European languages (particularly in English and Polish) the pronoun of the third person masculine gender is used as a personal pronoun independently of sex.
The habit of writing "she or he", or even more radical proposals such as inventing a new word for a general pronoun, are just projects for changing linguistic rules. Compare the ambiguity of using "man" for a human being and "man" for a male human being. However, owing to the popularization of this new linguistic theory, native speakers themselves sometimes do not know what is correct. I once noticed something like this: "he wrote a very serious letter, he was she, it was a woman" - said an American. To my question about the correctness of this way of speaking he answered: "it was incorrect, of course!". He had no sexist prejudices, for he was a he.} concern the same object and represent a variable which we will denote by y.\\ $\delta''$: For any y, if y is a man then y is mortal.\\ The expression "for any" appearing in this sentence is the universal quantifier. \newpage \begin{center} \LARGE{\bf 3. The Language Mizar-MSE} \end{center} \vskip 1em The majority of formalizations of Predicate Calculus bear no resemblance to natural language, not even to that fragment of natural language which is used in mathematical reasonings. It was the intention of the author of Mizar-MSE to create a version of Predicate Calculus preserving its simplicity while at the same time reasonably approximating the language of mathematics. In the Mizar-MSE language we can formulate two types of statements: sentences and texts. In natural language we talk about different kinds of sentences, such as affirmative, interrogative or imperative. Mathematicians, however, have for a long time been accustomed to call by the name 'sentence' only affirmative sentences, and their interests are often limited to this kind of sentence. Hence the sentences of Mizar-MSE correspond to affirmative sentences of natural language. We shall now determine the kinds of inscriptions which are sentences in the Mizar-MSE language. Let us begin with some examples.
Sentences $\alpha$, $\gamma$, $\delta$ appearing in the previous chapters can be translated into Mizar-MSE as follows:\\ $\alpha^{*}$ : {\tt student[John]}\\ $\gamma^{*}$ : {\tt {\bt ex} x {\bt st} student[x]}\\ $\delta^{*}$ : {\tt {\bt for} y {\bt holds} man[y] {\bt implies} mortal[y]}\\ The last sentence can also be translated as\\ $\delta^{**}$ : {\tt {\bt for} y {\bt st} man[y] {\bt holds} mortal[y]}\\ In the Mizar-MSE language sentences $\delta^{*}$ and $\delta^{**}$ are paraphrases of each other. Instead of "John is a student" we can write {\tt student[John]}. This is a much simpler notation, since it does not require complicated grammatical rules for constructing simple sentences from names and predicate symbols. In order to determine which inscriptions are sentences of Mizar-MSE we cannot refer to any list of sentences of that language. This is caused by the fact that for any finite list of sentences we can point out a sentence not appearing on the list; we can, for example, take the conjunction of the two longest sentences on the list. Therefore, to establish which inscriptions are sentences and which are not, we must avail ourselves of a certain grammar. To that purpose we shall introduce the auxiliary notion of a formula, and then we shall say that sentences are formulae satisfying certain additional conditions. First we distinguish the following kinds of expressions: \begin{enumerate} \item individual expressions \begin{itemize} \item individual constants (e.g. John) \item individual variables (e.g. x, y) \end{itemize} \item predicate symbols (e.g. student, man, mortal) \item square brackets: [, ] \item comma: , \item round brackets: (, ) \item sentential connectives: {\bt not}, {\bt \&}, {\bt or}, {\bt implies}, {\bt iff} \item quantifier expressions: {\bt for}, {\bt st}, {\bt holds}, {\bt ex}. \end{enumerate} Individual expressions and predicate symbols can be arbitrary sequences of letters and digits.
Atomic formulae are created by writing first a predicate symbol, then the left square bracket [, then a list of individual expressions and, finally, the right square bracket ]. A list of individual expressions is a sequence of these expressions separated by commas. Thus, for example, the following are atomic formulae: man[John], m[j], John[man], older[Peter,x], s[p,x,s], etc. To construct atomic formulae we use only the expressions described in 1 - 4. The remaining three kinds of expressions are used to construct complex formulae from simpler ones; atomic formulae are the simplest. Now we shall describe the rules of creating complex formulae: I) If expression $\alpha$ is a formula then the expression {\bt not} $\alpha$ is a formula. II) If expressions $\alpha$ and $\beta$ are formulae then the expressions $(\alpha$ {\bt \&} $\beta)$, $(\alpha$ {\bt or} $\beta)$, $(\alpha$ {\bt implies} $\beta)$, $(\alpha$ {\bt iff} $\beta)$ are formulae. III) If expression $\alpha$ is a formula and $\upsilon$ is a variable list then the expressions $(${\bt ex} $\upsilon$ {\bt st} $\alpha)$, $(${\bt for} $\upsilon$ {\bt holds} $\alpha)$ are formulae. IV) If expressions $\alpha$ and $\beta$ are formulae and $\upsilon$ is a variable list then the expression $(${\bt for} $\upsilon$ {\bt st} $\alpha$ {\bt holds} $\beta)$ is a formula. A variable list is a sequence of individual variables separated by commas. Moreover, we shall say that formulae are only those expressions which can be obtained from atomic formulae by applying rules I) - IV) a finite number of times. Observe that according to this definition neither $\gamma^{*}$ nor $\delta^{*}$ nor $\delta^{**}$ is a formula. However, we can obtain formulae from these expressions too if in the appropriate places we insert round brackets. For example, the expression {\tt ({\bt ex} x {\bt st} student[x])} is a formula according to rule III), since the expression student[x] is an atomic formula and the expression x is a variable list.
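The formation rules I) - IV) can likewise be mimicked mechanically. In the following illustrative sketch in Python (a language used here only for illustration; the function names and the encoding are ours, not part of Mizar-MSE) each rule becomes a function which builds the new formula, round brackets included, from its constituents:

```python
# Formation rules of Mizar-MSE formulae as string-building functions
# (an illustrative sketch; the function names are ours).

def atomic(pred, terms):            # predicate symbol + square brackets
    return pred + "[" + ",".join(terms) + "]"

def neg(a):                         # rule I)
    return "not " + a

def binary(conn, a, b):             # rule II): conn is &, or, implies, iff
    return "(" + a + " " + conn + " " + b + ")"

def ex_st(vs, a):                   # rule III), existential form
    return "(ex " + ",".join(vs) + " st " + a + ")"

def for_holds(vs, a):               # rule III), universal form
    return "(for " + ",".join(vs) + " holds " + a + ")"

def for_st(vs, a, b):               # rule IV)
    return "(for " + ",".join(vs) + " st " + a + " holds " + b + ")"

# The fully bracketed formula obtained from rule III):
print(ex_st(["x"], atomic("student", ["x"])))   # (ex x st student[x])
```

Any expression produced by finitely many applications of these functions, starting from atomic formulae, is a formula in the sense of the definition above.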
Round brackets play a very important part because they let us distinguish, for example, between the sentence\\ {\tt A: (student[John] {\bt or} (drunk[John] {\bt \&} thief[John]))}\\ and the sentence\\ {\tt B: ((student[John] {\bt or} drunk[John]) {\bt \&} thief[John])}.\\ Although these sentences differ merely in the position of brackets, the sentence thief[John] logically follows only from the latter. According to the given definition, however, formulae contain too many brackets; in particular, the outermost brackets appear completely superfluous. We shall therefore adopt the following rules of dropping brackets: \begin{itemize} \item the outermost brackets can always be dropped; \item brackets which are the outermost brackets at a given stage of construction can be dropped, provided that the next construction rule applied is either III) or IV). \end{itemize} It should, of course, be remembered that brackets may be dropped only in pairs; that is, if we drop a left bracket then we must also drop the corresponding right bracket. We can also drop any pair of brackets introduced by rule II) together with the symbol {\bt \&}. Thus the following expression is a formula: {\tt C: student[John] {\bt or} drunk[John] {\bt \&} thief[John]}. There arises the problem whether C is to be interpreted like A or like B. To avoid such ambiguities we shall introduce binding rules for connectives and quantifiers: {\bt not} is the strongest binder, {\bt \&} is weaker, {\bt or}, {\bt implies}, {\bt iff} are still weaker, and quantifiers are the weakest. Thus, having determined the binding strengths, we can now insert the missing brackets again.
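The binding rules for the connectives can be made precise by a small parser which restores the omitted brackets. The sketch below is ours, in Python, and only illustrative; in particular, treating {\bt or}, {\bt implies} and {\bt iff} as a single precedence level with grouping from the left is our simplifying assumption, not a rule of Mizar-MSE.

```python
# Restoring brackets by binding strength: 'not' binds most strongly,
# then '&', then the remaining two-argument connectives (a sketch).

PREC = {"not": 3, "&": 2, "or": 1, "implies": 1, "iff": 1}

def parse(tokens, min_prec=1):
    """Turn a token list into a fully bracketed string."""
    if tokens[0] == "not":
        tokens.pop(0)
        left = "not " + parse(tokens, PREC["not"])   # 'not' grabs one operand
    else:
        left = tokens.pop(0)                         # an atomic formula
    while tokens and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        right = parse(tokens, PREC[op] + 1)          # stronger binders group first
        left = "(" + left + " " + op + " " + right + ")"
    return left

# '&' binds more strongly than 'implies', so the conjunction is bracketed first:
print(parse("p implies q & r".split()))   # (p implies (q & r))
```

Run on formula C, the same procedure decides mechanically which of A and B it denotes; the reader may wish to compare the parser's answer with his own.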
When formulae regain their brackets, in the formula\\ \hspace*{7em}$\alpha$ {\bt or} {\bt not} $\beta$ {\bt \&} $\gamma$ \\ we must insert them as follows:\\ \hspace*{7em}$(\alpha$ {\bt or} $(${\bt not} $\beta$ {\bt \&} $\gamma))$.\\ {\bt not} binds most strongly, so we cannot insert the brackets thus:\\ \hspace*{7em}$(\alpha$ {\bt or} {\bt not} $(\beta$ {\bt \&} $\gamma))$;\\ {\bt \&}, in turn, binds more strongly than {\bt or}, therefore\\ \hspace*{7em}$((\alpha$ {\bt or} {\bt not} $\beta)$ {\bt \&} $\gamma)$\\ is also an incorrect reconstruction of the bracket structure. We propose that the reader attempt to insert round brackets in formula C. The Mizar-MSE language offers various further possibilities of simplifying notation; they will be presented in later sections as the need arises. Now that we know what Mizar-MSE formulae are, we can say that sentences of the Mizar-MSE language are formulae with no free variables. If x is an individual variable then the formula student[x] is not a sentence, because here x is a free variable. On the other hand, $\gamma^{*}$ is a sentence, since x is not a free variable in $\gamma^{*}$. A variable x is a free variable in formula $\alpha$ if and only if x appears in formula $\alpha$ outside the scope of any quantifier. This is of course an informal explication. To define what it means that a variable x is free in a formula $\alpha$ we shall proceed as follows: first we define what it means when $\alpha$ is an atomic formula; then, assuming that the relation is defined for all subformulae of $\alpha$, we reduce the problem of deciding whether x is free in $\alpha$ to the problems of deciding whether x is free in some subformulae of $\alpha$.\footnote{\footnotesize Compare this procedure with inductive definitions in the arithmetic of natural numbers; but do not worry, we will not go into the theory of inductive definitions, and this technique will not be used in the following chapters.} Every variable appearing in an atomic formula is a free variable in this formula.
A variable appearing in a formula of the form {\bt not} $\alpha$ is free in it if and only if this variable is a free variable in formula $\alpha$. In formulae of the form $\alpha$ {\bt \&} $\beta$, $\alpha$ {\bt or} $\beta$, $\alpha$ {\bt implies} $\beta$, $\alpha$ {\bt iff} $\beta$ the free variables are those which are free in $\alpha$ or in $\beta$. If $\upsilon$ is a variable list and $\alpha$ is a formula, then in formulae of the form {\bt ex} $\upsilon$ {\bt st} $\alpha$, {\bt for} $\upsilon$ {\bt holds} $\alpha$ the free variables are those which do not appear on the variable list $\upsilon$ and are free in formula $\alpha$. Finally, in the formula {\bt for} $\upsilon$ {\bt st} $\alpha$ {\bt holds} $\beta$ the free variables are those which do not appear on the variable list $\upsilon$ and are free in $\alpha$ or in $\beta$. For example, in the formula\\ \hspace*{10em}{\tt ({\bt for} x, y {\bt st} p[x] {\bt holds} p[y]) {\bt \&} p[x]}\\ x is a free variable since its last occurrence is outside the scope of any quantifier. The reader is encouraged to consider whether x is a free variable in the following formula:\\ \hspace*{10em}{\tt {\bt for} x, y {\bt st} p[x] {\bt holds} p[y] {\bt \&} p[x].} Observe that to find the correct answer the problem of inserting the missing brackets should first be solved. The division of individual expressions into variables and constants was used only to distinguish between formulae and sentences. On that occasion we said that both constants and variables can be arbitrary sequences of letters and digits. Thus the same expression can play the part of either a constant or a variable. The question whether a given expression is, or is not, a constant is resolved in the actual text. Since constants are introduced into the text through special constructions, the question whether a given formula is, or is not, a sentence is also resolved in the actual text and, as will turn out later, in a specific place of that text.
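The inductive definition just given translates directly into a recursive procedure. Below is a minimal sketch in Python (the tuple encoding of formulae is my own illustration, not Mizar-MSE notation), computing the free variables of the example formula above:

```python
# Formulae encoded as nested tuples:
#   ('atom', predicate, [arguments])          e.g. ('atom', 'p', ['x'])
#   ('not', f), ('&', f, g), ('or', f, g), ('implies', f, g), ('iff', f, g)
#   ('ex', vars, f), ('for', vars, f)         quantifiers bind every
#   ('for_st', vars, f, g)                    variable on the list vars
def free_vars(formula):
    tag = formula[0]
    if tag == 'atom':
        # every variable in an atomic formula is free in it
        return set(formula[2])
    if tag == 'not':
        return free_vars(formula[1])
    if tag in ('&', 'or', 'implies', 'iff'):
        return free_vars(formula[1]) | free_vars(formula[2])
    if tag in ('ex', 'for'):
        return free_vars(formula[2]) - set(formula[1])
    if tag == 'for_st':
        return (free_vars(formula[2]) | free_vars(formula[3])) - set(formula[1])

# (for x, y st p[x] holds p[y]) & p[x] : the last occurrence of x is free
f = ('&',
     ('for_st', ['x', 'y'], ('atom', 'p', ['x']), ('atom', 'p', ['y'])),
     ('atom', 'p', ['x']))
print(free_vars(f))  # {'x'}
```

The recursion mirrors the definition exactly: the atomic case is the base, and each connective or quantifier case reduces the question to the subformulae.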
Moreover, let us add that in spite of the considerable freedom in choosing names for variables and constants the following restriction holds: {\it In a definite text and in a definite place the same expression cannot be both a variable and a constant.} The ways of constructing texts in Mizar-MSE will be discussed in later sections. \newpage \begin{center} \LARGE{\bf 4. Proving} \end{center} \vskip 1em As a method of justifying the truth of sentences, proving appeared in ancient Greece around the 6th century B.C. The first known example of applying a proof to demonstrate the truth of a sentence which could not be justified by referring to experience was probably the proof that the side and diagonal of a square are incommensurable. This reasoning, attributed to Pythagoras\footnote{According to tradition, in the Pythagorean school this discovery was kept strictly secret, since it contradicted (at least as they thought) the theory which they propagated: that everything is built up of numbers. Probably they considered the quotient of the diagonal by the side of a square as not corresponding to any existing number.}, involves certain properties of natural numbers and certain geometrical facts. Let us recall the reasoning: \vskip 1em {\bf Theorem.} There is no common measure for a diagonal and a side of a square. \vskip 1em {\bf Proof} Assume that our thesis is false, i.e., that for a certain square K there exists an interval {\it m'} such that for certain natural numbers {\it i, j} the side of the square can be obtained by the {\it i}-fold composition of the interval {\it m'}, and its diagonal by the {\it j}-fold composition of this interval. If there exists a natural number {\it a} $>$ 1 such that both {\it i} and {\it j} are divisible by {\it a}, then there exists a maximal number with this property; let {\it a} denote this maximal number (and let {\it a} = 1 if no such number exists). Thus let $k = \frac{i}{a}$ and $n = \frac{j}{a}$. Let {\it m} be the interval obtained by composing the interval {\it m'} {\it a} times.
To recapitulate: from the assumption that our thesis is not true it follows that there exist a square K, natural numbers {\it k, n} and an interval {\it m} such that: \begin{enumerate} \item any side of the square K can be obtained by the {\it k}-fold composition of the interval {\it m}; \item a diagonal of the square K can be obtained by the {\it n}-fold composition of the interval {\it m}; \item the numbers {\it k, n} are mutually prime (i.e., there does not exist a natural number \\ {\it l} $>$ 1 dividing both {\it n} and {\it k} without a remainder); another way to state this fact is to say that {\it m} is the greatest common measure of the side and the diagonal of square K. \end{enumerate} By the well-known Pythagorean theorem, the area of the square whose side is the diagonal of our square K is twice as great as the area of square K, i.e., $2k^{2}m^{2} = n^{2}m^{2}$, where $m^{2}$ denotes the area of a square with side {\it m}. Hence it follows that $2k^{2} = n^{2}$. Continuing our proof, we make use of the fact that a number with an even square is itself even. $n^{2} = 2k^{2}$, so $n^{2}$ is even; therefore {\it n} is even also; thus for some natural number {\it p} we have {\it n} = 2{\it p}. Hence it follows that $(2p)^{2} = 2k^{2}$, i.e., $4p^{2} = 2k^{2}$. Thus $k^{2} = 2p^{2}$, which proves that {\it k} is also an even number. We have therefore arrived at the conclusion that both {\it k} and {\it n} are even; this contradicts 3), i.e., the fact that {\it k} and {\it n} are mutually prime. Thus, having assumed that for a certain square its side and diagonal are commensurable, we obtained a contradiction. This proves that the side and diagonal of any square are incommensurable. {\bf Q.E.D.}\footnote{\footnotesize The Latin abbreviation Q.E.D. means {\it quod erat demonstrandum} (in English: which was to be proved); it is used to mark the end of a proof.} \vskip 1em This discovery of the Pythagoreans is difficult to understand if Greek geometry is viewed as a theory of measuring areas.
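The arithmetical core of the argument, namely that no mutually prime pair {\it k, n} satisfies $n^{2} = 2k^{2}$, can also be probed empirically. The sketch below (in Python, my illustration only; the search range is necessarily finite, whereas the theorem covers all natural numbers) finds no solution:

```python
from math import gcd, isqrt

# Look for natural numbers k, n with n^2 = 2*k^2 and k, n mutually prime.
# The proof above guarantees there are none; the exhaustive search
# confirms this for every k up to the chosen bound.
solutions = []
for k in range(1, 100_000):
    n = isqrt(2 * k * k)                 # the only candidate for n
    if n * n == 2 * k * k and gcd(k, n) == 1:
        solutions.append((k, n))
print(solutions)  # [] -- no such pair exists
```

Of course no finite search can replace the proof; it merely illustrates what the theorem asserts.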
The incommensurability of the side and diagonal of a square cannot be ascertained by actual measuring, as measurements performed with a real tool are always burdened with some error characteristic of that tool. Even if this error were very small, the incommensurability of two intervals still could not be discovered. Not only can the theorem just proved not be justified by referring to experience but, what is more, it is not obvious. In such cases we are particularly inclined to scrutinize most carefully the reasoning which led us to so paradoxical a conclusion. If this reasoning is to convince us, we must be assured that it is correct. The best method of finding this out is to check the whole demonstration step by step. Since the quoted proof was constructed, the language of mathematics has become much more precise and unambiguous. In principle, the verification of a proof's correctness can be performed mechanically (if no premises or parts of the reasoning have been omitted). Creativity may be required of us to find a proof, but not to check whether it is correct. The Predicate Calculus is a theory characterizing the basic part of that fragment of natural language which is employed for mathematical reasoning. Both the language of the Predicate Calculus and proofs in that language can be unambiguously and precisely described. Thus it is possible to satisfy the requirement of the checkability of a proof's correctness. In the following sections we will discuss an actual version of the Predicate Calculus, namely Mizar-MSE. \newpage \begin{center} \LARGE{\bf 5. Justification in Mizar-MSE} \end{center} \vskip 1em If a sentence in the Mizar-MSE language logically follows from other sentences in the same language, then it can be justified by assuming the validity of the sentences from which it follows. For example, the sentence\\ \hspace*{5cm}1. Socrates is mortal.\\ follows from the sentences\\ \hspace*{5cm}2. Every man is mortal.\\ and\\ \hspace*{5cm}3.
Socrates is a man.\\ If it is assumed that sentences 2 and 3 are true, the justification of sentence 1 can be recorded in the following way: \begin{tabbing} aa\=aa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} y {\bt for} something;} \\ \>{\tt {\bt given} Socrates {\bt being} something;} \\ \>{\tt A2: {\bt for} y {\bt st} man[y] {\bt holds} mortal[y];} \\ \>{\tt A3: man[Socrates];} \\ \> \>{\bt begin} \\ \>{\tt T1: mortal[Socrates] {\bt by} A2, A3;} \end{tabbing} The phrase "{\tt {\bt reserve} y {\bt for} something;}" \footnote{\footnotesize In older versions of Mizar-MSE "{\bt let {\rm ...} denote {\rm ...}}" was used instead of "{\bt reserve {\rm ...} for {\rm ...}}".} serves to introduce the variable y, where "something" is the name of the type of this variable; the phrase "{\tt {\bt given} Socrates {\bt being} something}" introduces the individual constant "Socrates" of type "something", i.e. the same type as that of the variable y (this is important, since a change of the type of the constant in this text would render the justification incorrect). A2 and A3 are labels of sentences assumed to be true throughout our reasoning. The word {\bt begin} marks the end of the environment and the beginning of the text proper: every sentence following {\bt begin} must be justified. T1 is the label of the sentence undergoing justification; {\bt by} means that it is justified by reference to the premises labelled A2 and A3. Such a justification is correct since it is obvious that sentence T1 follows logically from sentences A2 and A3. The semicolon ";" plays the part of a full stop. However, in order to justify a sentence it is not always sufficient to simply use {\bt by} followed by the premise labels. Sometimes a proof must be presented.
The following is an example of reasoning in which, from the fact that every man is an animal and every animal is mortal, we infer that every man is mortal: \begin{tabbing} aa\=aaa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} something;} \\ \>{\tt B1: {\bt for} x {\bt st} man[x] {\bt holds} animal[x];} \\ \>{\tt B2: {\bt for} x {\bt st} animal[x] {\bt holds} mortal[x];} \\ \> \>{\bt begin} \\ \>{\tt {\bt for} x {\bt st} man[x] {\bt holds} mortal[x]} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt let} a {\bt be} something {\bt such that} 1: man[a];} \\ \> \>{\tt animal[a] {\bt by} B1, 1;} \\ \> \>{\tt {\bt hence} mortal[a] {\bt by} B2;} \\ \> \>{\bt end;} \end{tabbing} The proof begins with {\bt proof} and ends with {\bt end}\footnote {\footnotesize{\bt end} is used here in the same way as Q.E.D. in the previous chapter.}. The construction "{\tt {\bt let} a {\bt be} something {\bt such that} ...}", called a generalization construction, is used to introduce a special constant {\tt a} of the same sort as the variable x. Simultaneously, we assume that the sentence {\tt man[a]} is true (or, in other words, that {\tt a} is a man). Once we have so assumed, it remains to be proved that {\tt mortal[a]}, which is our thesis. Such an argument is often used to prove conditional universal sentences like the statement being proved in our example. Indeed, in order to prove that every man is mortal it suffices to choose some indefinite object and, assuming only that it is a man, to prove that it is mortal too. The individual constant {\tt a} is defined only inside the proof and is therefore called a local constant (as opposed to constants introduced prior to {\bt begin}, such as Socrates in the previous example). Since the constant {\tt a} corresponds to the variable bound by the universal quantifier in the proved statement (in a sense replacing it in the proof), it is called a generalized constant.
In order to elucidate the function of the word {\bt hence} let us formulate the proof in another way: \begin{tabbing} aaaaa\=aa \kill \>{\bt proof} \\ \>{\tt {\bt let} a {\bt be} something {\bt such that} 1: man[a];} \\ \>{\tt 2: animal[a] {\bt by} B1, 1;} \\ \>{\tt {\bt thus} mortal[a] {\bt by} B2, 2;} \\ \>{\bt end;} \end{tabbing} This proof is as correct as the previous one, although the word {\bt hence} does not appear. Actually, it was replaced by: \begin{itemize} \item the word {\bt thus}, \item equipping the sentence {\tt animal[a]} with the label {\tt 2}, \item and inserting this label into the justification of the subsequent statement. \end{itemize} Thus the words {\bt hence} and {\bt thus} serve to indicate that the subsequent statement is the thesis of the proof, or, in other words, that which should be proved at the given stage of the reasoning. The semicolon denotes the end of a text fragment\footnote{\footnotesize In technical descriptions such text fragments are also called items.} and is omitted only after words such as {\bt environ}, {\bt begin} and {\bt proof}. It also does not appear after statements being proved. A text fragment can be recorded in several lines, or, contrariwise, one line can contain several text fragments. The division of the text into fragments determines its content, and the division into lines the manner of its recording. Let us recapitulate all that has been said on justification. There are two ways to justify a statement: \begin{enumerate} \item by reference (i.e. by indicating the premises from which the given sentence is inferred), \item by a proof. \end{enumerate} When using the first method we write {\bt by} followed by the labels of the sentences referred to; in order to refer to the immediately preceding sentence as the only premise we may replace {\bt by} by {\bt then} or, possibly, use {\bt hence} instead of {\bt thus} if the justified sentence is the conclusion of the proof.
In the second method we omit the semicolon after the sentence to be proved, instead recording its proof in the following text fragments. The proof of a universal conditional sentence of the form\\ \hspace*{7em}{\tt {\bt for} x {\bt st} $\alpha$(x) {\bt holds} $\beta$(x)},\\ where $\alpha${\tt (x)} and $\beta${\tt (x)} are formulae with one free variable {\tt x} of type {\tt t}, we begin with the fragment\\ \hspace*{5em}{\tt {\bt let} b {\bt be} t {\bt such} {\bt that} $\alpha$(b)};\\ Further, we must prove the statement $\beta${\tt (b)}; in other words, $\beta${\tt (b)} is our thesis for the remainder of the proof. The final conclusion (i.e. the final thesis) must be preceded either by the word {\bt thus} or the word {\bt hence}. Here {\tt b} is a local generalized constant: its "localness" indicates its being defined only in a specific text fragment; its generality, that it was introduced for the proof of a universal sentence and that its function corresponds to the variable bound by the universal quantifier. The word {\bt proof} denotes the beginning, and {\bt end} the end, of a proof. \newpage \begin{center} \LARGE{\bf 6. Implication and Assumptions, Deduction Theorem} \end{center} \vskip 1em So far we have presented the method of proving sentences of one type only, namely, universal conditional sentences. Now we shall show how to construct proofs of arbitrary conditional sentences. A conditional sentence is a sentence made up of two other sentences joined by the word {\bt implies}, e.g., {\tt man[John] {\bt implies} mortal[John]}.
Consider the following reasoning: \begin{tabbing} aa\=aaa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} something;} \\ \>{\tt {\bt given} John {\bt being} something;} \\ \>{\tt B1: {\bt for} x {\bt st} man[x] {\bt holds} animal[x];} \\ \>{\tt B2: {\bt for} x {\bt st} animal[x] {\bt holds} mortal[x];} \\ \> \>{\bt begin} \\ \>{\tt man[John] {\bt implies} mortal[John] } \\ \> \>{\bt proof} \\ \> \>{\tt {\bt assume} man[John];} \\ \> \>{\tt {\bt then} animal[John] {\bt by} B1;} \\ \> \>{\tt {\bt hence} mortal[John] {\bt by} B2;} \\ \> \>{\bt end;} \end{tabbing} The word {\bt assume} indicates that the following sentence is an assumption of the proof. The word {\bt then} means that the following sentence is justified by reference to the immediately preceding one (the use of {\bt then} serves the same function as the substitution of {\bt hence} for {\bt thus} before the conclusion of a proof). The above reasoning is correct. Let us investigate why this is so. The proved sentence is an implication. We know that an implication with a true antecedent is true exactly when its consequent is true. In our proof we have assumed that the antecedent of the implication is true, and under this assumption we have proved that the consequent of the implication is also true. Hence the implication has to be true. If our assumption were not true, then the conclusion {\tt mortal[John]} would not have to be true either. But in that case the antecedent of the implication would be false and, again, the implication would be true, since an implication with a false antecedent is always true regardless of its consequent. We have mentioned that the sentences \begin{center} {\tt {\bt for} y {\bt st} man[y] {\bt holds} mortal[y]},\\ {\tt {\bt for} y {\bt holds} man[y] {\bt implies} mortal[y]}\\ \end{center} are mutual paraphrases.
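The truth-functional facts used in this analysis can themselves be checked by running through all valuations. A minimal sketch in Python (my illustration, not part of the Mizar-MSE system): an implication is false only for a true antecedent with a false consequent, and implications chain together just as {\tt man[John]}, {\tt animal[John]}, {\tt mortal[John]} do in the proof above:

```python
def implies(p, q):
    # An implication is false only when its antecedent is true
    # and its consequent is false.
    return (not p) or q

# The single falsifying row of the truth table of "implies":
falsifying = [(p, q)
              for p in (True, False) for q in (True, False)
              if not implies(p, q)]
print(falsifying)  # [(True, False)]

# Chaining: whenever (p implies q) and (q implies r) are both true,
# (p implies r) is true as well -- checked over all eight valuations.
chain_ok = all(implies(p, r)
               for p in (False, True)
               for q in (False, True)
               for r in (False, True)
               if implies(p, q) and implies(q, r))
print(chain_ok)  # True
```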
Therefore the proof from the previous section can be formulated as follows: \begin{tabbing} aa\=a\=aaa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} something;} \\ \>{\tt B1: {\bt for} x {\bt st} man[x] {\bt holds} animal[x];} \\ \>{\tt B2: {\bt for} x {\bt st} animal[x] {\bt holds} mortal[x];} \\ \> \>{\bt begin} \\ \>{\tt {\bt for} x {\bt st} man[x] {\bt holds} mortal[x]} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt let} a {\bt be} something;} \\ \> \>{\tt {\bt assume} 1: man[a];} \\ \> \>{\tt animal[a] {\bt by} B1, 1;} \\ \> \>{\tt {\bt hence} mortal[a] {\bt by} B2;} \\ \> \>{\bt end;} \end{tabbing} After the phrase "{\tt {\bt let} a {\bt be} something;}" that which remains to be proved is the sentence {\tt man[a] {\bt implies} mortal[a]}; we can therefore apply the method of constructing proofs for implications discussed above. Let us consider the following reasoning: \begin{tabbing} aa\=aaa\=\=aa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} something;} \\ \> \>{\bt begin} \\ \>{\tt (({\bt for} x {\bt st} man[x] {\bt holds} animal[x])} \\ \> \>\> \>{\tt {\bt \&} ({\bt for} x {\bt st} animal[x] {\bt holds} mortal[x]))} \\ \> \>{\tt {\bt implies} ({\bt for} x {\bt st} man[x] {\bt holds} mortal[x])} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt assume that}} \\ \> \>\>{\tt B1: {\bt for} x {\bt st} man[x] {\bt holds} animal[x]} \\ \> \>{\tt {\bt and}} \\ \> \>\>{\tt B2: {\bt for} x {\bt st} animal[x] {\bt holds} mortal[x];} \\ \> \>{\tt {\bt let} a {\bt be} something {\bt such that} 1: man[a];} \\ \> \>{\tt animal[a] {\bt by} B1, 1;} \\ \> \>{\tt {\bt hence} mortal[a] {\bt by} B2;} \\ \> \>{\bt end;} \\ \end{tabbing} The phrase "{\tt {\bt assume that} A: $\alpha$ {\bt and} B: $\beta$;}" is equivalent to the following: \begin{center} {\tt {\bt assume} C: $\alpha$ {\bt \&} $\beta$;}\\ {\tt A: $\alpha$ {\bt by} C;}\\ {\tt B: $\beta$ {\bt by} C;} \end{center} In the above example we have proved a sentence of the form $\alpha$ {\bt \&} $\beta$ {\bt implies} $\gamma$.
Previously, when proving sentence $\gamma$, we treated sentences $\alpha$ and $\beta$ as axioms\footnote{\footnotesize Axioms (from the Greek $\alpha\xi\iota o \mu\alpha$) are statements assumed without proof at the beginning of a mathematical theory. The term is taken from the "Elements", written by Euclid about 300 B.C.}. These proofs differ by only one text fragment: at the beginning of the proof of $\alpha$ {\bt \&} $\beta$ {\bt implies} $\gamma$ we have written "{\tt {\bt assume} {\bt that} B1: $\alpha$ {\bt and} B2: $\beta$;}"; the remainder is unchanged. From this fact we infer a more general relationship, which we shall formulate as a metatheorem\footnote{The prefix "meta-" is taken from Greek scientific terminology (compare metaphysics and physics). Originally it meant: behind.}, i.e., a theorem about the formal language we are now using. \medskip We say that $\beta$ is deducible from $\alpha_{1}$ , ... , $\alpha_{n}$ if and only if there exists a correct proof of the sentence $\beta$ using the sentences $\alpha_{1}$ , ... , $\alpha_{n}$ as the only axioms. \newpage {\Large\bf Deduction Theorem} For any sentences $\alpha_{1}$ , ... , $\alpha_{n}$, $\beta$ of the Mizar-MSE language, $\beta$ is deducible from\\ \mbox{$\alpha_{1}$ , ... , $\alpha_{n}$} if and only if the sentence $\alpha_{1}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt implies} $\beta$ possesses a proof using no axioms. \vskip 1em {\Large\bf Proof (of the Deduction Theorem)} Given are sentences $\alpha_{1}$ , ... , $\alpha_{n}$ , $\beta$. Assume that $\beta$ is deducible from $\alpha_{1}$ , ... , $\alpha_{n}$, i.e., that there exists a proof of the sentence $\beta$ using the sentences $\alpha_{1}$ , ... , $\alpha_{n}$ as the only axioms. We have, therefore, the following correct reasoning: \begin{tabbing} aaaaaabc\=de\=fghijkl\=mn\=op \kill \> \> \>{\tt A1:} $\alpha_{1}$ ; \\ \> \> \>{\tt A2:} $\alpha_{2}$ ; \\ \> \> \>.\hskip 1em.\hskip 1em.
\\ \> \> \>{\tt An:} $\alpha_{n}$ ; \\ \>(I)\> \>{\bt begin} \\ \> \> \> \>$\beta$ \\ \> \> \>{\bt proof} \\ \> \>{\large Q} \{ \> \>$\vdots$ \\ \> \> \>{\bt end;} \end{tabbing} (Here {\large Q} represents the text of the proof enclosed between the words {\bt proof} and {\bt end}). With such a correct reasoning at our disposal we can formulate the following proof: \begin{tabbing} a\=aa\=aa\=a\=aa\=a\=a\=aaaa\=aa \kill \> \> \> \>{\bt begin} \\ \> \> \> \> \>{\tt $\alpha_{1}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt implies} $\beta$} \\ \> \> \> \>{\bt proof} \\ \> \> \>{\bt assume that}\> \> \> \>{\tt A1:} $\alpha_{1}$ \\ \> \> \> \> \> \>{\bt and}\> \> {\tt A2:} $\alpha_{2}$ \\ \>(II)\>\> \> \> \> \> \>.\hskip 1em.\hskip 1em. \\ \> \> \> \> \> \>{\bt and}\> \> {\tt An:} $\alpha_{n}$; \\ \> \>{\large Q} \{ \> \> \> \> $\vdots$ \\ \> \> \> \>{\bt end;} \end{tabbing} After sentences $\alpha_{1}$ , ... , $\alpha_{n}$ are assumed we only have to prove sentence $\beta$. {\large Q} is a correct proof of $\beta$ using $\alpha_{1}$ , ... , $\alpha_{n}$ as axioms; in the proof {\large Q} we refer to these sentences employing labels {\tt A1, A2, ... , An} for justification by reference. For such justifications to be correct it is irrelevant whether the labelled sentences are axioms or whether they merely appeared earlier in the proof. Hence reasoning (II) is correct given that reasoning (I) is correct. Now let us assume that the sentence $\alpha_{1}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt implies} $\beta$ possesses a proof employing no axioms, i.e., that there exists a correct reasoning of the form \begin{tabbing} a\=aa\=aaaaaa\=aa\=a\=aaaa \kill \> \> \>{\bt begin} \\ \> \> \> \> \>{\tt $\alpha_{1}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt implies} $\beta$} \\ \>(III) \> \>{\bt proof} \\ \> \>{\large D} \{ \> \> \>$\vdots$ \\ \> \> \>{\bt end;} \end{tabbing} (Here {\large D} represents the text of the proof). 
\newpage Then the following reasoning will be correct: \begin{tabbing} a\=aaaaaaa\=a\=a\=aa\=aaaaaa\=aaa\=a \kill \> \> \> \>{\tt A1:} $\alpha_{1}$; \\ \> \> \> \>{\tt A2:} $\alpha_{2}$; \\ \> \> \> \>.\hskip 1em.\hskip 1em. \\ \> \> \> \>{\tt An:} $\alpha_{n}$; \\ \> \> \>{\bt begin} \\ \> \> \> \> \>$\beta$ \\ \> \> \> \>{\bt proof} \\ \> \> \> \> \>{\tt B: $\alpha_{1}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt implies} $\beta$} \\ \>(IV) \> \> \> \> \>{\bt proof} \\ \> \> \> \>{\large D} \{ \>\>\>$\vdots$ \\ \> \> \> \> \> \>{\bt end;} \\ \> \> \> \>{\bt thus} $\beta$ {\bt by} {\tt B, A1, A2, ... , An;} \\ \> \> \> \>{\bt end;} \end{tabbing} By the assumption of the correctness of reasoning (III) \begin{tabbing} aaaaaaaaaaaaa\=aaa\=aaa\=aa \kill \> \>{\bt proof} \\ \>{\large D} \{ \> \>$\vdots$ \\ \> \>{\bt end;} \end{tabbing} is a correct justification of the sentence labelled {\tt B}. It is also correct to justify the sentence $\beta$ by referring to {\tt B, A1, A2, ... , An}, since it is obvious to the Mizar-MSE system that sentence $\beta$ follows from the sentences $\alpha_{1}$ , ... , $\alpha_{n}$ , ($\alpha_{1}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt implies} $\beta$). {\bf Q.E.D.} \newpage \begin{center} \LARGE{\bf 7. Nested Proofs, Equivalence Proving} \end{center} \vskip 1em Any sentence in Mizar-MSE can be justified either by a reference or by a proof. If a sentence occurring in a proof is itself justified by a proof, a nested proof is obtained. Let this be illustrated by one more example of the reasoning already discussed.
\begin{tabbing} aa\=aaa\=a\=aaaa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} something;} \\ \>{\tt B1: {\bt for} x {\bt st} man[x] {\bt holds} animal[x];} \\ \> \>{\bt begin} \\ \> \>{\tt ({\bt for} x {\bt st} animal[x] {\bt holds} mortal[x])} \\ \> \> \> \>{\tt {\bt implies} ({\bt for} x {\bt st} man[x] {\bt holds} mortal[x])} \\ \> \>{\bt proof} \\ \> \>{\bt assume} \\ \> \> \>{\tt B2: {\bt for} x {\bt st} animal[x] {\bt holds} mortal[x];}\\ \> \> \>{\tt {\bt thus for} x {\bt st} man[x] {\bt holds} mortal[x]}\\ \> \> \> \>{\bt proof} \\ \> \> \> \>{\tt {\bt let} a {\bt be} something {\bt such that} 1: man[a];} \\ \> \> \> \>{\tt animal[a] {\bt by} B1, 1;}\\ \> \> \> \>{\tt {\bt hence} mortal[a] {\bt by} B2;}\\ \> \> \> \>{\bt end;}\\ \> \>{\bt end;} \end{tabbing} Every occurrence of the word {\bt proof} is associated with exactly one occurrence of the word {\bt end}. Let us recall that {\bt proof} denotes the beginning of a proof and {\bt end} its end. In the above example the word {\bt end} appears twice; the latter occurrence is associated with the first {\bt proof} and denotes the end of the whole proof, while the former is associated with the second {\bt proof} and denotes the end of the nested proof. Now we will discuss one of the more useful applications of nested proofs. This technique can be applied very naturally to proving equivalences. The equivalence of two sentences, that is, a sentence of the form $\alpha$ {\bt iff} $\beta$ $(\alpha$ if and only if $\beta)$, is identified with the conjunction of two implications, that is, with the sentence\\ \hspace*{7em}$(\alpha$ {\bt implies} $\beta)$ {\bt \&} $(\beta$ {\bt implies} $\alpha)$.\\ Since a conjunction is true exactly when both its conjuncts are true, for the proof of an equivalence it suffices to prove both implications. Let us consider the following proof of a sentence stating that: Every fool is wicked if and only if every fool is both a fool and wicked.
\begin{tabbing} aa\=a\=aa\=aaa\=a\=aaaaaaa\=aa \kill \>{\bt environ} \\ \>{\bt reserve} x {\bt for} something; \\ \>{\bt begin} \\ \> \>{\tt ({\bt for} x {\bt st} fool[x] {\bt holds} wicked[x])}\\ \> \> \>{\tt {\bt iff} ({\bt for} x {\bt st} fool[x] {\bt holds} fool[x] {\bt \&} wicked[x])}\\ \>{\bt proof} \\ \> \>{\tt {\bt thus} ({\bt for} x {\bt st} fool[x] {\bt holds} wicked[x]) {\bt implies}}\\ \> \> \> \> \>{\tt ({\bt for} x {\bt st} fool[x] {\bt holds} fool[x] {\bt \&} wicked[x])} \\ \> \> \> \> \>{\bt proof} \\ \> \> \> \> \>{\tt {\bt assume} 1: ({\bt for} x {\bt st} fool[x] {\bt holds} wicked[x]);}\\ \> \> \> \> \>{\tt {\bt let} a {\bt be} something {\bt such that}}\\ \> \> \> \> \> \>{\tt 11: fool[a];} \\ \> \> \> \> \>{\tt wicked[a] {\bt by} 1, 11;} \\ \> \> \> \> \>{\tt {\bt hence} fool[a] {\bt \&} wicked[a] {\bt by} 11;} \\ \> \> \> \> \>{\bt end;} \\ \> \>{\tt {\bt thus} ({\bt for} x {\bt st} fool[x] {\bt holds} fool[x] {\bt \&} wicked[x])} \\ \> \> \> \> \>{\tt {\bt implies} ({\bt for} x {\bt st} fool[x] {\bt holds} wicked[x])} \\ \> \> \> \> \>{\bt proof} \\ \> \> \> \> \>{\bt assume} \\ \> \> \> \>{\tt 2: {\bt for} x {\bt st} fool[x] {\bt holds} fool[x] {\bt \&} wicked[x];}\\ \> \> \> \> \>{\tt {\bt let} b {\bt be} something {\bt such that}}\\ \> \> \> \> \> \>{\tt 22: fool[b];} \\ \> \> \> \> \>{\tt fool[b] {\bt \&} wicked[b] {\bt by} 2, 22;} \\ \> \> \> \> \>{\tt {\bt hence} wicked[b];} \\ \> \> \> \> \>{\bt end;} \\ \>{\bt end;} \end{tabbing} In the above proof we have additionally used a new construction. So far we have assumed that the word {\bt thus} (or {\bt hence}) precedes the conclusion of the reasoning, that is, that it must be followed by the whole sentence constituting our actual thesis (the part remaining to be proved). In the presented equivalence proof {\bt thus} appears at its very beginning, followed by a sentence which is definitely not our thesis at this point.
This sentence is, however, the first conjunct of a conjunction with which (in Mizar-MSE) our thesis is identified. To prove the conjunction $\alpha$ {\bt \&} $\beta$ we can split it into two parts and first prove the first conjunct $\alpha$. This part of the reasoning is terminated by the expression {\bt thus} $\alpha$, and it still remains to prove $\beta$. If $\beta$ were also a conjunction it could, in turn, be split into two (or several, if it had the form $\gamma_{1}$ {\bt \&} $\gamma_{2}$ {\bt \&} ... {\bt \&} $\gamma_{n}$, for n $>$ 2) parts and its conjuncts could be proved separately. In the discussed example we did not make use of all possible instances of splitting the thesis. The proof of the first implication can also be recorded as follows: \begin{tabbing} aaaaaa\=aa\=a\=aaaaaaa\=aaaaaa \kill \>{\tt ({\bt for} x {\bt st} fool[x] {\bt holds} wicked[x]) {\bt implies}} \\ \> \>{\tt ({\bt for} x {\bt st} fool[x] {\bt holds} fool[x] {\bt \&} wicked[x])} \\ \> \> \>{\bt proof} \\ \> \> \>{\tt {\bt assume} 1: {\bt for} x {\bt st} fool[x] {\bt holds} wicked[x];} \\ \> \> \>{\tt {\bt let} a {\bt be} something {\bt such that}} \\ \> \> \> \>{\tt 11: fool[a];} \\ \> \> \>{\tt {\bt thus} fool[a] {\bt by} 11;} \\ \> \> \>{\tt {\bt thus} wicked[a] {\bt by} 1, 11;} \\ \> \> \>{\bt end;} \end{tabbing} \newpage \begin{center} \LARGE{\bf 8. Proving Aided by Sentential Calculus} \end{center} \vskip 1em Compared with the Predicate Calculus, the Sentential Calculus is a much poorer theory, as it takes into account considerably fewer logical relationships. Nevertheless, the difficulties most frequently appearing during proof construction involve the manner in which the sentential calculus is employed. Every tautology, and every inference valid in the Sentential Calculus, is obvious to the Mizar-MSE system. Still, this fact does not always facilitate the search for a valid proof; this is because not all tautologies and inferences of the Sentential Calculus are obvious to human beings.
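What the system finds obvious here can be recognized mechanically by the 0--1 method: a sentential formula is a tautology exactly when it is true under all valuations of its letters. A minimal sketch in Python (my illustration, not part of Mizar-MSE; Peirce's law is my choice of an unobvious example):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def tautology(f, n):
    # f is a truth function of n sentential letters; it is a
    # tautology iff it is true under all 2^n valuations.
    return all(f(*v) for v in product([False, True], repeat=n))

# Peirce's law ((a implies b) implies a) implies a -- a tautology
# that few people find obvious at first sight.
peirce = tautology(lambda a, b: implies(implies(implies(a, b), a), a), 2)
print(peirce)  # True

# The Deduction Theorem, propositionally: the premises a, a implies b
# entail b exactly when (a & (a implies b)) implies b is a tautology.
mp_as_tautology = tautology(lambda a, b: implies(a and implies(a, b), b), 2)
print(mp_as_tautology)  # True
```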
Let us consider the reasoning \begin{tabbing} aa\=aa\=aaaaa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} man;} \\ \>{\tt Z: {\bt for} x {\bt st} (fat[x] {\bt implies} gloomy[x]) {\bt holds} gloomy[x];} \\ \>{\bt begin} \\ \>{\tt {\bt for} x {\bt st} (gloomy[x] {\bt implies} fat[x]) {\bt holds} fat[x]} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt let} a {\bt be} man {\bt such that}} \\ \> \> \>{\tt 1: gloomy[a] {\bt implies} fat[a];} \\ \> \>{\tt {\bt thus} fat[a] {\bt by} 1,Z;} \\ \> \>{\bt end;} \end{tabbing} The above reasoning is valid but does not seem particularly obvious. Especially unobvious appears the justification of the sentence {\tt fat[a]} by reference to sentences 1 and Z. In this justification we actually employ a sentence which is a straightforward corollary of sentence Z, namely \\ {\tt ZA: (fat[a] {\bt implies} gloomy[a]) {\bt implies} gloomy[a]}. Thus, the unobviousness of the discussed justification boils down to the unobviousness of the fact that the sentence {\tt fat[a]} follows from sentences 1 and ZA. However, the justification is correct, since for any two sentences $\alpha$, $\beta$, if the sentences \\ {\tt A:} $(\alpha$ {\bt implies} $\beta)$ {\bt implies} $\beta$ and {\tt B:} $(\beta$ {\bt implies} $\alpha)$ are true, then sentence $\alpha$ is also true. This relationship can be verified, for example, by the 0--1 method, but there is another way. Let us assume that sentence $\alpha$ is not true, that is, $\alpha$ is false. We want to show that the sentences A and B cannot both be true. Let us assume that A and B are true; then $\beta$ is false, since an implication with a false consequent is true if and only if its antecedent is also false, and from our assumptions it follows that $\beta$ {\bt implies} $\alpha$ is true while $\alpha$ is false.
Hence the sentence $\alpha$ {\bt implies} $\beta$ is true and $(\alpha$ {\bt implies} $\beta)$ {\bt implies} $\beta$ is false, because the latter is an implication with a true antecedent and a false consequent. This conclusion, however, contradicts the assumption that sentence A is true; hence it follows that our assumptions were contradictory: it cannot happen that both A and B are true and $\alpha$ is false. Therefore we may conclude that $\alpha$ must be true as long as A and B are true too. The above reasoning demonstrates that the following inference rule is correct: \begin{center} $(\alpha$ {\bt implies} $\beta)$ {\bt implies} $\beta$, \end{center} (*) \hskip 14em \begin{tabular}{c} $\beta$ {\bt implies} $\alpha$ \\ \hline $\alpha$ \end{tabular} The sentences above the line are called the premises and the sentence below is called the conclusion. We say that an inference rule is correct if and only if whenever its premises are true its conclusion is also true. When recording inference rules we do not use actual sentences but sentence schemes. Inference rule (*) remains correct when any pair of sentences is substituted for $\alpha$ and $\beta$. Thus an inference rule describes a certain inference scheme; according to this scheme many inferences may be conducted. The correctness of an inference rule means that any inference with a scheme conforming to this rule will lead to a true conclusion as long as its premises are true. We shall now present a set of inference rules particularly useful for proof construction. Actual inferences will be obtained from these rules when actual Mizar-MSE sentences are substituted for the symbols $\alpha$, $\beta$ and $\gamma$. (The most commonly used names of the corresponding rules are given in parentheses.)
\newpage CI: \hskip 2em \begin{tabular}{c} $\alpha$, $\beta$ \\ \hline $\alpha$ {\bt \&} $\beta$ \end{tabular} \hskip 3em ( the conjunction introduction rule ) \vskip 2em OC1: \hskip 2em \begin{tabular}{c} $\alpha$ {\bt \&} $\beta$ \\ \hline $\alpha$ \end{tabular} \vskip 0.1em \hspace*{12em} ( the conjunction omitting rules ) \vskip 0.1em OC2: \hskip 2em \begin{tabular}{c} $\alpha$ {\bt \&} $\beta$ \\ \hline $\beta$ \end{tabular} \vskip 2em ID1: \hskip 2em \begin{tabular}{c} $\alpha$ \\ \hline $\alpha$ {\bt or} $\beta$ \end{tabular} \vskip 0.3em \hspace*{12em} ( the disjunction introduction rules ) \vskip 0.3em ID2: \hskip 2em \begin{tabular}{c} $\beta$ \\ \hline $\alpha$ {\bt or} $\beta$ \end{tabular} \vskip 2em OD: \hskip 2em \begin{tabular}{c} $\alpha$ {\bt or} $\beta$, $\alpha$ {\bt implies} $\gamma$, $\beta$ {\bt implies} $\gamma$ \\ \hline $\gamma$ \end{tabular} \hspace*{1em}( the disjunction omitting rule ) \vskip 2em MP: \hskip 2em \begin{tabular}{c} $\alpha$, $\alpha$ {\bt implies} $\beta$ \\ \hline $\beta$ \end{tabular} \hskip 4em ( modus ponens ) \vskip 2em TR1: \hskip 2em \begin{tabular}{c} {\bt not} $\alpha$ {\bt implies not} $\beta$ \\ \hline $\beta$ {\bt implies} $\alpha$ \end{tabular} \vskip 2em TR2: \hskip 2em \begin{tabular}{c} $\alpha$ {\bt implies} $\beta$ \\ \hline {\bt not} $\beta$ {\bt implies not} $\alpha$ \end{tabular} \vskip 0.3em \hspace{3em} ( transposition rules, from the Latin {\em transpositio} - a change of places ) \vskip 2em CR1: \hskip 2em \begin{tabular}{c} $\alpha$, {\bt not} $\alpha$ \\ \hline {\bt contradiction} \end{tabular} \vskip 0.3em \hspace{15em} ( contradiction rules ) \vskip 0.3em CR2: \hskip 2em \begin{tabular}{c} {\bt contradiction} \\ \hline $\alpha$ \end{tabular} \newpage In the last two rules the place of a sentence is taken by the {\bt contradiction} expression. This is a special, always false sentence - in algebra or arithmetic it is often replaced by the sentence 0=1.
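Each of these rules can be verified mechanically by the 0--1 method: a rule is correct when no assignment of truth values makes all its premises true and its conclusion false. The following Python sketch (Python is used here purely for illustration; it is not part of Mizar-MSE) checks rules MP and TR2, rule (*) from the previous section, and, for contrast, an incorrect rule.

```python
from itertools import product

def implies(p, q):
    # material implication: false only for a true antecedent and a false consequent
    return (not p) or q

def correct(premises, conclusion):
    # a rule over two sentence variables is correct iff every 0-1 valuation
    # that makes all premises true also makes the conclusion true
    return all(conclusion(a, b)
               for a, b in product([False, True], repeat=2)
               if all(p(a, b) for p in premises))

# MP: from alpha and (alpha implies beta) infer beta
print(correct([lambda a, b: a, lambda a, b: implies(a, b)], lambda a, b: b))  # True
# TR2: from (alpha implies beta) infer (not beta implies not alpha)
print(correct([lambda a, b: implies(a, b)],
              lambda a, b: implies(not b, not a)))                            # True
# rule (*): from ((alpha implies beta) implies beta) and (beta implies alpha) infer alpha
print(correct([lambda a, b: implies(implies(a, b), b), lambda a, b: implies(b, a)],
              lambda a, b: a))                                                # True
# an incorrect "rule": from (alpha or beta) infer alpha
print(correct([lambda a, b: a or b], lambda a, b: a))                         # False
```

The same `correct` function extends to three sentence variables (for OD) by changing the `repeat` argument.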
The applications of {\bt contradiction} will be discussed in the next section. Sentences of the form $\alpha$ {\bt implies contradiction} are identified with {\bt not} $\alpha$. This means that the rule CR1 is superfluous - it is a special case of the modus ponens rule (MP). The most surprising rule is CR2; on the other hand, it is obviously correct - it is impossible that $\alpha$ is false and {\bt contradiction} is true, independently of the choice of $\alpha$. Also, let us recall that sentences of the form $\alpha$ {\bt iff} $\beta$ are identified with $(\alpha$ {\bt implies} $\beta)$ {\bt \&} $(\beta$ {\bt implies} $\alpha)$, therefore we do not have to formulate separate rules for equivalence. In order to prove an equivalence it suffices, as we have said, to prove a conjunction of two implications. Similarly, when referring to an equivalence we will in fact refer to a conjunction of two implications. Sentences of the form {\bt not not} $\alpha$ are identified with $\alpha$. Thus, the addition or omission of an even number of negations always yields a formula interpreted in exactly the same way. The reader is invited to exercise his skill by applying the inference rules presented in this section to proving the following tautologies (i.e. logically true formulae).
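Before constructing formal proofs, each formula in the list that follows can first be confirmed to be a tautology by the 0--1 method, i.e. by checking all assignments of truth values. A minimal Python sketch (an illustration only, not part of Mizar-MSE), applied here to Peirce's law (item 2) and the first De Morgan law (item 11):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def tautology(formula, n):
    # a formula is a tautology iff it is true under every assignment
    # of truth values to its n sentence variables (the 0-1 method)
    return all(formula(*v) for v in product([False, True], repeat=n))

# Peirce law: ((p implies q) implies p) implies p
print(tautology(lambda p, q: implies(implies(implies(p, q), p), p), 2))    # True
# De Morgan: not (p & q) iff (not p or not q)
print(tautology(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2))  # True
# for comparison, a formula that is not a tautology
print(tautology(lambda p, q: implies(p, q), 2))                            # False
```

Such a check tells us only that a proof must exist; constructing the proof with the inference rules is the exercise proper.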
\vskip 2em \begin{enumerate} \item p[\ ] {\bt implies} (q[\ ] {\bt implies} p[\ ]),\hskip 1em (simplification) \item ((p[\ ] {\bt implies} q[\ ]) {\bt implies} p[\ ]) {\bt implies} p[\ ],\hskip 1em(Peirce law) \item (p[\ ] {\bt implies} (q[\ ] {\bt implies} r[\ ])) {\bt implies} (q[\ ] {\bt implies} (p[\ ] {\bt implies} r[\ ])), \hskip 1em(commutation law) \item (p[\ ] {\bt implies} q[\ ]) {\bt implies} ((q[\ ] {\bt implies} r[\ ]) {\bt implies} (p[\ ] {\bt implies} r[\ ])),\\ \hspace*{2em}(hypothetical syllogism) \item {\bt not} p[\ ] {\bt implies} (p[\ ] {\bt implies} q[\ ]),\hskip 1em(Duns Scotus law) \item (p[\ ] {\bt \&} q[\ ] {\bt implies} r[\ ]) {\bt implies} (p[\ ] {\bt implies} (q[\ ] {\bt implies} r[\ ])),\\ \hspace*{2em}(exportation law) \item (p[\ ] {\bt implies} (q[\ ] {\bt implies} r[\ ])) {\bt implies} (p[\ ] {\bt \&} q[\ ] {\bt implies} r[\ ]),\\ \hspace*{2em}(importation law) \item (p[\ ] {\bt \&} q[\ ] {\bt implies} r[\ ]) {\bt iff} (p[\ ] {\bt implies} (q[\ ] {\bt implies} r[\ ])) \item p[\ ] {\bt or} {\bt not} p[\ ],\hskip 2em(excluded middle law) \item {\bt not} (p[\ ] {\bt \&} {\bt not} p[\ ]),\hskip 2em(contradiction law) \item {\bt not} (p[\ ] {\bt \&} q[\ ]) {\bt iff} ({\bt not} p[\ ] {\bt or} {\bt not} q[\ ]), \vskip 4pt \hskip 9em(De Morgan laws) \item {\bt not} (p[\ ] {\bt or} q[\ ]) {\bt iff} ({\bt not} p[\ ] {\bt \&} {\bt not} q[\ ]), \item p[\ ] {\bt \&} (q[\ ] {\bt or} r[\ ]) {\bt iff} (p[\ ] {\bt \&} q[\ ]) {\bt or} (p[\ ] {\bt \&} r[\ ]), \vskip 4pt \hskip 9em(distribution laws) \item p[\ ] {\bt or} (q[\ ] {\bt \&} r[\ ]) {\bt iff} (p[\ ] {\bt or} q[\ ]) {\bt \&} (p[\ ] {\bt or} r[\ ]). \end{enumerate} \newpage \begin{center} \LARGE{\bf 9. Existential Quantifier, Inference Rules For Quantifiers} \end{center} \vskip 1em The existential quantifier corresponds to such expressions of the English language as: there is, exists, certain, somebody, something, etc. An example of an existential sentence is: "Somebody is a student.". 
This sentence is termed existential since, when translated into Predicate Calculus, it corresponds to a sentence beginning with an existential quantifier. In Mizar-MSE, for example, it could be recorded as\\ \hspace*{7em}{\bt ex} x {\bt st} student[x].\\ The simplest way of justifying existential sentences is justification by example. The sentence "Somebody is a student." could be justified by reference to the sentence "John Smith is a student.". Let us consider a reasoning where justification by example is applied. We shall prove that:\\ If everybody likes himself then everybody likes somebody. \begin{tabbing} aa\=aaa\=aaaa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x, y {\bt for} man;} \\ \>{\bt begin} \\ {\tt ({\bt for} x {\bt holds} likes[x,x]) {\bt implies} ({\bt for} x {\bt ex} y {\bt st} likes[x,y])} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt assume} 1: {\bt for} x {\bt holds} likes[x,x];} \\ \> \>{\tt {\bt let} a {\bt be} man;} \\ \> \>{\tt likes[a,a] {\bt by} 1;} \\ \> \>{\tt {\bt hence} {\bt ex} y {\bt st} likes[a,y];} \\ \> \>{\bt end;} \end{tabbing} The word {\bt hence} used instead of {\bt thus} before the conclusion of the proof indicates that the sentence "{\bt ex} y {\bt st} likes[a,y]" is justified by reference to the preceding sentence, that is, likes[a,a]. Observe that there is a certain symmetry between the existential and the universal quantifier. Let $\alpha$(x) be a formula whose only free variable is x, and let $\alpha$(a) be the sentence obtained by replacing all occurrences of the variable x in $\alpha$(x) by the constant a. Then the sentence "{\bt ex} x {\bt st} $\alpha$(x)" logically follows from $\alpha$(a), and $\alpha$(a) logically follows from "{\bt for} x {\bt holds} $\alpha$(x)". In other words, to justify an existential statement it is sufficient to find one example; on the other hand, from a universal statement we can infer all possible examples.
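Over a finite domain this symmetry can be checked directly. The Python sketch below (the domain, names, and predicates are invented for illustration and have nothing to do with Mizar-MSE syntax) confirms both directions on a three-element domain:

```python
# an invented toy domain of people with two sample unary predicates
domain = ["John", "Mary", "Peter"]
student = {"John": True, "Mary": False, "Peter": False}
mortal  = {"John": True, "Mary": True,  "Peter": True}

# existential generalization: from the instance student("John")
# we may pass to the existential sentence "ex x st student[x]"
assert student["John"]
assert any(student[x] for x in domain)

# instantiation (dictum de omni): from "for x holds mortal[x]"
# we may pass to mortal(a) for any chosen a in the domain
assert all(mortal[x] for x in domain)
for a in domain:
    assert mortal[a]

print("both quantifier rules hold on this domain")
```

Of course a finite check illustrates the rules but does not prove them; their correctness holds for arbitrary domains.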
This fact can be expressed in the form of two inference rules for quantifiers: \vskip 1em \begin{tabular}{c} $\alpha$(a) \\ \hline {\bt ex} x {\bt st} $\alpha$(x) \end{tabular} \hskip 3em ( existential generalization rule ), \vskip 2em \begin{tabular}{c} {\bt for} x {\bt holds} $\alpha$(x) \\ \hline $\alpha$(a) \end{tabular} \hskip 3em ( instantiation rule, dictum de omni ). \vskip 1em An actual, correct reasoning will be obtained from these rules if $\alpha$(x) is replaced by a formula with at most one free variable x, and $\alpha$(a) by the sentence created by substituting a constant a of the same type as x for all free occurrences of the variable x in $\alpha$(x) (it suffices to say that the substitution has to be the minimal one such that the obtained formula is a sentence). Let us recall that an occurrence of the variable x is free in a formula $\alpha$(x) if it is not bound by any quantifier. Instead of the earlier discussed method of justifying universal conditional sentences we can apply the appropriate inference rules introduced in this and the previous sections. We suggest that the reader reconstruct the proof of the sentence "Every man is mortal." from the sentences "Every man is an animal." and "Every animal is mortal." in such a way that every justification in that proof conforms to an inference rule of Sentential Calculus or to the instantiation rule. Observe that the inference rules and proof constructions introduced so far suffice to deduce any valid logical relationship not involving existential quantifiers. This does not mean, however, that all important and useful proof construction methods have been presented - in the subsequent sections we shall describe several more such methods. Let us consider the following two sentences:\\ \hspace*{2em} A: There are honest people. \\ \hspace*{2em} B: It is not so that no man is honest. \\ These sentences express the same fact - they are equivalent.
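The equivalence of A and B is an instance of the general duality between the two quantifiers: an existential sentence holds exactly when the negated universal sentence about the negated formula does. Over a finite domain this can be confirmed exhaustively; a Python sketch with an invented three-element domain (an illustration only, not Mizar-MSE):

```python
from itertools import product

# try every possible extension of a unary predicate honest[...] over
# a small invented domain
domain = ["a", "b", "c"]
for values in product([False, True], repeat=len(domain)):
    honest = dict(zip(domain, values))
    ex_form   = any(honest[x] for x in domain)          # ex x st honest[x]
    dual_form = not all(not honest[x] for x in domain)  # not (for x holds not honest[x])
    assert ex_form == dual_form

print("the two forms agree on every extension of the predicate")
```

The loop checks all $2^{3}$ extensions of the predicate, so on this domain the two forms are equivalent regardless of who is honest.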
In Mizar-MSE we can record them as follows:\\ \hspace*{2em} A1: {\bt ex} x {\bt st} honest[x],\\ \hspace*{2em} B1: {\bt not} ({\bt for} x {\bt holds not} honest[x]).\\ Sentences A1 and B1 will remain equivalent also if the formula honest[x] is in both of them replaced by any other formula. In view of this equivalence, in Mizar-MSE it has been assumed that for any formula $\alpha$ the sentences "{\bt ex} x {\bt st} $\alpha$" and {\bt not} ({\bt for} x {\bt holds not} $\alpha$) obtain the same interpretation and are identified with each other, just as {\bt not not} $\alpha$ is identified with $\alpha$, an equivalence with the conjunction of two implications, and the sentence {\bt not} $\alpha$ with the sentence $\alpha$ {\bt implies contradiction}. Because every sentence including existential quantifiers can be identified with some sentence where no existential quantifiers appear, the already introduced proof means and methods are complete in the following sense: {\it If a proof of a sentence can be constructed at all then it can be constructed employing the introduced proof means and methods}\footnote{\footnotesize The fact that every logically true statement can be proved by means of a given proof system is called the Completeness Theorem for this system. Usually it is relatively easy to prove that different proof systems for Predicate Calculus are equivalent; then it is essential to prove completeness for only one of them. The first such proof was given by Kurt G\"{o}del in 1930. We shall not discuss this problem for Mizar-MSE in this book.}. Additional facilities discussed in subsequent sections make proving easier. Observe, moreover, that even some of the already presented inference rules are superfluous, that is, if something can be proved with their help it can also be proved without them. \newpage \begin{center} \LARGE{\bf 10.
Checking Inferences in Mizar-MSE} \end{center} \vskip 1em The Mizar-MSE language is equipped with a program, usually written in the programming language Pascal\footnote{\footnotesize Actually there are implementations in other programming languages such as Lisp, Prolog, MAID-code. As far as I know, C has also been used for implementing Mizar systems.}. This program verifies the grammatical correctness of the text and the logical correctness of inferences. When the text to be verified is processed by the Mizar-MSE program all errors found in the text are identified and each is supplied with an appropriate comment describing the character of the error. Among all possible errors we will now concentrate on the one described by the comment: {\em This inference is not obvious for the Checker}\footnote{\footnotesize In some versions of Mizar-MSE this error may be commented in a different way.}.\\ This error appears when our inference of a sentence from other sentences is inadequately justified from the point of view of the Checker, i.e., the inference-checking part of the Mizar-MSE program. Inferences correspond to justifications "by reference". A logical rule is a special case of an inference. Both of them have a list of premises and a conclusion. An inference $\frac{\alpha_{1}, ... , \alpha_{n}}{\alpha}$ is logically valid if and only if $\alpha$ logically follows from $\alpha_{1}, ... , \alpha_{n}$. By the Deduction Theorem this means that: An inference $\frac {\alpha_{1}, ... , \alpha_{n}}{\alpha}$ is logically valid if and only if the sentence $\alpha_{1}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt implies} $\alpha$ is logically true. Instead of "logically valid" we shall say simply "valid". If an inference is not valid we say that it is invalid. We use this terminology also for justifications. When $\alpha$ is justified by reference to $\alpha_{1}, ... , \alpha_{n}$ then this justification is valid (invalid) exactly when the inference $\frac {\alpha_{1}, ...
, \alpha_{n}}{\alpha}$ is valid (invalid)\footnote{ \footnotesize This sentence represents two sentences: the first obtained by omitting the text in parentheses, the second by replacing the term before the parentheses with the term inside them. This way of formulating statements is usual in mathematical texts.}. When a checker is correct (as I hope is the case for the Mizar-MSE you use) then any justification accepted by the checker is valid. However, not every valid justification can be accepted by a correct checker. This is so because, as has been proved, there is no algorithm correctly checking whether a given formula is logically valid. Thus for practical purposes we have to define a weaker concept of validity, which we call logical obviousness. The logically obvious inferences are simply those which are accepted by a given checker. This means that what is logically obvious depends on which checker is used. However, some common features of the checkers used in actual Mizar-MSE realizations can be fixed. Let us describe our assumptions about what is accepted by a checker: \begin{enumerate} \item all valid inferences which use only Sentential Calculus means; \item the existential generalization and instantiation rules \end{enumerate} in a form generalized so that the following inferences are obvious: \vskip 1em \begin{tabular}{c} $\alpha(a_{1},...,a_{n})$ \\ \hline {\bt ex} $x_{1},...,x_{n}$ {\bt st} $\alpha(x_{1},...,x_{n})$, \end{tabular} \vskip 1em \begin{tabular}{c} {\bt for} $x_{1},...,x_{n}$ {\bt holds} $\alpha(x_{1},...,x_{n})$ \\ \hline $\alpha(a_{1},...,a_{n})$. \end{tabular} \vskip 1em The basic checker for Mizar-MSE, the so-called classical checker, will be discussed later. \vskip 1em {\large\bf REMARK.} A decision method for a set of sentences A is an algorithm (a computer program) which, reading a sentence, answers {\em yes} when it belongs to A and answers {\em no} when it does not belong to A.
The Church Theorem (proved in 1936 by Alonzo Church) says that the set of logically valid sentences (tautologies) of Predicate Calculus has no decision method. The same is true of the set of valid inferences. A checker is a computer program, hence an algorithm; therefore it cannot accept exactly the valid justifications "by reference". \newpage \begin{center} \LARGE{\bf 11. Types and Multiargument Predicates} \end{center} \vskip 1em All the predicates investigated so far were unary or at most binary predicates. Apart from these, however, predicates with a greater number of arguments are also often used, both in the natural language and in mathematics. Here are some examples of atomic sentences constructed with the help of multiargument predicates. \begin{enumerate} \item John is brother of Paul. \item Peter is taller than John. \item Paris is the capital of France. \item Ma\l kinia lies between Bia\l ystok and Warsaw. \item It is nearer from Bia\l ystok to Warsaw than from Rzesz\'{o}w to Szczecin. \end{enumerate} In these sentences there appear the binary predicates:\\ "... is brother of ...", "... is taller than ...", "... is the capital of ...", one three-argument predicate: "... lies between ... and ...", and one four-argument predicate "is nearer from ... to ... than from ... to ...".\\ If appropriate individual constants (proper names) are substituted for the dots ... , we shall obtain atomic sentences. Binary predicate symbols, such as: $<$ (... is less than ...), $\geq$ (... is greater than or equal to ...), $=$ (... is equal to ...) are often used in elementary mathematics. As in the previous examples, if we substitute names of concrete mathematical objects for the dots ... we shall obtain concrete mathematical formulae. However, in the case of some objects we may obtain nonsensical sentences. Let, for example, K and L be the names of two concrete straight lines; then we cannot understand the sentence K$<$L.
This is so because the predicate $<$ can be correctly applied only to numerical expressions, if it is not redefined for other kinds of objects. Sentences such as K$<$L are considered incorrect. The sentence K$<$L is correct only if K and L are names of objects of the appropriate type. Similar situations are encountered also outside mathematics: words are not interchangeable in every context independently of their semantic categories. For instance, countable and uncountable nouns cannot be replaced one by another while preserving correctness. The first part of a Mizar-MSE text, called the environment, contains, among other things, a more precise definition of the language fragment to be used in the remainder of the text. In the environment the type of the variables used in the sequel can be initially established by the phrase:\\ \hspace*{7em}{\tt {\bt reserve} x, y {\bt for} man;}\\ As a consequence of such a preliminary definition, also called a predeclaration, the sentence\\ \hspace*{7em}{\tt {\bt for} x {\bt st} p[x] {\bt holds} r[x]}\\ will be interpreted as: \hspace*{3em}For every x of type man such that p[x] holds r[x].\\ If the type of the variable x was not initially defined as man we can always do it when the variable is actually being used by writing\\ \hspace*{5em}{\tt {\bt for} x {\bt being} man {\bt st} p[x] {\bt holds} r[x].} In order to illustrate the functioning of both the presented constructions we shall investigate the following texts: \begin{tabbing} aaa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} man;} \\ \>{\tt A: {\bt for} x {\bt st} intelligent[x] {\bt holds} happy[x];} \\ \>{\bt begin} \end{tabbing} and \begin{tabbing} aaa\=aa \kill \>{\bt environ} \\ \>{\tt B: {\bt for} x {\bt being} man {\bt st} intelligent[x] {\bt holds} happy[x];} \\ \>{\bt begin} \end{tabbing} Both these texts are correct and sentences A and B will both be interpreted in the same way.
In contrast, the text \begin{tabbing} aaa\=aa \kill \>{\bt environ} \\ \>{\tt C: {\bt for} x {\bt st} intelligent[x] {\bt holds} happy[x];} \\ \>{\bt begin} \end{tabbing} is incorrect, because the variable x has no definite type. In order to define a predicate it does not suffice to indicate its symbol. The same symbol may be possessed by many different predicates. Predicates having different numbers of arguments can possess the same symbol. For example, the text \begin{tabbing} aaa\=a\=a\=a \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x, y {\bt for} T;} \\ \>{\bt begin} \\ \> \>{\tt {\bt for} x, y {\bt st} p[x] {\bt \&} {\bt not} p[x] {\bt holds} p[x,y];} \\ \> \> \>{\bt proof} \\ \> \> \>{\tt {\bt let} a, b {\bt be} T {\bt such that} p[a] {\bt \&} {\bt not} p[a];} \\ \> \> \>{\tt {\bt then} p[a,b];} \\ \> \> \>{\bt end;} \end{tabbing} is semantically correct (not merely syntactically correct), in spite of two different predicates being denoted by the same symbol. Moreover, two predicates may differ not only in the number of arguments. Let us consider the following text: \begin{tabbing} aaa\=aa \kill \>{\bt environ} \\ \>{\tt A: {\bt for} x {\bt being} animal {\bt holds} mortal[x];} \\ \>{\bt begin} \\ \>{\tt B: {\bt for} x {\bt being} man {\bt holds} mortal[x] {\bt by} A;} \end{tabbing} This text is correctly formulated; in particular, correct is the use of x as a variable of type animal in one place and as a variable of type man in another. However, the justification of sentence B by sentence A will not be accepted by the checker. On the other hand, the following justification will be correct: \begin{tabbing} aaa\=aa \kill \>{\bt environ} \\ \>{\tt A: {\bt for} x {\bt being} animal {\bt holds} mortal[x];} \\ \>{\bt begin} \\ \>{\tt B: {\bt for} y {\bt being} animal {\bt holds} mortal[y] {\bt by} A;} \end{tabbing} The unary predicate symbolised by mortal with an argument of type animal is different from the unary predicate mortal with an argument of type man.
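The identity criterion at work in this example can be modeled very simply: a predicate is determined by its symbol together with the tuple of its argument types. The Python sketch below models only that criterion; it is an illustration, not a description of any Mizar-MSE internals.

```python
from collections import namedtuple

# a predicate is identified by its symbol and the tuple of its argument
# types; equality of namedtuples compares both components
Predicate = namedtuple("Predicate", ["symbol", "arg_types"])

mortal_animal  = Predicate("mortal", ("animal",))
mortal_man     = Predicate("mortal", ("man",))
mortal_animal2 = Predicate("mortal", ("animal",))

print(mortal_animal == mortal_man)       # False: same symbol, different argument type
print(mortal_animal == mortal_animal2)   # True: same symbol, same arity and types
# the same symbol with a different arity also gives a different predicate
print(Predicate("p", ("T",)) == Predicate("p", ("T", "T")))  # False
```

In this model the checker's refusal above corresponds to the first comparison: mortal over animal and mortal over man are simply distinct predicates.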
Sentences differing only in the symbol of the bound variable are interpreted identically; it is therefore correct, in the last example, to justify sentence B by sentence A. In contrast, the interpretations of sentences A and B in the preceding example are different because each is constructed with a different predicate. To recapitulate: a predicate is defined by indicating I) the predicate symbol, II) the number of arguments and the type of every argument (the arity). Thus, two predicates are identical if and only if they possess the same symbol, the same number of arguments, and arguments of the same type at corresponding positions. For example, let $a, b$ be constants of type $T$ and $c, d, e, f$ be constants of type $S$. Then the formulae P$[a,c,d]$ and P$[a,e,f]$ are constructed with the help of the same predicate, while each of the formulae P$[a,b,c]$, P$[d,e,f]$, Q$[d,e,f]$ and P$[a,b,c,e]$ is constructed with a predicate different from the others. In Mizar-MSE a predicate is defined by its use. In the text we can introduce an arbitrary finite number of predicates denoted by arbitrary symbols, which are always finite, non-empty strings of letters and digits. The possibility of defining a predicate by its use is due to the fact that the predicate symbol and its arity can be read off from the text, and the types of its arguments - variables and individual constants - are defined in the preceding parts of the text. \newpage \begin{center} \LARGE{\bf 12. Thesis Reconstruction} \end{center} \vskip 1em During our presentation we have many times used the expression "that which remains to be proved". Usually it is easy to determine the sentence "remaining to be proved" at a given stage of a reasoning. This sentence we shall call the thesis.
Let us consider the following: \begin{tabbing} aaa\=aaa\=a \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} man;} \\ \>{\bt begin} \\ \>{\tt Z: {\bt for} x {\bt st} foolish[x] {\bt holds} happy[x] {\bt implies} foolish[x]} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt let} a {\bt be} man {\bt such that}} \\ \> \>{\tt 1: foolish[a];} \\ \> \>{\tt {\bt assume} happy[a];} \\ \> \>{\tt {\bt thus} foolish[a] {\bt by} 1;} \\ \> \>{\bt end;} \end{tabbing} Now we shall attempt to find out what the thesis is at each stage of the above reasoning. After the word {\bt proof} the thesis is the sentence labelled Z, since precisely this sentence is what we have to prove. In the subsequent text fragment we have introduced a new constant a of type man, about which we have assumed that foolish$[a]$; therefore our thesis is now "{\tt happy[a] {\bt implies} foolish[a]}". The following assumption, happy$[a]$, transforms the thesis into foolish$[a]$, and this sentence appears after the word {\bt thus} as our conclusion. To represent the sentence "remaining to be proved" at a given point we use a special constant {\bt thesis}. Thus, instead of "{\tt {\bt thus} foolish[a] {\bt by} 1;}" we could have written "{\tt {\bt thus thesis by} 1;}" and the reasoning would still have been correct. The following reasoning is not a correct proof of Z. \begin{tabbing} aaaaaa\=aa \kill \>{\bt proof} \\ \>{\tt {\bt let} a {\bt be} man {\bt such that}} \\ \>{\tt 1: foolish[a];} \\ \>{\tt {\bt thus} foolish[a] {\bt by} 1;} \\ \>{\bt end;} \end{tabbing} However, if in the conclusion foolish$[a]$ is replaced by {\bt thesis}, it will again become correct; that is, the following proof will be correct: \begin{tabbing} aaaaaa\=a \kill \>{\bt proof} \\ \>{\tt {\bt let} a {\bt be} man {\bt such that}} \\ \>{\tt 1: foolish[a];} \\ \>{\tt {\bt thus} {\bt thesis} {\bt by} 1;} \\ \>{\bt end;} \end{tabbing} We ask the reader to consider why the latter is a correct proof of sentence Z.
In order for {\bt thesis} to be meaningfully used there should be a method of determining the actual sentence its every occurrence represents. Now we shall describe such a method. \newpage \begin{enumerate} \item If we are proving sentence $\alpha$ then after the word {\bt proof} the word {\bt thesis} represents $\alpha$. \item If in a given text fragment {\bt thesis} represents a sentence of the form "$\alpha$ {\bt implies} $\beta$" and the following text fragment contains the expression "{\bt assume} $\alpha$" then in the subsequent text fragment {\bt thesis} represents sentence $\beta$. \item If in a given text fragment {\bt thesis} represents a sentence of the form "{\tt {\bt for} x}\\ {\tt {\bt being} T {\bt st} $\alpha$(x) {\bt holds} $\beta$(x)}" and the following text fragment contains the phra\-se "{\tt {\bt let} a {\bt be} T {\bt such that} $\alpha$(a);}" then in the subsequent text fragment {\bt thesis} represents sentence $\beta$(a) (if the following text fragment contains the expression "{\tt {\bt let} a {\bt be} T;}" with no assumption then in the subsequent text fragment {\bt thesis} represents the sentence "$\alpha$(a) {\bt implies} $\beta$(a)"). Let us notice that the same rule can be applied if we replace x by a list of variables and a by a list of constants of the same length and with corresponding types. \item If in a given text fragment {\bt thesis} represents a sentence of the form "{\tt {\bt for} x}\\ {\tt {\bt being} T {\bt holds} $\alpha$(x)}" and the following text fragment contains the expression "{\tt {\bt let} a {\bt be} T;}" then in the subsequent text fragment {\bt thesis} represents sentence $\alpha(a)$. The same remark about substituting lists in place of x and $a$ applies here. \item If in a given text fragment {\bt thesis} represents sentence $\alpha$ {\bt \&} $\beta$ and the following text fragment contains the phrase "{\bt thus} $\alpha$" or the phrase "{\bt hence} $\alpha$" then in the subsequent text fragment {\bt thesis} represents sentence $\beta$.
\item If in a given text fragment {\bt thesis} represents sentence $\alpha$ and the following text fragment contains the phrase "{\bt assume not} $\alpha$" or the phrase "{\bt assume not thesis}" then in the subsequent text fragment {\bt thesis} represents the sentence {\bt contradiction}. \end{enumerate} The rules 1-6 enable us to reconstruct, in any reasoning, the sentence represented by the word {\bt thesis} wherever it occurs. It should be remembered that the meaning of {\bt thesis} evolves in the course of a proof, since "that which remains to be proved" usually undergoes change. In the following reasoning we have recorded the meaning of {\bt thesis} after every text fragment. A double colon (::) is used as a comment sign; it means that the remainder of a line after "::" is ignored by the checking program. \begin{tabbing} aa\=aaa\=aaa\=aaa\=aa\=a\=a\=aaaaaaa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x, y {\bt for} man;} \\ \>{\bt begin} \\ \>{\tt T: {\bt for} x {\bt st} happy[x] {\bt \&} foolish[x] {\bt holds}} \\ \> \> \> \> \>{\tt {\bt ex} y {\bt st} happy[y] {\bt \&} foolish[y]} \\ \> \>{\bt proof} \> \> \>:: here {\bt thesis} is sentence T \\ \> \>{\tt {\bt let} a {\bt be} man;} \\ \> \> \>:: \>here {\bt thesis} is the sentence \\ \> \> \>:: \> \>{\tt happy[a] {\bt \&} foolish[a] {\bt implies}} \\ \> \> \>:: \> \>{\tt {\bt ex} y {\bt st} happy[y] {\bt \&} foolish[y]} \\ \> \>{\tt {\bt assume} happy[a] {\bt \&} foolish[a];} \\ \> \> \>:: \>here {\bt thesis} is {\tt {\bt ex} y {\bt st} happy[y] {\bt \&} foolish[y]}\\ \> \>{\bt hence thesis;} \\ \> \>{\bt end;} \\ \end{tabbing} According to our reconstruction {\bt thesis} in the conclusion represents the sentence "{\tt {\bt ex} y {\bt st} happy[y] {\bt \&} foolish[y]}". If the conclusion {\bt thus thesis} or {\bt hence thesis} appears in a reasoning we can always reconstruct the sentence represented by {\bt thesis} with the help of rules 1-6.
By rule 1 we can always determine the meaning of {\bt thesis} at the beginning of the proof, and the remaining rules help us to determine the modifications of this sentence depending on the constructs used. Now let us consider an example of incorrect reasoning: \begin{tabbing} aaa\=aaa\=aaa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} man;}\\ \>{\bt begin} \\ \>{\tt S: {\bt for} x {\bt st} foolish[x] {\bt \&} happy[x] {\bt holds} happy[x] {\bt \&} foolish[x]} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt let} a {\bt be} man {\bt such that}} \\ \> \>{\tt 1: foolish[a] {\bt \&} happy[a];}\\ \> \>{\tt {\bt thus} happy[a] {\bt by} 1;} \\ \> \>{\bt end;} \end{tabbing} Mizar-MSE will probably find an error at the end of this text, commenting on it like this: {\it Something still remains to be proved.}\\ Let us see why. Calculating the sentence represented by the word {\bt thesis}, we can find that at the end of the proof it is foolish$[a]$ (please check it). This means that the proof is not finished; we have to add the conclusion "{\tt {\bt thus} foolish[a]"}. We say that a proof is completed if and only if {\bt thesis} at the end of this proof represents the sentence "{\bt not contradiction}", which is a trivially logically true sentence. If a proof is completed then the error discussed here cannot happen. You can convince yourself of this by checking the following text: \begin{tabbing} aaa\=aaaa\=a \kill \>{\bt environ} \\ \>{\bt begin} \\ \>{\tt p[\ ] {\bt implies} p[\ ] {\bt \&} {\bt not} {\bt contradiction}} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt assume} p[\ ];} \\ \> \>{\tt {\bt hence} p[\ ];} \\ \> \>{\bt end;} \end{tabbing} On the other hand, in some versions of the Mizar-MSE system the following will be incorrect: \begin{tabbing} aaa\=aaaa\=aa \kill \>{\bt environ} \\ \>{\bt begin} \\ \> {\bt not contradiction} \\ \> \>{\bt proof} \\ \> \>{\bt end;} \end{tabbing} This is so because there is no conclusion in this proof, not because something still remains to be proved.
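Part of this bookkeeping can be sketched as a small symbolic procedure that, given the current thesis and the next proof step, computes what {\bt thesis} represents afterwards. The tuple representation below (strings for atomic sentences, tuples for implications and conjunctions) is invented purely for illustration and covers only rules 2 and 5 plus proof completion; it has nothing to do with the real Mizar-MSE checker.

```python
# thesis representation: ("implies", a, b), ("and", a, b), or a plain string
def step(thesis, move):
    kind, formula = move
    if kind == "assume" and thesis[0] == "implies" and thesis[1] == formula:
        return thesis[2]            # rule 2: assume the antecedent
    if kind == "thus" and thesis[0] == "and" and thesis[1] == formula:
        return thesis[2]            # rule 5: the first conjunct has been proved
    if kind == "thus" and thesis == formula:
        return "not contradiction"  # nothing remains to be proved
    raise ValueError("the step does not match the current thesis")

# the thesis after "let a be man" in the example with happy and foolish:
thesis = ("implies", ("and", "happy[a]", "foolish[a]"),
          "ex y st happy[y] & foolish[y]")
thesis = step(thesis, ("assume", ("and", "happy[a]", "foolish[a]")))
print(thesis)  # ex y st happy[y] & foolish[y]
thesis = step(thesis, ("thus", "ex y st happy[y] & foolish[y]"))
print(thesis)  # not contradiction -- the proof is completed
```

Reaching "not contradiction" is exactly the completion condition described above; stopping earlier corresponds to the error {\it Something still remains to be proved}.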
We have listed several identifications of different sentences (with different surface structure, but the same deep structure). Let us add the following to this list: the sentences $\alpha$, "$\alpha$ {\bt \& not contradiction}" and "{\bt not contradiction \&} $\alpha$" are all interpreted in the same way. \newpage \begin{center} \LARGE{\bf 13. Reverse Thesis Reconstruction} \end{center} \vskip 1em It is also possible to make a reverse reconstruction: given a proof where the word {\bt thesis} does not appear, we can reconstruct the sentence being proved. We shall now describe a method of reconstructing a thesis when we have a correct proof at our disposal. To begin with, some sentences in the proof may themselves be justified by proofs; let us remove all these proofs, leaving only the semicolon (;) after the justified sentence. The proof thus obtained may not be correct since some sentences may not be adequately justified. For our purpose, however, this is immaterial - the correctness of the initial proof is quite enough. We start thesis reconstruction from the end of the proof; at this point the result of thesis reconstruction is "{\bt not contradiction}". We continue moving up by one text fragment. If at a given point the result of thesis reconstruction is $\alpha$ and if the text fragment we have just moved to begins with {\bt hence} or {\bt thus} followed by $\beta$, then the current result of the reconstruction is the formula $\beta$ {\bt \&} $\alpha$, and again we move up by one text fragment. If at a given point the result of thesis reconstruction is the formula $\alpha$ and the current text fragment begins with {\bt assume} followed by formula $\beta$ then the result of thesis reconstruction becomes the formula "$\beta$ {\bt implies} $\alpha$" and we continue upwards (if we encounter "{\tt {\bt assume that} L1: $\beta_{1}$ {\bt and} L2: $\beta_{2}$ {\bt and} ...
{\bt and} Ln: $\beta_{n}$;}" then the result of thesis reconstruction becomes\\ \hspace*{2em}$\beta_{1}$ {\bt \&} ... {\bt \&} $\beta_{n}$ {\bt implies} $\alpha$). If at a given point the result of thesis reconstruction is the formula $\alpha(a)$ and the current text fragment begins with the phrase "{\tt {\bt let} a {\bt be} T {\bt such that}}" followed by formula $\beta(a)$ then the result of thesis reconstruction becomes the formula "{\tt {\bt for} a {\bt being} T {\bt st} $\beta$(a) {\bt holds} $\alpha$(a)}", and we move up by one text fragment (if we encountered "{\tt {\bt let} a {\bt be} T}" the result of thesis reconstruction becomes "{\tt {\bt for} a {\bt being} T {\bt holds} $\alpha$(a)}"). In this case we can deal with $a$ in the same way as with a list of local constants (see the algorithm described in the previous chapter). Here we use the same symbol "$a$" for a bound variable and for a local constant. This is correct because the local constant $a$ is undefined before its introduction, and a bound variable is undefined outside its scope, so there is no collision between the two uses of the same symbol. If at a given point the result of thesis reconstruction is the formula "$\alpha$ {\bt implies contradiction}" then we take as the result of thesis reconstruction the formula {\bt not} $\alpha$ ({\bt not not} $\alpha$ is identified with $\alpha$). If the current text fragment does not begin with one of the words {\bt thus}, {\bt hence}, {\bt assume}, {\bt let} then the result of thesis reconstruction does not change and we only move up by one text fragment. Thesis reconstruction terminates when we reach the word {\bt proof}. Its final result, derived from a correct proof, is a sentence justified precisely by this proof. In order to convince himself (or herself) that the presented method of thesis reconstruction is correct, the reader is encouraged to apply it to the example proofs appearing in this and previous sections.
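The reconstruction method just described is mechanical enough to be sketched in Python. The representation of proof steps and all helper names below are our own illustrative inventions, not Mizar-MSE syntax.

```python
# A sketch (ours) of bottom-up thesis reconstruction from a proof skeleton.
# Each step is a pair (keyword, payload); formulas are plain strings.

def reconstruct_thesis(steps):
    """Walk the proof steps from the end towards the word `proof`."""
    thesis = "not contradiction"
    for keyword, payload in reversed(steps):
        if keyword in ("thus", "hence"):
            # "beta & not contradiction" is identified with "beta"
            thesis = payload if thesis == "not contradiction" else f"{payload} & {thesis}"
        elif keyword == "assume":
            thesis = f"{payload} implies {thesis}"
        elif keyword == "let_such_that":        # let a be T such that beta
            var, typ, beta = payload
            thesis = f"for {var} being {typ} st {beta} holds {thesis}"
        elif keyword == "let":                  # let a be T
            var, typ = payload
            thesis = f"for {var} being {typ} holds {thesis}"
        # plain justified sentences leave the result unchanged
    return thesis

steps = [
    ("let_such_that", ("a", "man", "student[a]")),
    ("assume", "(for x st student[x] holds diligent[x])"),
    ("thus", "diligent[a]"),
]
print(reconstruct_thesis(steps))  # the universal sentence this skeleton proves
```

Running the sketch on the three steps above produces "for a being man st student[a] holds (for x st student[x] holds diligent[x]) implies diligent[a]", which agrees with the rules stated in this chapter.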
In some cases it is more convenient to avoid an explicit formulation of the proved proposition; then we start the proof with the word {\bt now} instead of the word {\bt proof}. Before {\bt now} we can introduce a label, for example E, associated with the thesis being proved in the reasoning. A later reference to E is a reference to the thesis reconstructed according to the method described above. Let us consider the following example: \begin{tabbing} aa\=aa\=a\=aaaaaaa\=aa \kill \>{\bt environ}\\ \>{\tt {\bt reserve} x, y {\bt for} man;}\\ \>{\tt {\bt given} John {\bt being} man;}\\ \>{\tt A: student[John];}\\ \> \>{\bt begin}\\ \>{\tt E: {\bt now}}\\ \> \>{\tt {\bt let} a {\bt be} man {\bt such that}}\\ \> \>{\tt 1: student[a];}\\ \> \>{\tt {\bt assume} 2: {\bt for} x {\bt st} student[x] {\bt holds} diligent[x];}\\ \> \>{\tt {\bt thus} diligent[a] {\bt by} 1, 2;}\\ \> \>{\bt end;}\\ \>{\tt F: ({\bt for} y {\bt st} student[y] {\bt holds} diligent[y])}\\ \> \>{\tt {\bt implies} diligent[John] {\bt by} A, E;}\\ \end{tabbing} A proof starting with {\bt now}, possessing no explicit formulation of the proved thesis, is called an auxiliary reasoning - the name indicates the role these reasonings most frequently play. The sentence labelled E can be reconstructed as \begin{center} {\tt {\bt for} z {\bt being} man {\bt st} student[z] {\bt holds}}\\ {\tt ({\bt for} x {\bt st} student[x] {\bt holds} diligent[x]) {\bt implies} diligent[z]} \end{center} Observe that in the justification of sentence F by reference to A and E the substitution of x for the bound variable y does not influence the correctness of the justification.
To see how useful the technique of auxiliary reasonings can be, let us consider the following simplified version of a reasoning discussed earlier: \begin{tabbing} aa\=a\=a\=a\=aaa\=a\=aaaaaaa\=a \kill \>{\bt environ}\\ \>{\tt {\bt reserve} x {\bt for} something;}\\ \>{\bt begin}\\ \> \>{\tt ({\bt for} x {\bt st} fool[x] {\bt holds} wicked[x])}\\ \> \> \> \>{\tt {\bt iff} ({\bt for} x {\bt st} fool[x] {\bt holds} fool[x] {\bt \&} wicked[x])}\\ \>{\bt proof}\\ \> \> \>A: {\bt now}\\ \> \> \> \> \> \>{\bt assume} \\ \> \> \> \> \>{\tt 1: ({\bt for} x {\bt st} fool[x] {\bt holds} wicked[x]);}\\ \> \> \> \> \> \>{\tt {\bt let} a {\bt be} something {\bt such that}}\\ \> \> \> \> \> \> \>{\tt 11: fool[a];}\\ \> \> \> \> \> \>{\tt wicked[a] {\bt by} 1, 11;}\\ \> \> \> \> \> \>{\tt {\bt hence} fool[a] {\bt \&} wicked[a] {\bt by} 11;}\\ \> \> \> \> \> \>{\bt end;}\\ \> \> \> \> \> \>{\tt {\bt now assume} 2: {\bt for} x {\bt st} fool[x] {\bt holds} fool[x] {\bt \&} wicked[x];}\\ \> \> \> \> \> \>{\tt {\bt let} b {\bt be} something {\bt such that}}\\ \> \> \> \> \> \> \>{\tt 22: fool[b];}\\ \> \> \> \> \> \>{\tt fool[b] {\bt \&} wicked[b] {\bt by} 2, 22;}\\ \> \> \> \> \> \>{\tt {\bt hence} wicked[b];}\\ \> \> \> \> \> \>{\bt end;}\\ \> \> \>{\bt hence thesis by} {\tt A;}\\ \>{\bt end;} \end{tabbing} \newpage {\large\bf NOTE.} In an auxiliary reasoning we can use the word {\bt thesis} only when it had a definite meaning prior to the reasoning; then {\bt thesis} will retain that meaning throughout the auxiliary reasoning. Thus, for example, the following text is not correct: \begin{tabbing} aaaa\=aa\=aaaa \kill \>{\bt environ}\\ \>{\bt begin}\\ \>{\bt now}\\ \> \>{\bt assume not thesis;}\\ \> \>{\bt hence contradiction;}\\ \>{\bt end;} \end{tabbing} In this text the thesis of the auxiliary reasoning cannot be reconstructed! \newpage \begin{center} \LARGE{\bf 14.
Choice Rule} \end{center} \vskip 1em So far we have considered only such reasonings where the justification was based on universal sentences. Sometimes, however, we have to refer to an existential sentence. Let us investigate the problem of justifying the sentence \\ \hspace*{10em}There are men who are not fools. \\ by the sentences \\ \hspace*{10em}Every man who is a fool is happy.\\ and \\ \hspace*{10em}There are men who are not happy.\\ We can construct this justification as follows: \begin{tabbing} aa\=aa\=a\=a\=aa \kill \>{\bt environ}\\ \>{\tt {\bt reserve} x {\bt for} man;}\\ \>{\tt A: {\bt for} x {\bt st} fool[x] {\bt holds} happy[x];}\\ \>{\tt B: {\bt ex} x {\bt st not} happy[x];}\\ \> \> \>{\bt begin}\\ \>{\tt {\bt ex} x {\bt st not} fool[x]}\\ \> \>{\bt proof}\\ \> \>{\tt {\bt consider} a {\bt being} man {\bt such that}}\\ \> \> \> \>{\tt 1: {\bt not} happy[a] {\bt by} B;}\\ \> \>{\tt {\bt not} fool[a] {\bt by} 1, A;}\\ \> \>{\tt {\bt hence ex} x {\bt st} {\bt not} fool[x];}\\ \> \>{\bt end;} \end{tabbing} In the above proof we have applied a new construction\\ \hspace*{6em}{\tt {\bt consider} a {\bt being} man {\bt such that} . . .}\\ Through this construction, called the choice construction, we can introduce new constants satisfying certain conditions. In the above example the choice construction was used to introduce a new constant a of type man; moreover, this constant was introduced as satisfying a certain condition, namely that\\ \hspace*{10em}{\tt {\bt not} happy$[a]$}\\ We have already discussed a construction serving to introduce new constants in the form of the generalization construction\\ \hspace*{6em}{\tt {\bt let} ... {\bt be} ... {\bt such that} ...}\\ Both these constructions not only allow for the introduction of new constants but also let us determine the conditions the constants are to fulfil.
There is, however, an essential difference between them: the condition introduced through the generalization construction does not have to be (and even cannot be) justified, while the condition introduced by the choice construction may require justification. In order to introduce a new constant satisfying some condition we have first to justify a sentence stating that this condition is satisfied by something at all. In our example sentence B is such a sentence; therefore it was precisely the sentence we referred to when justifying the possibility of introducing a constant $a$ of type man satisfying the condition {\bt not} happy$[a]$. An individual constant introduced through the choice rule is called a local constant or a nongeneralized local constant, to distinguish it from the constants introduced through the {\bt let} construction. Actually, the choice rule is applied to make use of existential premises and may be employed for the justification of different, not necessarily existential, sentences. Let us consider an example where the choice rule is used to justify a nonexistential sentence. We know that every patient likes every doctor, and that a certain patient does not like any quack. From these sentences we want to infer that no quack is a doctor.
\begin{tabbing} aa\=aa\=aaaaaaa\=aaa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x, y {\bt for} man;}\\ \>{\tt first: {\bt for} x, y {\bt st} patient[x]}\\ \> \> \>{\tt {\bt holds for} y {\bt st} doctor[y] {\bt holds} likes[x,y];}\\ \>{\tt second: {\bt ex} x {\bt st} patient[x] {\bt \&}}\\ \> \> \>{\tt ({\bt for} y {\bt st} quack[y] {\bt holds not} likes[x,y]);}\\ \> \>{\bt begin} \\ \>{\tt {\bt for} x {\bt st} quack[x] {\bt holds not} doctor[x]}\\ \> \>{\bt proof}\\ \> \>{\tt {\bt let} c {\bt be} man {\bt such that} 1: quack[c];}\\ \> \>{\tt {\bt consider} a {\bt being} man {\bt such that}}\\ \> \>{\tt 2: patient[a] {\bt and}}\\ \> \>{\tt 3: {\bt for} y {\bt st} quack[y] {\bt holds not} likes[a,y] {\bt by} second;}\\ \> \>{\tt quack[c] {\bt implies not} likes[a,c] {\bt by} 3;}\\ \> \>{\tt {\bt then} 4: {\bt not} likes[a,c] {\bt by} 1;}\\ \> \>{\tt {\bt for} y {\bt st} doctor[y] {\bt holds} likes[a,y] {\bt by} 2, first;}\\ \> \>{\tt {\bt hence not} doctor[c] {\bt by} 4;}\\ \> \>{\bt end;} \end{tabbing} Note that in this example the phrase \begin{tabbing} aa\=aa \kill \>{\tt {\bt consider} a {\bt being} man {\bt such that}} \\ \>{\tt 2: patient[a] {\bt and}}\\ \>{\tt 3: {\bt for} y {\bt st} quack[y] {\bt holds not} likes[a,y] {\bt by} second;}\\ \end{tabbing} is equivalent to \begin{tabbing} aa\=aa\=aaa\=a \kill \> \>{\tt {\bt consider} a {\bt being} man {\bt such that} 1': patient[a] {\bt \&}}\\ \> \> \>{\tt ({\bt for} y {\bt st} quack[y] {\bt holds not} likes[a,y]) {\bt by} second;}\\ \> \>{\tt 2: patient[a] {\bt by} 1';}\\ \> \>{\tt 3: {\bt for} y {\bt st} quack[y] {\bt holds not} likes[a,y] {\bt by} 1';} \end{tabbing} This is an example of applying a sentence decomposition method similar to the one already discussed in the case of the generalization construction and in the case of assumptions.
Generally, a text of the form \begin{tabbing} aaaaaaaaaaaaaa\=aaaa\=aaa \kill \>{\tt {\bt consider} a {\bt being} T {\bt such that}} \\ \> \>{\tt 1: $\alpha_{1}$} {\bt and} \\ \> \>{\tt 2: $\alpha_{2}$} {\bt and} \\ \> \>.\hskip 1em.\hskip 1em . \\ \> \>{\tt n: $\alpha_{n}$ {\bt by} E1, ... ,Ek;} \end{tabbing} is equivalent to the following: \begin{tabbing} aaaaaaaaaaa\=aaa\=aaaa\=aaaa \kill \> \>{\tt {\bt consider} a {\bt being} T {\bt such that}}\\ \>{\tt E: $\alpha_{1}$ {\bt \&} $\alpha_{2}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt by} E1, ... ,Ek;}\\ \> \> \>{\tt 1: $\alpha_{1}$ {\bt by} E;}\\ \> \> \>{\tt 2: $\alpha_{2}$ {\bt by} E;}\\ \> \> \>.\hskip 1em.\hskip 1em.\\ \> \> \>{\tt n: $\alpha_{n}$ {\bt by} E;}\\ \end{tabbing} It should be remembered that the label E must not already be in use at the position where it is introduced for the sentence "{\tt $\alpha_{1}$ {\bt \&} $\alpha_{2}$ {\bt \&} ... {\bt \&} $\alpha_{n}$}". This is most simply accomplished by introducing a new label not appearing elsewhere in the text. Why does the sentence introduced through the choice construction require justification? Let us consider the following incorrect reasoning: \begin{tabbing} aaa\=aaa\=aaa\=a \kill \>{\bt environ}\\ \>{\tt {\bt reserve} x {\bt for} something;}\\ \> \>{\bt begin}\\ \>{\tt {\bt ex} x {\bt st} devil[x]}\\ \> \>{\bt proof}\\ \> \>{\tt {\bt consider} lucifer {\bt being} something {\bt such that}}\\ \> \> \>{\tt E: devil[lucifer];}\\ \> \>{\tt {\bt thus ex} x {\bt st} devil[x] {\bt by} E;}\\ \> \>{\bt end;} \end{tabbing} This reasoning is obviously erroneous, since in order to correctly introduce the sentence labelled E through the choice construction it is first necessary to justify the sentence "{\tt {\bt ex} x {\bt st} devil[x]}". Observe that when the justification is omitted in the choice construction then it is possible to prove any sentence at all. We encourage the reader to justify this statement. \newpage \begin{center} \LARGE{\bf 16.
Indirect Proofs} \end{center} \vskip 1em For some people it seems very funny how often mathematicians prove contradictions in their papers. The most famous example of a proof with a contradiction as a conclusion is that presented in chapter 4. The scheme of the proof presented there can be described as follows: at the first stage we assume the negation of our thesis, and then we prove a contradiction using this assumption. In this way we show the impossibility of the negation of our thesis; this means that the thesis cannot be false. Proofs with this scheme are called {\it indirect proofs} or {\it proofs by contradiction}. Indirect proofs have a very clear interpretation in Mizar-MSE. A sentence "{\bt not} $\alpha$" is identified with "$\alpha$ {\bt implies contradiction}", and every sentence $\alpha$ is identified with {\bt not not} $\alpha$. Therefore we can prove our thesis by assuming its negation and then justifying the sentence {\bt contradiction}. Let us consider a simple example: \begin{tabbing} aa\=aa\=aaa\=a \kill \>{\bt environ}\\ \>{\bt begin}\\ \>{\tt {\bt for} x {\bt being} girl {\bt holds} beautiful[x] {\bt or not} beautiful[x]}\\ \> \>{\bt proof}\\ \> \>{\bt assume not thesis;} \\ \> \>{\tt {\bt then consider} a {\bt being} girl {\bt such that}} \\ \> \> \>{\tt 1: {\bt not} beautiful[a] {\bt and}}\\ \> \> \>{\tt 2: beautiful[a];} \\ \> \>{\tt {\bt thus contradiction by} 1, 2;} \\ \> \>{\bt end;} \end{tabbing} Here we have assumed the negation of the sentence to be proved. Because this sentence is universal, its negation is existential and the choice rule can be applied.
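The propositional skeleton of the sentence just proved is "p or not p". Independently of any proof construction, its logical truth can be confirmed by the zero-one method mentioned in the Preface. Here is a small Python sketch (the helper name is our own):

```python
# A sketch of the zero-one (truth-table) method for propositional formulas:
# a formula is a tautology if it evaluates to True under every assignment.
from itertools import product

def is_tautology(formula, num_vars):
    """Evaluate `formula` (a boolean function) on every row of the table."""
    return all(formula(*row) for row in product([False, True], repeat=num_vars))

print(is_tautology(lambda p: p or not p, 1))                                  # True
print(is_tautology(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2))  # True
```

The second call checks one of De Morgan's laws; the same function can be used to verify the other tautologies of Sentential Calculus appearing later in this chapter.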
Let us look at a slightly more complicated example: \begin{tabbing} aa\=aa\=a\=a\=a\=a \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} girl;}\\ \>{\tt A: {\bt for} x {\bt st} beautiful[x] {\bt holds} goodlooking[x];}\\ \> \> \>{\bt begin}\\ \>{\tt ({\bt ex} x {\bt st} {\bt not} goodlooking[x])}\\ \> \> \> \>{\tt {\bt implies not} ({\bt for} x {\bt holds} beautiful[x])}\\ \> \>{\bt proof}\\ \> \>{\tt {\bt assume ex} x {\bt st} {\bt not} goodlooking[x];}\\ \> \>{\tt {\bt then consider} a {\bt being} girl {\bt such that}}\\ \> \> \> \> \>{\tt 1: {\bt not} goodlooking[a];}\\ \> \>{\bt assume not thesis;}\\ \> \>{\tt {\bt then} beautiful[a];}\\ \> \>{\tt {\bt then} goodlooking[a] {\bt by} A;}\\ \> \>{\bt hence contradiction by} {\tt 1;}\\ \> \>{\bt end;} \end{tabbing} Notice that the scheme of this proof is not indirect as a whole; only the second part of it can be recognized as an indirect proof. Notice also that one step is superfluous here. We can reformulate this proof in a less readable but shorter form as follows: \newpage \begin{tabbing} aaaa\=aaa\=a \kill \>{\bt proof} \\ \>{\bt assume} {\tt 0:} {\bt not thesis;}\\ \>{\tt {\bt then consider} a {\bt being} girl {\bt such that}}\\ \> \>{\tt 1: {\bt not} goodlooking[a];}\\ \>{\tt beautiful[a] {\bt by} 0;}\\ \>{\tt {\bt hence contradiction by} A, 1;}\\ \>{\bt end;} \end{tabbing} {\large\bf REMARK 1.} The technique of indirect proof is used by the Mizar-MSE checker (the so-called classical checker). Taking an inference $\frac{\alpha_{1}, ... , \alpha_{n}}{\alpha}$, it translates this inference into the sentence $\alpha_{1}$ {\bt \&} $\alpha_{2}$ {\bt \&} ... {\bt \&} $\alpha_{n}$ {\bt \& not} $\alpha$, which is logically false exactly when the inference is valid. Then it tries to infer a contradiction from this sentence. \vskip 11pt {\large\bf REMARK 2.} Indirect proofs are easily justified in Mizar-MSE by the identification of $\alpha$ with {\bt not not} $\alpha$, but they are widely considered stronger than other proofs.
There are two main reasons for this: 1. Every proof can be easily reconstructed as an indirect proof with the same rules of reasoning. For instance, a scheme: \begin{tabbing} aaaaaaaaaaaaa\=aaa\=aaa\=aaaaaa\=aaa\=a \kill \> \> \>$\alpha$ \\ \> \>{\bt proof} \\ \>{\large D} \{ \> \>$\vdots$ \\ \> \>{\bt end;} \end{tabbing} can be replaced by: \begin{tabbing} aaaa\=aa\=a\=aaa\=aa\=aa\=\=aaa\=a \kill \> \>$\alpha$ \\ \>{\bt proof}\\ \>{\tt {\bt assume} A: {\bt not thesis;}}\\ \> \> \> \> \> \>$\alpha$\\ \> \> \> \> \>{\bt proof}\\ \> \> \>{\large D} \{ \> \>$\vdots$ \\ \> \> \> \> \>{\bt end;}\\ \>{\bt hence contradiction by} {\tt A;}\\ \>{\bt end;} \end{tabbing} 2. For some known systems of logical rules it is essentially easier to find indirect proofs than direct ones. You can try using the rules from chapter 8 to prove the following theorems of Sentential Calculus (try to construct direct and indirect proofs for every statement). \begin{tabbing} aaaaaaaaaa\=aa \kill \>{\tt p[\ ] {\bt or not} p[\ ]},\\ \>{\tt {\bt not} (p[\ ] {\bt \&} q[\ ]) {\bt iff} ({\bt not} p[\ ] {\bt or not} q[\ ]),}\\ \>{\tt {\bt not} (p[\ ] {\bt or} q[\ ]) {\bt iff} ({\bt not} p[\ ] {\bt \& not} q[\ ])}. \end{tabbing} {\large\bf REMARK 3.} In justifying the technique of indirect proof we said that, in effect, we obtained a justification that the thesis cannot be false - but why does it have to be true? This question states explicitly what is assumed when we use classical logic. Every sentence is true or false, because it speaks about some reality, and the actual state of affairs either agrees with what the sentence says or does not. Sometimes we do not know whether a given sentence is true or not, but it is always either true or false. However, there is another way of thinking. Some mathematicians try to eliminate non-positive methods of justifying theorems, maintaining that existential theorems have to be justified by constructions of the objects which have to exist according to these theorems.
Therefore a justification of the impossibility that nothing has a given property does not suffice to prove that there is something having this property. This idea is called "intuitionism" or "constructivism". Notice that intuitionists have to reject the identification of $\alpha$ with {\bt not not} $\alpha$ (and also the equivalence $\alpha$ {\bt iff not not} $\alpha$), because, as we saw, the technique of indirect proofs can be justified using this identification. The simplest way to show that some rules or laws have to be rejected by intuitionists is to justify the technique of indirect proof using these rules or laws. Try to discover which laws and rules from this chapter and chapter 8 must be rejected by intuitionists. Let us see a simple example of a non-constructive proof of an existential theorem. Irrational numbers are those which cannot be expressed in the form $\frac{a}{b}$ for some integers a and b (numbers having such a presentation are rational numbers). The number $\sqrt[]{2}$ is irrational, because it represents the length of the diagonal of a square with sides of length 1 (see the proof of the theorem in chapter 4). {\bf Theorem.} There are irrational numbers a, b such that $a^{b}$ is rational. \vskip 1em {\bf Proof.} Let us assume that \ $\sqrt[]{2}\:^{\sqrt[]{2}}$ \ is rational; then we have the irrational numbers \ $\sqrt[]{2}$ \ and \ $\sqrt[]{2}$ \ such that \ $\sqrt[]{2}\:^{\sqrt[]{2}}$ \ is rational. Now let us assume that \ $\sqrt[]{2}\:^{\sqrt[]{2}}$ \ is irrational; then \ $\sqrt[]{2}\:^{\sqrt[]{2}}$ \ and \ $\sqrt[]{2}$ \ are irrational and \\ $(\sqrt[]{2}\:^{\sqrt[]{2}})\:^{\sqrt[]{2}}$ = $\sqrt[]{2}\:^{(\sqrt[]{2}\:\sqrt[]{2})}$ = $\sqrt[]{2}\:^{2}$ = 2 is rational. Therefore, independently of whether \ $\sqrt[]{2}\:^{\sqrt[]{2}}$ \ is rational or not, our thesis is true. {\bf Q.E.D.} \vskip 1em The reason for dissatisfaction is that, having the proof, we still have no example of two irrational numbers a, b with rational $a^{b}$.
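The identity used in the second case of the proof can be illustrated numerically. This is only a floating-point illustration, of course, not a proof:

```python
# Numeric illustration of (sqrt(2) ** sqrt(2)) ** sqrt(2) = sqrt(2) ** 2 = 2.
import math

r = math.sqrt(2)
value = (r ** r) ** r
assert abs(value - 2) < 1e-9   # equal to 2 up to floating-point error
print(value)
```

Note that the computation tells us nothing about which of the two cases actually holds; that is precisely the non-constructive character of the proof.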
Why is this proof non-constructive? Which logical rules unacceptable to intuitionists were applied here? \newpage \begin{center} \LARGE{\bf 17. Reasoning by Cases} \end{center} \vskip 1em We have discussed the disjunction omitting rule in chapter 8. This rule can be considered as a basic scheme of so-called reasoning by cases. The idea is that, having several possible cases with no information about which of them actually holds, we have to justify our thesis in a way independent of which one holds. If we know that at least one of them holds then such a justification is sufficient. The non-constructive proof discussed at the end of the previous chapter is a good example of reasoning by cases. We know that either \ $\sqrt[]{2}\:^{\sqrt[]{2}}$ \ is rational or it is irrational, but we do not know which answer is actually correct. Hence we justified the thesis first assuming the first part of the disjunction, and then assuming the second part of it. Let us consider another example of similar reasoning.
At the time of the so-called Irangate affair some journalists propagated the following reasoning: \begin{tabbing} aa\=a\=aaaaa\=a\=aa\=aaaa\=a \kill \>{\bt environ}\\ \>{\tt {\bt reserve} x {\bt for} man;}\\ \>{\tt {\bt given} Reagan {\bt being} man;}\\ \>{\tt I: knew[Reagan] {\bt implies} supported\_\/Iran[Reagan];}\\ \> \>:: If Reagan knew about the collaboration of the USA\\ \> \>:: with Iran then he supported Iran.\\ \>{\tt II: competent[Reagan] {\bt implies} knew[Reagan];}\\ \> \>:: If Reagan is competent then he knew about\\ \> \>:: the collaboration of the USA with Iran.\\ \>{\tt III: {\bt for} x {\bt st} supported\_\/Iran[x] {\bt or not} competent[x]}\\ \> \> \> \>{\tt {\bt holds} is\_\/bad\_\/president[x];}\\ \> \>:: Everybody supporting Iran or being not competent\\ \> \>:: is bad as a president of the USA.\\ \> \>{\bt begin}\\ \> \>{\tt is\_\/bad\_\/president[Reagan]}\\ \> \> \>{\bt proof}\\ \> \> \>{\tt A: knew[Reagan] {\bt or not} knew[Reagan];}\\ \> \>:: the Law of Excluded Middle\\ \> \> \>{\tt B: {\bt now assume} knew[Reagan];}\\ \> \> \> \> \> \>{\tt {\bt then} supported\_\/Iran[Reagan] {\bt by} I;}\\ \> \> \> \> \> \>{\tt {\bt hence thesis by} III;}\\ \> \> \> \> \>{\bt end;}\\ \> \> \>{\tt C: {\bt now assume not} knew[Reagan];}\\ \> \> \> \> \> \>{\tt {\bt then not} competent[Reagan] {\bt by} II;}\\ \> \> \> \> \> \>{\tt {\bt hence thesis by} III;}\\ \> \> \> \> \>{\bt end;}\\ \> \> \>{\tt {\bt thus thesis by} A, B, C;}\\ \> \> \>{\bt end;} \end{tabbing} In the above reasoning we have used the law of excluded middle as the basic disjunction in reasoning by cases. In fact this law can be omitted as a premise, because the justification of the main conclusion by B and C is sufficient in Mizar-MSE (though not for intuitionists). To see an example of reasoning by cases with a nontrivial premise disjunction, let us consider a justification of the fact that miracles cannot be predicted.
\newpage \begin{tabbing} aa\=a\=aaa\=a\=aaaa\=aaa\=aa \kill \>{\bt environ}\\ \>{\tt {\bt reserve} x, y {\bt for} person;}\\ \>{\tt {\bt reserve} e {\bt for} event;}\\ \>{\tt A1: {\bt for} x {\bt holds} pessimist[x] {\bt or} stupid[x];}\\ \> \>:: Everybody is either a pessimist or stupid.\\ \>{\tt A2: {\bt for} x, e {\bt st} stupid[x] {\bt \&} predict[x,e]}\\ \> \> \> \> \> \>{\tt {\bt holds not} happen[e];}\\ \> \>:: If somebody stupid predicts something then\\ \> \>:: it will not happen.\\ \>{\tt A3: {\bt for} e {\bt st} miracle[e] {\bt holds} good[e];}\\ \> \>:: Miracles are good.\\ \>{\tt A4: {\bt for} x, e {\bt st} pessimist[x] {\bt \&} predict[x,e]}\\ \> \> \> \> \> \>{\tt {\bt holds not} good[e];}\\ \> \>:: If a pessimist predicts something then\\ \> \>:: it is not good.\\ \> \> \>{\bt begin}\\ \>{\tt {\bt for} e {\bt st} miracle[e] {\bt \&} happen[e]}\\ \> \> \> \> \> \>{\tt {\bt holds} ({\bt for} x {\bt holds not} predict[x,e])}\\ \> \>:: A miracle that happens is not predicted by anybody.\\ \> \>{\bt proof}\\ \> \>{\tt {\bt let} e {\bt be} event {\bt such that}}\\ \> \> \> \>{\tt 1: miracle[e] {\bt and}}\\ \> \> \> \>{\tt 2: happen[e];}\\ \> \> \> \>{\tt 3: good[e] {\bt by} 1, A3;}\\ \> \>{\tt {\bt let} x {\bt be} person;}\\ \> \>{\tt I: pessimist[x] {\bt or} stupid[x] {\bt by} A1;}\\ \> \>{\tt II: {\bt now assume} pessimist[x];}\\ \> \> \> \> \>{\tt {\bt hence not} predict[x,e] {\bt by} 3, A4;}\\ \> \> \> \>{\bt end;}\\ \> \>{\tt III: {\bt now assume} stupid[x];}\\ \> \> \> \> \>{\tt {\bt hence not} predict[x,e] {\bt by} 2, A2;}\\ \> \> \> \>{\bt end;}\\ \> \>{\tt {\bt thus thesis by} I, II, III;}\\ \> \>{\bt end;} \end{tabbing} Recapitulating, when we would like to use a disjunction "$\alpha_{1}$ {\bt or} $\alpha_{2}$ {\bt or} ... {\bt or} $\alpha_{n}$" to justify $\beta$ then we can construct the reasoning according to the following scheme: \begin{tabbing} aaaaa\=aaa\=a\=aa \kill \>{\tt 0: $\alpha_{1}$ {\bt or} $\alpha_{2}$ {\bt or ...
or} $\alpha_{n}$;}\\ \>{\tt 1: {\bt now assume} $\alpha_{1}$;}\\ \> \> \>{\tt .\hskip 1em.\hskip 1em.}\\ \> \>{\bt hence} $\beta$;\\ \> \>{\bt end;}\\ \> \> \>$\vdots$\\ \>{\tt n: {\bt now assume} $\alpha_{n}$;}\\ \> \> \>{\tt .\hskip 1em.\hskip 1em.}\\ \> \>{\bt hence} $\beta$;\\ \> \>{\bt end;}\\ \>{\tt $\beta$ {\bt by} 0, 1, ... ,n;}\\ \end{tabbing} \newpage \begin{center} \LARGE{\bf 18. How the Checker Works} \end{center} \vskip 1em Since it might be helpful, in formulating correct justifications, to know how the checker works, we will now give a brief description of it\footnote{\footnotesize The basic checker for Mizar-MSE is the so-called classical checker. Although many different correct checkers collaborating with Mizar-MSE can be constructed, the classical checker is currently the most common, and in this chapter we describe its general scheme.}. Let us begin with an example. Consider the inference presented in the example at the beginning of section 5. In order to verify the justification of proposition T1 by referring to sentences A2 and A1 we must demonstrate that the sentence labelled T1 can be inferred from the conjunction of the sentences labelled A1 and A2. First the checker creates the conjunction of the sentences labelled A1 and A2 with the negation of the sentence labelled T1, i.e.:\\ \hspace*{5em}{\tt ({\bt for} y {\bt st} man[y] {\bt holds} mortal[y]) {\bt \&} man[Socrates] {\bt \&}}\\ \hspace*{7em}{\tt ({\bt not} mortal[Socrates])}\\ Subsequently the checker tries to find a contradiction in the obtained sentence. As a matter of fact it is contradictory, but unfortunately this fact cannot be ascertained solely through the use of the laws of Sentential Calculus; as far as Sentential Calculus is concerned the structure of this sentence is the same as\\ \hspace*{7em} $\alpha$ {\bt \&} $\beta$ {\bt \& not} $\gamma$ \\ which, of course, is not contradictory.
Therefore, to obtain the desired contradiction, we must look inside the quantifier of the first conjunct\footnote{\footnotesize When we consider a sentence $\alpha$ {\bt \&} $\beta$ then $\alpha$ and $\beta$ are called conjuncts; similarly, considering $\alpha$ {\bt or} $\beta$, we call $\alpha$ and $\beta$ disjuncts.}. Thus, the checker drops the universal quantifier of the first conjunct and records the obtained formula using conjunction, disjunction and negation. The sentence\\ \hspace*{7em}{\tt {\bt for} y {\bt st} man[y] {\bt holds} mortal[y]}\\ is transformed into\\ \hspace*{7em}{\tt {\bt not} man[y] {\bt or} mortal[y]}\\ Then the disjunctive-conjunctive normal form is created\\ \hspace*{4em}{\tt ({\bt not} man[y] {\bt \&} man[Socrates] {\bt \& not} mortal[Socrates]) {\bt or}}\\ \hspace*{6em}{\tt (mortal[y] {\bt \&} man[Socrates] {\bt \& not} mortal[Socrates])}\\ This formula is not a sentence since y is a free variable. Hence, in order to obtain a contradiction, a certain constant must be substituted for y. Therefore the checker investigates the possible contradiction-yielding substitutions by comparing conjuncts having free variables with the remaining conjuncts. Finally it finds the required substitution; the individual constant Socrates is substituted for the free variable y (actually this is the only substitution permitted in this case). Thus we obtain the following\\ \hspace*{2em}{\tt ({\bt not} man[Socrates] {\bt \&} man[Socrates] {\bt \& not} mortal[Socrates]) {\bt or}}\\ \hspace*{4em}{\tt (mortal[Socrates] {\bt \&} man[Socrates] {\bt \& not} mortal[Socrates])}\\ The above is a disjunction of two sentences. In order to infer a contradiction from this disjunction the checker must ascertain that both of the disjuncts are themselves contradictory.
The first disjunct is a conjunction of three sentences where the second sentence is a negation of the first, hence the first disjunct is contradictory; the other is also a conjunction of three sentences but here the third sentence is a negation of the first, so the second disjunct is contradictory too. Hence the disjunction itself yields a contradiction. Thus the checker accepts our inference, since it is now convinced that any attempt to question its validity allows for inferring a contradiction. If sentence $\alpha$ is justified by reference to sentences $\beta_{1}$ , ... , $\beta_{n}$ then the checker verifies our inference as follows: \\ 1. \hskip 3em A conjunction $\beta_{1}$ {\bt \&} ... {\bt \&} $\beta_{n}$ {\bt \& not} $\alpha$ is created (the further stages of its activity will consist of attempts to derive a contradiction from the obtained sentence); \\ 2. \hskip 3em The checker reduces the obtained sentence to the disjunctive-conjunctive normal form, treating formulae with a quantifier as the main functor in the same way as atomic formulae. The resulting sentence $\gamma$ is a disjunction $\gamma_{1}$ {\bt or} $\gamma_{2}$ {\bt or} ... {\bt or} $\gamma_{m}$ whose every disjunct $\gamma_{i}$ is a conjunction of the form $\gamma_{i1}$ {\bt \&} $\gamma_{i2}$ {\bt \&} ... {\bt \&} $\gamma_{ik_{i}}$, for i = 1, 2, ... , m; every sentence $\gamma_{ij}$ is either an atomic formula or its negation, or a universal sentence or its negation; \\ 3. \hskip 3em Since the checker tries to derive a contradiction from the disjunction $\gamma$, it must derive a contradiction from each of its disjuncts. First the checker tries to obtain a contradiction from $\gamma_{1}$, then from $\gamma_{2}$ and so on until $\gamma_{m}$. If a contradiction can be derived from every $\gamma_{i}$ then the checker accepts our inference, otherwise it communicates to us that the inference is not obvious for it. Thus the problem has been reduced to checking the separate disjuncts.
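The core of step 3 - recognizing that a disjunct contains a pair of complementary conjuncts - can be sketched as follows. The representation (lists of literal strings) is our own simplification; the real checker additionally handles quantified conjuncts and substitutions for free variables.

```python
# A simplified sketch (ours) of the contradiction test on a normal form:
# a disjunction is a list of disjuncts, and each disjunct is a list of
# literal strings, with the prefix "not " marking negation.

def disjunct_is_contradictory(conjuncts):
    # a disjunct is refuted if it contains both a literal and its negation
    return any("not " + c in conjuncts for c in conjuncts)

def refutes(disjunction):
    # the checker accepts the inference only if every disjunct is contradictory
    return all(disjunct_is_contradictory(d) for d in disjunction)

# the Socrates example after substituting Socrates for y:
gamma = [
    ["not man[Socrates]", "man[Socrates]", "not mortal[Socrates]"],
    ["mortal[Socrates]", "man[Socrates]", "not mortal[Socrates]"],
]
print(refutes(gamma))  # True
```

If even one disjunct contains no complementary pair (and no usable universal conjunct), the checker reports that the inference is not obvious.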
Now we shall describe how the checker attempts to obtain a contradiction from the sentence $\gamma_{i}$, that is \\ \hspace*{7em} $\gamma_{i1}$ {\bt \&} $\gamma_{i2}$ {\bt \&} ... {\bt \&} $\gamma_{ik_{i}}$ \\ If for some s, t $\leq$ k$_{i}$ we have $\gamma_{is}$ = {\bt not} $\gamma_{it}$ then the contradiction is exposed; if there are no such s, t then we have to check whether any of the sentences $\gamma_{i1}$, $\gamma_{i2}$, ... , $\gamma_{ik_{i}}$ is a universal sentence. If no such sentence is found the checker terminates its work and communicates that the inference is not obvious for it; otherwise it takes the first consecutive universal statement $\gamma_{ij}$, drops the universal quantifiers at the beginning of it, and, replacing $\gamma_{ij}$ by the obtained formula, creates a disjunctive-conjunctive normal form again, obtaining $\gamma'$. Then the work of the checker is similar to that with $\gamma$, but it does not look for universal statements, and the relevant contradictions may be not only pairs $\delta$ and {\bt not} $\delta$ but also effects of substitutions of constants in place of free variables. If in the formula $\gamma'$ there are free variables x$_{1}$, ..., x$_{n}$ then for every x$_{i}$ the same constant has to be substituted at every place of its occurrence, and only those contradictions are conclusive which are obtained from two sentences (without free variables). \newpage \begin{center} \LARGE{\bf 19. Identity and Difference} \end{center} \vskip 1em In the language Mizar-MSE we have two special predicates: identity = and difference $<>$. If s and t are fixed nominal expressions (in Mizar-MSE they can only be individual constants), then we say that the sentence "s = t" is true if and only if the expressions s and t denote the same object. For instance, the sentence "4 + 5 = 9" is true because the expression "4 + 5" denotes the same number as does the expression \mbox{"9"}.
Analogically, we say that the sentence "s $<>$ t" is true if and only if the expressions s and t denote different objects. \vskip 1em {\large\bf PHILOSOPHICAL COMMENT.} The above explanation may seem rather shallow if we consider the fact that the truth of the sentences of the form "s = t" and \mbox{"s $<>$ t"} is explained by reference to the concepts of identity and difference of objects. It is, unfortunately, natural: if we are to have a definite concept of object we must have a definite concept of identity of objects. There is a definition of the concept of identity of objects which seems to explain a little more. This is the so-called Leibniz's definition of identity: the objects {\it a} and {\it b} are identical if and only if every property possessed by the object {\it a} is also possessed by the object {\it b}. In the same spirit we shall say that the objects {\it a} and {\it b} are different if and only if there is a property possessed by the object {\it a} and not possessed by the object {\it b}. If we admit such properties as "being identical with the object {\it a}", then we see that Leibniz's definition is in agreement with our intuitive concepts of identity and difference; but then, unfortunately, the definition seems not to contribute anything new. In principle, Leibniz's definition reduces the problem of explaining identity to that of explaining property. \smallskip Note that the formula "s $<>$ t" is equivalent to the formula "{\tt {\bt not} s = t}". The system Mizar-MSE identifies the formulae of these two forms. Certain properties of the relations of identity and difference are recognized by the checking module. We can see that by checking, for instance, the correctness of the following justification.
\begin{tabbing} aa\=a\=aa\=a \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x,y {\bt for} something;} \\ \>{\bt begin} \\ \> \>{\tt {\bt for} x,y {\bt st} x = y {\bt \&} P[x] {\bt holds} P[y]} \\ \> \> \>{\bt proof} \\ \> \> \>{\tt {\bt let} a,b {\bt be} something;} \\ \> \> \>{\bt thus thesis;} \\ \>{\bt end;} \end{tabbing} One can likewise justify other fundamental properties of the relation of identity, such as: \begin{tabbing} aa\=a\=aa\=aa \kill \> \>{\tt {\bt for} x {\bt holds} x = x} \\ \> \>{\tt {\bt for} x,y {\bt st} x = y {\bt holds} y = x} \\ \> \>{\tt {\bt for} x,y,z {\bt st} x = y {\bt \&} y = z {\bt holds} x = z} \\ \> \>{\tt {\bt for} x,y,x',y' {\bt st} x = x' {\bt \&} y = y' {\bt \&} P[x,y] {\bt holds} P[x',y']} \end{tabbing} The details are left to the Reader. By making use of the predicates of identity and difference we can express certain numerical concepts such as "there are at least five objects such that ...", "there are exactly five objects such that ...", etc. In Mizar-MSE the sentence of the form "there are at least five objects x such that $\alpha$(x)" can be expressed as follows:\\ {\tt A: {\bt ex} x, y, z, s, t {\bt st} x $<>$ y {\bt \&} x $<>$ z {\bt \&} x $<>$ s {\bt \&} x $<>$ t {\bt \&} y $<>$ z {\bt \&}}\\ {\tt y $<>$ s {\bt \&} y $<>$ t {\bt \&} z $<>$ s {\bt \&} z $<>$ t {\bt \&} s $<>$ t {\bt \&}} \\ {\tt $\alpha$[x] {\bt \&} $\alpha$[y] {\bt \&} $\alpha$[z] {\bt \&} $\alpha$[s] {\bt \&} $\alpha$[t];}\\ or equivalently, more briefly but not necessarily more simply:\\ {\tt A':{\bt for} x, y, z, s {\bt ex} t {\bt st} t $<>$ x {\bt \&} t $<>$ y {\bt \&} t $<>$ z {\bt \&} t $<>$ s {\bt \&} $\alpha$[t]}.\\ The sentence of the form "there are exactly five objects x such that $\alpha$(x)" can be expressed thus:\\ {\tt B: {\bt ex} x, y, z, s, t {\bt st} x $<>$ y {\bt \&} x $<>$ z {\bt \&} x $<>$ s {\bt \&} x $<>$ t {\bt \&} y $<>$ z {\bt \&}}\\ {\tt y $<>$ s {\bt \&} y $<>$ t {\bt \&} z $<>$ s {\bt \&} z $<>$ t {\bt \&} s $<>$ t {\bt \&}}\\
{\tt $\alpha$[x] {\bt \&} $\alpha$[y] {\bt \&} $\alpha$[z] {\bt \&} $\alpha$[s] {\bt \&} $\alpha$[t] {\bt \&}} \\ {\tt ({\bt for} a {\bt st} $\alpha$[a] {\bt holds} a = x {\bt or} a = y {\bt or} a = z {\bt or} a = s {\bt or} a = t);}\\ or by using a trick resembling that used in A':\\ {\tt B':({\bt for} x, y, z, s {\bt ex} t {\bt st} t $<>$ x {\bt \&} t $<>$ y {\bt \&} t $<>$ z {\bt \&} t $<>$ s {\bt \&} $\alpha$[t])}\\ {\tt {\bt \&} ({\bt ex} x, y, z, s, t {\bt st} ({\bt for} a {\bt st} $\alpha$[a] {\bt holds} a = x {\bt or} a = y {\bt or} a = z {\bt or}}\\ {\tt a = s {\bt or} a = t))}. By making use of the predicate of identity we can express the statement that the number of objects in the world is exactly such and such (obviously on the assumption that their number is finite). We can also name a finite number of objects and formulate the statement that they are all the objects which exist. Let us imagine a world in which there are exactly five objects. Let them be named a, b, c, d, e. Then for any predicate P we can prove the following equivalences: \begin{center} {\tt ({\bt for} x {\bt holds} P[x]) {\bt iff} (P[a] \& P[b] \& P[c] \& P[d] \& P[e])},\\ {\tt ({\bt ex} x {\bt st} P[x]) {\bt iff} (P[a] {\bt or} P[b] {\bt or} P[c] {\bt or} P[d] {\bt or} P[e])}.\\ \end{center} This shows that in such a situation the use of the universal and the existential quantifier can be replaced by the appropriate use of conjunction and disjunction. We shall prove the first of these equivalences.
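Before giving the formal Mizar-MSE proof, both equivalences can be verified mechanically. The following Python sketch (an illustration outside Mizar-MSE) tries every possible extension of a one-argument predicate P over the five-object domain:

```python
from itertools import product

domain = ["a", "b", "c", "d", "e"]  # a world with exactly five objects

def equivalences_hold():
    # try all 2^5 = 32 possible interpretations of the predicate P
    for bits in product([False, True], repeat=len(domain)):
        P = dict(zip(domain, bits))
        universal = all(P[x] for x in domain)
        conjunction = P["a"] and P["b"] and P["c"] and P["d"] and P["e"]
        existential = any(P[x] for x in domain)
        disjunction = P["a"] or P["b"] or P["c"] or P["d"] or P["e"]
        if universal != conjunction or existential != disjunction:
            return False
    return True

print(equivalences_hold())  # True
```

Since the check ranges over all 32 interpretations of P, it establishes the two equivalences for every predicate in this five-object world.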
\begin{tabbing} aaa\=a\=a\=aaa\=aaaa\=a\=a \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x {\bt for} anything;} \\ \>{\tt {\bt given} a, b, c, d, e {\bt being} anything;} \\ {\tt A: {\bt for} x {\bt holds} x = a {\bt or} x = b {\bt or} x = c {\bt or} x = d {\bt or} x = e;} \\ \>{\bt begin} \\ \>{\tt ({\bt for} x {\bt holds} P[x]) {\bt iff} (P[a] \& P[b] \& P[c] \& P[d] \& P[e])} \\ \> \>{\bt proof} \\ \> \> \>{\tt B: {\bt now assume} B0: {\bt for} x {\bt holds} P[x];} \\ \> \> \> \>{\tt B1: P[a] {\bt by} B0;} \\ \> \> \> \>{\tt B2: P[b] {\bt by} B0;} \\ \> \> \> \>{\tt B3: P[c] {\bt by} B0;} \\ \> \> \> \>{\tt B4: P[d] {\bt by} B0;} \\ \> \> \> \>{\tt B5: P[e] {\bt by} B0;} \\ \> \> \> \>{\tt {\bt thus} P[a] \& P[b] \& P[c] \& P[d] \& P[e]} \\ \> \> \> \> \> \>{\tt {\bt by} B1, B2, B3, B4, B5;} \\ \> \> \>{\bt end;} \\ \> \> \>{\tt C: {\bt now assume} C0: P[a] \& P[b] \& P[c] \& P[d] \& P[e];} \\ \> \> \> \> \>{\tt {\bt let} x {\bt be} anything;} \\ \> \> \> \>{\tt C01: x = a {\bt or} x = b {\bt or} x = c {\bt or} x = d {\bt or} x = e {\bt by} A;} \\ \> \> \> \>{\tt C1: x = a {\bt implies} P[x] {\bt by} C0;} \\ \> \> \> \>{\tt C2: x = b {\bt implies} P[x] {\bt by} C0;} \\ \> \> \> \>{\tt C3: x = c {\bt implies} P[x] {\bt by} C0;} \\ \> \> \> \>{\tt C4: x = d {\bt implies} P[x] {\bt by} C0;} \\ \> \> \> \>{\tt C5: x = e {\bt implies} P[x] {\bt by} C0;} \\ \> \> \> \> \>{\tt {\bt thus} P[x] {\bt by} C01, C1, C2, C3, C4, C5;} \\ \> \> \>{\bt end;} \\ \> \>{\tt {\bt thus thesis by} B, C;} \\ \> \>{\bt end;} \end{tabbing} We suggest that the Reader should prove, for the same assumptions, that: \begin{center} {\tt ({\bt ex} x {\bt st} P[x]) {\bt iff} (P[a] {\bt or} P[b] {\bt or} P[c] {\bt or} P[d] {\bt or} P[e])}.\\ \end{center} \newpage \begin{center} \LARGE{\bf 20. Translation Problem} \end{center} \vskip 1em To be able to prove theorems so that their proofs can be checked by Mizar-MSE we must first record those theorems in the language Mizar-MSE.
This is to say that we first have to solve the problem of translation. This may involve difficulties of two kinds: (i) the poverty of the language Mizar-MSE, (ii) the difficulty of identifying the deep structure of sentences. The language Mizar-MSE does not admit many well-known, well-described and often used linguistic constructions. The lack of function symbols is perhaps the most vexing. We are accustomed to using such symbols as that of addition: +. How are we to record in Mizar-MSE, for instance, the law of the commutativity of addition:\\ \hspace*{4em} for any numbers x, y there holds x + y = y + x ? \\ The symbol of identity = can be used in Mizar-MSE, and even the fundamental properties of the relation of identity are recognized by the so-called classical Checker, but without any function symbols we cannot record the law of commutativity in a natural way. However, it is known that every function symbol of n arguments can be replaced by a predicate of n+1 arguments. For instance, let us introduce the predicate add of three arguments. If that predicate is to replace + we have to adopt the following axioms:\\ \hspace*{3em} existence: {\tt {\bt for} x, y {\bt ex} z {\bt st} add[x,y,z];}\\ \hspace*{2em} uniqueness: {\tt {\bt for} x, y, z, z' {\bt st} add[x,y,z] \& add[x,y,z'] {\bt holds} z = z';}\\ If we replace the formula add[x,y,z] by x + y = z, then both axioms become true, and this is so regardless of the meaning assigned to the symbol +: interpreting + as a symbol of any other operation which is always executable and single-valued, such as multiplication, would not affect the truth of those axioms.
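The claim that the two axioms hold for any total, single-valued operation can be tested on a small finite model. A Python sketch (an illustration only, with an assumed ten-element domain and arithmetic modulo 10 so that results stay inside the domain):

```python
D = range(10)  # an assumed finite domain of individuals

def add(x, y, z):
    # add[x,y,z] is read here as: x + y = z (modulo 10)
    return (x + y) % 10 == z

def mul(x, y, z):
    # the same shape with multiplication: the axioms still hold
    return (x * y) % 10 == z

def existence(rel):
    # for x, y ex z st rel[x,y,z]
    return all(any(rel(x, y, z) for z in D) for x in D for y in D)

def uniqueness(rel):
    # for x, y, z, z' st rel[x,y,z] & rel[x,y,z'] holds z = z'
    return all(z == w for x in D for y in D for z in D for w in D
               if rel(x, y, z) and rel(x, y, w))

print(existence(add) and uniqueness(add))  # True
print(existence(mul) and uniqueness(mul))  # True
```

Existence corresponds to the operation always producing a result, uniqueness to the result being single-valued; any function satisfies both when written as a relation of one more argument.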
We can now record the law of commutativity thus:\\ \hspace*{2em} commutativity: {\tt {\bt for} x, y, z, z' {\bt st} add[x,y,z] \& add[y,x,z']}\\ \hspace*{8em} {\tt {\bt holds} z = z';} The law of associativity of addition, usually formulated thus:\\ \hspace*{3em} for any x, y, z there holds (x + y) + z = x + (y + z),\\ when formulated in Mizar-MSE looks even less convincing:\\ \hspace*{3em} associativity: {\tt {\bt for} x, y, z, a, b, a', b' {\bt st} add[x,y,a] \&}\\ \hspace*{6em}{\tt add[a,z,b] \& add[x,a',b'] \& add[y,z,a'] {\bt holds} b = b';} The lack of function symbols in Mizar-MSE does not limit the possibilities of translation, but it certainly markedly reduces the legibility of those sentences which we ordinarily record by making use of function symbols. Another essential limitation consists in the impossibility of formulating and proving schemata of theorems. Let us consider the situation discussed in the preceding chapter: \begin{tabbing} aa\=a\=a \kill \> \>{\bt environ} \\ \> \>{\tt {\bt reserve} x {\bt for} anything;} \\ \> \>{\tt {\bt given} a, b, c, d, e {\bt being} anything;} \\ \>{\tt A: {\bt for} x {\bt holds} x = a {\bt or} x = b {\bt or} x = c {\bt or} x = d {\bt or} x = e;} \\ \> \>{\bt begin} \\ \> \>{\tt ({\bt for} x {\bt holds} P[x]) {\bt iff} (P[a] \& P[b] \& P[c] \& P[d] \& P[e])} \end{tabbing} That is followed by a fairly long proof.
If in that proof we replace the predicate P by another predicate R of one argument, then we obtain a correct proof of the sentence:\\ \hspace*{1em}{\tt I: ({\bt for} x {\bt holds} R[x]) {\bt iff} (R[a] \& R[b] \& R[c] \& R[d] \& R[e])}.\\ Likewise, if we replace the formula P[x] by any other formula $\alpha$(x) with one free variable x in the proof under consideration, then we obtain a correct proof of the sentence:\\ \hspace*{1em}{\tt II: ({\bt for} x {\bt holds} $\alpha$(x)) {\bt iff} ($\alpha$(a) \& $\alpha$(b) \& $\alpha$(c) \& $\alpha$(d) \& $\alpha$(e))}.\\ It must be borne in mind in this connection that the occurrences of formulas of the form P[t] must be replaced by occurrences of formulas of the form $\alpha$(t). Thus both the sentences of the form I and any sentence of the form II can be proved on the basis of the assumption:\\ \hspace*{1em}{\tt A: {\bt for} x {\bt holds} x = a {\bt or} x = b {\bt or} x = c {\bt or} x = d {\bt or} x = e}.\\ But each of those sentences must be proved separately, by repeating almost the same proof each time. We are not in a position to prove a single general schema II and then to justify all the remaining theorems subsumed under that schema by just referring to II. This limitation lengthens proofs, in some cases quite considerably. It has also another serious consequence: some important mathematical theories cannot be axiomatized by a finite number of sentences, while they can be axiomatized by a finite number of schemata.
This refers in particular to Peano's elementary arithmetic, whose axiom system includes infinitely many formulas subsumed under the schema:\\ \hspace*{2em}{\tt $\alpha$(0) {\bt \&} ({\bt for} x {\bt st} $\alpha$(x) {\bt holds} $\alpha$(x+1))}\\ \hspace*{5em}{\tt {\bt implies} ({\bt for} x {\bt holds} $\alpha$(x))},\\ where $\alpha$(x) is any formula in the theory of numbers with one free variable.\footnote{\footnotesize The limitation discussed here pertains to the system Mizar-MSE. There are several other systems in the Mizar family, designed by Andrzej Trybulec, to which those remarks are not applicable; this is true in particular of the fairly well tested system PC Mizar, which has a quite comprehensive library of mathematical texts. Nevertheless it seems that for the teaching of logic Mizar-MSE remains the most suitable; its limitations are outweighed by the simplicity of its description and use.} Difficulties of another kind will be illustrated by the following sentence: for any two planes P and Q, if there is a point x which lies on both P and Q, then there is a point y such that y $\neq$ x and y lies on both P and Q. It might seem that this sentence can easily be recorded in Mizar-MSE. We have objects of two kinds: points (x, y) and planes (P, Q), and one predicate "... lies on ...", which has a point as its first argument and a plane as its second argument. Let {\tt on} stand for the predicate "... lies on ...". It might seem that our sentence can be recorded thus:\\ \hspace*{2em}{\tt {\bt for} P, Q {\bt st} ({\bt ex} x {\bt st} on[x,P] \& on[x,Q])}\\ \hspace*{5em}{\tt {\bt holds} ({\bt ex} y {\bt st} y $<>$ x \& on[y,P] \& on[y,Q])}.\\ Unfortunately, we have obtained a formula which, in the sense of the Mizar-MSE syntax, is not a sentence, because x occurs in it as a free variable in that part of the formula which follows {\bt holds}.
The correct translation is as follows:\\ \hspace*{2em}{\tt {\bt for} P, Q, x {\bt st} on[x,P] \& on[x,Q] {\bt holds}}\\ \hspace*{5em}{\tt ({\bt ex} y {\bt st} y $<>$ x \& on[y,P] \& on[y,Q])}. \newpage \begin{center} \LARGE{\bf 21. Antinomies and Paradoxes} \end{center} \vskip 1em The present chapter is intended to provide a number of examples of reasonings in Mizar-MSE and at the same time to present the best known antinomies and paradoxes based on properties of semantic expressions, that is, expressions which refer to meaning or truth. At variance with chronology, we shall begin with the simplest examples. \vskip 1em {\Large\bf Russell's Antinomy} In 1879, the German logician Gottlob Frege published his work {\em "Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens"}. He presented in it an axiom system of logic and set theory, the first in history. Unfortunately, as often happens with pioneering undertakings, Frege's intricate construction had an essential error in its very foundations. On June 16, 1902, Bertrand Russell sent Frege a letter in which he pointed out that error. Frege's system included as one of its axioms the so-called Comprehension Axiom, on the strength of which for any property P there is a set S containing exactly those objects which have the property P. In other words, for every property P there is a set S such that, for each x, x is in S if and only if x has the property P. Let us consider Russell's property R of not being its own element, that is:\\ \hspace*{3em} x has property R if and only if it is not the case that x is in x. If we substitute the property R in the Comprehension Axiom, then we obtain:\\ \hspace*{3em} there is a set S such that for each x, x is in S if and only if it is not the case that x is in x.
If such a set S exists then, by applying the choice rule, we can introduce a new constant which satisfies the following condition: \\ \hspace*{3em} for each x, x is in S if and only if it is not the case that x is in x. In turn, by applying the rule {\em dictum de omni} we obtain:\\ \hspace*{3em} S is in S if and only if it is not the case that S is in S.\\ However, the above sentence is, unfortunately, self-contradictory. The essence of Russell's reasoning, as presented above, can be reduced to an indirect proof showing that the property R does not denote any set. We shall record that reasoning in Mizar-MSE. \begin{tabbing} aaa\=aaaa\=aaaaaaaaaaaaaaa\=aa \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x, y, z {\bt for} ANYTHING;} \\ \>{\tt RUSSELLIAN: {\bt for} x {\bt holds} property[x] {\bt iff not} belongs[x,x];} \\ {\tt :: belongs[x,y] means here 'x belongs to y'} \\ \>{\bt begin} \\ \> \>{\tt ANTINOMY: ({\bt ex} z {\bt st} {\bt for} x {\bt holds} belongs[x,z] {\bt iff} property[x])} \\ \> \> \>{\bt implies contradiction} \\ \> \>{\bt proof} \\ \> \>{\tt {\bt assume} A: {\bt ex} z {\bt st for} x {\bt holds} belongs[x,z] {\bt iff} property[x];} \\ \> \>{\tt {\bt consider} z {\bt such that} 1: {\bt for} x {\bt holds} belongs[x,z] {\bt iff} property[x] {\bt by} A;} \\ \> \>{\tt belongs[z,z] {\bt iff} property[z] {\bt by} 1;} \\ \> \>{\tt {\bt hence contradiction by} RUSSELLIAN} \\ \>{\bt end;} \end{tabbing} We now suggest that the Reader should practise such argumentation on two examples drawn from practical experience.
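The sentence labelled ANTINOMY can also be confirmed by brute force: in a finite universe, no relation "belongs" whatsoever admits an object z whose members are exactly the non-self-members. A Python sketch (an illustration only, over an assumed three-element universe):

```python
from itertools import product

U = range(3)  # an assumed universe of three objects
pairs = [(x, y) for x in U for y in U]

def russell_set_exists():
    # search all 2^9 = 512 possible 'belongs' relations for a z with:
    # for every x, belongs(x, z) iff not belongs(x, x)
    for bits in product([False, True], repeat=len(pairs)):
        belongs = dict(zip(pairs, bits))
        for z in U:
            if all(belongs[(x, z)] == (not belongs[(x, x)]) for x in U):
                return True
    return False

print(russell_set_exists())  # False
```

The search fails for the same reason the proof succeeds: taking x = z in the condition forces belongs(z, z) to be equivalent to its own negation.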
\newpage \begin{tabbing} aaa\=aaaa\=aaaaaaaaaaaaa\=a \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x, y, z {\bt for} ANYTHING;} \\ \>{\tt {\bt given} Jack {\bt being} ANYTHING;} \\ \>{\bt begin} \\ \> \>{\tt EGOIST: ({\bt for} x {\bt holds} likes[Jack,x] {\bt iff not} likes[x,x])} \\ \> \> \>{\tt {\bt implies} Jack\_\/is\_\/an\_\/egoist[\ ]} \end{tabbing} The next example comes from Russell himself. \begin{tabbing} aaa\=aaaa\=aaaaaaaaaaaaa\=a \kill \>{\bt environ} \\ \>{\tt {\bt reserve} x, y, z {\bt for} ANYTHING;} \\ \>{\tt {\bt given} Barber {\bt being} ANYTHING;} \\ \>{\bt begin} \\ \> \>{\tt BARBER: ({\bt for} x {\bt holds} shaves[Barber,x] {\bt iff not} shaves[x,x])} \\ \> \> \>{\bt implies contradiction} \end{tabbing} Frege gave an axiom system which was in harmony with our intuitions; unfortunately, it proved to be self-contradictory. Our intuitions pertaining to the concepts of property and set have thus proved contradictory, too. Moreover, that contradiction was not due to any oversight. Frege's axiom system cannot be improved in any simple way; to put it more precisely, it has proved impossible to find the one correct improvement. Today we have a number of non-equivalent axiomatizations of the concept of set (set theories), and each of them is based on a certain idea intended to avoid Russell's antinomy. \vskip 1em {\Large\bf Epimenides' Paradox} An antinomy is a paradoxical contradiction in our ideas about the world or in a scientific theory, but not only contradictions can be striking. Striking conclusions which do not result in a contradiction are simply termed paradoxes. The so-called Epimenides' paradox is undoubtedly one of the best known semantic paradoxes. Epimenides was a Cretan who is said to have claimed that all Cretans always lie. It seems odd that from the above we can conclude that some Cretan sometimes tells the truth.
\begin{tabbing} aaa\=a\=aaaa\=aaaaa\=a \kill \>{\bt environ} \\ \> \>{\tt {\bt reserve} s, z {\bt for} SENTENCE;} \\ \> \>{\tt {\bt reserve} x, y {\bt for} MAN;} \\ \> \>{\tt {\bt given} Epimenides {\bt being} MAN;} \\ \> \>{\tt {\bt given} the\_\/sentence {\bt being} SENTENCE;} \\ \> \>{\tt A1: {\bt for} z {\bt holds} false[z] {\bt iff not} true[z];} \\ \> \>{\tt A2: true[the\_\/sentence] {\bt iff} ({\bt for} x {\bt st} cretan[x] {\bt holds} liar[x]);} \\ \> \>{\tt A3: {\bt for} y {\bt holds} liar[y] {\bt iff} ({\bt for} s {\bt st} maintain[y,s] {\bt holds} false[s]);} \\ \> \>{\tt A4: cretan[Epimenides];} \\ \> \>{\bt begin} \\ \> \> \>{\tt FIRST: maintain[Epimenides,the\_\/sentence] {\bt implies} false[the\_\/sentence]} \\ \> \> \>{\bt proof} \\ \> \> \>{\tt {\bt assume} 1: maintain[Epimenides,the\_\/sentence];} \\ \> \> \>{\tt {\bt assume} 2: {\bt not} false[the\_\/sentence];} \\ \> \> \>{\tt {\bt then} true[the\_\/sentence] {\bt by} A1;} \\ \> \> \>{\tt {\bt then} liar[Epimenides] {\bt by} A2, A4;} \\ \> \> \>{\tt {\bt then for} s {\bt st} maintain[Epimenides,s] {\bt holds} false[s] {\bt by} A3;} \\ \> \> \>{\tt {\bt hence contradiction by} 1, 2} \\ \> \>{\bt end;} \\ \> \> \>{\tt maintain[Epimenides,the\_\/sentence] {\bt implies}} \\ \> \> \> \>{\tt ({\bt ex} s,x {\bt st} true[s] \& maintain[x,s] \& cretan[x])} \\ \> \> \>{\bt proof} \\ \> \> \>{\tt {\bt assume} maintain[Epimenides,the\_\/sentence];} \\ \> \> \>{\tt {\bt then} false[the\_\/sentence] {\bt by} FIRST;} \\ \> \> \>{\tt {\bt then not} true[the\_\/sentence] {\bt by} A1;} \\ \> \> \>{\tt {\bt then consider} y {\bt such that}} \\ \> \> \> \>{\tt 1: cretan[y] {\bt and}} \\ \> \> \> \>{\tt 2: {\bt not} liar[y] {\bt by} A2;} \\ \> \> \>{\tt {\bt consider} z {\bt such that}} \\ \> \> \> \>{\tt 3: maintain[y,z] {\bt and}} \\ \> \> \> \>{\tt 4: {\bt not} false[z] {\bt by} A3,2;} \\ \> \> \>{\tt true[z] {\bt by} A1,4;} \\ \> \> \>{\tt {\bt hence thesis by} 1,3} \\ \> \>{\bt end;} \end{tabbing} The above
reasoning does not yield a contradiction. To establish that, it does not suffice to make several unsuccessful attempts to prove a contradiction under the assumptions made. But one can imagine an extremely simple world in which there is only one man (Epimenides) and only two sentences, and such that all the assumptions of our reasoning are satisfied in it. This proves the non-contradictory character of our assumptions, because in such a situation they would truly describe a certain reality (perhaps rather odd), and a contradiction cannot truly describe any reality. \begin{tabbing} aaa\=a\=aaaaa\=aaaaaaaaaa\=aaaaaa \kill \>{\bt environ} \\ \> \>{\tt {\bt given} the\_\/sentence1, the\_\/sentence2 {\bt being} SENTENCE;} \\ \> \>{\tt {\bt given} Epimenides {\bt being} MAN;} \\ :: The complete description of our artificial world is given below. \\ \>{\tt B1: maintain[Epimenides,the\_\/sentence1];} \\ \>{\tt B2: maintain[Epimenides,the\_\/sentence2];} \\ \>{\tt B3: false[the\_\/sentence1] \& {\bt not} true[the\_\/sentence1];} \\ \>{\tt B4: true[the\_\/sentence2] \& {\bt not} false[the\_\/sentence2];} \\ \>{\tt B5: cretan[Epimenides];} \\ \>{\tt B6: {\bt not} liar[Epimenides];} \\ :: We can construct a world in which there are only two sentences and only \\ :: one man and such that everything given in the previous environment is true \\ \>{\tt B7: {\bt for} x {\bt being} MAN {\bt holds} x = Epimenides;} \\ \>{\tt B8: {\bt for} s {\bt being} SENTENCE {\bt holds} s = the\_\/sentence1 {\bt or}} \\ \> \> \>{\tt s = the\_\/sentence2;} \\ \> \>{\bt begin} \\ :: Now try to prove AA1 - AA3 using only B1 - B8 axioms.
\\ \> \> \>{\tt AA1: {\bt for} z {\bt holds} false[z] {\bt iff} {\bt not} true[z]} \\ \> \> \>{\bt proof} \\ :: Here insert your proof \\ \> \> \>{\bt end;} \\ \> \> \>{\tt AA2: true[the\_\/sentence1] {\bt iff} ({\bt for} x {\bt st} cretan[x] {\bt holds} liar[x])} \\ \> \> \>{\bt proof} \\ :: Here insert your proof \\ \> \> \>{\bt end;} \\ \> \> \>{\tt AA3: {\bt for} y {\bt holds} liar[y] {\bt iff}} \\ \> \> \> \>{\tt ({\bt for} s {\bt st} maintain[y,s] {\bt holds} false[s])} \\ \> \> \>{\bt proof} \\ :: Here insert your proof \\ \> \> \>{\bt end;} \\ :: By a rather obvious justification you arrive at \\ \> \> \>{\tt AA4: cretan[Epimenides] {\bt by} B5;} \end{tabbing} In this way, by interpreting the\_\/sentence as a name of the same thing as the\_\/sentence1, you have obtained a proof showing that there is a world (even if a very odd one) described by the axioms A1-A4. Hence they cannot be contradictory! \vskip 1em {\Large\bf Eubulides' Antinomy} As we have seen, Epimenides' paradox is not an antinomy. But we can slightly reformulate the content of the paradox so as to obtain an antinomy. Let us consider the sentence:\\ E: {\it The sentence I am uttering now is false.}\\ This formulation comes from Eubulides, a Greek philosopher. Is the sentence E true? If it were true, then what it says would be the case, and hence it would be false. And if it were false, then what it says would not be the case, and hence it would not be false. We have thus arrived at a contradiction. The schema of this reasoning is very much like Russell's reasoning described earlier. We suggest that the Reader should describe Eubulides' antinomy (also called the antinomy of the liar) in Mizar-MSE and point out the contradiction resulting from the assumptions made. A reasoning with a similar schema, analysed by Alfred Tarski, led him to a certain important theorem.
He demonstrated that in languages rich enough for one to speak in them about the expressions of those very languages, a correct concept of truth cannot be expressed (a correct concept of truth is one which covers all true sentences and only true sentences). Unfortunately, all languages in which we can express the addition and multiplication of natural numbers and can use quantifiers and sentential connectives are already that complicated. This applies in particular to the language of the arithmetic of natural numbers and to that of set theory, not to mention natural languages such as Polish or English. \newpage \begin{center} \LARGE{\bf Appendix. Algorithms in Logic} \end{center} \vskip 1em This Appendix goes beyond the scope of the problems presented in this handbook. It is addressed to those Readers who are interested in the algorithmic aspect of the issues discussed in this book. An algorithm is an unambiguous recipe or instruction for carrying out certain operations step by step. That instruction should be purely mechanical, which is to say that it should not refer to any additional knowledge or assessment on the part of the person who follows the algorithm: as in an orderly twentieth-century army, if the instruction says that one should press the red button, then one should press it without engaging in any discussion. When we have a certain datum at the input and proceed in accordance with the instruction specified by a given algorithm, we obtain a certain datum at the output. In school mathematics many algorithms are used, even though they are not always called so. Here are some examples of well-known algorithms. (i) The algorithm of adding decimal numbers: at the input it takes two numbers in decimal notation and at the output yields their sum in decimal notation.
(ii) The algorithm of solving systems of linear equations. At the input we have a system of linear equations, and at the output one of the following answers: the given system of equations is contradictory, or it has more than one solution, or it has exactly one solution, which we obtain at the output. (iii) The algorithm of solving quadratic equations. At the input we have a quadratic equation, and at the output either the answer "this equation has no solution", or the answer "this equation has only one solution" and the value of that solution, or a pair of numbers which are the solutions of the equation under consideration. Nowadays computer programs are the best known examples of algorithms. A program written in the language Basic (or any other programming language, such as Pascal or C) is an instruction written in a way comprehensible to the computer (or, to put it more precisely, to the interpreter or the compiler). The system Mizar-MSE is also an example of an algorithm: when it receives at the input a text written in the language Mizar-MSE, it produces at the output information about errors or about their absence or, depending on the version of the system, a text with comments. It is common to reserve the term "algorithm" for informal recipes for procedure, leaving the term "program" for recipes formulated in a well defined language, such as computer programming languages. Usually, when writing a program, that is, an algorithm precise enough to be used by a computer, we first formulate the appropriate algorithm in a non-formalized form. In this Appendix we shall describe several key algorithms used in the processing of sentential calculus formulas. For that purpose we need certain metalogical concepts pertaining to the sentential calculus, which are defined below. \vskip 1em {\bf Definition} \begin{enumerate} \item The expressions $p_{1}$, $p_{2}$, ... , and additionally {\bt contradiction}, are atomic formulas of the sentential calculus.
\item Every atomic formula is a sentential calculus formula. \item If $\alpha$ and $\beta$ are sentential calculus formulae, then the following expressions: {\bt not} $\alpha$, $(\alpha$ {\bt \&} $\beta)$, $(\alpha$ {\bt or} $\beta)$, $(\alpha$ {\bt implies} $\beta)$, $(\alpha$ {\bt iff} $\beta)$, are sentential calculus formulas. \item Only those expressions which can be obtained from atomic formulas by means of the constructions specified under 3 are sentential calculus formulas. \end{enumerate} Note that the above definition makes it possible to prove properties of sentential calculus formulas by structural induction. In order to demonstrate that every sentential calculus formula has a fixed property P it suffices to prove that the following two conditions are satisfied: I. Every atomic formula has the property P. II. If the expressions $\alpha$ and $\beta$ have the property P, then the expressions \mbox{{\bt not} $\alpha$, $(\alpha$ {\bt \&} $\beta)$, $(\alpha$ {\bt or} $\beta)$, $(\alpha$ {\bt implies} $\beta)$, $(\alpha$ {\bt iff} $\beta)$} also have the property P. The above reasoning can be made more similar to ordinary mathematical induction by comparing formulae as to their length (number of letters). Thus in order to demonstrate that every sentential calculus formula has the property P it suffices to show, for any formula $\alpha$: if every formula shorter than $\alpha$ has the property P, then the formula $\alpha$ also has the property P. Sentential calculus formulae are interpreted by substituting for the atomic formulae certain sentences with a definite logical value (that is, either true or false) and by replacing the sentential connectives by the corresponding expressions, while for {\bt contradiction} we substitute some sentence which is evidently false, for instance 0 = 1.
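The inductive definition above translates directly into a program that computes the logical value of a formula under an interpretation, by the same structural recursion; the Python sketch below (with a nested-tuple representation assumed purely for illustration) also decides tautology by enumerating all valuations, i.e. the 0-1 method described below:

```python
from itertools import product

# a formula is an atom ("p1", ..., or "contradiction") or a tuple
# ("not", f), ("&", f, g), ("or", f, g), ("implies", f, g), ("iff", f, g)

def value(f, v):
    # logical value of formula f under valuation v (atom -> True/False)
    if isinstance(f, str):
        return False if f == "contradiction" else v[f]
    if f[0] == "not":
        return not value(f[1], v)
    a, b = value(f[1], v), value(f[2], v)
    return {"&": a and b, "or": a or b,
            "implies": (not a) or b, "iff": a == b}[f[0]]

def atoms(f):
    # the atomic formulas occurring in f, except contradiction
    if isinstance(f, str):
        return set() if f == "contradiction" else {f}
    return set().union(*(atoms(g) for g in f[1:]))

def tautology(f):
    # the 0-1 method: f must be true under every valuation
    ps = sorted(atoms(f))
    return all(value(f, dict(zip(ps, bits)))
               for bits in product([False, True], repeat=len(ps)))

print(tautology(("or", "p1", ("not", "p1"))))         # True
print(tautology(("implies", "contradiction", "p1")))  # True
print(tautology(("implies", "p1", "p2")))             # False
```

Each clause of `value` corresponds to one clause of the definition, which is exactly why the recursion is guaranteed to terminate: every recursive call is made on a shorter formula.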
\vskip 1em {\bf Definition} \begin{enumerate} \item We say that a given sentential calculus formula $\alpha$ is a tautology if and only if it is true for every interpretation. \item We say that a given sentential calculus formula $\alpha$ is contradictory if and only if it is false for every interpretation. \item We say that a given formula $\beta$ follows from a formula $\alpha$ if and only if, for any interpretation which assigns the same sentences to the same atomic formulas in $\alpha$ and in $\beta$, if the formula $\alpha$ is true then the formula $\beta$ is true, too. \item We say that two formulas $\alpha$ and $\beta$ are equivalent if and only if, for any interpretation which assigns the same sentences to the same atomic formulas in $\alpha$ and in $\beta$, these formulas have the same logical values. \end{enumerate} We shall also formulate, without proof, a lemma which characterizes the concepts defined above. \vskip 1em {\bf Lemma 1} \begin{enumerate} \item A formula $\alpha$ is a tautology if and only if the formula {\bt not} $\alpha$ is contradictory. \item A formula $\alpha$ is contradictory if and only if every formula follows from $\alpha$. \item If the formulae $\alpha(\beta)$ and $\alpha(\gamma)$ are obtained from the formula $\alpha(p_{1})$ by the substitution for the atomic formula $p_{1}$ of $\beta$ and $\gamma$, respectively, and if the formulae $\beta$ and $\gamma$ are equivalent, then the formulas $\alpha(\beta)$ and $\alpha(\gamma)$ are equivalent, too. \end{enumerate} We shall hereafter use the style of notation of the language Mizar-MSE, omitting brackets (where they are superfluous) and writing formulae which are somewhat ambiguous (from the point of view of the definition adopted in this Appendix), such as $\gamma_{1}$ {\bt or} $\gamma_{2}$ {\bt or} ... {\bt or} $\gamma_{n}$. Since for any grouping of the formulas $\gamma_{1}$, $\gamma_{2}$, ... , $\gamma_{n}$ we obtain equivalent formulae, such notation should not pose any problems.
But it must be borne in mind that, in accordance with the definition adopted in this Appendix, such a notation denotes many different formulas, and hence where reference is made to the identity of formulae and not to their equivalence, such a notation cannot be used. On the other hand, the analogous notation in the language Mizar-MSE is in agreement with the definition of that language. \vskip 1em {\Large\bf The 0--1 Algorithm for Checking Tautologies} At the outset we illustrated by examples the so-called 0-1 algorithm for deciding whether a given sentential formula is a tautology. At this point we shall describe it in greater detail. At the input we have a sentential formula $\alpha$ all of whose atomic subformulae are among $p_{1}$, ... , $p_{n}$. We now write out all subformulae of the formula \mbox{$\alpha$: $\beta_{1}$, ... ,$\beta_{k}$.} Then we form the following table: \vskip 1em \nopagebreak \begin{tabular}{c|c|c|c|c||c|c|c|c} $p_{1}$ & $p_{2}$ & $\ldots$ & $p_{n-1}$ & $p_{n}$ & $\beta_{1}$ & $\ldots$ & $\beta_{k}$ & $\alpha$ \\ \cline{1-5} 1 & 1 & $\ldots$ & 1 & 1 & & & & \\ 1 & 1 & $\ldots$ & 1 & 0 & & & & \\ 1 & 1 & $\ldots$ & 0 & 1 & & & & \\ & & $\vdots$ & & & & & & \\ 0 & 0 & $\ldots$ & 0 & 1 & & & & \\ 0 & 0 & $\ldots$ & 0 & 0 & & & & \\ \end{tabular} \vskip 1em in which in the first {\em n} columns we write out, line after line, all possible assignments of the values 0 and 1 to the formulae $p_{1}$ , ... , $p_{n}$. Each such assignment is termed a valuation. It can easily be seen that for {\em n} atomic formulas the number of possible valuations is 2$^{n}$. Next, in the successive {\em k} columns we write out, in each line, in the (n+j)-th column the value 0 or 1 taken by the formula $\beta_{j}$ under the valuation defined by the values written out in the first {\em n} columns of that line.
The columns are filled in such an order that before the column corresponding to a given formula is filled, all the columns corresponding to its subformulae have already been filled; that can easily be achieved by starting with the shortest formulae. Finally we check whether the last column is filled with 1's only. If so, we state that the formula $\alpha$ is a tautology; if not, we state that $\alpha$ is not a tautology. The algorithm described above does not look too complicated, and hence it might seem that nothing is simpler than to write a computer program which would do that relatively simple work for us. Unfortunately, even if we assume that one line of the table can be filled very quickly, we face difficulties already in the case of formulae containing relatively few atomic subformulas. For instance, if the formula $\alpha$ has 60 atomic subformulas and the writing out of a single line takes only 0.000001 seconds, the writing out of the entire table (all $2^{60}$ lines) has to take more than 365 centuries. \vskip 1em {\Large\bf The Algorithm of Reduction to the}\\ {\Large\bf Disjunctive-Conjunctive Normal Form} We say that a sentential formula $\alpha$ is in the disjunctive-conjunctive normal form if and only if it is of the form $\gamma_{1}$ {\bt or} $\gamma_{2}$ {\bt or} ... {\bt or} $\gamma_{m}$, where $\gamma_{i}$ is a conjunction of the form $\gamma_{i1}$ {\bt \&} $\gamma_{i2}$ {\bt \&} ... {\bt \&} $\gamma_{ik_{i}}$ for i = 1, 2, ... , m, and every formula $\gamma_{ij}$ is either an atomic formula or the negation of an atomic formula. Formulae in the disjunctive-conjunctive normal form are particularly interesting because it is exceptionally easy to find whether a formula in such a form is contradictory or not. We simply check whether in every conjunction of the form $\gamma_{i1}$ {\bt \&} $\gamma_{i2}$ {\bt \&} ... {\bt \&} $\gamma_{ik_{i}}$ we can find two formulae $\gamma_{ij}$ and $\gamma_{il}$, such that one of them is the negation of the other. 
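Both tests discussed so far, the 0--1 method and the complementary-pair check for formulae in the normal form, can be sketched in Python. The representation of formulas as nested tuples below is our own assumption, made only for the sake of illustration; it is not part of the language Mizar-MSE.

```python
from itertools import product

# An ad hoc representation of sentential formulas as nested tuples:
# an atomic formula is a string such as 'p1'; compound formulas are
# ('not', f), ('and', f, g), ('or', f, g), ('implies', f, g), ('iff', f, g).

def atoms(f):
    """The set of atomic formulas occurring in f."""
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(g) for g in f[1:]))

def value(f, v):
    """The logical value of f under the valuation v (a dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    op, args = f[0], [value(g, v) for g in f[1:]]
    if op == 'not':
        return not args[0]
    if op == 'and':
        return args[0] and args[1]
    if op == 'or':
        return args[0] or args[1]
    if op == 'implies':
        return (not args[0]) or args[1]
    return args[0] == args[1]          # 'iff'

def is_tautology(f):
    """The 0-1 method: f must be true under all 2**n valuations."""
    ps = sorted(atoms(f))
    return all(value(f, dict(zip(ps, bits)))
               for bits in product([False, True], repeat=len(ps)))

def dnf_contradictory(disjuncts):
    """Contradictoriness test for a formula in the normal form, given as
    a list of conjunctions, each a list of literals ('p' or ('not', 'p')):
    every conjunction must contain a complementary pair of literals."""
    return all(any(('not', lit) in conj for lit in conj if isinstance(lit, str))
               for conj in disjuncts)
```

For instance, {\tt is\_tautology} applied to the representation of $p$ {\bt or} {\bt not} $p$ yields {\tt True}; the loop over all $2^{n}$ valuations is exactly the source of the running-time difficulties discussed above, while the normal-form test inspects each conjunction only once.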
That results from the following two lemmas. \vskip 1em {\bf Lemma 2} If a formula $\alpha$ is of the form $\gamma_{1}$ {\bt or} $\gamma_{2}$ {\bt or} ... {\bt or} $\gamma_{m}$, then $\alpha$ is contradictory if and only if $\gamma_{i}$ is contradictory for every i = 1, ... , m. \vskip 1em {\bf Proof} Let $\alpha$ be of the form $\gamma_{1}$ {\bt or} $\gamma_{2}$ {\bt or} ... {\bt or} $\gamma_{m}$. Assume that the formula $\alpha$ is contradictory, which is to say that every formula $\beta$ follows from $\alpha$. Moreover, for any i = 1, ... , m, $\alpha$ follows from $\gamma_{i}$ (because the formula \mbox{$\gamma_{i}$ {\bt implies} $(\gamma_{1}$ {\bt or} $\gamma_{2}$ {\bt or} ... {\bt or} $\gamma_{m})$} is a tautology). Hence for every formula $\beta$ and for any i = 1, ... , m, $\beta$ follows from $\gamma_{i}$, which proves that $\gamma_{i}$ is contradictory for any i = 1, ... , m. Assume now that the formula $\gamma_{i}$ is contradictory for any i = 1, ... , m. Then by making use of reasoning by cases we infer that each formula follows from the formula $\alpha$. {\bf Q.E.D.} \vskip 2em {\bf Lemma 3} If $\gamma$ is a conjunction $\gamma_{1}$ {\bt \&} $\gamma_{2}$ {\bt \&} ... {\bt \&} $\gamma_{k}$ such that every formula $\gamma_{i}$ is either an atomic formula or the negation of an atomic formula, then $\gamma$ is contradictory if and only if there are in that conjunction two formulas $\gamma_{i}$ and $\gamma_{j}$ such that one of them is the negation of the other. \vskip 1em {\bf Proof} If there is no such pair of formulae, then there are atomic formulae \mbox{$\gamma_{1}'$, $\gamma_{2}'$, ... , $\gamma_{k}'$} such that $\gamma_{i}$ is identical either with $\gamma_{i}'$ or with {\bt not} $\gamma_{i}'$, \mbox{for i = 1, 2, ... , k.} Let us select two sentences, one true $\alpha$ and one false $\beta$. 
The formula $\gamma'$ is obtained from the formula $\gamma$ by substituting $\alpha$ for $\gamma_{i}'$ if $\gamma_{i}$ is identical with $\gamma_{i}'$, and by substituting $\beta$ for $\gamma_{i}'$ if $\gamma_{i}$ is identical with {\bt not} $\gamma_{i}'$. It can easily be seen that the sentence $\gamma'$ obtained in this way is true and at the same time is an interpretation of the formula $\gamma$; hence $\gamma$ is not contradictory. If, on the other hand, such a pair of formulae does occur in the conjunction, then under every interpretation one of those two conjuncts is false, and hence $\gamma$ is contradictory. {\bf Q.E.D.} \vskip 1em If it is so easy to find whether a given formula in the normal form is contradictory or not, then the following questions arise: \begin{enumerate} \item Is every sentential formula equivalent to a certain formula in the normal form? \item How are we to construct, for a given formula, an equivalent formula in the normal form? \end{enumerate} The following theorem answers the first question: {\bf Theorem 1.} For any sentential formula there is an equivalent formula in the disjunctive-conjunctive normal form. \vskip 1em {\bf Proof} Let $\alpha$ be a sentential formula and let $p_{1}$, ... , $p_{n}$ be all atomic formulae occurring in $\alpha$. We construct the truth table for $\alpha$ in the manner described in the opening part of this Appendix. Let k = $2^{n}$. We form formulae $\gamma_{1}$, ... , $\gamma_{k}$ in the following way: if in the i-th line in the column of the formula $\alpha$ (i.e., the last column) there is the value 1, then $\gamma_{i}$ is the conjunction $\gamma_{i1}$ {\bt \&} $\gamma_{i2}$ {\bt \&} ... {\bt \&} $\gamma_{in}$, where $\gamma_{ij}$ is either $p_{j}$ or {\bt not} $p_{j}$ according to whether in the i-th line in the j-th column there is 1 or 0. If in the i-th line in the column of the formula $\alpha$ there is the value 0, then $\gamma_{i}$ is {\bt contradiction}. Let $\beta$ be the formula \mbox{$\gamma_{1}$ {\bt or} ... {\bt or} $\gamma_{k}$.} Of course, $\beta$ is in the disjunctive-conjunctive normal form. 
To every line in the table there corresponds a certain interpretation, i.e., valuation, and every formula $\gamma_{i}$ other than {\bt contradiction} is true exactly for one such valuation, being false for any other valuation. Hence $\alpha$ and $\beta$ are true exactly for the same interpretations, which is to say that they are equivalent. {\bf Q.E.D.} \vskip 1em The proof of the above theorem immediately gives a certain answer to the second question: how to find a formula in the normal form equivalent to a given formula. Unfortunately, the algorithm described above, which translates formulae into their normal forms, is as bad as the 0--1 method of checking tautologies: to use it we would have to construct the truth table for the input formula, and that, as we know, might take too much time. We shall now describe a somewhat better algorithm for the reduction of formulae to their normal forms. But we shall begin with the description of the procedure of pushing negation down. \medskip We say that a formula $\alpha$ is in a reduced form if and only if $\alpha$ is constructed of atomic formulae and negations of atomic formulae by means of conjunction and disjunction only. \medskip {\bf Theorem 2.} For any sentential formula $\alpha$ there is a formula $\beta$ in a reduced form equivalent to $\alpha$. \vskip 1em {\bf Proof} Note first that we may assume that the connectives of implication and equivalence do not occur in $\alpha$. Equivalence is eliminated by the replacement of the occurrences of formulae of the form $(\gamma$ {\bt iff} $\delta)$ by formulae of the form $((\gamma$ {\bt implies} $\delta)$ {\bt \&} $(\delta$ {\bt implies} $\gamma))$. Implication is eliminated by the replacement of the occurrences of formulae of the form $(\gamma$ {\bt implies} $\delta)$ by formulae of the form $(${\bt not} $\gamma$ {\bt or} $\delta)$. The key problem consists in shifting negation as far down into the formula as possible, or, speaking more picturesquely, in sinking it. 
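Before turning to negation sinking, the two elimination steps just described can be sketched as a simple recursive rewriting. As in our other sketches, the representation of formulas as nested tuples is an assumption of ours, not part of Mizar-MSE.

```python
def eliminate(f):
    """Rewrite away equivalence and implication:
    (g iff d)     becomes ((g implies d) & (d implies g)), then recurse;
    (g implies d) becomes (not g or d)."""
    if isinstance(f, str):                         # atomic formula
        return f
    op = f[0]
    if op == 'not':
        return ('not', eliminate(f[1]))
    if op == 'iff':
        return eliminate(('and', ('implies', f[1], f[2]),
                                 ('implies', f[2], f[1])))
    if op == 'implies':
        return ('or', ('not', eliminate(f[1])), eliminate(f[2]))
    return (op, eliminate(f[1]), eliminate(f[2]))  # 'and' or 'or'
```

After this pass the formula contains only {\bt not}, {\bt \&} and {\bt or}, which is exactly the assumption made in the rest of the proof.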
We shall now describe the procedure of negation sinking. \vskip 1em {\Large\bf Procedure of Negation Sinking} If $\alpha$ is an atomic formula or the negation of an atomic formula, then $\alpha$ is the result of negation sinking in the formula $\alpha$. If $\alpha$ is of the form $(\gamma$ {\bt or} $\delta)$ or $(\gamma$ {\bt \&} $\delta)$, and $\gamma'$ is the result of negation sinking in $\gamma$ and $\delta'$ is the result of negation sinking in $\delta$, then $(\gamma'$ {\bt or} $\delta')$ or, correspondingly, $(\gamma'$ {\bt \&} $\delta')$ is the result of negation sinking in $\alpha$. If $\alpha$ is of the form {\bt not} $\alpha'$, then\\ \hspace*{3em} either $\alpha'$ is an atomic formula (that case has been analysed above), \\ \hspace*{3em} or $\alpha'$ is of the form {\bt not} $\gamma$, and $\gamma'$ is the result of negation sinking in $\gamma$, and then $\gamma'$ is the result of negation sinking in $\alpha$, \\ \hspace*{3em} or else $\alpha'$ is of the form $(\gamma$ {\bt or} $\delta)$ or $(\gamma$ {\bt \&} $\delta)$, and $\gamma'$ is the result of negation sinking in {\bt not} $\gamma$, and $\delta'$ is the result of negation sinking in {\bt not} $\delta$, and then $(\gamma'$ {\bt \&} $\delta')$ or, correspondingly, $(\gamma'$ {\bt or} $\delta')$ is the result of negation sinking in $\alpha$.\\ The algorithm described above reduces the problem of finding the reduced form for any formula constructed of atomic formulae by negation, conjunction, and disjunction, to the same problem for simpler formulas. Algorithms of this kind are termed recursive. To bring this proof to conclusion we still have to demonstrate that for any formulae $\alpha$ and $\beta$: if $\beta$ is the result of negation sinking in $\alpha$, then $\beta$ is in the reduced form and $\alpha$ and $\beta$ are equivalent. 
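The procedure of negation sinking admits a direct recursive sketch. The tuple representation of formulas below is our own assumption; implication and equivalence are taken to have been eliminated beforehand, as in the proof above.

```python
def sink(f):
    """Push negation down to the atomic formulas.
    Formulas: 'p', ('not', f), ('and', f, g), ('or', f, g);
    'implies' and 'iff' are assumed to be already eliminated."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op in ('and', 'or'):
        return (op, sink(f[1]), sink(f[2]))
    # op == 'not'
    g = f[1]
    if isinstance(g, str):             # negation of an atomic formula
        return f
    if g[0] == 'not':                  # double negation: not not g1 -> sink(g1)
        return sink(g[1])
    # De Morgan: not (a or b) -> (not a & not b);
    #            not (a & b)  -> (not a or not b)
    dual = 'and' if g[0] == 'or' else 'or'
    return (dual, sink(('not', g[1])), sink(('not', g[2])))
```

The three branches of the {\tt 'not'} case correspond exactly to the three clauses of the procedure above.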
Let P be a property such that the formula $\alpha$ has the property P if and only if for any formula $\beta$ which is the result of negation sinking in $\alpha$, $\alpha$ and $\beta$ are equivalent. We shall show that every formula in which neither equivalence nor implication occurs has the property P. Assume that every formula shorter than $\alpha$ has the property P. We shall show that under that assumption $\alpha$, too, has the property P. If $\alpha$ is an atomic formula, then $\alpha$ is the result of negation sinking in $\alpha$, $\alpha$ is equivalent to $\alpha$, and hence $\alpha$ has the property P. If $\alpha$ is of the form $(\beta$ {\bt \&} $\gamma)$ or $(\beta$ {\bt or} $\gamma)$, then $\beta$ and $\gamma$ are shorter than $\alpha$ and as such have the property P. Let $\beta'$ and $\gamma'$ be the results of negation sinking in $\beta$ and $\gamma$. Since $(\beta'$ {\bt \&} $\gamma')$ and $(\beta'$ {\bt or} $\gamma')$ are the results of negation sinking in $(\beta$ {\bt \&} $\gamma)$ and $(\beta$ {\bt or} $\gamma)$, respectively, both $(\beta$ {\bt \&} $\gamma)$ and $(\beta$ {\bt or} $\gamma)$ have the property P. If $\alpha$ is of the form {\bt not} $\beta$, then $\beta$ is shorter than $\alpha$, has the property P, and neither equivalence nor implication occurs in $\beta$. If $\beta$ is an atomic formula, then $\alpha$ itself is the result of negation sinking in $\alpha$, and hence $\alpha$ has the property P. If $\beta$ is of the form {\bt not} $\gamma$, then the result of negation sinking in $\alpha$ is the result $\gamma'$ of negation sinking in $\gamma$; the formula $\gamma$ is shorter than $\alpha$, so it has the property P and is equivalent to $\gamma'$, and since $\alpha$ is equivalent to $\gamma$, the formula $\alpha$ has the property P. Otherwise one of the following cases holds: \begin{enumerate} \item $\beta$ is of the form $(\gamma$ {\bt \&} $\delta)$, or \item $\beta$ is of the form $(\gamma$ {\bt or} $\delta)$. \end{enumerate} These cases are symmetrical and hence it suffices to analyse only the first of the two. Since {\bt not} $\gamma$ and {\bt not} $\delta$ are shorter than $\alpha$, they have the property P. Let then $\gamma'$ and $\delta'$ be the results of negation sinking in {\bt not} $\gamma$ and {\bt not} $\delta$. 
Negation sinking in $\alpha$ yields \mbox{$(\gamma'$ {\bt or} $\delta')$;} at the same time $\gamma$ and {\bt not} $\gamma'$ are equivalent, and $\delta$ and {\bt not} $\delta'$ are equivalent, so that $\alpha$ is equivalent to {\bt not} $(${\bt not} $\gamma'$ {\bt \&} {\bt not} $\delta')$, and that formula in turn is equivalent to $(\gamma'$ {\bt or} $\delta')$. Hence the formula $\alpha$ has the property P. We have thus demonstrated that every formula in which neither equivalence nor implication occurs has the property P. The proof of the fact that the result of negation sinking is in a reduced form is left to the ingenuity of the Reader. {\bf Q.E.D.} \vskip 2em {\bf Lemma 4} For any sentential formulae $\alpha, \beta_{1}, ... , \beta_{n}$ (n $>$ 1) the following formulas are mutually equivalent: \begin{center} $\alpha$ {\bt \&} $(\beta_{1}$ {\bt or} ... {\bt or} $\beta_{n})$, \\ $(\beta_{1}$ {\bt or} ... {\bt or} $\beta_{n})$ {\bt \&} $\alpha$, \\ $((\alpha$ {\bt \&} $\beta_{1}$) {\bt or} ... {\bt or} $(\alpha$ {\bt \&} $\beta_{n}))$. \end{center} \newpage {\bf Proof} The equivalence of the first two formulas can be checked directly, hence it suffices to demonstrate the equivalence of the first and the third formula. This lemma will be proved by induction on n. For n = 2 the thesis follows from the equivalence of the formulae $\alpha$ {\bt \&} $(\beta$ {\bt or} $\gamma)$ and $(\alpha$ {\bt \&} $\beta)$ {\bt or} $(\alpha$ {\bt \&} $\gamma)$, an equivalence which can be checked directly. Assume that the thesis is true for n-1, so that the formulae \begin{center} $\alpha$ {\bt \&} $(\beta_{1}$ {\bt or} ... {\bt or} $\beta_{n-1})$, \\ $(\alpha$ {\bt \&} $\beta_{1})$ {\bt or} ... {\bt or} $(\alpha$ {\bt \&} $\beta_{n-1})$ \\ \end{center} are equivalent. Hence the following formulae are equivalent, too: \begin{center} $\alpha$ {\bt \&} $(\beta_{1}$ {\bt or} ... {\bt or} $\beta_{n})$, \\ $\alpha$ {\bt \&} $((\beta_{1}$ {\bt or} ... {\bt or} $\beta_{n-1})$ {\bt or} $\beta_{n})$, \\ $(\alpha$ {\bt \&} $(\beta_{1}$ {\bt or} ... {\bt or} $\beta_{n-1}))$ {\bt or} $(\alpha$ {\bt \&} $\beta_{n})$, \\ $((\alpha$ {\bt \&} $\beta_{1})$ {\bt or} ... {\bt or} $(\alpha$ {\bt \&} $\beta_{n-1}))$ {\bt or} $(\alpha$ {\bt \&} $\beta_{n})$, \\ $(\alpha$ {\bt \&} $\beta_{1})$ {\bt or} ... {\bt or} $(\alpha$ {\bt \&} $\beta_{n})$. \\ \end{center} {\bf Q.E.D.} \vskip 1em We shall now describe our improved algorithm of reducing sentential formulae to the disjunctive-conjunctive normal form. Note that, on the strength of the previous theorems, we may assume that the input formulae of the algorithm are in a reduced form. \vskip 1em {\Large\bf A Somewhat Better Algorithm of Reducing}\\ {\Large\bf Formulae to the Disjunctive-Conjunctive}\\ {\Large\bf Normal Form} If the formula $\alpha$ is an atomic formula or the negation of an atomic formula, then $\alpha$ is the normal form of the formula $\alpha$. If $\alpha$ is of the form $(\beta$ {\bt or} $\gamma)$, and $\beta'$ is the normal form of $\beta$, and $\gamma'$ is the normal form of $\gamma$, then $(\beta'$ {\bt or} $\gamma')$ is the normal form of $\alpha$. If $\alpha$ is of the form $(\beta$ {\bt \&} $\gamma)$ and $\gamma_{1}$ {\bt or} ... {\bt or} $\gamma_{n}$ (n $>$ 1) is the normal form of $\gamma$, and the formulae $\beta$ {\bt \&} $\gamma_{1}$, ... , $\beta$ {\bt \&} $\gamma_{n}$ have their normal forms, respectively, in $\delta_{1}$, ... , $\delta_{n}$, then $\delta_{1}$ {\bt or} ... {\bt or} $\delta_{n}$ is the normal form of $\alpha$. If no such n $>$ 1 exists, this means that the connective of disjunction does not occur in $\gamma$. In such a case we exchange the positions of $\beta$ and $\gamma$; if the connective of disjunction does not occur in $\beta$ either, then we assume that $\alpha$ itself is the normal form of $\alpha$. 
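The distribution step of the algorithm, justified by the distributivity lemma above, can be sketched over a flattened representation in which a normal form is a list of conjunctions of literals rather than a single nested formula. This representation, like the tuple encoding of formulas, is our own simplification for the sake of illustration.

```python
def dnf(f):
    """Reduce a formula in the reduced form to a list of conjunctions of
    literals, i.e. the disjunctive-conjunctive normal form.
    Literals: 'p' or ('not', 'p'); connectives: ('and', f, g), ('or', f, g)."""
    if isinstance(f, str) or f[0] == 'not':
        return [[f]]                     # a single one-literal conjunction
    if f[0] == 'or':
        return dnf(f[1]) + dnf(f[2])     # the union of the disjuncts
    # f[0] == 'and': distribute conjunction over the disjuncts
    return [c1 + c2 for c1 in dnf(f[1]) for c2 in dnf(f[2])]
```

For example, $p$ {\bt \&} $(q$ {\bt or} $r)$ is turned into the two conjunctions $p$ {\bt \&} $q$ and $p$ {\bt \&} $r$, exactly as in the algorithm's third clause.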
If $\alpha$ is the input formula of the above algorithm and $\beta$ is its output formula, then we have to demonstrate that $\alpha$ and $\beta$ are equivalent. For that purpose we refer to Lemma 4. \end{document}