Mathesis Universalis     No.8 - Autumn 1998
When using any part of this text by Witold Marciszewski, please refer to the URL listed at the bottom

A Debate on Strong AI
critically examined

This paper discusses the main issue of Mind versus Computer. The book deserves credit for sustaining such a common theme in spite of its many subjects and contributors.

This main problem is hinted at by the subtitle Were Dreyfus and Winograd right? Since both Dreyfus and Winograd argued against strong AI in favour of a weaker version, this question turns out to be equivalent to the question: Is strong AI right? The answers can be divided into the following classes.

Class (c) contains a variety of views which agree that there is a crisis in strong AI but that it can be overcome through a relevant research project. Thus, with such a hope, they indirectly support the strong AI doctrine. There is no chance to discuss all of them in a reasonable time, hence I suggest focusing attention on those contributions which attack the main issue directly.


1.1. The paper which is especially representative and, moreover, worth attention because of its historical introduction, is due to M. Gams: "Is Weak AI Stronger than Strong AI?".

The paper starts with a useful short overview of the main facts in the history of AI. It begins in 1955, and the process reported is divided into recurring stages of enthusiasm and scepticism. The story is more concerned with some renowned great projects, their costs and their commercial results, than with the development of ideas. Were the latter intended, the narration would begin in 1948 - the year in which Alan Turing wrote his technical paper "Intelligent Machinery", containing all his seminal ideas (some of them, such as the so-called Turing test, being repeated in "Computing Machinery and Intelligence", Mind, October 1950).

As to the last period discussed by Gams, the one beginning in 1995, there is a remark in his text which requires a comment. The Author writes as follows.

"At the same time, bold new ideas were emerging, challenging the fundamentals of computer science as well as science in general - the Turing machine paradigm, Gödel's theorem and Church's thesis."

Obviously (as the Author himself recalls in other places), those ideas - new and bold indeed - emerged sixty years earlier. If the Author mentions them when reporting on the period after 1995, he presumably claims that they were revived in that period and that this revival contributed something essential to AI development at the time. Unfortunately, no more is said than what appears in the passage quoted.

Furthermore, one may wonder why those ideas are said to challenge "the fundamentals of computer science as well as science in general". In fact, when they emerged, from 1931 to 1936, there was no computer science to be challenged. On the contrary, it was Turing's idea of the universal digital machine that initiated the fundamentals of computer science.

As to challenging science in general, there is a bit of exaggeration in such a claim. The challenge was addressed to Hilbert's Programme for mathematics (alone), and was not very dramatic, for Hilbert himself perfectly absorbed, e.g., Gödel's results and made fruitful use of them in the 2nd volume (1939) of his (with P. Bernays) Grundlagen der Mathematik.

1.2. Let me quote another passage worth considering, which appears in the context of discussing the Fifth Generation Computer Systems Project.

"The most crucial question posed is: is logic appropriate for real-life tasks? Obviously, it has several advantages, among them a very strict formal basis, and great expressive power. However, while it may be suitable for computers and formalists, it may not be so for humans and intelligent systems in general. [...] The logical approach effectively assumes that AI is a subset of logic and that intelligence and life can be captured in a global and consistent logical form."

Here the Author indeed touches on a crucial question. A deep approach to it can be found in John von Neumann's The Computer and the Brain (1st edition 1957; p. 81 of the 1979 edition).

"It is only proper to realize that language is largely a historical accident. The basic human languages are traditionally transmitted to us in various forms, but their very multiplicity proves that there is nothing absolute and necessary about them. Just as languages like Greek or Sanskrit are historical facts and not absolute logical necessities, it is only reasonable to assume that logic and mathematics are similarly historical, accidental forms of expression. They may have essential variants, i.e., they may exist in other forms than the ones to which we are accustomed. Indeed, the nature of the central nervous system and of the message systems that it transmits indicate positively that this is so. [...] Thus logic and mathematics in the central nervous system, when viewed as languages, must structurally be essentially different from those languages to which our common experience refers." (In the volume discussed, an approach like that of von Neumann is found in M. F. Peschl's paper "Why Philosophy? On the Importance of Knowledge Representation [...]".)

Now, when speaking of logic as a basis for artificial intelligence, does one think of those historical forms of expression, or of the logic in the central nervous system viewed as a language? In the FGCS Project, people meant a logical theory as produced in a historical process, but the failure of that attempt does not imply that another logic, the one found in the brain, would not succeed. Here two problems arise.

First, there is a great problem for empirical research in biology: to discover that inner logic of the brain, and to represent it in a theory which would be both logical and biological.

Second, provided that such a brain-logical code is discovered, together with a logical system recorded in it, there arises the intriguing question: how great would the cognitive power of that logical system be? The term "cognitive power" stands for such properties of a system as completeness, decidability, etc. The question is fascinating because we know the cognitive power of the logic which is obeyed by the universal Turing machine; and that machine, in turn, is equivalent to the digital computer. Thus, the main issue in the AI debate, that of whether the digital computer can match the human brain, could be expressed in precise metamathematical terms. As for the logic of the Turing machine, it amounts to quantification logic (also called the functional calculus). This is how Turing himself renders it.

"I propose to show that there can be no general process for determining whether a given formula A of the functional calculus is provable, i.e. that there can be no machine which, supplied with any one A of these formulae, will eventually say whether A is provable." See Section 11 in A. M. Turing's "On Computable Numbers, with an Application to the Entscheidungsproblem", Proc. London Math. Soc., ser. 2, vol. 42, pp. 230-265.

Turing did show what he intended. Thus, like Gödel and Church, he discovered limitations of the mechanical procedures of discrete-state machines, hence limitations of digital computers. Namely, the cognitive power of the logic of digital machines is defined by the fact that this logic is complete (as previously shown by Gödel) while being at the same time undecidable (as shown by Turing and by Church).
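This combination - completeness with undecidability - can be illustrated by a sketch. By Gödel's completeness theorem, a valid formula of quantification logic has a proof somewhere in an exhaustive enumeration, so a machine searching through that enumeration halts on every provable formula; by the Church-Turing result, no computable bound on the search exists, so on an unprovable formula the machine may run forever. The Python sketch below is an illustrative analogy, not a theorem prover: a search for a numerical witness stands in for the search for a proof.

```python
from itertools import count

def semi_decide(has_witness, max_steps=None):
    """Enumerate candidates 0, 1, 2, ...; return the first witness found.

    This mirrors proof search in quantification logic: completeness
    guarantees a halt whenever a witness (a proof) exists, while
    undecidability means that, in general, no cutoff can be computed
    in advance - without the artificial max_steps bound, the search
    may never terminate on a negative instance.
    """
    steps = count() if max_steps is None else range(max_steps)
    for n in steps:
        if has_witness(n):
            return n          # halts: a witness ("proof") was found
    return None               # reachable only with the artificial cutoff

# A positive instance: some n solves n**2 - n - 6 = 0.
print(semi_decide(lambda n: n * n - n - 6 == 0))   # finds n = 3

# A negative instance can only be "settled" by giving up after a bound.
print(semi_decide(lambda n: False, max_steps=1000))
```

The asymmetry between the two calls is the whole point: the first halts by itself, the second only because we imposed a bound that the logic itself cannot supply.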

In this light let us consider the objection (approvingly quoted by Gams) against the claim "that intelligence and life can be captured in a global and consistent logical form." Which logical form is meant in this context? If that of quantification logic, then the view that intelligence and life cannot be so captured may prove compatible with that of von Neumann that the logic of organic life, as encoded in a nervous system, should be different from the historically evolved quantification logic.

Now, what should we think about the cognitive power of such a biological logic? Would it be greater than that of the digital computer? Should it be greater, then digital computers, in this respect, would not match brains - contrary to the strong AI claim.


2.1. Strong AI is most decidedly defended by Ben Goertzel in his contribution "Self and Self-Organisation in Complex AI Systems".

Goertzel briefly states "strong AI is possible", and argues as follows.

As the Author precedes the above argument with the remark "the argument is simple one", the reader can confirm that it is extremely simple indeed. In its second premise it represents pure reductionist orthodoxy, without yielding to any temptation posed by a holistic approach. However, let us ask the following questions.

  1. Does software also belong to the category of systems?
  2. Is software necessary to explain the functioning of a system consisting of hardware and software, say, a computer?
  3. Does any software belong to the systems governed by the equations of physics?

If questions 1 and 2, the latter expressing a holistic point, are answered in the affirmative, while question 3 is answered with 'no', then it will not be the case that the laws of physics are sufficient to explain the functioning of humans as systems consisting of software and hardware. To continue this argument, one would have to know how the Author would settle the above questions; in any case, they seem to open a new path in the discussion.

2.2. The next intriguing point is concerned with the phrase "to approximate, to within any degree of accuracy".

This postulate presupposes that the brain is a discrete-state machine (i.e. not a continuous-state machine), that there are in the brain discrete symbols (as on the tape of a Turing machine) to be processed by programs, and that no other kind of information processing can appear in the brain. The latter claim, characteristic of the symbolic school, is challenged both by the connectionist school (see, e.g., D. Michie's contribution to the volume) and by those who stress the role of analog processing (as R. S. Stufflebeam does in the same volume). Let it be noted that analog processes must be expressed in real numbers, irrational ones included (unlike those of a Turing machine, expressible in natural numbers alone); this, however, could be reconciled with Goertzel's view, which does not demand complete accuracy in rendering physical processes in the machine, but only the possibility of any desired approximation.
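For a single analog quantity, this weaker requirement is indeed easy to satisfy, and a sketch may make the point concrete. The following Python fragment (an illustration added here, not part of Goertzel's text) brackets the irrational number sqrt(2) by bisection: the machine never holds the number exactly, yet it reaches any requested accuracy.

```python
def approximate_sqrt2(eps):
    """Bisection: return a rational approximation within eps of sqrt(2).

    A discrete-state machine cannot represent the irrational sqrt(2)
    exactly, but by repeatedly halving the bracket [lo, hi] it can
    approach it to within any desired tolerance eps > 0.
    """
    lo, hi = 1.0, 2.0                 # sqrt(2) lies between 1 and 2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid                  # sqrt(2) is in the upper half
        else:
            hi = mid                  # sqrt(2) is in the lower half
    return (lo + hi) / 2

for eps in (1e-2, 1e-6, 1e-12):
    x = approximate_sqrt2(eps)
    print(f"eps = {eps:.0e}   approximation = {x}")
```

The open question raised below, however, is whether this guarantee for a single static value carries over to whole dynamical processes.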

Let us examine this approximation claim, as it is vital for Goertzel's argument. Should we believe that there is no technological limit to reaching ever greater degrees of accuracy? Two problems are to be attacked to assess such a belief: that of computer technology and that of physical experiment technology. As for the former (for the latter, see Section 2.3), there is an instructive discussion concerning chaotic systems, say, the weather - the classical case with which the story of chaos started, owing to the insufficiency of computational technology as experienced by Edward Lorenz at M.I.T. (and it was von Neumann who recognized that weather modelling could be an ideal task for a computer).

Lorenz came to an understanding which provides a useful analogy with the brain's creativity. To wit, he saw the famous Butterfly Effect, that is, the unpredictable dependence on initial conditions, as producing the rich repertoire of earthly weather. That beautiful multiplicity - let us notice - may be by far surpassed by the multiplicity of factors necessary for a brain to produce a creative thought.

What is most important for the further discussion is to realize the following. In systems like the weather and, quite possibly, the brain, sensitive dependence on initial conditions is an inescapable consequence of the way small scales intertwine with large - a notion whose importance deserves emphasis.
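This sensitivity can be demonstrated with Lorenz's own equations. In the sketch below (a simple Euler integration chosen for brevity rather than numerical rigour; the parameter values are the standard textbook ones), two trajectories whose initial conditions differ by one part in a billion drift apart by many orders of magnitude within a few dozen time units - exactly the mechanism that defeats approximation "to within any degree of accuracy" for such systems.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system with standard parameters."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)        # perturbed by one part in a billion
for step in range(1, 10001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        # Euclidean distance between the two trajectories
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.005:5.1f}   separation = {sep:.3e}")
```

The printed separations grow roughly exponentially until they saturate at the size of the attractor itself; after that point the perturbed trajectory carries no usable information about the unperturbed one.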

2.3. Thus there emerges one of the greatest problems of AI research: to find out the smallest scales relevant to brain activity. Here one has to resort to physics, in accordance with Goertzel's approach.

In Goertzel's argument the key role is played by the notion of the laws (equations) of physics. This argument should be completed by the Author's explanation of which level of physical reality the laws in question concern. Should we take into account the quantum level? If so, are we sure that at that level nature behaves like a Turing machine? I do not insist that we should follow Roger Penrose in answering in the negative, but the answer in the affirmative does not seem certain either.

Moreover, it is not necessarily so that the level of quanta constitutes the last stratum of reality governed by physical laws. Let me recall that among the most influential interpretations of quantum theory there is the one developed by David Bohm and anticipated by Louis de Broglie, according to which there have to be ever deeper levels governed by ever new kinds of physical laws. Here is de Broglie's approving account of that approach; it prefaces D. Bohm's book Causality and Chance in Modern Physics (London, 1957).

"Theoretical physics will always lead to the discovery of deeper and deeper levels of the physical world, and this process will continue without any limit; quantum physics has no right to consider its present concepts definitive, and it cannot stop researchers imagining deeper domains of reality than those which it has already explored."

However, there should be a technological limit to such exploration, as explained by Stephen Hawking when considering the same possibility of reaching ever deeper levels of the complexity of matter. Here is a statement made in his A Brief History of Time (London, 1992, p. 66).

"More recently, we have learned how to use electromagnetic fields to give particles energies of at first millions and then thousands of millions of electron volts. And so we know that particles that were thought 'elementary' twenty years ago are, in fact, made up of smaller particles. May these, as we go to still higher energies, in turn be found to be made from still smaller particles?"

An answer in the affirmative, considered by Hawking as a possibility, is regarded by de Broglie and Bohm as most likely; however, such nuances can be disregarded in the present context. What matters here is the lack of certainty that the laws of physics which pertain to brain functioning do not go beyond a definite level of physical complexity. Should someone assert that he has such certainty, then it is up to him to define the lowest level involved and to defend his answer.

The moral concluding this discussion amounts to advising a prudent reserve, called docta ignorantia. We already know enough - about the brain, physics, and the Turing machine - to be aware of how much we still have to learn.