Mathesis Universalis     No.8 - Autumn 1998

A.1. Strong AI: an Adolescent Disorder

Donald Michie, Professor Emeritus, University of Edinburgh, UK, Associate Member, Josef Stefan Institute, Ljubljana, Slovenia

Keywords: strong and weak AI, Turing's test, middle-ground

Abstract: Philosophers have distinguished two attitudes to the mechanization of thought. "Strong AI'' says that, given a sufficiency of well-chosen axioms and deduction procedures, we have all we need to program computers to out-think humans. "Weak AI'' says that humans don't think in logical deductions anyway. So why not instead devote ourselves to (1) neural nets, or (2) ultra-parallelism, or (3) other ways of dispensing with symbolic domain-models?

A.2. AI Progress, Massive Parallelism and Humility

J. Geller, Department of Computer and Information Sciences, New Jersey Institute of Technology, Newark, NJ 07102

Keywords: massive parallelism, knowledge representation, Connection Machine

Abstract: The author outlines a view of AI between the extremes of "everything is just fine in AI" and "AI is hopeless, we might as well give up". The author's own approach to AI, based on the combination of Knowledge Representation with Massively Parallel hardware, is presented. The conclusion is that Massive Parallelism might be helpful for the development of AI. However, the large investments necessary for the development of Massive Parallelism itself will require the determined involvement of one or even several cooperating governments.

A.3. Self and Self-Organization in Complex AI Systems

Ben Goertzel, Comoc Communications, Inc., New York, USA

Keywords: artificial intelligence, complex systems, self, psynet model

Abstract: In order to make strong AI a reality, formal logic and formal neural network theory must be abandoned in favor of complex systems science. The focus must be placed on large-scale emergent structures and dynamics. Creative intelligence is possible in a computer program, but only if the program is devised in such a way as to allow the spontaneous organization and emergence of "self- and reality-theories." In order to obtain such a program it may be necessary to program whole populations of interacting, "artificially intersubjective" AI programs.

A.4. Is Weak AI Stronger than Strong AI?

Matjaz Gams, Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia

Keywords: strong and weak AI, principle of multiple knowledge, Church's thesis, Turing machines

Abstract: A short overview of the strong-weak AI opposition is presented. Strong AI is refuted by several arguments, such as the empirical lack of intelligence in even the fastest and most complex computers. Weak AI rejects the old formalistic approach based only on computational models and endorses ideas in several directions, from neuroscience to philosophy and physics. The proposed line distinguishing strong from weak AI is set by the principle of multiple knowledge, which declares that single-model systems cannot achieve intelligence. Weak AI reevaluates and upgrades several foundations of AI and of computer science in general: Church's thesis and Turing machines. In the long term, weak AI is thus more promising than strong AI.

A.5. Naive Psychology and Alien Intelligence

Stuart Watt, Department of Psychology, The Open University, Milton Keynes MK7 6AA, UK

Keywords: naive psychology, common sense, anthropomorphism

Abstract: This paper argues that artificial intelligence has failed to address the whole problem of common sense, and that this is the cause of a recent stagnation in the field. The big gap is in common-sense (or naive) psychology, our natural human ability to see one another as minds rather than as bodies. This is especially important to artificial intelligence, which must eventually enable us humans to see computers not as grey boxes but as minds. The paper proposes that artificial intelligence study exactly this: what is going on in people's heads that makes them see others as having minds.

A.6. Cramming Mind into Computer: Knowledge and Learning for Intelligent Systems

Kevin J. Cherkauer, Department of Computer Sciences, University of Wisconsin-Madison, 1210 West Dayton St., Madison, WI 53706, USA

Keywords: artificial intelligence, knowledge acquisition, knowledge representation, knowledge refinement, machine learning, psychological plausibility, philosophies of mind, research directions

Abstract: The task of somehow putting mind into a computer is one that artificial intelligence researchers have pursued for decades, and though we are getting closer, we have not yet succeeded. Mind is an incredibly complex and poorly understood thing, but we should not let this stop us from continuing to strive toward the goal of intelligent computers. Two issues essential to this endeavor are knowledge and learning. These form the basis of human intelligence, and most people believe they are fundamental to achieving similar intelligence in computers. This paper explores issues surrounding knowledge acquisition and learning in intelligent artificial systems in light of both current philosophies of mind and the present state of artificial intelligence research. Its scope ranges from the mundane to the (almost) outlandish, with the goal of stimulating serious thought about where we are, where we would like to go, and how to get there in our attempts to render an intelligence in silicon.

A.7. The Quest for Meaning

Louis Marinoff, Department of Philosophy, The City College of New York, 137th Street at Convent Avenue, New York, NY 10031

Keywords: Turing test, formalism, holism, strong AI thesis

Abstract: This is a report of a three-tiered experiment designed to resemble a limited Turing imitation test. In tier #1, optical character recognition software performed automated spell-checking and "correction'' of the first stanza of Jabberwocky (Carroll, 1871). In tier #2, human subjects incognizant of the poem spell-checked and "corrected'' the same stanza. In tier #3, a widely qualified group of academics and professionals attempted to identify the version rendered by the computer. Discussion of the experiment and its results leads to the notion of a "reverse Turing test'', and ultimately to an argument against the strong AI thesis.