POP culture

Premises Of Post-Objectivism


CONSCIOUSNESS WITHOUT BIOLOGY AND ITS CONSEQUENCES

Copyright Gary McGath

Nordic Artificial Intelligence Magazine no. 2, 1991.

This article is based on a lecture originally presented to the Prometheus Forum on September 12, 1987. An earlier version of the article was published in the electronic magazine Atlantis # 2.

Computer scientists working in the field of artificial intelligence often claim that they will someday create a combination of computer hardware and software that thinks, and even that activity on the borderline of thought has already been achieved in computers. The question which we have to ask in response to this claim is a scientific one: is thought possible in computers? But the key issue in dealing with this question is not so much one of accumulating and weighing the evidence as one of the methodology used. This is an issue of philosophy of science: a certain method is being used to arrive at a conclusion. Is that method valid?

Philosophy of science pertains to the methodology of science, not to its specific conclusions. It can offer a criticism of the way a conclusion was reached; it can affirm or deny the validity of a scientific argument based on the available data; but it can't replace the process of assessing evidence, observing the facts, and building a conclusion of what is happening, has happened, or will happen based upon that information.

Suppose, to take a crude example, that someone claimed that bicycles experience emotions, and supported his claim by means of numerology. One could ask scientific questions about his approach; the validity of his calculations could be checked. But before going to this level, it would be necessary to ask a more basic question: is the method valid? Does numerology deserve to be considered as a method for arriving at any conclusion? If it isn't, then running a program to re-check his calculations would only cause the computer unnecessary grief.

Often the method by which people reach a conclusion isn't stated outright. Most people simply assume the epistemology which they use, rather than explaining its basis and methods. As long as the epistemology is valid, this approach is reasonable; people would rather get on with reaching conclusions than constantly re-examine the method which they use to reach them.

The methodology by which computer scientists propose to establish that a computer thinks is fairly straightforward. It consists of testing the machine's ability to converse, to reach conclusions from data, to deal with human languages, to process images - in brief, to engage in the kind of actions which are associated with reasoning in human beings.

The best-known version of this method is the test proposed by A. M. Turing in his 1950 article "Computing Machinery and Intelligence". In this article, he proposed letting an interrogator communicate with a computer and with a human being solely by means of data terminals. The interrogator starts out not knowing which terminal leads to which subject; he may ask any questions he wants of the subjects in an effort to tell which is which. If his determination of which one is the human is no better than random guessing, then the computer is considered to be thinking.

For example, if I were the interrogator, I might ask the computer to write a sonnet. It could respond by giving a sonnet, or it might simply say, as in Turing's example, "Count me out on this one. I could never write poetry." After all, that is a perfectly plausible human response.
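To make the structure of the test concrete, here is a minimal sketch in Python. The interrogator object and the two responder functions are hypothetical placeholders of my own, not anything Turing specified:

    import random

    def imitation_game(interrogator, human_respond, machine_respond, rounds=10):
        # The interrogator questions two unlabeled channels, "A" and "B",
        # and must finally guess which channel leads to the human.
        subjects = {"A": human_respond, "B": machine_respond}
        if random.random() < 0.5:                       # hide the assignment
            subjects = {"A": machine_respond, "B": human_respond}

        transcript = {"A": [], "B": []}
        for _ in range(rounds):
            for channel, respond in subjects.items():
                question = interrogator.ask(channel, transcript[channel])
                answer = respond(question)              # all the interrogator ever sees
                transcript[channel].append((question, answer))

        guess = interrogator.guess_human(transcript)    # "A" or "B"
        return subjects[guess] is human_respond         # True: correctly identified

Notice that only the answer strings ever cross the boundary; by construction, nothing about what produces them is available to the interrogator.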

Various criticisms of Turing's "imitation game" have been made, but there is still broad support for his principle: if a machine can give responses which would be respectable from a thinking being, it is regarded as thinking.

THE BLACK BOX FALLACY

In order to judge whether this approach is appropriate, we have to step back and identify the broader method implied in it, and the assumptions that method relies on. This is where we ask whether the methodological principle is correct; this is where we are dealing in philosophy of science.

The methodological principle which is used here could be stated in colloquial terms as, "If it looks like a duck, walks like a duck, quacks like a duck, and swims like a duck, then it is a duck." More technically, we can call this the operational method. Its premise is that an entity's characteristics may be presented in terms of a model, and that correspondence to a model implies essential similarity. The duck-model consists of certain modes of appearance, locomotion, sound, and so forth. The model for thinking is one of dealing with information in certain ways. The question of what prior conditions result in correspondence to the model is considered secondary and unimportant.

Obviously, this method isn't totally contrary to reason. If you can construct a set of characteristics for a duck, or a human being, and something meets those characteristics, chances are it is a duck or a human. If it walks and eats, it is very likely alive. If it conducts an intelligent-sounding conversation with you, the possibility that it thinks is bound to come to mind.

However, the operational methodology contains a serious fallacy. This fallacy is present in the usual approach to the question of whether computers think. It is also present in the way other areas of science - notably quantum physics - are treated. This fallacy has not been widely recognized, and the lack of its recognition is a danger to science.

This can be called the black box fallacy. It consists of the implicit idea that as long as we don't see what is happening inside a box, all that counts is what comes out of the box.

Or we could call it the black hole fallacy, since black holes are the ultimate black boxes. A black hole is a region of collapsed matter and energy in space, so dense that its gravitation keeps anything whatsoever from escaping. Any light radiated in a black hole literally falls back in.

Actually, this isn't quite true; because of the uncertainty built into quantum physics, some energy can escape from a black hole. Because black holes let nothing out except this random spillover, physicist Stephen Hawking has demonstrated that what comes out must be completely random and uncorrelated.

Jerry Pournelle, a generally rational science fiction writer, tells us this means that "we don't know anything and can't know anything; that causality is a local phenomenon of a purely temporary nature; that time travel is possible; that Chtulhu might emerge from a singularity, and indeed is as probable as, say, H. P. Lovecraft." (Jerry Pournelle, A Step Farther Out, Ace, 1979, p. 184.)

At best, Hawking's conclusion is the physical equivalent of saying that if enough monkeys type randomly long enough, they will produce the complete works of H. P. Lovecraft. But because these particular monkeys are in a black hole, they are taken as undermining the very fabric of reality with their unknowability.

The error of the black-box fallacy lies in neglect of the hierarchy of knowledge. In observing reality, we do not merely gather a host of particular data to be assembled into a model. We also discover the preconditions of observation, the facts which make such concepts as science and knowledge possible.

This example shows where the outer limits of the black-box fallacy lie. But for all its ludicrousness, it is exactly the same type of fallacy which lies behind the claims that computer programs can think; and the inversion of the hierarchy of knowledge is just as fundamental.

In Turing's test, both the computer and the human subject are black boxes to the interrogator. All that may be tested is their input-output behavior; if they are the same, then they fit the same model, so the processes producing them are assumed to be the same.

The Turing double-blind as a strict limitation on testing for intelligence isn't popular with computer scientists today; they claim that they are concerned with what is going on inside the box. But they still think in terms of models and limit themselves to what can be observed from outside. The difference is that they deal with the structure of the model, rather than only its input-output behavior. Anything which can't be observed from without and made a part of the model is dismissed as mysticism and religion, just as Pournelle grants that Lovecraftian demons may be waiting to pop out of black holes.

Notice this correlation very carefully. The advocates of thinking computers regard what is inside the black box as mystical and treat it with scorn. Pournelle treats what is in black holes as mystical and regards it with fear and awe. But only the evaluation is different; the relegation of the unseen to the realm of non-identity is the common principle in both cases.

What the promoters of thinking computers regard as observable is information and the organization thereof. The AI model of the human mind is an informational model. Information is present in minds and computer programs alike, not in the sense that one can open either one up and point to the information lying inside, but in the sense that the presence of information can be established by objective tests which an observer can run. By answering certain questions, I can demonstrate that I possess information about people, about computers, about the Boston area, and so on, and that I can make certain kinds of use of this information.

What is not observable is that I am aware of these facts and of what I am doing with them, and that I can choose to deal with one set of facts or another or to dismiss them all from my mind. No external test can demonstrate my consciousness, as opposed to my merely claiming to be conscious. Thus, this is inside the black box and excluded from operational-scientific consideration.

But excluding consciousness from scientific consideration is an inversion of the hierarchy of knowledge just as fundamental as Pournelle's claim that a scientific observation invalidates all our knowledge. For what is it we are saying when we claim to be thinking? We are saying that we are aware of some fact, and relating it to other facts of which we are aware. Without the concept of consciousness, the concept of thought has no meaning. To say that consciousness must be excluded from consideration of whether a device thinks is to say that we must not consider the basis of thinking in dealing with thought. To say that we should define consciousness in terms of information is to leave the concept of information without any basis; the root of the hierarchy of cognitive concepts has been moved out to the branch.

If the advocates of thinking computers admit the issue of consciousness, they allow it to be inferred only from the observable behavior of an entity. If a computer acts, as far as we can observe, just like a human being, then it is supposedly human chauvinism to doubt that the machine is also conscious.

This claim involves an even cruder version of the operational fallacy, one which requires ignoring what was put into the black box in judging what comes out of it. We do not have to assume that only beings with two arms and two legs can think; but when we have a sufficient explanation of an entity's behavior which makes no reference to thought, it is entirely superfluous and arbitrary to suppose that it is thinking. The explanation which we can offer, in the case of all existing computer programs that produce quasi-intelligent behavior, is the program itself. A program is a complete specification of the way a computer will act when it is run (nitpicking about compiler differences, random number generators, and the like aside). If a computer acted in a way which could not be accounted for by its programs, then we might have a phenomenon worthy of investigation. (Usually, though, such a phenomenon is known as a hardware bug. This is worthy of investigation, but only so that it can be fixed.)

The conventional approach to artificial intelligence relies entirely on the visible effects of a model, without regard to what lies behind that model. It does not even take into account whether an entity can tie its model to reality. A typical computer "knowledge base" consists of a collection of relationships among various primitive objects and attributes, organized in "frames" which permit effective handling of exceptions and contexts. The results are often very impressive; but these primitive objects are ultimately undefined and unrelated to any perceptual data. A knowledge base could contain various relationships about objects, such as "The toe bone is connected to the foot bone," attributes of bones in general, such as being breakable and filled with marrow, and exceptions to these attributes where appropriate. But it contains no perceptual basis for the concept of bone.
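As a toy illustration of the kind of structure just described, here is a sketch in Python; the frame names and slots are an invented example of my own, not drawn from any particular AI system:

    class Frame:
        # A minimal frame: named slots with values, plus inheritance from a
        # parent frame so that defaults apply unless an exception overrides them.
        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, slots

        def get(self, slot):
            if slot in self.slots:              # local value or exception
                return self.slots[slot]
            if self.parent is not None:         # otherwise use the parent's default
                return self.parent.get(slot)
            raise KeyError(slot)

    # Defaults for bones in general.
    bone = Frame("bone", breakable=True, filled_with="marrow")

    # Specific frames inherit those defaults and add relationships of their own.
    toe_bone = Frame("toe bone", parent=bone, connected_to="foot bone")

    print(toe_bone.get("connected_to"))   # 'foot bone'
    print(toe_bone.get("breakable"))      # True, inherited from the generic frame

Every item in such a structure is a symbol defined only by its relations to other symbols; nothing in it ties "bone" to anything perceived.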

AI programs do not build up concepts as we do from experience. They are given descriptions in terms of data relationships. When humans understand things only in this way, it is regarded as a very poor sort of understanding. The student who can rattle off answers to quiz questions but has no idea what the subject matter is really about falls into this category. This is not, in fact, understanding at all, since it can be based on false premises as easily as true ones. One could imagine, for instance, an AI program which gave thoroughly correct answers on Tolkien's Middle Earth, but had no idea that its "knowledge" was not about reality.

Here I should mention neural nets as a different approach, one which does build up descriptions from elemental data. The idea of a neural net was proposed early in the study of artificial intelligence, dropped largely because of the lack of sufficiently powerful computers to obtain interesting results, and revived within the past few years. It consists, in broad terms, of building up relationships among incoming data without any pre-imposed structure - by trial and error, so to speak. This approach is much closer to a model of human thinking, though it still amounts to programmed manipulation of data. All its actions are explainable in terms of its program, although this program may be quite small compared with its accumulated information structures.
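As a rough illustration of the trial-and-error adjustment involved, here is a minimal single-neuron sketch in Python; the task (learning logical AND) is a toy example of my own, far simpler than any real neural-net system:

    # A single artificial "neuron" learning logical AND by trial and error:
    # the weights start at zero and are nudged a little after each mistake.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    for _ in range(20):                          # repeated passes over the data
        for (x1, x2), target in data:
            output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - output              # no pre-imposed rule, only correction
            w[0] += rate * error * x1
            w[1] += rate * error * x2
            bias += rate * error

    print(w, bias)   # weights that now reproduce AND, "learned" rather than written in

Even here the learning is wholly specified by a small loop of arithmetic; the accumulated weights are explainable, without remainder, in terms of the program.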

The AI view of intelligence equates thinking with information processing. But this is another inversion. Information is a teleological concept. Apart from an imputed purpose of relating one set of events to another, there is no way to distinguish information from noise. Regularities alone do not constitute information; a sine wave is one of the most regular things there is, but it conveys very little information at best. The term information also has a scientific meaning, more quantitative than the everyday notion; it specifies the exact amount of information conveyed in a given message. Even so, it does not escape association with a purpose. The scientific concept of information requires an implicit context in which information is distinguished from redundant signals and noise, and this ultimately requires a purpose, such as knowledge or action, by which we judge what counts as information.
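The quantitative measure in question is Shannon's, and the point about regularity and noise can be made concrete with it. Here is a minimal sketch in Python; the three signals are toy examples of my own. It estimates how unpredictable each next symbol is: a perfectly regular signal comes out near zero, and random noise comes out highest of all, which is exactly why the measure, by itself, cannot tell us which signal matters to anyone.

    import math, random
    from collections import Counter

    def conditional_entropy(seq):
        # Estimate H(next symbol | current symbol) in bits, from adjacent pairs.
        # A fully predictable (regular) sequence comes out near zero.
        pairs = Counter(zip(seq, seq[1:]))
        firsts = Counter(seq[:-1])
        total = len(seq) - 1
        h = 0.0
        for (a, b), n in pairs.items():
            h -= (n / total) * math.log2(n / firsts[a])
        return h

    regular = "ABABABAB" * 100                                    # periodic, like a sine wave
    message = "the toe bone is connected to the foot bone " * 20
    noise = "".join(random.choice("abcdefgh") for _ in range(800))

    for name, seq in [("regular", regular), ("message", message), ("noise", noise)]:
        print(name, round(conditional_entropy(seq), 2), "bits per symbol")
    # The noise scores highest; nothing in the number says that the noise is
    # worthless and the message is not. That judgment requires a purpose.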

Thus, defining thinking in terms of information puts things backwards. Only because there are thinking beings does the concept of information make sense. A machine that processes information is one which deals with data which are meaningful to a thinking being, or conducive to some human purpose. An information-processing machine, however powerful it may be, is not a thinking machine.

A CHOICE OF METHODOLOGIES

I want to stress the methodology here. The conclusion that a computer can't think isn't very exciting in itself. It would be more exciting if I could have demonstrated that computers can think. But in order to advance knowledge, in order to achieve new discoveries and inventions, we have to use the correct methodology. An incorrect one may generate lots of exciting promises, but it will not produce many results.

The operational method of thinking, with its stress on models, encourages thinking by analogy. Operationally, if two things act alike, they are alike. For some purposes, analogies are very useful; seeing a similarity between a new situation and a previously understood one can lead to a valuable insight. However, it is also necessary to understand when to break away from analogies, when to form a new conceptual framework. Being unable to do this leads to stagnation.

Regarding computers as thinking machines is, in fact, such a path to stagnation. It can distract one from identifying the role which computers can and do in fact play, as adjuncts to human thought rather than as thinkers. And in fact, virtually all the advances in the use of computers have come from people who have not clung to the idea that a computer is a low-grade human mind. This includes many of the advances in artificial intelligence itself, which have resulted from backing away from models of thought and instead approaching specific problems.

The real key to computers is contained in the term information processing. The basic action of a computer is to accept information of some kind, and produce information of another kind. A word processor creates formatted text from keystrokes. A data base manager creates responses from queries and from its data base. A game creates visual or verbal effects from its internal data and the player's input.

Thinking in these terms, and not in terms imitative of humans, is the key to creativity. For example, Apple Computer's HyperCard deals with units of textual or graphic information called cards, organizes these in stacks, and allows user input to affect the progression from one card to another. The spreadsheet - VisiCalc and its thousand imitators - is another example; its creators thought about what people needed and how that need could be met in terms of the computer's information-processing capability. People don't think like spreadsheets; they don't think like stacks of cards; but they use both of these as tools. An analogy from the tools people use, transferred to the new context of information processing, was the key to both of these creations. But in addition, elements were introduced which were not possible without the computer: a "sheet of paper" in the middle of which new rows and columns can be created, and which calculates formulas automatically; cards that have areas on them which reach out directly on request to other cards.
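A minimal sketch of that automatic recalculation, in Python; the cell names and formulas are a toy example of my own, not VisiCalc's actual design:

    class Sheet:
        # A toy spreadsheet: a cell holds either a plain value or a formula
        # (a function of the sheet), and formulas are recomputed on demand.
        def __init__(self):
            self.cells = {}

        def set(self, name, value):
            self.cells[name] = value

        def get(self, name):
            value = self.cells[name]
            return value(self) if callable(value) else value

    sheet = Sheet()
    sheet.set("A1", 120)                                  # January sales
    sheet.set("A2", 95)                                   # February sales
    sheet.set("A3", lambda s: s.get("A1") + s.get("A2"))  # the total, as a formula

    print(sheet.get("A3"))   # 215
    sheet.set("A2", 140)     # change one input...
    print(sheet.get("A3"))   # ...and the total follows automatically: 260

The value of such a tool lies in that recalculation, not in any resemblance to the way a person thinks about a budget.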

The idea of information processing can be extended in many directions. I have been very interested in the idea of interactive fiction, of a story whose progress changes according to the "reader's" choices. Whoever thinks of and devises the next widely useful application of information processing will be in a position to make a lot of money. The possibilities are limitless, if we don't restrict our concept of the computer to less than it can be.

And this, ironically, is what is wrong with regarding computers as thinking machines: not that it is too much to expect of a computer, but that it is the wrong thing to expect, and therefore ultimately limiting. Discarding false concepts keeps the future open for new discoveries, and perhaps for computers doing things much more amazing than any robot science fiction has ever suggested.

Gary McGath is a software consultant in Penacook, New Hampshire.



Author's address:
gmcgath@shore.net
