Objectivity Vol. 1 no. 6 (1993)
I enjoyed Gary McGath's clear and concise exposition of the errors and weaknesses of the Turing test. However, I do think McGath overstates the "jurisdiction" of Turing's ideas. That is to say, not all ideas about artificial intelligence presuppose Turing's assumptions, and refuting Turing is not the same as refuting the possibility of artificial intelligence as such.
Anyone can observe a proportional relationship between, on the one hand, the increasing complexity of nervous systems in the animal kingdom and, on the other hand, a corresponding incremental increase in the animals' mental or cognitive capacities. From observing this correspondence in many independent and diverse instances (i.e., species), one is justified in concluding that there is a necessary link, a causal connection, between the degree of complexity of the physical support structure and the possibility for, origin of, and degree, scope and intensity of consciousness. This is further supported by the observation that damaging specific parts of a brain damages specific, corresponding parts of the organism's mental or cognitive capacities.
Given that consciousness is not some mystical Cartesian substance, it must exist as an integrated part of the system that constitutes the organism as a whole, and co-develop synergistically and bi-causally with the physical support structure. This is what we observe, phylogenetically and ontogenetically. It follows that given the right kind of physical support structure, awareness will emerge. Consciousness is an emergent property. "Awakening", that is, reaching a conceptual identification of the self, constitutes the final stage of the emergence of consciousness, namely self-awareness.
Yet the Black Box Fallacy does not apply to those situations where the copy or simulation is better than the original. A perhaps trivial example of this would be old books or paintings which are copied with modern computer-graphics technology, producing copies that are better than the originals ever were: clearer, brighter, more detailed, and so on. Computer simulation technology is moving at an accelerating pace along the path of creating virtual or "hyperreal" environments. Maybe it is wrong to classify such copies or simulations as models; maybe they should be regarded as originals in their own right, as new territory rather than maps.
Since the possibility of creating an artificial physical structure that may support consciousness has not been ruled out in principle, we must be prepared to recognize such an entity as a new original, not a model, even if models of human cognition went into the effort of its creation. The Black Box Fallacy would not apply to it.
The assumptions of the Turing test should not be confused with the perfectly plausible idea that consciousness need not necessarily have a carbon-based support structure; that what matters is not the building materials of the support structure per se, but their organization and architecture, the nature of the structure: its complexity, the type and number of connections, and so forth.
This is not to say that consciousness is independent of a physical basis, since the structural demands restrict what building materials may be used. However, it is inappropriate to conclude from one observed instance (human beings) that only one support structure for self-awareness or a conceptual consciousness is possible. This is the inductive fallacy of premature generalization, or "jumping to conclusions". There is simply no basis for such a generalization. Similarly, there is no evidence for the belief that only one type of physical structure may support and give rise to perceptual consciousness, and it seems hopelessly parochial to hold such a belief.
McGath has neither shown that noncarbon-based conscious life is impossible, nor that creating such a being artificially is impossible. Hence he has not demonstrated that the pursuit of a noncarbon-based artificial mind or intelligence is a "holy grail" (i.e., an impossibility). What he has shown is that the approach of the Turing test (and with it a lot of today's AI research) is going in a misguided and unfruitful direction. This does not prove that there are no other approaches that may lead to the creation of artificial life or minds. One methodology that comes to mind, and that to my knowledge does not depend on the assumptions of the Turing test, is neural nets, or connectionism. Others are organic computers and nanotechnology.
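To make the contrast concrete, here is a minimal sketch of a connectionist system: a toy two-layer network that learns the XOR function by gradually adjusting its connection weights. The network size, learning rate and training data are arbitrary illustrative choices of mine, not anything Turing or McGath propose; the point is only that such a system computes through the organization and strength of its connections rather than through explicit symbolic rules.

    # A toy connectionist system: a 2-4-1 network learning XOR by
    # gradient descent. The "knowledge" ends up distributed across
    # the weight matrices rather than stored as explicit rules.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: the XOR function, which no single-layer network can learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights and biases, initialized at random.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: activity flows through the weighted connections.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: each weight is nudged to reduce the output error.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ grad_h
        b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

    print(out.round(2))  # usually close to 0, 1, 1, 0 after training

No imitation game is involved anywhere in this procedure; whatever such a system does or does not achieve, it does so by virtue of its physical organization, not by fooling an interrogator.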
Incidentally, Heinlein's Mike seems to be based upon something resembling connectionism. The counterargument to Heinlein would be to prove that the parts that his "machine" was made of could not possibly give rise to consciousness by any kind of reshuffling, or rearrangement, of them in their current form. That is an easy task with any of today's computers, which is one important reason, I assume, that Heinlein invented some new terms, like neuristors, to describe the building blocks of his machine.
In one sense, the question "Can a computer think?" is as easy to answer with a resounding "No!" as the question "Can an amoeba think?". And if they could think, they would no longer be a computer and an amoeba, respectively. The historical fact remains that something that started out in as primitive a form as an amoeba evolved and eventually ended up giving rise to complex physical forms able to support consciousness, namely, the higher animals. Today's computers are silicon amoebas. Neural nets may give rise to the first silicon animals.
While the Turing test skips the question of the necessary physical preconditions for consciousness, refuting it and its assumptions does not preclude the possibility of an evolution from simple physical structures into complex physical structures capable of supporting consciousness, and even a conceptual, self-aware consciousness. After all, this has already happened at least once (when human consciousness originated), and the main guide of that development was The Blind Watchmaker of random mutations and natural selection. One may expect that systematic changes and rational selection by purposeful and goal-directed human minds will enable a much faster completion of an artificial version of this process. In terms of fruitfulness it would be a terrible waste not to pursue the creation of an artificial mind if such a creation is possible.
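The logic of that blind process is simple enough to sketch. The following toy program, written in the spirit of Dawkins' own "weasel" illustration from The Blind Watchmaker, evolves a random string toward a target purely by random mutation and selection of the fittest copy; the target string, mutation rate and brood size are arbitrary choices for the demonstration and stand in for no real biological parameters.

    # Cumulative selection in miniature: random mutation plus selection,
    # with no designer specifying any of the intermediate steps.
    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        # Count the characters that already match the target.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        # Each character has a small chance of being replaced at random.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    random.seed(1)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Keep the parent together with the fittest of its mutated offspring.
        brood = [parent] + [mutate(parent) for _ in range(100)]
        parent = max(brood, key=fitness)
        generations += 1

    print(generations, parent)  # typically converges in a few hundred generations

Substituting deliberate, goal-directed changes for random mutation is exactly what would allow purposeful human designers to traverse such a search space far faster than the blind process ever could.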
The address of this document:
https://home.nuug.no/~thomas/po/artificial-consciousness.html
Author's address:
thomas@gramstad.no
Index to the Post-Objectivism web site:
https://home.nuug.no/~thomas/po/articles.html