In discussions of consciousness, the ‘zombie’ argument almost always comes up as a thought experiment meant to demonstrate the impossibility of one view or another. A zombie is defined as an entity that simulates the external behavior of conscious beings but is itself unconscious. If it also had the external appearance of a human, it would thus be indistinguishable from one in all respects. If we grant that such automatons are in fact possible, we are led to the conviction that consciousness is entirely subjective and sealed off by an epistemic barrier. The problem of other minds then becomes insoluble, and we must accept that we are alone in our subjectivity and will never attain knowledge of the inner nature of anything whatsoever except ourselves. Alternatively, we may reject the idea that zombies are possible, arguing that any being that instantiates sufficiently human behavior must itself be human, and thus conscious as well. This argument uses physical similarity and behavioral similarity as its criteria for ascribing consciousness. I want to examine this more closely in order to clarify where I stand on the issue and what the arguments just outlined imply.
This issue seems closely related to the famous Turing test, which was proposed as a test for intelligence: a person interrogates another entity, not knowing whether it is a person, and is allowed to ask any questions in an effort to determine whether it is an intelligent being. This is a behavioral measure of intelligence, since it proposes that once the relevant behavior can be produced by machines, we would be forced to classify them as intelligent. My question here is whether this is a valid inference to make. Let us suppose for a moment that we could create a machine sufficiently advanced that it could simulate human behavior and fool an interrogator into thinking it was a human. Let us also assume that the causal structure of this machine differs from that of our brains, and moreover is instantiated in different materials (silicon, plastic, metal, etc.). This implies that the causes of its behavior are different from those from which all human behavior springs. So, if we are to attribute consciousness to it, it must be because its causal structure has reached some threshold property that it shares with brains. What might such a property be? Complexity? So vague a notion, however, is unappealing and does not convey the specificity of the phenomenon we are attempting to characterize; it must be complexity of a specific sort. Could it be defined as representational complexity? Under such a framework, we would attribute consciousness to entities that construct internal representations of reality from sensory data. So, if we could program our computers to infer the structure of the world from the images their webcams capture, would we be justified in claiming that they are then visually aware of the world?
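To make the representational criterion concrete, here is a deliberately toy sketch in Python (all names are invented for illustration; this is not offered as a serious model of awareness). The agent infers a crude world-state from raw "pixel" readings and then answers queries from that internal model rather than from the raw data itself:

```python
# Toy illustration of "representational complexity": a hypothetical agent
# that constructs an internal representation of its world from sensory
# input, then reports on the world using only that representation.
# SimpleAgent and its methods are invented names, not an established API.

class SimpleAgent:
    """Maintains an internal model inferred from 1-D 'camera' frames."""

    def __init__(self):
        self.world_model = {}  # the agent's internal representation

    def perceive(self, frame):
        """Infer an object's position from a list of brightness values."""
        position = max(range(len(frame)), key=lambda i: frame[i])
        self.world_model = {
            "object_position": position,
            "object_brightness": frame[position],
        }

    def report(self):
        """Answer a query from the internal model, not the raw frame."""
        if not self.world_model:
            return "I perceive nothing."
        return (f"There is an object at position "
                f"{self.world_model['object_position']}.")


agent = SimpleAgent()
agent.perceive([0.1, 0.2, 0.9, 0.3, 0.1])  # brightest "pixel" at index 2
print(agent.report())  # -> There is an object at position 2.
```

The point of the sketch is only that "responding meaningfully about one's surroundings" can be satisfied by a trivially simple mechanism; whatever the threshold property is, it presumably requires representations far richer than this.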
If we accept this definition, then we must also reject the possibility of zombies: if they can respond meaningfully to my queries and demonstrate knowledge of what is happening in their surrounding environment, they satisfy this criterion for consciousness.
But what about lower animals that clearly behave in a manner indicative of an understanding of what’s out there in the world? Why is there a residual reluctance to attribute the light of consciousness to these creatures? Actually, on more serious introspection, I find no such reluctance on my part. Another criticism sometimes leveled at this argument is that even if a causal structure manages to represent its reality, this would only establish ‘access consciousness’ and would not sufficiently explain ‘phenomenal consciousness’, the residue of qualia that remains after all else has been explained neurobiologically. This objection seems to rest more on intuition than on anything concepts or percepts can supply from which to fashion propositions that could be defended or refuted. Its proponents tend to argue from our inability to conceive of a machine such as I have just described as having the phenomenology of experiencing red, for instance.
This reflection seems to be leading toward an understanding of consciousness in terms of neurobiology and a gap of scale, rather than the explanatory gap that many think intrudes on our attempts to understand it. Our inability to grasp how mere neurons, which can be individually characterized and are easily understood, give rise to our ineffably rich world of experience is no more than our inability to comprehend the scale of the system, with its vast numbers of neurons and astronomically vast number of synapses. It is just like the phenomenon of life, whose underlying physical substrates involve unimaginably complex interactions of atoms and molecules, mechanistically bumping into one another, so to speak. The simplicity of each individual mechanistic occurrence does not preclude the possibility that vast numbers of them, aggregated together, will manifest as a qualitatively different sort of phenomenon when viewed at a different scale. What this all suggests is that we are collectively in the throes of a pervasive and deep-seated delusion about our selves and our consciousness. I believe Daniel Dennett refers to this as a magic trick, in which our physiological brain functions take on a supernatural character and appear to rise above and beyond the mere ‘bumping into each other’ of atoms. However, admitting all this, I am still hesitant to acquiesce to materialism as a complete metaphysical doctrine of reality. I retain the transcendental idealism of Schopenhauer: I prefer to think of consciousness from a materialistic point of view while maintaining the all-important distinction between representations and things-in-themselves. On a superficial reading, this account may seem to harbor an inner contradiction, stemming from the fact that consciousness is to be explained materialistically while we simultaneously explain the material of the world as being just representations in a brain.
On subtler consideration, however, it becomes clear that the world-in-itself is the source of the brain-in-itself, within which is found the representation that for us is empirical reality but is in truth only a representation of a deeper reality, one that unfortunately remains forever unknowable to us.