Consciousness, at its simplest, is “sentience or awareness of internal or external existence”.
I’ve been thinking about the Singularity, the rise of the Machines, of AGI (artificial general intelligence) and how all of this may or may not give rise to AC – Artificial Consciousness.
We are not conscious. By that, I mean that this elevated concept of "Self" we attribute only to ourselves is a tautological illusion. It's a transcendence we've invented: an intelligence bar we set for ourselves, and that only we, so far, have cleared.
Now, we can forever debate what consciousness is. No true definition has emerged from the age-old philosophical grindstone. But that won’t stop me from stepping up and out of the discussion and providing an armchair scientific analysis of the concept.
We think we’re conscious. OK, let’s go with that for now.
What if we took our brains, the source of our so-called consciousness (including all the input senses and feedback loops connected to them), and cut their processing power in half? All the neurons, the tactile, aural, and visual sensory inputs, the billions of neural connections: whack! Take just half.
Do you think the resulting entity would still be conscious? Who knows… Maybe, right? Okay, then let’s cut it in half again. And again.
Now we have an entity with one-eighth of the mental capability of a human. Is that creature conscious? Let's say it has the cognitive and sensory capacity of a salamander. Conscious? Some will still say, who knows? Well, for the sake of argument, let's say Newt is incapable of the notion of "Self". If Newt looks in a mirror, there's no recognition, no, you know, "Hey, don't I look gooooood!" moment.
All we did to get to Newt, and his unconsciousness, was scale our own capability down. If we progress in the opposite direction, doubling Newt's brain and sensory power three times over, we arrive back at humanity's ability level. And we're to believe that once we get "here," we've magically attained consciousness?
Maybe consciousness is simply a capacity concept. What we think of as being self-aware is merely our vastly more complex and proficient ability to observe and analyze ourselves and our surroundings. Processing power. A numbers game.
We "think" we're conscious, but maybe what we really are is excellent at consuming data, examining that data, and inferring outcomes from it, that is, following sequences of events to some conclusion. I think, therefore I am.
Given this theory, that what we call consciousness is merely a critical amount of processing horsepower, we can expect that once an artificial general intelligence crosses the threshold of an equivalent amount of cognitive and self-referential feedback processing, it, too, will be just as "conscious" as us. That is, not at all.
The corollary to this thesis would be: what of the artificial entity that is twice, ten times, or a thousand times more cognitively capable than us humans? Would that entity truly have attained "consciousness"? Or is this special concept we've awarded ourselves really just a numbers game, no matter how great the count?