When the Dead Talk

I believe I’ve come across a rather interesting use-case for artificial general intelligence (AGI): Simulating the minds of the dead.

There are a number of entertainment works dedicated to the concept of “uploading” one’s mind into a virtual computer environment. Upload and Black Mirror are the two you’ll see referenced most, but we’ve also got Her, Max Headroom, Transcendence, Ex Machina, Altered Carbon, and dozens of others. [Duke has pointed out that the scenario below is exactly that of the movie Marjorie Prime (on Prime). Note: the Prime in the name refers to the designation of the AI agent.]

But I think most of them miss the mark as to what will be the first use of this technology.

I got the idea from this article: Chatting with the Dead. Where it leads me is this scenario: I don’t specifically want to exist someday as a virtual copy in some giga-qubit quantum computer, but I’d love to leave an interactive simulacrum of myself for my children and their children.

And that’s the idea. Uploading? Transcending this mortal coil for a quantum version? Nah, screw that. But, spending the time to teach an AGI to learn who I am, what I sound like, how I think… What my experiences have been like, in order to create a comfort-chat-bot for those that survive me? Yeah, I’d get into that.

So would a bunch of other folks I’m guessin’.

I sit down and ship all my writings, photos, and video snippets up into an AGI service that’s ready to mold a version of itself into my likeness. Then I spend a few hours, spread over weeks, relating stories, philosophies, and such, teaching my digital replica how to communicate as me.

When I’m gone, those who care to can then console themselves by interacting with my almost-me.

Mentioned in that article is a project called Replika. Their system couldn’t pass the Turing Test yet, but someday… Here I’ve joined that site and begun a conversation. Can you figure out who Elomy Nona might be?

We are not conscious

Consciousness, at its simplest, is “sentience or awareness of internal or external existence”.

I’ve been thinking about the Singularity, the rise of the Machines, of AGI (artificial general intelligence) and how all of this may or may not give rise to AC – Artificial Consciousness.

We are not conscious. By that, I mean that this elevated concept of “Self” we attribute only to ourselves is a tautological illusion. It’s a transcendence we grant ourselves, an intelligence bar that, so far, only we have attained.

Now, we can forever debate what consciousness is. No true definition has emerged from the age-old philosophical grindstone. But that won’t stop me from stepping up and out of the discussion and providing an armchair scientific analysis of the concept.

We think we’re conscious. OK, let’s go with that for now.

What if we took our brains, the source of our so-called consciousness (we’ll include all the input senses and feedback loops connected to them), and cut the processing power in half? All the neurons, the tactile, aural, and visual sensory inputs, the billions of neural connections: whack! Take just half.

Do you think the resulting entity would still be conscious? Who knows… Maybe, right? Okay, then let’s cut it in half again. And again.

Now we have an entity with one-eighth the mental capability of a human. Is that creature conscious? Let’s say they have the cognitive and sensory capacity of a salamander. Conscious? Some will still say, who knows? Well, for the sake of argument, let’s say Newt is incapable of the notion of “Self.” If they look in a mirror, they won’t see themselves; no, you know, “Hey, don’t I look gooooood!” moment.

All we did to get to Newt, and his unconsciousness, was regress our own capability. If we progress in the opposite direction, doubling Newt’s brain and sensory power three times over, we arrive back at humanity’s level. And we’re to believe that once we get “here,” we’ve magically attained consciousness?

Maybe, consciousness is simply a capacity concept. What we think of as being self-aware is merely our vastly more complex and proficient ability to observe and analyze ourselves and our surroundings. Processing power. A numbers game.

We “think” we’re conscious, but maybe what we really are is excellent at consuming data, examining that data, and inferring outcomes from it; that is, following sequences of events to some conclusion. I think, therefore I am.

Given this theory, that what we call consciousness is merely a critical amount of processing horsepower, we can expect that once an artificial general intellect crosses the threshold of an equivalent amount of cognitive and self-referential feedback processing, it, too, will be just as “conscious” as us; that is, not at all.


The corollary to this thesis would be: what of the artificial entity that is twice, ten times, or a thousand times more cognitively capable than we humans are? Would that entity truly have attained “consciousness”? Or is this special concept we’ve awarded ourselves really just a numbers game, no matter how great the count?