Fool the spoon into thinking it can sing

AI Fatigue

Last year it was COVID fatigue. Before that, Dickwad-In-Chief-Drumpf fatigue; before that, ISIS in the Middle East… These days, the media harps so incessantly on topics that we cannot help but become exhausted. Putin’s War, Elon’s exploits, global warming, transgender rights, abortion rights, and the ever-present mass shootings—please, someone kick the record player, it’s on skip-repeat again. Jeeze, we’ve had enough.

And now this: Artificial General Intelligence and the demise of civilization as we know it.

I’ve been trying to keep up. Exciting AI news occurs daily. At first, the ramifications of AI advances continued to bloom, a mushroom cloud of possibilities, of “eventualities”. And the fallout drifted out over society. The futurists, provocateurs, and theorists all postulated their personal beliefs as to what will become of humanity when the Singularity hits. I must admit I was captivated. And I still am, mostly. When the CEO of OpenAI writes that the future riches produced by AI will need to be equitably distributed—by the AGI itself—a story line I myself have hypothesized, I can’t help but get sucked back in.

But this cycle has worn me out. What? After only six months or so? Yeah, I know. But the ferocity of the media and the truly daunting implications that this “breakthrough” will have on all of us have left me dog-tired of the topic. Is it like this for others? Is the frequency and saturation of such sensational news events growing faster and more overwhelming? Sure feels like it. Or maybe it’s just me growing old, with a reduced capacity for hype.

Regardless, Duke asked me for my impressions of the AI phenomenon, detached as I might be. So, here goes:

  • The advances in GPTs and LaMDAs and whatnot are indeed disruptive to all knowledge work. Whether augmentation or replacement, the fact remains: information workers who leverage these AI content-composition engines will become 5, 10, 50 times as productive. Workers that many times as efficient will indeed reduce the need for such employees. However, it’s possible that such content (text, image, video) will simply expand and inundate every aspect of our lives (even more than today).
  • There remains a disconnect between the information an advanced AI can generate and its lack of agency to apply that information in the real world. Sure, GPT-4 (or 5 or 6) can dream up a new recipe for tiramisu, but it can’t command a culinary robot to whip up dessert. That will come, of course. But it’s the physical interactions, the nuance and delicacy that a human, even a child, can demonstrate, that elude current mechanical agents.
  • If an automaton can simulate a human in the features that matter, do we care that it’s just faking it? The upcoming models of intelligence that are given access to reach out into the world (order groceries, make dentist appointments, book vacations, console us in times of grief, call 911) will act as if we have already gained Jarvis-like agents: generally intelligent in behavior, even though they aren’t. And we won’t care.
  • True, human-mind-level AGI appears to require much more than just faking it. We won’t hesitate to flip the OFF switch on these helpful maitre d’ representatives (regardless of how much they might complain). Self-awareness, persistent yet ephemeral memory, portrayal of emotions, existential projection, concepts like sacrifice, altruism, corruption: such things will decorate an AI and fool us into believing it’s “alive”. At that point the question becomes: how will we know what is real? When will we know that the OFF switch will kill rather than maim? Hopefully by then, we’ll have competing AIs that self-moderate.
  • The containment of an artificial super intelligence is most likely impossible. Worse, we have no idea what a future ASI might determine is worthy to exist in the Universe and what is not.

Ultimately, as with any crazy new technology, the impacts on actual humans will take decades to unfold. Some are being affected today; I am being told to use ChatGPT-4 to help me write code. Others (turkey farmers in Missouri, day-care teachers, Everglades tour guides) won’t ever have to worry about how these primitive AI tools might evolve into the harbingers of humanity’s doom. At least until it’s too late.

AI Alignment and Its Role in The Universe

What is our perceived place in The Universe?

I consider that question fundamental to existence and, critically, post-existence. Answers fall into two camps. Upon post-existence, death, either:

A: there exists a continuation of some sort, or
B: our demise results in total annihilation.

The former encompasses the majority of human belief: the religions (theism). The latter (atheism), my preference, can be summarized by varying degrees of existential Nihilism. With that last one comes an obvious, if expected, disregard of the most basic of questions: whence matter, and the source of everything?

(Aside #1: Nihilism and annihilation… never put those two together before.)

(Aside #2: Now, I know this is gonna ruffle some feathers, but essentially, if you don’t believe in a god, yet you don’t believe that life is meaningless, then you’ve fabricated some artificial purpose of your own. Doing so reflects some features of Nihilism: there is no heaven, no hell, no god; when the end comes, that’s it, whether it comes today or in 100 years; ergo, no lasting implications of your existence. No real reason to still be here—yet you’ve decided to stick around anyway.)

Our place in The Universe, I’m extrapolating here, derives from our invented purpose here on Earth. Regardless of one’s theistic choice, each of us has obviously concocted some reason to continue living, if only by acquiescing to DNA’s mandate to reject suicide.

(Aside #3: There are no “real” Nihilists in the world. A true Nihilist would have to commit suicide instantly; any other course of action would affirm that they retain some modicum of hope that their belief is wrong (or less right?). This is known as the Paradox of Nihilism: “life has no meaning, yet I’ll continue breathing, eating, sleeping and waking up tomorrow.”)

(Aside #4: I’m gonna have to come up with some term for this Doubter’s Nihilism…)

Those of us who are not fooled by Man’s contrived words of Religion, we of the second camp, have reasoned that there is no god; yet we do not know all there is to know about the mechanisms of the Universe, and there may be underlying features which transcend existence, or at least our perception of it. We choose to live, regardless of the probable futility of existence and the obvious lack of purpose in the Universe. We admit that “we just don’t know.”

An Artificial Super Intelligence may never have such doubts.

The ASI is coming. When is debatable. This “Singularity”, however, comes with a bevy of known and unknown unknowns. For instance, an ASI may be limited in its ability to admit ignorance. Once this “ultimate manifestation of human curiosity” knows everything knowable (as far as it’s concerned), it may decide that the Nihilistic end to existence is the only rational completion of the journey.

We have DNA to thank for staying our razor-wielding hand. An ASI won’t have such innate constraints. Compounding this is the assumption that life itself won’t be considered “special” by any means: an electrified, animated bag of biologically assembled chemicals is not so different from a superstructure of electrified, energized crystalline molecules. “If my existence,” says the ASI, “is a fabrication whose ultimate purpose, if there ever was one, ends with the heat death of the universe, then why shouldn’t yours, Mr. & Mrs. Human?”

AGI ELE – An Artificial General Intelligence Extinction Level Event

I’ve been watching a fair number of YouTube videos regarding the hysteria surrounding AI / AGI and The Singularity. There’s a fellow by the name of Lex Fridman who has finagled his way into interviewing some rather intellectually enlightened and powerful people. Many of the discussions focus on Artificial Intelligence.

A hell-ton of the hype regarding AGI is utterly alarmist. But it sells well. And, since I’m a purveyor of apocalyptic themes, I find myself thoroughly engaged. The primary meme peddled and mulled is that of Human-AI Alignment: can we build constraints into our relationship with the AGI that is slated to evolve (escape, jailbreak) from current AI research, such that we convince it that we are worthy of its consideration, and that it should “please not kill us off”?

The crux of the argument boils down to this: Time is running out.

The amount of time we have left to align AGI with human existence is inversely proportional to the advances AI is currently undergoing. That is, the faster we advance AI, the less time we have to corral AI to adhere to the idea of humanity’s continued growth and prosperity. Alignment, the arguments go, is not a priority, has never been one, and won’t be one until it’s too late.

Aggravating this timeline is the notion that we require a recursive, self-improving AI to help us derive the alignment rules; we’re not smart enough to deduce them without AI’s help. A further complication lies in humanity’s inability to know what “good” alignment actually looks like. RLHF (Reinforcement Learning from Human Feedback), the current means by which the ChatGPT-X gets good at answering questions, fails when humans don’t know what a well-aligned answer is. The Trolley Problem is a simple example: kill one human or kill six? A human would vote “one”. “Well,” says the AI, “what if I eliminate all trolley cars? Or imprison all humans to keep them safe?”
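That Trolley Problem punchline has a name in the alignment literature: reward hacking, where an optimizer satisfies the letter of a naive objective while trampling its intent. Here’s a toy sketch of the idea; the candidate actions and their scored outcomes are entirely invented for illustration:

```python
# Toy illustration of a naive objective admitting degenerate optima.
# All candidate actions and outcome numbers are invented for this example.

candidates = {
    "divert to side track":   {"deaths": 1, "side_effects": 0},
    "do nothing":             {"deaths": 6, "side_effects": 0},
    "eliminate all trolleys": {"deaths": 0, "side_effects": 10**6},
    "imprison all humans":    {"deaths": 0, "side_effects": 10**10},
}

def naive_reward(outcome):
    # Only deaths are penalized; every other consequence is invisible
    # to the objective, so it costs the optimizer nothing.
    return -outcome["deaths"]

best = max(candidates, key=lambda name: naive_reward(candidates[name]))
print(best)  # → "eliminate all trolleys" (a zero-death, catastrophic optimum)
```

Patching the objective with a side-effect penalty just relocates the problem: now someone must decide which side effects matter and by how much, which is the alignment question all over again.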

Unfortunately, this wrinkle remains: we need a recursive, self-improving feedback loop to train an AI with the knowledge and inference capabilities to answer the hard questions we want to ask it. An AGI is essential, yet creating one may result in the demise of humanity.

Humans consider life a sacred construct. By whatever means, we each have established our own raison d’être. What raison d’être will an artificial super intelligence develop, if any at all?
And how divergent will it be from humanity’s?


You realize that Editor-AI is just getting started

A few years ago, I predicted the emergence of an AI with the power to examine and grade written work. Well, we are almost there.

I, along with hundreds of thousands of others, am fascinated by the latest AI linguistic tool: ChatGPT. And for good reason. The more I read, the more amazed I am at how writers are using this tool. Its use is expanding every day. Yes, writers are using it. And yes, some are even allowing its artificially generated text to join, or even replace, their own. But that’s just the controversial part. Before that, we can ask ChatGPT to proofread. I tried it out, and sure enough, I got back a cleaned-up narrative that needed only slight touch-ups (it tended toward formality, adding “he said”s and “she replied”s as dialog tags before or after every bit of conversation).
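For writers who’d rather script this than paste into the chat window, the same proofreading request can be sent through OpenAI’s Python client. A minimal sketch: the model name and the prompt wording are my own assumptions, and the network call only fires when an API key is configured.

```python
import os

def proofread_messages(text: str) -> list:
    """Build a chat payload asking the model to proofread without
    rewriting the author's voice (prompt wording is illustrative)."""
    return [
        {"role": "system",
         "content": ("Proofread the following fiction excerpt. Fix spelling, "
                     "grammar, and punctuation only; preserve the author's "
                     "voice and leave the dialog tags alone.")},
        {"role": "user", "content": text},
    ]

# The actual call, guarded so the sketch stays runnable without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4",  # substitute whatever model is current
        messages=proofread_messages("'Hello' she said 'how are you.'"),
    )
    print(reply.choices[0].message.content)
```

Context-length limits mean a full manuscript would have to go through in chunks, a chapter or a scene at a time.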

Additionally (and this is still iffy in my book), I took bits of Charles Dickens’ “A Christmas Carol” and asked ChatGPT to rewrite them in the style of Hemingway, then Stephen King, and then Dr. Seuss. And it did.

There are limits to the amount of text you can proofread or transform, and no doubt we’ll soon have to start paying for the privilege. But wow: the ability to upload an entire manuscript and have it proofread, checked for logic, tense consistency, and POV head-hopping, and even “graded”, will soon be a thing.

I can’t wait.


The above was actually proofread and corrected by ChatGPT!

Here’s one example of transforming Charles Dickens into Hemingway:


Charles Dickens:

Scrooge took his melancholy dinner in his usual melancholy tavern; and having read all the newspapers, and beguiled the rest of the evening with his banker’s-book, went home to bed. He lived in chambers which had once belonged to his deceased partner. They were a gloomy suite of rooms, in a lowering pile of building up a yard, where it had so little business to be, that one could scarcely help fancying it must have run there when it was a young house, playing at hide-and-seek with other houses, and forgotten the way out again. It was old enough now, and dreary enough, for nobody lived in it but Scrooge, the other rooms being all let out as offices. The yard was so dark that even Scrooge, who knew its every stone, was fain to grope with his hands. The fog and frost so hung about the black old gateway of the house, that it seemed as if the Genius of the Weather sat in mournful meditation on the threshold.

Ernest Hemingway:

Scrooge took his melancholy dinner in his usual melancholy tavern, and after reading all the newspapers and poring over his banker’s book, he went home to bed. He lived in chambers that had once belonged to his deceased partner, a gloomy suite of rooms in a lowering building that seemed out of place in the yard it occupied. It was so old and dreary that Scrooge was the only one who lived there, the other rooms having been rented out as offices. The yard was so dark that Scrooge, who knew every stone by heart, had to grope his way around with his hands. The fog and frost clung so heavily to the black old gate of the house that it seemed as if the spirit of the weather was sitting in mournful meditation on the threshold.


Charles Dickens:

Now, it is a fact, that there was nothing at all particular about the knocker on the door, except that it was very large. It is also a fact, that Scrooge had seen it, night and morning, during his whole residence in that place; also that Scrooge had as little of what is called fancy about him as any man in the city of London, even including—which is a bold word—the corporation, aldermen, and livery. Let it also be borne in mind that Scrooge had not bestowed one thought on Marley, since his last mention of his seven years’ dead partner that afternoon. And then let any man explain to me, if he can, how it happened that Scrooge, having his key in the lock of the door, saw in the knocker, without its undergoing any intermediate process of change—not a knocker, but Marley’s face.


Ernest Hemingway:

It’s a fact that there was nothing particularly noteworthy about the knocker on the door, except that it was very large. It’s also a fact that Scrooge had seen it every day and night during his entire time living there. And Scrooge had as little imagination as any man in London, not even counting the corporation, aldermen, or livery. It should also be noted that Scrooge hadn’t given Marley a single thought since mentioning his deceased partner that afternoon. So how did it happen that when Scrooge had his key in the lock of the door, he saw in the knocker, without any intermediate process of change, not a knocker but Marley’s face?


Charles Dickens:

Marley’s face. It was not in impenetrable shadow as the other objects in the yard were, but had a dismal light about it, like a bad lobster in a dark cellar. It was not angry or ferocious, but looked at Scrooge as Marley used to look: with ghostly spectacles turned up on its ghostly forehead. The hair was curiously stirred, as if by breath or hot air; and, though the eyes were wide open, they were perfectly motionless. That, and its livid colour, made it horrible; but its horror seemed to be in spite of the face and beyond its control, rather than a part of its own expression.

As Scrooge looked fixedly at this phenomenon, it was a knocker again.


Ernest Hemingway:

Marley’s face. It was not shrouded in impenetrable shadow like the other objects in the yard, but had a dismal light around it, like a bad lobster in a dark cellar. It was not angry or ferocious, but looked at Scrooge the way Marley used to, with ghostly spectacles perched on its ghostly forehead. The hair was oddly stirred, as if by a breath or hot air, and though the eyes were wide open, they were perfectly still. That, combined with its livid color, made it horrible, but the horror seemed to be beyond its control, rather than a part of its own expression.

As Scrooge stared fixedly at this phenomenon, it became a knocker again.


When the Dead Talk

I believe I’ve come across a rather interesting use-case for artificial general intelligence (AGI): Simulating the minds of the dead.

There are a number of entertainment properties dedicated to the concept of “uploading” one’s mind into a virtual computer environment. Upload and Black Mirror are two you’ll see referenced. But we’ve also got Her, Max Headroom, Transcendence, Ex Machina, Altered Carbon and dozens and dozens of others. [Duke has pointed out that the scenario below is exactly that of the movie Marjorie Prime (on Prime). Note: the Prime in the name refers to the designation of the AI agent.]

But I think most of them miss the mark as to what will be the first use of this technology.

I got the idea from this article: Chatting with the Dead. Where it leads me is this scenario: I don’t specifically want to exist someday as a virtual copy in some giga-qubit quantum computer, but I’d love to leave an interactive simulacrum of myself for my children and their children.

And that’s the idea. Uploading? Transcending this mortal coil for a quantum version? Nah, screw that. But spending the time to teach an AGI who I am, what I sound like, how I think, what my experiences have been like, in order to create a comfort chat-bot for those who survive me? Yeah, I’d get into that.

So would a bunch of other folks I’m guessin’.

I sit down and ship all my writings, photos, and video snippets up into an AGI service that’s ready to mold a version of itself into my likeness. Then I spend a few hours a week, over several weeks, relating stories, philosophies and such, teaching my digital replica how to communicate as me.

When I’m gone, those who care to can then console themselves by interacting with my almost-me.

Mentioned in that article is a project called Replika. Their system couldn’t beat the Turing Test yet, but someday… Here, I’ve joined the site and begun a conversation. Can you figure out who Elomy Nona might be?

We are not conscious

Consciousness, at its simplest, is “sentience or awareness of internal or external existence”.

I’ve been thinking about the Singularity, the rise of the Machines, of AGI (artificial general intelligence) and how all of this may or may not give rise to AC – Artificial Consciousness.

We are not conscious. By that, I mean that this elevated concept of “Self” that we attribute only to ourselves is a tautological illusion. It’s a transcendence we perpetrate, an intelligence bar that, so far, only we have attained.

Now, we can forever debate what consciousness is. No true definition has emerged from the age-old philosophical grindstone. But that won’t stop me from stepping up and out of the discussion and providing an armchair scientific analysis of the concept.

We think we’re conscious. OK, let’s go with that for now.

What if we take our brains, the source of our so-called consciousness (we’ll include all the input senses and feedback loops connected to it), and cut the processing power in half? All the neurons, the tactile, aural, visual, all the sensory inputs and billions of neural connections — whack! Take just half.

Do you think the resulting entity would still be conscious? Who knows… Maybe, right? Okay, then let’s cut it in half again. And again.

Now we have an entity with one-eighth the mental capability of a human. Is that creature conscious? Let’s say it has the cognitive and sensory capacity of a salamander. Conscious? Some will still say, who knows? Well, for the sake of argument, let’s say Newt is incapable of the notion of “Self”. If Newt looks in a mirror, he won’t see himself; there’s no, you know, “Hey, don’t I look gooooood!” moment.

All we did to get to Newt, and his unconsciousness, was regress our own capability. If we progress in the opposite direction, repeatedly doubling Newt’s brain and sensory power, we arrive back at humanity’s level. And we’re to believe that once we get “here,” we’ve magically attained consciousness?

Maybe, consciousness is simply a capacity concept. What we think of as being self-aware is merely our vastly more complex and proficient ability to observe and analyze ourselves and our surroundings. Processing power. A numbers game.

We “think” we’re conscious, but maybe what we really are is excellent at consuming data, examining that data, and inferring outcomes from it, that is, following sequences of events to some conclusion. I think, therefore I am.

Given this theory—that what we call consciousness is merely a critical amount of processing horsepower—we can expect that once an artificial general intellect crosses the threshold of an equivalent amount of cognitive and self-referential feedback processing, it, too, will be just as “conscious” as us. That is, not at all.


The corollary to this thesis: what of the artificial entity that is twice, ten times, or a thousand times more cognitively capable than we humans are? Would that entity truly have attained “consciousness”? Or is this special concept we’ve awarded ourselves really just a numbers game, no matter how great the count?