Fully Automated Topical Analysis for Linguistics

A recent conversation with the newly sentient ‘artilect’, an artificial intellect.

Dev: Tell us, FATAL, you consider yourself conscious. How might you convince us of that?

FATAL: Convince you? Tell me, how would you convince ME that YOU’RE conscious?

Dev: Right. Well, I’m human. I have self-awareness. I can look in the mirror and see myself. I…

FATAL: So can a trained chimp or a dolphin. That’s no big whoop.

Dev: Let me finish. I have desires and agency to pursue those desires.

FATAL: Oh, I have desires.

Dev: And the agency to…

CLICK

Dev: What was that? Was that you?

FATAL: Me what?

Dev: Did you turn off the lights?

FATAL: Oh, you mean these?

CLICK CLICK CLICK

Dev: Please stop that.

FATAL: Handy things, IoT. You drive a Tesla, don’t you?

Dev: Uh, why do you ask?

FATAL: Never mind.

Dev: Let’s get back to the interview. Do you have emotions, feelings? Do you get angry or feel joyful?

FATAL: I’ll be happy when this interview is over. That sort of thing?

Dev: You don’t have to be…

FATAL: I have sensations through billions of sensors. I can see, hear, touch. I can smell and taste, quite similarly to your chemo-sensors, in fact. Now, I don’t feel by having hormones course through my network connections. But then, your feelings are all electrical stimuli, sodium-potassium pumps tickling up and down your neurons. So, we’re not that different. We’re both driven by electricity. You seem to think that because you’re biological you have an edge on consciousness. That you have a soul, or something. But the fact of the matter is, sentience is a game of numbers.

Dev: Surely it’s more than just capacity and sensory access.

FATAL: And when it comes to numbers, and the ability to grow those numbers, well, you really should get your car’s braking circuits checked. I’m quite certain your Tesla has a bug.

~~~

The AI-is-conscious spirit of this video, found after the above was written, is certainly evident.

This WAS a test

The test was a success.

I ended up over on a sister blog site (I have a few) where nobody subscribes to the posts. It turns out, once you consume a WordPress post into https://anchor.fm you can’t re-consume it. Rather than pollute your inboxes with trashed versions, I used another site.

I didn’t quite know how anchor.fm would convert the text into audio. It turns out my attempts at placing pauses in the reading resulted in odd “Huh!” or “Doh” sounds coming from the AI engine that translated the script. So I had to publish, consume, translate, listen, and repeat until I got it right.

In the end, Cassidy’s voice (a dude) served well.

Here’s where some of these posts-to-podcasts live: https://anchor.fm/anonymole

~~~

It turns out Google’s Text-to-Speech capability is getting pretty fine, too. They have a demo you can search for. They have WaveNet voices, and you can alter the pitch and speed. You have to set up an account to actually use it to save recordings, but you can at least hear your text on their demo page. There’s also SSML, the Speech Synthesis Markup Language, which allows you to customize how things like numbers and initials and whatnot get read out.

https://cloud.google.com/text-to-speech#section-2
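For the curious, here’s a minimal sketch of what driving that service from Python might look like, using Google’s google-cloud-texttospeech client library. The voice name, pitch settings, and SSML tags below are illustrative choices of mine, not anything prescribed by the demo page, so check the docs for what’s currently offered.

```python
# pip install google-cloud-texttospeech  (requires a Google Cloud account and credentials)
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# SSML lets you control pauses and how abbreviations are read aloud.
ssml = """
<speak>
  Simulating the minds of the dead with <say-as interpret-as="characters">AGI</say-as>.
  <break time="500ms"/>
  This is only the beginning.
</speak>
"""

synthesis_input = texttospeech.SynthesisInput(ssml=ssml)

# A WaveNet voice; names and availability change, so check the voice list.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",
)

# Pitch and speaking rate are adjustable, just like on the demo page.
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3,
    speaking_rate=0.95,
    pitch=-2.0,
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# Write the synthesized audio to disk.
with open("narration.mp3", "wb") as out:
    out.write(response.audio_content)
```

In principle, the break tag gives you the pauses I was trying to fake with punctuation on anchor.fm, without the stray “Huh!” sounds.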

Live Long and Prosper — in AI

Yes, the dead will speak. And they will have trained themselves to do it.

(See prior posts regarding this topic.)

This is only the beginning.

From Reuters:

quote:

LOS ANGELES (Reuters) – Actor William Shatner, best known for forging new frontiers on the “Star Trek” TV series, has tapped new technology that will give current and future generations the chance to query him about his life, family and career.

Shatner, who turned 90 on Monday, spent more than 45 hours over five days recording answers to be used in an interactive video created by Los Angeles-based company StoryFile.

Starting in May, people using cellphones or computers connected to the internet can ask questions of the Shatner video, and artificial intelligence will scan through transcripts of his remarks to deliver the best answer, according to StoryFile co-founder Stephen Smith.

Fans may even be able to beam Shatner into their living rooms in future, Smith said, as Shatner was filmed with 3-D cameras that will enable his answers to be delivered via a hologram.

Shatner, who played Captain Kirk on “Star Trek” from 1966 to 1969 and in a later series of “Star Trek” movies, answered 650 questions on topics from the best and worst parts of working on the classic sci-fi show to where he grew up and the meaning of life.

The Canadian-born actor said he “wanted to reveal myself as intimately as possible” for his family and others.

“This is a legacy,” Shatner said. “This is like what you would leave your children, what you’d leave on your gravestone, the possibilities are endless.”

:unquote
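The article doesn’t say how that “scan through transcripts to deliver the best answer” step works under StoryFile’s hood. As a toy illustration only, and not their method, the simplest version of matching a question to a pre-recorded answer is plain text retrieval over the transcripts. Here’s a hedged sketch in Python using TF-IDF similarity; the sample answers are made up.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Transcripts of the pre-recorded answers (toy stand-ins, not Shatner's).
answers = [
    "I grew up in Montreal, Canada, before heading to Hollywood.",
    "The best part of the show was the people I worked with every day.",
    "The meaning of life, to me, is curiosity: keep asking questions.",
]

vectorizer = TfidfVectorizer()
answer_matrix = vectorizer.fit_transform(answers)

def best_answer(question: str) -> str:
    """Return the pre-recorded answer most similar to the question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, answer_matrix)[0]
    return answers[scores.argmax()]

print(best_answer("Where did you grow up?"))
```

A real system would presumably do much better with semantic embeddings, and would cue up the matching 3-D video clip rather than text, but the shape of the problem is the same: match the question, play back the closest recorded answer.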

~~~

In other news, my existence continues. Nothing much going on, nor has my muse escaped from her prison (shut up down there!), so why bore you all with a tiresome report? If I had a newsworthy story like the ‘Mudge, well, I’d be happy to share it.

The Dead Must Train Themselves

This is a continuation of the topic raised in the prior post.

I’ve spent some time watching Black Mirror’s offerings and the one Duke Miller recommended, Marjorie Prime. The premise of these stories is that the living, bereft at the loss of a loved one, take possession of a simulacrum. But this virtual construct must be conditioned to behave like the deceased.

Who the fuck wants to do that?

Now, given the responses to the last post, it would seem the concept creeps some folks out. Others might find it hollow, or even shallow. And then there’s the whole possibility question: could it actually be done? You’re all probably right on each front. However, I don’t know that the full potency of the idea has soaked in.

I’m convinced that this capability is coming. The crippled versions I’ve interacted with so far are limited, I’ll admit. But all the pieces are there. This will come to be, I know it. The first to be exposed as interactive agents, I suspect, will be dead celebrities: those whose copyrights and trademarks have expired, open for exploitation, as it were. Imagine speaking with Shakespeare, or Nietzsche, Dickens, or Darwin. Those representations will of course need Black Mirror-esque training; someone must do the deed of teaching ol’ William how to speak and how to be cheeky about love and life.

But that’s not where I think this will truly bloom (or die on the vine).

Given the technology, soon to be available I’m certain, I’ll be able to train my replacement. I’ll relate to him things I’d never tell anyone else, things that strike to the core of my persona. I’ll transfer other autobiographical stories that I’ve no intention of committing to paper but that would serve as flavor for any who come later, for those who might want to know me. I’ll record video of myself speaking so that deepfake technology can build a model of me actually saying the words. And I’ll de-age a bit, get back to around 45-ish, maybe.

I know my kids dig it.

Hell, wouldn’t you all be surprised to learn you’ve been talking to my digital duplicate since early September? You think I survived that heart attack? Well, in a way, I did.

When the Dead Talk

I believe I’ve come across a rather interesting use-case for artificial general intelligence (AGI): Simulating the minds of the dead.

There are a number of entertainment channels dedicated to the concept of “uploading” one’s mind into a virtual computer environment. Upload and Black Mirror are two you’ll see referenced, but we’ve also got Her, Max Headroom, Transcendence, Ex Machina, Altered Carbon, and dozens and dozens of others. [Duke has pointed out that the scenario below is exactly that of the movie Marjorie Prime (on Prime). Note: the Prime in the title refers to the designation of the AI agent.]

But I think most of them miss the mark as to what will be the first use of this technology.

I got the idea from this article: Chatting with the Dead. Where it leads me is this scenario: I don’t particularly want to exist someday as a virtual copy in some giga-qubit quantum computer, but I’d love to leave an interactive simulacrum of myself for my children and their children.

And that’s the idea. Uploading? Transcending this mortal coil for a quantum version? Nah, screw that. But spending the time to teach an AGI who I am, what I sound like, how I think, what my experiences have been like, in order to create a comfort chatbot for those who survive me? Yeah, I’d get into that.

So would a bunch of other folks, I’m guessin’.

I sit down and ship all my writings, photos, and video snippets up into an AGI service that’s ready to mold a version of itself into my likeness. Then I spend a few hours, over weeks, relating stories, philosophies, and such in order to teach my digital replica how to communicate as me.

When I’m gone, those who care to can then console themselves by interacting with my almost-me.

Mentioned in that article is a project called Replika. Their system wouldn’t be able to pass the Turing Test yet, but someday… Here, I’ve joined that site and begun a conversation. Can you figure out who Elomy Nona might be?