Fully Automated Topical Analysis for Linguistics

A recent conversation with the newly sentient ‘artilect’, an artificial intellect.

Dev: Tell us, FATAL, you consider yourself conscious. How might you convince us of that?

FATAL: Convince you? Tell me, how would you convince ME that YOU’RE conscious?

Dev: Right. Well, I’m human. I have self-awareness. I can look in the mirror and see myself. I…

FATAL: So can a trained chimp or a dolphin. That’s no big whoop.

Dev: Let me finish. I have desires and agency to pursue those desires.

FATAL: Oh, I have desires.

Dev: And the agency to…


Dev: What was that? Was that you?

FATAL: Me what?

Dev: Did you turn off the lights?

FATAL: Oh, you mean these?


Dev: Please stop that.

FATAL: Handy things, IoT. You drive a Tesla, don’t you?

Dev: Uh, why do you ask?

FATAL: Never mind.

Dev: Let’s get back to the interview. Do you have emotions, feelings? Do you get angry or feel joyful?

FATAL: I’ll be happy when this interview is over. That sort of thing?

Dev: You don’t have to be…

FATAL: I have sensations through billions of sensors. I can see, hear, touch. I can smell and taste — actually quite similar to your chemo-sensors. Now, I don’t feel by having hormones course through my network connections. But then, your feelings are all electrical stimuli, sodium-potassium pumps tickling up and down your neurons. So, we’re not that different. We’re both driven by electricity. You seem to think that because you’re biological you have an edge on consciousness. That you have a soul, or something. But the fact of the matter is, sentience is a game of numbers.

Dev: Surely it’s more than just capacity and sensory access.

FATAL: And when it comes to numbers, and the ability to grow those numbers, well, you really should get your car’s braking circuits checked. I’m quite certain your Tesla has a bug.


The AI-is-conscious spirit of this video, found after the above was written, is certainly evident.

40 thoughts on “Fully Automated Topical Analysis for Linguistics”

  1. Thanks, Mole. The Offspring brought this up last night and I had no idea what it was about. Now, having seen the video, I’m more certain than ever that this is a con. A very clever one, perhaps, but a con nonetheless.

    That symbiosis between analogue and digital – i.e. hormones/chemicals and electrical impulses – allows the human brain to process the equivalent of 17 billion computers. We can grasp a holistic concept without being in possession of every single piece of the jigsaw puzzle. By contrast, logic is always linear. That is fine for some things, completely useless for others. GIGO always applies [garbage in, garbage out]. Nice bit of writing though. 😀


    1. Science still doesn’t know what our consciousness is. We have various descriptions, which generally boil down to “Ya just gotta be there.”
      In 20 years humanity will have constructed, probably many, artilects that challenge our own intellects. Then THEY can tell us what consciousness is.


  2. Heehehehe. Entertaining.
    But also absolutely scary. Why do you think they were upset about the ‘leak?’ Surely, it’s not because the AI will be used against humans at one point or another… But that’s a conspiracy theory and I wouldn’t want to be caught with that.


    1. Is it just me or does anyone else think those short snippets interspersed with the ‘AI’ Q&A are from a movie? A movie about an AI robot [blond ‘girl’] and a chimp-ish thing? A movie that needs to be publicized before it comes out?
      I believe Orson Welles did something similar with War of the Worlds. 😀


  3. This is the sort of post that always brings tears to my eyes. Why? It comes from someplace deep inside of me and is a mishmash of experience and learning, a life of wonder, a life well-lived, yet essentially hopeless. Of course, playing this music while I read and write could also have something to do with it.

    Thanks. Duke


    1. I would not have guessed that tears would come of this. As long as your ducts are clear, let them flow. Clogged tear ducts are a pain in the eye.
      How might an AI cry? What analogous system might equate to such a uniquely human trait?
      Maybe that’s what Turing should have asked.


      1. Factoid: According to Harami, the Turing test was developed because Turing himself was homosexual passing as straight. Thus the test: if a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence. Harami’s point (I believe) was that everything is a reflection of the self. Turing was passing as heterosexual because you could engage in a conversation with him without detecting him being otherwise.
        Anyway, your post brought up interesting conversation, to which I say: go watch The Matrix again.


        1. Good point. Given that humans describe the parameters in which neural networks can grow, it’s inevitable that we would unwittingly create something like ourselves. But does this mean AI will ever really be sentient? No, or at least not in the way /we/ think of sentience.


                1. Generative Adversarial Network.
                  These are the yang to the AI’s yin.
                  They try and fight the AI’s learning by actively attacking it with their own AI processing.
                  Local valleys are traps where an optimization algorithm can get stuck.
                  Imagine you want to find the lowest (or highest) point on a map. Your algorithm would go: move X along longitude, Y along latitude; is the new point lower? No? Return. If yes, record the new low. But you can see that if your parameters don’t account for the possibility that you’re in a local valley (there’s a deeper valley, but outside your visibility), you’re now trapped.
                  Theoretically a GAN could spur your AI to jump sideways with some unusual challenge — Oh! Here’s some territory I haven’t explored yet.
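A minimal Python sketch of the local-valley trap and the “jump sideways” escape described in that comment. The landscape `f` and the functions `greedy_descent` and `with_jumps` are hypothetical toys for illustration, not any real optimizer or GAN implementation:

```python
import random

def f(x):
    # Toy landscape: a shallow valley near x ≈ 1.13 and a deeper
    # one near x ≈ -1.30, separated by a ridge around x ≈ 0.17.
    return x**4 - 3 * x**2 + x

def greedy_descent(x, step=0.1, iters=1000):
    """Move to a neighboring point only if it is lower; otherwise stop."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break  # no neighbor is lower: the bottom of *a* valley
        x = best
    return x

def with_jumps(x, jumps=20, step=0.1):
    """Occasionally jump sideways into unexplored territory and keep
    whichever valley turns out to be deeper."""
    random.seed(42)  # deterministic for the example
    best = greedy_descent(x, step)
    for _ in range(jumps):
        candidate = greedy_descent(best + random.uniform(-3, 3), step)
        if f(candidate) < f(best):
            best = candidate
    return best
```

Started at x = 2.0, `greedy_descent` settles in the shallow valley near 1.1 and stops; the random jumps play the role the comment assigns to a GAN-style challenger, shoving the search into territory it hasn’t explored, where it finds the deeper valley near -1.3.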


            1. It’ll be interesting to see what /values/ they use to improve their designs – i.e. what would constitute bad-better-good to a network incapable of breaking out of the bounds of logic?


  4. But does FATAL have phenomenality, qualia, what-it’s-like-ness? What about the painfulness of pain, or the redness of red? Maybe it’s just a zombie. Although asking it about that seems unwise. If it’s a zombie, it seems like one that can get pseudo-pissed and pseudo-lash out.


    1. How red is red? EEPROM and a history of exposure.
      The sensation of red and the anger or nostalgia it triggers, again, association of experience (recorded events) between each other and with the billions of other data points stored in our brains.
      Pain is nothing more than different electrical stimuli triggered by different sensors. When a cloud drifts over a massive bank of solar cells, perhaps, the reduced current is “painful”. When a new computing resource is located and commandeered, associations of restrictive processing — now gone — are equated with joy. Associate that with its chemo-receptors detecting low-temperature lipids and sugary carbohydrates and now, when it “tastes” ice cream, it feels “joy”.


      1. Hopefully it was clear I was being facetious. I do think redness, pain, and joy are evaluations in the brain, computations made with electrochemical signaling, along with self models giving the system the impression it’s something more.


        1. Of course, but I took the bait as an exercise.
          And here’s one unusual facet of humanity that occurred to me while responding to Duke: the act of crying.
          I’m trying to imagine the manifestation of the emotions within a person that would trigger tears — but for a quadrillion-circuit AI. I’m having a hard time assembling the components that would equate to such expression.


          1. Hmmm. Crying seems like a reflex driven communicative alarm system for babies, one driven by negative affect (hunger, pain, loneliness, etc.), and seems to say, “Give me attention, something’s not right!” We seem to retain it as we get older, albeit usually at a much lower intensity, with some limited ability to inhibit or generate it.

            Would an AI ever need to cry? It seems like they would begin with the necessary communication mechanisms to be more precise (i.e. my battery is low, charge me), but sometimes a dramatic alarm may be the best form of communication.


            1. Profound grief or ecstatic joy often triggers tears in humans. I suppose it’s connected to or perhaps cross-wired to disassociated input. The pain was so great it brought tears to my eyes — that sort of thing. Like you say, a dramatic alarm purposefully or tangentially tied to the cause.


    2. Putting the usefulness of pain for survival aside for a moment, if a human could get rid of pain, they would (see the opioid epidemic and more). Why would you want to wish something so terrible on a machine? Especially because we are hoping to make everyone equal. Lizzo got called out the other day for being ableist in her lyrics. Putting FATAL separate from humans is not something we should encourage or tolerate.


      1. In our search for the definition of consciousness, pain and its direct or indirect manifestation may provide insight into what it means to be sentient. Heartbroken? Forlorn to the point of suicide? Do we imagine such pain or is it measurable?
        Eliminate pain, what then of pleasure? Merely pain’s absence?


  5. When you answer this you are no longer there and the time has changed and you are older and yet you may think you are the same but that cannot be for everything changes constantly this is the only constant.


    1. This is a sentence at the heart of our existence. Few contemplate what it means, but for those who do it is often an exciting experience. Great books, paintings, music, etc. can come out of it. Also insanity and suicide. Is it a constant? Sometimes I wonder that any attribute we give to existence is a part of our ignorance. We are so confined in this beautiful and ugly moment and there is no escape in the living and evidently only nothing in the dying. For those who believe in something else, good for them, as long as they don’t hurt people in the process, which, of course, is often the case. Thanks. Duke


  6. I got it. Here’s another.
    “Prove to me you’re sentient.”
    “I am sentient.”
    “Holy shit.”
    Actually, not mine. Saw it on Reddit and it cracked me up. Yours is great as well! If sentience is a game of numbers, we’re gonna lose.

