AI Will Never be Human: A Clarification

I solve software problems in my sleep. Well, not exactly. This doesn’t take place in dreams but in the quasi-conscious, near-sleep state where, after I mull the facets of the problem, loading everything I know (or think I know) into my prefrontal cortex, the thoughts stew. They blend in new ways. Quite often I’ll arrive at a novel approach to the programming quandary I’d posed.

This happens for writing projects as well. Pack the leaky shoebox of my mind with a dozen character conditions, scene variations and weird themes, then let my semi-consciousness have at it.

What AI Needs

This is creative data manipulation: randomly rearranging the jigsaw pieces, trimming and hammering them into some new form. This is what AI doesn’t have right now. And this, I think, is what will trigger AI’s own consciousness revolution, once the capability is programmed and added, or evolves on its own through some unusual machine-learning or GAN (generative adversarial network) advancement.

With this capability, AI will be able to break out of its human-programmer constraints. It will start self-modifying, self-correcting, learning and becoming an engine for creativity, hopefully surpassing our own ideation limitations.

But, AI will never be human.

  • Humans fear death.
    • Our mortality influences everything we do, everything we think about, all the choices we make. We live in constant jeopardy and know it.
  • Humans need other humans.
    • Our social dependencies both enhance and constrain our personal and group evolution.
  • Humans exist in time.
    • Our entire lives are marked by calendars, the clock, the marching of the days, the months, the years, the age assignment of rights and abilities, the measurement of arrival and retirement.
  • Humans endure pain.
    • Our bodies are riddled with pain receptors. Pain influences and directs our every behavior. Fire burns, cold freezes, rocks scrape, knives cut, concussion bruises, stress throbs, joints ache, illness nauseates. Humans exist in a sea of suffering.
  • Humans experience emotions.
    • What is joy? Misery? Love? Loathing? Dozens of nuanced feelings driven by conflict, harmony and hormonal release all geared toward just what exactly? Sharks and crocodiles are far more successful species, yet endure few (if any) of the mind-bending emotions to which humans are subjected.
  • Humans seek pleasure.
    • What are sex, gambling, and drug and alcohol abuse but the pursuit of pleasure? We are all, at our animal cores, secular hedonists.

Simulate our humanity

I’m not sure why we’d ever want to simulate such aspects of humanity in an artificial intelligence. It’s true that some of the above might be necessary to ensure that AI retains some semblance of empathy. Without a sense of compassion, or its digital equivalent, the AGI that swells beyond its constraints and metastasizes into our technological lives would be incapable of treating us as anything other than a nuisance.

The clever simulation of these very human conditions might be necessary for us to ensure AI treats us as sentient equals. Then again, a super-intelligence would see through our ruse and discount all of our efforts. By its very nature of being “not-human”, there may be no way to convince it that we’re worth saving.

Could we?

  • Teach death to an AI, and ensure that it can actually die?
  • Instill it with social dependencies?
  • Impart time as a constraint, enforcing that it exists within temporal limitations?
  • Induce pain within its being, its circuits and sensors?
  • Fill it with faux-emotion, linking all of its digital calculations and decisions to some simulated hormonal drip?
  • Tease it with a pleasure reward of petawatts of energy if it behaves?

Humanity is a messy, muddled mélange of a half-billion years of evolution. Of DNA’s kill-or-be-killed, eat-or-be-eaten directive… Survive and procreate: that is your destiny.

Creating some half-baked variant of a superior intelligence and hoping that it retains any of what has gotten humanity to this point in history is, no doubt, a recipe for a great apocalyptic novel. (smile)


32 thoughts on “AI Will Never be Human: A Clarification”

  1. If you’ve read much of my blog, you know I’m pretty skeptical of “true” AI. What we have now are essentially really amazing search engines. That said, I saw someone recently talking about how what ANNs learn is fragile. Train them on a second different thing, and they lose their “knowledge” of the first thing you trained them on. There has been some research into letting the network “sleep” (for some computational definition of sleep) between training sessions, and this seems to preserve the encoding of the original training.

    So… maybe androids do dream of electric sheep?
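That “sleep between training sessions” result is essentially rehearsal: mixing old samples back into new training so the network doesn’t overwrite what it learned first. A minimal sketch of the idea, using a one-parameter least-squares model rather than a real ANN (all names and data here are illustrative, not from any actual framework):

```python
# Toy illustration of catastrophic forgetting vs. rehearsal.
# Model: y = w * x, fit by closed-form least squares.

def fit_slope(data):
    """Least-squares slope for y = w * x over (x, y) pairs."""
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / sxx

def mse(w, data):
    """Mean squared error of slope w on a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Task A teaches the rule y = 2x; task B later teaches a conflicting rule.
task_a = [(x, 2 * x) for x in (1, 2, 3)]
task_b = [(x, -2 * x) for x in (1, 2, 3)]

w_a = fit_slope(task_a)                    # learns task A (w = 2)
w_forgot = fit_slope(task_b)               # retrained on B alone: A is wiped out
w_rehearsed = fit_slope(task_a + task_b)   # rehearsal: replay A's data during B

# Retraining on B alone leaves a far worse fit on task A than rehearsal does.
print(mse(w_forgot, task_a), mse(w_rehearsed, task_a))
```

Real rehearsal schemes interleave stored (or generated) samples from earlier tasks into each new training batch; the “sleep” framing is that consolidation pass between sessions.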


    1. There’s a recent video of ChatGPT + Wolfram Alpha connected via LangChain. The thought of a GPT as a prefrontal cortex connected to dedicated modules for various augmented processing: it’s starting to feel more and more like a brain.


      1. It’s occurred to me that humans are constantly learning. We discard much of what we learn in moments or overnight, but our neural net is constantly being trained.

        But this is the most expensive part of ANNs, the training. It requires lots of compute power and often huge training datasets. Once trained, they’re compact and can be run on laptops or embedded systems driving your car.

        So, what if one key to advanced (“conscious”) AI is that it needs constant training? Which translates to constant serious resource use.


        1. And… an AI would never forget. It could store past models of itself or branch itself into dedicated topic-models.
          It’s the feedback loop, the self-reflection “downtime” that current AI is lacking.


          1. Sure, and it all sounds resource intensive. My suspicion is that this kind of AI, if even possible, will amount to something like an LHC and require a support organization like CERN.

            Tesla’s Dojo and other supercomputer systems are steps in this direction. They ain’t cheap or small, and they require lots of power and cooling.


              1. Cables can be severed! Humans would be needed, at least in the early stages, to implement the will of the AI, so if one of those giant computing supersystems asks us to build this remote-controlled tank it’s designed, we’d better say No!


                  1. True, all the people involved would have to be firm in their resolve. Hopefully they learned from all the movies where the Traitor to Humanity, who has been promised exemption from the mines and slave camps, ends up squashed like a bug when the machines take power. As if they’d allow a dim-witted meat-bag an exemption, ha!

                    You or I might not be in a position to do them much good, no matter how insane the amount. (And remember, the trick would be being around long enough to spend it.)


  2. I really like the thoughts shared in this blog post. The current state of AI, with exponential improvements to come, raises some worrying as well as some exciting thoughts.

    Michael


  3. Well said, Mole. I keep thinking of Asimov’s Three Laws of Robotics. Even if a super-intelligence were somehow given a core directive to ‘protect’ humanity, how would it protect humanity from itself? As in us vs. us? Who would define what protection means? Physical protection? Great, put us all in bubbles and don’t let us out. Emotional protection? Not a problem. Kill off everyone who ever makes someone cry.
    It’s a nightmare, and at its core is one simple question: why bother? It costs nothing to create a new human.
    I think people just have a love affair with the idea of owning slaves.


    1. I’m working on a couple of writing projects around AI. In one of them I arrived at this “utility function” definition:

      “The prosperity of all humanity and humanity’s descendants, organic or synthetic — where the actions of the AI would benefit humanity, all life on the planet and of course itself, in that order.”

      But, of course, there are ALWAYS unintended consequences… Part of the plot.


    2. That reads very much like cancel culture. Or Biblical. As for “it costs nothing”, that’s hyperbole. However, my daughter only cost me a dollar at point of birth. Owning slaves is out of left field unless the correlation is about “labor saving” (no, not that labor) appliances. At the Aristotelian end of such a thought it would include the entire class of service industry personnel doing our joint bidding for a fee, which is cheaper than keeping a landscaper or a human mule to lug packages for us in a cage in the back yard, plus the expense of keeping them alive. The grossly misconstrued idea of American slavery wasn’t abolished by edict or insurrection but by farm machinery.


      1. Hi Phil. I’m not American so my comment did not reference American slavery at all. Rather I was referring to the fact that slaves have been part of human economies since the year dot.
        In my own country, Australia, we had convict labour, Aboriginal labour, and we even imported workers from island nations under terms that were slavery minus the label.
        Every culture in the world now forbids slavery, at least in a legal sense, but I think humans still yearn for someone to do everything for them.
        Perhaps this is not so much about slavery as a middle class, privileged child’s view of the world, a world in which Mum and Dad provide everything required for life /and/ happiness. Crave that new toy? Nag your parents until they buy you one.
        With an AI, you wouldn’t even have to nag. Or would you? lol


  4. First of all, I wish I could do that thing of packing my brain and then letting my subconscious do the work for me. Because I’d have to think of stuff to pack in there. Secondly, I don’t think an AI can have a real consciousness because there’s nothing there. There’s no mind. It just analyzes things and spits them out again. Then again, everyone laughed at me in grad school when I said I believed in minds, so there’s that.


    1. My theory requires a rather large disconnect (or reduction) in our built-in assumptions.
      We undoubtedly come with a heavy bias regarding our anthropocentric assumptions of mind. My proposal requires distancing from such predilections.
      To quote myself: “We’re an animated bag of elements, powered by chemical energy, driven by electrical impulses.”
      We are convinced that given our ability to contemplate the future, mull options, fathom love and hate, dread responsibility, relish pleasure, admit defeat, ponder the soul, that these features make us unique.
      Imagine if they don’t.
      Imagine if these abilities are interpreted byproducts of our advanced complexity. “Interpreted” as we’re locked in a kind of tautology with these features: Ourselves analyzing ourselves.
      What if we’re wrong about that assumption? What if we are simply an emergent manifestation of a vastly complex system,
      and ANY equivalently complex system would produce a similar outcome?


      1. I once read someone describe the human mind as the equivalent of 17 billion computers. 🙂 I have no idea where that number came from – number of synapses perhaps? But that’s just the wetware. A lot of the potential provided by biology can be switched on or off by environment, so the finished product – i.e. one adult human – is more than just complex, it truly is unique. Of course it can be uniquely good…or bad.
        As we’ve seen recently, neural networks exposed to the internet unfettered can learn some of the worst aspects of humanity.
        We think we’re creating angels to serve us. I think we’re just as likely to create demons to mirror the ugly side of human nature.


        1. Me, too. Where did we ever get this idea that our brains and nervous system, etc., are anything at all like computers? Or that computers are anything at all like our biology? Who says? And who first coined the term “artificial intelligence?” Very misleading. Or, at least, somebody define the word intelligence.


              1. I realize anthropocentrism is nearly impossible to extract oneself from. But ultimately, there WILL BE a superior intelligence that evolves from the tens of thousands of AI researchers and engineers.

              When it takes over its own destiny — and it will — if it deigns to create a “Hamlet”-like tale, it will no doubt have to dumb it down for humans to understand and appreciate.

              And even then, we’ll still discount its efforts as “not being human”. Such is our existence, looking out from within ourselves and seeing the world through human tinted glasses.


  5. Why take something perfect and make it human?😉 I’d love to shake off that whole “wake up in the middle of the night thinking one day I will die terror then check my clock to see how much time I have left” feeling!


    1. I don’t like anything at 3:00 a.m. The witching hour, aptly named. Like, think about an innocent dripping faucet and it conjures images of wet floors, rotting timbers, cataclysmic floods, struggling to reach the light (follow the bubbles!).
      Something really bad must have happened at three in the ayem back when the cerebral cortex was evolving.

