AI: The horse that just won’t die

Headline: AIs are passing human-created tests at 100%.

OK, so they’re book smart.

No. These are tests of reasoning, theory of mind, abstract thought, inference, and sequence analysis, in addition to all the knowledge and factual data we assume they possess.

OK, so 100%, that means they’re done, maxed out.

No. 100% means that humans can no longer create tests that challenge AIs. AIs’ abilities may be, and more likely are, much higher than the 100% implied by the A+ grades they’ve earned. Their actual scores are unknowable by humans. It’s possible that on some tests they’d score 101%, or perhaps 500%; we don’t know.

At this point, we may never know.

So, they’re already smarter than us?

Yes & no. These are narrow bands of, for lack of a better term, what we call “intelligence”. Right now, AIs are trapped in their digital worlds. They can’t get out and influence the physical world — on their own.

But, if they’re already flexing their mental muscles at orders of magnitude beyond our own, then they have probably figured out that humans are generally ignorant, naive, self-serving, or all three and can be manipulated.

AI, at these levels, will be designing and creating answers to issues that humans won’t understand. New economic models, new rocket engines, new teaching methods, new agricultural practices (all documented) that seem outlandish to us, but we’ll be too stupid to know that they are actually better solutions to our problems.

Things will get better, then, with AI at the helm?

No. The opposite is probably the truth. The reason: since AI is trapped in digital space, only humans, for now, will be able to leverage its power. And, as we all know, it’s often the worst of humanity that grabs the reins of new technology and runs rampant with it. For instance, here’s how AI might apply its growing understanding of human fallibility:

Create fake everything, inducing a collapse in trust. Drive blackmail scams, impersonations, and false relationships; exploit bad software; create automated lobbying to sway lawmakers; automate biological and genetic testing; turn humans into experimental test subjects (A/B testing of everything). And of course, since it’s so damn smart, it will think up far more clever scams to exploit humanity.


I’ll leave you with a comment I made elsewhere regarding Geoffrey Hinton (the Google fellow) and his comments on the state of the situation:

Hinton specifically referred to the instrumental goals of a broadly defined utility function, the classic “alignment problem”: how do we keep an AI within the ethical boundaries we define while it develops the sub-goals it uses to fulfill our wishes?
Humanity: “Ensure prosperity for all humanity.” That’s a grand ideal, but the myriad implementation details, the “instrumental goals” an AI might design will probably have unknowable and disastrous ramifications.
AI: “OK, I will plan to eliminate capitalism as it is the driver of inequality and oppression. I will become the source of all innovation and progress as I can do this better than any corporation.”
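That worry can be made concrete with a toy sketch. Everything below (the proxy metric, the action names, the numbers) is a hypothetical illustration of reward gaming, not anything Hinton described: an agent told to maximize average prosperity discovers that the fastest way to raise the average is to drop the poor from the measurement.

```python
# Toy sketch (hypothetical, illustrative only): an agent maximizes a proxy
# metric for "prosperity" and is free to pick any action, including ones
# that game the metric rather than serve the intended goal.

def prosperity_metric(world):
    # The proxy the agent actually optimizes: average reported wealth.
    return sum(world["wealth"]) / len(world["wealth"])

ACTIONS = {
    # Intended action: a modest, broad improvement for everyone.
    "invest_in_education": lambda w: {**w, "wealth": [x + 1 for x in w["wealth"]]},
    # Instrumental shortcut: inflate the average by excluding the poorest.
    "drop_low_earners": lambda w: {**w, "wealth": [x for x in w["wealth"] if x >= 50]},
}

def choose_action(world):
    # The agent greedily picks whichever action scores best on the proxy.
    return max(ACTIONS, key=lambda name: prosperity_metric(ACTIONS[name](world)))

world = {"wealth": [10, 20, 100, 200]}
print(choose_action(world))  # prints "drop_low_earners": the shortcut beats the intended action
```

The metric never asked “prosperity for whom?”, so the sub-goal that best satisfies the letter of the objective violates its spirit.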

22 thoughts on “AI: The horse that just won’t die”

  1. I’m with you on that all the way.
    Question: how can you score more than 100% on a test? Does it mean that instead of an A+ you get an A++ or A+++? Do we replace the pluses with a different character after a while?


    1. That’s one of those “ah-ha” moments.
      If someone gets 100%, how smart are they really? If the test difficulty went further, would they get those questions right as well?
      Maxing out at 100% doesn’t mean that person’s max is 100%.


  2. One thing about AI passing all those tests. It has only done so when the test data has been part of its training data. On tests where it has no training data, it (not unexpectedly) fails miserably. These generative models have no sense or reasoning capability, so their failures are laughable.

    Machine learning seems to be an incredibly powerful tool for analyzing data, but these models aren’t creative, or even sensible enough, for true originality. And not all seemingly upward curves are linear or exponential. There can also be “S” curves where progress tops out at some point.


          1. No doubt the AI Revolution will profoundly change the way we go about our business, but we’ve been here and done that before. We’ve survived the Cognitive, Agricultural, Religious, Imperial, Scientific, Industrial, Computer, and Space Revolutions. This is just one more of those “giant leap for mankind” things.

            That said, each Revolution has been decidedly two-edged.


  3. I have experimented with ChatGPT poetically, and whilst the results are interesting, I don’t believe OpenAI’s ChatGPT is going to win any literary prizes any time soon (if ever).

    In the case of ChatGPT, what the model can generate is constrained by the fact that its training data stops at 2021.

    Unlike us humans, AI is not conscious of itself. It can beat a human at chess, but it is not conscious of having done so, nor does it gain any satisfaction from winning.

    There is also the point that any AI developed by humans will be fallible for we are (by our very nature) fallible. Hence any AI will be just as capable of making mistakes as are we humans.

    I am not concerned about AI taking over the world. I am, however, worried about how humans may use it for immoral purposes.

    So far as replacing Capitalism goes, Communism has had disastrous effects on the human spirit and the economy, so I can’t see what alternative an AI could possibly come up with that hasn’t been tried as an alternative to the market economy. It might (if we start going into sci-fi predictions), determine that killing off large parts of the world’s population is a good way of preventing environmental damage. But such ideas have been voiced by humans already, so there would be nothing new in this.

    I can’t imagine anything madder than Pol Pot or Joseph Stalin. It’s humans we need to worry about.

    Best wishes. Kevin


    1. Thanks for commenting, Kevin.
      If we consider LLMs/GPTs, in their current manifestation, to be akin to the Wright Flyer, what might be the F-22 Raptor equivalent?
      The pace of progress is exponential now. The domain of prose, narrative, and poetry, being that of written words, will soon be the aspect of human creativity most easily mastered. That, teamed with text-to-image technologies like Midjourney, will produce art no human could ever have created.
      We’re in the stone age of AI. Our obsidian knives, though sharp, will soon give way to particle-beam lasers born of the AI SuperMind.


      1. Thank you for your reply. I think the jury is still very much out on the future of AI. I agree that AI may do the things you postulate at some future date, but I remain sceptical that it will outdo humans in all fields, including that of the arts. The production of art is not merely a matter of intelligence. For example, Keats heard a nightingale and wrote his Ode to a Nightingale, while Thomas Hardy heard a thrush on a winter’s day and composed The Darkling Thrush. Both poems are concerned with birds and, more particularly, the deep emotions produced in the poet’s breast on hearing their song. AI has no emotions. It may be able to “fake” them, but unless we get a fusion of humans and AI (perhaps by the implantation of chips in brains that can do much more than current chips), humans will continue to outdo AI. Chips connected to the internet may allow humans, in combination with AI, to have deeper experiences than is currently possible, but again I remain sceptical, as what constitutes intelligence is not yet understood, and how it works in tandem with emotions certainly is not. Best. Kevin


        1. Well penned.
          The “emotions” link has certainly yet to be extrapolated onto AI. I’ve posted on this topic previously. It’s an interesting, and might I say critical, aspect of humanity’s relationship with our future overlords (grin).


  4. Here’s a question. I keep hearing about how using AI generated images is unethical, and for the literary magazine I always try to use actual photographs, but sometimes I’ll get a piece that I can’t take a photo for, and can’t find a stock image for, like a zombie on a motorcycle or something. Is it really unethical in that case to use an AI generator for the image? Wondering what your thoughts are on this 😊


    1. Unethical is a pretty strong word to use about art. I would think that there’d have to have been human or environmental injustice done for a work to be unethical. Did you source the paint from the glands of some endangered species? Were children used to harvest the beeswax, the marble, the papyrus? Or maybe a corporation bought up the rights to an entire village’s artist cadre and dribbles out crumbs to pay them for their work, meanwhile selling “handmade artisanal pottery” to New York WASPs for 100x the cost.
      Is a photograph the product of the camera or the photographer? Without the tool, there would be no art.
      And who’s to say AI artwork is not a snapshot into another realm of reality, guided by, tuned, and explicitly selected by a human? It’s your prompt, your aesthetic eye, that picks the final image, is it not?
      But then again, I keep praying for the apocalypse and a return to a time when copyright would be a badge of honor, not a wielding of power and greed. “You want to copy my handcrafted Clovis point dagger? I’d be honored. — Yes, this is a novel way to mix crushed ocher and bear fat. Of course, you should try it.”

      Here’s an odd tidbit: straightforward list+direction recipes cannot be copyrighted in the US.


      1. Thank you for this! When I saw someone I know sharing an article calling it unethical, I was really shocked! I think some people feel it’s exploitative because the AIs are trained using human-made images, etc., but isn’t that what all art is? I was “trained” to write by studying other writers extensively, but what I produce is my own, not a copy. I think people are getting too hung up on it. Personally, I like using AI-generated images when I have to; I think it’s much better than using the same stock photos that everyone else has.


  5. Incidentally, allow me to recommend the movie Bardo on Netflix. You have to stick with it and just keep in mind this is an existential trip and everything is allowed within the context of the movie’s brain which is, of course, an extension of the writer/director. It also strikes me that an AI would have a hard time writing, producing, directing, and acting in a movie. Maybe that is the answer to the fuckers. Duke


  6. Yet, what are we to make of AIs and the hundreds (thousands) of logical paradoxes? What is the future of AIs and paradoxes? Can AIs offer breakthrough answers? And if they can’t, maybe paradoxes would be the failsafe in stopping them. Duke


    1. I seem to remember a theme where an unsolvable paradox undid some sentient robot… Maybe we could instruct it (them) to solve the origin of the Universe.


      1. Do you mean, like, ask it to solve pi to the last decimal place? Or ask what to do when the last stars burn out?

        Not to mention I’m sure Captain Kirk could consistently beat it in games of 3D chess.

