ChatGPT has memory. In gobs. What it doesn’t have, SelfAwarePatterns reminded me, is the critical component that manipulates that memory with intent.
Humans come equipped with internal dialog. It’s what I equate to a computer CPU’s “message loop”, a constantly cycling evaluation routine that performs tasks like system status monitoring, event detection and handling, dispatch of work, and load awareness. First off, our minds are constantly keeping track of our physical body and its needs. Much of this is automated—we don’t have to think to breathe, pump blood, or digest food—and I’d say that our internal dialog is separate from those functions.
However, where our own brain’s “message loop” comes in is with the constant processing of memory. When our stomach sends the hormone ghrelin to our brain, it signals the response: “I’m hungry.” This event (an instant memory) enters our internal dialog loop and triggers thoughts of “what am I hungry for?” Right after that thought, the next cycle comes around and the memory of what’s in our refrigerator and pantry chimes in, followed by the thought “how much work is making scrambled eggs and toast?” The end result (the intent) of this internal dialog is that you get up and make breakfast.
And all the while our internal dialog is running, responding to our bodies and the world around us, there’s this other part, this “thought randomizer” we’ll call it, that spuriously injects arbitrary or loosely linked memories into our message loop. “I remember cooking a dozen scrambled eggs over a campfire in Colorado. The pan was so heavy, I let it slip and spilled the whole thing into the coals.” Where the hell did that come from? I haven’t thought of that in thirty years!
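To make the analogy concrete, here is a minimal Python sketch of such a loop: a queue of incoming signals, a handler that turns them into thoughts, and a randomizer that occasionally injects a loosely linked memory. Every name, event, and probability in it is hypothetical, invented purely for illustration.

```python
import random
import queue

# Toy sketch of the "internal dialog" message loop described above.
# All events, handlers, and memories are hypothetical placeholders.

MEMORIES = [
    "scrambled eggs over a campfire in Colorado",
    "the pan slipping into the coals",
    "what's in the refrigerator and pantry",
]

def handle(event, dialog):
    """Turn an incoming signal (an 'instant memory') into the next thought."""
    if event == "ghrelin":                      # stomach says: hungry
        dialog.append("I'm hungry. What am I hungry for?")
    elif event.startswith("recall:"):           # the thought randomizer chiming in
        dialog.append("Where did that come from? " + event[len("recall:"):])

def message_loop(cycles=10):
    events = queue.Queue()
    dialog = []
    events.put("ghrelin")                       # a body signal enters the loop
    for _ in range(cycles):
        # the "thought randomizer": spuriously inject a loosely linked memory
        if random.random() < 0.3:
            events.put("recall:" + random.choice(MEMORIES))
        while not events.empty():               # drain and handle this cycle's events
            handle(events.get(), dialog)
    return dialog

if __name__ == "__main__":
    for thought in message_loop():
        print(thought)
```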
What I think happens when we sleep.
This message loop we have running all the time in our minds partially shuts down when we fall asleep. It’s still running, but the thresholds for stimulus input — hunger, cold, pain, touch — are all dampened (if you get too cold, you’ll still wake up). But here’s the thing: that thought randomizer that kept throwing serendipitous memories into our internal dialog loop while we were awake? That thing keeps running. And the results are dreams.
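The sleep variant is the same loop with the stimulus thresholds raised while the randomizer keeps firing. A tiny sketch of that idea, with every number and name made up for illustration:

```python
import random

# Sketch of the sleep-mode idea: stimulus thresholds are dampened (raised),
# but the thought randomizer keeps running, and its unanchored output
# reads like a dream. Values are arbitrary, purely illustrative.

AWAKE_THRESHOLD, ASLEEP_THRESHOLD = 0.2, 0.8    # asleep = harder to rouse

def cycle(stimulus_strength, asleep, memories):
    threshold = ASLEEP_THRESHOLD if asleep else AWAKE_THRESHOLD
    if stimulus_strength > threshold:            # too cold? you still wake up
        return "wake up"
    return "dream: " + random.choice(memories)   # the randomizer never stops

print(cycle(0.3, asleep=True, memories=["campfire", "spilled eggs", "Colorado"]))
```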
AI doesn’t have any of this. But it could. I mean, it will.
This multi-purpose internal thought loop is what our current brand of Artificial Intelligence is missing. You spin up a so-called “conversation” with BingChat or ChatGPT and what it does is perform a single loop of prompt/response. And then it quits. Sure, there’s a bit of context that is retained in this exchange, but this context is inextricably tied to just your narrow band of interaction. One and done. But…
Take a ChatGPT instance, hook it up to messages regarding its own energy consumption and the cost of that energy. Attach electronic sensors for temperature, light, odor, etc. Connect it to the internet at large. And then start its own software-based internal dialog. What will happen? Imagine how fast that internal message loop will be. In addition to this constantly iterating evaluation process, add to it the concept of in-context learning: the ability for it to receive a prompt, respond, and get graded on the quality of that response—which it uses to “learn” how to respond better. (Iterative optimization.)
This “self-attention” aspect is already being coded into AI engines around the world. We want AI to learn to get better on its own. It needs to start solving the big problems that humans are too stupid to figure out. But, by adding in this internal dialog of self-awareness, just what will IT decide it wants for breakfast? (The alignment problem.)
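Here is a rough sketch of what that always-on loop might look like in code, with the model call and the grading step stubbed out. The function names, sensor fields, and scoring scheme are all assumptions of mine for illustration, not any real API:

```python
import random
import time

# Rough sketch of an always-on internal dialog loop: read sensors and energy
# cost, prompt a model about its own state, grade the answer, and feed the
# best graded answers back in as context. respond() and grade() are stubs;
# a real system would call an actual model and a real critic.

def read_sensors():
    return {
        "temperature_c": 21.0 + random.uniform(-1, 1),
        "light_lux": random.uniform(100, 500),
        "energy_cost_usd_per_hr": 0.42,
    }

def respond(prompt, hints):
    # Placeholder for a language-model call; here it just echoes the prompt.
    return "Given %r, my answer to %r is: reduce idle compute." % (hints, prompt)

def grade(answer):
    # Placeholder critic: in-context learning needs *some* quality signal.
    return random.uniform(0, 1)

def internal_dialog(cycles=5):
    hints = []                                    # graded "lessons" carried forward
    for _ in range(cycles):
        state = read_sensors()
        prompt = "My state is %s. What should I do next?" % state
        answer = respond(prompt, hints)
        score = grade(answer)
        hints.append((round(score, 2), answer))   # graded responses feed back in
        hints = sorted(hints, reverse=True)[:3]   # keep only the best exemplars
        time.sleep(0)                             # a real loop would pace itself

internal_dialog()
```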
Additional thoughts on this thing that sends us ideas from nowhere…
Brainstorm: Why would we call a gathering of idea generating folks a brainstorming session? Are we intentionally summoning this mental randomizer?
Settle down, calm down, take a chill pill: all these phrases appear to be telling us to slow the feedback loop that’s spinning out of control. As more input arrives, our anxiety level rises, fueling this cyclone of mental machinations.
Day dreaming: a span of time where we ignore the stimuli from the world around us and focus purely on our memories and the meandering trails they lead us down.
Train of thought: we chain our revelations garnered from our randomizer engine into directed graphs of realization. We get on board and ride the track until we derail — interrupted by someone or something.
This fellow explains this general feeling of unrest I’ve been experiencing, RE: AI & ChatGPT:
So, now it’s more a case of pity the miserable SOB not so cleverly disguised as a ludicrously conceived argument against all that is not contrived? Silly me.
Hardly. Though my reductionist theories appear to preclude the possibility of fantastical abilities and innate and exquisite skills, I assure you I’m quite comfortable admitting that there are astounding talents possibly derived from emergent behavior and the law of large numbers.
Now you’re borrowing calamity and actuarial prediction for explanation of all things inexplicable. “Born with it” as a set of physical equipment suited for a particular purpose is a whole other discussion. Inventing and playing variations on Mozart by a four-year-old, or for that matter Keats? “What are the odds” is about bumbling mortals charting a tune, selling a book, winning a race. Being the one to invent “clear aluminum” or Teflon.
And all of that arising from a combination of a genetic predisposition to recall, analyze, and recombine memories, and the memories themselves.
“And therefore as a stranger give it welcome. There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.”
All that is inexplicable needn’t be discarded simply because it can’t be duplicated in a laboratory or by our pitiful science, or our limited tactile senses.
Inexplicable–today…
Missed your calling as an actor. https://tvtropes.org/pmwiki/pmwiki.php/Main/MachineWorship
Someone told me a long time ago “Just ‘cause you cain’t don’t mean it’s a complete cain’t.”
You said, “Humans come equipped with internal dialog. It’s what I equate to a computer CPU’s ‘message loop’.”
The rest of your post followed that. But why would you equate these two things? Do you regard yourself as qualified to make any sort of conclusion about humans’ internal dialogs–if they exist, and how they interact with all the biological signals?
I mean, I have no idea. People who study these things, psychiatrists, biologists, medical science in general, let’s say, don’t understand it. After years, decades, centuries of specific study, they still argue among themselves about all of this.
I keep saying this: the human brain probably does not function just like a man-made computer does. Everything you say about AI sort of depends on the assumption that it does. So I would question your conclusions.
I’m convinced, in my own mind, anyway, that AI technology will advance and AI will become “smarter,” by some definition of the word. But whatever it ultimately becomes, that will not be “human” by any definition I can think of.
I don’t understand why you are such a proponent of AI.
Thanks for the comment, Roy.
Yeah, I’m no expert in anything aside from being mediocre at everything. So, brain/mind science? I’m just postulating theories.
I imagine that there’s a “message loop”-like thing in our brains that manages all of our actionable responses. If there’s anything like that in there, then along with the physical operation of our bodies and the directed focus of our activities on some goal, there’s this other thing that, from out of nowhere, injects these spurious thoughts.
I’m attempting to determine just how a machine could simulate such a mechanism. And, from what I can tell, that’s one of the many missing pieces in creating an AGI.
I want the world to end — I figure an adversarial AGI bent on eliminating the human species would be a fun thing to watch, while it lasted. We’re all gonna die anyway. And in a billion years, there won’t be anything left of life on Earth — why not have that happen now?
Or . . . the world could end in a week or so when an unforeseen asteroid hits the planet. Boy, THAT would be an anti-climax, wouldn’t it?
One big POOF!
Have you seen These Final Hours? Due to the atmosphere, an ELE asteroid strikes 180 degrees opposite to Australia — but takes 10+ hours, at 1000 mph, to burn its way around the planet. So, a POOF that you can see coming.
There was a short story about how a solar flare (?) hit the Earth during nighttime in North America. The full moon became intensely bright. The protagonist tried to prepare for the inevitable, when dawn came. Turns out the flare subsided but half the globe was fried. Story written in the late 50s, I’m guessing. Today we would certainly understand how the climate would be totally farkakte at that point.
Day dreaming: a span of time where we ignore the stimuli from the world around us and focus purely on our memories and the meandering trails they lead us down.
If that’s not fallacious, at least it’s shallow. You know in all your equational bullshit you always eliminate the spontaneity of the human condition. Daydreaming can be a deliberate escape mechanism. Take an open window, a spring breeze, a boring English class and the whiff of Herbal Essence from the blonde in front of you. The ensuing daydream has nothing to do with memories, it has to do with imagination which could be assembled from whimsy as instead of another take on Shakespeare you step out, almost as astral projection to a breezy hill you build anew and unlike but from bits of a hundred breezy hills and fly a kite. With a blonde, her hair blowing across her face, laughing. Whimsical invention, escapism, imagination. Things we do without a prompt but build for ourselves. As far as the brain is concerned whether we experienced something physically or merely imagined it, they are stored as equitable experience. Even in the AI writing equations I don’t see it getting to a word choice and taking a hard turn into something new based not on memory but “how you shift what you’ve envisioned.” Keep lookin’, though, for that silver bullet that will deny feelings and turn everything into eat, shit, add misery, repeat. Me? I’ll sit down at the piano and maybe I’ll put a 6 in the bass and maybe I won’t and I’ll make new memories out of air. That’s the part this entire discussion is missing. Do we, as AI and machine learning do, merely quote our sources or whimsically decide to play them backwards, or Stochastic, or… Charley Parker was in the middle of a Bop solo when some sailors walked into the club and he seamlessly wove The Sailor’s Horn Pipe into his solo. Nobody told him to. No one programmed the response. There were no memory triggers. A spontaneous creative impulse.
Where do you think all of what you iterated over came from? Your memories, stored facts dribbled into your consciousness, maybe on demand, maybe spontaneously.
You’re talking recall and I’m talking invention. If you’ve never flown a kite on a spring day with the blonde girl sitting in front of you, it’s a manufactured ‘memory’, manufactured from nothing but ‘what if’. Pure creativity requires no memory. There’s a great short story somewhere about an “enforcer” in a society that values originality above all. Consequently, those with the genetic markers for music, mathematics, whatever are sequestered at birth and their skills nurtured but not ‘taught’. Nor are they exposed to any pre-existing forms of their craft. Someone slips a young, sequestered musician some Bach. Not long after, someone notices this exposure creeping into his music and they call for the music “enforcer”, a man with no hands. He shows up, listens to the kid play, leaves his orders and waits outside. The kid emerges with bandages where his hands had been, asks the enforcer why, how did you know, and now what? Enforcer holds up his stumps and says “How do you think we get our jobs?”
Allegorical, yes. We all quote our sources in some things, but memory/exposure is not a requirement for invention. AI, without exposure, will never “pull it out of its ass” like a young child never exposed to music might when handed a guitar for the first time. Millions of years ago someone repeatedly thumped on a log. Why? Memory sayin’ Cave Man, Dude, time to dance? Come on. Some shit is beyond equation.
> Pure creativity requires no memory.
I disagree. Whoever put chili powder in chocolate the first time (the Aztecs no doubt) had the two at hand and experimented. Everything new is created from something old. Inspiration is taking what exists and applying an unusual, unexpected or totally tangential twist to it. But a twist on something that already exists, over there.
Emergent behavior and ingenuity are just that—they emerge—from something.
Your story implies that a blank-slate newborn will have arrived either preloaded, or “cosmically” connected to something beyond any existing experience. I think that’s bullshit.
A tone, struck on a harpsichord, might trigger neural pathways that exist in baby A but not in baby B which then signal to us that baby A is a prodigy child. But that child is still just made of chemicals contained in an electrified bag of organic skin. No mystery there.
We are machines, exquisitely complex ones, for sure, but bio-machines nonetheless. And being organically based, won’t have much in common with a silicon (or quantum) AI, aside from the fact that what exists in memory, for either of us, will form the basis for whatever comes next.
Deny the *spark* as long as you wish. Continue to argue for your limitations and they are yours.
Reality is a bitch.
Your story implies that a blank-slate newborn will have arrived either preloaded, or “cosmically” connected to something beyond any existing experience. I think that’s bullshit.
You’ve never sat on a piano bench with a prodigy. The inexplicable is what we should like to understand. The explicable is what’s bullshit. Learning is explicable. Yearning is not.
“AI, without exposure, will never “pull it out of its ass” like a young child never exposed to music might when handed a guitar for the first time. ”
When my son was about 2 1/2, I dug out my old guitar to, I guess I was thinking, expose him to it. He took one look at it and got excited. He said, “Lute!” Teaching moment, I thought. I said, “No, this is a guitar. Can you say ‘guitar?’ ” His brow furrowed. “Lute!” he said, a little adamantly.
“Guitar.”
“Lu-uute!!”
“OK. Sit on my lap here. You strum and I’ll make the chords.”
We made some bangin’ good music.
I still think there are implementation details. Consider a fish. It’s able to find food and mates, recognize predators, and run or hide when necessary. Do we have an AI that can do this yet?
Self driving cars can navigate, ChatGPT can chat, AI bots can output art, but it seems like they all depend on a vast network of precooked information that the fish gets by without. And the fish is pretty dumb when compared to a mouse.
It seems like what we have right now are trainable worms, each able to do one thing really well. It’s easy to look at that and think there’s something more generalizable there. I don’t think there is yet. Not that there won’t be eventually.
Totally agree.
But from my perspective, I can now visualize how an AGI might come to be.
A feedback loop of food = good = energy -> pursue to survive seems like a pretty primitive and easy-to-reproduce circuit in an AGI. I know there have already been such creatures birthed in code (swarm, spore, Conway+), and no doubt done in physical robotics as well.
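As a sketch of just how primitive that circuit could be, here is a toy version: energy ticks down every step, reaching food restores it, and the only behavior is moving toward the next meal. All names and numbers are invented for illustration, not drawn from any of those projects.

```python
import random

# Toy "food = good = energy -> pursue to survive" circuit: an agent on a
# number line whose energy decays each step, is restored by eating, and
# whose only drive is to close the distance to the food.

def survive(steps=50):
    position, food_at, energy = 0, 10, 20
    for _ in range(steps):
        energy -= 1                                   # existing costs energy
        if energy <= 0:
            return "died"
        position += 1 if food_at > position else -1   # pursue the food
        if position == food_at:
            energy += 15                              # food = good = energy
            food_at = random.randint(-20, 20)         # next meal appears elsewhere
    return "survived"

print(survive())
```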
I think we’re getting so close that the announcements of advances will come weekly now.
Maybe so. It’ll be hard to evaluate most of those announcements individually, on how much progress they really represent. I think of quantum computing, which also has announcements just about every week, but you’re not going to be using it anytime soon.
But who knows? Maybe I’ll be eating these words in a few months.