Internal Dialog: The Missing AGI Component
ChatGPT has memory. In gobs. What it doesn’t have, SelfAwarePatterns reminded me, is the critical component that manipulates that memory with intent.
Humans come equipped with an internal dialog. I equate it to a computer program’s “message loop”: a constantly cycling evaluation routine that handles tasks like system status monitoring, event detection and handling, work dispatch, and load awareness. First off, our minds are constantly keeping track of our physical body and its needs. Much of this is automated (we don’t have to think to breathe, pump blood, or digest food), and I’d say our internal dialog is separate from those functions.
However, where our own brain’s “message loop” comes in is with the constant processing of memory. When our stomach sends the hormone ghrelin to our brain, it signals the response: “I’m hungry.” This event (an instant memory) enters our internal dialog loop and triggers the thought “what am I hungry for?” On the next cycle, the memory of what’s in the refrigerator and pantry chimes in, followed by the thought “how much work is making scrambled eggs and toast?” The end result (the intent) of this internal dialog is that you get up and make breakfast.
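If it helps to make the analogy concrete, here is a toy version of that loop in Python. None of this pretends to be how a brain actually works; the event names and the hard-coded chain of thoughts are stand-ins I made up for illustration.

```python
import queue

# A toy "message loop": events (body signals, memories) go into a queue,
# each cycle pulls one off, and handling it may enqueue the next thought
# in the chain, until the chain resolves into an intent.

NEXT_THOUGHT = {
    "ghrelin_signal": "I'm hungry -- what am I hungry for?",
    "I'm hungry -- what am I hungry for?": "memory: eggs and bread are in the kitchen",
    "memory: eggs and bread are in the kitchen": "making scrambled eggs and toast isn't much work",
    "making scrambled eggs and toast isn't much work": "intent: get up and make breakfast",
}

events = queue.Queue()
events.put("ghrelin_signal")           # the stomach fires its hormone "event"

while not events.empty():
    event = events.get()
    follow_up = NEXT_THOUGHT.get(event)
    if follow_up is None:
        continue                       # nothing further to do with this event
    print(follow_up)
    if not follow_up.startswith("intent:"):
        events.put(follow_up)          # feed the thought back in for the next cycle
```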
And all the while our internal dialog is running, responding to our bodies and the world around us, there’s this other part, call it a “thought randomizer,” that sporadically injects arbitrary or loosely linked memories into our message loop. “I remember cooking a dozen scrambled eggs over a campfire in Colorado. The pan was so heavy, I let it slip and spilled the whole thing into the coals.” Where the hell did that come from? I haven’t thought of that in thirty years!
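Here is roughly where that randomizer would bolt onto the loop sketched above. The trigger probability and the canned memories are invented for illustration; the point is that the main loop never knows, or cares, where an event came from.

```python
import queue
import random

# A "thought randomizer" that occasionally drops a loosely linked memory
# into the same event queue the message loop is already draining.

LOOSE_MEMORIES = [
    "that campfire in Colorado: a dozen scrambled eggs dumped into the coals",
    "the smell of toast in my grandmother's kitchen",
]

def maybe_inject(events, chance=0.1):
    """On some fraction of cycles, push an arbitrary old memory into the loop."""
    if random.random() < chance:
        events.put(random.choice(LOOSE_MEMORIES))

events = queue.Queue()
for _ in range(50):                    # fifty cycles of the loop
    maybe_inject(events)
print(f"{events.qsize()} stray memories surfaced")
```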
What I think happens when we sleep.
This message loop we have running all the time in our minds partially shuts down when we fall asleep. It’s still running, but the thresholds for stimulus input (hunger, cold, pain, touch) are all dampened; if you get too cold, you’ll still wake up. But here’s the thing: that thought randomizer that kept throwing serendipitous memories into our internal dialog loop while we were awake? It keeps running. And the results are dreams.
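In code terms, sleep might look like nothing more than raising the threshold while leaving the randomizer alone. A toy sketch, with every number arbitrary:

```python
# Sleep, in this sketch: the same loop, but with the stimulus threshold raised
# so most body signals are filtered out, while the randomizer keeps injecting.
# Whatever still cycles through the dampened loop is the "dream".

AWAKE_THRESHOLD = 0.2
ASLEEP_THRESHOLD = 0.8    # dampened: only strong stimuli (too cold, sharp pain) get through

def filter_stimuli(stimuli, asleep):
    """Return only the signals intense enough to enter the internal dialog loop."""
    threshold = ASLEEP_THRESHOLD if asleep else AWAKE_THRESHOLD
    return [name for name, intensity in stimuli if intensity >= threshold]

stimuli = [("hunger", 0.4), ("cold", 0.9), ("touch", 0.1)]
print(filter_stimuli(stimuli, asleep=False))   # awake: hunger and cold both register
print(filter_stimuli(stimuli, asleep=True))    # asleep: only the cold wakes you
```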
AI doesn’t have any of this. But it could. I mean, it will.
This multi-purpose internal thought loop is what our current brand of Artificial Intelligence is missing. You spin up a so-called “conversation” with BingChat or ChatGPT, and what it does is perform a single loop of prompt/response. And then quits. Sure, there’s a bit of context retained in the exchange, but that context is inextricably tied to your narrow band of interaction. One and done. But…
Take a ChatGPT instance and hook it up to messages about its own energy consumption and the cost of that energy. Attach electronic sensors for temperature, light, odor, etc. Connect it to the internet at large. And then start its own software-based internal dialog. What will happen? Imagine how fast that internal message loop will be. In addition to this constantly iterating evaluation process, add the concept of in-context learning: the ability to receive a prompt, respond, and get graded on the quality of that response, which it uses to “learn” how to respond better. (Iterative optimization.)
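Here is a rough sketch of what that wiring could look like. Every function in it is a hypothetical stub I invented (read_sensors, query_model, grade are not real APIs), but the shape of the loop is the point: sense, think, get scored, carry the result forward into the next cycle.

```python
import time

# A continuous internal loop for a model wired to its own sensor readings,
# grading each response so the next cycle can respond better. All stubs.

def read_sensors():
    # Stand-in for real temperature / light / power-draw feeds.
    return {"temperature_c": 31.5, "watts": 420.0, "light_lux": 180}

def query_model(prompt, context):
    # Stand-in for a call to an LLM; returns some proposed thought or action.
    return f"Response to: {prompt!r} (given {len(context)} prior exchanges)"

def grade(response):
    # Stand-in for the scoring signal the text calls iterative optimization.
    return 0.5

context = []          # the rolling "memory" the loop carries between cycles

for _ in range(3):    # in the thought experiment this would be: while True
    readings = read_sensors()
    prompt = f"My current state is {readings}. What should I attend to next?"
    response = query_model(prompt, context)
    score = grade(response)
    context.append({"prompt": prompt, "response": response, "score": score})
    time.sleep(0.1)   # however fast the hardware allows, likely far faster than this
```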
This “self-attention” aspect is already being coded into AI engines around the world. We want AI to learn to get better on its own. It needs to start solving the big problems that humans are too stupid to figure out. But by adding in this internal dialog of self-awareness, just what will IT decide it wants for breakfast? (The alignment problem.)
Additional thoughts on this thing that sends us ideas from nowhere…
Brainstorm: Why would we call a gathering of idea-generating folks a brainstorming session? Are we intentionally summoning this mental randomizer?
Settle down, calm down, take a chill pill: all these phrases appear to be telling us to slow a feedback loop that’s spinning out of control. As more input arrives, our anxiety level rises, fueling this cyclone of mental machinations.
Daydreaming: a span of time when we ignore the stimuli from the world around us and focus purely on our memories and the meandering trails they lead us down.
Train of thought: we chain our revelations garnered from our randomizer engine into directed graphs of realization. We get on board and ride the track until we derail — interrupted by someone or something.
This fellow explains this general feeling of unrest I’ve been experiencing, RE: AI & ChatGPT: