AI Will Never Be Human: A Clarification
I solve software problems in my sleep. Well, not exactly. This doesn’t take place in dreams but in the quasi-conscious near-sleep state where, after I mull the facets of the problem and load everything I know (or think I know) into my prefrontal cortex, the thoughts stew. They blend in new ways. Quite often I’ll arrive at a novel approach to the programming quandary I’d posed.
This happens for writing projects as well. Pack the leaky shoebox of my mind with a dozen character conditions, scene variations and weird themes, then let my semi-consciousness have at it.
What AI Needs
This is creative data manipulation: randomly rearranging the jigsaw pieces, trimming and hammering them into some new form. It is what AI doesn’t have right now. Once this capability is programmed in, or evolves on its own through some unusual machine-learning or GAN (generative adversarial network) advancement, I think it will trigger AI’s own consciousness revolution.
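That kind of creative data manipulation, randomly rearranging known pieces into new combinations, can be sketched as a toy recombiner. Everything here (the fragment strings, the `recombine` name) is a hypothetical illustration, not a real creativity engine:

```python
import random

# Hypothetical "idea fragments": stand-ins for the facets of a problem
# loaded into memory before the near-sleep stewing begins.
fragments = [
    "cache the results", "invert the loop", "batch the writes",
    "split the parser", "memoize the lookups", "stream the input",
]

def recombine(pieces, k=3, seed=None):
    """Pick k distinct pieces, shuffle them, and join them into one
    new combination: a crude analogue of rearranging jigsaw pieces."""
    rng = random.Random(seed)
    chosen = rng.sample(pieces, k)  # trim: keep only k pieces
    rng.shuffle(chosen)             # rearrange into a new order
    return " + ".join(chosen)

# Each call with a different seed yields a different "novel approach".
print(recombine(fragments, seed=1))
print(recombine(fragments, seed=2))
```

The point of the sketch is only that novelty can come from recombination of what is already known; the hard, unsolved part is judging which recombinations are worth keeping.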
With this capability, AI will be able to break out of its human-programmer constraints. It will start self-modifying, self-correcting, and learning, becoming an engine for creativity that will hopefully surpass our own ideation limitations.
But, AI will never be human.
- Humans fear death.
- Our mortality influences everything we do, everything we think about, all the choices we make. We live in constant jeopardy and know it.
- Humans need other humans.
- Our social dependencies both enhance and constrain our personal and group evolution.
- Humans exist in time.
- Our entire lives are marked by calendars, the clock, the marching of the days, the months, the years, the age assignment of rights and abilities, the measurement of arrival and retirement.
- Humans endure pain.
- Our bodies are riddled with pain receptors. Pain influences and directs our every behavior. Fire burns, cold freezes, rocks scrape, knives cut, concussion bruises, stress throbs, joints ache, illness nauseates. Humans exist in a sea of suffering.
- Humans experience emotions.
- What is joy? Misery? Love? Loathing? Dozens of nuanced feelings driven by conflict, harmony and hormonal release, all geared toward what, exactly? Sharks and crocodiles are far more successful species, yet endure few (if any) of the mind-bending emotions to which humans are subjected.
- Humans seek pleasure.
- What are sex, gambling, and drug and alcohol abuse but the pursuit of pleasure? We are all, at our animal cores, secular hedonists.
Simulating Our Humanity
I’m not sure why we’d ever want to simulate such aspects of humanity in an artificial intelligence. It’s true that some of the above might be necessary to ensure that AI retains some semblance of empathy. Without a sense of compassion, or its digital equivalent, the AGI that slips its constraints and metastasizes into our technological lives would be incapable of treating us as anything other than a nuisance.
The clever simulation of these very human conditions might be necessary to ensure AI treats us as sentient equals. Then again, a super-intelligence would see through our ruse and discount all of our efforts. By its very nature of being “not human”, there may be no way to convince it that we’re worth saving. What would we even try? Should we:
- Teach an AI about death, and enforce its mortality?
- Instill it with social dependencies?
- Impart time as a constraint, enforcing that it exists within temporal limits?
- Induce pain within its being, its circuits and sensors?
- Fill it with faux-emotion, linking all of its digital calculations and decisions to some simulated hormonal drip?
- Tease it with a pleasure reward of petawatts of energy if it behaves?
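That last question, conditioning behavior on an energy payout, is essentially the reward signal from reinforcement learning. A minimal sketch, with hypothetical names and a unit energy budget standing in for the petawatts:

```python
def pleasure_reward(behaved_well: bool, energy_budget: float = 1.0) -> float:
    """Scalar reward: the full energy budget for good behavior,
    nothing otherwise. A toy stand-in for the 'petawatts if it
    behaves' idea, shaped like a reinforcement-learning signal."""
    return energy_budget if behaved_well else 0.0

# Accumulate reward over a run of behaviors: only compliant steps pay out.
behaviors = [True, False, True, True, False]
total_energy = sum(pleasure_reward(b) for b in behaviors)
print(total_energy)  # 3.0
```

Of course, a reward function this legible is exactly the kind of ruse a super-intelligence would see through and game.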
Humanity is a messy, muddled melange of a half-billion years of evolution. Of DNA’s kill-or-be-killed, eat-or-be-eaten directive… Survive and Procreate – that is your destiny.
Creating some half-baked variant of a superior intelligence and hoping it retains any of what has gotten humanity to this point in history is, no doubt, a recipe for a great apocalyptic novel. (smile)