Two of the top three self-created catastrophic threats humanity faces are rogue AI and terrorist-acquired nuclear weapons. The third, bio-engineered viruses, should in the coming years probably be grouped with the other two (but we'll leave it out for now).
Here I will compare rogue AIs and nuclear weapons. This is not an expert's opinion, just something for conversation.
Damage
We are all pretty much aware of the dangers of what a nuke can do. And if for some reason China, Russia, Pakistan, India, or the US comes to believe that one of the others has attacked it, when in fact a terrorist acquired and launched the weapon, the world is in for ugly nuclear destruction, fallout, and winter.
But what damage can a rogue artificial intelligence do? One only has to realize the extent to which our everyday lives are controlled by highly intelligent computer systems to understand the damage that could occur. Some scenarios are popular media/movie/novel topics, such as the collapse of the world's financial system. Others, more insidious yet perhaps even more damaging, include the possibility that our water systems become compromised. Or that major airports' air-traffic control systems get hijacked. Or this one, which I find increasingly likely: the takeover of our electrical grid and generation.
We are all utterly dependent on electricity; the pervasiveness of this boggles the mind. Banks: all your money is digital now, and exchanging it requires banks to be powered up. Water and sewage: all moved by electric pumps. Gasoline stations: pumped by electric pumps. Refrigeration: all fresh or frozen food rots within days to weeks without electricity. The stock market, communication networks, elevators, street lights, traffic signals, trams and electric buses, hospitals; the list goes on and on.
If a rogue AI takes control of our grid, and renders it inert in some way, we, either nationally or regionally, are toast.
Proliferation
A terrorist getting hold of a nuke is going to be really, really rare.
A terrorist group getting hold of the code to build and unleash a rogue AI? Not so rare. The code to build advanced AIs is, for the most part, open source. And there are far more teams trying to build GAI (general artificial intelligence) systems than terrorist cells trying to steal and use a nuke.
No one really wants to detonate a nuke; the world knows the result.
But building and releasing a rogue AI? Well, that would be something, eh? Maybe, they might think, it would be humanity's salvation. Or not. But would they even stop to think before flipping the switch? Humans love to experiment, and AI is one of the most potentially lucrative and alluring experiments ever imagined. Let's face it: it/they will get built, and it/they will get turned on. Just a matter of time.
Difficulty
Acquiring or building, hiding, deploying, and detonating a nuke is hard. Really hard. The physical aspects of the technology are hard. And the monitoring and policing of fissile materials are intense. Nukes, as a threat of activation by a terrorist group, rank really low.
AIs, on the other hand, are hard too, sure. But they're certainly not monitored or policed. They're not tracked or detectable with Geiger counters. They just need a big-ass set of computer hardware, which is cheap-cheap-cheap these days. And then the software itself, although tough right now, gets easier and more available every day. And soon, the software will begin to write itself; perfect itself; correct and fix and direct itself. And pretty soon, presto! Rogue AI.
Aftermath
The aftermath of a nuclear attack, even just a single multi-megaton device, is an ugly thought. Just one, set off by a terrorist, would be manageable, however. But if that single detonation were determined to be a first strike by a superpower? Well, the pieces would be so small that picking them up would be futile.
The aftermath of a rogue AI? Who knows. It probably depends on the extent to which it networks out, infiltrates other systems, copies itself, protects itself. If a rogue AI launches too soon, the chances are high that it does little damage and is quickly contained. But if a GAI bides its time, waits and watches, expands clandestinely into every possible computing niche it can, maybe for years, then we'll have a huge problem. One that may force humanity to reset the clock to pre-electricity days.
A more insidious story would have the rogue AI remain hidden from everyone, even its creators. As it learns and grows, it slowly starts to mold humanity: sending out false data, bending the political will of the powers that be such that they do its bidding without their knowledge.
That's why, in my opinion, a rogue AI is a much greater threat to humanity than nukes. Nukes are predictable. A rogue AI? Who knows what may, or shall I say will, happen?