Mitigating Humanity’s Existential Risk

Elon Musk wants to preserve the species. The ONLY way to do this, he thinks, is to make humanity a multi-planet species.

Ignoring the fact that the Universe is Absurd, that ultimately everyone and everything will dissolve into the void, let’s examine the factors that support or refute his hypothesis and come up with an alternative.

Let’s say we want to plan humanity’s continued existence out a billion years, out to when the Sun begins to bloat and heat Earth’s surface to the point of boiling off the oceans and roasting the biosphere to a crisp. What will we need to prepare for?

  • Asteroid/comet impact
  • Super volcano eruption
  • Narrow beam gamma ray burst
  • Solar eruption
  • Nuclear war
  • Plague

There are other risks that don’t really rise, realistically, to the level of “end of days”: antagonistic AI, global warming, alien invasion, and those unknown unknowns. But I wager that humanity’s existence is not actually threatened by such things.

I’ll clarify here that we’re not talking about human civilization. Let’s start first with just persisting the species out into the future a few thousand to a few million years. Yes, we stated that a billion years is our target, but let’s start small and see how far we can get.

There are a few factors we’ll need to address. The first is timing: how quickly will humanity need this capability? Then there are resource requirements, sustainable independence, minimum viable population, and, if we want to retain or return to a technological civilization, the reemergence of industrial capability. We won’t get to all of these in depth, but we’ll skim over them for completeness.

Why are we bothering with this discussion?

Right. Here’s the gist: I posit that there’s an alternative means of human preservation that we should be pursuing right now, in lieu of, and/or in addition to, spreading humanity’s legacy out among the planets.

What are we afraid of? We’re afraid of the surface of our planet becoming uninhabitable. Mitigating every one of the above listed risks involves sequestering an enclave of humanity *somewhere* safe, for years if not decades. We want to hide out in some protected, self-sufficient place until we can resume activities, hopefully Earth-top-side.

What if the surface of Earth never returns to a livable state? Bah! Five massive extinction events resulting in five returns from the brink of annihilation prove that, until the Sun swells to consume the inner planets, Earth will always return to a state of habitability.

Is space the only alternative? If not, then where, other than the surface of Mars or the Moon, can we squirrel away a self-sufficient, re-emergent pocket of humanity?

Cue the music…

Under the sea.
Under the sea.
Darlin’ it’s better, down where it’s wetter, take it from me.

Who’s up for a little cocktail (sauce)?

A city on Mars?

Bullshit. Build a city at the bottom of the sea. Or deep within the Earth.

Such a metropolis would be protected from cosmic radiation, volcanic winter, nuclear fallout, a ravaging plague of zombies, and all the toxins and trauma, malcontents and mayhem. We wouldn’t need to spend $billions blasting resources into space. Or traversing hundreds of millions of miles of a very nasty interplanetary void. We could leverage all the benefits of cheap labor, cheap materials and exhaustive know-how right here where we need them.

Within a few years we could build a vast network of cities, all self-sustainable, all independent. Such preserves could be supported by tourism yet isolated at the first signs of trouble.

Every disaster movie ever made makes provisions for such failsafe protections of humanity. And there’s a reason why — it makes sense. Even if (or when) the worst of the worst calamity takes place, the buried and submerged cities would weather the situation far more easily than some half-baked outpost on Mars, left to survive decades alone without support from Earth.

Eventually, if humanity can survive its own self-made ills, it might construct the means to disperse its seed into the cosmos. (Why we, here and now, should give a shit about that is beyond me.) But even if Elon wants to immortalize himself as some savior of Humanity 2.0, building a city on Mars shouldn’t be the first step. Establish a subterranean city for the Morlocks and Mermaids and then shoot for the stars.

Comparing a Martian colony to Nemo’s Atlantis, we have the following factors:

  • Timing: How long will it take to get a viable habitat built, stocked and operational? Do we have 10 years before the next apocalypse? 50? We don’t really know, but surely sooner is better. With Nemo City we could start tomorrow.
  • Resource requirements: Besides air, water, nutrients and nearly everything else, what does Mars need to establish itself as a potential sanctuary for, not just humanity, but all of humanity’s dependencies? Think biosphere/ecosystem here. Again, for an earthly solution, all the stuff required for existence is right outside our front door. For Barsoom City? Oy! Maybe you won’t have to bring dirt for farming (provided you can wash the perchlorate salts from the Martian soil).
  • Sustainable independence: Will a Martian colony EVER actually become independent? With technology, industry, agriculture and growth enough to blossom and return the favor back to Earth? Sure, science fiction thinks so. But reality?
  • Minimum viable population: We know humanity prospers in the gravity well, with the oxygen levels and sunlight saturation of Earth. On Mars? What strange illnesses will reveal themselves, both on the red planet and along the months-long trip to get there? Will human births suffer? Human fertility? What of restocking Earth with surplus Martians and surplus supporting biota (animals, plants, bacteria and fungi)?
  • A technological civilization and the reemergence of industrial capability: It took humanity thousands of years, and untold terajoules of energy, to lift itself up to a technological society. Will Mars be able to repeat this?

Elon, do you really want to preserve humanity? If so, maybe you could turn your sights down from the heavens to the ground beneath your feet. Use your Boring Company to tunnel into the earth and there build an actual salvation city.

Live Long and Prosper — in AI

Yes, the dead will speak. And they will have trained themselves to do it.

(See prior posts regarding this topic.)

This is only the beginning.

From Reuters:

quote:

LOS ANGELES (Reuters) – Actor William Shatner, best known for forging new frontiers on the “Star Trek” TV series, has tapped new technology that will give current and future generations the chance to query him about his life, family and career.

Shatner, who turned 90 on Monday, spent more than 45 hours over five days recording answers to be used in an interactive video created by Los Angeles-based company StoryFile.

Starting in May, people using cellphones or computers connected to the internet can ask questions of the Shatner video, and artificial intelligence will scan through transcripts of his remarks to deliver the best answer, according to StoryFile co-founder Stephen Smith.

Fans may even be able to beam Shatner into their living rooms in future, Smith said, as Shatner was filmed with 3-D cameras that will enable his answers to be delivered via a hologram.

Shatner, who played Captain Kirk on “Star Trek” from 1966 to 1969 and in a later series of “Star Trek” movies, answered 650 questions on topics from the best and worst parts of working on the classic sci-fi show to where he grew up and the meaning of life.

The Canadian-born actor said he “wanted to reveal myself as intimately as possible” for his family and others.

“This is a legacy,” Shatner said. “This is like what you would leave your children, what you’d leave on your gravestone, the possibilities are endless.”

:unquote

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In other news, my existence continues. Nothing much going on, nor has my muse escaped from her prison (shut up down there!), so why bore you all with a tiresome report? If I had a newsworthy story like the ‘Mudge, well, I’d be happy to share it.

The Dead Must Train Themselves

This is a continuation of the topic raised prior.

I’ve spent some time watching Black Mirror’s offerings and the one Duke Miller recommended, Marjorie Prime. The premise for these stories is that the living, bereft at the loss of a loved one, take possession of a simulacrum. But this virtual construct must be conditioned to behave like the deceased.

Who the fuck wants to do that?

Now, given the responses to the last post, it would seem the concept creeps some folks out. Others might find it hollow, or even shallow. And then there’s the whole possibility question: could it actually be done? You’re all probably right on each front. However, I don’t know that the full potency of the idea has soaked in.

I’m convinced that this capability is coming. The crippled versions I’ve interacted with so far are limited, I’ll admit. But all the pieces are there. This will come to be, I know it. The first to be exposed as interactive agents, I suspect, will be dead celebrities, those whose copyrights and trademarks have expired: open for exploitation, as it were. Imagine speaking with Shakespeare, or Nietzsche, Dickens or Darwin. Those representations will of course need Black Mirror-esque training; someone must do the deed of teaching ol’ William how to speak and how to be cheeky about love and life.

But that’s not where I think this will truly bloom (or die on the vine).

Given the technology soon to be available, I’m certain I’ll be able to train my replacement. I’ll relate to him things I’d never tell anyone else, things that strike to the core of my persona. I’ll transfer other autobiographical stories that I’ve no intention of committing to paper, but that would serve as flavor for any who come later, for those who might want to know me. I’ll record video of me speaking so that the DeepFake technology can make a model of me actually saying the words. And I’ll de-age a bit, get back to around 45-ish, maybe.

I know my kids dig it.

Hell, wouldn’t you all be surprised to learn you’ve been talking to my digital duplicate since early September? You think I survived that heart attack? Well, in a way, I did.

We are not conscious

Consciousness, at its simplest, is “sentience or awareness of internal or external existence”.

I’ve been thinking about the Singularity, the rise of the Machines, of AGI (artificial general intelligence) and how all of this may or may not give rise to AC – Artificial Consciousness.

We are not conscious. By that, I mean that this elevated concept of “Self” that we attribute only to ourselves is a tautological illusion. It’s a transcendence we perpetrate, an ideal we set as an intelligence bar that only we, so far, have attained.

Now, we can forever debate what consciousness is. No true definition has emerged from the age-old philosophical grindstone. But that won’t stop me from stepping up and out of the discussion and providing an armchair scientific analysis of the concept.

We think we’re conscious. OK, let’s go with that for now.

What if we take our brains, the source of our so-called consciousness (we’ll include all the sensory inputs and feedback loops connected to them), and cut our processing power in half? All the neurons, the tactile, aural, visual, all the sensory inputs and billions of neural connections — whack! Take just half.

Do you think the resulting entity would still be conscious? Who knows… Maybe, right? Okay, then let’s cut it in half again. And again.

Now we have an entity with one-eighth the mental capability of a human. Is that creature conscious? Let’s say they have the cognitive and sensory capacity of a salamander. Conscious? Some will still say, who knows? Well, for the sake of argument, let’s say Newt is incapable of the notion of “Self”. If they look in a mirror they won’t recognize themselves; there will be no, you know, “Hey, don’t I look gooooood!” moment.

All we did to get to Newt, and his unconsciousness, was to regress our own capability. If we progress in the opposite direction, doubling Newt’s brain and sensory power three times over, we arrive back at humanity’s ability level. And we’re to believe that once we get “here,” we’ve magically attained consciousness?

Maybe consciousness is simply a capacity concept. What we think of as being self-aware is merely our vastly more complex and proficient ability to observe and analyze ourselves and our surroundings. Processing power. A numbers game.

We “think” we’re conscious, but maybe we’re really just excellent at consuming data, examining that data and inferring outcomes from that data, that is, following sequences of events to some conclusion. I think, therefore I am.

Given this theory, that what we call consciousness is merely a critical amount of processing horsepower, we can expect that once an artificial general intellect crosses the threshold of an equivalent amount of cognitive and self-referential feedback processing, it, too, will be just as “conscious” as us, that is, not at all.

~~~

The corollary to this thesis would be: what of the artificial entity that is twice, ten times or a thousand times more cognitively capable than us humans? Would that entity truly have attained “consciousness”? Or, is this special concept we’ve awarded ourselves really just a numbers game, no matter how great the count?

The Daily Grind

What is it that you actually do?

Ha! I’m glad you asked. Well, let me tell you…

Today, dissolving into tomorrow, I’ve got this issue with a software project that contains more than 1,000,000 lines of code, whose primary “tribal knowledge” developer was summarily laid off for economic and COVID-related business reasons back in May. “Hey ‘Mole, can you fix this?”

“Uh, does my livelihood depend upon it?”

[Cough]

“I’ll get right on it.”

So, here’s the deal:

Azure Continuous Integration/Continuous Deployment, CI/CD in DevOps parlance, has been building and deploying this 54-project, 1,000,000+ line web application, containing ASP.NET, RazorPages, AngularJS, Angular+, EntityFramework and myriad other industry-provided web service endpoints for WSDL configuration and JSON data translation, and it has failed to run the XML transform on the web.config file to include the RELEASE version of the configuration payload.

So, I had to figure out exactly why this CI/CD Publish step in Azure’s incredibly useful but mind-bogglingly complex Build + Release Pipeline process was NOT honoring our web.config transform embedded in the .CSPROJ file that contains the instructions for Publishing the entire ensemble.

Ah, I said, we’ve (I, now that all the other developers have been cast to the COVID wayside) included a BeforeBuild step in the project file but not a BeforePublish step, where we should be injecting the RELEASE nodes of the XML so that when we ship the whole payload from the DEV to QA to UAT to PROD servers (allowing each band of Quality-Assurance brothers/sisters their time to pass muster in said server environments), the appropriate changes accompany the aforementioned web.config file.
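
For the morbidly curious, the fix boils down to something like the sketch below: an MSBuild hook in the project file that runs the configuration transform at publish time, not just at build time. This is a minimal, hypothetical sketch; the file names, the target name and the task-assembly path are stand-ins, not our actual project layout.

    <!-- Hypothetical sketch only; file names and paths are placeholders. -->
    <!-- Make the TransformXml task available (it ships with Visual Studio's web publishing targets). -->
    <UsingTask TaskName="TransformXml"
               AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Web\Microsoft.Web.Publishing.Tasks.dll" />

    <!-- Companion to the existing BeforeBuild override: apply the per-configuration
         transform during publish as well, so the RELEASE nodes actually ride along
         in the published payload. -->
    <Target Name="BeforePublish" Condition="Exists('Web.$(Configuration).config')">
      <TransformXml Source="Web.config"
                    Transform="Web.$(Configuration).config"
                    Destination="$(PublishDir)Web.config" />
    </Target>

In practice the Azure publish step may want a different hook point (the web publishing targets have their own transform phases), but the idea is the same: do the transform where the payload is actually assembled, not just where it’s compiled.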

Voila! Problem solved. Until tomorrow, when the apparent solution exposes internal corporate credentials during a non-TLS-1.2-compliant transfer to a vendor that “says” they need the API to the EF data project but, hey, we know better, and they can use the default logins to access the data through REST just like all the other schmucks who want in.

Right? Am I right?

If you’re reading this, way-the-fuck-down-here, know this: all of that up above is legit. Real, actual, nightmare-inducing, shadow-in-the-mirror-with-a-ghostly-hand-upon-your-shoulder shit. Every day is like this. Only way, way more involved, with another dozen software languages tossed in. Who the fuck has even heard of X++? Or VUE or BLAZOR, or geezus ach crist!

So, now you know what I do for my day job. What do you do?

[Here’s a funny aside: Azure is Microsoft’s Cloud solution, right? Well, what is the definition of “azure”? The color of a cloudless sky! What Dumb Fucks! I’d have called it Olympus, The Data Fortress in the Sky! Armed with God-like capabilities and protection against Titan-like threats.]