The Deadly Gamble on Super A.I.

With the fate of humanity in the balance, even a small risk demands serious action.

This is the third installment of “Privatizing the Apocalypse,” a four-part essay to be published throughout October. Read the previous installments here — Part 1: “The 50/50 Murder” and Part 2: “Deterrence — and the Undeterrable”.


The most famous meteorite in prehistory struck Mexico some 66 million years ago. It unleashed the explosive force of 10 billion Hiroshima-scale bombs. Three-fourths of all species perished in the aftermath, including every breed of dinosaur. And this was merely the fifth-worst mass extinction of the past half-billion years. (The worst wiped out 96 percent of all species.)

The universe will continue to lob Yucatan-grade rocks our way. Intriguingly, we might be able to fend one off if we have enough warning. This foresight would be costly, however, and we’re quite unlikely to get smacked until eons after our great-great-great-great-grandchildren are gone. So should we even bother to track this stuff?

Assuming meteorites caused all five big extinctions (though not at all certain, this is a reasonable assumption for a thought experiment), they have turned up roughly once every 100 million years. This means in any given year, it’s almost certain none of us will be killed by a meteorite. But once in a very long while, they’ll kill us all. Meld their frequency with 7.5 billion lives on the line, and meteorites kill 75 of us per year, on average. Stated differently, massive asteroids cause the probabilistic equivalent of 75 certain human deaths per year, using the lens of “expected value” math. (The first article in this series presents this in some depth, although you don’t need to read it to follow this piece.)
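For readers who want to see the arithmetic, here is a minimal sketch of that expected-value calculation, using the same figures: one extinction-scale impact roughly every 100 million years, and 7.5 billion lives at stake.

```python
# Expected-value sketch using the figures above: one extinction-scale impact
# roughly every 100 million years, and 7.5 billion lives on the line.
annual_probability = 1 / 100_000_000
population = 7_500_000_000

expected_deaths_per_year = annual_probability * population
print(expected_deaths_per_year)  # 75.0 -- the "75 certain deaths per year" equivalent
```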

The Trump administration proposes to spend $150 million next year on programs meant to forecast and preclude massive cosmic collisions. That’s $2 million per expected annual death. Is this just profligate?

The math of automotive safety is a good comparison point, since few things are analyzed — or litigated — more ruthlessly. So let’s consider the mother of all mandates: the airbag, which U.S. law has required in all passenger vehicles since 1998. These cost manufacturers about $450 per car. And the National Highway Traffic Safety Administration credits airbags with saving 2,756 lives in 2016. Given that 17.25 million vehicles were sold in the country that year, we can estimate that American society spent about $2.8 million for each life saved by airbags.
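The two cost-per-life figures come from the same kind of back-of-the-envelope arithmetic, using the dollar amounts cited above:

```python
# Cost per statistical life, using the figures cited above.
asteroid_budget = 150_000_000            # proposed annual planetary-defense spending
expected_asteroid_deaths = 75            # expected annual deaths from the sketch above
print(asteroid_budget / expected_asteroid_deaths)          # 2,000,000.0 per expected death

airbag_cost_per_car = 450
vehicles_sold_2016 = 17_250_000          # U.S. vehicle sales in 2016
lives_saved_2016 = 2_756                 # NHTSA estimate for 2016
print(airbag_cost_per_car * vehicles_sold_2016 / lives_saved_2016)  # ~2,816,582 (~$2.8 million)
```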

This means the United States’ killer asteroid budget is about 30 percent less than our prophylactic spending on at least one everyday lethal risk — which is to say, it’s in the same general ballpark. I personally deem it a wise investment and rather farsighted for a country that has been decried for underspending on climate risk. It’s also useful context for considering other existential threats. If we’re quite unlucky, this will include today’s topic, which is artificial super intelligence.


Hollywood has done a respectable job of making the dangers inherent in A.I. popularly accessible. That said, the mere fact that something is a Hollywood staple can also inoculate us against taking it seriously. James Barrat writes analogously that if “the Centers for Disease Control issued a serious warning about vampires,” it would “take time for the guffawing to stop, and the wooden stakes to come out.”

Of course, nothing with a credible risk of canceling humanity is a guffawing matter. This is true even if the odds of a catastrophe are provably minuscule. And with super A.I. risk, the odds aren’t provably anything. Unlike with meteor strikes, we can’t exactly turn to the geological record for guidance. Even tech history tells us little, as it’s larded with unforeseen breakthroughs and hairpin shifts. These make technology’s path much less predictable than politics, sports, or any other upset-riddled field. Tech’s future can be framed responsibly only in terms of probabilities, not certitudes. Those placing zero or 100 percent odds on rationally debated outcomes are either dishonest or deluded.

Thorough analyses of A.I. risk fill entire books, so I won’t attempt one here. But the danger’s essence is straightforward. Start with the blistering speed of advances in computing. Since that progress compounds exponentially, thousand-fold performance jumps routinely occur within a decade — then promptly double to 2,000x, and again to 4,000x.
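As a rough illustration of how that compounding plays out, here is a sketch that assumes performance doubles about once a year; the exact cadence is an assumption for illustration, not a measured figure.

```python
# Illustrative compounding, assuming performance doubles roughly once a year
# (the cadence is an assumption for illustration, not a measured figure).
speedup = 1.0
for year in range(1, 13):
    speedup *= 2
    print(f"year {year:2d}: {speedup:>8,.0f}x")
# By year 10 the cumulative jump passes 1,000x; the next two doublings
# take it to roughly 2,000x and then 4,000x.
```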

Exponential processes weren’t visible to us back when our brains evolved on the savannah, so forecasting them doesn’t come naturally, and accurate forecasts can feel stupidly far-fetched. Amara’s Law holds that we tend to overestimate the effects of new technology in the short run yet underestimate them in the long run. Short-term letdowns can also prompt cynicism — deepening the shock of long-term successes. With exponential change, naysayers should therefore skip the victory lap when the yeasayers look dopey. Recklessly aggressive predictions can turn out to be comically timid in the end.

That’s why we humans are often startled to be surpassed in domains that recently seemed almost intractable to computing. This is happening a lot these days: in Jeopardy!, facial recognition, radiology, Go, and, soon enough, driving. We must therefore accept that computers might one day surpass us at making better computers. And I’ll stress that the operative word is “might” — I claim no certainty on this front (nor does anyone who merits your attention in this debate).

If this threshold is crossed, computing’s rate of progress could then surge violently — because although it takes decades to train a great software engineer, a digital engineer can be copied millions of times in swift order. A runaway process of digital self-improvement might therefore yield minds as brilliant in relation to us as we are to bacteria.


What such minds might then do is as fathomable to us as our career goals are to E. coli. I doubt a super A.I. would annihilate us out of spite, just as we don’t annihilate bacteria out of spite. We do, however, routinely exterminate them by the billion. We do this out of caution (swabbing the bathroom with Lysol). We do this in active self-defense (downing antibiotics). We do this unwittingly by simply existing and metabolizing. Bacteria have no moral standing with us. They’re just background noise with a potential to turn malignant. We approach them with the rational dispassion of HAL 9000 or Ex Machina’s Ava facing down a human obstacle.

So why might a super A.I. treat us like a microliter of influenza? It could be a precautionary move, à la Skynet in Terminator. Just as we sterilize bathrooms because a minuscule subset of germs poses dangers, an A.I. might fear that a few of us could try to unplug it. Or that a small clique of our leaders might blunder into a nuclear war — which could blow up our digital betters along with the rest of us.

Alternatively, our doom could be the side effect of a super A.I. tackling its to-do list. Just as a viral particle can’t fathom my music choices, I can’t imagine what something reeeeeeal smart might do with the most accessible building materials in the universe. Which is to say, the atoms that currently make up our planet, its biosphere, and its inhabitants. It all might make a lovely interstellar supercollider. Or a vast computational substrate. Who knows? You may as well ask a speck of Ebola why I still like the Violent Femmes.

Given that we’re spending our own reign on this planet making cool stuff out of its innards and our fellow critters, our successors might well have similar interests. Yes, it may be unseemly for them to turn their progenitors into molecular Legos. But we descend from fish, worms, and bacteria, all of which we slaughter guiltlessly when it suits us.

However profoundly unlikely all this sounds—and, indeed, hopefully is—we cannot deem it impossible. And rational beings hedge against unlikely catastrophes. We spend billions researching aviation safety, although global commercial air travel caused precisely zero deaths last year. And, of course, airbags are installed in every new car, though only a tiny fraction will ever deploy. This is out of prudent caution in the face of uncertainty, not species-wide stupidity. And since we cannot state the likelihood of an A.I. doomsday with precision or consensus, it would be reassuring if the smartest folks around were unanimously chuckling and rolling their eyes about the danger.


Unfortunately, they’re not. In one of his last major public addresses, the late Stephen Hawking said, “The rise of powerful A.I. will be either the best or the worst thing ever to happen to humanity.” Why the latter? Because “A.I. could develop a will of its own, a will that is in conflict with ours and which could destroy us.” The erratic but undeniably brilliant Elon Musk agrees with this assessment and indeed considers A.I. to be “far more dangerous than nukes.” And while the equally brilliant Bill Gates distances himself from Musk’s gravest warnings, he has put himself “in the camp that is concerned about super intelligence” and among those who “don’t understand why some people are not concerned.”

Unlike when celebrities debate vaccines with immunologists, none of this can be dismissed as the bleating of attention-seeking nitwits. Critics have nonetheless questioned Musk and Hawking’s credentials on the basis that neither of them ever trained as an A.I. expert. But should we heed the opinions of only guild-certified insiders in a matter this grave? That would sit awkwardly with Upton Sinclair’s dictum that it’s “difficult to get a man to understand something when his salary depends upon his not understanding it.” This tendency scales with salary size — and A.I. experts are known to pull down seven figures even at nonprofits.

Nor can Musk, Gates, or Hawking be written off as clever folk who became halfwit tourists upon entering a wholly unknown realm (à la Henry Kissinger and George Shultz joining the Theranos board, say). Teslas practically run on A.I. Microsoft has huge A.I. budgets. And for all we know, Stephen Hawking was an A.I. himself. A.I. safety advocates also include many of the field’s deepest insiders — people like Stuart Russell, who wrote the book on A.I. That’s not a metaphor: his text is used in more college A.I. courses than any other fucking book. Russell doesn’t express certainty that A.I. will kill us all — far from it. But he believes it could pose catastrophic risks, as do others of his ilk.

In light of all this, assigning a risk of zero to A.I. catastrophes would be a faith-based act — one that piously ignores both expert opinion and technology’s unpredictable nature. Rational participants in this debate should focus on what level of nonzero risk is acceptable — and on whether that level is attainable.

We can start with one of the highest degrees of surety technology ever offers. In network operations, the snappy phrase “five nines” stands for 99.999 percent uptime. A service operating at five nines is available for all but about 26 seconds per month (or 24 seconds or so in February, I suppose). Though it’s often inserted into contracts, this standard is routinely described as impossible to guarantee: “effectively impossible,” “a myth,” and so on. And this is for relatively tidy, well-understood feats like keeping mainframes and websites running. By contrast, handicapping an A.I. doomsday involves something that is quite plausible but not at all understood, and that is most likely to be achieved by autonomous software, should it ever come about. There’s no way five nines of confidence can be assigned to that. Indeed, two nines (or 99 percent) seems almost belligerently optimistic.
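The uptime arithmetic behind those “nines” is easy to sketch:

```python
# Downtime allowed per month at a given number of "nines" of availability.
def downtime_seconds(nines: int, days_in_month: int = 30) -> float:
    unavailability = 10 ** -nines        # five nines -> 0.00001
    return days_in_month * 24 * 3600 * unavailability

print(downtime_seconds(5))        # ~25.9 seconds in a 30-day month
print(downtime_seconds(5, 28))    # ~24.2 seconds in February
print(downtime_seconds(2))        # 25,920 seconds (7.2 hours) at two nines
```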


Years could be spent quibbling with every element of this analysis. So I will be very clear: It is not about precision. Rather, it’s about scale and reasonableness. I can’t begin to pinpoint the precise level of danger we face from A.I. Nobody can. But I’m confident that we’re multiple orders of magnitude away from the risk levels we accept in the ordinary, non-exponential world.

When a worst-case scenario could kill us all, five nines of confidence is the probabilistic equivalent of 75,000 certain deaths. It’s impossible to put a price on that. But we can note that this is 25 times the death toll of the 9/11 attacks — and the world’s governments spend billions per week fending off would-be sequels to that. Two nines of confidence, or 99 percent, maps to a certain disaster on the scale of World War II. What would we do to avoid that? As for our annual odds of avoiding an obliterating asteroid, they start at eight nines with no one lifting a finger. Yet we’re investing to improve those odds. We’re doing so rationally. And the budget is growing fast (it was just $20 million in 2012).
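Here is the same expected-value arithmetic behind those figures, using the 7.5 billion population from earlier:

```python
# Expected deaths implied by a given confidence of avoiding a humanity-ending
# outcome, using the 7.5 billion population figure from earlier in the piece.
population = 7_500_000_000

def expected_deaths(nines: int) -> float:
    residual_risk = 10 ** -nines         # five nines of confidence -> 0.001% residual risk
    return population * residual_risk

print(expected_deaths(5))   # 75,000      -- roughly 25 times the 9/11 toll
print(expected_deaths(2))   # 75,000,000  -- on the order of World War II's toll
print(expected_deaths(8))   # 75          -- the do-nothing asteroid baseline
```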

So, what should we do about this?

I studied Arabic in college, and these days, I podcast — which is to say, I truly have no idea. But even I know what we should not do, which is to invest puny resources against this outcome compared to what we spend on airbags and killer asteroids. A super A.I.’s threat profile might easily resemble that of the Cold War. Which is to say: quite uncertain but all too plausible and, much as we’d hate to admit it, really fucking huge. Humanity didn’t cheap out on navigating its path through the Cold War. Indeed, we spent trillions. And for all the blunders, crimes, and missteps along the way, we came out of that one pretty okay.

I’ll close by noting that the danger here is most likely to stem from the errors of a cocky, elite group, not the malice of a twisted loner. Highly funded and competitive fields studded with geniuses leave no space for solitary wackos to seize the agenda. The dynamics that brought us the Titanic, the Stuxnet virus, World War I, and the financial crisis are far more worrying here.

These wouldn’t be bad guys running amok. They’d mostly be neutral-to-good guys cutting corners. We don’t know the warning signs of a super A.I. project veering off the rails, because no one has ever completed one. So corners might be cut out of ignorance. Or to beat Google to market. Or to make sure China doesn’t cross the finish line first. As previously noted, safety concerns can melt away in headlong races, particularly if both sides think an insuperable geopolitical advantage will accrue to the victor. And it doesn’t take something of global consequence to get things moving. Plenty of people are constitutionally inclined to take huge, reckless chances for even a modest upside. Daredevils will risk it all for a small prize and some glory, and society allows this. But the ethics mutate when a lucrative private gamble imperils everyone. Which is to say, when apocalyptic risk is privatized.

Imagine a young, selfish, unattached man who stands to become grotesquely rich if he helps his startup make a huge A.I. breakthrough. There’s a small chance that things could go horribly wrong — and a lack of humility inclines him to minimize this risk. From time immemorial, migrants, miners, and adventurers have accepted far greater personal dangers to chase far smaller gains.

We can’t sanely expect that all of tomorrow’s startup talent will shun this calculus out of respect for strangers, foreigners, or those yet unborn. Many might. But others will sneer at every argument in essays like this, on the grounds that they’re way smarter than Stephen Hawking, Elon Musk, and Bill Gates combined. We all know someone like this. Those of us in tech may know dozens. And the key breakthroughs in this field might require only a tiny handful of them, joined into a confident, brilliant, and highly motivated team.
