Nuclear war vs. AI: What’s a real world-ending threat?

April 27, 2023

Four years ago, I wrote one of my most controversial articles. It argued that climate change — while it will make the world we live in much worse and lead directly and indirectly to the deaths of millions — won’t end human life on Earth.

This isn’t scientifically controversial. It’s consistent with IPCC projections, and with the perspective of most climate scientists. Some researchers study extreme tail-risk scenarios where planetary warming is far more catastrophic than projected. I think studying that is worthwhile, but these are very unlikely scenarios — not anyone’s best guess of what will happen.

So the reason it's so controversial to argue that climate change is likely not a species-ending threat isn't the science. It's that the argument can feel like intellectual hair-splitting and hand-waving, a way of diminishing the severity of the challenge that unquestionably lies ahead of us.

Millions of people will die because of climate change, and that's horrendous; it feels almost like selling those victims out to tell comfortable people in rich countries that they will probably not be personally affected and will probably get to continue in their comfortable lives.

But fundamentally, I believe in our ability to solve problems without exaggerating them, and I don't believe in our ability to solve problems while exaggerating them. You need a clear picture of a problem to fix it. Climate action taken with the wrong understanding of the threat is unlikely to save the people who actually need saving.

AI, nuclear war, and the end of the world

This has been on my mind recently as the case that AI poses an existential risk to humanity — which I’ve written about since 2018 — has gone mainstream.

In an article in Time, AI safety researcher Eliezer Yudkowsky wrote that "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die." New international treaties against building powerful AI systems are part of what it'll take to save us, he argued, even if enforcing those treaties means acts of war against noncompliant nations.

This struck a lot of people as fairly outrageous. Even if you're convinced that AI might be quite dangerous, you might need a lot more convincing that it's deadly enough to be worth risking a war over. (Wars are also dangerous to the future of human civilization, especially wars with the potential to escalate to a nuclear exchange.)

Yudkowsky doubled down: Uncontrolled superhuman AI will likely end all life on Earth, he argued, and a nuclear war, while it would be extremely bad, wouldn’t do that. We should not court a nuclear war, but it’d be a mistake to let fear of war stop us from putting teeth in international treaties about AI.

Both parts of that are, of course, controversial. A nuclear war would be devastating and kill millions of people directly. It could be even more catastrophic if firestorms from nuclear explosions lowered global temperatures over a long period of time, a possibility that is contested among experts in the relevant atmospheric sciences.

Avoiding a nuclear war seems like it should be one of humanity's highest priorities regardless, but the debate over whether "nuclear winter" would result from a nuclear exchange isn't meaningless hairsplitting. One way we can reduce the odds of billions of people dying of mass starvation is to decrease nuclear arsenals, which for both the US and Russia are much smaller than they were at the height of the Cold War but have recently begun to grow again.

Is AI an existential risk?

As for whether AI would kill us all, the truth is that reporting on this question is extraordinarily difficult. Climate scientists broadly agree that climate change won't kill us all, though there's substantial uncertainty about which tail-risk scenarios are plausible and how plausible they are. Nuclear war researchers disagree heatedly about whether a nuclear winter would follow a nuclear war.

But both of those disagreements pale in comparison to the degree of disagreement over the impacts of AI. CBS recently asked Geoffrey Hinton, called the godfather of AI, about claims that AI could wipe out humanity. “It’s not inconceivable, that’s all I’ll say,” Hinton said. I’ve heard the same thing from many other experts: Stakes that high seem to be genuinely on the table. Of course, other experts insist there is no cause for worry whatsoever.

The million-dollar question, then, is how AI could wipe us out, if even a nuclear war or a massive pandemic or substantial global temperature change wouldn't do it. But even if humanity is pretty tough, plenty of other species on Earth could tell you (or could have, before they went extinct) that an intelligent civilization that doesn't care about you can absolutely grind up your habitat for its highways, or, in the AI equivalent, grind up the whole biosphere for its own projects.

It seems extraordinarily difficult to navigate high-stakes trade-offs like these in a principled way. Policymakers don't know which experts to consult to understand the stakes of AI development, and there's no scientific consensus to guide them. One of my biggest takeaways here is that we need to know more. It's impossible to make good decisions without a clearer grasp of what we're building, why we're building it, what might go wrong, and how badly wrong it could go.

A version of this story was initially published in the Future Perfect newsletter.