UN Report: How to stop risking human extinction

September 22, 2022

Since 1990, the United Nations Development Programme has been tasked with releasing reports every few years on the state of the world. The 2021/2022 report — released earlier this month, and the first one since the Covid-19 pandemic began — is titled “Uncertain Times, Unsettled Lives.” And unsurprisingly, it makes for stressful reading.

“The war in Ukraine reverberates throughout the world,” the report opens, “causing immense human suffering, including a cost-of-living crisis. Climate and ecological disasters threaten the world daily. It is seductively easy to discount crises as one-offs, natural to hope for a return to normal. But dousing the latest fire or booting the latest demagogue will be an unwinnable game of whack-a-mole unless we come to terms with the fact that the world is fundamentally changing. There is no going back.”

Those words ring true. Only a few years ago, we lived in a world where experts had long warned that a pandemic was coming and could be devastating; now, we live in a world that a pandemic has clearly devastated. Only a year ago, there hadn’t been a major interstate land war in Europe since World War II, and some experts optimistically assumed that two countries with McDonald’s in them would never go to war.

Now, not only is Russia occupying stretches of Ukraine, but the destruction of Russia’s army in the fighting there has kicked off instability in other regions, most notably with Azerbaijan attacking Armenia earlier this month. Fears of wartime nuclear use, largely dormant since the end of the Cold War, are back as people worry about whether Putin could turn to tactical nukes if faced with total defeat in Ukraine.

Of course, all of those situations can resolve without catastrophe, and most likely they will. The worst rarely happens. But it’s hard to avoid the feeling that we’re just rolling the dice, hoping we won’t eventually hit an unlucky number. Every pandemic, every minor war between nuclear-armed powers, and every new and uncontrolled technology may pose only a small chance of escalating into a catastrophic-scale event. But if we take that risk every year without taking precautions, humanity’s lifespan may be limited.

Why “existential security” is the opposite of “existential risk”

Toby Ord, senior research fellow at Oxford’s Future of Humanity Institute and author of The Precipice: Existential Risk and the Future of Humanity, explores this problem in an essay in the latest UNDP report. He calls it the challenge of “existential security”: not just preventing each individual prospective catastrophe, but building a world that stops rolling the dice on possible extinction.

“To survive,” he writes in the report, “we need to achieve two things. We must first bring the current level of existential risk down — putting out the fires we already face from the threats of nuclear war and climate change. But we cannot always be fighting fires. A defining feature of existential risk is that there are no second chances — a single existential catastrophe would be our permanent undoing. So we must also create the equivalent of fire brigades and fire safety codes — making institutional changes to ensure that existential risk (including that from new technologies and developments) stays low forever.”

He illustrates the point with this fairly terrifying graph:

[Graph not shown. Credit: Toby Ord, UN Human Development Report 2021/2022]

The idea is this: Say we go through a situation where a dictator threatens nuclear war, or where tensions between two nuclear powers seem to be hitting the breaking point. Maybe most of the time the situation is defused, as indeed it was during the many, many Cold War close calls. But if the situation recurs every few decades, the probability that we defuse every single prospective nuclear war gets steadily lower. The odds that humanity will still be around in, say, 200 years become uncomfortably low, just as the odds that you keep winning at craps drop with every roll.
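To make that dice-rolling arithmetic concrete, here is a minimal sketch in Python. The 1 percent per-decade risk used below is a purely illustrative assumption on my part, not a figure from Ord or the report:

```python
# Minimal sketch: how a small, constant per-period existential risk
# compounds over time. The 1% per-decade figure is purely
# illustrative, not an estimate from Ord or the UNDP report.

def survival_probability(per_period_risk: float, periods: int) -> float:
    """Chance of surviving `periods` consecutive periods, assuming an
    independent `per_period_risk` chance of catastrophe in each one."""
    return (1 - per_period_risk) ** periods

for decades in (1, 10, 50, 100):
    p = survival_probability(0.01, decades)
    print(f"{decades * 10:>5} years: {p:.1%} chance of survival")

# Output:
#    10 years: 99.0% chance of survival
#   100 years: 90.4% chance of survival
#   500 years: 60.5% chance of survival
#  1000 years: 36.6% chance of survival
```

Nothing is special about the 1 percent figure; the point is that any fixed, nonzero per-period risk drives the long-run odds of survival toward zero, which is why Ord argues the level of risk itself has to be pushed down and kept down.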

“Existential security” is the state where we are mostly not facing risks, in any given year or decade or ideally even century, that have a substantial chance of annihilating civilization. For existential security from nuclear risk, for instance, we might reduce nuclear arsenals to the point where even a full nuclear exchange could not collapse civilization, something the world made significant progress on as countries slashed their arsenals after the Cold War. For existential security from pandemics, we could develop PPE that is comfortable to wear and provides near-total protection against disease, plus a worldwide system to detect new diseases early, ensuring that any prospective catastrophic pandemic could be nipped in the bud.

The ideal, though, would be existential security from everything: not just the knowns, but the unknowns. For example, one big worry among experts including Ord is that once we build highly capable artificial intelligence, it will dramatically hasten the development of new technologies that imperil the world, while, because of how modern AI systems are designed, it will be incredibly difficult to tell what those systems are doing or why.

So an ideal approach to managing existential risk doesn’t just fight today’s threats but also puts policies in place to prevent new threats from arising in the future.

That sounds great. As longtermists have argued recently, existential risks pose a particularly devastating threat because they could destroy not just the present but a future in which hundreds of billions more people could one day live. But how do we bring that security about?

Ord proposes “an institution aimed at existential security.” He points out that preventing the end of the world is exactly the sort of thing that’s supposed to be within the purview of the United Nations; after all, “the risks that could destroy us transcend national boundaries,” he writes. The problem, Ord observes, is that to prevent existential risk, an institution would need broad authority to intervene in the world. No country wants any other country to be allowed to pursue an incredibly dangerous research program, but at the same time, no country wants to give other countries purview over its own research programs. Only a supranational authority, something like the International Atomic Energy Agency but with a far broader remit, could potentially overcome those narrower national concerns.

Often, the hard part in securing humanity’s future isn’t figuring out what needs to be done but actually doing it. With climate change, the problem and the risks were well understood long before the world took action to shift away from greenhouse gases. Experts warned about the risks of pandemics before Covid-19 struck, but they largely went unheeded, and institutions the US thought were ready, like the CDC, fell on their faces during a real crisis. Today, there are expert warnings about artificial intelligence, but other experts assure us there will be no problem and that we don’t need to try to solve it.

Writing reports only helps if people read them; building an international institution for existential security only works if there’s a way to transform the study of existential risks into serious, coordinated action to make sure we don’t face them. “There is not sufficient buy-in at the moment,” Ord acknowledges, but “this may change over years or decades as people slowly face up to the gravity of the threats facing humanity.”

Ord doesn’t speculate on what might bring that change about, but personally, I’m pessimistic. Anything that changed the international order enough to support international institutions with real authority over existential risk would likely have to be a devastating catastrophe in its own right. It seems unlikely we’ll make it to the path of “existential security” without first running some serious risks, which we can only hope we survive and learn from.
