“Catastrophic” AI harms among warnings in declaration signed by 28 nations

November 4, 2023:

UK Technology Secretary Michelle Donelan (front row center) is joined by international counterparts for a group photo at the AI Safety Summit at Bletchley Park in Milton Keynes, Buckinghamshire, on Wednesday, November 1, 2023.

On Wednesday, the UK hosted an AI Safety Summit where representatives of 28 countries, including the US and China, gathered to address potential risks posed by advanced AI systems, The New York Times reports. The event included the signing of “The Bletchley Declaration,” which warns of potential harm from advanced AI and calls for international cooperation to ensure responsible AI deployment.

“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” reads the declaration, named after Bletchley Park, the site of the summit and the British codebreaking center during World War II where Alan Turing worked. Turing wrote influential early speculation about thinking machines.

Rapid advancements in machine learning, including the rise of chatbots like ChatGPT, have prompted governments worldwide to consider regulating AI. Those concerns led to the meeting, although its invitation list has drawn criticism. Representatives from major tech companies included Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI, and Tencent. Civil society groups, like Britain’s Ada Lovelace Institute and the Algorithmic Justice League in Massachusetts, also sent representatives.

Political representatives from the US included Vice President Kamala Harris and Commerce Secretary Gina Raimondo. China’s vice minister of science and technology, Wu Zhaohui, expressed Beijing’s willingness to “enhance dialogue and communication” on AI safety. UK government figures like Technology Secretary Michelle Donelan played a starring role in the event, hoping to place the UK front and center in the AI space.

According to The Guardian, UK Prime Minister Rishi Sunak applauded the Bletchley Declaration and emphasized the need to identify potential threats from advanced AI systems that may eventually surpass human intelligence. Along those lines, Elon Musk, who attended the summit, said, “For the first time, we have a situation where there’s something that is going to be far smarter than the smartest human… It’s not clear to me we can actually control such a thing.”

Musk has been a prominent voice in a movement warning about the hypothetical existential risks of AI. However, some AI experts, like Google Brain co-founder Andrew Ng and Meta Chief AI Scientist Yann LeCun, call those risks overhyped. “The opinion of the *vast* majority of AI scientists and engineers (me included) is that the whole debate around existential risk is wildly overblown and highly premature,” LeCun wrote on X (formerly Twitter) on October 9. Other critics of AI technology prefer to focus on current perceived harms from AI, including environmental, privacy, ethics, and bias issues, rather than hypothetical future threats.

While the summit began an international dialogue on AI safety, it stopped short of setting specific policy goals, according to The New York Times. That may have something to do with the nebulous nature of “AI” itself. “Artificial intelligence” is a very broad term with a fuzzy definition that can encompass many technologies, ranging from chess-playing computer programs to large language models that can code in Python. In particular, the declaration refers to what it calls “frontier AI,” which the document vaguely defines as “highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks—as well as relevant specific narrow AI that could exhibit capabilities that cause harm—which match or exceed the capabilities present in today’s most advanced models.”

Similarly, there remains a lack of consensus on what global AI regulations should entail or who should be responsible for drafting them. The US, for its part, has announced a separate American AI Safety Institute, and President Biden recently issued an executive order on AI. The European Union is working on an AI bill to establish regulatory principles and guidelines for specific AI technologies.

While the summit signifies a move toward international cooperation on AI safety, some analysts believe it leaned more toward posturing and symbolism. Just before the summit, Prime Minister Sunak announced a tie-in interview with Musk, to take place live on Thursday on Musk’s social media platform, X.
