Top AI players pledge to pull the plug on models that present intolerable risk

Seoul Summit follows up Bletchley Declaration with more non-binding and vague promises

Sixteen global AI leaders – including Google, Microsoft, IBM, and OpenAI – have made fresh but non-binding pledges to deactivate their own tech if it shows signs of driving a dystopian outcome.

The commitments were inked on the opening day of the AI Seoul Summit 2024, staged this week in South Korea. The event is a sequel to last year's AI Safety Summit, at which 28 nations and the EU signed up to The Bletchley Declaration – a shared vision for addressing AI-related risks, albeit one without concrete or practical commitments.

Ahead of this year's event, UK prime minister Rishi Sunak and South Korean president Yoon Suk Yeol penned an op-ed in which they wrote: "The pace of change will only continue to accelerate, so our work must accelerate too."

The Seoul Summit produced a set of Frontier AI Safety Commitments under which signatories will publish safety frameworks describing how they will measure the risks of their AI models. This includes outlining at what point risks become intolerable and what actions they will take when that threshold is reached. And if mitigations fail to keep risks below those thresholds, the signatories have pledged not to "develop or deploy a model or system at all."

Other signatories included Amazon, Anthropic, Cohere, G42, Inflection AI, Meta, Mistral AI, Naver, Samsung Electronics, Technology Innovation Institute, xAI and Zhipu.ai.

All of that sounds great … but the details haven't been worked out. And they won't be until an "AI Action Summit" slated for early 2025.

Signatories to the Seoul document have also committed to red-teaming their frontier AI models and systems, sharing information, investing in cyber security and insider threat safeguards to protect unreleased tech, incentivizing third-party discovery and reporting of vulnerabilities, labelling AI-generated content, prioritizing research on the societal risks posed by AI, and using AI for good.

In other words, to "address the world's greatest challenges."

A UK government press release referred to the signing participants as "the most significant AI technology companies around the world" and noted that it included representation from "the world's two biggest AI powers" – the US and China.

During the summit, Yoon and Sunak led a leaders' session in which a document known as the Seoul Declaration [PDF] was adopted.

"We recognize the importance of interoperability between AI governance frameworks in line with a risk-based approach to maximize the benefits and address the broad range of risks from AI, to ensure the safe, secure, and trustworthy design, development, deployment, and use of Al," the document states.

Attendees of the session included government representatives – the G7 plus Singapore and Australia, the UN, the OECD and the EU – alongside industry representatives. ®
