The second AI Safety Summit, held in Seoul, South Korea, concluded with notable advances in AI governance and international cooperation.
Over two days, governments, industry leaders, and civil society representatives engaged in discussions that produced three major documents: the “Seoul Declaration,” the “Frontier AI Safety Commitments,” and the “Seoul Ministerial Statement.”
Seoul Declaration
The Seoul Declaration aims to enhance international cooperation on AI governance by engaging with various global initiatives. Leaders acknowledged the importance of global cooperation, citing efforts such as the Hiroshima AI Process Friends Group and the updated OECD AI principles. They endorsed the recent UN General Assembly resolution advocating for safe, secure, and trustworthy AI systems for sustainable development. Additionally, they welcomed ongoing discussions on the Global Digital Compact ahead of the 2024 Summit of the Future and looked forward to the final report from the UN Secretary-General’s High-level Advisory Body on AI.
Frontier AI Safety Commitments
Sixteen leading AI companies from the US, China, the UAE, and elsewhere, including Amazon, Google, Microsoft, Meta, and OpenAI, signed the “Frontier AI Safety Commitments.” These commitments focus on several key areas:
- Risk Management: Companies committed to evaluating risks throughout the AI lifecycle, defining severe risk thresholds, implementing strategies to mitigate these risks, and establishing processes for managing risks that exceed thresholds. Continuous improvement in risk assessment and mitigation practices was also emphasised.
- Accountability: Developing accountability frameworks, assigning roles, and allocating resources to uphold these commitments were highlighted as essential steps.
- Transparency: The companies pledged to provide public updates on the implementation of their commitments and involve external actors in their risk assessment and safety frameworks. They also committed to adopting best practices, such as internal and external red-teaming, promoting information sharing, enhancing cybersecurity, fostering third-party vulnerability reporting, and openly disclosing model capabilities and limitations.
Each signatory will publish a safety framework by the next AI Summit in Paris, ensuring transparency as their approaches evolve.
Seoul Ministerial Statement
The summit concluded with 27 countries and the EU agreeing to collaborate on defining AI risk thresholds. Notably, while China participated in the discussions, it did not sign the “Seoul Ministerial Statement” and so declined to endorse the unified approach to AI governance.
The Seoul Ministerial Statement emphasised the importance of a global strategy focused on AI safety, innovation, and inclusivity. Key points included the need for transparency, accountability, and robust risk management for advanced AI models. It also highlighted the importance of promoting AI-driven ecosystems, public sector applications, and sustainable practices while ensuring equitable AI benefits, enhancing digital literacy, and bridging digital divides.
Moving Forward
Ministers committed to ongoing dialogue, empirical research, and proactive measures to guide AI development responsibly. Specific thresholds for identifying and managing severe AI risks will be established before the next summit, to be held in Paris on Monday 10 and Tuesday 11 February 2025. The summit also broadened the agenda set by the initial AI Safety Summit at Bletchley Park, addressing the environmental impact of AI and the equitable distribution of its benefits.
If you are interested in aligning your AI practices with global standards or want to stay ahead in the evolving landscape of AI governance, get in touch with Jessica Birch at [email protected].