Access Alert: AI Safety Summit – global declaration and landmark agreements

The AI Safety Summit 2023 took place at Bletchley Park in the UK this week, bringing together international stakeholders from governments, AI companies, civil society, and leading researchers. The centrepiece of the event was the “Bletchley Declaration”, signed by a coalition of 28 countries led by the UK and including the US, EU, and China, which set out shared AI safety principles. The declaration called for the responsible development and deployment of AI and recognised AI’s potential to contribute to the UN Sustainable Development Goals.

The Summit yielded significant outcomes:

  1. A shared comprehension of the risks tied to frontier AI systems, emphasising collective responsibility across the entire AI lifecycle.
  2. Plans for an international advisory panel on frontier AI risks to craft a “State of Science” report on frontier AI’s risks and capabilities.
  3. A “landmark agreement” among “like-minded governments” and leading AI companies, including Amazon Web Services, Anthropic, Google DeepMind, Meta, Microsoft, and OpenAI, to test the safety of AI models before release.
  4. The creation of a new AI Safety Institute to assess advanced AI systems, strengthen basic AI safety research, and foster information sharing through collaboration with AI companies and countries.

On the EU side, representatives highlighted that the bloc is on track to finalise the EU AI Act by the end of the year and called on AI developers to endorse the Code of Conduct proposed by G7 leaders. Discussions are ongoing on the formation of a European AI Office, primarily focused on overseeing advanced AI models and collaborating with the scientific community to set standards and testing protocols. Italian Prime Minister Giorgia Meloni announced plans for an international conference on AI’s impact on the workforce during Italy’s G7 presidency.

The Summit concluded with a conversation between UK Prime Minister Rishi Sunak and Elon Musk. Musk cautioned against the risks of AI surpassing human intelligence and advocated for the careful establishment of a “referee” to oversee tech companies.

Access Partnership continues to monitor AI-related developments globally through its dedicated AI Policy Lab, which cultivates conversations around artificial intelligence. For more insights into what these developments mean for your business, please reach out to Mike Laughton at [email protected] or Jessica Birch at [email protected].
