Access Alert: Microsoft, Google, OpenAI, and Anthropic launch industry body for advanced AI

Four of the world’s leading artificial intelligence (AI) companies have joined forces to create the Frontier Model Forum. The body, comprising Microsoft, Google, OpenAI, and Anthropic, aims to ensure the safe and responsible development of frontier AI models (those more advanced than existing systems) by harnessing the companies’ technical and operational capacity to aid the wider sector through research and collaboration.

The Forum will focus on identifying best practices, advancing AI safety research, and facilitating information sharing between companies and governments. The body will discuss trust and safety risks with politicians and academics while also promoting positive use cases for the technology, such as detecting cancer and tackling the climate crisis.

Membership is open to companies that demonstrate a strong commitment to safety in the development of their frontier models, through both technical and institutional approaches. Members are also required to participate in joint initiatives and support the Forum’s goals, which include advancing technical evaluations and benchmarks and developing a public library of solutions to support industry standards.

An advisory board for the body will be formed over the coming months to guide its strategy and ensure a plurality of backgrounds and perspectives. The four founding members will also establish key institutional arrangements, including a charter, governance, and funding. These efforts will be led by a working group and executive board.

Microsoft, Google, OpenAI, and Anthropic plan to consult with civil society and government over the next few weeks on the Forum’s design and the most beneficial forms of collaboration. The companies have expressed a willingness to support existing government and multilateral initiatives, including the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council, as well as multi-stakeholder initiatives like the Partnership on AI and MLCommons.

The creation of the body follows calls for tighter regulation of AI over recent months. All four founding members of the Forum were among the seven companies that agreed to new AI safeguards in the US last week at the behest of the Biden Administration. The guidelines include independent testing of AI models and a watermarking system for AI-generated content.

Access Partnership is closely monitoring developments in the AI sector. If you would like to hear more about these topics, please subscribe to our AI Policy Lab newsletter.
