Access Alert: Microsoft, Google, OpenAI, and Anthropic launch industry body for advanced AI

Four of the world’s leading artificial intelligence (AI) companies have joined forces to create the Frontier Model Forum. The body, which comprises Microsoft, Google, OpenAI, and Anthropic, aims to ensure the safe and responsible development of frontier AI models (those more advanced than existing systems) by harnessing the companies’ technical and operational capacity to aid the wider sector through research and collaboration.

The Forum will focus on identifying best practices, advancing AI safety research, and facilitating information sharing between companies and governments. The body will discuss trust and safety risks with politicians and academics while also promoting positive use cases for the technology, such as detecting cancer and tackling the climate crisis.

Membership is open to companies that demonstrate a strong commitment to safety in the development of their frontier models, through both technical and institutional approaches. Members are also required to participate in joint initiatives and support the Forum’s goals, which include advancing technical evaluations and benchmarks and developing a public library of solutions to support industry standards.

An advisory board for the body will be formed over the coming months to guide its strategy and ensure a plurality of backgrounds and perspectives. The four founding members will also establish key institutional arrangements, including a charter, governance, and funding. These efforts will be led by a working group and executive board.

Microsoft, Google, OpenAI, and Anthropic plan to consult with civil society and government over the next few weeks on the Forum’s design and the most beneficial forms of collaboration. The companies have expressed a willingness to support existing government and multilateral initiatives, including the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council, as well as multi-stakeholder initiatives like the Partnership on AI and MLCommons.

The creation of the body follows calls for tighter regulation of AI over recent months. All four founding members of the Forum were among the seven companies that agreed to new AI safeguards in the US last week at the behest of the Biden Administration. The guidelines include independent testing of AI models and a watermarking system for AI-generated content.

Access Partnership is closely monitoring developments in the AI sector. If you would like to hear more about these topics, please subscribe to our AI Policy Lab newsletter.
