Access Alert: Microsoft, Google, OpenAI, and Anthropic launch industry body for advanced AI

Four of the world’s leading artificial intelligence (AI) companies have joined forces to create the Frontier Model Forum. The body, which comprises Microsoft, Google, OpenAI, and Anthropic, aims to ensure the safe and responsible development of frontier AI models (those more advanced than existing systems) by harnessing the companies’ technical and operational capacity to aid the wider sector through research and collaboration.

The Forum will focus on identifying best practices, advancing AI safety research, and facilitating information sharing between companies and governments. The body will discuss trust and safety risks with politicians and academics while also promoting positive use cases for the technology, such as detecting cancer and tackling the climate crisis.

Membership is open to companies that demonstrate a strong commitment to safety in the development of their frontier models, through both technical and institutional approaches. Members are also required to participate in joint initiatives and support the Forum’s goals, which include advancing technical evaluations and benchmarks, as well as developing a public library of solutions to support industry standards.

An advisory board for the body will be formed over the coming months to guide its strategy and ensure a plurality of backgrounds and perspectives. The four founding members will also establish key institutional arrangements, including a charter, governance, and funding. These efforts will be led by a working group and executive board.

Microsoft, Google, OpenAI, and Anthropic plan to consult with civil society and government over the next few weeks on the Forum’s design and the most beneficial forms of collaboration. The companies have expressed a willingness to support existing government and multilateral initiatives, including the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council, as well as multi-stakeholder initiatives like the Partnership on AI and MLCommons.

The creation of the body follows calls for tighter regulation of AI over recent months. All four founding members of the Forum were among the seven companies that agreed to new AI safeguards in the US last week at the behest of the Biden Administration. The guidelines include independent testing of AI models and a watermarking system for AI-generated content.

Access Partnership is closely monitoring developments in the AI sector. If you would like to hear more about these topics, please subscribe to our AI Policy Lab newsletter.
