Access Alert: Landmark agreement reached on EU AI Act

After three days of intense negotiations, late on Friday 8 December, EU policymakers reached a milestone political-level agreement on the European Union’s Artificial Intelligence (AI) Act.

While the provisions remain subject to further revision, with a co-legislator vote expected to finalise the text in Q1 2024, the political-level agreement establishes consensus on several key aspects of AI regulation.

Scope and Definition

The regulation maintains a risk-based approach, with additional safeguards for foundation models. Free and open-source software is exempt from the scope of the regulation unless it is categorised as a high-risk AI system, a prohibited application, or an AI system at risk of causing manipulation. AI systems whose use cases fall purely under Member State competency are also exempt from the scope of the regulation.

The definition of an AI system is aligned with the OECD Definition, which reads: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.

Governance

Compliance of AI systems with the Act will be supervised by national competent authorities, whose actions will be harmonised at EU level within the European Artificial Intelligence Board. The AI Board will comprise Member State representatives and will act both as a coordination point between Member States and as an advisory body to the Commission. The supervision of foundation models will be undertaken by the AI Office, which will be advised by a scientific panel of independent experts on matters relating to general-purpose AI models.

Prohibited AI

The agreement prohibits various practices concerning the use of AI, including:

  • Real-time biometric identification, with exceptions for certain defined law enforcement purposes
  • Ex-post remote biometric identification, with exceptions for targeted searches for people convicted or suspected of serious crimes
  • Biometric categorisation based on sensitive data
  • Emotion recognition in the workplace and education
  • Predictive policing and social scoring based on certain personal characteristics
  • Manipulative or deceptive practices which exploit persons' vulnerabilities

High-risk AI systems

AI systems will now be categorised as high-risk based on their potential for significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. This category includes systems used to influence the outcome of elections and voter behaviour, AI systems predicting migration trends, and border surveillance systems.

The Act outlines important obligations for those creating high-risk AI systems that meet certain benchmarks, requiring conformity assessments, fundamental rights impact assessments, and increased transparency.

General-purpose AI models/Foundation models

The regulation also introduces horizontal obligations for all foundation models, coupled with more rigorous requirements for exceptionally powerful foundation models that present systemic risks – such models will be classified by the AI Office. The Act highlights certain transparency and copyright obligations – such as the need to provide information on the data used to train models – which will apply to all foundation models.

Fines

The agreement includes a framework for fines for companies that break these rules. These vary depending on company size and the nature of the violation: fines for non-compliance range from 7.5 million euros or 1.5% of turnover for the supply of incorrect information to 35 million euros or 7% of global turnover.

Next steps

Following the agreement, work will continue at the technical level to finalise the details of the new regulation before the co-legislators (European Parliament and Council) vote on the final text, which is expected in early 2024. All obligations must be complied with within two years of the text entering into force, with certain provisions becoming enforceable as early as six months after entry into force.

As we approach the finalisation of the EU AI Act, staying informed about its latest developments and progress has never been more important. Access Partnership is closely monitoring the AI Act through our dedicated AI Policy Lab, offering insights into the ever-evolving AI landscape. For more information on the EU AI Act, and how it may impact your organisation, please contact Lydia Dettling at [email protected].
