Access Alert: Landmark agreement reached on EU AI Act

After three days of intense negotiations, late on Friday 8 December, EU policymakers reached a milestone political-level agreement on the European Union’s Artificial Intelligence (AI) Act.

While the provisions are still subject to further revisions, and an impending co-legislator vote is expected to determine the final text in Q1 2024, the political-level agreement has reached consensus on various key factors surrounding AI.

Scope and Definition

The regulation has maintained a risk-based approach with additional safeguards for foundation models. Free and open-source software is exempt from the scope of the regulation unless it is categorised as a high-risk AI system, a prohibited application, or an AI system at risk of causing manipulation. AI systems whose use cases fall purely under Member State competency are also exempt from the scope of the regulation.

The definition of an AI system is aligned with the OECD Definition, which reads: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.


Compliance of AI systems with the Act will be supervised by national competent authorities, whose actions are harmonised at EU level within the European Artificial Intelligence Board. The AI Board will be composed of Member State representatives and will act as a coordination point between Member States and an advisory body to the Commission. The supervision of foundation models will be undertaken by the AI Office, which will be advised by a scientific panel of independent experts on matters relating to general-purpose AI models.

Prohibited AI

The agreement prohibits various practices concerning the use of AI, including:

  • Real-time biometric identification, with exceptions for certain defined law enforcement purposes
  • Ex-post remote biometric identification, with an exception for targeted searches for people convicted of or suspected of serious crimes
  • Biometric categorisation based on sensitive data
  • Emotion recognition in the workplace and education
  • Predictive policing and social scoring based on certain personal characteristics
  • Manipulative or deceptive practices which exploit persons’ vulnerabilities

High-risk AI systems

AI systems will be categorised as high-risk based on their potential for significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. This includes systems used to influence the outcome of elections and voter behaviour, AI systems predicting migration trends, and border surveillance systems.

The Act outlines important obligations for those creating high-risk AI systems that meet certain benchmarks, requiring conformity assessments, fundamental rights impact assessments, and increased transparency.

General-purpose AI models/Foundation models

The regulation also introduces horizontal obligations for all foundation models, coupled with more rigorous requirements for exceptionally powerful foundation models that present systemic risks – such models will be classified by the AI Office. The Act highlights certain transparency and copyright obligations – such as the need to provide information on the data used to train models – which will apply to all foundation models.


The agreement includes a framework for fines should companies break these rules. Although these vary depending on the company’s size and the violation, fines for non-compliance range from 7.5 million euros or 1.5% of turnover (for the supply of incorrect information) to 35 million euros or 7% of global turnover.

Next steps

Following the agreement, work will continue at technical level to finalise the details of the new regulation before the co-legislators (European Parliament and Council) will vote on the final text, which is expected in early 2024. All obligations are to be complied with within two years after the text enters into force, with certain provisions becoming enforceable from as early as six months after the text enters into force.

As we approach the finalisation of the EU AI Act, staying informed about its latest developments and progress has never been more important. Access Partnership is closely monitoring the AI Act through our dedicated AI Policy Lab, offering insights into the ever-evolving AI landscape. For more information on the EU AI Act, and how it may impact your organisation, please contact Lydia Dettling at [email protected].
