Access Alert: Landmark agreement reached on EU AI Act

After three days of intense negotiations, late on Friday 8 December, EU policymakers reached a milestone political-level agreement on the European Union’s Artificial Intelligence (AI) Act.

While the provisions remain subject to further revision, and a co-legislator vote expected in Q1 2024 will determine the final text, the political-level agreement establishes consensus on several key aspects of AI regulation.

Scope and Definition

The regulation maintains a risk-based approach, with additional safeguards for foundation models. Free and open-source software is exempt from the scope of the regulation unless it is categorised as a high-risk AI system, a prohibited application, or an AI system at risk of causing manipulation. AI systems whose use cases fall purely under Member State competence are also exempt from the scope of the regulation.

The definition of an AI system is aligned with the OECD Definition, which reads: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.

Governance

Compliance of AI systems with the Act will be supervised by national competent authorities, whose actions will be harmonised at EU level within the European Artificial Intelligence Board. The AI Board will be composed of Member State representatives and will act as a coordination point between Member States and as an advisory body to the Commission. The supervision of foundation models will be undertaken by the AI Office, which will be advised by a scientific panel of independent experts on matters relating to general-purpose AI models.

Prohibited AI

The agreement prohibits various practices concerning the use of AI, including:

  • Real-time biometric identification, with exceptions for certain defined law enforcement purposes
  • Ex-post remote biometric identification, with an exception for targeted searches for people convicted or suspected of serious crimes
  • Biometric categorisation based on sensitive data
  • Emotion recognition in the workplace and education
  • Predictive policing and social scoring based on certain personal characteristics
  • Manipulative or deceptive practices which exploit persons’ vulnerabilities

High-risk AI systems

AI systems will be categorised as high-risk based on their potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. This includes systems used to influence the outcome of elections and voter behaviour, AI systems predicting migration trends, and border surveillance systems.

The Act sets out important obligations for those developing high-risk AI systems that meet certain benchmarks, including conformity assessments, fundamental rights impact assessments, and increased transparency requirements.

General-purpose AI models/Foundation models

The regulation also introduces horizontal obligations for all foundation models, coupled with more rigorous requirements for exceptionally powerful foundation models that present systemic risks; such models will be classified by the AI Office. The Act also sets out certain transparency and copyright obligations, such as the need to provide information on the data used to train models, which will apply to all foundation models.

Fines

The agreement includes a framework for fines should companies break these rules. While penalties vary by company size and type of violation, fines for non-compliance range from EUR 7.5 million or 1.5% of turnover, for the supply of incorrect information, to EUR 35 million or 7% of global turnover.

Next steps

Following the agreement, work will continue at technical level to finalise the details of the new regulation before the co-legislators (European Parliament and Council) vote on the final text, expected in early 2024. All obligations must be complied with within two years of the text entering into force, with certain provisions becoming enforceable as early as six months after entry into force.

As we approach the finalisation of the EU AI Act, staying informed about its latest developments and progress has never been more important. Access Partnership is closely monitoring the AI Act through our dedicated AI Policy Lab, offering insights into the ever-evolving AI landscape. For more information on the EU AI Act, and how it may impact your organisation, please contact Lydia Dettling at [email protected].
