Europe’s Bid to Regulate AI Worldwide

On 21 April 2021, the European Commission presented its proposal for a European regulatory framework for artificial intelligence, adopting a risk-based approach with strict rules for high-risk systems. The proposed regulation takes aim at intrusive surveillance and bias in AI systems and demonstrates Europe’s commitment to safeguarding individual rights at a time of disruptive technological change. It carries significant consequences for recruitment processes, aerospace technologies, cybersecurity, law enforcement and even the wider administration of justice and democratic processes.

The Commission’s Executive Vice-President, Margrethe Vestager, has argued that the rules would develop “new global norms” while being “future-proof and innovation friendly”. However, will the rest of the world follow suit?

The Targets

The regulation targets “high-risk” AI systems. The draft singles out invasive and unethical uses of AI, including systems designed to manipulate behaviour through, for example, political messaging, and would outlaw AI-based indiscriminate surveillance and social scoring systems akin to current Chinese practices. The EU also looks set to take a precautionary approach to other AI applications. Facial recognition systems that draw on existing surveillance cameras would need special permission from EU regulators. The same applies to AI used in hiring, including algorithms that filter applications, and to financial systems that calculate credit scores. Systems that assess eligibility for welfare or judicial assistance will require conformity assessments to ensure they meet EU requirements.

These rules follow a risk scale, the most severe tier being “unacceptable risk”, which results in the AI system being partially or fully banned. The larger “high-risk” category includes “critical infrastructures” such as transport, “educational and vocational training” and “law enforcement”. To ensure compliance, the regulation introduces three levels of fines for breaches. These range from up to 2% of global annual turnover or EUR 10 million (whichever is higher) for supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities, to up to 6% of global annual turnover or EUR 30 million (whichever is higher) for non-compliance with prohibited AI practices and the rules on data and data governance.
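To illustrate how the “whichever is higher” rule plays out, the short sketch below applies the two outer tiers to a hypothetical company. Only the percentages and fixed caps come from the proposal as described above; the turnover figure is invented purely for illustration.

```python
# Illustrative sketch only: the proposal's "whichever is higher" fine ceilings.
# The turnover figure is hypothetical; the percentages and fixed caps are the
# tiers described in the text above.

def max_fine(annual_turnover_eur: float, pct: float, fixed_cap_eur: float) -> float:
    """Return the fine ceiling: the higher of a share of global annual turnover
    and a fixed amount, following the proposal's tiered penalty structure."""
    return max(pct * annual_turnover_eur, fixed_cap_eur)

turnover = 2_000_000_000  # hypothetical global annual turnover of EUR 2 billion

# Lowest tier: incorrect, incomplete or misleading information (2% or EUR 10 million)
print(f"Information tier ceiling:          EUR {max_fine(turnover, 0.02, 10_000_000):,.0f}")

# Highest tier: prohibited practices and data governance breaches (6% or EUR 30 million)
print(f"Prohibited-practices tier ceiling: EUR {max_fine(turnover, 0.06, 30_000_000):,.0f}")
```

For this hypothetical company, the percentage of turnover exceeds the fixed cap in both tiers, so the ceilings would be EUR 40 million and EUR 120 million respectively.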

Industry’s Diplomatic Caution

While the risk-based approach was welcomed by the tech industry, some players, including DIGITALEUROPE, fear that “the inclusion of AI software as part of the EU’s product compliance framework could lead to an excessive burden for many providers”. Critics have also questioned the vague definitions of AI in the draft legislation, which largely focus on machine learning and may not fully capture next-generation computing technologies such as quantum computing. Other areas of concern include the differentiation between high-risk and low-risk AI systems and the possibility of risk factors changing over a system’s development life cycle. Given the evolving nature of AI, the Commission has reserved the right to add to or amend the list of high-risk AI systems, so industry should view exclusion from Annex III today as a temporary reprieve rather than a permanent exemption. A major concern raised by policymakers and industry alike is the burden that documentation and operational requirements for high-risk systems will place on small and medium-sized companies.

Can Europe Lead the Way?

While the Commission hopes to repeat its success with the GDPR and set global standards on AI, industry fears that the proposed rules will increase costs and limit innovation just as AI is beginning to demonstrate its potential. The Commission also faces the difficult task of striking a balance between strengthening Europe’s competitiveness and protecting citizens’ privacy and fundamental rights. The key question is whether this balance proves persuasive to other jurisdictions; if not, the EU may simply look too complicated a place in which to launch or develop products, and its burgeoning AI sectors may start looking to offshore bases (perhaps one just a few miles from its shores).

On this score, the US Federal Trade Commission has expressed a comparatively more relaxed stance on regulatory enforcement regarding AI. Time will tell whether Washington finds itself behind global sentiment, as it has with the GDPR, or whether it turns out to be prescient.

What Next?

Before determining whether the EU will shape the global AI rulebook, the proposed regulation must first be scrutinised by member states and parliamentarians. As with the other digital files currently under examination, AI regulation faces an uphill battle, with special interests at the ready.
