The Governance of AI: the ‘Brussels Effect’ of a Pan-European Framework

Artificial Intelligence (AI) is transforming our way of life and its applications are continuously expanding. The development of AI shows massive potential but also poses critical unanswered questions. On 21 January, a panel of distinguished speakers gathered under the auspices of Workday and Forum Europe to discuss the governance of AI and the development of a global ecosystem of trust. Panellists discussed Europe’s attempts to set up an AI regulatory framework, the risks and challenges this entails, and opportunities for cooperation with third states and multilateral fora.

The need for a pan-European AI framework

The key theme on which the conference found consensus is the need for a European regulatory framework that allows the safe use of AI. At the EU level, emphasis has been placed on the desirability of a harmonised approach to AI regulation and governance, especially as regards the upcoming Digital Services Act (DSA). The Commission is relying on a pyramid-like categorisation of AI applications, including B2B and B2C ones, whereby high-risk applications sit at the top, while the main bulk consists of low-risk B2B cases that do not require additional regulation. It was also highlighted that EU standards on public-sector AI procurement would be needed, building on the seven requirements of the Ethics Guidelines for Trustworthy Artificial Intelligence presented by the High-Level Expert Group on AI. Among these, transparency and human oversight emerged from the panel as particularly crucial.

Jim Shaughnessy from Workday highlighted that the company’s AI whitepaper shows agreement with the European Commission’s approach, proposing a ‘Trustworthy by Design Regulatory Framework’ based on transparency, governance, accountability and enforcement. Given the growing pervasiveness of AI technology, continued Jim Shaughnessy, a coherent and trustworthy framework for AI developers would prevent fragmentation and ensure trust. MEP Anna-Michelle Asimakopoulou added that such a framework should be human-centric and based on European values. The objective of the regulations should be “trustworthiness rather than mere trust”, claimed Andrea Renda from the Centre for European Policy Studies: the EU should not regulate every kind of AI, but rather AI that is oriented towards the common good.

The risk of overregulation

While it is widely recognised that regulation is needed, the risk of overregulation remains a genuine concern. Indeed, the speakers underlined that some aspects relating to AI are already covered by existing legislation. Privacy, for instance, is comprehensively dealt with in the GDPR, which also defines ‘risk’. Similarly, several sector-specific regulations already address transparency. Cecilia Bonefeld-Dahl from DIGITALEUROPE argued that while the EU should not shy away from high risk, it should be careful not to overregulate. On this matter, European Commission representative Kim Jørgensen reassured the public that regulation and innovation are not contradictory, but that to enable the latter, consumers need to trust AI.

Enforcement

Future AI regulation – unlike traditional standard-setting – will require enforcement instruments and mechanisms that are flexible enough to address the evolving nature of algorithms. For instance, high-risk AI applications will arguably need continuous regulatory revision. That would require an expert-led, multi-stakeholder body constantly updating the notion of AI-related risk. This would be easier at the EU level, but more challenging multilaterally. Overall, effective enforcement will require close cooperation between the public and private sectors.

Cooperation with third countries

A final theme that ran through the conference was the opportunity for cooperation with third countries. Specifically, MEP Anna-Michelle Asimakopoulou stated that like-minded countries need to agree on standards and jointly answer critical questions. The US elections have created new room for transatlantic cooperation, and it is therefore fundamental for the EU to act now. Reaching an international alliance, noted Renda, is as desirable as it is complicated: interpretations of risk and legal understanding differ, with the EU being significantly more focused on ex ante regulation than the US.

Discussing AI governance and future transatlantic cooperation, the panel noted that data privacy would be a viable agenda item for the EU to build a new shared agenda with the US. At the same time, the risk of regulatory fragmentation across the EU would pose challenges to smooth transatlantic cooperation. However, the Commission is confident that the ongoing EU digital transformation under Von der Leyen will ensure enough coherence to productively engage with Washington. The goal would be a GDPR-like ‘Brussels effect’ whereby the world looks to the EU as a model when it comes to shaping the digital future and designing the relevant regulation.

Finally, at the multilateral level, a number of existing institutional frameworks were discussed as potential ground for future AI governance, including the World Trade Organisation and, as far as security and defence applications are concerned, NATO. The common element to these fora, panellists agreed, should be a determination not to stifle innovation. To that end, Cecilia Bonefeld-Dahl noted, existing rules such as laws on discrimination and consumer protection should be effectively enforced.

All in all, the governance of AI is a complex matter which is gaining momentum at the European as well as the global level. The panel showed consensus on the importance of ensuring a trustworthy framework for European innovation, although opinions on the likelihood and shape of transatlantic cooperation differ. The coming weeks will be key to the creation of new cooperation opportunities with the US administration, but will also see new EU legislative proposals.

Authors:

Giulia Abrate, Access Partnership
Leopoldo Biffi, Access Partnership
