Accountability in the EU AI Act: Who is Responsible for Decisions Made by AI?

This article was originally published in We Are Innovation on 17 March 2022.

Six years ago, DeepMind’s AlphaGo algorithm defeated Lee Sedol, one of the world’s most renowned Go players, in four out of five intense games played in Seoul, South Korea. Go is commonly described as the most strategically complex board game, with a vast number of possible moves, yet the artificial intelligence (AI) that conquered the world’s best human Go player made moves that were entirely unexpected by its opponent, Go experts and even its own developers.

If an AI algorithm’s next move cannot be predicted when playing a board game, who should be held accountable for AI’s decisions when there are higher consequences at stake, such as recruitment, credit scores or legal proceedings? Accountability is a key issue in AI regulation and will affect industry players, governments, and civil society alike as the technology begins to be regulated.

Defining Accountability

Since 2017, around 60 countries have adopted an AI strategy or policy, most of them focused on supporting AI adoption or investment in domestic AI companies. The EU, however, is the first jurisdiction attempting to create a comprehensive legal framework for regulating AI, through its proposed “AI Act”.

The Act sets out a risk-based approach to AI regulation, categorising AI systems into three groups: prohibited systems, high-risk systems, and other systems. In its current form, the Act holds developers and manufacturers responsible for AI failures or unanticipated outcomes, whether intended or not. Considering that many algorithms make decisions their developers cannot predict, as DeepMind’s AlphaGo demonstrated, is this fair?

A rigid approach to accountability disadvantages small and medium-sized enterprises (SMEs): smaller companies will be unable to manage the liability burden that will attach to AI development. An alternative is to regulate AI not as a single product or service, but as a continuous process that constantly undergoes modification and adaptation.

Is Explainability the Solution?

One approach to addressing accountability is explainability – the process of demonstrating how an algorithm arrives at its conclusion, mathematically or otherwise. In theory, if an AI system’s decision-making process can be explained coherently and transparently, then the correct entity can be held responsible for its decisions. The EU has already identified explainability as a tool for building trust, declaring that citizens should have access to information showing how an AI algorithm reached its conclusion, in order to promote fair decision-making.

However, this process can be extremely costly and difficult given the complexity of AI systems. Under the EU AI Act, explainability expectations will also vary by jurisdiction, meaning businesses will have to create different algorithms to operate in different markets, an expensive undertaking that again disadvantages SMEs. To make explainability an effective tool, the Act must recognise these challenges and offer alternative solutions so that challenger businesses in the AI economy are not disadvantaged.

Where Europe Goes, the World Will Follow

Failing to identify an approach to accountability that recognises the need to support innovation risks forcing SMEs out of the market. Manufacturers cannot always control or predict the decisions that AI makes, and explainability, as currently framed, is an unfeasible alternative, particularly for SMEs. Policymakers should therefore invest time and resources in developing a transparent, affordable process that holds the appropriate entities accountable for AI’s decisions. This is crucial because, much like Europe’s General Data Protection Regulation (GDPR), the AI Act will serve as a global model for other regions to follow.
