The Ethical Challenge of Artificial Intelligence

In a guest blog for Tech UK, Matt Allison asks whether artificial intelligence presents a fundamental ethical challenge requiring a new regulatory framework.

Policy-makers and regulators around the world are becoming increasingly fixated on the rapid growth of artificial intelligence (AI). Some experts (including the eminent physicist and cosmologist Stephen Hawking and the entrepreneur Elon Musk) have made alarming predictions about the potential for AI to lead to human alienation, suffering, or, worse, the actual destruction of human life. Even without taking such an extreme view, however, it is evident that the adoption of AI tools will pose new ethical challenges, which may require a regulatory response.

On 16 April the House of Lords Select Committee on AI set out its thoughts on the matter, following a comprehensive inquiry underway since June 2017. Having taken written evidence from over 200 organisations and individuals, and heard testimony from a variety of industry, academic and regulatory bodies, peers concluded that a light-touch, industry-led regulatory model was preferable.

The committee report does envision an important role for government in ensuring that AI is deployed in a responsible and ethical way, for example through the creation of “data trusts” to facilitate the ethical sharing of data. It sees this as a way for UK-based SMEs to compete with the large, mostly US-based technology companies that are close to holding “data monopolies”. The report also identifies public-sector leadership in the procurement of AI solutions as a key way to build public trust and confidence in the use of AI.

At the EU level, the European Commission will next week set out its own position on AI regulation, with the publication of a communication expected to touch on accountability, transparency and liability in the context of AI tools and services. Early indications are that the Commission will press companies developing AI solutions to explain, in a clear and transparent way, how decisions made using AI can avoid perpetuating entrenched bias, and to clarify who should be liable when an AI product or service causes harm.

Industry will push back strongly on any attempt by regulators to compel disclosure of proprietary information, such as the algorithms that underpin machine-learning models. While supporting the aims of transparency and accountability, the prevailing logic in the tech industry is that creating a new regulatory framework specifically for AI today would be premature, as the way this market will develop is still highly uncertain.

There is a degree of truth to this assertion. But given that EU regulators will soon be armed with strong new enforcement powers in data protection (through the General Data Protection Regulation) and cybersecurity (through the NIS Directive), it is entirely appropriate for regulators to consider how these powers can be deployed to address the important ethical and normative concerns associated with AI. Without strong, demonstrable public oversight, trust in AI among the general population will be slow to develop, and AI adoption rates will suffer as a result.

Author: Matt Allison, Manager, Public Policy, Access Partnership

The article was originally published on Tech UK on 26 April as part of AI Week.
