A period of rapid transformation is dawning, but what legal and ethical questions does this paradigm shift raise? On 25 February 2021, EURACTIV hosted “The Just AI Transition: Where Are the Opportunities and Threats?”, a virtual conference that brought together experts from governance, academia and industry to discuss regulatory frameworks for the development of AI, potential use cases across multiple sectors, and the extent to which this technology can change our lives.
Regulating the Industry
For some, AI is an exciting chance to make tomorrow’s world better; for others, it poses one of the more immediate, yet uncertain, threats to humanity. This duality was highlighted by Lucilla Sioli, Director for Artificial Intelligence and Digital Industry, DG CONNECT, European Commission, who delivered a keynote address to open the conference. While AI does present a number of opportunities – in the last year alone, it has helped speed up the delivery of successful COVID-19 vaccines – Sioli emphasised that it could violate rights if a careful approach is not taken.
To make the technology trustworthy and reliable, the EU is currently working with member states and a high-level expert group to develop a plan; due to be published in April 2021, this strategy will include safeguards and an outline of the infrastructure required to ensure the technology’s appropriate use. The idea, Sioli suggests, is to ensure AI systems are safe and compliant before they enter the market.
Joanna Bryson, Professor of Ethics and Technology at The Hertie School of Governance, echoed Sioli’s sentiment on the importance of a rule-based approach to AI that holds manufacturers to account. Bryson asserted that although AI’s full capabilities are not yet understood, the technology is not necessarily opaque: we should know who has developed it and whether they are following regulations.
Accessing AI’s Potential
Representing the view of industry, Loubna Bouarfa, Founder and CEO of Okra Technologies, diverged from the arguments made by Sioli and Bryson. According to her, the COVID-19 pandemic has demonstrated how our reliance on rigid, rule-based systems creates a high degree of uncertainty that technologies like AI, which are outcome-based, can overcome. “Speed and data are the new currency,” Bouarfa opined, so we need to leverage technologies that can help us make optimal decisions faster.
Torbjørn F. Folgerø, SVP and Chief Digital Officer at Equinor, also stressed the game-changing possibilities created by AI. Looking at the technology from the perspective of the energy sector, Folgerø described AI as “critical” to making the transition from oil and gas to renewable sources of power. He also highlighted the potential for AI to build confidence and enhance the skills of employees, who are already receiving training to participate in the industries of tomorrow.
Bouarfa also commented on this point of training and upskilling, focusing on the satisfaction of workers in sectors that are already utilising AI. While individual tasks can be automated through such methods as machine learning, the technology is not usually able to replace entire roles. Instead, it can do the heavy lifting on boring, repetitive duties that are time-consuming and energy-sapping for the workforce. By freeing up some of this time, workers have greater opportunities to show creativity and develop their skills in other areas, improving morale and competence.
A Human-Centric Approach
Perhaps the central theme of the debate among panellists was the relationship between humans and machines in a world of AI. All participants agreed that a key aspect of unlocking the full potential of the technology is to establish diverse teams behind the scenes, particularly for the purposes of quality assurance. By bringing together people from a range of backgrounds, the AI developed will reflect a broader range of perspectives, a process that would also cultivate transparency and fairness.
Taking this point further, the human role in AI remains integral. In her keynote address, Sioli reflected that while AI is good at performing specific tasks, it is still far from being as versatile as the human mind; both the utopian and doomsday views on either side of the discussion are overstated, and we still have a huge part to play in guiding the journey of AI. A crucial aspect of this is building trust among the wider population.
The focus of any law that emerges with respect to AI should be to ensure responsibility. Bryson argued that while the Digital Services Act currently compensates for gaps in the EU’s competition law, it will be crucial to ensure that big corporations cannot operate free of regulation and evade responsibility for AI. Adding to the argument, Folgerø posited that internal compliance is also needed within companies to make certain that the use of AI is safe.
Building for the Future
Despite fears of job losses from the wider rollout of automation, it has been found that the companies that care most about their employees are those investing in AI. Summarising their thoughts on the topic, the panellists agreed that we need to “create better solutions for a broader society”, a goal that can be achieved by utilising the advanced tools at our disposal. While there is a need for laws to protect society from the whiplash effect of technological development, it is clear that AI can push us toward a brighter tomorrow.