As machines grow ever smarter and take on more of our decision-making, how do we ensure they behave ethically? This was the key question speakers tackled at the Digital Leadership Forum event “The Ethics of AI”, hosted by Dell Technologies at Pivotal.
Birgitte Andersen, CEO and co-founder of the Big Innovation Centre, kicked off the discussion with the UK’s role as a leader in ethical AI. Given the emerging dominance of monopolistic data giants, she urged the government to reconsider competition policy and ensure that small and medium-sized companies have equal access to personal and business data. To that end, she suggested the UK encourage public-service leadership in AI adoption and shift policy from controlling data itself towards controlling the uses of that data, based on the principle of ‘fair use’. She concluded by praising the recent creation of the UK Centre for Data Ethics and Innovation and criticising the tendency to focus on risks and harms rather than on the economic opportunities AI presents.
Up next, Access Partnership’s EMEA Public Policy Director Matthew McDermott reminded the audience that we shouldn’t jump to extreme solutions when data protection, product liability, and sectoral laws already provide a degree of regulation for AI. Instead, collaboration between government and industry can foster the development of ethical AI, for example through ethics officers in industry, and build on existing regulation while promoting innovation and investment. He also argued that the debate on AI needs to shift from technical discourse to a discussion of its outcomes. McDermott concluded by underlining the need to ensure UK leadership in the global fora working on AI ethics, including the UN, the OECD, the G20, and the WEF.
Professor Kostas Stathis of Royal Holloway offered a more technical perspective, addressing the challenges of building moral agency into machines to regulate their ethical behaviour. He highlighted two key aspects of designing an ethical AI system: value alignment with humans and data protection. In his view, AI’s main goal of facilitating human activity depends on transparency and accountability to secure public trust in the technology.
Finally, the speakers joined Henrik Nordmark, Head of Data Science at Profusion, for a panel discussion on automation of the workforce. He said that, as a society, we first need to decide what we want AI’s role to be: our friend and companion, or our assistant in decision-making. He argued that the perception of AI should therefore shift from a vision of machines mimicking humans to one of machines facilitating human activity, particularly in the workplace. Andersen added that within 10 years, 60% of current jobs will be at risk of automation; not necessarily a bad thing, she said, but one that will require retraining and re-education so workers can fit the occupations of the future.
The panel also agreed that AI can be a tool for rebuilding trust: if a company can show it is open about its use of AI, this could restore confidence in the organisation and create significant opportunities for development.
Crucially, the speakers concluded that we must not sleepwalk into a society where AI is used in unethical ways. To prevent this, both governments and industry need to look ahead, consider what the problems of the future could be, and work collectively to solve them.
Author: Teodora Delcheva, Public Policy Assistant, Access Partnership