Key Takeaways: Leading Your Organisation to Responsible AI Event

As machines become better and smarter at making decisions, the question of how we ensure their ethical behaviour arises. This was one of the topics debated at the Digital Leadership Forum’s (DLF) “Leading your organisation to responsible AI” event, hosted by Lloyds Banking Group in London on 19 July. The session was the first instalment of a series of events, part of DLF’s newly created “AI for Good” initiative. The project, supported by Dell Technologies, aims to help organisations define and deploy ethical artificial intelligence (AI).

The event kicked off with a discussion of the traditional “black box” AI model, built on the idea that the more data-heavy and complex a system is, the more accurate it becomes. In practice, however, this does not always hold, and the added complexity makes it harder to explain how a given outcome was reached. For example, a bank might decline a mortgage application on the AI model’s recommendation and then be unable to tell the customer why. AI systems can also be unintentionally biased and even discriminatory, often reflecting trends in the data fed into the model. One solution is to build an “explanatory model”: one that replicates the traditional black box but adds a feedback loop to improve performance and robustness and to minimise biases in the data. Mastering the explanatory model will help organisations build trust with their customers and support compliance with regulatory requirements.
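The idea of explaining a black box can be sketched in a few lines of code. The scoring function below is entirely hypothetical (it was not presented at the event); the explainer simply probes the black box with small input perturbations to estimate how strongly each feature drives the score, in the spirit of local surrogate explanation methods.

```python
# A minimal sketch of local explainability for a black-box model, assuming a
# hypothetical mortgage-scoring function (invented for illustration).

def black_box_score(income, debt):
    """Hypothetical opaque scoring model, standing in for a real AI system."""
    return 0.7 * income - 1.2 * debt + 5.0

def explain_locally(model, point, eps=1e-6):
    """Estimate each feature's local weight via finite differences:
    bump one input at a time and measure how the score moves."""
    base = model(*point)
    weights = []
    for i in range(len(point)):
        probe = list(point)
        probe[i] += eps
        weights.append((model(*probe) - base) / eps)
    return weights

applicant = (50.0, 20.0)  # (income, debt), in arbitrary units
weights = explain_locally(black_box_score, applicant)
# weights[0] > 0: income raises the score; weights[1] < 0: debt lowers it.
```

An explanation of this shape ("your debt level lowered your score") is exactly what the bank in the example above could not provide to the declined applicant.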

Concerns regarding privacy and inherent bias have long been seen as the most common pitfalls of AI. Bias arises when there is a lack of transparency in how AI systems operate and when data is of poor quality: machine learning systems are only as good as the data they are trained on, and often replicate biases deeply rooted in our society. To address this challenge, organisations must ensure that AI applications are developed by teams from diverse demographic, gender, ethnic and socio-economic backgrounds. Moreover, to ensure that the collection of personal data, including identity and behaviour, does not diminish individual privacy, organisations must remain transparent, justified and accountable throughout their decision-making processes and in their application of AI; these values underpin the individual’s right to be forgotten. If managed well, AI will challenge society and enable us to be more productive and conscientious, although participants also noted that machines will not outsmart people, as the uniquely human process of emotion cannot be replicated.
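One symptom of biased training data can be checked mechanically: different outcome rates across groups (often called the demographic parity gap). The records below are invented for illustration; a real audit would run the same check over production decisions.

```python
# A toy fairness check: compare approval rates across groups.
# All records here are invented for illustration only.

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Per-group approval rate: approvals divided by total applications."""
    totals, approved = {}, {}
    for r in rows:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if r["approved"] else 0)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(records)
gap = max(rates.values()) - min(rates.values())
# A large gap does not prove discrimination, but it flags the data for review.
```

A check like this only surfaces the problem; deciding whether a gap is justified still requires the human judgement and diverse teams discussed above.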

Focus then turned to whether we really need AI. Technology is often presented as the solution to difficult problems, yet non-technical problems are rarely solved by technology. Before launching a new technology or AI application, organisations should ask themselves whether AI is the most suitable and effective way to address the problem at hand, considering the context and ultimate purpose in which it will be used. The same applies to the regulation of technology: regulation should not simply respond to technological change but also facilitate and mediate its use. It should target actions and human behaviour, not a particular type of technology, and governments must provide a legally enforceable threshold for responsible AI that steers innovation in a positive direction.

The event ended with roundtable discussions on topics ranging from avoiding bias and ensuring responsible AI to regulating AI in the interest of society. While “responsible AI” has become a widely debated issue, stakeholders struggle to define the term and to distinguish these practices from “irresponsible AI”. Rather than adopting umbrella terms, it is more useful to consider AI applications in the context of specific sectors and use cases. This gives a sharper focus on the challenges AI poses in a particular sector and the actions required to solve them collectively. It is essential to engage with a range of perspectives across sectors and to involve multiple actors, including industry, academia, civil society and governments. Finally, all levels of AI applications, from data to algorithms, should be examined to ensure that they are free from societal biases and from discrimination against historically underrepresented and marginalised communities.

Author: Ivan Ivanov, Marketing Manager, Access Partnership
