Key Takeaways: Leading Your Organisation to Responsible AI Event

As machines become better and smarter at making decisions, the question of how we ensure their ethical behaviour arises. This was one of the topics debated at the Digital Leadership Forum’s (DLF) “Leading your organisation to responsible AI” event, hosted by Lloyds Banking Group in London on 19 July. The session was the first instalment of a series of events, part of DLF’s newly created “AI for Good” initiative. The project, supported by Dell Technologies, aims to help organisations define and deploy ethical artificial intelligence (AI).

The event kicked off with a discussion of the traditional “black box” AI model, built on the idea that the more data-heavy and complex a system is, the more accurate its predictions will be. In practice, however, this does not always hold, and the complexity makes outcomes harder to explain. For example, a bank might decline a mortgage application based on an AI model’s recommendation and then be unable to tell the customer why. AI systems can also be unintentionally biased, and even discriminatory, because they reflect the patterns in the data fed into them. One solution is to build an “explanatory model”: a replica of the traditional black box with an added feedback loop that improves performance and robustness while minimising biases in the data. Mastering the explanatory model will help organisations build trust with their customers and support compliance with regulatory requirements.

Concerns about privacy and the inherent bias of AI systems have long been seen as the technology’s most common pitfalls. Bias arises when there is a lack of transparency about how AI systems operate and when data quality is poor. Machine learning models are only as good as the data they are trained on, and they often replicate biases deeply rooted in our society. To address this challenge, organisations should ensure that AI applications are developed by teams from diverse demographic, gender, ethnic and socio-economic backgrounds. Moreover, to ensure that the collection of personal data, including information about identity and behaviour, does not erode individual privacy, organisations must remain transparent, justified and accountable throughout their decision-making and their application of AI. These values underpin the individual’s right to be forgotten. Managed well, AI will challenge society and enable us to be more productive and conscientious, although participants also noted that machines will not outsmart people, because the uniquely human process of emotion cannot be replicated.

Focus then turned to whether we really need AI. Technology is often presented as a solution to difficult problems, yet non-technical problems are rarely solved by technology. Before launching a new AI application, organisations should ask whether AI is the most suitable and effective way to address the problem at hand, and should consider the context and ultimate purpose in which it will be used. The same applies to the regulation of technology: regulation should not simply respond to technological change but also facilitate and mediate its use. It should target actions and human behaviour, not a particular type of technology. Governments must provide a legally enforceable threshold for responsible AI that steers innovation in a positive direction.

The event ended with roundtable discussions on topics ranging from avoiding bias and ensuring responsible AI to regulating AI in the interest of society. While “responsible AI” has become a widely debated issue, stakeholders struggle to define what the term means and to distinguish responsible practices from irresponsible ones. Instead of adopting umbrella terms, it is more useful to consider AI applications in the context of specific sectors and use cases; this sharpens the focus on the challenges of AI in a particular sector and the actions required to solve them collectively. It is essential to engage a range of perspectives across sectors and to involve multiple actors, including industry, academia, civil society and governments. Finally, AI applications should be examined at every level, from data to algorithms, to ensure they are free from societal biases and from discrimination against historically underrepresented and marginalised communities.

Author: Ivan Ivanov, Marketing Manager, Access Partnership
