Key Takeaways: Leading Your Organisation to Responsible AI Event

As machines become better and smarter at making decisions, the question of how we ensure their ethical behaviour arises. This was one of the topics debated at the Digital Leadership Forum’s (DLF) “Leading your organisation to responsible AI” event, hosted by Lloyds Banking Group in London on 19 July. The session was the first instalment of a series of events, part of DLF’s newly created “AI for Good” initiative. The project, supported by Dell Technologies, aims to help organisations define and deploy ethical artificial intelligence (AI).

The event kicked off with a discussion of the traditional "black box" AI model, built on the idea that the more data-heavy and complex a system is, the more accurate its predictions will be. In practice, however, this does not always hold, and the added complexity makes it harder to explain how an outcome was reached. A bank, for example, might decline a mortgage application on an AI model's recommendation and then be unable to tell the customer why. AI systems can also be unintentionally biased, even discriminatory, because they reflect the patterns in the data fed into them. One proposed solution is to build an "explanatory model": a model that replicates the behaviour of the traditional black box but adds a feedback loop to improve performance and robustness while minimising biases in the data. Mastering the explanatory model will help organisations build trust with their customers and support compliance with regulatory requirements.
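
One way to picture the explanatory-model idea is a surrogate: a simple, interpretable model trained to mimic the black box's predictions so its decisions can be put into words. The sketch below illustrates this under stated assumptions; the synthetic dataset, model choices, and feature names are ours, not a method presented at the event.

```python
# A minimal sketch of an explanatory surrogate, assuming scikit-learn:
# a complex "black box" makes the predictions, while a shallow decision
# tree is trained to approximate it so outcomes can be explained.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for, e.g., mortgage application data (illustrative).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# The "black box": accurate but hard to interpret.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate is fitted on the black box's *predictions*, so its
# decision rules approximate the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules that could be shown to a customer or regulator.
print(export_text(surrogate, feature_names=feature_names))
```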

Concerns about privacy and the inherent bias of AI systems have long been seen as the technology's most common pitfalls. Bias arises when there is a lack of transparency about how AI systems operate and when data is of poor quality. Machine learning systems are only as good as the data they are trained on, and they often replicate the deeply rooted biases present in our society. To address this challenge, organisations must ensure that AI applications are developed by teams from diverse demographic, gender, ethnic and socio-economic backgrounds. Moreover, to ensure that the collection of personal data, including an individual's identity and behaviours, does not erode privacy, organisations must remain transparent, justified and accountable throughout their decision-making processes and in their application of AI. These values underpin the individual's right to be forgotten. If managed well, AI will challenge society and enable us to be more productive and conscientious, although participants also noted that machines will not outsmart people, as the uniquely human process of emotion cannot be replicated.
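
To make the data-bias point concrete, one common first check is to compare a model's approval rates across demographic groups. The sketch below is illustrative only, assuming synthetic decisions and group labels; the 0.8 threshold is the widely used "four-fifths" heuristic, not a standard endorsed at the event.

```python
# A minimal disparate-impact check on hypothetical model decisions.
import numpy as np

rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, size=1000)          # 1 = approved (synthetic)
groups = rng.choice(["group_a", "group_b"], 1000)  # protected attribute (synthetic)

# Approval rate per group, and the ratio of the worst- to best-served group.
rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths rule" heuristic
    print("potential adverse impact: review the model and its training data")
```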

Focus then turned to whether we really need AI. Technology is often presented as a solution to difficult problems, yet non-technical problems are rarely solved by technology. Before launching a new technology or AI application, organisations should ask themselves whether AI is the most suitable and effective way to address the problem at hand, and should consider the context in which AI is to be used and its ultimate purpose. The same applies to the regulation of technology: regulation should not simply respond to technological change but also facilitate and mediate its use. It should target actions and human behaviour, not a particular type of technology. Governments must provide a legally enforceable threshold for responsible AI that drives innovation in a positive direction.

The event ended with roundtable discussions on topics ranging from avoiding bias and ensuring responsible AI to regulating AI in the interest of society. While "responsible AI" has become a widely debated issue, stakeholders struggle to define the term and to distinguish responsible practices from "irresponsible AI". Rather than adopting umbrella terms, it is more useful to consider AI applications in the context of specific sectors and use cases. This gives a sharper focus when weighing the challenges AI poses in a particular sector and the actions required to solve them collectively. It is essential to engage with a range of perspectives across sectors and to involve multiple actors, including industry, academia, civil society and governments. Finally, all layers of an AI application, from data to algorithms, should be examined to ensure they are free from societal biases and from discrimination against historically underrepresented and marginalised communities.

Author: Ivan Ivanov, Marketing Manager, Access Partnership
