On Monday 21 February, Policy Manager and Anti-Corruption expert Sofia Tirini attended the PECB Anti-Bribery Conference 2022, covering AI and integrity policy frameworks and how to build principles for ethical, rights-respecting artificial intelligence.

More than 200 participants attended the session and raised questions and comments on the use of AI and the risks posed by this new technology. Sofia gave a thorough presentation on AI use cases in relation to integrity and transparency, explaining three of the main risks associated with this technology: discrimination, data privacy and freedom of opinion. During the one-hour discussion, the panellist described different approaches to ensuring the best application of AI in tackling corruption. Below are the main take-aways:
- The best approach to reducing the potential harm of AI is to develop and implement principles and policies for AI ethics and rights, following a risk-based approach, within a broader risk-management framework for trustworthy and responsible AI.
- Open dialogue and collaboration among all stakeholders are fundamental to the process of establishing new regulations.
- In the future, the volume of available data will be even larger than today. One challenge in using this data will be to effectively process heterogeneous data sources before extracting and transforming the data into one or more data models. The importance of data quality (including consistency, integrity, accuracy, and completeness) will require new ways of gathering appropriate information and data, and this must be done following an ethical approach (a minimal sketch of such quality checks follows this list).
- Explainability of AI models should be at the forefront of any development. By explainability we refer to the process of providing plain, easy-to-understand and meaningful information, appropriate to the context, to foster a general understanding of AI systems and allow users to understand their interaction with AI tools (see the short illustration at the end of this list).
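To make the data-quality point above more concrete, here is a minimal sketch, not taken from the conference material, of automated checks for completeness, accuracy and consistency. It assumes a pandas DataFrame of procurement-style records; the column names (supplier_id, contract_value, award_date) are purely illustrative.

```python
import pandas as pd

# Hypothetical procurement records; column names are illustrative only.
records = pd.DataFrame({
    "supplier_id": ["S1", "S2", None, "S4"],
    "contract_value": [120_000, -5_000, 80_000, 95_000],
    "award_date": ["2021-03-01", "2021-05-17", "2021-06-30", "not a date"],
})

def quality_report(df: pd.DataFrame) -> dict:
    """Basic checks for completeness, accuracy and consistency."""
    completeness = 1 - df.isna().mean()               # share of non-missing values per column
    valid_values = (df["contract_value"] > 0).mean()  # accuracy: contract values must be positive
    valid_dates = pd.to_datetime(df["award_date"], errors="coerce").notna().mean()
    return {
        "completeness": completeness.to_dict(),
        "positive_contract_value_share": float(valid_values),
        "parseable_award_date_share": float(valid_dates),
    }

print(quality_report(records))
```

Checks like these can gate data before it is fed into an AI model, so that decisions are not driven by incomplete or inconsistent records.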
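As one simple illustration of what explainability can mean in practice (not a method presented at the conference), the sketch below uses scikit-learn's permutation importance to report which input features most influence a model's predictions; the synthetic data and the feature names are assumptions made only for this example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for, e.g., red-flag indicators in transaction records.
X, y = make_classification(n_samples=1_000, n_features=6, n_informative=3, random_state=0)
feature_names = [f"indicator_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling one feature degrade performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```

Reporting which indicators drive a model's output, in plain language appropriate to the audience, is one way to give users the contextual understanding of AI systems described above.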