On Monday 21 February, Policy Manager and Anti-Corruption expert Sofia Tirini attended the PECB Anti-Bribery Conference 2022, which covered AI and integrity policy frameworks and how to build principles for ethical, rights-respecting artificial intelligence.
More than 200 participants attended the session, raising questions and comments about the use of AI and the risks posed by this new technology.
Sofia gave a thorough presentation on AI use cases relating to integrity and transparency, explaining three of the main risks associated with this technology: discrimination, data privacy, and freedom of opinion.
During the one-hour discussion, the panellists described different approaches to ensuring the best application of AI in tackling corruption. Below are the main takeaways:
- The best approach to reducing the potential harm of AI is to develop and implement risk-based principles and policies for AI ethics and rights, within a broader risk management framework for trustworthy and responsible AI.
- Open dialogue and collaboration among all stakeholders are fundamental to the process of establishing new regulations.
- In the future, the volume of available data will be even larger than it is today. One challenge in using this data will be to effectively process heterogeneous data sources, extracting and transforming the data into one or more data models. The importance of data quality (including consistency, integrity, accuracy, and completeness) will require new ways to gather appropriate information, and this must be done following an ethical approach.
- Explainability of AI models should be at the forefront of any development. By explainability we mean the process of providing plain, easy-to-understand, and meaningful information, appropriate to the context, to foster a general understanding of AI systems and allow users to understand their interactions with AI tools.
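To make the data-quality dimensions above concrete, here is a minimal, illustrative sketch of how completeness and consistency might be scored over a batch of records. The field names, schema, and sample data are entirely hypothetical, not drawn from any specific anti-corruption dataset.

```python
# Illustrative data-quality checks for two of the dimensions named above:
# completeness (required fields present) and consistency (expected types).
# All field names and the sample records are hypothetical.

def quality_report(records, required_fields, expected_types):
    """Return per-dimension scores (0.0-1.0) for a list of record dicts."""
    total = len(records)
    if total == 0:
        return {"completeness": 0.0, "consistency": 0.0}

    # Completeness: share of records where every required field is filled in.
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )

    # Consistency: share of records whose fields match the expected types.
    consistent = sum(
        1 for r in records
        if all(isinstance(r.get(f), t) for f, t in expected_types.items())
    )

    return {
        "completeness": complete / total,
        "consistency": consistent / total,
    }

# Hypothetical procurement-style records with deliberate quality issues.
sample = [
    {"amount": 120.0, "vendor": "Acme"},
    {"amount": "n/a", "vendor": "Beta"},   # inconsistent type for "amount"
    {"amount": 50, "vendor": ""},          # incomplete "vendor" field
]
schema = {"amount": (int, float), "vendor": str}
print(quality_report(sample, ["amount", "vendor"], schema))
```

Checks like these are a starting point; accuracy and integrity typically require comparison against external reference data rather than the records alone.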
While companies are developing this technology, governments are trying to keep pace in order to regulate AI and ensure a safe environment for R&D, reducing the harm that AI tools can cause. The biggest challenge is striking a balance between protecting people and reducing risks, while enabling the technology to develop without erecting barriers to AI innovation.
At Access Partnership, we have the capabilities to assist all players in developing ethical and transparent standards for the use of AI that ensure legal certainty and avoid excessive regulatory burdens for industry. If you would like to know more about risk-based approaches to AI, or discuss further insights into AI policy, best practices, and regulation, please feel free to reach out to Sofia at [email protected] or to our staff for more information.
If you missed the session, you can watch it again here.