The rapid advancement of artificial intelligence (AI) has ushered in a new era of technological capabilities, revolutionising economic sectors and transforming the way we act and interact.
While these advancements demonstrate the benefits and transformative potential of AI, they also raise a range of ethical concerns, particularly around mitigating the technology's many potential risks and side effects.
In this regard, countries and regions around the world have taken action to advance responsible AI use, including developing AI ethics frameworks that provide guiding principles to navigate AI development, deployment, and innovation.
National and regional frameworks
The United States (US), the European Union (EU), and Singapore have each released their own AI ethics frameworks, built on broadly overlapping principles.
The frameworks share common principles in promoting the transparency and explainability of AI systems, adopting a human-centric approach to AI development, and ensuring fairness and non-discrimination in algorithmic design.
The table below summarises the emerging areas of convergence in ethical AI principles across jurisdictions:
AI ethics principles: A comparison of frameworks across the US, EU, and Singapore
Global discussions on ethical AI have also gained significant momentum across various multilateral fora and platforms.
UNESCO and the G20/OECD have also published AI frameworks to ensure the ethical and responsible deployment of AI. Notably, both frameworks outline similar principles in promoting human-centred values and fairness, safety and security, transparency and explainability, accountability, and inclusive growth. The table below presents the overlapping areas between the two frameworks:
AI ethics principles: A comparison of frameworks by UNESCO and the G20/OECD
The emergence of various frameworks at the national, regional, and multilateral levels is a positive indication that countries and regions are acknowledging the significance of both fostering AI innovation and establishing protective measures.
However, the fact that each country, region, and organisation is developing its own set of principles may lead to confusion and delay the interoperability of trustworthy and responsible AI systems. It is therefore crucial for stakeholders to engage in collaborative efforts aimed at harmonising these frameworks and establishing common principles that guide responsible and fair AI implementation.
Access Partnership and the Fair Tech Institute are closely tracking developments on responsible AI and the AI landscape in the Southeast Asia region and across the world. If you would like to request an expert briefing on this issue or more information on AI-related issues, please contact Li Xing at [email protected] or Lim May-Ann at [email protected].