Renuka Rajaratnam, Senior Policy Manager, Asia & US
Xiaomeng Lu, Senior Policy Manager, Asia & US
“While AI can do much good, including by making products and processes safer, it can also do harm.” – European Commission.
The potential harm associated with AI has spurred global discussions among policy-makers in recent years regarding the need to regulate and ‘guide’ AI development. Whilst the EU officially published its plans to regulate AI last month, across Asia-Pacific AI regulation is still in its early stages. So far, most of the region’s efforts have focused on developing national and sector-level principles to guide and support the growth of an ethical AI industry.
The ethical AI principles documents released by governments in Asia are emblematic of each country’s cultural, economic, and political identity. National values and ambitions are clearly visible beyond the foundational AI principles of human-centricity, fairness, transparency and explainability, accountability, and privacy and security.
Singapore’s pragmatic and market-oriented Model AI Governance Framework keeps ethical principles to a minimum and is instead rife with implementation guidelines and industry best practices. Singapore and Hong Kong, in line with their reputations as global financial hubs, have developed additional ethical AI guidelines for the financial sector.
Focused on consumer protection, environmental concerns, and national security, Australia’s AI Ethics Principles include values such as ‘human, social and environmental well-being’ and ‘contestability’. Contestability recommends putting processes in place so that society and consumers can challenge the use or output of an AI system. The spirit of contestability is also found in parallel regulatory developments such as the Consumer Data Right, the 2019 inquiry into digital platforms, and the Human Rights Commission’s discussions on technology’s impact on basic rights.
In line with its Society 5.0 vision, Japan’s social principles of AI were developed to address the country’s socio-economic problems and provide the basis for an ‘AI-ready society’. These problems include a declining birth rate, an ageing population, labour shortages, depopulation, and rising fiscal expenditure.
China’s national AI Governance Principles, developed through a government-led public-private partnership process, reflect its authoritarian political system. The drafting committee took a top-down approach in developing the principles, which address eight areas, including security and controllability, privacy, inclusiveness, fairness and justice, and open cooperation.
In a region characterised by a variety of cultural norms, policy priorities, and political dispositions, the first instalment of AI policies and ethical principles in Asia foreshadows eventual asymmetries in the emerging AI regulatory landscape. National values and norms will especially influence the extent to which APAC policy-makers put in place critical AI rules around intellectual property rights, liability, and human safety.