Access Alert: Key takeaways from the AI for Good Global Summit 2023

What direction should AI policy and regulation take?

This was one of the major questions addressed at the AI for Good Global Summit 2023, held 6-7 July at the International Conference Centre Geneva (CICG). Organised by the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, the annual summit aims to identify practical applications of artificial intelligence (AI) to accelerate progress towards the Sustainable Development Goals (SDGs).

Over 280 projects were showcased, demonstrating AI’s potential to address pressing global issues such as climate change, education, hunger, and poverty. The focus was on putting human values first, addressing biases, developing ethical systems, and ensuring transparency and accountability.

Policy & Regulatory Considerations

The summit provided several ideas on how to address AI from a policy and regulatory perspective, particularly in light of the EU AI Act, which was progressing through the legislative process at the time.

The main ideas presented focused on the need to enhance international coordination and organisation of AI policy efforts, including:

AI Ethics: UNESCO presented its Recommendation on the Ethics of Artificial Intelligence, and several governments and institutions have developed their own AI ethics guidelines or principles. However, the focus should now shift from principles to implementation and compliance, while recognising that countries and regions are at different stages of policy development.

Data Governance: One of the challenges with AI is regulating the data used to train models, i.e., what type of data should be allowed to feed into AI models and applications. AI regulation is therefore intrinsically linked to data regulation, and any organisation developing AI applications needs to make data governance an integral part of the technology development and modelling process. Global discussions around AI regulation will consequently need to address data-related issues such as access, ownership, transparency, pooling, harmonisation, and interoperability.

Registry of AI applications: With the growing number of AI applications, participants suggested implementing a registry of existing and future AI applications to track applications, use cases, and implications. Governments, it was argued, have a duty of care in AI governance and will need a structured approach to registering AI applications to ensure their safe and responsible use.

Global observatory on AI: Given the highly complex AI landscape, it was suggested that an international body on AI is needed to coordinate international and inter-governmental efforts in the field. The role and responsibilities of such a body could be (i) conducting research, (ii) overseeing and harmonising international AI policy and regulation, or (iii) approving AI applications to ensure safe, responsible, and inclusive deployment. The rationale is that international organisations such as the ITU and UNESCO can provide evidence and expertise but are not enforcers of regulation.

AI standards: The ITU-T will play a role in developing international standards for AI in collaboration with other UN agencies. This work will be organised through ITU-T Focus Groups exploring various aspects of AI and machine learning, including machine learning for future networks, environmental efficiency, health, and autonomous vehicles.

AI capacity building initiatives: Proposals were presented on empowering existing organisations that already have the expertise and structures to tackle the challenges posed by AI, from both a policy and a research perspective. The ITU and UNESCO will play a key role in leading these efforts within the UN system.

AI Policy Lab

While the AI for Good Global Summit presented several important ideas on international AI governance, significant work lies ahead to find effective solutions for AI policy and regulation that (i) ensure inclusive and responsible use of AI, (ii) keep consumers and users safe, and (iii) promote innovation for the social good.

Access Partnership’s AI Policy Lab is working closely with governments and the private sector to shape AI policy and regulation globally. If you are interested in learning more about our AI Policy Lab or require support to stay on top of AI policy and regulatory developments, please contact Anja Engen at [email protected].
