Access Alert: Key takeaways from the AI for Good Global Summit 2023

What direction should AI policy and regulation take?

This was one of the major questions addressed at the AI for Good Global Summit 2023, held 6-7 July at the International Conference Centre Geneva (CICG). Organised by the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, the annual summit aims to identify practical applications of artificial intelligence (AI) to accelerate progress towards the Sustainable Development Goals (SDGs).

Over 280 projects were showcased, demonstrating AI’s potential to address pressing global issues such as climate change, education, hunger, and poverty. The focus was on putting human values first, addressing bias, developing ethical systems, and ensuring transparency and accountability.

Policy & Regulatory Considerations

The summit generated several ideas on how to address AI from a policy and regulatory perspective, especially with reference to the EU’s proposed AI Act.

The main ideas presented focused on the need to enhance international coordination and organisation of AI policy efforts, including:

AI Ethics: UNESCO presented its Recommendation on the Ethics of Artificial Intelligence, and several governments and institutions have developed their own AI ethics guidelines or principles. The focus now needs to shift from principles to implementation and compliance, while recognising that countries and regions are at different stages of policy development.

Data Governance: One of the challenges with AI is regulating the data used to train models, i.e., what type of data should be allowed to feed into AI models and applications. AI regulation is therefore intrinsically linked to data regulation, and any organisation developing AI applications needs to make data governance an integral part of its technology development and modelling process. Global discussions around AI regulation will accordingly need to address data-related issues such as access, ownership, transparency, pooling, harmonisation, and interoperability.

Registry of AI applications: With the growing number of AI applications, it was suggested that a registry of existing and future applications be established to track applications, use cases, and implications. Governments, it was argued, have a duty of care in AI governance and will need a structured approach to registering AI applications to ensure their safe and responsible use (an illustrative sketch of what such a registry entry might record follows these points).

Global observatory on AI: Given the highly complex AI landscape, it was suggested that an international body on AI is needed to coordinate international and inter-governmental efforts in the field. Such a body could be (i) research-based, (ii) responsible for overseeing and harmonising international AI policy and regulation, or (iii) tasked with approving AI applications to ensure safe, responsible, and inclusive deployment. The case for an international body rests on the fact that international organisations such as the ITU and UNESCO can provide evidence and expertise, but they are not enforcers of regulation.

AI standards: The ITU-T will play a role in developing international standards for AI in collaboration with other UN agencies. This work will be organised through ITU-T Focus Groups exploring various aspects of AI and machine learning, including machine learning for future networks, environmental efficiency, health, and autonomous vehicles.

AI capacity building initiatives: Proposals were presented on empowering existing organisations that may already have the expertise and structures to tackle the challenges brought on by AI, from both a policy and a research perspective. The ITU and UNESCO will play a key role in leading these efforts within the UN system.
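
Neither the summit nor this alert prescribes what a registry of AI applications would actually contain. As a purely illustrative sketch, the short Python example below shows the kind of fields a registry entry might capture, tying together the registration and data-governance themes above; every class, field, and value is a hypothetical assumption, not a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List


class RiskLevel(Enum):
    """Illustrative risk tiers, loosely mirroring risk-based approaches such as the EU's proposed AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class RegistryEntry:
    """Hypothetical record for a registry of AI applications (all fields are assumptions)."""
    application_name: str
    provider: str
    intended_use: str
    risk_level: RiskLevel
    registered_on: date
    # Data-governance metadata: where the training data came from and on what basis it is accessed.
    training_data_sources: List[str] = field(default_factory=list)
    data_access_basis: str = "unspecified"


# Usage example: registering a hypothetical triage-support tool.
entry = RegistryEntry(
    application_name="ClinicTriageAssistant",
    provider="Example Health Ltd",
    intended_use="Prioritising incoming patient queries for human review",
    risk_level=RiskLevel.HIGH,
    registered_on=date(2023, 7, 7),
    training_data_sources=["anonymised historical triage notes"],
    data_access_basis="patient consent",
)
print(entry)
```

A real registry would need fields and risk categories agreed across jurisdictions, which is precisely the international coordination gap the summit highlighted.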

AI Policy Lab

While the AI for Good Global Summit presented several important ideas on international AI governance, significant work lies ahead to find effective approaches to AI policy and regulation that (i) ensure inclusive and responsible use of AI, (ii) keep consumers and users safe, and (iii) promote innovation for the social good.

Access Partnership’s AI Policy Lab is working closely with governments and the private sector to shape AI policy and regulation globally. If you are interested in learning more about our AI Policy Lab or require support to stay on top of AI policy and regulatory developments, please contact Anja Engen at [email protected].
