Access Alert: Key takeaways from the AI for Good Global Summit 2023

What direction should AI policy and regulation take?

This was one of the major questions addressed at the AI for Good Global Summit 2023, held 6-7 July at the International Conference Centre Geneva (CICG). Organised by the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, the annual summit aims to identify practical applications of artificial intelligence (AI) to accelerate progress towards the Sustainable Development Goals (SDGs).

Over 280 projects were showcased, demonstrating AI’s potential to address pressing global issues such as climate change, education, hunger, and poverty. The focus was on putting human values first, addressing bias, developing ethical systems, and ensuring transparency and accountability.

Policy & Regulatory Considerations

The summit generated several ideas on how to approach AI from a policy and regulatory perspective, especially in light of the EU’s proposed AI Act.

The main ideas presented focused on the need to enhance international coordination and organisation of AI policy efforts, including:

AI Ethics: UNESCO presented its Recommendation on the Ethics of Artificial Intelligence. Several governments and institutions have developed AI ethics guidelines or principles; the focus should now shift from principles to implementation and compliance, while recognising that countries and regions are at different stages of policy development.

Data Governance: One of the challenges with AI is regulating the data used to train models, i.e., what type of data should be allowed to feed into AI models and applications. AI regulation is therefore intrinsically linked to data regulation. Any organisation developing AI applications needs to make data governance an integral part of its development and modelling process, and global discussions around AI regulation will need to address data-related issues such as access, ownership, transparency, pooling, harmonisation, and interoperability.

Registry of AI applications: Given the growing number of AI applications, it was suggested that a registry of existing and future applications be established to track applications, use cases, and implications. Governments, it was argued, have a duty of care in AI governance and will need a structured approach to registering AI applications to ensure their safe and responsible use.

Global observatory on AI: Given the complexity of the AI landscape, it was suggested that an international body is needed to coordinate international and inter-governmental efforts in the field. Such a body could (i) conduct research, (ii) oversee and harmonise international AI policy and regulation, or (iii) approve AI applications to ensure safe, responsible, and inclusive deployment. The case for an international body rests on the fact that international organisations such as the ITU and UNESCO can provide evidence and expertise, but they are not enforcers of regulation.

AI standards: The ITU-T will play a role in developing international standards for AI in collaboration with other UN agencies. This work will be organised through ITU-T Focus Groups exploring various aspects of AI and machine learning, including machine learning for future networks, environmental efficiency, health, and autonomous vehicles.

AI capacity building initiatives: Proposals were presented on empowering existing organisations that may already have the expertise and structures to tackle the challenges brought on by AI, from both a policy and a research perspective. The ITU and UNESCO will play a key role in leading these efforts within the UN system.

AI Policy Lab

While the AI for Good Global Summit presented several important ideas on international AI governance, significant work lies ahead to develop effective approaches to AI policy and regulation that (i) ensure inclusive and responsible use of AI, (ii) keep consumers and users safe, and (iii) promote innovation for the social good.

Access Partnership’s AI Policy Lab is working closely with governments and the private sector to shape AI policy and regulation globally. If you are interested in learning more about our AI Policy Lab or require support to stay on top of AI policy and regulatory developments, please contact Anja Engen at [email protected].
