Access Alert: Key takeaways from the AI for Good Global Summit 2023

What direction should AI policy and regulation take?

This was one of the major questions addressed at the AI for Good Global Summit 2023, held 6-7 July at the International Conference Centre Geneva (CICG). Organised by the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, the annual summit aims to identify practical applications of artificial intelligence (AI) to accelerate progress towards the Sustainable Development Goals (SDGs).

Over 280 projects were showcased, demonstrating AI’s potential to address pressing global issues such as climate change, education, hunger, and poverty. The focus was on putting human values first, addressing biases, developing ethical systems, and ensuring transparency and accountability.

Policy & Regulatory Considerations

The summit provided several ideas on how to address AI from a policy and regulatory perspective, especially with reference to the EU’s proposed AI Act.

The main ideas presented focused on the need to enhance international coordination and organisation of AI policy efforts, including:

AI Ethics: UNESCO presented its Recommendation on the Ethics of Artificial Intelligence. Several governments and institutions have developed AI Ethics Guidelines or principles. However, the focus on AI ethics should shift from principles to implementation and compliance, while recognising that countries and regions are at different stages of policy development.

Data Governance: One of the challenges with AI is regulating the data used to train models, i.e. what type of data should be allowed to feed into AI models and applications. AI regulation is therefore intrinsically linked to data regulation. Any organisation developing AI applications needs to make data governance an integral part of the technology development and modelling process. Global discussions around AI regulation will therefore need to address data-related issues such as access, ownership, transparency, pooling, harmonisation, and interoperability.

Registry of AI applications: With the growing number of AI applications, participants suggested establishing a registry of existing and future AI applications to track applications, use cases, and implications. Governments were seen as having a duty of care in AI governance and will need a structured approach to registering AI applications to ensure their safe and responsible use.

Global observatory on AI: Given the highly complex AI landscape, it was suggested that an international body on AI is needed to coordinate international and inter-governmental efforts in the field. Such a body could potentially be (i) research-based, (ii) tasked with overseeing and harmonising international AI policy and regulations, or (iii) responsible for approving AI applications to ensure safe, responsible, and inclusive deployment. The case for an international body rests on the fact that international organisations such as the ITU and UNESCO can provide evidence and expertise but are not enforcers of regulation.

AI standards: The ITU-T will play a role in developing international standards for AI in collaboration with other UN agencies. This work will be organised through ITU-T Focus Groups exploring various aspects of AI and machine learning, including machine learning for future networks, environmental efficiency, health, and autonomous vehicles.

AI capacity building initiatives: Proposals were presented on empowering existing organisations that may already have the expertise and structures to tackle the challenges brought on by AI, from both a policy and a research perspective. The ITU and UNESCO will play a key role in leading these efforts within the UN system.

AI Policy Lab

While the AI for Good Global Summit presented several important ideas on international AI governance, significant work lies ahead to find efficient solutions to AI policy and regulation that (i) ensure inclusive and responsible use of AI, (ii) keep consumers and users safe, and (iii) promote innovation for the public good.

Access Partnership’s AI Policy Lab is working closely with governments and the private sector to shape AI policy and regulation globally. If you are interested in learning more about our AI Policy Lab or require support to stay on top of AI policy and regulatory developments, please contact Anja Engen at [email protected].
