In an era in which Artificial Intelligence (AI) shapes our daily lives, the United Nations (UN) Secretary-General has taken an important step towards the responsible global governance of AI: the establishment of a High-Level Advisory Body on AI. Announced in the summer of 2023, the body marks a significant move towards a globally inclusive approach to AI governance. It comprises up to 32 experts from diverse fields and regions, selected from over 1,800 nominees from 128 countries, and aims to develop recommendations that align AI governance with human rights and the achievement of the Sustainable Development Goals (SDGs). The group commenced its work in October 2023, undertaking initial consultations in November and meeting in December, leading to the publication of its Interim Report on 22 December 2023.
Rather than proposing a single governance model, the report sets out principles to guide the formation of global governance institutions for AI and the functions that such institutions would need to perform. It acknowledges existing international initiatives on AI governance and offers preliminary recommendations, committing to elaborate on these in the final report, due by August 2024.
It calls for responsible practices in hardware and software development and warns against the opportunities that would be lost by failing to share AI's benefits widely. These ideas are set out as five guiding principles, together with seven functions that international governance mechanisms should perform. The five guiding principles are:
- AI should be governed inclusively, by and for the benefit of all: AI’s potential to improve lives is hindered by its limited accessibility and utilisation, especially in the Global South. Affirmative and corrective measures, such as improving access and building capacity, are necessary to address the underrepresentation of certain communities in technology.
- AI must be governed in the public interest: Governance efforts must align with public policy goals, expand the representation of diverse stakeholders, and delineate responsibility between public and private sector actors. For public interests to take precedence over private ones, binding norms must be consistently enforced by Member States. Further investment in public technology and infrastructure is needed to better serve public actors.
- AI governance should be built in step with data governance and the promotion of data commons: Regulations and legal frameworks that prioritise data privacy and security, while still enabling data use, are essential for effective AI governance. Creating public data commons is especially crucial for data that can help solve societal challenges like climate change, public health, and crisis response.
- AI governance must be universal, networked, and rooted in adaptive multistakeholder collaboration: AI governance should seek universal buy-in through inclusive participation and reduce entry barriers for marginalised communities in the Global South. This involves harmonising emerging AI regulations to prevent accountability gaps. The ambition should be a well-coordinated and interoperable global framework, with a new organisational structure, that responds to civil society's concerns about AI's impact on human rights. The framework should incorporate cultural perspectives, draw on global best practices, and engage the private sector, academia, civil society, and governments through innovative structures to ensure success.
- AI governance should be anchored in the UN Charter, International Human Rights Law, and other agreed international commitments such as the Sustainable Development Goals: Governance efforts should align with these established international frameworks. The Global Digital Compact and the Roadmap for Digital Cooperation are examples of multi-stakeholder deliberations towards a global governance framework for technologies including AI. Strong involvement of UN Member States, empowered UN agencies, and diverse stakeholders will be vital to resourcing a global AI governance response.
The seven functions that international governance mechanisms should perform are:
- Regularly assess the future directions and implications of AI: To support policymakers in shaping domestic AI programmes, a specialised AI knowledge and research entity, similar to the Intergovernmental Panel on Climate Change, could be created. It could report every six months, providing evidence-based insights to policymakers on AI development. An analytical observatory function is also suggested to coordinate research on the social impact of AI, including on labour, education, and public health. The current advisory body is an initial step towards such an expert-led process.
- Reinforce interoperability of governance efforts emerging around the world and their grounding in international norms through a Global AI Governance Framework endorsed in a universal setting (UN): Use existing UN organisations to establish consistent AI governance arrangements that align with international norms. A centralised body could facilitate policy harmonisation, build common understanding, and encourage peer-to-peer learning. The Global AI Governance Framework would guide policymaking and implementation, preventing AI divides and governance gaps.
- Develop and harmonise standards, safety, and risk management frameworks: The UN could play a crucial role in uniting states to formulate shared standards. Networking the emerging AI safety institutes is essential to prevent conflicting frameworks. New global standards and indicators should be established to measure the environmental impact of AI, contributing to the achievement of the SDGs.
- Facilitate development, deployment, and use of AI for economic and societal benefit through international multi-stakeholder cooperation: Developers and users, especially in the Global South, must focus on establishing data standards, protection protocols, and legal mechanisms for liability and dispute resolution. Legal, financial, and technical frameworks must evolve to anticipate the future challenges posed by complex AI systems. Capacity development in the public sector is urgently needed for countries to engage responsibly with AI and participate in global efforts to develop the necessary enablers for AI.
- Promote international collaboration on talent development, access to compute infrastructure, building of diverse high-quality datasets, responsible sharing of open-source models, and AI-enabled public goods for the SDGs: A new mechanism (or mechanisms) is required to facilitate access to data, compute, and talent for developing, deploying, and using AI systems for the SDGs through upgraded local value chains. This would give independent academic researchers, social entrepreneurs, and civil society access to the infrastructure and datasets needed to build their own models and to conduct research and evaluations. Creating incentives for private sector actors to grant open access to data and computing is essential for leveraging AI for the SDGs. Finally, capacity-building initiatives, especially in the Global South, would complement these efforts by facilitating local creation, adoption, and context-specific tuning of models.
- Monitor risks, report incidents, coordinate emergency response: A global framework is needed, with capabilities to monitor, report on, and respond promptly to systemic vulnerabilities. The suggested techno-prudential model aims to enhance resilience against AI-related risks to global stability, emphasising human rights principles and reporting frameworks inspired by those of international agencies.
- Compliance and accountability based on norms: The UN could establish norms in areas such as international security challenges. Moreover, institutions like the WTO can aid in dispute resolution and non-binding norms could supplement binding norms. However, transparency, clear objectives, and trust-building with citizen stakeholders are necessary for any global governance institution to be legitimate.
The report emphasises the importance of enhancing accountability mechanisms and ensuring equitable representation and voice for all countries in the AI governance landscape. This approach underlines the UN’s commitment to a balanced and inclusive AI future, where technological progress aligns seamlessly with global standards and ethical considerations.
The UN is actively seeking global engagement on this Interim Report, inviting comments until 31 March 2024. This participatory approach highlights the commitment to a transparent and inclusive process in shaping AI governance. The final report, expected in mid-2024, will be a crucial element of the Global Digital Compact, set to be discussed at the Summit of the Future in September 2024. This forward-looking initiative by the UN is pivotal in ensuring that AI is governed in a way that benefits humanity globally, balancing technological advancement with ethical and human-centric considerations.
For any additional insights about AI Governance at Multilateral Organisations, or for support compiling your comments on the UN’s Interim Report on AI, please contact Hamza Hameed at [email protected] and Jessica Birch at [email protected].