How policymakers’ plans and priorities are shifting in the digital age
As the armed conflict in Ukraine escalates, the use of increasingly advanced, and strictly prohibited, weapons is entering mainstream discussion. Alongside biological weapons, AI-powered weapons are garnering attention from military strategists, AI theorists, and industrialists alike.
As terrifying as an AI-driven war may be, it seems the technology is not yet capable of wreaking large-scale havoc. But it may not be far off, considering the leaps it is making.
Indeed, an abundance of data has allowed AI-enabled systems to grow and mature exponentially. The pandemic further fueled this movement as many essential services went online, creating even more data and embedding automated processes across a wide range of platforms.
In turn, governments and policymakers are racing not only to better understand AI, but also to frame it in a way that benefits humanity. The key question for policymakers is how to design policies and regulations that harness AI’s potential benefits while minimizing its likely threats.
The development of AI regulation in the United States offers invaluable insights into how best to form policies on AI sooner rather than later, before AI eventually writes policies for itself.
Where do we start?
The most critical spark to ignite policy around AI is often created through the introduction of a national agenda or strategy specifically devoted to AI. For the United States, this came with the American AI Initiative, launched through an Executive Order in 2019.
The initiative set in motion a series of critical developments, including the National AI Initiative Act. The Act established a framework for the government to coordinate AI research and development, form enabling institutions, and evaluate policy and regulation options.
The national strategy significantly channeled resources across government ranks to advance and accelerate the country’s AI development.
What to prioritize?
But even with a national agenda, areas of priority are not always clear, as was the case for the United States. To ensure that resources are channeled to the right priorities, the government launched several critical studies to better understand the trends and shifts shaping AI development and adoption.
To achieve this, the government formed the National Artificial Intelligence Research Resource (NAIRR) Task Force in 2021 to evaluate the country’s AI position. The Task Force is set to submit two reports in 2022 outlining the national strategy and its implementation.
This work is further supplemented by the National Security Commission on Artificial Intelligence (NSCAI), which evaluates the competitiveness of the United States in AI, machine learning (ML), and autonomous technologies.
Early findings were critical in helping policymakers understand the country’s position, highlighting that the United States was not investing enough in AI capabilities and needed to bolster its competitiveness, especially as China places no limits on how much it invests in the technology.
Having identified the need to bolster AI development, the government passed the US Innovation and Competition Act (S.1260) in 2021. The Act strengthens the country’s AI capabilities, enabling the government to channel more resources into technological capacity and technical skills. This was coupled with the introduction of the Directorate for Technology and Innovation within the National Science Foundation to support scientific breakthroughs.
The pursuit of innovation, however, brought about new policy considerations to ensure that the use of AI is fair and safe for all. This is perhaps where the biggest policy challenges emerge, in the need to balance innovation with appropriate safeguards.
These policy challenges can manifest across many sectors and areas of national interest. In the United States, several agencies, including the Federal Trade Commission (FTC), the US Government Accountability Office (GAO), the White House Office of Science and Technology Policy (OSTP), and the National Institute of Standards and Technology (NIST), were mobilized to develop guidance, principles, and regulations tackling algorithmic bias and the fair use of AI.
How AP can help: Navigating the challenges of policymaking in the digital age
Given the multifaceted and cross-cutting nature of AI technology, policymakers must make concerted efforts to remain agile, match the pace of development, and manage the rules of the road across the nation’s different interests.
These challenges also place the burden on policymakers to engage with industries, markets, and other economies to develop rules that are effective and comprehensive.
Access Partnership’s Global Government Advisory (GGA) team has worked closely with governments and multilateral organizations to navigate the policymaking and regulatory challenges of AI, including our work at the AI for Good Global Summit.
For more information and to work with us in this important area, please contact Grace Gown.
Subscribe to our news alerts here.