Charting a Path Forward: The US Government’s Quest to Regulate Artificial Intelligence in 2023

In the first half of 2023 alone, advances in Artificial Intelligence (AI) and its growing prevalence in people’s daily lives have accelerated efforts by US policymakers and regulators to encourage innovation while protecting consumers from potential harms. Some of these initiatives, like the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF), published on 26 January, had been under development for several years. Many others were launched in reaction to recent events: the US Copyright Office, for example, published guidance on works containing AI-generated material on 16 March amid a growing number of applications and lawsuits seeking to attribute partial or total authorship credit to AI tools themselves. In April, the National Telecommunications and Information Administration (NTIA) issued its AI Accountability Policy Request for Comment, while several other agencies, including the Federal Trade Commission (FTC) and the Department of Justice, issued a joint statement committing to leverage their existing authorities to protect citizens from “AI-related harms”.

The ubiquity of AI technology has thus led a wide variety of agencies to issue guidance and consult the public on the influence of AI within their respective areas of responsibility. At their core, these and other government publications addressing AI cumulatively seek answers to two key questions: who should be responsible for regulating and overseeing AI development, and how should they go about doing so?

Against this backdrop, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law launched a series of hearings intended to “write the rules of AI” and to “demystify and hold accountable those new technologies to avoid some of the mistakes of the past”. The first hearing, held on 16 May, featured witness testimony from OpenAI CEO Samuel Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and New York University Professor Emeritus Gary Marcus. In a surprise to some, all three witnesses advocated, with some variation, for federal regulation of AI. Key areas highlighted during the discussion included:

Risk-Based, Tiered Approach – Calling for “precision regulation of AI”, Montgomery argued for establishing differing rules and levels of regulation for AI tools by categorising them according to how much risk they pose to consumers. In his testimony, Altman agreed that different AI applications should be regulated differently so as not to stifle innovation, although outside the hearing he has been critical of the EU’s upcoming AI Act, which takes a risk-based approach to AI regulation.

Content Moderation and Platform Regulation – One of the major areas of focus among subcommittee members was the importance of addressing AI regulation early and effectively, in contrast to what they perceived as their failure to do so with social media and online platforms. All three witnesses concurred with the sentiments expressed by multiple senators, emphasising the importance of avoiding a scenario similar to “Section 230” for AI, a reference to the provision of the Communications Decency Act of 1996 that grants online services substantial legal protection against liability for user-generated content on their platforms.

Transparency – The witnesses discussed the importance of transparency by companies developing and selling AI-enabled products to protect consumers from algorithmic bias and other potential harms. One potential solution, first proposed in the hearing by Subcommittee Chair Senator Richard Blumenthal (D-CT), was to require companies to produce publicly available scorecards or “nutrition labels” detailing the specific components and types of data used to train and build their AI systems. Other recommendations to enhance transparency included requiring internal and external audits of AI systems, as well as disclaimers informing consumers when they are engaging with an AI system. Professor Marcus stressed the importance of involving independent scientists in carrying out these transparency measures.

Common Safety Standards – All three witnesses recommended that any AI regulatory framework include a common set of safety standards to adequately mitigate potential harms caused by AI systems. Altman spoke in favour of public-private collaboration to develop such standards, while Professor Marcus pointed to the Food and Drug Administration (FDA)’s product safety review process as a potential model for AI. Montgomery discussed the need for standardised AI impact assessments to demonstrate how a company’s AI systems perform against tests for bias and other ways they could affect the public.

New Federal Agency for AI – The most controversial aspect of the hearing came when Altman and Marcus signalled their support for the creation of an entirely new federal agency dedicated to overseeing all AI development and regulation in the US, with the power to issue and revoke licences for companies that develop and sell AI-enabled products. Following the hearing, industry stakeholders such as NetChoice CEO Steve DelBianco voiced strong opposition to the idea, arguing that it could stifle innovation by imposing undue restrictions on companies and warning that a new agency would inevitably clash with agencies that already hold jurisdiction over specific aspects of AI, such as the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC). Similar scepticism was expressed about Professor Marcus’ recommendation for a “CERN-like” international authority to coordinate global AI oversight. Montgomery, by contrast, opposed the creation of a new federal agency for AI.

In the weeks since the Judiciary Subcommittee’s first hearing on AI oversight, the Biden Administration has continued its quest to cement the US as the global leader in AI development, publishing both the 2023 Update to the National Artificial Intelligence Research and Development Strategic Plan and a request for information on the risks and benefits of AI on 23 May. The former adds a ninth strategy to the 2019 AI R&D strategic plan “to underscore a principled and coordinated approach to international collaboration in AI research”, following a series of commitments on the topic made by the US and other G7 members at their most recent summit in Hiroshima. The latter seeks responses from the public to a series of questions on the intersection of AI with a range of topics, from bolstering democracy to protecting civil rights.

As 2023 reaches its midpoint, the US remains far from introducing a federal AI bill for consideration by Congress, even as the EU inches ever closer to passing its AI Act more than two years after it was first proposed. The fraught debate over the content of a federal privacy law may also point to roadblocks ahead once a first draft of an AI bill is developed. All the while, multiple state legislatures have introduced, and in some cases passed, their own laws on both AI and privacy, creating an increasingly fragmented policy landscape that has led to regulatory uncertainty for many companies as well as high compliance costs that disproportionately affect small businesses and low-income consumers. The recently announced National Artificial Intelligence Strategy from the Biden Administration will ideally unify the multitude of workshops and consultations underway across government agencies and translate them into a clear framework from which federal legislation can be drafted. Until that happens, however, AI systems will continue to be deployed to the public without the guardrails needed to protect against the many existing and potential harms they pose to consumers.

Access Partnership is working on several AI projects, tracking global AI developments and empowering our clients to respond strategically. For more information, please contact Jacob Hafey at [email protected] 
