Access Alert: FCC proposes new consumer protections against AI-generated robocalls and robotexts

Continuing to tighten the regulatory conditions around the provision of voice services and the use of numbering resources, with the aim of protecting consumers and end-users from increasingly sophisticated scams, the US Federal Communications Commission (FCC) has proposed new consumer protections against AI-generated robocalls and robotexts.

The proposal invites public input on how to define AI-generated calls. It would require callers to disclose their use of AI in calls and text messages, support technologies that help consumers identify and avoid unwanted and illegal AI robocalls, and ensure that beneficial uses of AI for people with disabilities can continue without legal risk.

The proposed protections suggest defining AI-generated calls and mandating that callers disclose their intent to use AI-generated calls and texts when obtaining prior express consent. Callers would also need to inform consumers during each call in which they receive an AI-generated message, allowing them to recognise and avoid those calls or texts, which may pose a higher risk of fraud and other scams.

Additionally, the proposal includes measures to protect the positive applications of AI that assist people with disabilities in using telephone networks, safeguarding such uses from potential liability under the Telephone Consumer Protection Act. The FCC also seeks further feedback and information on emerging technologies that can alert consumers to unwanted and illegal AI-generated calls and texts.

The proposed rules are part of a broader effort by the FCC to protect consumers from AI-generated scams that mislead and misinform the public, enabling consumers to make informed decisions. The FCC has also proposed new transparency standards that would require disclosure when AI technology is used in political ads on radio and television.

Recently, the FCC adopted a Declaratory Ruling clarifying that the use of voice cloning technology in common robocall scams is illegal without the prior express consent of the called party or an exemption. It has also proposed substantial fines for illegal robocalls using deepfake AI-generated voice cloning technology and caller ID spoofing to spread election misinformation to potential New Hampshire voters before the January 2024 primary.

Access Partnership closely monitors regulatory updates globally. If you are interested in learning more about the rules for the provision of voice services around the world or the FCC consultation, please contact Chrystel Erotokritou at [email protected] or Juliana Ramirez at [email protected].
