The Federal Communications Commission (FCC) has issued a ruling prohibiting the use of artificial intelligence (AI) voices in robocalls. The decision, made under the Telephone Consumer Protection Act, aims to crack down on the use of AI-generated voices to deceive and scam people. The ruling comes in response to an investigation into AI-generated robocalls that impersonated President Joe Biden's voice during New Hampshire's primary election. It allows the FCC to fine companies that use AI voices and to block service providers that carry such calls, and it gives call recipients the right to file lawsuits. State attorneys general also have a new mechanism to take action against violators.
FCC Chairwoman Jessica Rosenworcel stated that bad actors have been using AI-generated voices to spread misinformation, impersonate celebrities, and extort individuals. The ruling classifies AI-generated voices in robocalls as "artificial," subjecting them to the same standards as other automated calls under the consumer protection law. Violators face substantial fines of more than $23,000 per call, and call recipients can take legal action to recover up to $1,500 in damages for each unwanted call.
Experts caution that despite the FCC's ruling, personalized spam targeting individuals through phone calls, texts, and social media is likely to continue. AI tools, such as voice-cloning software and image generators, are already being used in elections globally. Efforts to regulate AI in political campaigns have been made in Congress, but no federal legislation has been passed thus far.
While the FCC's ruling has been praised, many are calling on Congress to take further action. Representative Yvette Clarke, who introduced legislation to regulate AI in politics, emphasized the need for collaboration to address AI-generated content used for deception. The AI-generated robocalls during New Hampshire's primary election have raised concerns about misleading voters and the potential harm to elections. Investigations identified the Texas-based company Life Corp. and its owner, Walter Monk, as the source of the calls, which were transmitted by Lingo Telecom.
Overall, the FCC's ruling on AI-generated voices in robocalls aims to tackle the exploitation of technology for fraudulent purposes. It empowers the FCC to take action against violators and provides individuals with legal recourse against unwanted calls. However, experts highlight the need for ongoing efforts to regulate AI in political campaigns and address the potential for AI abuse in voice technology.