The Federal Communications Commission said Thursday it is immediately outlawing scam robocalls that use artificial intelligence-generated voices, cracking down on so-called "deepfake" technology that experts say could undermine election security or supercharge fraud.
The unanimous FCC vote extends anti-robocall rules to cover AI deepfake calls by recognizing those voices as "artificial" under a federal law governing telemarketing and robocalling.
The move gives state attorneys general more legal tools to pursue illegal robocallers who use AI-generated voices to impersonate celebrities, politicians and close family members, the agency said.
"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters," said FCC Chairwoman Jessica Rosenworcel in a statement. "We're putting the fraudsters behind these robocalls on notice."
The decision to interpret the Telephone Consumer Protection Act more broadly to include AI-generated voices comes weeks after a fake robocall that impersonated President Joe Biden targeted thousands of New Hampshire voters and urged them not to participate in the state's primary.
Authorities said this week they had linked those fake calls to a Texas man and two companies in an ongoing investigation that could lead to civil and criminal penalties.