Lingo Telecom was fined $1 million by the FCC for its role in a Biden deepfake scheme that used AI-generated robocalls to influence the New Hampshire primary.
The Federal Communications Commission (FCC) has imposed a $1 million fine on Lingo Telecom, a Texas-based telecommunications company, for its role in the illegal Biden deepfake scheme.
The scheme used an AI-generated recording of President Joe Biden’s voice, distributed through robocalls, to discourage people from voting in the New Hampshire primary election.
FCC Strikes Back
According to an FCC press release, the $1 million fine is not only a punitive measure but also a step toward holding telecommunications companies accountable for the content they allow to pass through their networks.
In addition to the monetary penalty, the FCC has mandated that Lingo Telecom implement a “historic compliance plan.”
The plan requires strict adherence to the FCC’s caller ID authentication rules, which are designed to prevent the kind of fraud and spoofing that occurred in this case.
Lingo Telecom is also now required to follow “Know Your Customer” and “Know Your Upstream Provider” principles, which are essential for phone carriers to monitor call traffic and ensure that calls are properly authenticated.
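The caller ID authentication rules the FCC refers to are implemented through the STIR/SHAKEN framework, in which the originating carrier attaches a signed “PASSporT” token to each call asserting how much it knows about the caller. The Python sketch below is illustrative only: it shows the kind of claims such a token carries under RFC 8225/8588, with assumed phone numbers and certificate URL, and omits the ES256 signature a real provider would apply. According to the FCC’s press release, Lingo applied the highest “A” attestation to the spoofed calls without verifying the caller, which is exactly what the attestation field is meant to prevent.

```python
# Illustrative sketch of the claims inside a SHAKEN PASSporT, the signed token
# carriers attach to calls under the FCC's caller ID authentication rules.
# Field names follow RFC 8225/8588; the numbers and certificate URL below are
# assumptions for illustration, and real tokens are ES256-signed by the carrier.
import base64
import json
import time

header = {
    "alg": "ES256",                             # PASSporTs are signed with ES256
    "typ": "passport",
    "ppt": "shaken",                            # SHAKEN extension (RFC 8588)
    "x5u": "https://cert.example.com/sp.pem",   # assumed certificate URL
}

claims = {
    "attest": "A",                              # "A" = full attestation: the carrier
                                                # knows the customer and their right
                                                # to use the calling number
    "orig": {"tn": "12025550100"},              # calling number (illustrative)
    "dest": {"tn": ["16035550199"]},            # called number (illustrative)
    "iat": int(time.time()),                    # issued-at timestamp
    "origid": "123e4567-e89b-12d3-a456-426614174000",  # opaque origin-tracing ID
}

def b64url(obj: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWTs require."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# The signing input a provider would pass to its ES256 key; the signature
# itself is omitted here because it requires the carrier's private key.
signing_input = f"{b64url(header)}.{b64url(claims)}"
print(signing_input)
```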
Threat to Democratic Processes
The robocalls, orchestrated by political consultant Steve Kramer, were part of a broader effort to disrupt the New Hampshire primary election.
By using AI to create a convincing imitation of President Biden’s voice, the calls aimed to manipulate and intimidate voters and, in doing so, subvert the democratic process.
On May 23, Kramer was indicted for his role in initiating the robocalls. Kramer, who had worked for rival candidate Dean Phillips, was charged with impersonating a candidate during New Hampshire’s Democratic primary.
The use of deepfake technology in this fraud is particularly alarming, marking a novel and unsettling development in the ongoing battle against disinformation.
Deepfakes, which use artificial intelligence to produce highly realistic but fraudulent audio or video recordings, pose a significant threat to the integrity of democratic processes.
In March, Cointelegraph highlighted the growing issue of AI-generated deepfakes in the current election cycle, stressing the urgent need for voters to distinguish fact from fiction.
In February, a group of 20 prominent AI technology companies pledged to prevent their software from being used to influence elections.