TLDR
- A parody video using AI to clone Vice President Kamala Harris's voice was shared by Elon Musk on X.
- The video mimics a real Harris campaign ad but includes fake audio making controversial statements.
- Musk initially shared the video without clearly labeling it as parody, raising concerns about misinformation.
- The incident has sparked debate about the need for AI regulation in politics and social media.
- Some lawmakers and officials are calling for new laws or “guardrails” to govern AI use in political content.
- The Harris campaign criticized the video as “fake, manipulated lies.”
A recent incident involving a manipulated video of Vice President Kamala Harris has reignited concerns about the use of artificial intelligence (AI) in politics.
The video, which uses AI to clone Harris’s voice and make controversial statements she never actually said, gained widespread attention after being shared by tech billionaire Elon Musk on his social media platform X.
"This is amazing 😂" pic.twitter.com/KpnBKGUUwn (Elon Musk, @elonmusk, July 26, 2024)
The video in question is a parody that mimics a real Harris campaign ad but replaces her original audio with AI-generated speech.
In the fake version, the AI-generated voice makes inflammatory comments about Harris being a “diversity hire” and calls President Joe Biden “senile.”
While the video was originally created and shared as satire, Musk’s post did not initially label it as such, leading to confusion and debate about the responsible use of AI in political discourse.
This incident highlights the growing challenge of distinguishing between real and AI-generated content, especially in the realm of politics. As AI technology becomes more sophisticated and accessible, there are increasing concerns about its potential to spread misinformation or manipulate public opinion.
Rep. Barbara Lee (D-Calif.) called the video “dangerous” and criticized Musk’s sharing of it. “It shows you just how dangerous [Musk] is and how dangerous it is for social media not to have guardrails,” Lee said in a CNN interview. She emphasized the need for regulatory measures to govern AI use, stating, “We need to make sure that as we look at AI and move forward, that there are some regulatory guardrails and rules that it has to follow.”
California Governor Gavin Newsom also weighed in, suggesting that such manipulations should be illegal. “Manipulating a voice in an ‘ad’ like this one should be illegal,” Newsom wrote on X. “I’ll be signing a bill in a matter of weeks to make sure it is.”
The Harris campaign responded to the video, with spokesperson Mia Ehrenberg stating, “We believe the American people want the real freedom, opportunity, and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump.”
This incident comes at a time when federal regulators are already looking to address the use of deepfake technology in politics.
The Federal Communications Commission recently advanced a proposal to require advertisers to disclose the use of AI in television and radio advertisements. The use of AI-generated voices in robocalls is already banned.