Making Deepfakes Gets Cheaper and Easier Thanks to A.I.

It wouldn’t be completely out of place for comedian-turned-podcaster Joe Rogan to endorse a “libido-boosting” coffee brand for men.

But when a video circulating on TikTok recently showed Mr. Rogan and his guest, Andrew Huberman, peddling the coffee, some eagle-eyed viewers were shocked, including Dr. Huberman.

“Yes, that’s fake,” Dr. Huberman wrote on Twitter after seeing the ad, in which he appears to praise the coffee’s testosterone-boosting potential, an endorsement he never made.

The ad was part of a growing number of fake videos on social media made with artificial intelligence. Experts said Mr. Rogan’s voice appeared to have been synthesized using AI tools that mimic celebrity voices. Dr. Huberman’s comments were lifted from an unrelated interview.

Making realistic fake videos, often called deepfakes, once required elaborate software to put one person’s face on another’s. But now, many tools for creating them are available to everyday consumers, even on smartphone apps, and often for little or no money.

The new altered videos — mostly, so far, the work of meme creators and marketers — have gone viral on social media sites like TikTok and Twitter. The content, sometimes called cheapfakes by researchers, works by cloning celebrity voices, altering mouth movements to match new audio and writing persuasive dialogue.

The videos, and the accessible technology behind them, have AI researchers worried about their dangers and have raised new concerns about whether social media companies are prepared to curb growing digital fraud.

Disinformation watchdogs are also bracing for a wave of digital fakes that could mislead viewers or make it harder to know what’s true or false online.

“What’s different is that anyone can do it now,” said Britt Paris, an assistant professor of library and information science at Rutgers University who helped coin the term “cheapfakes.” “It’s not just people with sophisticated computer technology and fairly sophisticated computer know-how. Instead, it’s a free app.”

Plenty of manipulated content has circulated on TikTok and elsewhere for years, typically using more artisanal tricks like careful editing or swapping one audio clip for another. In one video on TikTok, Vice President Kamala Harris appeared to say that everyone hospitalized with Covid-19 was vaccinated. In fact, she said the patients were not vaccinated.

Graphika, a research firm that studies misinformation, spotted deepfakes of fictional news anchors that pro-China bot accounts distributed late last year, in the first known example of the technology being used for state-aligned influence campaigns.

But several new tools are bringing similar technology to everyday internet users, giving comedians and fans the ability to make their own compelling parodies.

Last month, a fake video circulated showing President Biden declaring a national draft for the war between Russia and Ukraine. The video was produced by the team behind “Human Events Daily,” a podcast and live stream run by Jack Posobiec, a right-wing influencer known for spreading conspiracy theories.

In a segment explaining the video, Mr. Posobiec said his team had created it using AI technology. A tweet about the video from The Patriot Oasis, a conservative account, used a breaking-news label without indicating that the video was fake. The tweet was viewed more than eight million times.

Many of the videos featuring synthesized voices appeared to use technology from ElevenLabs, an American start-up co-founded by a former Google engineer. In November, the company unveiled a speech-cloning tool that can be trained to replicate voices in seconds.

ElevenLabs drew attention last month after 4chan, a message board known for its racist and conspiratorial content, used the tool to share hateful messages. In one example, 4chan users created an audio recording of an anti-Semitic text using a computer-generated voice that impersonated the actress Emma Watson. Motherboard earlier reported on 4chan’s use of the audio technology.

ElevenLabs said on Twitter that it would introduce new safeguards, such as limiting voice cloning to paid accounts and releasing a new AI-detection tool. But 4chan users said they would create their own version of the voice-cloning technology using open-source code, posting demos that sound similar to audio produced by ElevenLabs.

“We want to have our own custom AI with the power to create,” an anonymous 4chan user wrote in a post about the project.

In an email, a spokeswoman for ElevenLabs said the company was looking to collaborate with other AI developers to create a universal detection system that could be adopted across the industry.

Videos using cloned voices, created with ElevenLabs’ tool or similar technology, have gone viral in recent weeks. One, posted on Twitter by Elon Musk, the site’s owner, showed a fake, profanity-laced conversation among Mr. Rogan, Mr. Musk and Jordan Peterson, a Canadian men’s rights activist. In another, posted on YouTube, Mr. Rogan appeared to interview a fake version of Prime Minister Justin Trudeau of Canada about his political scandals.

“Producing such forgeries should be a felony with a mandatory ten-year sentence,” Mr. Peterson said in a tweet about the fake videos featuring his voice. “This technology is dangerous beyond belief.”

In a statement, a YouTube spokeswoman said the video of Mr. Rogan and Mr. Trudeau did not violate the platform’s policies because it “provides sufficient context.” (The creator had described it as a “fake video.”) The company said its misinformation policies prohibited content that has been doctored in a deceptive manner.

Experts who study deepfake technology suggested that the fake ad featuring Mr. Rogan and Dr. Huberman was most likely created with a voice-cloning program, though the exact tool used was unclear. Mr. Rogan’s audio was merged into a real interview in which Dr. Huberman discusses testosterone.

The results are not perfect. The clip of Mr. Rogan was taken from an unrelated interview, published in December, with Fedor Gorst, a professional pool player. Mr. Rogan’s mouth movements do not match the audio, and his voice sounds unnatural at times. Whether the video fooled TikTok users was hard to tell: it attracted far more attention after being flagged for its impressive fakery.

TikTok’s policies prohibit digital forgeries “that mislead users by distorting the truth of events and cause significant harm to the subject of the video, to other people, or to society.” Several of the videos were deleted after The New York Times reported them to the company. Twitter also removed some of the videos.

A TikTok spokesperson said the company uses “a combination of technology and human moderation to detect and remove” manipulated videos, but declined to elaborate on its methods.

Mr. Rogan and the company featured in the false advertisement did not respond to requests for comment.

Many social media companies, including Meta and Twitch, have banned deepfakes and manipulated videos that mislead users. Meta, which owns Facebook and Instagram, held a competition in 2021 to develop programs capable of identifying deepfakes, resulting in a tool that could spot them 83 percent of the time.

Federal regulators have been slow to respond. A 2019 federal law called for a report on the weaponization of deepfakes by foreign actors, required government agencies to notify Congress if deepfakes targeted elections in the United States, and created a prize to encourage research into tools that can detect deepfakes.

“We can’t wait two years for laws to be passed,” said Ravit Dotan, a postdoctoral researcher who leads the Collaborative AI Responsibility Lab at the University of Pittsburgh. “By then, the damage could be too great. We have elections coming up here in the United States. It’s going to be a problem.”




