Taylor Swift isn’t the only victim of AI porn. Can the spread of deepfake nudes be stopped?

What is going on

Late last month, a series of AI-generated, sexually explicit images of pop superstar Taylor Swift circulated on social media, sparking outrage among her fans and renewed calls for a crackdown on so-called deepfakes.

Fake celebrity nudes are not a new phenomenon, but thanks to advanced and widely available artificial intelligence tools, it is now possible to quickly create high-quality images or videos with anyone’s likeness in any conceivable scenario. While much attention has been paid to how deepfakes can be used to spread disinformation, research shows that 98% of all deepfake videos online are pornographic and almost all of the targeted individuals are women.

Celebrities such as actresses, musicians and social media influencers are most often featured in deepfake porn, but there are many examples of ordinary women and girls being targeted as well. Last year, administrators at a New Jersey high school discovered that some students had used AI to create fake nude photos of more than 30 of their classmates. Similar incidents have been reported at other schools in the US and abroad.

It is illegal in almost every state to share real nude photos of someone without their consent, especially if the person is a minor. But the laws surrounding AI-generated porn are much weaker — even though the harm done to victims can be the same regardless of whether the content is fake or real. There is no federal law regarding deepfake porn, and only about ten states have statutes banning it. Most social media sites ban AI porn, but the scale of the problem and lax moderation mean it can still be widespread on their platforms. One post containing Swift deepfakes was live on X, formerly Twitter, for 17 hours and was viewed more than 45 million times before it was deleted.

Why there is discussion

Like so many other harmful things online, AI porn may be impossible to completely eradicate. But experts say there are plenty of things that can be done to make it dramatically less common and limit the damage it causes.

Several bills have been introduced in Congress that would provide nationwide protections against deepfake porn, either by creating new legal penalties for those who create or share it or by giving victims new rights to sue for damages after being targeted. Supporters of these plans say that even if the new laws don’t catch all the bad actors, they would lead to a number of high-profile cases that would deter others from creating deepfakes.

Beyond new laws, many tech industry observers argue that the public should put pressure on the various mainstream entities that allow people to create, find, distribute, and profit from AI porn — including social media platforms, credit card companies, AI developers and search engines. There’s also hope that the fear of lawsuits from someone like Swift poses enough financial risk that these groups will start taking deepfakes more seriously.

At the same time, some experts argue that the war against AI porn is effectively already lost. According to them, the technical problem of finding and blocking so many deepfakes is all but unsolvable, and even the most aggressive new laws or policies will only catch a small fraction of the flood of fake explicit content that exists.

What’s next

Swift is reportedly considering legal action in response to the deepfakes of her, but experts say her options may be limited with so few laws on the books. Despite the new attention being paid to the issue, there currently appear to be no plans for Congress to vote on any of the several anti-AI porn proposals.

Perspectives

Congress must finally ban deepfakes nationwide

“This is a rare, bipartisan issue that lawmakers must use to do some good before bad actors use AI to wreak even more havoc in the lives of more innocent people.” – National security expert Frank Figliuzzi, MSNBC

We don’t have the tools to stop AI porn

“On the one hand, our technologies and the human teams behind them are not up to the task. On the other hand, a government overcorrection could leave us with severely limited social networks that shut down legitimate forms of commentary.” — Miles Klee, Rolling Stone

Strict laws can create examples that deter everyone else

“There is hope for a solution. Some of the measures taken by Congress are a start, and while long-term rules are still being ironed out, authorities can get a handle on the situation for now by making examples of some of the worst perpetrators. Deterrents can work, even for people who think they can hide behind the cloak of online anonymity.” — Parmy Olson, Bloomberg

The public can force Big Tech to take deepfakes more seriously

“Swift and her fans could advocate for legal changes at the federal level. But their outrage could do something else: push platforms to take notice.” – Amanda Hoover, Wired

Any new laws should focus on helping victims as much as possible

“[Proposed bills are] not necessarily always focused on the needs of the victim. Usually the victim’s main need is to delete content. So if we had a victim-centered legislative landscape … it would focus on how we make online environments safer for people to actually use.” — Sophie Maddocks, AI researcher at the University of Pennsylvania, to Slate

The infrastructure that supports the AI porn economy must be torn down

“It’s not just the creators of deepfake porn who are benefiting from this abuse. Deepfake porn sites are powered by search engines that direct internet traffic to deepfake content. Internet service providers host them and credit card and payment companies facilitate transactions on their sites, while other companies advertise their products under these videos.” – Sophie Compton and Reuben Hamlyn, CNN

We need new laws to combat all the dangers of AI, not just deepfakes

“Now is the time to call on Congress and state lawmakers to take action – not just against deepfake porn and not just for Taylor Swift, but against the dangers of AI more broadly, and for a safer future for every person on the planet.” – Jill Filipovic, The Guardian

Raising awareness of the terrible damage deepfakes cause is a good place to start

“Deepfake porn and other forms of digital sexual abuse are sometimes dismissed as a ‘lesser’ evil than physical assault due to their online nature. A high-profile case like Swift’s could draw attention to the real impact of these images on victims.” — Jade Gilbourne, The Conversation

Photo illustration: Yahoo News, photo: Getty Images