The White House is urging the tech industry to shut down the market for sexually abusive AI deepfakes

President Joe Biden’s administration is pushing the tech industry and financial institutions to shut down a growing market of abusive sexual images made with artificial intelligence technology.

New generative AI tools have made it easy to turn someone’s likeness into a sexually explicit AI deepfake and share these realistic images via chat rooms or social media. The victims – whether celebrities or children – have little ability to stop this.

The White House on Thursday appealed for voluntary cooperation from companies in the absence of federal legislation. Officials hope that by committing to a series of specific measures, the private sector can curb the creation, distribution and monetization of such non-consensual AI images, including explicit images of children.

“When generative AI came on the scene, everyone speculated about where the first real damage would come. And I think we have the answer,” said Biden’s chief science adviser Arati Prabhakar, director of the White House Office of Science and Technology Policy.

She described to The Associated Press a “phenomenal acceleration” of non-consensual images, fueled by AI tools and largely targeting women and girls in ways that could upend their lives.

“If you’re a teenage girl, if you’re a gay child, these are issues that people are dealing with right now,” she said. “We have seen an acceleration thanks to generative AI that moves very quickly. And the fastest thing that can happen is that companies take responsibility.”

A document shared with AP ahead of its release on Thursday calls for action from not only AI developers, but also payment processors, financial institutions, cloud computing providers, search engines and the gatekeepers — namely Apple and Google — who determine what ends up in the mobile app stores.

The private sector must take action to “disrupt the monetization” of image-based sexual abuse, specifically by restricting payment access to sites that advertise explicit images of minors, the government said.

Prabhakar said many payment platforms and financial institutions are already saying they will not support the kind of companies that promote offensive images.

“But sometimes it’s not enforced; sometimes they don’t have those terms of service,” she said. “And so that’s an example of something that can be done much more rigorously.”

Cloud service providers and mobile app stores can also “restrict web services and mobile applications that are marketed for the purpose of creating or modifying sexual images without individuals’ consent,” the document said.

And whether it’s an AI-generated photo or a real nude photo posted on the internet, survivors should be able to more easily get online platforms to remove it.

The most famous victim of pornographic deepfake images is Taylor Swift, whose fervent fanbase fought back in January when offensive AI-generated images of the singer-songwriter began circulating on social media. Microsoft promised to strengthen its security measures after some Swift images were traced to its AI visual design tool.

A growing number of schools in the US and elsewhere are also grappling with AI-generated deepfake nudes depicting their students. In some cases, fellow teens were found to be creating AI-manipulated images and sharing them with classmates.

Last summer, the Biden administration brokered voluntary commitments from Amazon, Google, Meta, Microsoft and other major tech companies to place a series of safeguards on new AI systems before they are released publicly.

That was followed by Biden signing an ambitious executive order in October aimed at guiding how AI is developed so that companies can benefit without endangering public safety. While it focused on broader AI issues, including national security, it nodded to the emerging problem of AI-generated child abuse images and the need for better ways to detect them.

But Biden also said the government’s AI safeguards must be backed by legislation. A bipartisan group of U.S. senators is now urging Congress to spend at least $32 billion over the next three years to develop artificial intelligence and fund measures to guide it safely, but it has largely put off calls to enact those safeguards into law.

Encouraging companies to take action and make voluntary commitments “doesn’t change the underlying need for Congress to take action here,” said Jennifer Klein, director of the White House Gender Policy Council.

Long-standing laws already prohibit the making and possession of sexual images of children, even if they are fake. Federal prosecutors filed charges earlier this month against a Wisconsin man who they say used a popular AI image generator, Stable Diffusion, to create thousands of realistic AI-generated images of minors engaged in sexual conduct. The man’s attorney declined to comment after his arraignment hearing Wednesday.

But there is virtually no oversight of the technical tools and services that make such images possible. Some are on commercial websites that reveal little information about who operates them or the technology on which they are based.

The Stanford Internet Observatory said in December that it had found thousands of images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that has been used to train leading AI image makers such as Stable Diffusion.

London-based Stability AI, which owns the latest versions of Stable Diffusion, said this week that it “did not approve the release” of the earlier model allegedly used by the Wisconsin man. Such open source models are difficult to put back in the bottle because their technical components are publicly released on the Internet.

Prabhakar said it is not just open-source AI technology that is causing harm.

“It’s a broader problem,” she said. “Unfortunately, this is a category where many people seem to be using image generators. And it’s a place where we just saw such an explosion. But I don’t think it’s neatly divided into open source and proprietary systems.”

——

AP writer Josh Boak contributed to this report.