Teenage girls are falling victim to deepfake nude photos. One family is calling for more protection

A mother and her 14-year-old daughter are calling for better victim protection after AI-generated nude images of the teen and other female classmates were distributed at a New Jersey high school.

Meanwhile, on the other side of the country, officials are investigating an incident involving a teenage boy who allegedly used artificial intelligence to create and distribute similar images of other students – including teenage girls – attending a high school in suburban Seattle, Washington.

The disturbing cases have once again put a spotlight on explicit AI-generated material that overwhelmingly harms women and children and is surfacing online at an unprecedented rate. According to an analysis by independent researcher Genevieve Oh, which was shared with The Associated Press, more than 143,000 new deepfake videos have been posted online this year, more than in all previous years combined.

Desperate for solutions, affected families are urging lawmakers to implement robust safeguards for victims whose images are manipulated using new AI models, or the plethora of apps and websites openly advertising their services. Advocates and some legal experts are also calling for federal regulations that could provide uniform protections across the country and send a strong message to current and potential perpetrators.

“We are fighting for our children,” said Dorota Mani, whose daughter was among the victims in Westfield, a New Jersey suburb outside New York City. “They’re not Republicans, and they’re not Democrats. They don’t care. They just want to be loved, and they want to be safe.”

The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more available and easier to use. Researchers are sounding the alarm this year about the explosion of AI-generated child sexual abuse material using images of real victims or virtual characters. In June, the FBI warned that it continued to receive reports of victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.

Several states have passed their own laws over the years to combat the problem, but these vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing non-consensual deepfake porn, joining Virginia, Georgia and Hawaii, which already had laws on the books. Some states, such as California and Illinois, have only given victims the ability to sue perpetrators for damages in civil court, which New York and Minnesota also allow.

A few other states are considering their own legislation, including New Jersey, where a bill is currently in the works to ban deepfake porn and impose penalties — jail time, a fine, or both — on those who distribute it.

State Sen. Kristin Corrado, a Republican who introduced the legislation earlier this year, said she decided to get involved after reading an article about people trying to circumvent revenge porn laws by using a former partner’s image to generate deepfake porn.

“We just felt like an incident was going to happen,” Corrado said.

The bill has been in the works for a few months, but there’s a good chance it will pass, she said, especially now that the Westfield incident has drawn so much attention to the issue.

The Westfield incident took place this summer and was brought to the high school’s attention on Oct. 20, Westfield High School spokesperson Mary Ann McGann said in a statement. McGann did not provide details on how the AI-generated images were distributed, but Mani, the mother of one of the girls, said she received a call from the school informing her that nude photos had been created using the faces of a number of female students and then circulated among a group of friends on the social media app Snapchat.

The school has not confirmed any disciplinary action, citing the confidentiality of cases involving students. The Westfield Police Department and the Union County Prosecutor’s Office, both of which were notified, did not respond to requests for comment.

No details are known about the incident in Washington state, which occurred in October and is under investigation by police. Issaquah Police Chief Paula Schwan said they have obtained multiple search warrants and noted that the information they have “may be subject to change” as the investigation continues. When reached for comment, the Issaquah School District said it could not discuss the details because of the investigation, but said any form of bullying, harassment or assault among students is “completely unacceptable.”

If officials move to prosecute the incident in New Jersey, current state law banning the sexual exploitation of minors may already apply, said Mary Anne Franks, a law professor at George Washington University who leads the Cyber Civil Rights Initiative, an organization that aims to combat online abuse. But that protection does not extend to adults who might find themselves in a similar scenario, she said.

The best solution, according to Franks, would come from a federal law that could provide consistent protection nationwide and penalize questionable organizations that profit from products and apps that make it easy for anyone to create deepfakes. She said this could also send a strong message to minors who might impulsively take images of other children.

President Joe Biden signed an executive order in October that, among other things, called for banning the use of generative AI to produce child sexual abuse material or non-consensual “intimate images of real individuals.” The order also directs the federal government to issue guidelines for labeling and watermarking AI-generated content to distinguish between authentic and software-generated material.

Citing the Westfield incident, U.S. Rep. Tom Kean, Jr., a Republican who represents the town, introduced a bill on Monday that would require developers to disclose information about AI-generated content. Among other efforts, a federal bill introduced by U.S. Rep. Joe Morelle, a New York Democrat, would make it illegal to share deepfake porn images online, but it has made no progress in months amid the gridlock in Congress.

Some urge caution — including the American Civil Liberties Union, the Electronic Frontier Foundation and The Media Coalition, an organization that works on behalf of trade groups representing publishers, movie studios and others — saying careful consideration is needed to avoid proposals that could run afoul of the First Amendment.

“Some concerns about deepfake misuse can be addressed under existing cyber harassment laws,” said Joe Johnson, an attorney for the ACLU of New Jersey. “Whether at the federal or state level, there needs to be substantial conversation and stakeholder input to ensure that a bill is not overly broad and addresses the stated problem.”

Mani said her daughter has created a website and a charity to help AI victims. The two have also been in talks with state lawmakers pushing New Jersey’s law and are planning a trip to Washington to advocate for more protections.

“Not every child, boy or girl, will have the support system to tackle this problem,” Mani said. “And maybe they don’t see the light at the end of the tunnel.”

__

AP reporters Geoff Mulvihill and Matt O’Brien contributed from Cherry Hill, New Jersey and Providence, Rhode Island.
