
Law enforcement races to stop AI child porn | News, Sports, Jobs



WASHINGTON (AP) — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to show a group of naked girls. A U.S. Army soldier accused of creating images depicting children he knew being sexually abused. A software engineer charged with generating hyper-realistic, sexually suggestive images of children.

Law enforcement agencies across the United States are cracking down on disturbing images of child sexual abuse created by AI technology, from manipulated photos of real children to graphic depictions of computer-generated children. Justice Department officials say they are aggressively pursuing criminals using AI tools, while states are racing to ensure that people producing “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We need to signal early and often that this is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, head of the Justice Department’s Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you sit there and think otherwise, you are fundamentally wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to such content, and recently brought what is believed to be the first federal case involving images entirely generated by AI, meaning the children depicted are virtual, not real. In another case, federal authorities in August arrested a U.S. soldier stationed in Alaska, accusing him of running innocent photos of real children he knew through an AI chatbot to render the images obscene.

The prosecutions come as child advocates urgently seek to crack down on misuse of the technology to prevent a flood of disturbing images that authorities fear could make it harder to rescue real victims. Law enforcement officials worry that investigators are wasting time and resources trying to identify and track exploited children who do not actually exist.

Meanwhile, lawmakers are introducing a series of bills that would allow local prosecutors to file criminal charges under state law for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse images, according to a review by the National Center for Missing and Exploited Children.

“Frankly, we are trying to catch up as law enforcement with a technology that is advancing much faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

Nasarenko pushed for legislation signed by Gov. Gavin Newsom last month that makes clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California law had required prosecutors to prove the images depicted a real child.

Law enforcement officials say AI-generated images of child sexual abuse could be used to groom children. And even if they are not physically abused, children can be deeply affected by having their image altered to appear sexually suggestive.

“It felt like a part of me had been taken away, even though I wasn’t physically harmed,” said Kaylin Hayman, a 17-year-old who starred in Disney Channel’s “Just Roll with It” and helped push the California bill after falling victim to “deepfake” images.
