The Growing Threat of Deepfake Pornography: How to Protect Yourself

“All we need to be a victim is a human form.” This statement by Carrie Goldberg, a lawyer specializing in online harassment and sex crimes, reflects the high risks posed by deepfake pornography in the age of artificial intelligence.

The alarming rise of artificial intelligence-generated deepfake pornography poses a major threat to anyone, whether or not they have shared sexually explicit images online. From high-profile individuals to ordinary people, including minors, the psychological impact on victims is enormous.

The technology behind deepfakes

Unlike revenge porn, which involves the non-consensual sharing of real images, deepfake technology allows perpetrators to create completely fabricated content by superimposing someone’s face onto sexually explicit photos or altering existing images to appear sexually explicit. Even people who have never taken a private photograph can fall victim to this technology.

According to CNN, past high-profile cases have involved celebrities like Taylor Swift and Rep. Alexandria Ocasio-Cortez. However, young people, including minors, are also increasingly targeted.

Protect yourself: Preserve evidence

The first instinct of those who discover that their image has been weaponized in this way is often to try to destroy it. However, Goldberg emphasizes the importance of preserving evidence by taking screenshots first. “The knee-jerk reaction is to have this removed from the internet as quickly as possible. But if you want to have the option of reporting it as a crime, you need evidence,” Goldberg was quoted as saying by CNN.

After documenting the content, victims can request removal of sexually explicit images using tools provided by technology companies such as Google, Meta and Snapchat. Organizations like StopNCII.org and Take It Down also help facilitate the removal of harmful content across multiple platforms.

Legal progress

The fight against deepfake pornography has recently received bipartisan attention. In August 2024, US senators called on major tech companies such as X (formerly Twitter) and Discord to join programs aimed at blocking non-consensual sexually explicit content. A hearing on Capitol Hill included testimony from young people and parents affected by AI-generated pornography. Following this, a bill was introduced in the US that would criminalize the publication of deepfake pornography. The proposed law would also require social media platforms to remove such content upon notification from victims.

Goldberg emphasizes that while victims can take steps to respond, the onus also falls on society to act responsibly. “My proactive advice is really to potential offenders, which is, don’t be the scum of the earth and don’t try to steal someone’s image and use it to humiliate them. There’s not much victims can do to prevent this. We can never be completely safe in a digital society, but we kind of depend on one another not to be completely awful,” Goldberg told CNN.