The proliferation of the internet has brought unprecedented connectivity to the world, but it has also compromised our privacy and safety. In recent years, the growth of AI tools has led to incredible innovations, but also to horrific forms of abuse. For years, websites hosting nonconsensual sexual content such as revenge pornography have spread online, with some victims being underage. Women and girls may also fall victim to blackmail or grooming, with predators exploiting that vulnerability to obtain such content.
Now, with the accelerated development of AI image and video generation, apps and websites are offering to “nudify” images. The technology works by digitally “undressing” any photo, and the overwhelming majority of its victims are women and girls. People can use these apps to create nonconsensual sexual and abusive images of women and girls, and predators can generate child sexual abuse material simply by scraping photos from victims’ public profiles.
Lawmakers and tech companies have attempted to reduce the harmful impact of these sites; however, millions of people still access them. “Nudify” apps have seen an alarming surge in popularity: Graphika’s researchers found that over 24 million people visited these websites in September 2023 alone. Worse still, these apps and websites are raking in millions of dollars in profit.
The Personal Toll on Young Victims

Those most vulnerable to this form of sexual exploitation are young teenagers. A Kentucky teenager died by suicide after falling victim to a sextortion scam: the 16-year-old had received threatening texts demanding $3,000 in exchange for not releasing an AI-generated nude image of them. The tragedy is far from isolated; it is one of thousands of digital blackmail cases targeting children around the world.
The proliferation of these “nudify” AI tools has brought new forms of abuse that affect children. According to recent research, the apps are becoming increasingly popular in schools, and pornography scandals in which teenagers create sexualized images of their classmates have begun to surface at schools and universities around the world.
Francesca, a 14-year-old high-school student, was called into the principal’s office. Alarmed, she anxiously made her way there, trying to figure out what she could have done wrong. The vice-principal informed her that AI-generated nude photos of her were circulating among the students. Horrified, Francesca told her mother, who contacted the school board. Her mother was livid at their response: they explained that there were no laws yet to deal with this.
The school’s inaction and bias led Francesca and her mother to go public. In an interview with 60 Minutes, they described how differently the school treated the girls compared with the boys. A Save the Children survey found that one in five young people in Spain have been victims of deepfake nudes, with those images shared online without consent. In Lancaster, PA, two boys created “347 AI-generated deep-fake pornographic images and videos” from photos of 60 girls, some as young as 12.
The Technology Behind Digital Exploitation

These “nudify” apps work by having perpetrators upload fully clothed images of victims to the platform. Perpetrators can scrape these images from victims’ public social profiles such as Instagram or Facebook, even when victims have enhanced privacy settings. Then all they have to do is click a button, and the software begins to analyze body type, skin color, and pose.
It then reconstructs the image using an expansive database of sexually explicit and illicit material. Some AI nudify apps are trained on the LAION-5B dataset, a 5.85 billion-image collection that contains over 150 million illicit and sexual images. Stanford University research revealed that LAION-5B, which is widely used to train AI image generation models, contains 3,226 images suspected to be child sexual abuse material.
The dataset was temporarily taken offline and has since been “cleaned,” with 2,236 links to child sexual abuse material removed. These tools are so easy to access that images are generated almost instantly, and a 60-second deepfake pornographic video of anyone can be created in just minutes.
The Massive Profit Machine
A recent analysis found that 85 “undress” websites garnered an average of 18.5 million visitors over a six-month period, with estimated annual earnings of up to $36 million. Among the 10 most visited nudify sites, most traffic originates from the United States. The business model has proven resilient, with researchers describing the fight against AI nudifiers as “a game of whack-a-mole.”
Corporate Complicity and Infrastructure Support
Providers of these AI-nudifying services profit from access to infrastructure run by mainstream tech giants like Google, Amazon, and Cloudflare. Most of these websites exploit that access to operate and remain profitable despite crackdowns by platforms and regulators. Meta recently filed a lawsuit against the Hong Kong company behind a nudify app called Crush AI, alleging that the company repeatedly circumvented Meta’s advertising rules on its platforms.
Online safety researcher Alexios Mantzarlis criticized “Silicon Valley’s laissez-faire approach to generative AI,” arguing that tech companies should have stopped providing services to AI nudifiers once their primary use case became clear.
Global Demographics and Usage Patterns
A demographics report on victims of deepfake photos and videos states that 99% of victims are women and girls. The global reach of these apps is extremely concerning: most website traffic comes from the United States, with Mexico, Brazil, India, and Germany rounding out the top five countries of origin.
A recent report found that over a third of teenagers were aware of nudify websites, and 27% of the teen boys who admitted to using them said they did not share the resulting photos or videos. When the images are never shared, victims often remain unaware of the violation and have no way to seek recourse.
The OnlyFans Double Standard
Non-consensual image apps face minimal backlash, while women are punished for consensual adult content. Annie Knight, 26, was fired from her corporate job after her boss discovered her OnlyFans account, which she had started to help pay off her student loans. Her termination email included screenshots and three reasons for dismissal.
Similarly, Kirsten Vaughn, a 24-year-old mechanic, was fired after colleagues discovered and began watching her OnlyFans content at work. She was sexually harassed as a result, yet management blamed her rather than the colleagues doing the harassing.
High-Profile Cases and Celebrity Targeting
Victims of these AI nudify sites also include public figures and celebrities. One prominent case involved Taylor Swift, whose AI-generated nude images circulated widely online. The American Sunlight Project found 35,000 copies of non-consensual images and videos of 26 members of Congress, 25 women and one man. The poet Helen Mort discovered nude images created from her Christmas and vacation photos, including some made from pictures taken during her pregnancy. “It really makes you feel powerless,” she said. “Like you’re being put in your place.”
Legal Responses and Legislative Efforts
The legal response to helping victims and preventing the victimization of women and children has been lackluster. The Take It Down Act, signed into law by President Trump in May 2025, criminalizes non-consensual deepfake intimate imagery and requires digital platforms to remove such content within 48 hours of notification.
The irony, however, is that the law requires victims to identify and report the content themselves, shifting the burden of catching perpetrators from law enforcement onto victims who may not even know the content exists. In the UK, creating sexually explicit deepfakes became a criminal offense, with perpetrators facing up to two years in jail. Even so, these measures highlight how inadequate the current legal response remains and how little weight the protection of women and children is given.
The Inadequacy of Current Solutions
The Maryland Coalition Against Sexual Assault has highlighted that victims cannot easily protect themselves or prevent nonconsensual material from being made; the responsibility to act lies with technology companies. Yet the current approach forces victims to discover, expose, and report violations themselves.
The rise of AI-generated nude images severely threatens privacy, emotional health, and reputations. Deepfakes have already damaged victims’ lives, careers, and relationships, and some teenage victims have died by suicide. Experts warn that existing legislation contains severe flaws that need addressing, and the stigma around consensual adult material must also be confronted. The real focus should be on preventing this illicit nonconsensual content, especially when it involves children.