

“Nudify” apps: how a dangerous trick has become a new vector for child sexual exploitation

Content warning — this article discusses sexual exploitation, AI-generated sexual imagery, and child sexual abuse. If you or someone you know is in immediate danger, contact local emergency services. For reporting in the U.S., contact the National Center for Missing & Exploited Children (NCMEC).

A new class of AI tools—commonly called “nudify” apps—uses generative image models to remove clothing or synthesize explicit imagery from ordinary photographs. Initially marketed to adults as titillating novelty tools, they’ve quickly become a real-world harm vector: used for harassment, sextortion, schoolyard bullying, and, crucially, the creation and circulation of sexual images of children that look real enough to be weaponized by predators and traffickers.

Below we explain what these apps do in plain terms, why they matter for child safety, how they are being misused, what laws and platforms are doing, and what parents, schools, and tech platforms can do right now.

What “nudify” apps are (and why they’re different now)

“Nudify” apps let a user upload a photo and automatically generate a nude or partially nude version of the person pictured using image-generation models. Unlike the crude photo edits of the past, modern models can produce photorealistic results that preserve a subject’s face, body posture, skin details, and lighting—making the images highly believable to viewers. These services exist as mobile apps, web apps, and hosted APIs, and they’ve attracted large user bases and sizable revenue despite growing pushback.

How these apps are used to harm children

There are several interlocking ways nudify tools amplify harm to minors:

Peer misuse and school bullying. Students can use the tools to create explicit images of classmates from school photos or social media, then share them to humiliate, coerce, or extort. Recent studies and advocacy briefs document this exact pattern happening among teens.

Grooming and sextortion. Predators can create “evidence” of sexual activity and use it to blackmail victims or to groom them into compliance. Because the images look real, they’re powerful coercive tools. 

Supply for trafficking and online markets. AI-generated CSAM (child sexual abuse material), even when entirely synthetic, becomes part of the content ecosystem traffickers exploit—shared privately, sold or traded, or used to normalize and escalate abuse. Policymakers and child-safety groups warn that its synthetic nature does not reduce harm or legal culpability.

Legal and platform responses

Governments and platforms are responding on several fronts: removal of offending apps from app stores, lawsuits, and new or updated laws targeting non-consensual sexual imagery and AI-generated sexual content. Major platforms have taken legal or enforcement action against app makers that advertise and distribute nudify tools, and lawmakers in many jurisdictions are moving to explicitly criminalize AI-generated sexual images made without consent. At the same time, enforcement lags because the tech changes quickly and tools can be hosted overseas. 

Why synthetic doesn’t mean “harmless”

There’s a dangerous myth that because AI-generated images aren’t “real,” the harm is lesser. That’s false in practice:

The victim experiences the same humiliation, fear, and social damage as if the photo were real.

Synthetic images can be used as tools for coercion, circulation, and recruitment in exactly the same ways as real CSAM.

They create legal and practical complications for victims who must prove image origin while facing reputational harm. Advocacy groups and child-safety centers emphasize that AI-generated CSAM is child sexual abuse material regardless of whether it was filmed or synthesized. 

Practical steps for parents, schools, and platforms

For parents and caregivers

Limit publicly shared photos of minors. Avoid posting high-resolution photos of children, particularly in private settings or identifiable school uniforms, because they can be harvested. (That said, responsibility never lies with victims; adults and platforms must shoulder the burden of preventing abuse.)

Talk openly and early. Explain that people can digitally alter images, that asking for explicit photos is always risky and illegal, and that the child should tell a trusted adult immediately if someone pressures or shames them.

Save evidence & report quickly. If your child is targeted, preserve messages/screenshots and report to platform moderators, local police, and organizations like NCMEC (or the equivalent in your country).

For schools

Update policies and response plans. Include AI-generated sexual imagery in acceptable-use and safeguarding policies and ensure rapid, trauma-informed response protocols are in place. Educational briefs from research centers recommend this step.

Train staff and students. Use age-appropriate curricula on online image risk, privacy settings, and how to report abuse.

For tech platforms & app stores

Tighten ad and app review. Platforms must block promotion of services that generate non-consensual sexual imagery, and proactively detect and remove URLs, ads, and accounts that advertise nudify functionality. Major platforms are starting to take legal action and remove offending apps, but more consistent enforcement is needed.

Age-gates, provenance metadata & detection tools. Require provenance metadata, watermarking, or other technical mitigations in generative services, and invest in detection tools that flag sexualized content depicting minors or likely minors.

Rapid takedown & support channels. Provide easy reporting flows and prioritize human review for suspected child-focused material; connect reports to specialized law-enforcement and child-safety organizations.

Policy recommendations

Ban or restrict commercial services that enable easy creation of explicit images of identifiable people without consent, with exemptions only for lawful, consensual research under strict controls. (Several jurisdictions and agencies are moving this way.) 

Harmonize criminal law to explicitly cover AI-generated CSAM and non-consensual deepfakes, and ensure penalties target creators/distributors and those who profit from these images. 

Require transparency from generative model providers (data provenance, safety filters, content policies) and mandate industry cooperation with child-safety hotlines.

Fund detection, victim support, and school programs so prevention and recovery are resourced, not left to families.

Where to get help and report

U.S.: report to the National Center for Missing & Exploited Children (NCMEC) at missingkids.org and to local law enforcement.

U.K.: report to local authorities, the NSPCC, or Child Exploitation and Online Protection (CEOP); follow the government guidance called for in recent commissioner reports.

Australia: the eSafety Commissioner has enforcement powers and reporting channels for online image-based abuse.

Bottom line

“Nudify” apps are not a harmless novelty. They have rapidly moved from fringe web toys to tools used in bullying, coercion, and the creation of sexually exploitative material involving children. That mix—rapidly improving AI capabilities plus social distribution systems—creates a real and present danger that requires coordinated action from tech platforms, lawmakers, schools, and caregivers. Victims must be treated as victims first: prioritize removal, support, and legal remedy over debates about whether an image is “synthetic.” The technology will keep evolving; what matters is that our systems—legal, technical, and social—evolve faster to protect children.