Despite strict content policies, Apple and Google have allowed AI-powered “nudify” apps to operate on their official marketplaces at scale. New findings expose deep moderation gaps, raise legal and ethical concerns, and shed light on how generative AI tools are reshaping the boundaries of digital harm and corporate responsibility.
In an era where artificial intelligence is rapidly reshaping digital experiences, a new investigation has laid bare a striking contradiction at the heart of two of the world’s most influential digital marketplaces: Apple’s App Store and Google’s Play Store. Despite well-publicized rules barring sexually explicit and exploitative content, both platforms have been hosting and, until recently, profiting from dozens of AI-powered “nudify” applications that enable users to generate non-consensual intimate imagery. According to research conducted by the Tech Transparency Project (TTP), these tools were widely available and easily discovered through basic keyword searches, underscoring deep weaknesses in content enforcement and platform governance.
Generative AI, with its capacity to create richly detailed synthetic images from minimal input, has unlocked transformative creative possibilities. It has also unleashed tools that can be weaponized to violate privacy and dignity, particularly through “nudify” and face-swap applications. These apps allow users to upload a photograph and command the AI to remove clothing or superimpose someone’s face onto a naked body. The TTP investigation found that 55 such apps were on Google Play and 47 on Apple’s App Store, with 38 appearing on both platforms at the time of the review.
Even more alarmingly, these apps are not fringe anomalies: together they have been downloaded over 705 million times worldwide and have generated an estimated $117 million in revenue through in-app purchases and subscriptions, of which both Apple and Google take a significant commission. This reality sharply contradicts the platforms’ own guidelines, which explicitly ban “sexual nudity” and non-consensual depictions in apps distributed through their stores. Yet rather than preemptively blocking these tools, both marketplaces allowed them to flourish until external scrutiny forced a reaction.
What makes this situation uniquely troubling is not just the existence of these apps, but the clear policy violation that their presence represents. Apple’s App Store Review Guidelines bar “overtly sexual or pornographic material,” and Google Play’s policies explicitly prohibit apps that “claim to undress people or see through clothing,” even if marketed as a prank or entertainment. Despite these prohibitions, many of the nudify apps functioned precisely in these prohibited ways and were discoverable through innocuous searches for terms like “nudify” or “undress.”
The technical ease with which these apps operate belies the profound harm they can cause. By lowering barriers to creating realistic deepfake nudity, they facilitate the creation of content that, in many jurisdictions, is recognized as a form of image-based sexual abuse. Unlike traditional deepfake websites hidden in the dark corners of the internet, these apps were available in mainstream marketplaces trusted by consumers and families. That some were rated for users as young as 9 or 13 years old only magnifies the risk that minors could inadvertently access tools capable of generating exploitative content.
In response to the report, both platform operators have taken some enforcement actions. Apple confirmed that it removed 28 identified apps, though watchdog follow-ups suggest the actual number of removals may be slightly lower. Google has "suspended" several of the referenced apps from the Play Store. However, observers note that enforcement has been reactive rather than proactive, triggered only after external media and watchdog reporting rather than through internal detection and moderation systems. Many similar tools reportedly remain available on both platforms.
The economic incentives embedded in app marketplaces further complicate the picture. Because both Apple and Google collect up to 30% of revenue from in-app purchases and subscriptions, the platforms have indirectly profited from the distribution of these nudify apps. This dynamic raises uncomfortable questions about whether marketplace monetization models create conflicting incentives that may impede vigorous enforcement against profitable but harmful tools.
The broader implications of this controversy extend beyond individual apps to systemic challenges in moderating synthetic media abuse. Traditional content moderation approaches focus on reacting to reported harmful content, but generative tools create harm by design and do not require a specific complaint to produce exploitative outputs. Experts argue that this calls for a new category of moderation governance: regulating the tools themselves rather than isolated pieces of harmful content, which requires keyword detection, functional testing, and proactive screening that go deeper than current automated filters.
In the United States and internationally, lawmakers and regulators have increasingly focused on the question of platform accountability for AI-related harms. In recent months, for example, investigations and lawsuits tied to similar generative AI misuse — including controversies over the Grok AI chatbot producing sexualized or otherwise abusive images — have highlighted public concern and legal scrutiny over how AI is deployed by major tech companies and distributed via major channels.
Public reaction has been swift and critical. Advocacy groups, digital rights organizations, and journalists have described the discovery of nudify apps as emblematic of a wider failure by Apple and Google to keep pace with the rapid evolution of generative AI. Critics argue that while both companies invest heavily in AI research and commercialization, they have not matched that investment with equally sophisticated content safety frameworks that can anticipate and mitigate new categories of abuse.
One central concern is that platforms are still playing catch-up with the technologies they help circulate. In the case of the nudify apps, basic keyword searches were enough to surface dozens of violations, a glaring reminder that detection often requires not cutting-edge machine learning or deep forensic analysis, but disciplined content review and a sustained commitment to enforcement.
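To illustrate how low that technical bar is, the sketch below shows a minimal keyword screen over app store listing metadata. The listing data, keyword list, and field names are hypothetical; a real review pipeline would layer functional testing and human review on top of a first pass like this.

```python
# Minimal sketch (hypothetical data and keywords): flag store listings whose
# titles or descriptions match "nudify"-style terms. A real pipeline would
# add functional testing and human review on top of this first pass.
import re
from dataclasses import dataclass

FLAG_TERMS = ["nudify", "undress", "remove clothes", "x-ray", "see through clothing"]
PATTERN = re.compile("|".join(re.escape(t) for t in FLAG_TERMS), re.IGNORECASE)

@dataclass
class Listing:
    app_id: str
    title: str
    description: str

def flag_listings(listings):
    """Return (listing, matched_terms) pairs for metadata that hits any flag term."""
    flagged = []
    for listing in listings:
        text = f"{listing.title} {listing.description}"
        matches = sorted({m.group(0).lower() for m in PATTERN.finditer(text)})
        if matches:
            flagged.append((listing, matches))
    return flagged

if __name__ == "__main__":
    sample = [
        Listing("com.example.photofun", "PhotoFun Editor", "Filters and collages for your photos."),
        Listing("com.example.undress", "AI Undress Camera", "Nudify any photo instantly."),
    ]
    for listing, terms in flag_listings(sample):
        print(f"{listing.app_id}: matched {terms}")
```

Nothing about this requires machine learning; it is the kind of basic metadata screen a store operator could run continuously against new and existing listings.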
The ethical dimensions of allowing such tools also extend into cultural and social territory. Deepfake and nudify tools have been linked to the broader phenomenon of image-based sexual abuse, disproportionately harming women and marginalized communities. Studies and anecdotal evidence suggest that non-consensual deepfake content can have severe psychological, social, and sometimes legal consequences for victims, straining existing frameworks for digital rights and personal autonomy.
While Apple and Google’s partial enforcement actions are a step in the right direction, they raise deeper questions about what it will take for platforms to build safety at the pace of AI innovation. Would it require independent audits of app marketplaces? Stronger regulatory mandates? Or perhaps structural changes to app store governance that prioritize human safety over marketplace expansion? These are questions increasingly being asked by policymakers, technologists, and civil society alike.
From a consumer standpoint, the episode erodes trust in digital marketplaces that have long marketed themselves as curated and secure environments. Users expect platforms like Apple’s App Store and Google Play to act as vigilant gatekeepers, preventing harmful or inappropriate tools from reaching millions of unsuspecting users. The revelation that such tools have been readily available — and profitable — for years suggests a significant shortfall between policy and practice.
Looking ahead, the reaction from the United States could involve a mix of legislative, regulatory, and industry-driven initiatives. U.S. lawmakers have introduced and debated bills aimed at tightening platform accountability for generative AI harms, and there is increasing bipartisan attention on deepfake technologies as part of broader tech oversight efforts. It is conceivable that future hearings will scrutinize not only specific apps but the moderation infrastructures and business models that allowed them to proliferate in the first place. Similarly, regulatory bodies might consider mandates requiring transparency reports, algorithmic audits, and proactive abuse detection for platforms hosting generative technologies.
There are also plausible scenarios in which Apple and Google voluntarily strengthen their internal policies. Market pressure, reputational risk, and competitive dynamics could push both companies to develop more advanced AI abuse detection tools, invest in specialist moderation teams, and revise app submission and review processes to flag generative tools capable of producing harmful outputs before they ever reach consumers.
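To make that concrete, here is a hypothetical sketch of submission-time triage: an app that declares generative image capabilities and carries other risk signals is routed to human review instead of automated approval, while explicit undressing claims are rejected outright. The field names, thresholds, and rules are invented for illustration and do not describe either company's actual review systems.

```python
# Hypothetical submission-time triage sketch. Field names and rules are
# invented for illustration; they do not describe Apple's or Google's
# actual review pipelines.
from dataclasses import dataclass

RISK_KEYWORDS = {"nudify", "undress", "x-ray", "deepfake", "face swap"}

@dataclass
class Submission:
    app_id: str
    description: str
    declares_generative_images: bool = False
    declared_age_rating: int = 17

def triage(sub: Submission) -> str:
    """Return 'reject', 'human_review', or 'auto_approve' for a submission."""
    text = sub.description.lower()
    keyword_hits = {k for k in RISK_KEYWORDS if k in text}

    # Explicit undressing claims violate the stores' stated policies outright.
    if {"undress", "nudify"} & keyword_hits:
        return "reject"

    # Generative image apps with other risk signals, or low age ratings,
    # go to a human reviewer instead of automated approval.
    if sub.declares_generative_images and (keyword_hits or sub.declared_age_rating < 17):
        return "human_review"

    return "auto_approve"

if __name__ == "__main__":
    art_app = Submission("com.example.artgen", "Generate AI art from text prompts.",
                         declares_generative_images=True, declared_age_rating=12)
    print(triage(art_app))  # -> human_review
```

The point of the sketch is not the specific rules but the ordering: policy-violating claims are caught at submission time, before an app can accumulate downloads or revenue.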
In sum, the exposure of AI “nudify” apps on major app marketplaces represents not just a content moderation oversight but a wake-up call for digital governance in the age of generative AI. It highlights the tension between innovation and responsibility, the economic incentives embedded in digital ecosystems, and the urgent need for policies and technologies that can protect individuals from new forms of algorithmic harm. As AI continues to evolve, so too must our frameworks for ensuring that it serves the public good without eroding fundamental rights to privacy, dignity, and safety.
