The Business of Nonconsensual AI: How Men Are Monetizing Real Women’s Images

The rise of generative artificial intelligence has unlocked new creative possibilities, but it has also given birth to a predatory underground industry. A recent lawsuit filed in Arizona reveals a disturbing trend: individuals are not just creating nonconsensual deepfake pornography; they are building a business model around teaching others how to do the same.

At the center of this legal action is MG, a 20-something woman from Scottsdale, Arizona. With fewer than 10,000 Instagram followers, she described herself as an average user who shared photos of her daily life—matcha lattes, pool days, and Pilates sessions—with friends and family. She had no intention of becoming an influencer.

That changed last summer when a follower alerted her to a disturbing discovery. Photos and videos circulating on Instagram featured a woman who looked identical to MG, complete with the same tattoos and facial features, but depicted in scantily clad or nude scenarios. The images were not merely manipulated; they were part of a commercial operation.

A “Playbook” for Predators

MG’s complaint, filed in January alongside two other plaintiffs, targets three Phoenix-based men: Jackson Webb, Lucas Webb, and Beau Schultz, along with 50 unnamed individuals. The lawsuit alleges that these defendants operated a platform called AI ModelForge, which served a dual purpose:

  1. Content Creation: They scraped photos from the social media accounts of unsuspecting women and used AI software called CreatorCore to generate realistic, sexually explicit images and videos. This content was then sold on subscription platforms like Fanvue.
  2. Instructional Services: For a monthly fee of $24.95 via the platform Whop, they sold online courses teaching other men how to replicate this process.

The lawsuit describes a systematic approach to victim selection. According to court documents, the defendants provided a “playbook” that instructed subscribers on how to identify targets who were unlikely to defend themselves legally. The criteria often included women with fewer than 50,000 followers, under the assumption that they lacked the resources or visibility to pursue legal action.

“They provided a whole playbook, including instructions on how to pick the right person so that it’s not someone who can defend themselves,” MG stated. “It was disgusting on every single level.”

The financial incentives were significant. The complaint alleges that the scheme generated over $50,000 in a single month. By 2025, the CreatorCore platform reportedly had over 8,000 subscribers, resulting in the creation of more than 500,000 AI-generated images and videos.

The Legal and Technical Gray Zone

This case highlights a critical gap between the rapid advancement of AI technology and the legal frameworks designed to regulate it. While nonconsensual sexual imagery is illegal in most jurisdictions, the mechanics of AI generation create unique enforcement challenges.

Federal and State Laws
* The Take It Down Act: Signed into law in May 2025 by President Trump, this federal legislation criminalizes the publication of nonconsensual sexualized AI content. However, it does not take effect until May 2026.
* State Regulations: Arizona and other states have banned “deepfake” pornography, but critics argue these laws are often reactive rather than proactive. As Arizona State Representative Nick Kupper noted, removing content after it spreads is like playing “whack-a-mole.”

Platform Enforcement Challenges
Social media platforms face significant hurdles in policing this content. MG reported that Instagram struggled to remove the images because they technically did not violate impersonation guidelines—the AI-generated faces were distinct enough from her original photos to avoid automatic detection, yet recognizable enough to cause harm.

* Instagram: A spokesperson stated the company has “extremely strict policies” against nonconsensual intimate imagery and confirmed that accounts associated with AI ModelForge are under review.
* TikTok: The platform reported that accounts promoting the defendants’ business were found to violate community guidelines and have been removed.

Despite these measures, the defendants appear to have adapted. AI ModelForge has rebranded as TaviraLabs, a Telegram community with over 18,000 members that markets itself as an “AI Influencer coaching community.” Promotional accounts continue to post content boasting about financial gains from AI-generated models, with captions like, “She’s not my girlfriend, she’s my best paid employee.”

Why This Matters

The lawsuit against Webb, Schultz, and their associates is more than a personal grievance; it is a symptom of a broader cultural and technological shift. The commodification of nonconsensual AI imagery raises urgent questions about digital consent and the vulnerability of everyday users.

Nick Brand, the attorney representing the plaintiffs, emphasized the insidious nature of the operation. Unlike isolated cases of deepfake abuse, this business model industrializes exploitation. It transforms real women into products and teaches others how to replicate the harm.

“These boys aren’t just using generative AI to disrobe women—they’re selling the ability to do so to other men and boys,” Brand said. “MG and the other two plaintiffs are the face of a product that is harming other women.”

Conclusion

The case of MG and the other plaintiffs serves as a stark warning for anyone with an online presence. The threat is not limited to celebrities or high-profile influencers; it targets ordinary individuals whose digital footprints are easily accessible. As AI technology becomes more sophisticated and accessible, the need for proactive legal safeguards and robust platform enforcement has never been more urgent. Until laws catch up with technology, protecting one’s digital identity will remain a precarious balance between user caution and systemic accountability.
