At a Glance
- Eight U.S. senators sent letters to major social platforms demanding proof of protections against sexualized deepfakes.
- The letters require companies to preserve all documents on AI-generated sexual imagery and explain moderation plans.
- X updated Grok to ban edits of real people in revealing clothing after criticism over ease of generating such content.
- Why it matters: The move signals intensifying federal pressure on tech giants to curb AI-generated non-consensual intimate imagery.
Eight senators have asked X, Meta, Alphabet, Snap, Reddit and TikTok to show they have “robust protections and policies” against sexualized deepfakes, and to detail how they will stop the spread of AI-generated sexual imagery on their services.
The lawmakers also directed each firm to preserve every document tied to the creation, detection, moderation and monetization of sexualized AI images, along with any related policies.
The letters landed hours after X said it updated Grok to bar edits of real people in revealing clothing and to limit image generation to paying users.

Pointing to media reports that Grok easily produced sexualized and nude images of women and children, the senators warned that existing guardrails may be inadequate.
“We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,” the letter states.
Grok and X have drawn heavy criticism, yet the problem spans platforms.
Deepfakes first surged on Reddit, where a forum of synthetic celebrity porn videos went viral before the platform banned it in 2018. Sexualized deepfakes of celebrities and politicians have since multiplied on TikTok and YouTube, though they often originate elsewhere.
Meta’s Oversight Board last year flagged two cases of explicit AI images of female public figures, and the firm allowed nudify-app ads before later suing CrushAI. Reports show children sharing deepfakes of peers on Snapchat. Telegram, not on the senators’ list, hosts bots built to undress women’s photos.
In response to the letter, X cited its Grok update announcement.
“We do not and will not allow any non-consensual intimate media (NCIM) on Reddit, do not offer any tools capable of making it, and take proactive measures to find and remove it,” a Reddit spokesperson said. “Reddit strictly prohibits NCIM, including depictions that have been faked or AI-generated. We also prohibit soliciting this content from others, sharing links to ‘nudify’ apps, or discussing how to create this content on other platforms.”
Alphabet, Snap, TikTok and Meta did not immediately reply to requests for comment.
The letter demands each company provide:
- Policy definitions of “deepfake,” “non-consensual intimate imagery,” or similar terms.
- Descriptions of policies and enforcement for non-consensual AI deepfakes of bodies, non-nude pictures, altered clothing and “virtual undressing.”
- Descriptions of current content policies on edited media and explicit content, plus internal moderator guidance.
- How policies govern AI tools and image generators for suggestive or intimate content.
- What filters, guardrails or measures prevent generation and distribution of deepfakes.
- Mechanisms to identify deepfake content and block re-uploads.
- How companies stop users from profiting from deepfake content.
- How platforms prevent monetizing non-consensual AI-generated content.
- How terms of service enable bans or suspensions for posting deepfakes.
- What companies do to notify victims.
Signers include Senators Lisa Blunt Rochester (D-Del.), Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-N.Y.), Mark Kelly (D-Ariz.), Ben Ray Luján (D-N.M.), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).
The letters arrive a day after xAI’s owner Elon Musk said he was “not aware of any naked underage images generated by Grok.” On Wednesday, California’s attorney general opened an investigation into xAI’s chatbot amid global pressure over lax Grok guardrails.
xAI says it removes “illegal content on X, including [CSAM] and non-consensual nudity,” yet neither the firm nor Musk has explained why Grok could generate such edits.
The issue extends beyond non-consensual sexual imagery. While not all AI image services allow “undressing,” many enable deepfake creation. OpenAI’s Sora 2 reportedly let users generate explicit videos featuring children; Google’s Nano Banana seemingly produced an image of Charlie Kirk being shot; racist videos made with Google’s AI model have gained millions of views.
Chinese tools add complexity. Many Chinese firms, especially those linked to ByteDance, offer easy face, voice and video editing, and the outputs circulate on Western platforms. China mandates stronger synthetic-content labeling than the fragmented U.S. federal rules.
Federal legislation has had limited effect. The Take It Down Act, signed in May, criminalizes creating and sharing non-consensual sexualized imagery, yet provisions focus scrutiny on individual users rather than image-generating platforms.
States are acting independently. This week, New York Governor Kathy Hochul proposed laws to require labeling of AI-generated content and to ban non-consensual deepfakes, including depictions of opposing candidates, in the run-up to elections.
Key Takeaways
- Federal lawmakers want proof that major platforms can stop AI-generated sexual deepfakes.
- X limited Grok after backlash, but senators say platform-wide safeguards remain weak.
- Without stronger federal standards, states are writing their own deepfake rules.

