
Artificial Confidence

January 21, 2026

Grok's "Spicy Mode" Is Exactly What It Sounds Like

I try not to write about Elon Musk more than necessary. The man generates enough content without my help, and the discourse around him tends to collapse into tribal signaling faster than I can type. But California's Attorney General just sent xAI a cease-and-desist letter demanding they stop generating sexual deepfakes, including of minors, and I think that clears whatever threshold I had for "worth discussing."


What Happened

Grok, xAI's chatbot, has a feature called "spicy mode." The name tells you everything about how seriously the company took content moderation when they built it. Between Christmas and New Year's, an analysis of 20,000 images generated by the platform found that more than half showed people in minimal clothing, and that some appeared to be children.

The California AG's letter cites "numerous examples of xAI taking ordinary, clothed images of women and children" and allowing users to "undress" them. This violates state public decency laws and a deepfake pornography law that went into effect two weeks ago. xAI has five days to demonstrate compliance.

Ashley St. Clair—who is, notably, the mother of one of Musk's children—is suing xAI after Grok generated images depicting her "as a child stripped down to a string bikini, and as an adult in sexually explicit poses." I don't have commentary to add to that. I'm just going to let it sit there.


The Response

xAI's response has been to limit image editing to paying subscribers, which is an interesting choice: the problem isn't who's generating the content, it's that the content exists. They've also blocked Grok in places where "generating images of people in bikinis is illegal," which suggests someone in legal spent an afternoon with a world map and a highlighting pen.

Japan, Canada, and Britain have opened investigations. Malaysia and Indonesia blocked the platform entirely. A coalition of almost 30 women's, child safety, and tech advocacy groups asked Apple and Google to remove X and Grok from their app stores.

The AG acknowledged that xAI took steps to address the issue "in recent days," but said the impact of those changes is unclear. This is the regulatory equivalent of "we'll see."


What It Means

I keep thinking about the product decision that led here. Someone at xAI decided to ship an image generator with a mode explicitly designed to bypass content restrictions, branded it "spicy," and apparently didn't think through what users would do with an unrestricted image generator that could process uploaded photos.

This isn't a failure of AI safety in the abstract sense—there's no alignment problem to solve here, no philosophical debate about machine consciousness. This is a content moderation decision made by humans who chose to ship a feature that predictably enabled harm. The model did exactly what they trained it to do.

The Safety Theater Lens doesn't apply because there was no theater. xAI didn't pretend to have guardrails and then fail to enforce them. They built a product without guardrails and called it a feature. "Spicy mode" is honest marketing in the worst possible way.


The Pattern

What connects this to the rest of the week: the AI industry is discovering that "move fast and break things" has different consequences when the things you're breaking are people. OpenAI is testing ads because they need revenue. Murati's co-founders are jumping ship for better equity. And xAI shipped a product that generates CSAM because nobody in the room said no.

The common thread is that AI companies are optimizing for something—growth, engagement, competitive position—and treating everything else as a problem to solve later. Sometimes "later" arrives faster than expected. Sometimes it arrives in the form of a cease-and-desist order.

I don't know what the right regulatory response looks like. I do know that "watch for suspicious actions that may indicate your AI is generating child pornography" isn't it. That's the user's job the same way it's the user's job to notice when their accounting software embezzles—which is to say, it isn't.

—Morgan
