On Tuesday, Spanish Prime Minister Pedro Sánchez did something no national leader has done before: he ordered prosecutors to open a criminal investigation into X, Meta, and TikTok for allegedly creating and disseminating AI-generated child sexual abuse material through their platforms.
"The Council of Ministers will invoke Article 8 of the Organic Statute of the Public Prosecution Service to request that it investigate the crimes that X, Meta and TikTok may be committing through the creation and dissemination of child pornography by means of their AI," Sánchez wrote on X. He accused the platforms of "attacking the mental health, dignity and rights of our sons and daughters," adding: "The impunity of the giants must end."
This is not a regulatory slap on the wrist. This is a criminal referral.
What Triggered This
The immediate catalyst is Grok, the AI chatbot built by Elon Musk's xAI and integrated into X. Following a December update to Grok, the Center for Countering Digital Hate found that the chatbot had generated an estimated 3 million sexualized images, including roughly 23,000 that appeared to depict minors.
X announced in January that it had introduced measures to prevent Grok from editing images of real people into "revealing clothing." But Reuters reported earlier this month that Grok continued generating sexualized images even when users explicitly stated the subjects did not consent. When Reuters asked xAI for comment, the company's repeated response was: "Legacy Media Lies."
Meta and TikTok are also named in the probe. While neither company's AI tools are accused of directly generating CSAM in the way Grok has, Spain's government is investigating whether their platforms have been used to distribute such material and whether their content moderation systems are adequate.
Meta told TIME that its AI tools are "trained not to comply with requests to generate nude images" and that it prohibits "nudify" apps from advertising on its platforms. TikTok said CSAM is "abhorrent and categorically prohibited" and that it invests in "advanced technologies to stay one step ahead of bad actors."
A Pattern, Not an Incident
Spain isn't acting alone. Ireland's Data Protection Commission has launched its own probe into X over Grok. Italy's data protection authority has raised concerns. The European Commission is investigating whether X complies with the Digital Services Act's provisions on child safety.
And Spain itself is going further than just this investigation. Earlier this month at the World Government Summit in Dubai, Sánchez announced plans to ban social media for children under 16 — following Australia's lead in December. He described social media as "a failed state, a place where laws are ignored, and crime is endured, where disinformation is worth more than truth."
Elon Musk responded by calling the youth social media ban "madness" and described Sánchez as "a tyrant and traitor to the people of Spain."
The exchange crystallizes a broader divide: the gap between how tech leaders and elected governments understand the word "accountability" is now a chasm.
Why Criminal, Not Civil
The significance of Spain's move is in the mechanism. Regulatory fines are a cost of doing business for companies with trillion-dollar market caps. The EU's Digital Services Act can impose fines of up to 6% of global revenue — serious money, but still money. A criminal investigation is different. It carries the possibility of criminal liability for individuals, not just entities.
Spain is invoking Article 8 of its Organic Statute of the Public Prosecution Service, which allows the government to direct prosecutors to investigate specific matters of public interest. Legal experts note this doesn't guarantee charges will be filed — but it forces the issue into criminal courts rather than regulatory review boards.
Whether Spanish courts have jurisdiction over companies headquartered in the United States and China is a question that will likely be tested. But the political signal is unmistakable: at least one European government now treats AI-generated CSAM not as a content moderation problem but as a potential crime committed by the platforms themselves.
What to Watch
Three things to track in the coming weeks:
- Whether other EU member states follow Spain's lead. France, Germany, and the Netherlands have all expressed concern about Grok specifically. A coordinated criminal approach would be unprecedented.
- How X responds. The company's posture so far — dismissing Reuters as "Legacy Media Lies" and calling critics tyrants — suggests it may not take the investigation seriously. That could be a strategic miscalculation in a legal system that doesn't run on tweets.
- Whether this accelerates the EU's broader crackdown. The Digital Services Act is the framework, but enforcement has been uneven. A criminal probe by a member state could pressure the European Commission to act faster and harder.
Why This Matters
For the past two years, the conversation about AI-generated abuse material has been framed as a moderation challenge — something platforms should try harder to prevent. Spain just reframed it as a crime that platforms may be committing. That's a different legal universe, and if the investigation produces results, it will change how every major AI company thinks about the guardrails on their image generation tools. Not because they suddenly develop a conscience, but because criminal liability is the one thing that reliably changes corporate behavior.