The European Commission has launched a new formal investigation into the social media platform X under the Digital Services Act (DSA) over sexually explicit deepfakes, according to a statement published on the Commission’s website.
The move comes amid concerns that Grok, the AI chatbot integrated into X, was used to generate non-consensual sexually explicit images, prompting investigations and regulatory action in other countries, including the UK, Malaysia, and Indonesia.
According to the European Commission, the new investigation will examine whether X adequately assessed and mitigated risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material.
The Commission noted that these risks appear to have materialised, exposing EU citizens to serious harm.
“In light of this, the Commission will further investigate whether X complies with its DSA obligations to:
“Diligently assess and mitigate systemic risks, including the dissemination of illegal content, negative effects in relation to gender-based violence, and serious negative consequences to physical and mental well-being stemming from deployments of Grok’s functionalities into its platform.
“Conduct and transmit to the Commission an ad hoc risk assessment report for Grok’s functionalities in the X service with a critical impact on X’s risk profile prior to their deployment,” the Commission stated.
Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, described harmful sexual deepfakes as an unacceptable form of degradation.
“Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens – including those of women and children – as collateral damage of its service.”
If X is found to have failed to meet these obligations, such failures would constitute infringements of Articles 34(1) and (2), 35(1), and 42(2) of the DSA.
Separately, the Commission has extended its ongoing investigation, opened in December 2023, to examine whether X has adequately assessed and mitigated systemic risks associated with its recommender systems, including the platform’s recent switch to a Grok-based recommender system.
Those formal proceedings assessed whether X had complied with the DSA in key areas, including content moderation, risk management, deceptive design, advertising transparency, and data access for independent researchers.
The December 2023 probe relied on X’s risk assessment report, its Transparency Report, and responses to formal information requests, including content related to Hamas’ attacks against Israel.
These proceedings were the first formal enforcement action under the DSA and followed X’s designation as a Very Large Online Platform (VLOP) on 25 April 2023, based on its reported 112 million monthly EU users.
On 5 December 2025, the Commission fined X €120 million for non-compliance related to deceptive design, lack of advertising transparency, and restricted data access for researchers.
Regulators highlighted three main violations:

– X’s paid blue checkmark system, which allowed users to buy verification without proper identity checks, misleading the public and increasing exposure to scams.

– Advertising transparency breaches, including missing information on sponsors and target audiences, design barriers, and long processing delays that hindered researchers’ ability to track disinformation.

– Restricted access to public data for independent researchers, limiting scrutiny of systemic risks such as misinformation and illegal content, a key DSA requirement.