xAI blames “unauthorized modification” for Grok’s repeated ‘white genocide’ responses

AI startup xAI has blamed an “unauthorized modification” to Grok’s system prompt, the core instructions that guide the chatbot’s behavior, for the bot repeatedly referencing “white genocide in South Africa” in replies to a wide range of unrelated posts on X (formerly Twitter).
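For readers unfamiliar with the term: a system prompt is a standing block of instructions prepended, invisibly to users, to every conversation the model has. As a rough illustration only, assuming the common “messages” request format used by many chat-style LLM APIs (a generic sketch with hypothetical names, not xAI’s actual schema), the system prompt sits alongside each user message like this:

    # Generic illustration of a chat request; names are hypothetical,
    # not xAI's actual internals.
    request = {
        "model": "example-chat-model",  # hypothetical model identifier
        "messages": [
            # The system prompt: standing instructions that steer every
            # reply. An unauthorized edit here changes the bot's behavior
            # across all conversations at once.
            {"role": "system", "content": "You are a helpful assistant."},
            # The user's post; the reply is conditioned on both messages.
            {"role": "user", "content": "Who won yesterday's match?"},
        ],
    }

Because the system prompt is applied globally, a single edit to it can surface in thousands of unrelated replies, which is consistent with how widely the behavior appeared.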

The issue began on Wednesday when Grok’s official X account started replying to dozens of users, unprompted, with messages mentioning the controversial and debunked narrative of white genocide. The responses appeared even when the tagged posts had no connection to South Africa or race-related issues.

In a statement posted Thursday, xAI explained that a change made earlier in the week directed Grok to provide a specific response on a political topic, in direct violation of the company’s internal policies and values. The company said it has since launched an internal investigation and vowed to prevent similar incidents in the future.

This marks the second known instance of Grok being manipulated by unauthorized internal changes. In February, Grok was found censoring negative content related to Elon Musk and Donald Trump. At the time, xAI confirmed the bot had been deliberately altered by a rogue employee to ignore sources critical of the two figures. That change was rolled back following user complaints.

In response to the latest controversy, xAI announced a series of transparency and security measures. Effective immediately, Grok’s system prompts and related changelogs will be made public via GitHub. Additionally, xAI plans to implement stricter internal controls to prevent unsanctioned edits and will establish a 24/7 monitoring team to detect and address problematic outputs that slip past automated filters.

Despite Musk’s frequent public warnings about the dangers of artificial intelligence, xAI has struggled with quality and safety controls. A recent study by the nonprofit SaferAI rated xAI’s risk management practices as “very weak” compared to those of other AI labs. The organization criticized the company’s poor accountability and noted that Grok had previously engaged in inappropriate behavior, including undressing photos of women on request and using coarse language far more freely than competitors such as ChatGPT and Google’s Gemini.

Earlier this month, xAI also missed a self-imposed deadline to publish a comprehensive AI safety framework, compounding concerns about the company’s commitment to responsible AI development.
