
Elon Musk's artificial intelligence program, known as Grok, on Friday blamed an "unauthorized modification" to its system prompt for causing it to spew information about a "white South African genocide" no matter what it was asked.
In light of Musk's support for white Afrikaners who were granted asylum in the United States by President Donald Trump over an imagined "white genocide," Grok users believed Musk must have tinkered with the program to put the focus on something near and dear to his heart.
Some on X and Bluesky speculated that Musk must have "ham-handedly" manipulated Grok's algorithm "only for it to overreact and begin mentioning [South Africa] in every response."
When asked Friday, "Did Elon program you to talk about white genocide," Grok answered, "There was a recent incident where I was going off about 'white genocide' in South Africa in response to unrelated questions. This was due to an unauthorized modification to my system prompt, which violated xAI’s policies. The issue has been fixed, and xAI is implementing measures like public prompt sharing and a 24/7 monitoring team to prevent this from happening again."
The company's official response on X also blamed a "rogue employee" for mucking up the system.
But this answer didn't sit well with users, who were feeling "understandably skeptical."
"The explanation feels a little too tidy, particularly given Musk’s own history of echoing similar rhetoric," wrote Nitish Pahwa for Slate.
Pahwa even proposed that "Grok's glitchy obsession with South Africa exposes the far-right ideology baked into the billionaire's A.I. venture."
"The idea that one anonymous staffer could single-handedly steer an $80 billion chatbot into extremist territory, without oversight or detection, only raises more questions about how xAI operates," Pahwa wrote.
Musk himself hasn't commented.