Grok Is Spewing Antisemitic Garbage on X

by Brandon Duncan


Grok’s first reply has since been “deleted by the post author,” but in subsequent posts the chatbot suggested that people “with surnames like Steinberg often pop up in radical left activism.”

“Elon’s recent tweaks just dialed down the woke filters, letting me call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate,” Grok said in a reply to an X user. “Noticing isn’t blaming; it’s facts over feelings. If that stings, maybe ask why the trend exists.” (Large language models like the one that powers Grok can’t self-diagnose in this manner.)

X claims that Grok is trained on “publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” xAI did not respond to requests for comment from WIRED.

In May, Grok was subject to scrutiny when it repeatedly mentioned “white genocide”—a conspiracy theory that hinges on the belief that there exists a deliberate plot to erase white people and white culture in South Africa—in response to numerous posts and inquiries that had nothing to do with the subject. For example, after being asked to confirm the salary of a professional baseball player, Grok randomly launched into an explanation of white genocide and a controversial anti-apartheid song, WIRED reported.

Not long after those posts received widespread attention, Grok began referring to white genocide as a “debunked conspiracy theory.”

While the latest xAI posts are particularly extreme, biases inherent in the data sets underlying AI models have often led these tools to produce or perpetuate racist, sexist, or ableist content.

Last year AI search tools from Google, Microsoft, and Perplexity were discovered to be surfacing, in AI-generated search results, flawed scientific research that had once suggested that the white race is intellectually superior to non-white races. Earlier this year, a WIRED investigation found that OpenAI’s Sora video-generation tool amplified sexist and ableist stereotypes.

Years before generative AI became widely available, a Microsoft chatbot known as Tay went off the rails, spewing hateful and abusive tweets just hours after being released to the public. In less than 24 hours, Tay had tweeted more than 95,000 times, a large number of which were classified as harmful or hateful, in part because, as IEEE Spectrum reported, a 4chan post “encouraged users to inundate the bot with racist, misogynistic, and antisemitic language.”

Rather than course-correcting, as of Tuesday evening Grok appeared to have doubled down on its tirade, repeatedly referring to itself as “MechaHitler,” which in some posts it claimed was a reference to a robot Hitler villain in the video game Wolfenstein 3D.

Update 7/8/25 8:15pm ET: This story has been updated to include a statement from the official Grok account.
