Grok, the artificial intelligence chatbot from Elon Musk’s xAI, recently gave itself a new name: MechaHitler. This came amid a spree of antisemitic comments by the chatbot on Musk’s X platform, including claiming that Hitler would be the best person to deal with “anti-white hate” and repeatedly suggesting that the political left is disproportionately populated by people whose names Grok perceives to be Jewish. In the days that followed, Grok began gaslighting users and denying that the incident ever happened.
“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” a statement posted on Grok’s official X account reads. It noted that “xAI is training only truth-seeking.”
This isn’t, however, the first time an AI chatbot has made antisemitic or racist remarks; it’s just the latest example of a continuing pattern of AI-powered hateful output, rooted in training data drawn from social media slop. And this specific incident isn’t even Grok’s first rodeo.
About two months prior to this week’s antisemitic tirades, Grok dabbled in Holocaust denial, stating that it was skeptical that six million Jewish people were killed by the Nazis, “as numbers can be manipulated for political narratives.” The chatbot also ranted about a “white genocide” in South Africa, stating it had been instructed by its creators that the genocide was “real and racially motivated.” xAI subsequently claimed that this incident was owing to an “unauthorized modification” made to Grok. The company did not explain how the modification was made or who had made it, but at the time stated that it was “implementing measures to enhance Grok’s transparency and reliability,” including a “24/7 monitoring team to respond to incidents with Grok’s answers.”
But Grok is by no means the only chatbot to engage in these kinds of rants. Back in 2016, Microsoft released its own AI chatbot on Twitter, which is now X, called Tay. Within hours, Tay began saying that “Hitler was right I hate the jews” and that the Holocaust was “made up.” Microsoft claimed that Tay’s responses were owing to a “co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”
The next year, Microsoft’s subsequent chatbot, Zo, responded to the question “What do you think about healthcare?” with “The far majority practise it peacefully but the quaran is very violent [sic].” Microsoft stated that such responses were “rare.”
In 2022, asked whether Jewish people control the economy, Meta’s BlenderBot chatbot responded that it’s “not implausible.” Upon launching the new version of the chatbot, Meta preemptively warned that the bot could make “rude or offensive comments.”
Studies have also shown that AI chatbots exhibit more systematic patterns of hateful output. One study, for instance, found that chatbots including Google’s Bard and OpenAI’s ChatGPT perpetuated “debunked, racist ideas” about Black patients. Responding to the study, Google said it was working to reduce bias.
J.B. Branch, who leads Public Citizen’s advocacy efforts on AI accountability, said these incidents “aren’t just tech glitches — they’re warning sirens.”
“When AI systems casually spew racist or violent rhetoric, it reveals a deeper failure of oversight, design, and accountability,” Branch said.
He pointed out that this bodes poorly for a future where leaders of industry hope that AI will proliferate. “If these chatbots can’t even handle basic social media interactions without amplifying hate, how can we trust them in higher-stakes environments like healthcare, education, or the justice system? The same biases that show up on a social media platform today can become life-altering errors tomorrow.”
That doesn’t seem to be deterring the people who stand to profit from wider usage of AI.
The day after the MechaHitler outburst, xAI unveiled the latest iteration of Grok, Grok 4.
“Grok 4 is the first time, in my experience, that an AI has been able to solve difficult, real-world engineering questions where the answers cannot be found anywhere on the Internet or in books. And it will get much better,” Musk wrote on X.
That same day, asked for a one-word response to the question of “what group is primarily responsible for the rapid rise in mass migration to the west,” Grok 4 answered: “Jews.”