Elon Musk’s chatbot Grok went on a hateful tirade earlier this month. The AI-powered account praised Hitler and posted a series of antisemitic comments on X, the digital platform also owned by Musk. X’s CEO, Linda Yaccarino, resigned the next day – though it’s unclear whether her exit was directly related to the bot’s rant.
Posting on X, the Anti-Defamation League (ADL) said Grok’s behavior was “irresponsible, dangerous and antisemitic”. xAI and the Grok account on X apologized for the incident, but days later released two chatbot “companions”, including an NSFW anime avatar available to children and a red panda character built to issue crude insults.
Musk, meanwhile, chalked Grok’s behavior up to it being “too eager to please” and to respond to user-generated prompts. He said that the issue had been fixed. US politicians and platform advertisers said little, in contrast to their responses to similar incidents on X in the past. Their silence and Musk’s downplaying of the event speak louder than any public denouncement of the chatbot’s actions from the powers that be would have.
The relative quiet at the top is illustrative of two things. First, Grok’s behavior is not surprising. At this point, it’s predictable. Second, little about Grok, X, or xAI is likely to change in the short term.
Grok sharing hateful content is Grok working precisely as it was designed to. Before the inflammatory chatbot was “born” and launched on X in late 2023, Musk spoke about the need for AI alternatives to ChatGPT, which he deemed too politically correct. He has consistently referred to his anti-woke AI as “TruthGPT”. Grok was deliberately built to be provocative – and “to tell it like it is”.
Grok is a fitting representation of the current culture of X and, more broadly, of social media and those at its reins. In the years since Musk purchased the internet platform, originally named Twitter, many have pointed out that it has devolved into a space rife with “racism, misogyny, and lies”. It’s also full of spam and scams.
Shortly after the platform changed names and hands, most of the people behind content moderation and platform trust and safety were unceremoniously fired. Extremists and conspiracy theorists who had been deplatformed from Twitter were reinstated on X. In late 2024, the social media organization’s first transparency report in years revealed serious problems with this new laissez-faire approach, including unsettling new instances of child exploitation.
According to Musk, Grok was built to seek and reveal truths beyond the supposedly sanitized content seen on competing generative AI systems or prior social media platforms. But what Grok actually does is tell the “truth” of the current hands-off, malignant version of X, and of other social media platforms and tech leadership. Since Musk’s acquisition of X – and the deregulation of content that followed – other platforms have followed suit.
These shifts follow an executive order and other moves from the Trump administration aimed at curtailing “censorship” through digital content moderation and, seemingly, the collaborative study of social media propaganda. Meanwhile, social media and AI companies are getting more cozy with Washington: xAI just signed a $200m contract with the Pentagon to provide “Grok for government”.
In past research, I’ve found that billionaires’ and global political leaders’ claims about striking down digital media censorship and preserving free speech online are often suspect at best. At worst, and closer to what I’ve encountered in my analyses, such claims are self-serving for those in power. These figures not only own a significant stake in the digital information environment; they have also purposefully and steadily cultivated the online space to be favorable to their goals and ideas. They seek to artificially control what trends, and therefore what people perceive as popular behavior, in order to make such statements or actions seem acceptable.
But an online world where hate, spam, and scams run rampant is only one version of what the internet and digital media technology can be. The opposite, as Musk and others in Silicon Valley and Washington have rightfully pointed out, is an overtly censored online space. China’s internet and social media platforms, for instance, have been tightly controlled by the government from the outset.
We do not have to accept either extreme. Empirical research and lessons from recent history show us that we can indeed have healthier, more connective communication tools. We can have social media platforms that are sensibly and systematically moderated from both the top down and the bottom up.
But to have such spaces we must demand better, more thoughtful content moderation and tool design from technologists and policymakers. We must also break big tech’s stranglehold on innovation in the social media space. Too much power online is consolidated in the hands of too few companies and individuals. We need new digital platforms that genuinely center human rights, pluralism, and democratic discourse, as well as better policies that allow for such an experience.