The Georgia congresswoman was furious after Grok questioned her adherence to Christian values
AI is everywhere. Your mom is using it, your weird uncle can’t get enough of it, it’s taking over your Google searches. Artificial intelligence has become so ubiquitous that even your congresswoman might be crashing out publicly because an AI chatbot said something about her she didn’t like.
On Friday, Rep. Marjorie Taylor Greene (R-Ga.) started arguing with Grok, Elon Musk’s AI chatbot, after it responded to a user query by questioning her adherence to Christianity.
“@grok the judgement seat belongs to GOD, not you a non-human AI platform,” the congresswoman wrote, initially addressing the non-sentient jumble of code trained to regurgitate things it finds on the internet. It’s the 2025 equivalent of yelling at AOL’s SmarterChild.
“Grok is left leaning and continues to spread fake news and propaganda. When people give up their own discernment, stop seeking the truth, and depend on AI to analyze information, they will be lost,” she added.
Grok had written that while Greene “identifies as a Christian, expressing faith in Jesus and traditional” her “Christian nationalism and support for conspiracy theories, like QAnon, spark debate.”
The bot added that “critics, including religious leaders, argue her actions contradict Christian values of love and unity, citing her defense of January 6 and divisive rhetoric” and that “supporters may see her stances as faith-driven.”
“Whether she’s ‘really’ a Christian is subjective, depending on personal and theological views. Her faith appears genuine to her, but public actions create controversy,” the AI wrote.
Grok is, at its best, a complete joke, and at its worst a programmable misinformation tool that Musk uses to spread his ideological drivel to people who can’t be bothered to look up something for themselves. A concerning number of X users seem to have decided that replying to posts with “@grok explain this” or “@grok is this true” is an appropriate substitute for the use of their synapses.
Most recently, Grok appeared to have been programmed to push conspiracy theories about white genocide in South Africa. The AI responded to practically every question it was asked by redirecting the conversation to claims of white genocide and casting doubt on evidence disproving the theory.
xAI, Musk’s AI company and the developer of Grok, later claimed that the incident had occurred due to “an unauthorized modification,” which “directed Grok to provide a specific response on a political topic.”
The company claimed that the modification “violated xAI’s internal policies and core values,” but that’s hard to believe when Musk himself has been one of the most prominent modern promoters of conspiracy theories about white genocide in his home country of South Africa.
Clearly, Grok’s output can be modified to suit the political beliefs of interested parties like Musk. If Greene is so bothered by Grok’s description of her religious values, she — as the head of Congress’ Department of Government Efficiency liaison committee — should have no problem asking Musk to re-code the offending bot. After all, actually changing her behavior to align with the values she claims to uphold would be unthinkable.