A Grok Crock: That Dog Ate My Homework
May 29, 2025
Just the dinobaby operating without Copilot or its ilk.
I think I have heard Grok (a unit of xAI, I think) explain that its outputs have been the result of a dog eating the code or whatever. I want to document these Grok Crocks. Perhaps I will put them in a Grok Pot and produce a list of recipes suitable for middle school and high school students.
The most recent example of “something just happened” appears in “Grok Says It’s ‘Skeptical’ about Holocaust Death Toll, Then Blames Programming Error.” Does this mean that smart software is programming Grok? If so, the explanation should be worded, “Grok hallucinates.” If a human wizard made a programming error, then a statement that quality control will become Job One is in order. That worked for Microsoft until Copilot became the go-to task.
The cited article stated:
Grok said this response was “not intentional denial” and instead blamed it on “a May 14, 2025, programming error.” “An unauthorized change caused Grok to question mainstream narratives, including the Holocaust’s 6 million death toll, sparking controversy,” the chatbot said. Grok said it “now aligns with historical consensus” but continued to insist there was “academic debate on exact figures, which is true but was misinterpreted.” The “unauthorized change” that Grok referred to was presumably the one xAI had already blamed earlier in the week for the chatbot’s repeated insistence on mentioning “white genocide” (a conspiracy theory promoted by X and xAI owner Elon Musk), even when asked about completely unrelated subjects.
I am going to steer clear of the legality of these statements and the political shadows these Grok outputs cast. Instead, let me offer a few observations:
- I use a number of large language models. I have used Grok exactly twice. The outputs had nothing of interest for me. I asked, “Can you cite X.com messages?” The system said, “Nope.” I tried again after Grok 3 became available. Same answer. Hasta la vista, Grok.
- The training data, the fancy math, and the algorithms determine the output. Since current LLMs rely on Google’s big idea, one would expect the outputs to be similar. Outlier outputs like these alleged Grokings are a bit of a surprise. Perhaps someone at xAI could explain exactly why these outputs are happening. I know dogs could eat homework. The event is highly unlikely in my experience, although I had a dog that threw up on the typewriter I used to write a thesis.
- I am a suspicious person. Grok makes me suspicious. I am not sure marketing and smarmy talk can reduce my anxiety about Grok providing outlier content to middle school, high school, college, and “I don’t care” adults. Weaponized information is, in my opinion, just that: a weapon. Dangerous stuff.
Net net: Is the dog eating the homework one of the Tesla robots? If so, speak with the developers, please. An alternative would be to use Claude 3.7 or Gemini to double-check Grok’s programming.
Stephen E Arnold, May 29, 2025