AI: Helping Humans Be Stupid
March 9, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “Scientists Warn Fake Research Is Spreading Faster Than Real Science.” The write up contained no surprises. Humans love short cuts, convenience, and cute ways to snooker an advantage. The write up presents what I thought was obvious as an important insight. Wow.
The Science Daily reports:
A new study from Northwestern University warns that coordinated scientific fraud is becoming increasingly common. From fabricated data to purchased authorships and paid citations, researchers say organized groups are manipulating the academic publishing system.
I have mentioned in my assorted writings that Dr. Eugene Garfield, the fellow who made citations an indicator of importance, knew that the system would be gamed. He was correct. It is trivial to get colleagues, friends, graduate students, and Fiverr.com workers to pump, reference, and backlink to benefit a person, a company, or an idea. (I provide an example of a publicly traded company flooding the zone with shaped messages in this article.)

The “Scientists Warn…” article points out:
…fraudulent studies are now appearing at a faster rate than legitimate scientific publications.
What does this mean for smart software? Answer: It will not only hallucinate; it will output incorrect information. Do you want your doctor to trust an AI to diagnose what’s wrong with your child? How about an AI to figure out the chemotherapy doses for your cancer-ridden mom? Do you want to be admitted to graduate school by an AI? Of course you don’t, but you will have little say in the matter.
AI is going to operate just like the helpful bots in the Telegram platform or the add-ins available in the Claude marketplace. Unless one takes special care, those software daemons are just going to do their thing and use fake information. Think about that when you ponder the implications of your retirement savings being invested in a company pumping out shaped information to paint a very rosy investment picture.
Is a single scientist going rogue? Nah. The Science Daily story says:
…the researchers identified coordinated operations involving paper mills, brokers and compromised journals. Paper mills function like production lines for academic manuscripts. They produce large numbers of papers and sell them to researchers who want to increase their publication record quickly. These manuscripts often contain fabricated data, manipulated or stolen images, plagiarized text and sometimes claims that are scientifically impossible.
Can the scientific, technical, and medical professional publishers fix the problem in their peer-reviewed publications? I suppose but there are several hurdles:
- Money. Professional publishers don’t want to invest in what is a black-hole problem.
- Authors. Why stop? If a topic is sufficiently narrow, the only person who can identify a fake is the graduate student who made up the data in the first place. Example: the Harvard ethics professor who made up information for an ethics paper.
- Readers. Humans read less and less, and fewer humans appear to read critically. Smart software companies don’t read; they process, synthesize, and spit out information. Readers are not good at spotting fake data, whether writ large (the economy is great) or small (details about the DNA of the Etruscans).
I want to suggest a fix that almost no one on the planet will be interested in pursuing. Ready or not, here’s my recipe:
- Take learning seriously.
- Read critically, look for anomalies and discrepancies, then check them.
- Do this throughout life.
- Demonstrate this approach as part of the furniture of life.
Spoiler: I estimate one percent of the people in the US will follow this recipe. I think the tech bros want sheeple, not people who question.
Stephen E Arnold, March 9, 2026