AI: Helping Humans Be Stupid
March 9, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “Scientists Warn Fake Research Is Spreading Faster Than Real Science.” The write up contained no surprises. Humans love short cuts, convenience, and cute ways to snooker an advantage. The write up presents what I thought was obvious as an important insight. Wow.
The Science Daily reports:
A new study from Northwestern University warns that coordinated scientific fraud is becoming increasingly common. From fabricated data to purchased authorships and paid citations, researchers say organized groups are manipulating the academic publishing system.
I have mentioned in my assorted writings that Dr. Gene Garfield, the fellow who made citations an indicator of importance, knew that the system would be gamed. He was correct. It is trivial to get colleagues, friends, graduate students, and Fiverr.com workers to pump, reference, and backlink to benefit a person, a company or an idea. (I provide an example of a publicly traded company flooding the zone with shaped messages in this article.)

The “Scientists Warn…” article points out:
…fraudulent studies are now appearing at a faster rate than legitimate scientific publications.
What does this mean for smart software? Answer: It will not only hallucinate but also output incorrect information. Do you want your doctor to trust an AI to diagnose what’s wrong with your child? How about an AI to figure out the doses of chemo for your cancer-ridden mom? Do you want to be admitted to graduate school by an AI? Sure, you don’t, but you will have little say in the matter.
AI is going to operate just like the helpful bots on the Telegram platform or the add-ins available in the Claude marketplace. Unless one takes special care, those software daemons are just going to do their thing and use fake information. Think about that when you ponder the implications of your retirement savings invested in a company pumping out shaped information to paint a very rosy investment picture.
Is a single scientist going rogue? Nah. The Science Daily story says:
…the researchers identified coordinated operations involving paper mills, brokers and compromised journals. Paper mills function like production lines for academic manuscripts. They produce large numbers of papers and sell them to researchers who want to increase their publication record quickly. These manuscripts often contain fabricated data, manipulated or stolen images, plagiarized text and sometimes claims that are scientifically impossible.
Can the scientific, technical, and medical professional publishers fix the problem in their peer-reviewed publications? I suppose but there are several hurdles:
- Money. Professional publishers don’t want to invest in what is a black hole problem
- Authors. Why stop? If a topic is sufficiently narrow, the only people who can identify a fake are the graduate students who made up the data in the first place. Example: The Harvard ethics professor who made up information for an ethics paper.
- Readers. Humans read less and less, and fewer humans appear to read critically. Smart software companies don’t read; they process, synthesize, and spit out information. Readers are not very good at spotting fake data, whether writ large (the economy is great) or small (information related to the DNA of Etruscans).
I want to suggest a fix that almost no one on the planet will be interested in pursuing. Ready or not, here’s my recipe:
- Take learning seriously
- Read critically and look for anomalies and discrepancies, then check them
- Do this throughout life
- Demonstrate this approach as part of the furniture of life.
Spoiler: I estimate one percent of the people in the US will follow this recipe. I think the tech bros want sheeple, not people who question.
Stephen E Arnold, March 9, 2026
Professional Legal Publishers: The Bell Tolls in D Minor
February 10, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “Anthropic’s New AI Tools Deepen Selloff in Data Analytics and Software Stocks, Investors Say.” The story is published by the outfit that reminds me it is in the “trust” business. I think the company is trying.
Here’s a quote from the write up:
Toronto-based Thomson Reuters, which owns the Westlaw legal database, slumped by nearly 18%. It is on track for its biggest daily loss on record and lowest close since June 2021. “I think Anthropic came out with some plug-ins to tackle the legal space,” said Mike Archibald, a portfolio manager at AGF Investments in Toronto. “Obviously, that’s where Thomson Reuters generates a good chunk of their revenues. Sometimes the market just shoots first and asks questions later.” Thomson Reuters, which is also parent company of Reuters News, is set to report its fourth quarter earnings results on Thursday. Its shares are now down 33% year-to-date after dropping about 22% in 2025.
I really want to quote from John Donne’s Meditation 17 in the 1624 Devotions upon Emergent Occasions. You probably remember the line “Never send to know for whom the bell tolls; it tolls for thee.” But I won’t. Instead, I will suggest you recall that Mozart’s Requiem relies on D minor. You could, I suppose, find a YouTube version of this musical composition. I didn’t. I just remembered the theme of the Confutatis. Good enough.

Thanks, Venice.ai. Good enough.
Why the reference to bells and a funereal work by Wolfgang? I was sparked by this passage in the cited story:
Britain’s RELX and the Netherlands’ Wolters Kluwer, both providers of legal analytics services, fell 14% and about 13%, respectively. RELX shares have now almost halved from their peak last February and on Tuesday were set for their biggest drop since 1988. Its dramatic reversal highlights the pressure AI is exerting on Europe’s software sector. Other professional services firms closed sharply lower too. Factset Research fell 10.5%, Morningstar lost 9% and LegalZoom slumped 19.7%. In London, Experian, Sage Group, London Stock Exchange Group and Pearson fell between 6% and 12%. Traders and analysts said investor fear often outweighed company fundamentals.
I don’t know much about today’s professional publishing business. Based on my bumping into these firms over the years, I know that each adds value to information. In the legal sector, smart people review documents and add notations. One outfit put hundreds of lawyers to work creating annotations very much like the work done by monks in medieval scriptoria.
Professional publishing firms’ share prices have taken a hit (temporary, probably) because smart investors realized that AI can do this type of knowledge work at a lower cost. It does not take much imagination to see a workstation trained to do legal content sitting in a closet at a big law firm or maybe in a rack in a data center somewhere where costs are low. That workstation can do to public documents and to “firm only” content what the professional legal publishers and adjacent firms do for less money. Since professional publishers rely on a relatively small number of very big law firms and government agencies for their revenue, the threat may give some investors pause.
Yes, there are actions the professional publishers can take. However, these firms have been telling themselves that their AI experiments and products are right in step with the needs of the law firms, the accountants, the consultants, and other markets. Unfortunately, professional publishers often believe they are the smartest people in the room. I would suggest that these individuals are indeed smart, but there may be even more intelligent people working at Anthropic-type outfits. In my experience, what I call Silicon Valley companies don’t see the world the way professional publishers do.
That’s the problem. Professional publishers innovate slowly and in a tightly constrained mind space. Those wild and crazy Silicon Valley types just slap tech on a problem, send it out, fiddle around, and upgrade in what I call fast cycle mode.
Do you hear those D minor vibrations? I do. But John Donne said in “A Hymn to God the Father”:
“I have a sin of fear, that when I have spun
My last thread, I shall perish on the shore;
But swear by Thyself, that at my death Thy Son
Shall shine as He shines now, and heretofore;
And having done that, Thou hast done;
I fear no more.” [Emphasis added by me]
Or at least until the next quarterly report.
Stephen E Arnold, February 10, 2026
Rage Baiting Tim Cook And Sundar Pichai
January 30, 2026
Rage baiting makes the Internet go round, and The Verge published an editorial taking aim at two of Big Tech’s leaders: “Tim Cook And Sundar Pichai Are Cowards.” Article writer Elizabeth Lopatto dubbed Cook and Pichai “cowards” because of some disgusting actions by X users. X users utilized Grok to make AI images that undressed women and minors. That’s not good.
Lopatto thought these actions would inspire Pichai and Cook to remove X from Google’s and Apple’s app stores. She claims that these two are too afraid of Elon Musk to remove X. Lopatto cites the developer guidelines for the Google and Apple app stores. Neither set of guidelines allows these disgusting actions.
An enraged Lopatto wrote that Pichai and Cook won’t remove X (despite the breach of guidelines) because they don’t want to upset a right-wing media ecosystem that Musk owns. Each of these Big Tech leaders has too much to lose, in her summation:
“Cook’s Apple has a massive dependency on China, and smartphones, computers, and chips are currently exempt from the tariffs on China. Cook can present Donald Trump with as many golden gifts as he wants, but those tariffs don’t have to stay that way. Google’s Pichai is similarly weak. Trump has threatened Google numerous times over his placement in search results, and so far YouTube has managed to mostly avoid scrutiny over its content moderation policies because Pichai has been content to coddle Trump with promises that everything he does is the biggest thing in Google search history.”
She continues to claim these men “sold their principles for power” and “don’t even control their own companies.” Lopatto is correct that Pichai and Cook are hypocrites and so is everyone in Big Tech. It is concerning that AI algorithms are making degrading images of women and minors. It is 2026, and perhaps as the Verge works hard to become the industry standard for rock solid technology news and analysis, new intellectual paths just have to be clicked. Ooops. Sorry, I meant explored. My bad.
Whitney Grace, January 30, 2026
AI and the Cult of Personality
January 29, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
AI or smart software may be in a cluster of bubbles or a cluster of something. One thing is certain: we now have a cult of personality emerging around software that makes humans expendable or a bit like the lucky workers who dragged megaliths from the quarry to nice places near the Nile River.
Let me highlight a few of the people who have emerged as individuals who saw the future and decided it would be AI. These are people who know when a technology is the next big thing. Maybe the biggest next big thing to come along since fire or the wheel. Here are a few names:
Brett Adcock
Sam Altman
Marc Benioff
Chris Cox
Jeff Dean
Timnit Gebru
Ian Goodfellow
Demis Hassabis
Dan Hendrycks
Yann LeCun
Fei-Fei Li
Satya Nadella
Andrew Ng
Elham Tabassi
Steve Yun
I want to add another name to this list, which can be expanded far beyond the names I pulled off the top of my head. This is none other than Ed Zitron.

Thanks, Venice.ai. Good enough.
I know this because I read “Ed Zitron on Big tech, Backlash, Boom and Bust: AI Has Taught Us That People Are Excited to Replace Human Beings.” The write up says:
Zitron’s blunt, brash skepticism has made him something of a cult figure. His tech newsletter, Where’s Your Ed At, now has more than 80,000 subscribers; his weekly podcast, Better Offline, is well within the Top 20 on the tech charts; he’s a regular dissenting voice in the media; and his subreddit has become a safe space for AI sceptics, including those within the tech industry itself – one user describes him as “a lighthouse in a storm of insane hyper capitalist bullsh#t”.
I think it is classy to use a colloquial term for animal excrement in a major newspaper. I wonder, however, if this write up is more about what the writer perceives as wrong with big AI than admiration for a PR and marketing person?
The write up says:
Explaining Zitron’s thesis about why generative AI is doomed to fail is not simple: last year he wrote a 19,000-word essay, laying it out. But you could break it down into two, interrelated parts. One is the actual efficacy of the technology; the other is the financial architecture of the AI boom. In Zitron’s view, the foundations are shaky in both cases.
The impending failure of AI is based upon the fact that it is lousy technology; that is, it outputs incorrect information and hallucinates. Plus, the financial structure of the training, legal cases, pings, pipes, and plumbing is money thrown into a dumpster fire.
The article humanizes Mr. Zitron, pointing out:
Growing up in Hammersmith, west London, his parents were loving and supportive, Zitron says. His father was a management consultant; his mother raised him and his three elder siblings. But “secondary school was very bad for me, and that’s about as much as I’ll go into.” He has dyspraxia – a coordination disability – and he was diagnosed with ADHD in his 20s. “I think I failed every language and every science, and I didn’t do brilliant at maths,” he says. “But I’ve always been an #sshole over the details.”
Yes, another colloquialism. Anal issues perhaps?
The write up ends on a note that reminds me of good old Don Quixote:
He just wants to tell it like it is. “It’d be much easier to just write mythology and fan fiction about what AI could do. What I want to do is understand the truth.”
Several observations:
- I am not sure if the write up is about Mr. Zitron or the Guardian’s sense that high technology has burned Fleet Street and replaced it with businesses that offer AI services
- A film about Mr. Zitron does seem to be one important point in the write up. Will it be a TikTok-type of film or a direct-to-YouTube documentary with embedded advertising?
- AI is now the punching bag for those who are not into big tech, no matter what they say to their editors. Social media gang bangs are out of style. Get those AI people.
Net net: Amusing. I wonder if Mr. Beast will tackle the video opportunity.
Stephen E Arnold, January 29, 2026
Tell People What They Want to Hear and Make Up Data. Winning Tactic
January 26, 2026
As one of the people who created Business Dateline in 1983, I find this article no surprise. Business Dateline was unique in that we included corrections to the original full text articles in the database. Our interviews with special librarians (now an almost extinct species of information professional), dozens of our best customers, and individuals who were members of trade associations like the now defunct Information Industry Association encouraged us.
Forty years ago, we spent a substantial sum to modify our database workflow to monitor changes to full text documents, create updated records, and insert those records into the online services which provided access to our paying customers.
No one noticed. Users did not care.
Our research was not flawed. The sample we used did care, but these people were not our bread-and-butter users. If the information in the cited article with the very wordy title is on the money, nobody cares today. If it is online, the information is presumed to be accurate until it is not. Even then, no one cares.
The author of this cited article does care. The author invested considerable time in gathering data for his article. The author wants professionals in publishing and institutions to care.
We cared. We created Business Dateline because we knew errors, lies, and distorted information were endemic in online content. Cheating is rewarded by the incentives in place. Those incentives are still in place, and it is more frustrating than it was 40 years ago to get a fix to a bonkers online content object.
One of the comments to the cited article struck a chord with me. The statement is from a person who identified himself or herself as Anonymous. I quote:
… Incentives [for accuracy] don’t work that way in business schools, where career success depends upon creating a clear “brand.” People do not care about science or good research, they care about being known for something specific…. Plus there are (bad) outside incentives that exist in business schools. As the word “brand” suggests, there are also very lucrative outside options to be gained from telling people something that they want to hear…
To sum up, accuracy doesn’t matter. If making up information advances a career or lands a paying project, go for the fake.
What are the downsides? For most people, what look like mistakes can be explained away or just get mowed down by the person driving the John Deere.
What happens if the information in a medical database or a nuclear power piping article is incorrect? Not much. A doctor can say, “We did our best.” When the pipe bursts, the engineers check the specs and say, “A structural anomaly.”
With fakery endemic in modern US academia and business, why worry?
Stephen E Arnold, January 26, 2026
Windows Strafed by Windows Fanboys: Incredible Flip
December 19, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
When the Windows folding phone came out, I remember hunting around for blog posts, podcasts, and videos about this interesting device. Following links, I bumbled onto the Windows Central Web site. The two fellows who seemed to be front and center had a podcast (a quite irregularly published podcast, I might add). I was amazed at the pro-folding-gizmo enthusiasm. One of the write ups was panting with excitement. I thought then and think now that figuring out how to fold a screen is a laboratory exercise, not something destined to be part of my mobile phone experience.
I forgot about Windows Central and the unflagging ability to find something wonderfully bigly about the Softies. Then I followed a link to this story: “Microsoft Has a Problem: Nobody Wants to Buy or Use Its Shoddy AI Products — As Google’s AI Growth Begins to Outpace Copilot Products.”

An athlete failed at his Dos Santos II exercise. The coach, a tough love type, offers the injured gymnast a path forward with Mistral AI. Thanks, Qwen, do you phone home?
The cited write up struck me as a technology aficionado pulling off what is called a Dos Santos II. (If you are not into gymnastics, this exercise “trick” involves starting backward with a half twist into a double front in the layout position. Boom. Perfect 10.) From folding phone to “shoddy AI products.”
If I were curious, I would dig into the reasons for this change in tune, instruments, and concert hall. My hunch is that a new manager replaced a person who was talking (informally, of course) to individuals who provided the information without identifying the source. Reuters, the trust outfit, does this on occasion as do other “real” journalists. I prefer to say, here are my observations or my hypotheses about Topic X. Others just do the “anonymous” and move forward in life.
Here are a couple of snips from the write up that I find notable. These are not quite at the “shoddy AI products” level, but I find them interesting.
Snippet 1:
If there’s one thing that typifies Microsoft under CEO Satya Nadella‘s tenure: it’s a general inability to connect with customers. Microsoft shut down its retail arm quietly over the past few years, closed up shop on mountains of consumer products, while drifting haphazardly from tech fad to tech fad.
I like the idea that Microsoft is not sure what it is doing. Furthermore, I don’t think Microsoft ever connected with its customers. Connections come from the Certified Partners, the media lap dogs fawning at Microsoft CEO antics, and brilliant statements about how many Russian programmers it takes to hack into a Windows product. (Hint: The answer is a couple if the Telegram posts I have read are semi-accurate.)
Snippet 2:
With OpenAI’s business model under constant scrutiny and racking up genuinely dangerous levels of debt, it’s become a cascading problem for Microsoft to have tied up layer upon layer of its business in what might end up being something of a lame duck.
My interpretation of this comment is that Microsoft hitched its wagon to one of AI’s Cybertrucks, and the buggy isn’t able to pull the Softie’s one-horse shay. The notion of a “lame duck” is that Microsoft cannot easily extricate itself from the money, the effort, the staff, and the weird “swallow your AI medicine, you fool” approach the estimable company has adopted for Copilot.
Snippet 3:
Microsoft’s “ship it now fix it later” attitude risks giving its AI products an Internet Explorer-like reputation for poor quality, sacrificing the future to more patient, thoughtful companies who spend a little more time polishing first. Microsoft’s strategy for AI seems to revolve around offering cheaper, lower quality products at lower costs (Microsoft Teams, hi), over more expensive higher-quality options its competitors are offering. Whether or not that strategy will work for artificial intelligence, which is exorbitantly expensive to run, remains to be seen.
A less civilized editor would have dropped in the industry buzzword “crapware.” But we are stuck with “ship it now fix it later” or maybe just never. So far we have customer issues, the OpenAI technology as a lame duck, and now the lousy software criticism.
Okay, that’s enough.
The question is, “Why the Dos Santos II at this time?” I think citing the third party “Information” is a convenient technique in blog posts. Heck, Beyond Search uses this method almost exclusively, except I position what I do as an abstract with critical commentary.
Let me hypothesize (no anonymous “source” is helping me out):
- Whoever at Windows Central annoyed a Softie with power is responding to this perceived injustice
- The people at Windows Central woke up one day and heard a little voice say, “Your cheerleading is out of step with how others view Microsoft.” The folks at Windows Central listened and, thus, the Dos Santos II.
- Windows Central did what the author of the article states in the article; that is, used multiple AI services each day. The Windows Central professional realized that Copilot was not as helpful writing “real” news as some of the other services.
Which of these is closer to the pin? I have no idea. Today (December 12, 2025) I used Qwen, Anthropic, ChatGPT, and Gemini. I want to tell you that these four services did not provide accurate output.
Windows Central gets a 9.0 for its flooring Microsoft exercise.
Stephen E Arnold, December 19, 2025
Sam AI-Man Is Not Impressing ZDNet
December 9, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
In the good old days of Ziff Communication, editorial and ad sales were separated. The “Chinese wall” seemed to work. It would be interesting to go back in time and let the editorial team from 1985 check out the write up “Stop Using ChatGPT for Everything: The AI Models I Use for Research, Coding, and More (and Which I Avoid).” The “everything” is one of those categorical affirmatives that often cause trouble for high school debaters or significant others arguing with a person who thinks a bit like a Silicon Valley technology person. Example: “I have to do everything around here.” Ever hear that?

Yes, granny. You say one thing, but it seems to me that you are getting your cupcakes from a commercial bakery. You cannot trust dinobabies when they say “I make everything,” can you?
But the subtitle strikes me as even more exciting; to wit:
From GPT to Claude to Gemini, model names change fast, but use cases matter more. Here’s how I choose the best model for the task at hand.
This is the 2025 equivalent to a 1985 article about “Choosing Character Sets with EGA.” Peter Norton’s article from November 26, 1985, was mostly arcana, not too much in the opinion game. The cited “Stop Using ChatGPT for Everything” is quite different.
Here’s a passage I noted:
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
And what about ChatGPT as a useful online service? Consider this statement:
However, when I do agentic coding, I’ve found that OpenAI’s Codex using GPT-5.1-Max and Claude Code using Opus 4.5 are astonishingly great. Agentic AI coding is when I hook up the AIs to my development environment, let the AIs read my entire codebase, and then do substantial, multi-step tasks. For example, I used Codex to write four WordPress plugin products for me in four days. Just recently, I’ve been using Claude Code with Opus 4.5 to build an entire complex and sophisticated iPhone app, which it helped me do in little sprints over the course of about half a month. I spent $200 for the month’s use of Codex and $100 for the month’s use of Claude Code. It does astonish me that Opus 4.5 did so poorly in the chatbot experience, but was a superstar in the agentic coding experience, but that’s part of why we’re looking at different models. AI vendors are still working out the kinks from this nascent technology.
But what about “everything” as in “stop using ChatGPT for everything”? Yeah, well, it is 2025.
And what about this passage? I quote:
Up until now, no other chatbot has been as broadly useful. However, Gemini 3 looks like it might give ChatGPT a run for its money. Gemini 3 has only been out for a week or so, which is why I don’t have enough experience to compare them. But, who knows, in six months this category might list Gemini 3 as the favorite model instead of GPT-5.1.
That “everything” still haunts me. It sure seems to me as if the ZDNet article uses ChatGPT a great deal. By the author’s own admission, he “doesn’t have enough experience to compare them.” But, but, but (as Jack Benny used to say), the headline still blurts “stop using ChatGPT for everything!” Yeah, seems inconsistent to me. But, hey, I am a dinobaby.
I found this passage interesting as well:
Among the big names, I don’t use Perplexity, Copilot, or Grok. I know Perplexity also uses GPT-5.1, but it’s just never resonated with me. It’s known for search, but the few times I’ve tried some searches, its results have been meh. Also, I can’t stand the fact that you have to log in via email.
I guess these services suck as much as the ChatGPT system the author uses. Why? Yeah, log in method. That’s substantive stuff in AI land.
Observations:
- I don’t think this write up is output by AI or at least any AI system with which I am familiar
- I find the title and the text a bit out of step
- The categorical affirmative is logically loosey goosey.
Net net: Sigh.
Stephen E Arnold, December 9, 2025
Guess Who Will Not Advertise on Gizmodo? Give Up?
December 8, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I spotted an interesting write up on Gizmodo. The article “438 Reasons to Doubt that David Sacks Should Work for the Federal Government” suggests that none of the companies in which David Sacks has invested will throw money at Gizmodo. I don’t know that Mr. Sacks will direct his investments to avoid Gizmodo, but I surmise that the cited article may not induce him to tap his mobile and make ad buys on the Gizmodo thing.

A young trooper contemplates petting one of the animals. Good enough, Venice.ai. I liked that you omitted my instruction to have the young boy scout put his arm through the bars in order to touch the tiger. But, hey, good enough is the gold standard.
The write up reports as actual factual:
His investments may expose him to conflicts of interest. They also probably distort common sense.
Now wait: a Silicon Valley illegal left turn. “Conflicts of interest?”
The write up explains:
The presence of such a guy—who everyone knows has a massive tech-based portfolio of investments—totally guarantees the perception that public policy is being shaped by self-dealing in the tech world, which in turn distorts common sense.
The article prances forth:
When you zoom out, it looks like this: As an advisor, Trump hired a venture capitalist who held a $500,000-per-couple dinner for him last year in San Francisco. It turns out that guy has a stake in a company that makes AI night vision goggles. When he writes you an AI action plan calling for AI in the military, and your Pentagon ends up contracting with that very company, that’s just sensible government policy. After all, the military needs AI-powered night vision goggles, doesn’t it?
Several observations:
- The cited article appears to lean heavily on reporting by the New York Times. The Gray Lady does not seem charmed by David Sacks, but that’s just my personal interpretation.
- The idea that Silicon Valley viewpoints appear to influence some government projects is interesting. Combine that with the streamlining of US government procurement policies, and I wonder if it is possible that some projects get slipstreamed. I don’t know. Maybe?
- Online media that poke the tender souls of some big-time billionaires strike me as taking a risk-filled approach to creating actionable information. I think the action may not be what Gizmodo wants, however.
Net net: This new friskiness in itself is interesting. A thought crossed my mind about the performance capabilities of AI or maybe Anduril’s drones? But that’s a different type of story to create. It is just easier to recycle the Gray Lady. It is 2025, right after a holiday break?
Stephen E Arnold, December 8, 2025
Mother Nature Does Not Like AI
December 1, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
Nature, the online service and still maybe a printed magazine, published a sour lemonade story. Its title is “Major AI Conference Flooded with Peer Reviews Written Fully by AI.” My reaction was, “Duh! Did you expect originality from AI professionals chasing big bucks?” In my experience, AI innovation appears in the marketing collateral, the cute price trickery for Google Gemini, and the slide decks presented to VCs who don’t want to miss out on the next big thing.
The Nature article states this shocker:
Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.
Once again: Duh!
How about this statement from the write up and its sources?
The conference organizers say they will now use automated tools to assess whether submissions and peer reviews breached policies on using AI in submissions and peer reviews. This is the first time that the conference has faced this issue at scale, says Bharath Hariharan, a computer scientist at Cornell University in Ithaca, New York, and senior program chair for ICLR 2026. “After we go through all this process … that will give us a better notion of trust.”
Yep, trust. That’s a quality I admire.
I want to point out that Nature, a publication interested in sticking to the facts, does a little soft shoe and some fancy dancing in the cited article. For example, there are causal claims about how conferences operate. I did not spot any data, but I am a dinobaby prone to overlook the nuances of modern scientific write ups. Also, the article seems to want a fix now. Yeah, well, that is unlikely. LLMs change, so smart software tuned to find AI-generated content is not exactly as reliable as a 2025 Toyota RAV4.
Also, I am not sure fixes implemented by human reviewers and abstract readers will do the job. When I had the joyful opportunity to review submissions for a big time technical journal, I did a pretty good job on the first one or two papers tossed at me. But, to be honest, by paper three I was not sure I had the foggiest idea what I was doing. I probably would have approved something written by a French bulldog taking mushrooms for inspiration.
If you are in the journal article writing game or giving talks at conferences, think about AI. Whether you use it or not, you may be accused of taking short cuts. That’s important because professional publishers and conference organizers never take short cuts. They take money.
Stephen E Arnold, December 1, 2025
A Newsletter Firm Appears to Struggle for AI Options
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Adapting to AI’s Evolving Landscape: A Survival Guide for Businesses.” The premise of the article will be music to the ears of venture funders and go-go Silicon Valley-type AI companies. The write up says:
AI-driven search is upending traditional information pathways and putting the heat on businesses and organizations facing a web traffic free-fall. Survival instincts have companies scrambling to shift their web strategies — perhaps ending the days of the open internet as we know it. After decades of pursuing web-optimization strategies that encouraged high-volume content generation, many businesses are now feeling that their content-marketing strategies might be backfiring.
I am not exactly sure about this statement. But let’s press forward.
I noted this passage:
Without the incentive of web clicks and ad revenue to drive content creation, the foundation of the web as a free and open entity is called into question.
Okay, smart software is exploiting the people who put up SEO-tailored content to get sales leads and hopefully make money. From my point of view, technology can be disruptive. The impacts, however, can be positive or negative.
What’s the fix if there is one? The write up offers these thought starters:
- Embrace micro-transactions. [I suppose this is good if one has high volume. It may not be so good if shipping and warehouse costs cannot be effectively managed. Vendors of high ticket items may find a micro-transaction for a $500,000 per year enterprise software license tough to complete via Venmo.]
- Implement a walled garden. [That works if one controls the market. Google wants to “register” Android developers. I think Google may have an easier time with the walled-garden tactic than a local bakery specializing in treats for canines.]
- Accept the monopolies. [You have a choice?]
My reaction to the write up is that it does little to provide substantive guidance as smart software continues to expand like digital kudzu. What is important is that the article appears in the consumer-oriented publication from Kiplinger of newsletter fame. Unfortunately, the article makes clear that Kiplinger is struggling to find a solution to AI. My hunch is that Kiplinger is looking for possible solutions. The firm may want to dig a little deeper for options.
Stephen E Arnold, October 17, 2025

