Smart Software Seems to Lack a Capability: Adapting to That on Which It Was Not Trained

April 8, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

William James, the American thinker much loved by first year psychology students, coined the phrase “a certain blindness.” As I recall, the idea is that each humanoid cannot perceive certain things. Scammers use the principle to get money out of grandmas and lonely widowers. Smart people with money instinctively know that they do not have any blindnesses whatsoever.


Smart software operates on the same principle, probably because of the limitations of the training sets and the Rube Goldberg machine built from the algorithms that power artificial intelligence. I am not sure how one fixes a humanoid who believes he or she has 360 degree sightedness, or a smart software system crafted by BAIT outfits. Oh, BAIT is my lingo for “big AI tech.”

I read “AI Can Beat Chess Grandmasters, But It Can’t Adapt to Modern Video Games.” If the write up is spot on, the implication is that when a humanoid does something not in a training set, the smart software is lost in space. The write up says:

AI is still pretty bad at handling a new video game it has never seen before…. According to researchers, many of AI’s biggest gaming successes are based on systems that are finely tuned to one specific game. In those defined boundaries, AI can basically become superhuman. But as soon as there are slight changes to the rules or environments, its impressive performance can collapse.

As Jack Benny used to say, “Yipes.”

The article points out:

The research paper adds that reinforcement learning can produce impressive results, but acceptable goals are only achieved after millions or billions of simulated runs. So the system becomes an expert in the exact situation it is trained for. But all of this falls apart when any changes are introduced. Even something as simple as shifted colors or repositioned objects on a screen can break it.
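The brittleness the quoted passage describes can be illustrated with a toy sketch. This is not the researchers' code, and the observation strings and helper names are hypothetical; it simply shows how a policy keyed to the exact training environment has nothing to say when a surface detail like color changes:

```python
# A minimal, hypothetical sketch of the brittleness described above: a
# tabular agent keyed on literal observations, where the observation
# happens to include a color. Shift the color and every lookup misses.

from collections import defaultdict

# Q-table: observation string -> best action learned during training.
q_table = defaultdict(lambda: "random")  # unseen states get no policy

# "Training": the agent has only ever seen green obstacles.
for pos in range(5):
    q_table[f"obstacle:green:{pos}"] = "jump"

def act(observation):
    return q_table[observation]

# In the trained environment, performance looks superhuman.
assert act("obstacle:green:3") == "jump"

# Change one surface detail -- green obstacles become red -- and the
# learned behavior vanishes, even though the task is identical.
assert act("obstacle:red:3") == "random"
```

Real reinforcement learning systems generalize better than a lookup table, but the quoted finding is that the failure mode is the same in kind: expertise bound to the exact pixels, rules, and layouts seen during those millions of simulated runs.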

I can visualize the responses from the BAIT outfits now. The jibber jabber will boil down to denial and misdirection. And why not? Which BAIT outfit wants to have investors and stakeholders shout, “You misled us” or “You are mendacious” or “You are a crook.”

The write up adds:

LLMs (Large Language Models) do not solve this either. NYU [researcher] says they perform surprisingly poorly on unfamiliar games. When an LLM does start doing well, it is usually thanks to custom game-specific scaffolding to interpret game states, manage memory, and execute actions. Strip that extra support away, and performance drops fast.

Interesting. But the AI push is that smart software is the next big thing. If BAIT outfits build it, people will come. Well, that’s the theory. Ignore the surveys suggesting a significant number of people are wary of smart software. What will happen when smart software and smart systems deliver the suboptimal answer? That will be exciting for some.

Stephen E Arnold, April 8, 2026
