The AI Problem: Getting Left Behind
March 4, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
After lots of clicks and learning that key features were grayed out, I was able to read “Redefining the Software Engineering Profession for AI.” The write up explains a corollary to “home alone”; that is, being left behind.
I waded through examples of AI output fixed up because a human smarter than the AI spotted mistakes. Are there mistakes in AI output? If you ask a whiz kid at a big tech outfit (I shall not name names), the answer is, “Look at the score on this benchmark.” If you ask someone who knows a specific topic, you may hear, “Hey, you have to double check this stuff.”

Thanks, Venice.ai. Good enough.
And there is a lot of stuff to check. That’s the main idea lurking behind the fancy lingo and the screenshots. The write up finally says:
Generative AI currently acts as seniority-biased technological change: It disproportionately amplifies engineers who already possess systems judgment, like taste for architecture, debugging under uncertainty, and operational intuition.
As a dinobaby, I am usually wrong by default. For me, however, this means that a person who knows something cold is going to be in great demand. Why? The “older and more informed humans” can spot the AI mistakes. This is definitely good for senior types. The write up focuses on computer programming, but I think the observation applies to other disciplines as well. I want to point out that the softer the user’s field, the less likely errors will be flagged and, one hopes, corrected. Question: Why? Answer: Programming works or it doesn’t. A squishy discipline like social science has more flexibility. Programming is brittle; explaining why a young female is unhappy is like working with clay.
What’s the fix? I think the big idea is to go back to apprentice-type programs. A younger, less experienced programmer works with a senior, more skilled programmer. Somehow the knowledge of the senior diffuses to the junior. At least that’s my take. Does it work? Sure, for skilled and adept less seasoned programmers. But we live in a multi-tasking, accelerationist environment. Will it work? Probably, but some real-life data are needed.
The write up concludes with:
The future of software engineering will be defined not by the volume of code AI can generate but by how effectively humans learn, reason, and mature alongside these systems. Investing in early-in-career developers through deliberate preceptorship ensures today’s expertise becomes tomorrow’s intuition. In balancing automation with apprenticeship, we preserve the enduring vitality of the software engineering profession.
How will this play out for the TikTok type of programmer, financial engineer, or tax expert? Answer: Outputs are good enough. Look at these benchmark scores.
Stephen E Arnold, March 4, 2026