Job Loss? No Big Deal Because We Have Theoretical Data

March 10, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I have been thinking about two “white papers” for a couple of days. Both of them are interesting for several reasons. First, each is based on assumptions that appear to be disconnected from what I call real life. Second, each is full of data, and while I am not usually skeptical of free outputs available on the public Internet, I am curious about the “facts” underpinning each write up. And, third, the authors of the write ups seem to have been unduly influenced by science fiction, Austrian economists, and the modern equivalent of a study group talking about the insights of Timothy Leary (may he rest in peace).

The first write up focused on Anthropic. That is the AI outfit that does not understand the “We pay. You obey” mentality of some governmental entities. This firm (allegedly trying to figure out how to navigate the real world) published “Economic Research. Labor Market Impacts of AI: A New Measure and Early Evidence.”

The main point of the write up is to make clear that AI does not cause people to lose jobs. The write up uses fancy words and fancy graphs to demonstrate that AI causes minimal employment disruption. If you like radar charts, here’s a nifty one. Tip: Where the points reach out, more workers can make use of AI:

[Image: radar chart of AI usage across occupations]

The chart is from Anthropic’s research team.

This means, in my opinion, that if one is a fry cook or a dressing room attendant, AI might not be where the unemployment action is. For other occupations like cartoonist or lawyer, AI is likely to be useful. But so far not too many lawyers have been terminated. Some financial services firms are not too interested in Anthropic’s theoretics. Morgan Stanley RIFed a non-theoretical 2,500 people, or three percent of its workforce. Maybe AI or cost cutting? I don’t know. Let’s assume that it is just good management and no AI. I wonder about the tales of woe I see on Reddit.com and LinkedIn.com from people who seem to be able to write clearly. These individuals cannot generate revenue from a “job” at a company or from T shirt sales.

After this exercise in 2026 economic research, the Anthropic wizards conclude that AI does not — at least yet — eliminate jobs. Believe it or not. Anyone who took Economics 101 at a one-horse college knows that economics is just a rock solid, really accurate social science. Translation: I have to read this craziness and feed it back to a person who seems to be distracted 24×7?

Anthropic is definitely trying to be smart by making clear, “Hey, we are here to help you, not take your job.” Some may believe this. I don’t. Why am I skeptical? This chart is a tip off to my thought process:

[Image: mirror image charts from the Anthropic write up]

When was the last time that statistically valid data spit out mirror image charts? I know. When the data are shaped. But that’s just my dinobaby skepticism applied to a commercial enterprise that does not understand the concept of making the customer happy and the implications of telling a customer “We pay. You obey.” Guess what. You lose your job. No AI required.

Now what about the second essay, titled “Software is Eating the Work”? This one is also about AI but not in the patently wacky way the Anthropic theoretical research write up is. From my perspective this essay focuses on the future or non-future for “programmers.” The professionals who used to write code will increasingly become:

“rollout providers” who can redesign processes, manage organizational change, and make AI systems trustworthy enough to take humans out of the loop.

The structure of the essay involves some variant of the thesis-antithesis-synthesis stuff from Philosophy 104 at a one donkey and one mule college. The argument makes it clear that programmers are indeed an endangered species. Those who survive will not be coders. These people will be orchestrators. The smart software eliminates large chunks of the developer category. The argument lines up with the Anthropic theoretical model.

The second paper says:

Stage 4 is about designing systems that will fully automate tasks currently done by humans, in a way that is truly new. Humans need to be taken out of the loop, made into orchestrators and inspectors. Work needs to be replatformed off humans, and onto AI systems. To adapt Marx’s line, engineers have hitherto only defined the work in various ways: the point now is to do it.

Okay, you get the idea: Job loss. Do it now.

Several observations are warranted. Ready or not, here I go:

  1. AI does some things reasonably well; others, not so well. This means we are in “good enough” territory. From my point of view, this might not be a good place to spend one’s time. Good enough is not utopia.
  2. The “build it and they will come” assumption is now officially “do it now” and dump humans where one can. Why? Reduce costs.
  3. The papers are happily blind to what enables AI in the first place. This is not a jobs problem; this is a power delivery and infrastructure problem. But both papers appear to have Mad Magazine’s “What, me worry?” mindset.

Net net: These papers strike me as mostly rationalization, weaponized information, and poobahism. Good enough. And hallucinations? On display every day. As Scott Adams said, “I respectfully decline the invitation to join your hallucination.”

Stephen E Arnold, March 10, 2026
