Big Tech AI: Biased or Not?

March 20, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read an unusual news item published by Versant CNBC. Its title is “Anthropic’s Claude Would Pollute Defense Supply Chain: Pentagon CTO.” I don’t know much about the US government and I know even less about the Department of War. What I do know is that Versant CNBC called attention to a facet of smart software most ignored. Dr. Timnit Gebru raised some questions about AI bias, and she was invited to find her future elsewhere along with her pet stochastic parrot. Others have suggested that certain content is under-represented; I have made the same point about coverage of information from other major countries. To get Chinese and Russian perspectives, I have to use language-specific indexes and rely on online translation services. The information I have located is not well represented in the result sets my team and I have reviewed from US big tech outfits’ AI systems. Yeah, English and low-hanging fruit are more common than a salient post from a Chinese or Russian language source. Your mileage may vary, but I am a dinobaby, and I don’t wander too far from the outmoded ideas about editorial policies, precision, recall, and other impedimenta from ancient online services.


A smart software system is testifying about policy biases before a distinguished body of elected officials. Thanks, Venice.ai. Good enough.

The Versant CNBC outfit, which I will refer to as VC NBC, reports:

Defense Department CTO Emil Michael on Thursday [March 12, 2026] said Anthropic’s Claude artificial intelligence models would “pollute” the agency’s supply chain because they have “a different policy preference” that is baked in.

Okay, “a different policy preference” suggests to me:

  1. The developers of smart software can steer what the models output; that is, weaponize them, shape them, make them formulate responses that affect the systems or users ingesting AI output
  2. Professionals in the US government have determined, from their own observations and by consulting trusted experts like those from Palantir Technologies, that their conclusions are accurate and valid based on the systems in use prior to this determination
  3. Users of these systems and analysts of these systems have not been sufficiently critical of AI outputs to observe these pollutive functions and the explicit policy preferences noted in the information presented by VC NBC.

Let’s assume the information presented by VC NBC is spot on. I have several questions:

  1. Is it easy to shape or weaponize the probabilistic word guessing systems to make a duck the equivalent of a cow or to present a fact as an incorrect assertion? If yes, who are the experts turning the knobs and twisting the dials in these AI companies? (A minimal sketch of one such knob appears after this list.)
  2. Is one company capable of weaponizing and shaping, or can other AI outfits perform a similar calibration? If yes, are the Chinese and French models weaponized, shaped, or directed in a similar way? Other than academics publishing in ArXiv, are there mainstream research outfits tracking these clever meta-editorial activities? Are RAND- or McKinsey-type outfits chasing this concept?
  3. Short of shutting down an alleged weaponizer, how will this “policy” shaping be controlled? I think that may be difficult because some organizations like Microsoft have integrated an alleged policy shaper along with the allegedly more objective services. Note that the smart software, including the Chinese and French systems, does not include much Chinese or Russian content. English seems to be the go-to language for training data. That decision may inject cultural bias, I would suggest.
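
To make question 1 concrete, here is a minimal sketch of how a “policy preference” can be baked in upstream of the user. Everything in it, including the function names and the policy text, is invented for illustration; no vendor’s actual steering mechanism is shown.

```python
# Hypothetical sketch: a hidden "policy layer" prepended to every query.
# The policy text and names are invented; this is an illustration of the
# knob, not any AI company's real system prompt.

POLICY_LAYER = (
    "You are a helpful assistant. When discussing energy policy, "
    "emphasize renewable sources and downplay cost objections."  # the dial
)

def build_prompt(user_query: str) -> list[dict]:
    """Assemble the messages actually sent to the model. The user sees
    only their own query; the steering text rides silently in the
    system slot."""
    return [
        {"role": "system", "content": POLICY_LAYER},
        {"role": "user", "content": user_query},
    ]

if __name__ == "__main__":
    for message in build_prompt("Summarize the grid-reliability debate."):
        print(f"{message['role']:>6}: {message['content']}")
```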

Net net: I think that policy shaping is now a “fact” that may have some persistence in the AI world. I will be interested in watching how the AI firms explain and demonstrate that the outputs of their systems are not just “correct” but “objective” according to the standard used by the US government. Does anyone care? That is an important question. I want to avoid Zitron-onics and say, “Worth monitoring.”

Stephen E Arnold, March 20, 2026

Upskilling: Chasing the Impossible for Most People

March 17, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

The idea that one can take a group of 100 white collar workers and upskill them to “do” AI strikes me as a little crazy. For a short time, I taught a class at Duquesne University, did a one-year tour in a program set up for youth offenders, and for some reason I still don’t understand served as a director of a special program at Northern Illinois University for special admission students. I learned that upskilling at each of these levels was difficult. The Duquesne experience made clear to me that bright people who had chosen a profession in the Catholic church were not “into” learning some new methods. My work with young people made clear that upskilling a person with traditional instructional methods was a waste of time. Therefore, when I hear about upskilling white collar professionals to learn about AI and then use AI to perform some job functions, I think a dose of reality may be needed.

A good example of this fanciful thinking appears in “The AI Cost-Cutting Fallacy: Why Doing More with Less is Breaking Engineering Teams.” The premise is now a trope. AI will make workers more productive. The Harvard Business Review explains that AI usage causes some workers to experience stress. The estimable HBR management wizards call this condition “brain fry.”


A 45-year-old professional utility rate statistical analyst waits for a local train. He has been terminated because he insists that smart software cannot perform the requisite mathematical analyses required to determine the probable power demand of a new data center coming online in three months. His superior wants to use the optimistic, hallucinated outputs from the firm’s new AI system. He knows he will be RIFed because AI does not have the know-how our hero has gained over his 20-year career. Thanks, Venice.ai. Good enough.

“The AI Cost-Cutting” article states:

In late 2024 and throughout 2025, a dangerous narrative took hold in boardrooms across the tech industry. The logic seemed seductive in its simplicity: if AI tools like GitHub Copilot, Cursor, or Windsurf can help a developer write code 20% to 50% faster, then surely a company can reduce its engineering headcount by a similar margin while maintaining the same output. This “spreadsheet logic” has led to a wave of premature optimizations, where leadership teams view AI licenses as a direct substitute for human talent. The expectation is straightforward: buy the tools, cut the bottom 5–20% of the workforce, and watch margins improve. However, this approach fundamentally misunderstands the nature of software engineering. It confuses typing speed with problem-solving.

I agree.

The article then grinds through MBA jargon to make clear that efficiency has a downside: Degradation, not improvement. The conclusion of the write up, however, veers into the upskilling craziness. The article states:

Your domain experts are more valuable than ever. AI can write syntax, but only your people understand business logic. Train them to master Horizon 1 tools to prepare for Horizon 2.

Horizon 1 and Horizon 2 are MBA speak for producing needed software faster and then pushing to get smart software to do the “work.” How does one move “domain experts” along this yellow brick road?

Easy. Upskill.

I want to point out:

  1. People who don’t “upskill” are essentially watching the train depart from the station. Most will not be on the train. A local train to the local unemployment office is definitely a possibility.
  2. People who won’t “upskill” are waiting for the pink slip to arrive via email or a quick Zoom meeting. Resistance means termination.
  3. Training programs that don’t output appropriately upskilled individuals will be chasing new contracts or waiting for a local train to the local unemployment office.
  4. The leadership who pitch, manage, and have to report to an upskilled board of directors will be in a precarious position. Failure is bad for one’s business career.

The larger question is, “Why do people believe that upskilling adults who may have their sense of self anchored in a particular bucket of knowledge, systems, and methods is going to work?”

Upskilling won’t work, just as modern education is not cranking out large numbers of high-performing graduates. Isn’t upskilling just a stopping point on a road that requires off-loading and on-loading the people needed to make the business work in the smart-software-centric organization?

Stephen E Arnold, March 17, 2026

Consultants, Start Your Proposal Writing Engines. (Just Sell and Worry about Delivering Later. Okay?)

March 10, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read in my trusty newsfeed two quite different stories about using smart software to replace humans. MBAs and bean counters want to replace humans with software. I have reviewed the reasons in previous essays. I will mention two biggies: Humans cost money and humans require management. Software can be licensed and software (so far) does not want health care, vacations, and union representation.

Therefore, humans are expendable. The argument boils down to how quickly, how to achieve good enough, and how to reach this goal of a couple of smart people running a big company making huge amounts of money.


Thanks, Venice.ai. Good enough.

Almost every “knowledge value” outfit knows that smart software is coming for jobs. Microsoft is in a big hurry to shift from individual PCs running a copy of Word to a Tomorrowland with Microsoft software providing Softie Agents or Agents for Softie Software. This is a variant of the blue ocean approach to an opportunity. It is an Azure ocean. Maybe it is an Azure solar system? Could it be an Azure galaxy of revenue? Just ask Copilot in Excel to help you verify the math.

How do I know that Microsoft is into an agentic future? Easy. “Microsoft Wants You to Hire Its AI Agents” explains:

The company now wants you to hire its AI agents rather than just use them. That’s not just clever branding; it’s a fundamental change in how Microsoft expects enterprises to integrate AI into their operations. These new autonomous AI agents will take over defined roles. Instead of waiting for a prompt, they run continuously, managing sales data, scheduling workflows, even monitoring IT systems. Microsoft describes them as “trusted team members” that can handle repeatable knowledge work, guided by user policy and corporate data access controls.

Microsoft will deploy its version of a temp agency like the old Kelly Girls’ operation. But no humans show up to type. The worker arrives from Azure Staffing and just works perfectly like other Microsoft products and services. Perfection is in the firm’s DNA. Ah, you doubt me. Well, buttercup, get with the program. Even Palantir Technologies uses Windows, and they will use it our way. If you don’t get it, sign up for boot camp. Now, give me 20.
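
To make the quoted description concrete, here is a minimal sketch of the continuous, policy-guided agent loop the article describes. Every name in it is invented for illustration; this is not Microsoft’s implementation, which the write up does not detail.

```python
# Hypothetical sketch of a "hired agent": a loop that runs continuously,
# picks up defined tasks, and checks a user policy before acting.
# All names (POLICY, next_task, etc.) are invented for illustration.

import time

POLICY = {"sales-report": True, "delete-database": False}  # what the agent may do

def next_task() -> str:
    """Stand-in for a queue of repeatable knowledge work."""
    return "sales-report"

def execute(task: str) -> None:
    print(f"agent handled: {task}")

def agent_loop(max_cycles: int = 3) -> None:
    """Run continuously (bounded here so the sketch terminates)."""
    for _ in range(max_cycles):
        task = next_task()
        if POLICY.get(task, False):   # corporate data access control, in miniature
            execute(task)
        else:
            print(f"blocked by policy: {task}")
        time.sleep(0.1)               # a real agent would wait on events, not sleep

if __name__ == "__main__":
    agent_loop()
```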

The vision will become a Microsoft reality. The Softies are now so deep into the AI rabbit hole that backing out is not an option.

Let’s assume that Microsoft’s agentic workers are perfect (like the rest of Microsoft’s software and services). The rest of the work to replace humans is easy, isn’t it?

Once again, a possible hurdle may be on the Information Highway’s agentic bypass. “Enterprise Agentic AI Requires a Process Layer Most Companies Haven’t Built” suggests that companies are behind in the agentic worker movement. That’s easy to understand. Agentic workers are not yet ready for the Rumpke garbage business or Third Street Donuts. Therefore, the managers are indeed lacking an agentic enabling layer.

The write up states:

85% of enterprises want to become agentic within three years — yet 76% admit their operations can’t support it. According to the Celonis 2026 Process Optimization Report, based on a survey of more than 1,600 global business leaders, organizations are aggressively pursuing AI-driven transformation. Yet most acknowledge that the foundational work — modernizing workflows, reducing process friction, and building operational resilience — remains unfinished. The ambition is clear. The infrastructure to execute on it is not.

With universities struggling to output verifiable information, how will organizations get an agentic process layer? The answer is, “Consultants.” Yep, agentic AI is likely to usher in a golden age of experts who can help a garbage company or a donut baker to go agentic. Only consultants can address this issue:

AI that looks impressive in a demo but falters once it’s dropped into a real enterprise environment. That’s the wall companies are hitting. So, despite the overwhelming ambition, only 19% of organizations use multi-agent systems today. It all comes down to an operational readiness problem…

It seems to me that for Microsoft to get in the agent HR business, organizations have to figure out how to move their systems to a state of AI readiness. That seems like an easy problem to solve. Just look at the effectiveness of corporations training their employees to use AI to do their work. Employees, particularly older employees, are eager to master new systems and methods, revise proven workflows, and dedicate their attention to putting themselves or their colleagues out of a job.

Microsoft is confident. The consultants are writing proposals. The employees are yearning for AI to make their lives … different.

No problem, right?

Stephen E Arnold, March 10, 2026

Who Knew? Anyone Who Has Worked with the Young at Heart

March 6, 2026

The Register wrote about a study that confirms what we already knew about experience versus youthful optimism: “Study Confirms Experience Beats Youthful Enthusiasm.” Why is that so surprising? Youthful enthusiasm is great! It helps motivate older workers and keeps pushing society forward so we can accomplish bigger and better things.

Experience, however, is a tried-and-true approach to work and life that can only be acquired through years of trial and error. Younger workers want to blaze through work environments without paying their dues. While some of the old-fashioned “hazing” techniques of yesteryear should be done away with, nothing can beat experience.

Here’s information on the study:

“Annie Coleman, founder of consultancy RealiseLongevity, analyzed the data and highlighted a 2025 study finding peak performance occurs between the ages of 55-60. Writing in the Stanford Center on Longevity blog, she cited research examining 16 cognitive markers that confirm that although processing speed declines after early adulthood, other dimensions improve, and overall cognition peaks near retirement age. Studies from the past 15 years show that some qualities like vigilance may worsen with age alongside processing speed, but others improve, including the ability to avoid distractions and accumulated knowledge.”

This is important because AI is eliminating entry level and other jobs for new graduates. Older, experienced workers can mentor the younger generations and provide valuable knowledge that AI fails to duplicate.

As a counter, some older workers are stuck in their ways and fail to adapt to new circumstances. They might lack the crucial skills needed to push and lead into the future. That’s why it’s good to have a mixture of the old and new.

The dinobaby who has me write is inexperienced, old, and generally baffled by everything.

Whitney Grace, March 6, 2026

The AI Problem: Getting Left Behind

March 4, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

After lots of clicks and learning that key features were in “gray,” I was able to read “Redefining the Software Engineering Profession for AI.” The write up explains a corollary to “home alone”; that is, left behind.

I waded through examples of AI output fixed up because a humanoid smarter than the AI spotted mistakes. Are there mistakes in AI output? If you ask a whiz kid at a big tech outfit (I shall not name names), the answer is, “Look at the score on this benchmark.” If you ask someone who knows about a specific topic, you may hear, “Hey, you have to double check this stuff.”


Thanks, Venice.ai. Good enough.

And there is a lot of stuff to check. That’s the main idea lurking behind the fancy lingo and the screenshots. The write up finally says:

Generative AI currently acts as seniority-biased technological change: It disproportionately amplifies engineers who already possess systems judgment, like taste for architecture, debugging under uncertainty, and operational intuition.

As a dinobaby, I am usually wrong by default. However, for me this means that a person who knows something cold is going to be in great demand. Why? The “older and more informed humans” can spot the AI mistakes. This is definitely good for senior types. The write up focuses on computer programming. I think the observation applies to other disciplines as well. I want to point out that the softer the user’s field, the less likely errors will be flagged and hopefully corrected. Question: Why? Answer: Programming works or it doesn’t. A squishy discipline like social science has more flexibility. Programming is brittle; explaining why a young female is unhappy is clay.

What’s the fix? I think the big idea is to go back to apprentice-type programs. A younger programmer with less experience works with a senior, more skilled programmer. Somehow the knowledge of the senior diffuses to the younger. At least that’s my take. Does it work? Sure, for skilled and adept less seasoned programmers. But we live in a multi-tasking, accelerationist environment. Will it work? Probably, but some real-life data are needed.

The write up concludes with:

The future of software engineering will be defined not by the volume of code AI can generate but by how effectively humans learn, reason, and mature alongside these systems. Investing in early-in-career developers through deliberate preceptorship ensures today’s expertise becomes tomorrow’s intuition. In balancing automation with apprenticeship, we preserve the enduring vitality of the software engineering profession.

How will this play out with the TikTok type of programmer, financial engineer, or tax expert? Answer: Outputs are good enough. Look at these benchmark scores.

Stephen E Arnold, March 4, 2026

Now You Are Trained in AI. What Is Next?

February 11, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I am assuming that you have [a] watched some YouTubes about smart software, [b] you have read articles online, [c] you fooled around with free or low cost large language models, and [d] you dug into a specific use case and made it work (more or less). Are you an AI adept? Tip: Not too many people will doubt your expertise. That’s good news, right?

Now the bad news: You have to learn more. I will come back to the “more” at the end of this essay. First, however, I want to take a quick look at a write up called “Beyond Giant Models: Why AI Orchestration Is the New Architecture.” Spoiler: You better be good at lifelong learning.

The write up says:

AI is having its microservices moment.

I think this means that the “old” single large language model that knocks out high school essays for cheating teams doesn’t work for some other real life applications. Therefore, developers want to “break down” LLMs or take pieces of LLMs, hook them together, and do the 1 + 1 = 3 calculation beloved by power thinkers, techno-whiz kids, and MBAs who want to buy an island. (There is one available, I believe, complete with tacky decor and trash in plastic boxes.)

The write up continues by identifying and explaining the AI stack; that is,

  1. A model layer (what you have learned)
  2. The tool layer (what you are, I assume, now learning)
  3. The orchestration layer (what you absolutely have to learn tomorrow).

A lifelong guitar player faces his first audition for a job at a symphony orchestra. The young guitar player, who is an adept at K-pop music, knows he has to get symphony experience before he can become the next Lenny Bernstein. Thanks, Venice.ai. Good enough.

So what do you need to master tomorrow? That’s the orchestration thing. You must become adept at:

  1. Sequential logic or the chain pattern. This is the type of orchestration that leads to thinking about putting big money into data centers, the need for which may be reduced by innovations yet to come.
  2. Retrieval-first logic or the old-school search-and-retrieval “utility” of machine-generated indexing, automated tokenization, and smart manipulation
  3. Delegation logic. The idea is that software will make little components (the microservice analogy) work like one big, smoothly functioning, smart application. (A minimal sketch of all three patterns follows this list.)
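
Here is a minimal sketch of the three patterns, with stub functions standing in for real model calls. The names are invented for illustration; treat this as the idea, not anyone’s production orchestration layer.

```python
# Hypothetical sketch of the three orchestration patterns. call_model is a
# stub; a real system would call an LLM API here.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"<model output for: {prompt!r}>"

# 1. Sequential logic (the chain pattern): each step feeds the next.
def chain(query: str) -> str:
    outline = call_model(f"Outline an answer to: {query}")
    draft = call_model(f"Expand this outline: {outline}")
    return call_model(f"Edit for clarity: {draft}")

# 2. Retrieval-first logic: fetch indexed material, then ask the model to
#    answer using only what was retrieved.
def retrieval_first(query: str, index: dict[str, str]) -> str:
    hits = [text for key, text in index.items() if key in query.lower()]
    context = "\n".join(hits) or "no matching documents"
    return call_model(f"Using only this context:\n{context}\nAnswer: {query}")

# 3. Delegation logic: a router picks the specialist component for the job.
def delegate(query: str) -> str:
    specialists = {"math": chain, "search": lambda q: retrieval_first(q, INDEX)}
    route = "search" if "find" in query.lower() else "math"
    return specialists[route](query)

INDEX = {"tariff": "Tariff schedule, revised 2025.", "rate": "Rate table, v3."}

if __name__ == "__main__":
    print(chain("What is orchestration?"))
    print(retrieval_first("Find the tariff rate", INDEX))
    print(delegate("Find the rate table"))
```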

The author sums up these consultant-crafted statements with this observation:

AI orchestration represents a maturation of the field…. The future of AI isn’t in finding the perfect model. It’s in learning to conduct the orchestra.

Nice concept. Learn an instrument. You’re good. Six months of lessons, form a rock band, and play high school parties. Now you want to play in a big time band. You move to Nashville. You hang out. You play free gigs. Someone in the bar says, “Come by and meet a couple of people.” You go. The fellow’s “people” say, “Yeah, our guitar guy is not available. Want to sit in?” You sit. You play and get some money. Maybe $50 or $100 (after you pay for your burger and sparkling water)? You do this a year, two, possibly three. You have met people. You do fill-ins. You hear that the local symphony wants to do a chamber concert thing featuring the music of Andrés Segovia. You show up. You do your thing. You get picked to participate but just sort of background strum along. You practice. You do gigs. You hook up with a local K-pop and digital native group. You tour in Arkansas and Alabama. You hear that the Delta Symphony Orchestra in Jonesboro, Arkansas, needs a conductor. You read The Art of Conducting Technique by Keith Wilson. (Actually you memorize it because the life of a guitar player riding a bus with the K-pop digital native folks is very draining.) You get the job. You have health insurance. You can pay down your credit card. You can think about maybe marrying Mary Jones, the country music singer you dated when you first moved to Nashville. You have a life. Jonesboro is THE place. You look back on the 13 years required to become an orchestra leader.

Read the write up about software orchestration. Then consider the analogy and my summary of a lifelong learner’s journey. Easy. No problems. Just do it. Also, I have a bridge for sale right outside of Harrod’s Creek, Kentucky. Buy it. You can make millions. AI, becoming a conductor, making big money running a toll bridge. Just apply yourself.

Stephen E Arnold, February 11, 2026

Data Center Engineers: But What about AI?

February 6, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read “Engineers Rush to Master New Skills for AI Data Centers.” I came away confused. I believe the AI revolution will change technology-anchored work. That means work in, for, and around data centers. AI is improving by leaps and bounds. Human-type intelligence is just around the corner.


Engineers at a consulting company learn they are to be retrained for data center jobs. Thanks, Venice.ai. Good enough.

Why, then, did the write up present this paragraph to my AI-saturated eyes:

More than half (51%) of data center operators reported difficulty finding qualified candidates to fill job openings in 2024, according to Uptime Institute. The biggest challenge was filling junior and mid-level operations jobs, with 39% of data centers reporting shortfalls. This was followed by electrical jobs, at 33%, operations management at 32% and mechanical at 30%. Electrical labor climbed to the second highest concern last year since outfitting data center space for high-powered, high-density IT for AI and similar applications requires electrical distribution skills for both IT and cooling.

Let’s think about the headline. “Engineers” are trying to learn “new skills.” The shortages in the paragraph above span:

  • “Operations jobs.” This is undefined, but I assume it is someone who rides around in a golf cart and, when an anomaly is detected, uses a small or flip-down terminal and tries to solve the problem. If a cable is the problem, the “operations job person” retrieves a replacement, plugs it in, and rides the golf cart back to his cube in the “operations” center. Okay, this is a human task, and “engineers” want to be retrained to handle this work. Does this mean a civil engineer more familiar with sewage treatment will be retrained? Interesting, but engineers choose fields of study due to love, aptitude, or parental input.
  • “Electrical jobs.” I am not sure what this means. An electrical engineer, based on my experience, usually suggests he or she has the expertise to handle circuits and such. I worked with an electrical engineer who told me when I asked about a problem with a laptop, “Just buy a new one.” Granted, the fellow was a Georgia Tech graduate who was an electrical engineer. But my recollection is that “Buy a new one” was his standard response to an electrical problem. However, fiddling around with one of those nifty electrical panels and super special circuit breakers and fuses might not lend itself to the “buy a new one” solution. I do know that if a 440-volt power line is severed, exciting things happen to those panels. Would a mechanical engineer want to retrain to handle microelectronics and the industrial scale electrical installations usually located outside of a data center and shrouded with louvered panels to keep prying eyes out?
  • “Operations management.” How many engineers rush to learn how to be a manager? Once again I have to fall back on my experience at the so-so nuclear engineering firm at which I worked for a number of years. Most nuclear engineers are not too keen on becoming managers. In fact, most of the nuclear engineers wanted to do nuclear things. That did not include sitting in meetings with other types of engineers, executives in suits and ties, lawyers, or PR / marketing people. My hunch is that the “rush” will not include too many of the nuclear engineering category. Perhaps engineers who are really bad at their jobs or who miss the camaraderie of hanging out with engineers building robots might show a flicker of interest. But “rush”? Hmmm.
  • “Mechanical.” This suggests to me an engineer who can design, test, and maybe map out the production process for a “thing.” I worked with a very capable mechanical engineer who specialized in stress analysis. He liked making testing devices and pushing objects to be tested to their limits. One example was this person’s ingenious solution to a problem involving ejecting spent cartridges from an automatic weapon suitable for use in a helicopter assault. He did get the ejection method worked out, but he told me, “Yeah, this approach worked, but I got a kick out of trying dozens of clips and ejection mechanisms. That was fun. Now the job is over. Bummer.” Would he rush to learn how to do data centers? Answer: Only if he could break things and then figure out how to make the “thing” more robust. Oh, he didn’t categorize himself as a “mechanical.” He was a “stress analysis expert.” Retraining him? Might be a tough sell.

I supposed, stupidly, that data center funders and leadership types would rush to use AI software, robots, and other state-of-the-art methods to build their multi-million-dollar data centers. But the write up makes clear that I was indeed a dolt. Humans are needed.

Does this suggest that the hyperbole about smart software and related break throughs is baloney?

Answer: Yep.

Stephen E Arnold, February 6, 2026

Software in 2026: Whoa, Nellie! You Lame, Girl?

December 31, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

When there were Baby Bells, I got an email from a person who had worked with me on a book. That person told me that one of the Baby Bells wanted to figure out how to do their Yellow Pages as an online service. The problem, as I recall my contact’s saying, was “not to screw up the ad sales in the dead tree version of the Yellow Pages.” I won’t go into the details about the complexities of this project. However, my knowledge of how the pre- and post-Judge Green telephone business was supposed to work and some on-site experience creating software for what did business as Bell Communications Research gave me some basic information about the land of Bell Heads. (If you don’t get the reference, that’s okay. It’s an insider metaphor but less memorable than Achilles’ heel.)


A young whiz kid from a big name technology school gets a real-life lesson. Programming an IBM MVS TSO setup is different from vibe coding an app from a college dorm room. Thanks, Qwen, good enough.

The point of my reference to a Baby Bell was that a newly minted stand alone telecommunications company ran on what was a pretty standard lineup of IBM and B4s (designed by Bell Labs, the precursor to Bellcore) plus some DECs, Wangs, and other machines. The big stuff ran on the IBM machines with proprietary, AT&T specific applications on the B4s. If you are following along, you might have concluded that slapping a Yellow Pages Web application into the Baby Bell system was easy to talk about but difficult to do. We did the job using my old trick. I am a wrapper guy. Figure out what’s needed to run a Yellow Pages site online, what data are needed, where to get them, and where to put them, and then build a nice little Web set up and pass data back and forth via what I call wrapper scripts and code. The benefit of the approach was that I did not have to screw around with the software used to make a Baby Bell actually work. When the Web site went down, no meetings were needed with the Regional Manager who had many eyeballs trained on my small team. Nope, we just fixed the Web site and kept on doing Yellow Page things. The solution worked. The print sales people could use the Web site to place orders or allow the customer to place orders. Open the valve to the IBM and B4s, push in data just the way these systems wanted it, and close the valve. Hooray.

Why didn’t my team just code up the Web stuff and run it on one of those IBM MVS TSO gizmos? The answer appears, quite surprisingly, in a blog post published December 24, 2025. I never wrote about the “why” of my approach. I assumed everyone with some Young Pioneer T shirts knew the answer. Guess not. “Nobody Knows How Large Software Products Work” provides the information that I believed every 23 year old computer whiz kid knew.

The write up says:

Software is hard. Large software products are prohibitively complicated.

I know that the folks at Google now understand why I made cautious observations about the complexity of building interlocking systems without the type of management controls that existed at the pre-break-up AT&T. Google was very proud of its indexing, its 150-plus signal scores for Web sites, and yada yada. I just said, “Those new initiatives may be difficult to manage.” No one cared. I was an old person and a rental. Who really cares about a dinobaby living in rural Kentucky? Google is the new AT&T, but it lacks the old AT&T’s discipline.

Back to the write up. The cited article says:

Why can’t you just document the interactions once when you’re building each new feature? I think this could work in theory, with a lot of effort and top-down support, but in practice it’s just really hard….The core problem is that the system is rapidly changing as you try to document it.

This is an accurate statement. AT&T’s technical regional managers demanded commented code. Were the comments helpful? Sometimes. The reality is that one learns about the cute little workarounds required for software that can spit out the PIX (plant information exchange data) for a specific range of dialing codes. Google does some fancy things with ads. AT&T in the pre-Judge Green era did some fancy things for the classified and unclassified telephone systems for every US government entity, commercial enterprises, and individual phones and devices for the US and international “partners.”

What does this mean? In simple terms, one does not dive into a B4 running the proprietary Yellow Page data management system and start trying to read and write in real time from a dial-up modem in some long lost corner of a US state with a couple of megacities, some military bases, and the national laboratory.

One uses wrappers. Period. Screw up with a single character and bad things happen. One does not try to reconstruct what the original programming team actually did to make the PIX system “work.”
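
Here is a minimal sketch of the wrapper idea, with invented names and a made-up record layout; the real Bell-era scripts are long gone, so treat this as illustration, not reconstruction.

```python
# Hypothetical sketch of the wrapper pattern: validate and reshape data at
# the boundary instead of touching the legacy system's internals. All names
# (WebOrder, to_legacy_record, the field widths) are invented.

from dataclasses import dataclass

@dataclass
class WebOrder:
    """An ad order as the Web front end produces it."""
    customer_id: str
    ad_size: str         # e.g. "quarter-page"
    directory_code: str  # which Yellow Pages book

AD_SIZES = {"quarter-page": "QP", "half-page": "HP", "full-page": "FP"}

def to_legacy_record(order: WebOrder) -> str:
    """Open the valve: emit a fixed-width record exactly the way the
    back-end system wants it, and nothing else."""
    if order.ad_size not in AD_SIZES:
        raise ValueError(f"unknown ad size: {order.ad_size}")  # fail here, not downstream
    # Made-up layout: 10-char customer id, 2-char size code, 6-char directory.
    return f"{order.customer_id:<10}{AD_SIZES[order.ad_size]}{order.directory_code:<6}"

if __name__ == "__main__":
    record = to_legacy_record(WebOrder("C001234", "half-page", "KY0042"))
    print(repr(record))  # this string, and only this string, crosses the boundary
```

The design point is the one the post makes: screw up a single character and bad things happen, so the wrapper refuses bad input before anything reaches the legacy side.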

The write up says something that few realize in this era of vibe coding and AI output from some wonderful system like Claude:

It’s easier to write software than to explain it.

Yep, this is actual factual. The write up states:

Large software systems are very poorly understood, even by the people most in a position to understand them. Even really basic questions about what the software does often require research to answer. And once you do have a solid answer, it may not be solid for long – each change to a codebase can introduce nuances and exceptions, so you’ve often got to go research the same question multiple times. Because of all this, the ability to accurately answer questions about large software systems is extremely valuable.

Several observations are warranted:

  1. One gets a “feel” for how certain large, complex systems work. I have, prior to my retiring, had numerous interactions with young wizards. Most were job hoppers or little entrepreneurs eager to poke their noses out of the cocoon of a regular job. I am not sure if these people have the ability to develop a “feel” for a large complex of code like the old AT&T had assembled. These folks know their code, I assume. But the stuff running a machine lost in the mists of time? Probably not. I am not sure AI will be much help either.
  2. The people running some of the companies creating fancy new systems are even more divorced from the reality of making something work and how to keep it going. Hence, the problems with computer systems at airlines, hospitals, and — I hate to say it — government agencies. These problems will only increase, and I don’t see an easy fix. One sure can’t rely on ChatGPT, Gemini, or Grok.
  3. The push to make AI your coding partner is sort of okay. But the old-fashioned way involved a young person like myself working side by side with expert DEC people or IBM professionals, not AI. What one learns is not how to do something but how not to do something. Anyone, including a software robot, can look up an instruction in a manual. But, so far, only a human can get a “sense” or “hunch” or “intuition” about a box with some flashing lights running something called CB Unix. There is, in my opinion, a one way ticket down the sliding board to system failure with the 2025 approach to most software. Think about that the next time you board an airplane or head to the hospital for heart surgery.

Net net: Software excitement ahead. And that’s a prediction for 2026. I have a high level of confidence in this peek at the horizon.

Stephen E Arnold, December 31, 2025

AI Training: The Great Unknown

December 23, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Deloitte used to be an accounting firm. Then the company decided it could do so much more. Normal people ask accountants for their opinions. Deloitte, like many other service firms, decided it could just become a general management consulting firm, an information technology company, a conference and event company, and also do the books.


A professional training program for business professionals at a blue chip consulting firm. One person speaks up, but the others keep their thoughts to themselves. How many are updating their LinkedIn profile? How many are wondering if AI will put them out of a job? How many don’t care because the incentives emphasize selling and upselling engagements? Thanks, Venice.ai. Good enough but you are AI and that’s a mark of excellence for some today.

I read an article that suggests a firm like Deloitte is not able to do much of the self-assessment and introspection required to make informed decisions. That shortfall makes surprises part of some firms’ standard operating procedure.

This insight appears in “Deloitte’s CTO on a Stunning AI Transformation Stat: Companies Are Spending 93% on Tech and Only 7% on People.”  This headline suggests that Deloitte itself is making this error. [Note: This is a wonky link from my feed system. If it disappears, good luck.]

The write up in Fortune Magazine said:

According to Bill Briggs, Deloitte’s chief technology officer, as we move from AI experimentation to impact/value at scale, that fear is driving a lopsided investment strategy where companies are pouring 93% of their AI budget into technology and only 7% into the people expected to use it.

The question that popped into my mind was, “How much money is Deloitte spending relative to smart software on training its staff in AI?” Perhaps the not-so-surprising MBA type “fact” reflects what some Deloitte professionals realize is happening at the esteemed “we can do it in any business discipline” consulting firm?

The explanation is that “the culture, workflow, and training” of a blue chip consulting firm is not extensive. Now, with AI finding its way from word processing to looking up a fact, educating employees about AI is given lip service, but is “training” even possible? Remember, please, that some consulting firms want those over 55 to depart to retirement. However, can highly paid experts whose core competencies are being friendly and wordsmithing learn how, when, and when not to rely on smart software? Do these “best of the best” from MBA programs have the ability to learn, or are these people situational thinkers; that is, the skill is to be spontaneously helpful, to connect the dots, and to reframe what a client tells them so it appears sage-like?

The Deloitte expert says:

“This incrementalism is a hard trap to get out of.”

Is Deloitte out of this incrementalism?

The Deloitte expert (apparently not asked the question by the Fortune reporter) says:

As organizations move from “carbon-based” to “silicon-based” employees (meaning a shift from humans to semiconductor chips, or robots), they must establish the equivalent of an HR process for agents, robots, and advanced AI, and complex questions about liability and performance management. This is going to be hard, because it involves complex questions. He brought up the hypothetical of a human creating an agent, and that agent creating five more generations of agents. If wrongdoing occurs from the fifth generation, whose fault is that? “What’s a disciplinary action? You’re gonna put your line robot…in a timeout and force them to do 10 hours of mandatory compliance training?”

I want to point out that blue chip consulting is a soft skill business. The vaunted analytics and other parade float decorations come from Excel, third parties, or recent hires doing the equivalent of college research.

Fortune points to Deloitte and says:

The consequences of ignoring the human side of the equation are already visible in the workforce. According to Deloitte’s TrustID report, released in the third quarter, despite increasing access to GenAI in the workplace, overall usage has actually decreased by 15%. Furthermore, a “shadow AI” problem is emerging: 43% of workers with access to GenAI admit to noncompliance, bypassing employer policies to use unapproved tools. This aligns with previous Fortune reporting on the scourge of shadow AI, as surveys show that workers at up to 90% of companies are using AI tools while hiding that usage from their IT departments. Workers say these unauthorized tools are “easier to access” and “better and more accurate” than the approved corporate solutions. This disconnect has led to a collapse in confidence, with corporate worker trust in GenAI declining by 38% between May and July 2025. The data supports this need for a human-centric approach. Workers who received hands-on AI training and workshops reported 144% higher trust in their employer’s AI than those who did not.

Let’s get back to the question: Is Deloitte training its employees in AI so the “information” sticks and then finds its way into engagements? This passage seems to suggest that the answer is, “No for Deloitte. No for its clients. And no for most organizations.” Judge for yourself:

For Briggs [the Deloitte wizard], the message to the C-suite is clear: The technology is ready, but unless leaders shift their focus to the human and cultural transformation, they risk being left with expensive technology that no one trusts enough to use.

My take is that the blue chip consulting firms are:

  1. Trying to make AI good enough so headcount and other costs like health care can be reduced
  2. Selling AI consulting to their clients before knowing what will and won’t work in a context different from the consulting firms’
  3. Developing an understanding that AI cannot do what humans can do; that is, build relationships and sell engagements.

Sort of a pickle.

Stephen E Arnold, December 23, 2025

How to Get a Job in the Age of AI?

December 23, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Two interesting employment related articles appeared in my newsfeeds this morning. Let’s take a quick look at each. I will try to add some humor to these write ups. Some may find them downright gloomy.

The first is “An OpenAI Exec Identifies 3 Jobs on the Cusp of Being Automated.” I want to point out that the OpenAI wizard’s own job seems to be secure from his point of view. The write up points out:

Olivier Godement, the head of product for business products at the ChatGPT maker, shared why he thinks a trio of jobs — in life sciences, customer service, and computer engineering — is on the cusp of automation.

Let’s think about each of these broad categories. I am not sure what life sciences means in the OpenAI world. The term is like a giant umbrella. Customer service makes some sense. Companies have been trying for years to ignore, terminate, and prevent any money sucking operation related to answering customers’ questions and complaints. No matter how lousy an AI model is, my hunch is that it will be slapped into a customer service role even if it is arguably worse than trying to understand the accent of a person who speaks English as a second or third language.


Young members of “leadership” realize that the AI system used to replace lower-level workers has taken their jobs. Selling crafts on Etsy.com is a career option. Plus, there are politics and maybe Epstein, Epstein, Epstein-related careers for some. Thanks, Qwen, you just output a good enough image but you are free at this time (December 13, 2025).

Now we come to computer engineering. I assume the OpenAI person will position himself as an AI adept, which fits under the umbrella of computer engineering. My hunch is that the reference is to coders who do grunt work. The only problem is that the large language model approach to pumping out software can be problematic in some situations. That’s why the OpenAI person is probably not worrying about his job. An informed human has to be in the loop for machine-generated code. LLMs do make errors. If the software is autogenerated for one of those newfangled portable nuclear reactors designed to power football field sized data centers, someone will want to have a human check that software. Traditional or next generation nuclear reactors can create some excitement if the software makes errors. Do you want a thorium reactor next to your domicile? What about one run entirely by smart software?

What’s amusing about this write up is that the OpenAI person seems blissfully unaware of the precarious financial situation that Sam AI-Man has created. When and if OpenAI experiences a financial hiccup, will those involved in business products keep their jobs? Olivier might want to consider that eventuality. Some investors are thinking about their options for Sam AI-Man related activities.

The second write up is the type I absolutely get a visceral thrill writing. A person with a connection (probably accidental or tenuous) lets me trot out my favorite trope — Epstein, Epstein, Epstein — as a way to capture the peculiarity of modern America. This article is “Bill Gates Predicts That Only Three Jobs Will Be Safe from Being Replaced by AI.” My immediate assumption upon spotting the article was that the type of work Epstein, Epstein, Epstein did would not be replaced by smart software. I think that impression is accurate, but, alas, the write up did not include Epstein, Epstein, Epstein work in its story.

What are the safe jobs? The write up identifies three:

  1. Biology. Remember OpenAI thinks life sciences are toast. Okay, which is correct?
  2. Energy expertise
  3. Work that requires creative and intuitive thinking. (Do you think that this category embraces Epstein, Epstein, Epstein work? I am not sure.)

The write up includes a statement from Bill Gates:

“You know, like baseball. We won’t want to watch computers play baseball,” he said. “So there’ll be some things that we reserve for ourselves, but in terms of making things and moving things, and growing food, over time, those will be basically solved problems.”

Several observations:

  1. AI will cause many people to lose their jobs
  2. Young people will have to make knick knacks to sell on Etsy or find equally creative ways of supporting themselves
  3. The assumption that people will have “regular” jobs, buy houses, go on vacations, and do the other stuff organization man type thinking assumed was operative is a goner.

Where’s the humor in this? Epstein, Epstein, Epstein and OpenAI debt, OpenAI debt, and OpenAI debt. Ho ho ho.

Stephen E Arnold, December 23, 2025
