Big Tech AI Tries to Understand Real Life
March 6, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “OpenAI’s Compromise with the Pentagon Is what Anthropic Feared.” I want to be upfront. Every time I read or hear about MIT, I think Epstein Epstein Epstein. This translates to my being dismissive of [a] what the MIT thing outputs, [b] the integrity of the institution, and [c] what it brings to the knowledge party. Therefore, if you are into MIT, stop reading.
This particular write up is one of those crazy analyses of the perception of the world from the point of view of wizards and how stuff actually works in the US government or any nation’s government. Whiz kids think they have something really cool. They give talks at conferences. Their moms and dads pester their connections about Timmy’s or Wendy’s great new thing. They do brown bag lunches in the bowels of the GSA. They trek to FDIC events in interesting locations. They write Substacks, blog posts, and Forbes thought leader articles. They stand in trade show booths squinting at name tags and look crestfallen when big time people walk by their bright smiles.
The reality is that outfits want to make government sales, and if they want to close a deal and keep the deal, the people who sign those contracts expect vendors to do what they are told. Is this the optimal approach by governments? No. Is this an informed strategy? No. Is this a tactic to become best pals with vendors? No.
And guess what? No one in those governments’ procurement processes cares very much what a vendor wants. Sure, there is some flexibility. But one doesn’t have to be an MIT graduate or a donor like Mr. Epstein Epstein Epstein to figure out that the government is going to prevail. Even in countries which are obscure and unfamiliar to an American big tech outfit, the approach is the same: Read the terms of the deal, agree, get paid, and do what the client wants.

A group of AI wizards learns how life is versus how life should be. Thanks, Venice.ai. Good enough.
Painful, right?
The write up says:
In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon.
Hey, MIT writer publisher thing, OpenAI got the message. I could suggest that MIT check out the history of MITRE to put my observations in context.
Everything is clear. A company that wants to do business with the government regardless of country needs to drop the crazy idea that governmental institutions care about the emotional zeitgeist of the whiz kids. I know that it takes time for some government professionals to grasp what one can do with a technology that is new, unfamiliar, and less friendly than making a call on an iPhone. However, once that insight arrives in the mind of a government professional, the mental orientation of the wizard is usually irrelevant. It’s noise. It’s a distraction. It’s unwanted. It’s infuriating.
The write up says:
The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use.
News flash. When the Department of War licenses a technology, that Department (regardless of the nation state) is going to use that technology to complete the mission its leadership deems appropriate. If a company or a wizard cannot understand this concept, why are these firms and their wizards in the meeting and procurement process? Go hunt for money elsewhere.
How about this statement from the write up:
But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.
The leadership of the big tech AI companies think they are rational. Those well paid experts are not. The people in the government are not rational. Why? They are humans who have interesting ways of responding to work, technology, and the context in which they find themselves.
Why did MIT embrace Epstein Epstein Epstein? The leadership of MIT made a decision. The big AI tech people made a decision. Neither seems to have been eager to walk away. Why not try to own up to your decisions? That’s called adulting.
Stephen E Arnold, March 6, 2026
Hey, Job Hunters: Robots Get Some Love
March 6, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I assume there are other versions of this news story from PCnews.ru. “Replacing a Person with a Robot Can Already Pay for Itself in Just 10 Weeks” caught my attention. (Note: The source is in Russian.)
As a dinobaby, I have already been replaced by someone who just finished high school. However, for those who are grinding forward with a PhD or an MA, the write up might hold a hint about the future prospects for “real work.”

The result of good enough robots and software agents. Thanks, ChatGPT. Good enough.
The write up says:
With current prices for human labor in some economies of the world, replacing a person with a robot can already provide a quick payback. For example, a robot costing $15,000 with a man-hour cost of $41 will reach payback in 3.8 weeks, and with a man-hour cost of $7.25, it will reach payback in 21.6 weeks. Even a $35,000 robot can pay for itself in less than 9 weeks at a human cost of $41 an hour. People simply will not be able to compete with such indicators….
Are these numbers accurate? Of course not, but the projected numbers are going to make some bonus chasing managers and crazed bean counters lust for robots. Why? To that crowd, eliminating humanoids from work processes is a definite win.
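Out of curiosity, I back-solved the cited figures. They only line up if the robot displaces roughly 96 hours of paid human labor per week. That utilization number is my inference, not something the article states. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope payback calculator for the article's figures.
# CAUTION: the ~96 hours/week utilization is my back-solved assumption;
# the article does not say how many hours the robot works.
def payback_weeks(robot_cost: float, human_hourly: float,
                  hours_per_week: float = 96.0) -> float:
    """Weeks until the robot's price equals the displaced wage spend."""
    return robot_cost / (human_hourly * hours_per_week)

print(round(payback_weeks(15_000, 41.00), 1))  # 3.8 weeks, matching the article
print(round(payback_weeks(15_000, 7.25), 1))   # 21.6 weeks, matching the article
print(round(payback_weeks(35_000, 41.00), 1))  # 8.9 weeks, the "less than 9"
```

Drop the utilization to a human 40-hour week and the $15,000 robot takes about nine weeks to pay off. The rosy numbers lean heavily on the robot never sleeping.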
Who supports these allegedly indisputable estimates? If you guessed blue chip consulting firms, you get a gold star. I noted this passage:
McKinsey & Company managing partner Bob Sternfels expects that the number of real employees and replacement AI agents in his company will be equal in 18 months. The company already has 20,000 AI agents per 40,000 people, although a year ago their number did not exceed 3,000. IMF Director Kristalina Georgieva said last month that artificial intelligence is already hitting the labor market like a tsunami, and most countries and businesses are simply not prepared for this.
Once again numbers but with the authority of the estimable McKinsey & Co.
Several observations:
- Robots (hardware and software) are going to find their way into workplaces and quickly. With the estimated payback in money, what money-saving member of leadership would say, “Hey, who thinks this is a bad idea?” I probably would not raise my hand.
- The “quality” of work has been, in my opinion, declining in the last 10 years. I get weird write ups that recycle information that no one has ever verified and validated. The knowledge recycling business works a heck of a lot better than Kroger’s plastic bag process. Therefore, good enough is going to become the norm for outputs. Got cancer? Well, this treatment is good enough.
- The cost savings analyses fascinate me. Why? A mostly ignored French guy (Jacques Ellul) wrote The Technological Bluff years ago. The fellow pointed out that the ancillary impacts are usually tough to remediate. Plus the fix is more technology, which whiz kids assume can “fix” anything. From my point of view, a steady increase in knowledge friction will create some interesting problems for more technology to fix; for example, young people who cannot concentrate for longer than a TikTok-type video.
Now a final question, “Why is this write up of interest to Russian readers?” That beats me. Russia is losing more troops than it can replace. If this is true, then the special operation means many job hunting Russians do not need to worry about humanoid robots and software agents. Happy Fourth Anniversary!
Stephen E Arnold, March 6, 2026
Who Knew? Anyone Who Has Worked with the Young at Heart
March 6, 2026
The Register wrote about a study that confirms what we already knew about experience versus youthful optimism: “Study Confirms Experience Beats Youthful Enthusiasm.” Why is that so surprising? Youthful enthusiasm is great! It helps motivate older workers and keeps pushing society forward so we can accomplish bigger and better things.
Experience, however, is a tried and true approach to work and life that can only be acquired through years of trial and error. Younger workers want to blaze through work environments without paying their dues. While some of the old-fashioned “hazing” techniques of yesteryear should be done away with, nothing can beat experience.
Here’s information on the study:
“Annie Coleman, founder of consultancy RealiseLongevity, analyzed the data and highlighted a 2025 study finding peak performance occurs between the ages of 55-60. Writing in the Stanford Center on Longevity blog, she cited research examining 16 cognitive markers that confirm that although processing speed declines after early adulthood, other dimensions improve, and overall cognition peaks near retirement age. Studies from the past 15 years show that some qualities like vigilance may worsen with age alongside processing speed, but others improve, including the ability to avoid distractions and accumulated knowledge.”
This is important because AI is eliminating entry level and other jobs for new graduates. Older, experienced workers can mentor the younger generations and provide valuable knowledge that AI fails to duplicate.
As a counter, some older workers are stuck in their ways and fail to adapt to new circumstances. They might lack the crucial skills needed to push and lead into the future. That’s why it’s good to have a mixture of the old and new.
The dinobaby who has me write these posts is inexperienced, old, and generally baffled by everything.
Whitney Grace, March 6, 2026
Apple’s Falling: AI Plops into the Truck of Another Farmer
March 5, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Two companies have been late to the AI “next big thing.” One of these outfits is Telegram. I understand how the arrest, travel restrictions, and impending trial on about a dozen charges of serious crime can derail a company. Pavel Durov, the GOAT in Russian social media, has been trying an end around. So far his efforts have been less than inspiring. Therefore, Telegram’s fumbling in AI can be understood.
However, Apple — run by the affable Tim Apple — has also been late to the game. And when the company entered the fray, it was benched with injury after injury. The dismal performance of Siri, the announcements of what was coming, and then AI that did not arrive allowed some to point out that Apple was really brilliant. No billions dumped into the AI cast iron stove. No rushing forward only to flail like those pesky Microsofties.
Then Apple had a Jobsian moment. The company would team up with another affable outfit just up Highway 101. Google would provide the AI software and Apple’s engineers would work their Lisa-style magic. Google would not have access to Apple data. Apple would control privacy. The pot of gold is at the end of the rainbow in the apple orchard.
I read “Report: Apple Asks Google to Run Siri on Its Servers.” The write up presents as actual factual:
Apple now wants to be prepared for a potential surge in AI use on its devices when the more powerful, Gemini-based version of Siri debuts later this year, motivating the request for Google to run Siri directly on its servers.
I have bumbled around a data center or two in my 60-year work career. Some of these were in different countries. Others were adjacent to the machines running the FirstGov.gov search and retrieval service. Others were in small towns where no one was the wiser about what was zipping around. In each of these were log files, systems managers, technicians, and other skilled professionals.

An illustration of a hypothetical situation in a large AI data center at about 3 am on a Sunday morning. This Venice.ai generated image does not reflect how the world works in big time data centers. But you know AI: Hallucinations R’Us.
The most interesting thing my team and I learned came from one of these outfits, which I shall not name. An employee, whose name thankfully is lost in the mists of my dinobaby mind, told me, “Yeah, I am running bitcoin mining on the company’s servers. No one has a clue.”
Yep, no one has a clue. That’s possibly a risk when Apple allows the super estimable Google to run Apple customers’ AI queries.
Then I read “Some Apple AI Servers Are Reportedly Sitting Unused on Warehouse Shelves, Due to Low Apple Intelligence Usage.” This write up asserts:
The Apple finance team has apparently been frustrated about the costs of this duplicate infrastructure, but also unwilling to invest billions in overhauling the stack. There has apparently been several attempts inside the company to unify everything, but those projects have stalled several times over the last decade. For Private Cloud Compute specifically, the system is described as underpowered and perhaps more trouble than it’s worth.
Let’s assume that these two cited articles’ information is sort of accurate. What does a dinobaby like me make of the information?
Answer 1. Apple has zero management control over AI other than dithering. A little bit this way and then a little bit that way. Dither. Repeat.
Answer 2. Apple has a master plan but the fluidity of AI combined with the guillotine of China dependence have created the same type of distraction that has caused Pavel Durov to miss so far the AI opportunity at Telegram.
Answer 3. Apple’s users don’t care one way or another about Apple’s AI efforts. If an Apple user wants AI, just download an app. Problem solved.
Net net: Management churn, ho hum mobiles, impossible to differentiate iPads, stupid pop ups to use iCloud, and the China thing — Apple is now turning to a fellow traveler in Monopoly Land. I am not sure the Google can solve Apple’s problem in AI, but I wonder if Google’s technical team may just take a little peek at those Apple data. Nah, impossible. No engineer with admin privileges would check out what a customer was doing on a Google system. No way! Right, Apple?
Stephen E Arnold, March 5, 2026
AI: Errors? Hey, No Problemo.
March 5, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I love the AI razzle dazzle. Some of the functions available to dinobabies like me are semi-useful. However, I am generally unimpressed with some of the “magic” functions these systems provide. Probabilities, flawed data used for training them, and humanoid (for now) wizard programmers doing their thing make me cautious.

Thanks, Venice.ai. Good enough.
That’s why I got a chuckle from “Unbelievably Dangerous: Experts Sound Alarm after ChatGPT Health Fails to Recognize Medical Emergencies.” The write up reports as actual factual:
The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.
Medical writing is as wonky as the information output by crypto bros. Here’s my translation of the statement: AI will miss more than half of serious health problems. My hunch is that real doctors and real AI wizards will say, “Hey, this is one study” and “Wow, the sample is statistically flawed.”
Maybe.
The write up points out:
While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure. In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.
I understand that smart software is a work in progress. But MBAs and would-be world visionaries want AI now, now, now. Move fast. Yep, and break things. I suppose putting a person’s life in jeopardy is insignificant, trivial even.
Here’s the conclusion of the article:
Prof Paul Henman, a digital sociologist and policy expert with the University of Queensland, said: “This is a really important paper. “If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death.”…“It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users…”
Net net: With so much money and so many egos caught up in this “we have the answer” AI thing, why worry? Big tech has the answers, the lawyers, and the obsession to deliver reality their way.
Stephen E Arnold, March 5, 2026
The AI Booster. Believe It or Not
March 5, 2026
AI has changed the way work is done. It has also upset the typical corporate ladder, making your way to the top either harder or easier, according to ZDNet’s article: “AI Is Disrupting The Career Ladder – I Learned 5 Ways To Get To Leadership Anyway.” Mark Samuels shares tips he’s acquired and implemented to make it into leadership positions. His biggest claim is that proving you’re ready for responsibility is the way to climb the ladder.
He also explains that taking unusual opportunities, demonstrating commitment, staying humble, supporting the next generation, and demonstrating a hands-off style is the way to the top. I paused here and thought about this for a few minutes. Why? Because Samuels’s tips sound like the old way of making it to the top. He forgot to add important information about brown-nosing and being in the right place at the right time.
This advice is as old school as the Roman Empire. Taking unusual opportunities is probably the most risky, especially when it might be dangerous. Yes, it’s important to take risks, but you need to weigh the consequences. Don’t just do dumb stuff. There is proper advice about how to take unusual opportunities.
Barry Panayi, group chief data officer at the insurance firm Howden, shared:
“As he climbed into senior positions, Panayi told ZDNET, he looked for opportunities outside his comfort zone to prove his leadership credentials. One of Panayi’s most crucial development opportunities was taking on non-executive positions — with UK energy regulator Ofgem since 2020, and media company Reach since 2021. ‘Those positions really gave me perspective, because I was quite narrow,’ he said. ‘All I’d ever done was data. I felt like I wasn’t rounded enough, and being around the board table, contributing as a board member, forced me to consider other things.’”
Do step outside of your comfort zone. That’s something that can never be replaced by AI.
Whitney Grace, March 5, 2026
Part II of Our Andrei Grachev Essay Now Available
March 4, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
This is Stephen E Arnold. Part II of my Andrei Grachev essay is live. You can find "Part II. Grachev Aloft: Watching for HFT Prey" on the Bear Blog service. Like the other posts in my "Telegram Notes," I have adopted a very different style from that in my new book "The Telegram Labyrinth." No paywall, no ads, no registration.
Stephen E Arnold, March 4, 2026
The AI Problem: Getting Left Behind
March 4, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
After lots of clicks and learning that key features were in “gray,” I was able to read “Redefining the Software Engineering Profession for AI.” The write up explains a corollary to “home alone”; that is, left behind.
I waded through examples of AI output fixed up because a humanoid smarter than the AI spotted mistakes. Are there mistakes in AI output? If you ask a whiz kid at a big tech outfit (I shall not name names), the answer is, “Look at the score on this benchmark.” If you ask someone who knows about a specific topic, you may hear, “Hey, you have to double check this stuff.”

Thanks, Venice.ai. Good enough.
And there is a lot of stuff to check. That’s the main idea lurking behind the fancy lingo and the screenshots. The write up finally says:
Generative AI currently acts as seniority-biased technological change: It disproportionately amplifies engineers who already possess systems judgment, like taste for architecture, debugging under uncertainty, and operational intuition.
As a dinobaby, I am usually wrong by default. However, for me this means that a person who knows something cold is going to be in great demand. Why? The “older and more informed humans” can spot the AI mistakes. This is definitely good for senior types. The write up focuses on computer programming. I think the observation applies to other disciplines as well. I want to point out that the softer the user’s field, the less likely errors will be flagged and hopefully corrected. Question: Why? Answer: Programming works or it doesn’t. A squishy discipline like social science has more flexibility. Programming is brittle; explaining why a young female is unhappy is clay.
What’s the fix? I think the big idea is to go back to apprentice-type programs. A younger programmer with less experience works with a senior, more skilled programmer. Somehow the knowledge of the senior diffuses to the younger. At least that’s my take. Does it work? Sure, for skilled and adept less seasoned programmers. But we live in a multi-tasking, accelerationist environment. Will it work? Probably, but some real life data are needed.
The write up concludes with:
The future of software engineering will be defined not by the volume of code AI can generate but by how effectively humans learn, reason, and mature alongside these systems. Investing in early-in-career developers through deliberate preceptorship ensures today’s expertise becomes tomorrow’s intuition. In balancing automation with apprenticeship, we preserve the enduring vitality of the software engineering profession.
How will this play out with the TikTok-type programmer, financial engineer, or tax expert? Answer: Outputs are good enough. Look at these benchmark scores.
Stephen E Arnold, March 4, 2026
Voice Cloning Made Easy
March 4, 2026
Any voice, even those of long dead people, can be replicated via the power of AI. All it takes is a few voice recordings and BAM, an instantaneous voice that bends to your will. Geeky Gadgets reports there is new voice AI software that is more powerful than others on the market: “Qwen 3 TTS AI Released : Clone Any Voice For Free And Craft Rich Speech.”
Qwen 3 TTS is described as a new AI agent that “levels the playing field, offering unprecedented creative freedom to developers, researchers, and hobbyists alike. It’s not just a step forward; it’s a seismic shift in how we approach voice synthesis.” It’s an exciting proposition, because the possibilities of what can be done with the tool are amazing. Qwen 3 TTS was designed for creativity:
“…combines voice customization, multilingual capabilities, and emotional expression to deliver lifelike results. Whether you’re curious about designing bespoke voices for creative projects or exploring how this technology could transform industries like education and entertainment, there’s something here for everyone. But the real magic lies in its simplicity, what once required expensive resources and expertise is now available to anyone with a vision. The possibilities are as exciting as they are endless, and they might just change how you think about the voices around you.”
Once you get over the hype, the realization of how this software can be exploited sinks in. Phishing is the term used to describe scam artists who bait victims online. The new phrase for vocal scamming is “vishing.” Qwen 3 TTS is a tool, but it will be used for nefarious purposes by bad actors for vishing activities. It is likely that grandmothers will be transferring money when Billy or Mary call and say, “Gran, I need some money to help my roomy with a problem.”
Whitney Grace, March 4, 2026
AI Logic: Not the Logic You Recognize from Geometry, Folks
March 3, 2026
Remember that one science fiction movie where a super computer deemed humans as inferior and took over a space ship or society? In order to regain control of the computer, an intelligent, debonair space hero asked one simple question: “Why?” The computer encounters a paradox and it shuts itself down or explodes. Apparently real life AI experiences similar situations with simple logic problems, says Popular Mechanics: “Scientists Found AI’s Fatal Flaw—The Most Advanced Models Are Failing Basic Logic Tests.”
Carleton College, Caltech, and Stanford combined their research on LLMs, such as Claude and ChatGPT. They learned that LLMs are as prone to error as humans and might even perform worse than their creators. Here are their errors with cognitive reasoning:
- “LLMs perpetuate human errors like bias, and they make other human-like errors because they don’t have the intuitive scaffolding that helps us learn not to make those mistakes.
- LLMs lack core executive functions (working memory, cognitive flexibility, and inhibitory control) that help humans succeed in reasoning, leading to systemic failures in LLMs.
- LLMs are poor at abstract reasoning, like understanding relationships between intangible concepts (e.g. knowledge, trust, security) and picking out rules affecting small sets.
- LLMs show human-like confirmation bias toward information they already parse well.
- LLMs show order and anchoring biases, like overweighting the first example in a list of items.”
LLMs are even worse at social reasoning. They lack morals, can’t understand social rules, are easily manipulated, and given to hallucinations. The LLMs can’t “comprehend” basic logic such as two-hop reasoning, basic yes and no questions, and causal inference. That’s only the tip of the iceberg with LLMs.
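“Two-hop reasoning” sounds exotic, but it is just chaining two facts. Here is a toy sketch (my example, not one from the study) of the kind of chain the researchers say LLMs fumble:

```python
# Toy illustration of two-hop reasoning (my example, not the study's).
# Each hop is a single fact lookup; the "reasoning" is chaining them.
facts = {
    "capital_of": {"France": "Paris", "Japan": "Tokyo"},
    "river_of": {"Paris": "Seine", "Tokyo": "Sumida"},
}

def river_of_capital(country: str) -> str:
    capital = facts["capital_of"][country]  # hop 1: country -> capital
    return facts["river_of"][capital]       # hop 2: capital -> river

print(river_of_capital("France"))  # Seine
```

Ask a model the two hops separately and it usually nails both. Ask the chained question (“Which river runs through the capital of France?”) and, per this line of research, the failure rate jumps.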
It’s good to know that LLMs are not yet perfect. Each improvement opens the door for more hyperbolic marketing. The question I have is, “Will humans learn how to deal in a constructive manner with AI logic?” Some humans have that doom scrolling nailed. But AI logic?
Whitney Grace, March 3, 2026

