Big Tech AI Tries to Understand Real Life
March 6, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “OpenAI’s Compromise with the Pentagon Is what Anthropic Feared.” I want to be upfront. Every time I read or hear about MIT, I think Epstein Epstein Epstein. This translates to my being dismissive of [a] what the MIT thing outputs, [b] the integrity of the institution, and [c] what it brings to the knowledge party. Therefore, if you are into MIT, stop reading.
This particular write up is one of those crazy analyses contrasting how wizards perceive the world with how stuff actually works in the US government or any nation’s government. Whiz kids think they have something really cool. They give talks at conferences. Their moms and dads pester their connections about Timmy’s or Wendy’s great new thing. They do brown bag lunches in the bowels of the GSA. They trek to FDIC events in interesting locations. They write Substacks, blog posts, and Forbes thought leader articles. They stand in trade show booths squinting at name tags and look crestfallen when big-time people walk by their bright smiles.
The reality is that outfits want to make government sales, and if they want to close a deal and keep the deal, the people who sign those contracts expect vendors to do what they are told. Is this the optimal approach by governments? No. Is this an informed strategy? No. Is this a tactic to become best pals with vendors? No.
And guess what? No one in those governments’ procurement processes cares very much what a vendor wants. Sure, there is some flexibility. But one doesn’t have to be an MIT graduate or a donor like Mr. Epstein Epstein Epstein to figure out that the government is going to prevail. Even in countries that are obscure and unfamiliar to an American big tech outfit, the approach is the same: Read the terms of the deal, agree, get paid, and do what the client wants.

A group of AI wizards learns how life is versus how life should be. Thanks, Venice.ai. Good enough.
Painful, right?
The write up says:
In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon.
Hey, MIT writer-publisher thing, OpenAI got the message. I could suggest that MIT check out the history of MITRE to put my observations in context.
Everything is clear. A company that wants to do business with a government, regardless of country, needs to drop the crazy idea that governmental institutions care about the emotional zeitgeist of the whiz kids. I know that it takes time for some government professionals to grasp what one can do with a technology that is new, unfamiliar, and less friendly than making a call on an iPhone. However, once that insight arrives in the mind of a government professional, the mental orientation of the wizard is usually irrelevant. It’s noise. It’s a distraction. It’s unwanted. It’s infuriating.
The write up says:
The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use.
News flash: When the Department of War licenses a technology, that department (regardless of the nation state) is going to use that technology to complete the mission its leadership deems appropriate. If a company or a wizard cannot understand this concept, why are these firms and their wizards in the meeting and the procurement process? Go hunt for money elsewhere.
How about this statement from the write up:
But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.
The leadership of the big tech AI companies think they are rational. Those well-paid experts are not. The people in the government are not rational either. Why? They are humans who have interesting ways of responding to work, technology, and the context in which they find themselves.
Why did MIT embrace Epstein Epstein Epstein? The leadership of MIT made a decision. The big AI tech people made a decision. Neither seems to have been eager to walk away. Why not try to own up to your decisions? That’s called adulting.
Stephen E Arnold, March 6, 2026