Google Presents an Innovative Way to Say, “Generate Revenue”
December 9, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
One of my contacts sent me a link to an interesting document. Its title is “A Pragmatic Vision for Interpretability.” I am not sure about the provenance of the write up, but it strikes me as an output from legal, corporate communications, and wizards. First impression: Very lengthy. I estimate that it requires about 11,000 words to say, “Generate revenue.” My second impression: A weird blend of consulting speak and nervousness.

A group of Googlers involved in advanced smart software ideation get a phone call clarifying they have to hit revenue targets. No one looks too happy. The esteemed leader is on the conference room wall. He provides a North Star to the wandering wizards. Thanks, Venice.ai. Good enough, just like so much AI system output these days.
The write up is too long to meander through its numerous sections, arguments, and arm waving. I want to highlight three facets of the write up and leave it up to you to print this puppy out, read it on a delayed flight, and consider how different this document is from the no-output approach Google used when it was absolutely dead solid confident that its search-ad business strategy would rule the world forever. Well, forever seems to have arrived for Googzilla. Hence, be pragmatic. This, in my experience, is McKinsey speak for hit your financial targets or hit the road.
First, consider this selected set of jargon:
- Comparative advantage (maybe keep up with the other guys?)
- Load-bearing beliefs
- “Mech Interp” / “mechanistic interpretability” (as opposed to “classic” interp)
- Method minimalism
- North Star (is it the person on the wall in the cartoon or just revenue?)
- Proxy task
- SAE (maybe sparse autoencoders? see the sketch after this list)
- Steering against evaluation awareness (maybe avoiding real-world feedback?)
- Suppression of eval-awareness (maybe suppressing real-world feedback?)
- Time-box for advanced research
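For readers who, like this dinobaby, had to look up “SAE”: below is a minimal sketch of a sparse autoencoder of the sort the mech interp crowd uses to pull human-readable “features” out of a model’s activations. The dimensions, the L1 penalty, and everything else here are my own illustrative assumptions, not anything taken from the Google document.

```python
# Toy sparse autoencoder (SAE): decompose a model activation vector into a
# larger set of sparse "features," then reconstruct it. Mech interp work uses
# trained versions of this to hunt for human-interpretable features.
# All dimensions and coefficients below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_features = 64, 512            # features deliberately outnumber dims
W_enc = rng.normal(0.0, 0.02, (d_model, d_features))
W_dec = rng.normal(0.0, 0.02, (d_features, d_model))
b_enc = np.zeros(d_features)

def sae_forward(x, l1_coeff=1e-3):
    """Encode, reconstruct, and score one activation vector."""
    f = np.maximum(0.0, x @ W_enc + b_enc)       # ReLU -> sparse feature codes
    x_hat = f @ W_dec                            # linear reconstruction
    recon_loss = np.mean((x - x_hat) ** 2)       # fidelity term
    sparsity_loss = l1_coeff * np.abs(f).sum()   # L1 term pushes codes to zero
    return x_hat, recon_loss + sparsity_loss

x = rng.normal(size=d_model)                     # stand-in for one activation
_, loss = sae_forward(x)
print(f"toy SAE loss: {loss:.4f}")
```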
The document tries too hard to avoid saying, “Focus on stuff that makes money.” I think that, however, is what the word choice is trying to present in very fancy, quasi-baloney jargon.
Second, take a look at the three sets of fingerprints in what strikes me as a committee-written document.
- Researchers who want to keep following their ideas about smart software just as they have done at Google for many years
- Lawyers and art history majors who want to cover their tailfeathers when Gemini goes off the rails
- Google leadership who want money or at the very least research that leads to products.
I can see a group meeting virtually, in person, and in the trenches of a collaborative Google Doc until this masterpiece of management weirdness is given the green light for release. Google has become artful in make-work, wordsmithing, and pretend reconciliation of the battles among the different factions, city states, and empires within Google. One can almost anticipate how the head of ad sales reacts to money pumped into data centers and research groups who speak a language familiar to Klingons.
Third, consider why Google felt compelled to crank out a tortured document to nail on the doors of an AI conference. When I interacted with Google over a number of years, I did not meet anyone reminding me of Martin Luther. Today, if I were to return to Shoreline Drive, I might encounter a number of deep fakes armed with digital hammers and fervid eyes. I think the Google wants to make sure that no more Loons and Waymos become the butt of stand-up comedians on late night TV or (heaven forbid) TikTok. Consider the dead cat in the Mission and the dead puppy in what’s called (I think) the Western Addition. (I used to live in Berkeley, and I never paid much attention to the idiosyncratic names slapped on undifferentiable areas of the City by the Bay.)
I think Google leadership seeks three things in this document:
- To tell everyone it is focusing on stuff that sort of works. The crazy software that is just like Sundar is not on the to-do list
- To remind everyone at the Google that we have to pay for the big, crazy data centers in space, our own nuclear power plants, and the cost of the home brew AI chips. Ads alone are no longer going to be 24×7 money printing machines because of OpenAI
- To try to reduce the tension among the groups, cliques, and digital street gangs in the offices and the virtual spaces in which Googlers cogitate, nap, and use AI to be more efficient.
Net net: Save this document. It may become a historical artefact.
Stephen E Arnold, December 9, 2025
Telegram’s Cocoon AI Hooks Up with AlphaTON
December 5, 2025
[This post is a version of an alert I sent to some of the professionals for whom I have given lectures. It is possible that the entities identified in this short report will alter their messaging and delete their Telegram posts. However, the thrust of this announcement is directionally correct.]
Telegram, in the midst of its rapid expansion into decentralized artificial intelligence, announced a deal with AlphaTON Capital Corp. The Telegram post revealed that AlphaTON would be a flagship infrastructure and financial partner. The announcement was posted to the Cocoon Group within hours of AlphaTON getting clear of U.S. SEC “baby shelf” financial restrictions. AlphaTON promptly launched a $420.69 million securities push. Either Telegram and AlphaTON acted coincidentally, or Pavel Durov moved to make clear his desire to build a smart, Telegram-anchored financial service.
AlphaTON, a Nasdaq microcap formerly known as Portage Biotech, rebranded in September 2025. The “new” AlphaTON claims to be deploying Nvidia B200 GPU clusters to support Cocoon, Telegram’s confidential-compute AI network. The company’s pivot from oncology to crypto-finance and AI infrastructure was sudden. Plus, AlphaTON’s CEO Brittany Kaiser (best known for Cambridge Analytica) has allegedly interacted with Russian political and business figures during earlier data-operations ventures. If the allegations are accurate, Ms. Kaiser has connections to Russia-linked influence and financial networks. Telegram is viewed by some organizations like Kucoin as a reliable operational platform for certain financial activities.
Telegram has positioned AlphaTON as a partner and developer in the Telegram ecosystem. Firms like Huione Guarantee allegedly used Telegram for financial maneuvers that resulted in criminal charges. Other alleged uses of the Telegram platform have included other illegal activities identified in the more than a dozen criminal charges for which Pavel Durov awaits trial in France. Telegram’s instant promotion of AlphaTON, combined with the firm’s new ability to raise hundreds of millions, points to a coordinated strategy to build an AI-enabled financial services layer using Cocoon’s VAIC or virtual artificial intelligence complex.
The message seems clear. Telegram is not merely launching a distributed AI compute service; it is enabling a low-latency, secrecy-enshrouded AI-crypto financial construct. Telegram and AlphaTON both see an opportunity to profit from a fusion of distributed AI, cross-jurisdictional operation, and a financial payoff from transactions at scale. For me and my research team, the AlphaTON tie-up signals that Telegram’s next frontier may blend decentralized AI, speculative finance, and actors operating far from traditional regulatory guardrails.
In my monograph “Telegram Labyrinth” (available only to law enforcement, US intelligence officers, and cyber attorneys in the US), I argue that Telegram requires close monitoring and a new generation of intelware software. Yesterday’s tools were not designed for what Telegram is deploying on its own and with its partners. Thank you.
Stephen E Arnold, December 5, 2025, 10:34 am US Eastern time
AI Bubble? What Bubble? Bubble?
December 5, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
I read “JP Morgan Report: AI Investment Surge Backed by Fundamentals, No Bubble in Sight.” The “report” angle is interesting. It implies unbiased, objective information compiled and synthesized by informed individuals. The content, however, strikes me as a bit of fancy dancing.
Here’s what strikes me as the main point:
A recent JP Morgan report finds the current rally in artificial intelligence (AI) related investments to be justified and sustainable, with no evidence of a bubble forming at this stage.
Feel better now? I don’t. The report strikes me as bank marketing with a big dose of cooing sounds. You know, cooing like a mother to her month-old baby. Does the mother make sense? Nope. The point is that warm, cozy feeling the cooing imparts. The mother knows she is doing what is necessary to reduce the likelihood of the baby making noises for sustained periods. The baby knows that mom’s heart is thudding along, and the comfort speaks volumes.

Financial professionals in Manhattan enjoy the AI revolution. They know there is no bubble. I see bubbles (plural). Thanks, MidJourney. Good enough.
Sorry. The JP Morgan cooing is not working for me.
The write up says, quoting the estimable financial institution:
“The ingredients are certainly in place for a market bubble to form, but for now, at least, we believe the rally in AI-related investments is justified and sustainable. Capex is massive, and adoption is accelerating.”
What about this statement in the cited article?
JP Morgan contrasts the current AI investment environment to previous speculative cycles, noting the absence of cheap speculative capital or financial structures that artificially inflate prices. As AI investment continues, leverage may increase, but current AI spending is being driven by genuine earnings growth rather than assumptions of future returns.
After stating the “no bubble” argument three times, I think I understand.
Several observations:
- JP Morgan needed to make a statement that the AI data center thing, the depreciation issue, the power problem, and the potential for an innovation that derails the current LLM-type of processing are not big deals. These issues play no part in the non-bubble environment.
- The report is a rah rah for AI. Because there is no bubble, organizations should go forward and implement the current versions of smart software despite their proven “feature” of making up answers and failing to handle many routine human-performed tasks.
- The timing is designed to allow high-net-worth people a moment to reflect upon the wisdom of JP Morgan and consider moving money to the estimable financial institution for shepherding during what others think are effervescent moments.
My view: Consider the problems OpenAI has: [a] A need for something that knocks Googzilla off the sidewalk on Shoreline Drive and [b] more cash. Amazon — ever the consumer’s friend — is involved in making its own programmers use its smart software, not code cranked out by a non-Amazon service. Plus, Amazon is in building mode, but it allegedly has government money to spend, a luxury some other firms are denied. Oracle is looking less like a world beater in databases and AI and more like a media-type outfit. Perplexity is probably perplexed because there are rumors that it may be struggling. Microsoft is facing some backlash because of its [a] push to make Copilot everyone’s friend and [b] dealing with the flawed updates to its vaunted Windows 11 software. Gee, why is FileManager not working? Let’s ask Copilot. On the other hand, let’s not.
Net net: JP Morgan is marketing too hard, and I am not sure it is resonating with me as unbiased and completely objective. As sales collateral, the report is good. As evidence there is no bubble, nope.
Stephen E Arnold, December 5, 2025
AI Breaks Career Ladders
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
My father used to tell me that it was important to work at a company and climb the career ladder. I did not understand the concept. In my first job, I entered at a reasonably elevated level. I reported to a senior vice president and was given a big, messy project to fix up and make successful. In my second job, I was hired to report to the president of a “group” of companies. I don’t think I had a title. People referred to me as Mr. X’s “get things done person.” My father continued to tell me about the career ladder, but it did not resonate with me.
Thanks, Venice.ai. I fired five prompts before you came close to what I specified. Good work, considering.
Only later, when I ran my own small consulting firm, did the concept connect. I realized that as people worked on tasks, some demonstrated exceptional skill. I tried to find ways to expand those individuals’ capabilities. I think I succeeded, and several have contacted me years after I retired to tell me they were grateful for the opportunities I provided.
Imagine my surprise when I read “The Career Ladder Just Got Terminated: AI Kills Jobs Before They’re Born.” I understand. Coworkers have no way to learn and earn the right to pursue different opportunities in order to grow their capabilities.
The write up says:
Artificial intelligence isn’t just taking jobs. It’s removing the rungs of the ladder that turn rookies into experts.
Here’s a statement from the rock and roll magazine that will make some young, bright eyed overachievers nervous:
In addition to making labor more efficient, it [AI] actually makes labor optional. And the disruption won’t unfold over generations like past revolutions; it’s happening in real time, collapsing decades of economic evolution into a few short years.
Forget optional. If software can replace hard to manage, unpredictable, and good enough humans, AI will get the nod. The goal of most organizations is to generate surplus cash. Then that cash is disbursed to stakeholders, deserving members of the organization’s leadership, and lavish off site meetings, among other important uses.
Here’s another passage that unintentionally will make art history majors, programmers, and, yes, even some MBAs with the right stuff think about becoming a plumber:
And this AI job problem isn’t confined to entertainment. It’s happening in law, medicine, finance, architecture, engineering, journalism — you name it. But not every field faces the same cliff. There’s one place where the apprenticeship still happens in real time: live entertainment and sports.
Perhaps there will be an MBA Comedy Club? Maybe some computer scientists will lean into their athletic prowess for table tennis or quoits?
Here’s another cause of heartburn for the young job hunter:
Today, AI isn’t hunting our heroes; it’s erasing their apprentices before they can exist. The bigger danger is letting short-term profits dictate our long-term cultural destiny. If the goal is simply to make the next quarter’s numbers look good, then automating and cutting is the easy answer. But if the goal is culture, originality and progress, then the choice is just as clear: protect the training grounds, take risks on the unknown and invest in the people who will surprise us.
I don’t see the BAIT (big AI technology companies) leaning into altruistic behavior for society. These outfits want to win, knock off the competition, and direct the masses to work within the bowling alley of life between two gutters. Okay, job hunters, have at it. As a dinobaby, I have no idea what impact the early days of AI will have on job hunting. Did I mention plumbing?
Stephen E Arnold, December 2, 2025
What Can a Monopoly Type Outfit Do? Move Fast and Break Things Not Yet Broken
November 26, 2025
This essay is the work of a dumb dinobaby. No smart software required.
CNBC published “Google Must Double AI Compute Every 6 Months to Meet Demand, AI Infrastructure Boss Tells Employees.”
How does the math work out? Big numbers result, along with big power demands, pressure on suppliers, and, I think, an incentive to enter hyper-hype marketing mode.
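For the arithmetic-minded, here is the back-of-the-envelope version. The only input is the “double every six months” claim taken literally; the rest is my own illustrative Python:

```python
# If compute capacity doubles every 6 months, capacity after t months is
# 2 ** (t / 6) times today's. Illustrative arithmetic only.
for months in (6, 12, 24, 36, 60):
    multiplier = 2 ** (months / 6)
    print(f"{months:>2} months -> {multiplier:,.0f}x today's capacity")
# 60 months (five years) works out to 1,024x. Hence the power demands,
# the pressure on suppliers, and the hyper-hype marketing.
```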

Thanks, Venice.ai. Good enough.
The write up states:
Google’s AI infrastructure boss [maybe a fellow named Amin Vahdat, the leader responsible for Machine Learning, Systems and Cloud AI?] told employees that the company has to double its compute capacity every six months in order to meet demand for artificial intelligence services.
Whose demand exactly? Commercial enterprises, Google’s other leadership, or people looking for a restaurant in an unfamiliar town?
The write up notes:
Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.
Faced with this robust demand, what differentiates the Google from other monopoly-type companies? CNBC delivers a bang-up answer to my question:
Google’s “job is of course to build this infrastructure but it’s not to outspend the competition, necessarily,” Vahdat said. “We’re going to spend a lot,” he said, adding that the real goal is to provide infrastructure that is far “more reliable, more performant and more scalable than what’s available anywhere else.” In addition to infrastructure buildouts, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018. Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years.
I read this as: spend the same as a competitor, but, because Google is Googley, the company will deliver more reliable, faster, and more scalable AI than the non-Googley competition. Google is focused on efficiency. To me, Google bets that its engineering and programming expertise will give it an unbeatable advantage. The VP of Machine Learning, Systems and Cloud AI does not mention the fact that Google has its magical advertising system and about 85 percent of the global Web search market via its assorted search-centric services. Plus one must not overlook the fact that the Google is vertically integrated: Chips, data centers, data, smart people, money, and smart software.
The write up points out that Google knows there are risks with its strategy. But FOMO is more important than worrying about costs and technology. But what about users? Sure, okay, eyeballs, but I think Google means humanoids who have time to use Google whilst riding in Waymos and hanging out waiting for a job offer to arrive on an Android phone. Google doesn’t need to worry. Plus it can just bump up its investments until competitors are left dying in the desert known as Death Vall-AI.
After being beaten to the draw in the PR battle with Microsoft, the Google thinks it can win the AI jackpot. But what if it fails? No matter. The AI folks at the Google know that the automated advertising system that collects money at numerous touch points is, for now, churning away 24×7. Googzilla may just win because it is sitting on the cash machine of cash machines. Even counterfeiters in Peru and Vietnam cannot match Google’s money spinning capability.
Is it game over? Will regulators spring into action? Will Google win the race to software smarter than humans? Sure. Even if part of the push to own the next big thing is puffery, the Google is definitely confident that it will prevail just like Superman and truth, justice, and the American way have. The only hitch in the git-along may be having captured enough electrical service to keep the lights on and the power flowing. Lots of power.
Stephen E Arnold, November 26, 2025
Big AI Tech: Bait and Switch with Dancing Numbers?
November 20, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The BAIT outfits are a mix of public and private companies. Financial reports — even for staid outfits — can give analysts some eye strain. Footnotes in dense text can contain information relevant to a paragraph or a number that appears elsewhere in a document. I have been operating on a simple idea: The money flowing into AI is part of the “we can make it work” approach of many high technology companies. For the BAIT outfits, each enhancement has delivered a system which is not making big strides. Incrementalism or failure seems to be what the money has been buying. Part of the reason is that BAIT outfits need and lust for a play that will deliver a de facto monopoly in smart software.
Right now, the BAIT outfits depend on the Googley transformer technology. That method is undergoing enhancements, tweaks, refinements, and other manipulations to deliver more. The effort is expensive, and — based on my personal experience — not delivering what I expect. For example, You.com (one of the interface outfits that puts several models under one browser “experience”) told me I had been doing too many queries. When I contacted the company, I was told to fiddle with my browser and take other steps unrelated to the error message You.com’s system generated. I told You.com to address their error message, not tell me what to do with a computer that works with other AI services. I have Venice.ai ignoring prompts. Prior to updates, the Venice.ai system did a better job of responding to prompts. ChatGPT is now unable to concatenate three or four responses to quite specific prompts and output a Word file. I got a summary of a couple hundred words instead of the outputs, several of which were wild and crazy.

Thanks, Venice.ai. Close enough for horse shoes.
When I read “Michael Burry Doubles Down On AI Bubble Claims As Short Trade Backfires: Says Oracle, Meta Are Overstating Earnings By ‘Understating Depreciation’,” I realized that others are looking more carefully at the BAIT outfits and what they report. [Mr. Burry is the head of Scion, an investment firm that is into betting certain stock prices will crater.] The article says:
In a post on X, Burry accused tech giants such as Meta Platforms Inc. and Oracle Corp. of “understating depreciation” by extending the useful life of assets, particularly chips and AI infrastructure.
This is an MBA way of saying, “These BAIT outfits are ignoring that the value of their fungible stuff like chips, servers, and data center plumbing is cratering.” Software does processes, usually mindlessly. But when one pushes zeros and ones through software, the problems appear. These can be as simple as nothing happening or a server just sitting there blinking. Yikes, bottlenecks. The fix is usually just reboot and get the system up and running. The next step is to buy more of whatever hardware appeared to be the problem. Sure, the software wizards will look at their code, but that takes time. The approach is to spend more for compute or bandwidth and then slipstream the software fix into the workflow.
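To make the Burry arithmetic concrete, here is a toy straight-line depreciation calculation. The dollar figures are invented for illustration; they are not Meta’s or Oracle’s actual numbers:

```python
# Straight-line depreciation: annual expense = asset cost / assumed useful life.
# Stretching the assumed life of GPU gear shrinks the expense and lifts
# reported earnings. Hypothetical figures, not any company's actual numbers.
gpu_capex = 30_000_000_000                    # pretend $30B of chips and servers
income_before_depreciation = 50_000_000_000   # pretend operating income

for useful_life_years in (3, 6):
    annual_depreciation = gpu_capex / useful_life_years
    reported_income = income_before_depreciation - annual_depreciation
    print(f"{useful_life_years}-year life: depreciation "
          f"${annual_depreciation / 1e9:.0f}B, reported income "
          f"${reported_income / 1e9:.0f}B")
# Doubling the assumed life from 3 to 6 years halves the expense and adds
# $5B a year to reported income in this toy example.
```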
In parallel with the “spend to get going” approach, the vendors of processing chips suitable for handling flows of data are improving. The cadence is measured in months. But when new chips become available, BAIT outfits want them. Like building highways, a new highway does not solve a traffic problem. The flow of traffic increases until the new highway is just as slow as the old highway. The fix, which is similar to the BAIT outfits’ approach, is to build more highways. Meanwhile software fixes are slow and the chip cadence marches along.
Thus, understating depreciation and probably some other financial fancy dancing disguise how much cash is needed to keep those less and less impressive incremental AI innovations coming. The idea is that someone, somewhere in BAIT world will crack the problem. A transformer-type breakthrough will solve the problems AI presents. Well, that’s the hope.
The article says:
Burry referred to this as one of the “more common frauds of the modern era,” used to inflate profits, and is something that he said all of the hyperscalers have since resorted to. “They will understate depreciation by $176 billion” through 2026 and 2028, he said.
Mr. Burry is a contrarian, and contrarians are not as popular as those who say, “Give me money. You will make a bundle.”
There are three issues involved with BAIT and the somewhat fluffy financial situation AI companies in general face:
- China continues to put pressure on for-profit outfits in the US. At the same time, China has been forced to find ways to “do” AI with less potent processors.
- China has more power generation tricks up its sleeve. Examples range from the wild and crazy mile-wide dam with hydro to solar power, among other options. The US is lagging in power generation and alternative energy solutions. The cost of AI’s power is going to be a factor forcing BAIT outfits to do some financial two-steps.
- China wants to put pressure on the US BAIT outfits as part of its long term plan to become the Big Dog in global technology and finance.
So what do we have? We have technical debt. We have a need to buy more expensive chips and data centers to house them. We have financial frippery to make the AI business look acceptable.
Is Mr. Burry correct? Those in the AI is okay camp say, “No. He’s the GameStop guy.”
Maybe Microsoft’s hiring of social media influencers will resolve the problem and make Microsoft number one in AI? Maybe Google will pop another transformer type innovation out of its creative engineering oven? Maybe AI will be the next big thing? How patient will investors be?
Stephen E Arnold, November 20, 2025
AI Is a Winner: The Viewpoint of an AI Believer
November 13, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Bubble, bubble, bubble. This is the Silicon Valley version of Epstein, Epstein, Epstein. If you are worn out from the doom and gloom of smart software’s penchant for burning cash and ignoring the realities of generating electric power quickly, you will want to read “AI Is Probably Not a Bubble: AI Companies Have Revenue, Demand, and Paths to Immense Value.” [Note: You may encounter a paywall when you attempt to view this article. Don’t hassle me. Contact those charming visionaries at Substack, the new new media outfit.]
The predictive impact of the analysis has been undercut by a single word in the title: “Probably.” A weasel word appears because the author’s enthusiasm for AI is a bit of contrarian thinking presented in thought leader style. Probably a pig can fly somewhere at some time. Yep, confidence.
Here’s a passage I found interesting:
… unlike dot-com companies, the AI companies have reasonable unit economics absent large investments in infrastructure and do have paths to revenue. OpenAI is demonstrating actual revenue growth and product-market fit that Pets.com and Webvan never had. The question isn’t whether customers will pay for AI capabilities — they demonstrably are — but whether revenue growth can match required infrastructure investment. If AI is a bubble and it pops, it’s likely due to different fundamentals than the dot-com bust.
Ah, ha, another weasel word: “Whether.” Is this AI bubble going to expand infinitely or will it become a Pets.com?
The write up says:
Instead, if the AI bubble is a bubble, it’s more likely an infrastructure bubble.
I think the ground beneath the argument has shifted. The “AI” is a technology like “the Internet.” The “Internet” became a big deal. AI is not “infrastructure.” That’s a data center with fungible objects like machines and connections to cables. Plus, the infrastructure gets “utilized immediately upon completion.” But what if [a] demand decreases due to lousy AI value, [b] AI becomes a net inflater of ancillary costs like a Microsoft subscription to Word, or [c] electrical power is either not available or too costly to keep a couple of football fields of servers running 24×7?
I liked this statement, although I am not sure some of the millions of people who cannot find jobs will agree:
As weird as it sounds, an AI eventually automating the entire economy seems actually plausible, if current trends keep continuing and current lines keep going up.
Weird. Cost cutting is a standard operating tactic. AI is an excuse to dump expensive and hard-to-manage humans. Whether AI can do the work is another question. Shifting from AI delivering value to server infrastructure shows one weakness in the argument. Ignoring the societal impact of unhappy workers seems to me illustrative of taking finance classes, not 18th century history classes.
Okay, here’s the wind up of the analysis:
Unfortunately, forecasting is not the same as having a magic crystal ball and being a strong forecaster doesn’t give me magical insight into what the market will do. So honestly, I don’t know if AI is a bubble or not.
The statement is a combination of weasel words, crawfishing away from the thesis of the essay, and an admission that this is a marketing thought leader play. That’s okay. LinkedIn is stuffed full of essays like this big insight:
So why are industry leaders calling AI a bubble while spending hundreds of billions on infrastructure? Because they’re not actually contradicting themselves. They’re acknowledging legitimate timing risk while betting the technology fundamentals are sound and that the upside is worth the risk.
The AI giants are savvy cats, are they not?
Stephen E Arnold, November 13, 2025
US Government Procurement Changes: Like Silicon Valley, Really? I Mean For Sure?
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I learned about the US Department of War overhaul of its procurement processes by reading “The Department of War Just Shot the Accountants and Opted for Speed.” Rumblings of procurement hassles have been reaching me for years. The cherished methods of capture planning, statement of work consulting, proposal writing, and evaluating bids consume many billable hours for consultants. The processes involve thousands of government professionals: Lawyers, financial analysts, technical specialists, administrative professionals, and consultants. I can’t omit the consultants.
According to the essay written by Steve Blank (a person unfamiliar to me):
Last week the Department of War finally killed the last vestiges of Robert McNamara’s 1962 Planning, Programming, and Budgeting System (PPBS). The DoW has pivoted from optimizing cost and performance to delivering advanced weapons at speed.
The write up provides some of the history of the procurement process enshrined in such documents as the FAR or Federal Acquisition Regulations. If you want the details Mr. Blank provides, I urge you to read his essay in full.
I want to highlight what I think is an important point about the recent changes. Mr. Blank writes:
The war in Ukraine showed that even a small country could produce millions of drones a year while continually iterating on their design to match changes on the battlefield. (Something we couldn’t do.) Meanwhile, commercial technology from startups and scaleups (fueled by an immense pool of private capital) has created off-the-shelf products, many unmatched by our federal research development centers or primes, that can be delivered at a fraction of the cost/time. But the DoW acquisition system was impenetrable to startups. Our Acquisition system was paralyzed by our own impossible risk thresholds, its focus on process not outcomes, and became risk averse and immoveable.
Based on my experience, much of it working as a consultant on different US government projects, the horrific “special operation” delivered a number of important lessons about modern warfare. Reading between the lines of the passage cited above, three important items of information emerged from what I view as an illegal international event:
- Under certain conditions human creativity can blossom and then grow into major business operations. I would point to Ukraine’s innovations in the use of drones, in how the drones are deployed in battle conditions, and in how the basic “drone idea” reduces the effectiveness of certain traditional methods of warfare.
- Despite disruptions to transportation and certain third-party products, Ukraine demonstrated that just-in-time production facilities can be made operational in weeks, sometimes days.
- The combination of innovative ideas, battlefield testing, and right-sized manufacturing demonstrated that a relatively small country can become a world-class leader in modern warfighting equipment, software, and systems.
Russia, with its ponderous planning and procurement process, has become the fall guy for a president who was a stand-up comedian. Who is laughing now? It is not the perpetrators of the “special operation.” The joke, as some might say, is on the individuals who created the “special operation.”
Mr. Blank states about the new procurement system:
To cut through the individual acquisition silos, the services are creating Portfolio Acquisition Executives (PAEs). Each Portfolio Acquisition Executive (PAE) is responsible for the entire end-to-end process of the different Acquisition functions: Capability Gaps/Requirements, System Centers, Programming, Acquisition, Testing, Contracting and Sustainment. PAEs are empowered to take calculated risks in pursuit of rapidly delivering innovative solutions.
My view of this type of streamlining is that it will become less flexible over time. I am not sure when the ossification will commence, but bureaucratic systems, no matter how well designed, morph and become traditional bureaucratic systems. I am not going to trot out the academic studies about the impact of process, auditing, and legal oversight on any efficient process. I will plainly state that the bureaucracies to which I have been exposed in the US, Europe, and Asia are fundamentally the same.

Can the smart software helping enable the Silicon Valley approach to procurement handle the load and keep the humanoids happy? Thanks, Venice.ai. Good enough.
Ukraine is an outlier when it comes to the organization of its warfighting technology. Perhaps other countries, if subjected to a similar type of “special operation,” would behave as Ukraine has. Whether I was giving lectures for the Japanese government or dealing with issues related to materials science for an entity on Clarendon Terrace, the approach, rules, regulations, special considerations, etc. were generally the same.
The question becomes, “Can a new procurement system in an environment not at risk of extinction demonstrate the speed, creativity, agility, and productivity of the Ukrainian model?”
My answer is, “No.”
Mr. Blank writes before he digs into the new organizational structure:
The DoW is being redesigned to now operate at the speed of Silicon Valley, delivering more, better, and faster. Our warfighters will benefit from the innovation and lower cost of commercial technology, and the nation will once again get a military second to none.
This is an important phrase: Silicon Valley. It is the model for making the US Department of War into a more flexible and speedy entity, particularly with regard to procurement, the use of smart software (artificial intelligence), and management methods honed since Bill Hewlett and Dave Packard sparked the garage myth.
Silicon Valley has been a model for many organizations and countries. However, who thinks much about the Silicon Fen? I sure don’t. I would wager a slice of cheese that many readers of this blog post have never, ever heard of Sophia Antipolis. Everyone wants to be a Silicon Valley and a high-technology, move-fast-and-break-things outfit.
But there is only one Silicon Valley. Now the question is, “Will the US government be a successful Silicon Valley, or will it fizzle out?” Based on my experience, I want to go out on a very narrow limb and suggest:
- Cronyism was important to Silicon Valley, particularly for funding and lawyering. The “new” approach to Department of War procurement is going to follow a similar path.
- As the stakes go up, growth becomes more important than fiscal considerations. As a result, the cost of becoming bigger, faster, cheaper spikes. Costs kill off the majority of Silicon Valley start-ups. The failure rate is high, and it is exacerbated by the need of the winners to continue to win.
- Silicon Valley management styles produce some negative consequences. Often overlooked are such modern management methods as [a] a lack of common sense, [b] decisions based on entitlement or short term gains, and [c] a general indifference to the social consequences of an innovation, a product, or a service.
If I look forward based on my deeply flawed understanding of this Silicon Valley revolution, I see monopolistic behavior emerging. Bureaucracies will emerge because people working for other people create rules, procedures, and processes to minimize the craziness of the go-fast-and-break-things activities. Workers create bureaucracies to deal with chaos, not cause chaos.
Mr. Blank’s essay strikes me as generally supportive of this reinvention of the Federal procurement process. He concludes with:
Let’s hope these changes stick.
My personal view is that they won’t. Ukraine created a wartime Silicon Valley in a real-time, shoot-and-survive conflict. The urgency is not parked in a giant building in Washington, DC, or a Silicon Valley dream world. A more pragmatic approach is to partition procurement methods: Apply Silicon Valley thinking in certain classes of procurement; modify the FAR to streamline certain processes; and leave some of the procedures unchanged.
AI is a go-fast-and-break-things technology. It also hallucinates. Drones from Silicon Valley companies don’t work in Ukraine. I know because someone with first-hand information told me. What will the new methods of procurement deliver? Answer: Drones that won’t work in a modern asymmetric conflict. With decisions involving AI, I sure don’t want to find myself in a situation in which smart software makes stuff up or operates on digital mushrooms.
Stephen E Arnold, November 12, 2025
Agentic Software: Close Enough for Horse Shoes
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I read a document that I would describe as tortured. The lingo was trendy. The charts and graphs sported trendy colors. The data gathering seemed to be a mix of “interviews” and other people’s research. Plus the write up was a bit scattered. I prefer the rigidity of old-fashioned organization. Nevertheless, I did spot one chunk of information that I found interesting.
The title of the research report (sort of an MBA- or blue chip consulting firm-type of document) is “State of Agentic AI: Founder’s Edition.” I think it was issued in March 2025, but with backdating popular, who knows. I had the research report in my files, and yesterday (November 3, 2025) I was gathering some background information for a talk I am giving on November 6, 2025. The document walked through data about the use of software to replace people. Actually, the smart software agents generally do several things according to the agent vendors’ marketing collateral. The cited document restated these items this way:
- Agents are set up to reach specific goals
- Agents are used to reason, which means “break down their main goal … into smaller manageable tasks and think about the next best steps.”
- Agents operate without any humans in India or Pakistan working invisibly behind the scenes
- Agents can consult a “memory” of previous tasks, “experiences,” work, etc.
Agents, when properly set up and trained, can perform about as well as a human. I came away from the tan and pink charts with a ballpark figure of 75 to 80 percent reliability. Close enough for horseshoes? Yep.
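Here is why 75 to 80 percent makes this dinobaby nervous. A minimal sketch, assuming (loosely) that each step in an agent’s chain succeeds independently at the 80 percent figure; real workflows are messier:

```python
# Per-step reliability compounds across a multi-step agentic workflow.
# Assumes independent steps at the ~80% figure from the report's charts.
per_step = 0.80
for steps in (1, 2, 5, 10):
    end_to_end = per_step ** steps
    print(f"{steps:>2} steps -> {end_to_end:.0%} chance of end-to-end success")
# One step is horseshoe-adjacent at 80%. Five steps is about 33%.
# Ten steps is about 11%. Good enough, apparently.
```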
There is a rundown of pricing options. Pricing seems to be a challenge for the vendors, with API usage charges and traditional software licensing used by a third of the agentic vendors.
Now here’s the most important segment from the document:
We asked founders in our survey: “What are the biggest issues you have encountered when deploying AI Agents for your customers? Please rank them in order of magnitude (e.g. Rank 1 assigned to the biggest issue)” The results of the Top 3 issues were illuminating: we’ve frequently heard that integrating with legacy tech stacks and dealing with data quality issues are painful. These issues haven’t gone away; they’ve merely been eclipsed by other major problems. Namely:
- Difficulties in integrating AI agents into existing customer/company workflows, and the human-agent interface (60% of respondents)
- Employee resistance and non-technical factors (50% of respondents)
- Data privacy and security (50% of respondents).
Here’s the chart tallying the results:

Several ideas crossed my mind as I worked through this research data:
- Getting the human-software interface right is a problem. I know from my work at places like the University of Michigan, the Modern Language Association, and Thomson-Reuters that people have idiosyncratic ways to do their jobs. Two people with similar jobs add the equivalent of extra dashboard lights and yard gnomes to the process. Agentic software at this time is not particularly skilled in the dashboard LED and concrete gnome facets of a work process. Maybe someday, but right now, that’s a common deal breaker. Employees say, “I want my concrete unicorn, thank you.”
- Humans say they are into mobile phones, smart in-car entertainment systems, and customer service systems that do not deliver any customer service whatsoever. As somebody from Harvard said in a lecture: “Change is hard.” Yeah, and it may not get any easier if the humanoid thinks he or she will be allowed to find their future pushing burritos at the El Nopal Restaurant in the near future.
- Agentic software vendors assume that licensees will allow their creations to suck up corporate data, keep company secrets, and avoid disappointing customers by presenting proprietary information to a competitor. Security in “regular” enterprise software is a bit of a challenge. Security in a new type of agentic software is likely to be the equivalent of a ride on a roller coaster that has tossed several middle school kids to their deaths and cut off the foot of a popular female. She survived, but now has a non-smart, non-human replacement.
Net net: Agentic software will be deployed. Most of its work will be good enough. Why will this be tolerated in personnel, customer service, loan approvals, and similar jobs? The answer is reduced headcounts. Humans cost money to manage. Humans want health care. Humans want raises. Software which is good enough seems to cost less. Therefore, welcome to the agentic future.
Stephen E Arnold, November 11, 2025
AI Dreams Are Plugged into Big Rock Candy Mountain
November 5, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
One of the niches in the power generation industry is demand forecasting. When I worked at Halliburton Nuclear, I sat in meetings. One feature of these meetings was diagrams. I don’t have any of these diagrams because there were confidentiality rules. I followed those. This is what some of the diagrams resembled:

Source: https://mavink.com/
When I took a job at Booz, Allen, the firm had its own demand experts. The diagrams favored by one of the utility rate and demand experts looked like this. Note: Booz, Allen had rules, so the diagram comes from the cited source:

Source: https://vtchk.ru/photo/demand-curve/16
These curves speak volumes to the people who fund, engineer, and construct power generation facilities. The main idea behind these semi-abstract curves is that balancing demand and supply is important. The price of electricity depends on figuring out the probable relationship among the demand for power, the available supply, and the supply that will come on line at estimated times in the future. The price people and organizations pay for electricity depends on these types of diagrams, the reams of data analysts crunch, and what a group of people sitting in a green conference room at a plastic table agrees the curves mean.
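Here is a toy version of what those curves encode: power gets expensive in a hurry as demand approaches the supply actually on line. The pricing function and the numbers are mine, invented for illustration; a real rate analyst would use reams of data and a green conference room:

```python
# Toy illustration of why the demand/supply balance drives price: the closer
# demand gets to the capacity actually on line, the pricier the marginal
# megawatt. The function and all numbers are invented for illustration.
def clearing_price(demand_mw, capacity_mw, base_price=40.0):
    """Rough price curve: cheap with ample slack, spiking near capacity."""
    utilization = min(demand_mw / capacity_mw, 0.999)  # cap to avoid blow-up
    return base_price / (1.0 - utilization)

capacity = 10_000  # MW on line
for demand in (5_000, 8_000, 9_500, 9_900):
    price = clearing_price(demand, capacity)
    print(f"demand {demand:>5} MW -> ~${price:,.0f}/MWh")
# Add a giant data center's load to a tight grid and the curve does the rest.
```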
A recent report from Turner & Townsend (a UK consulting outfit) identifies some trends in the power generation sector with some emphasis on the data centers required for smart software. You can work through the report on the Turner & Townsend Web site by clicking this link. The main idea is that the power demand of the huge AI-centric data centers needed for the Googley transformer-centric approach to smart software outstrips the available supply.
The response to this from the big AI companies is, “We can put servers in orbit” and “We can build small nuclear reactors and park them near the data centers” and “We can buy turbines and use gas or other carbon fuels to power our data centers.” These are comments made by individuals who choose not to look at the wonky type of curves I show.
It takes time to build a conventional power generation facility. The legal process in the US has traditionally been difficult and expensive. A million dollars won’t even pay for a little environmental impact study. Lawyers can cost more than a rail car loaded with the specialized materials required for nuclear reactors. The costs for the PR required to place a baby nuke in Memphis next to a big data center may be more expensive than buying some Google ads and hiring a local marketing firm. Some people may not feel comfortable with a new, unproven baby nuke in their neighborhood. Coal- and oil-fired plants invite certain types of people to mount noisy and newsworthy protests. Putting a data center in orbit poses some additional paperwork challenges and a little bit of extra engineering work.
So what does the big, detailed report show? Here’s my diagram of the power, demand, and price future with those giant data centers in the US. You can work out the impact on non-US installations:

This diagram was whipped up by Stephen E Arnold.
The message in these curves reflects one of the “challenges” identified in the Turner & Townsend report: Cost.
What does this mean to those areas of the US where Big AI Boys plan to build large data centers? Answer: Their revenue streams need to be robust and their funding sources have open wallets.
What does this mean for the cost of electricity to consumers and run-of-the-mill organizations? Answer: Higher costs, brownouts, and fancy new meters that can adjust prices and current on the fly. Crank up the data center, and the Super Bowl broadcast may not reach some homes.
What does this mean for ubiquitous, 24×7 AI availability in software, home appliances, and mobile devices? Answer: Higher costs, brownouts, and degraded services.
How will the incredibly self-aware, other-centric, ethical senior managers at AI companies respond? Answer: No problem. Think thorium reactors and data centers in space.
Also, the cost of building new power generation facilities is not a problem for some Big Dogs. The time required for licensing, engineering, and construction? No problem. Just go fast, break things.
And overcoming resistance to turbines next to a school or a small thorium reactor in a subdivision? Hey, no problem. People will adapt or they can move to another city.
What about the engineering and the innovation? Answer: Not to worry. We have the smartest people in the world.
What about common sense and self awareness? Response: Yo, what do those terms mean? Are they synonyms for disco biscuits?
The next big thing lives on Big Rock Candy Mountain.
Stephen E Arnold, November 5, 2025

