Surprise! Countries Not Pals with the US Are Using AI to Spy. Shocker? Hardly
November 17, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The Beeb is a tireless “real” news outfit. Like some Manhattan newscasters, fixing up reality to make better stories, the BBC allowed a couple of high-profile members of leadership to find their future elsewhere. Maybe the chips shop in Slough?

Thanks, Venice.ai. You are definitely outputting good enough art today.
I am going to suspend my disbelief and point to a “real” news story about a US company. The story is “AI Firm Claims Chinese Spies Used Its Tech to Automate Cyber Attacks.” The write up reveals information that should not surprise anyone except the Beeb. The write up reports:
The makers of artificial intelligence (AI) chatbot Claude claim to have caught hackers sponsored by the Chinese government using the tool to perform automated cyber attacks against around 30 global organizations. Anthropic said hackers tricked the chatbot into carrying out automated tasks under the guise of carrying out cyber security research. The company claimed in a blog post this was the “first reported AI-orchestrated cyber espionage campaign”.
What’s interesting is that Anthropic itself was surprised. If Google and Microsoft are making smart software part of the “experience,” why wouldn’t bad actors avail themselves of the tools. Information about lashing smart software to a range of online activities is not exactly a secret.
What surprises me about this “news” is:
- Why is Anthropic spilling the beans about a nation state using its technology. Once such an account is identified, block it. Use pattern matching to determine if others are doing substantially similar exploits. Block those. If you want to become a self appointed police professional, get used to the cat-and-mouse game. You created the system. Deal with it.
- Why is the BBC presenting old information as something new? Perhaps its intrepid “real” journalists should pay attention to the public information distributed by cyber security firms? I think that is called “research”, but that may be surfing on news releases or running queries against ChatGPT or Gemini. Why not try Qwen, the China-affiliated system.
- I wonder why the Google-Anthropic tie up is not mentioned in the write up. Google released information about a quite specific smart exploit a few months ago. Was this information used by Anthropic to figure out that an bad actor was an Anthropic user? Is there a connection here? I don’t know, but that’s what investigative types are supposed to consider and address.
My personal view is that Anthropic is positioning itself as a tireless defender of truth, justice, and the American way. The company may also benefit from some of Google’s cyber security efforts. Google owns Mandiant and is working hard to make the Wiz folks walk down the yellow brick road to the Googleplex.
Net net: Bad actors using low cost, subsidized, powerful, and widely available smart software is not exactly a shocker.
Stephen E Arnold, November 17, 2025
Danes May Ban Social Media for Kids
November 17, 2025
Australia’s ban on social media for kids under 16 goes into effect December 10. Now another country is pursuing a similar approach. Euro News reports, “Denmark Wants to Ban Access to Social Media for Children Under 15.” We learn:
“The move, led by the Ministry of Digitalisation, would set the age limit for access to social media but give some parents – after a specific assessment – the right to give consent to let their children access social media from age 13. Such a measure would be among the most sweeping steps yet by a European Union government to address concerns about the use of social media among teens and younger children, which has drawn concerns in many parts of an increasingly online world. … The Danish digitalisation ministry statement said the age minimum of 15 would be introduced for ‘certain’ social media, though it did not specify which ones.”
If the Danes follow Australia’s example, those platforms could include TikTok, Facebook, Snapchat, Reddit, Kick, X, Instagram, and YouTube. The write-up describes the motivation behind the push:
“A coalition of lawmakers from the political right, left and centre ‘are making it clear that children should not be left alone in a digital world where harmful content and commercial interests are too much a part of shaping their everyday lives and childhoods,’ the ministry said. ‘Children and young people have their sleep disrupted, lose their peace and concentration, and experience increasing pressure from digital relationships where adults are not always present,’ it said. ‘This is a development that no parent, teacher, or educator can stop alone’.”
That may be true. And it is certainly true that social media poses certain dangers to children and teens. But how would the ban be enforced? The statement does not say. Teens, after all, famously find ways to get around security measures. If only there had been a way for platforms to know about these risks sooner.
Cynthia Murrell, November 17, 2025
Despite Assurances, AI Firms’ Future May Depend on Replacing Human Labor
November 17, 2025
For centuries, the market economy has been powered by workers. Human ones. Sure, they have tended to get the raw end of any deal, but at least their participation has been necessary. Now one industry has a powerful incentive to change that. Futurism reports, “The AI Industry Can’t Profit Unless It Replaces Human Jobs, Warns Man Who Helped Create It.” Writer Joe Wilkins tells us:
“According to Nobel laureate Geoffrey Hinton — often called ‘the godfather of AI’ for his contributions to the tech — the future for AI in its current form is likely to be an economic dystopia. ‘I think the big companies are betting on it causing massive job replacement by AI, because that’s where the big money is going to be,’ he warned in a recent interview with Bloomberg. Hinton was commenting on enormous investments in the AI industry, despite a total lack of profit so far. By typical investment standards, AI should be a pariah.”
As an illustration, Wilkins notes OpenAI alone lost $11.5 billion in revenue just last quarter. The write-up continues:
“Asked by Bloomberg whether these jaw dropping investments could ever pay off without eviscerating the job market, Hinton’s reply was telling. ‘I believe that it can’t,’ he said. ‘I believe that to make money you’re going to have to replace human labor.’ For many who study labor and economics, it’s not a statement to be made lightly. Since it first emerged out of feudalism centuries ago, the market economy has relied on the exploitation of human labor — looms, steel mills, and automobile plants straight up can’t run without it.”
Until now, apparently. Or soon. In the Bloomberg interview, Hinton observes the fate of workers depends on “how we organize society.” Will the out-of-work masses starve? Or will society meet everyone’s basic needs, freeing us to live fulfilling lives? And who gets to make those decisions?
Cynthia Murrell, November 14, 2025
AI and Learning: Does a Retail Worker Have to Know How to Make Change?
November 14, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I love the concept of increasingly shallow understanding. There is a simple beauty in knowing that you can know anything with smart software and a mobile phone. I read “Students Using ChatGPT Beware: Real Learning Takes Legwork, Study Finds.” What a revelation? Wow! Really?
What a quaint approach to smart software. This write up describes a weird type of reasoning. Using a better tool limits one’s understanding. I am not sure about you, but the idea of communicating by have a person run 26 miles to deliver a message and then fall over dead seems slow, somewhat unreliable, and potentially embarrassing. What if the deliver of the message expires in the midst of a kiddie birthday party. Why not embrace the technology of the mobile phone. Use a messaging app and zap the information to the recipient?

Thanks, Venice.ai. Good enough.
Following this logic that learning the old fashioned way will have many dire consequences, why not research topics by:
- Using colloquium held at a suitable religious organization’s facilities. Avoid math that references infinity, zeros, or certain numbers, and the process worked. Now math is a mental exercise. It is more easily mastered by doing concepts, not calculations. If something tricky is required, reach for smart software or that fun-loving Wolfram Mathematica software. Beats an abacus or making marks on cave walls.
- Find a library with either original documents (foul papers), scrolls, or books. Read them and take notes, preferably on a wax tablet or some sheepskin, the Microsoft Word back in the day.
- Use Google. Follow links. Reach conclusions. Assemble “factoids” into knowledge. Ignore the SEO-choked pages. Skip the pages hopelessly out of date when the article suggests that one use XyWrite as a word processor. Sidestep the ravings of a super genius predicting that Hollywood films are the same as maps of the future or advisors offering tips for making a million dollars tax free.
The write up presents the startling assertion:
The researchers concluded that, while large language models (LLMs) are exceptionally good at spitting out fluent answers at the press of a button, people who rely on synthesized AI summaries for research typically don’t come away with materially deeper knowledge. Only by digging into sources and piecing information together themselves do people tend to build the kind of lasting understanding that sticks…
Who knew?
The article includes this startling and definitely anti-AI statement:
A recent BBC-led investigation found that four of the most popular chatbots misrepresented news content in almost half their responses, highlighting how the same tools that promise to make learning easier often blur the boundary between speedy synthesis and confident-sounding fabrication.
I reacted to the idea that embracing a new technology damages a student’s understanding of a complex subject. If that were the case, why have humans compiled a relatively consistent track record in making information easier to find, absorb, and use. Dip-in, get what you need, and don’t read the entire book is a trendy view supported by some forward-thinking smart people.
This is intellectual grazing. I think it is related to snacking 24×7 and skipping what once were foolishly called “regular meals.” In my visits to Silicon Valley, there are similar approaches to difficult learning challenges; for example, forming a stable relationship, understanding the concept of ethical compass, and making decisions that do no harm. (Hey, remember that slogan from the Dark Ages of Internet time?)
The write up concludes:
One of the more striking takeaways of the study was that young people’s growing reliance on AI summaries for quick-hit facts could “deskill” their ability to engage in active learning. However they also noted that this only really applies if AI replaces independent study entirely — meaning LLMs are best used to support, rather than substitute, critical thinking. The authors concluded: “We thus believe that while LLMs can have substantial benefits as an aid for training and education in many contexts, users must be aware of the risks — which may often go unnoticed — of overreliance. Hence, one may be better off not letting ChatGPT, Google, or another LLM ‘do the Googling.'”
Now that’s a remedy that will be music to Googzilla’s nifty looking ear slits. Use Google, just skip the AI.
I want to point out that doing things the old fashioned way may be impossible, impractical, or dangerous. Rejecting newer technologies provides substantive information about the people who are in rejection mode. The trick, in my dinobaby opinion, is to raise children in an environment that encourages a positive self concept, presents a range of different learning mechanisms, and uses nifty technology with parental involvement.
For the children not exposed to this type of environment in their formative years, it will be unnecessary for these lucky people to be permanently happy. Remember the old saying: If ignorance is bliss, hello, happy person.
No matter how shallow the mass of students become and remain, a tiny percentage will learn the old fashioned way. These individuals will be just like the knowledge elite today: Running the fastest and most powerful vehicles on the Information Superhighway. Watch out for wobbling Waymos. Those who know stuff could become roadkill.
Stephen E Arnold, November 14, 2025
Microsoft Could Be a Microsnitch
November 14, 2025
Remember when you were younger and the single threat of, “I’m going to tell!” was enough to send chills through your body? Now Microsoft plans to do the same thing except on an adult level. Life Hacker shares that, “Microsoft Teams Will Soon Tell Your Boss When You’re Not In The Office.” The article makes an accurate observation that since the pandemic most jobs can be done from anywhere with an Internet connection.
Since the end of quarantine, offices are fighting to get their workers back into physical workspaces. Some of them have implemented hybrid working, while others have become more extreme by counting clock-ins and badge swipes. Microsoft is adding its own technology to the fight by making it possible to track remote workers.
“As spotted by Tom’s Guide, Microsoft Teams will roll out an update in December that will have the option to report whether or not you’re working from your company’s office. The update notes are sparse on details, but include the following: ‘When users connect to their organization’s [wifi], Teams will soon be able to automatically update their work location to reflect the building they’re working from. This feature will be off by default. Tenant admins will decide whether to enable it and require end-users to opt-in.’”
Microsoft whitewashed the new feature by suggesting employees use it to find their teammates. The article’s author says it all:
“But let’s be real. This feature is also going to be used by companies to track their employees, and ensure that they’re working from where they’re supposed to be working from. Your boss can take a look at your Teams status at any time, and if it doesn’t report you’re working from one of the company’s buildings, they’ll know you’re not in the office. No, the feature won’t be on by default, but if your company wants to, your IT can switch it on, and require that you enable it on your end as well.”
It is ridiculous to demand that employees return to offices, but at the same time many workers aren’t actually doing their job. The professionals are quiet quitting, pretending to do the work, and ignoring routine tasks. Surveillance seems to be a solution of interest.
It would be easier if humans were just machines. You know, meat AI systems. Bummer, we’re human. If we can get away with something, many will. But is Microsoft is going too far here to make sales to ineffective “leadership”? Worker’s aren’t children, and the big tech company is definitely taking the phrase, “I’m going to tell!” to heart.
Whitney Grace, November 14, 2025
Walmart Plans To Change Shopping With AI
November 14, 2025
Walmart shocked the world when it deployed robots to patrol aisles. The purpose of the robot wasn’t to steal jobs but report outages and messes to employees. Walmart has since backtracked on the robots, but they are turning to AI to enhance and forever alter the consumer shopping experience. According to MSN, “Walmart’s Newest Plan Could Change How You Shop Forever.”
Walmart plus to make the shopping experience smarter by using OpenAI’s ChatGPT. Samsung is also part of this partnership that will offer product suggestions to shoppers of both companies. The idea of incorporating ChatGPT takes the search bar and search query pattern to the next level:
“Far from just a search bar and a click experience, Walmart says the AI will learn your habits, can predict what you need, and even plan your shopping before realizing you’re in need of it. “ ‘Through AI-first shopping, the retail experience shifts from reactive to proactive as it learns, plans, and predicts, helping customers anticipate their needs before they do,’ Walmart stared in the release.
Amazon, Walmart, and other big retailers have been tracking consumer habits for years and sending them coupons and targeted ads. This is a more intrusive way to make consumers spend money. What will they think of next? How about Kroger’s smart price displays. These can deliver dynamic prices to “help” the consumer and add a bit more cash to the retailer. Yeah, AI is great.
Whitney Grace, November 14, 2025
Sweet Dreams of Data Centers for Clippy Version 2: The Agentic Operation System
November 13, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
If you have good musical recall, I want you to call up the tune for “Sweet Dreams (Are Made of This) by the Eurythmics. Okay, with that sound track buzzing through your musical memory, put it on loop. I want to point you to two write ups about Microsoft’s plans for a global agentic operating system and its infrastructure. From hiring social media influencers to hitting the podcast circuit, Microsoft is singing its own songs to its often reluctant faithful. Let’s turn down “Sweet Dream” and crank up the MSFT chart climbers.

Trans-continental AI infrastructure. Will these be silent, reduce pollution, and improve the life of kids who live near the facilities? Of course, because some mommies will say, “Just concentrate and put in your ear plugs. I am not telling you again.” Thanks, Venice. Good enough after four tries. Par for the AI course.
The first write up is by the tantalizingly named consulting firm doing business as SemiAnalysis. I smile everything I think about how some of my British friends laugh when they see a reference to a semi-truck. One quipped, “What you don’t have complete trucks in the US?” That same individual would probably say in response to the company name SemiAnalysis, “What you don’t have a complete analysis in the US?” I have no answer to either question, but “SemiAnalysis” does strike me as more amusing a moniker than Booz, Allen, McKinsey, or Bain.
You can find a 5000 word plus segment of a report with the remarkable title “Microsoft’s AI Strategy Deconstructed – From Energy to Tokens” online. To get the complete report, presumably not the semi report, one must subscribe. Thus, the document is content marketing, but I want to highlight three aspects of the MBA-infused write up. These reflect my biases, so if you are not into dinobaby think, click away, gentle reader.
The title “Microsoft’s AI Strategy Deconstructed” is a rah rah rah for Microsoft. I noted:
- Microsoft was first, now its is fifth, and it will be number one. The idea is that the inventor of Bob and Clippy was the first out of the gate with “AI is the future.” It stands fifth in terms of one survey’s ranking of usage. This “Microsoft’s AI Strategy Deconstructed” asserts that it is going to be a big winner. My standard comment to this blending of random data points and some brown nosing is, “Really?”
- Microsoft is building or at least promising to build lots of AI infrastructure. The write up does not address the very interesting challenge of providing power at a manageable cost to these large facilities. Aerial photos of some of the proposed data centers look quite a bit like airport runways stuffed with bland buildings filled with large numbers of computing devices. But power? A problem looming it seems.
- The write up does not pay much attention to the Google. I think that’s a mistake. From data centers in boxes to plans to put these puppies in orbit, the Google has been doing infrastructure, including fiber optic, chips, and interesting investments like its interest in digital currency mining operations. But Google appears to be of little concern to the Microsoft-tilted semi analysis from SemiAnalysis. Remember, I am a dinobaby, so my views are likely to rock the young wizards who crafted this “Microsoft is going to be a Big Dog.” Yeah, but the firm did Clippy. Remember?
The second write up picks up on the same theme: Microsoft is going to do really big things. “Microsoft Is Building Datacenter Superclusters That Span Continents” explains that MSFT’s envisioned “100 Trillion Parameter Models of the Near Future Can’t Be Built in One Place” and will be sort of like buildings that are “two stories tall, use direct-to-chip liquid cooling, and consume “almost zero water.”
The write up adds:
Microsoft is famously one of the few hyperscalers that’s standardized on Nvidia’s InfiniBand network protocol over Ethernet or a proprietary data fabric like Amazon Web Service’s EFA for its high-performance compute environments. While Microsoft has no shortage of options for stitching datacenters together, distributing AI workloads without incurring bandwidth- or latency-related penalties remains a topic of interest to researchers.
The real estate broker Arvin Haddad uses the phrase “Can you spot the flaw?” Okay, let me ask, “Can you spot the flaw in Microsoft’s digital mansions?” You have five seconds. Okay. What happens if the text centric technology upon which current AI efforts are based gets superseded by [a] a technical breakthrough that renders TensorFlow approaches obsolete, expensive, and slow? or [b] China dumps its chip and LLM technology into the market as cheap or open source? My thought is that the data centers that span continents may end up like the Westfield San Francisco Centre as a home for pigeons, graffiti artists, and security guards.
Yikes.
Building for the future of AI may be like shooting at birds not in sight. Sure, a bird could fly though the pellets, but probably not if they are nesting in pond a mile away.
Net net: Microsoft is hiring influencers and shooting where ducks will be. Sounds like a plan.
Stephen E Arnold, November 13, 2025
AI Is a Winner: The Viewpoint of an AI Believer
November 13, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Bubble, bubble, bubble. This is the Silicon Valley version of epstein, epstein, epstein. If you are worn out from the doom and gloom of smart software’s penchant for burning cash and ignoring the realities of generating electric power quickly, you will want to read “AI Is Probably Not a Bubble: AI Companies Have Revenue, Demand, and Paths to Immense Value.” [Note: You may encounter a paywall when you attempt to view this article. Don’t hassle me. Contact those charming visionaries at Substack, the new new media outfit.]
The predictive impact of the analysis has been undercut by a single word in the title “Probably.” A weasel word appears because the author’s enthusiasm for AI is a bit of contrarian thinking presented in thought leader style. Probably a pig can fly somewhere at some time. Yep, confidence.
Here’s a passage I found interesting:
… unlike dot-com companies, the AI companies have reasonable unit economics absent large investments in infrastructure and do have paths to revenue. OpenAI is demonstrating actual revenue growth and product-market fit that Pets.com and Webvan never had. The question isn’t whether customers will pay for AI capabilities — they demonstrably are — but whether revenue growth can match required infrastructure investment. If AI is a bubble and it pops, it’s likely due to different fundamentals than the dot-com bust.
Ah, ha, another weasel word: “Whether.” Is this AI bubble going to expand infinitely or will it become a Pets.com?
The write up says:
Instead, if the AI bubble is a bubble, it’s more likely an infrastructure bubble.
I think the ground beneath the argument has shifted. The “AI” is a technology like “the Internet.” The “Internet” became a big deal. AI is not “infrastructure.” That’s a data center with fungible objects like machines and connections to cables. Plus, the infrastructure gets “utilized immediately upon completion.” But what if [a] demand decreases due to lousy AI value, [b] AI becomes a net inflater of ancillary costs like a Microsoft subscription to Word, or [c] electrical power is either not available or too costly to make a couple of football fields of servers to run 24×7?
I liked this statement, although I am not sure some of the millions of people who cannot find jobs will agree:
As weird as it sounds, an AI eventually automating the entire economy seems actually plausible, if current trends keep continuing and current lines keep going up.
Weird. Cost cutting is a standard operating tactic. AI is an excuse to dump expensive and hard-to-manage humans. Whether AI can do the work is another question. Shifting from AI delivering value to server infrastructure shows one weakness in the argument. Ignoring the societal impact of unhappy workers seems to me illustrative of taking finance classes, not 18th century history classes.
Okay, here’s the wind up of the analysis:
Unfortunately, forecasting is not the same as having a magic crystal ball and being a strong forecaster doesn’t give me magical insight into what the market will do. So honestly, I don’t know if AI is a bubble or not.
The statement is a combination of weasel words, crawfishing away from the thesis of the essay, and an admission that this is a marketing thought leader play. That’s okay. LinkedIn is stuffed full of essays like this big insight:
So why are industry leaders calling AI a bubble while spending hundreds of billions on infrastructure? Because they’re not actually contradicting themselves. They’re acknowledging legitimate timing risk while betting the technology fundamentals are sound and that the upside is worth the risk.
The AI giants are savvy cats, are they not?
Stephen E Arnold, November 13, 2025
Dark Patterns Primer
November 13, 2025
Here is a useful explainer for anyone worried about scams brought to us by a group of concerned designers and researchers. The Dark Patterns Hall of Shame arms readers with its Catalog of Dark Patterns. The resource explores certain misleading tactics we all encounter online. The group’s About page tells us:
“We are passionate about identifying dark patterns and unethical design examples on the internet. Our [Hall of Shame] collection serves as a cautionary guide for companies, providing examples of manipulative design techniques that should be avoided at all costs. These patterns are specifically designed to deceive and manipulate users into taking actions they did not intend. HallofShame.com is inspired by Deceptive.design, created by Harry Brignull, who coined the term ‘Dark Pattern’ on 28 July 2010. And as was stated by Harry on Darkpatterns.org: The purpose of this website is to spread awareness and to shame companies that use them. The world must know its ‘heroes.’”
Can companies feel shame? We are not sure. The first page of the Catalog provides a quick definition of each entry, from the familiar Bait-and-Switch to the aptly named Privacy Zuckering (“service or a website tricks you into sharing more information with it than you really want to.”) One can then click through to real-world examples pulled from the Hall of Shame write-ups. Some other entries include:
“Disguised Ads. What’s a Disguised Ad? When an advertisement on a website pretends to be a UI element and makes you click on it to forward you to another website.
Roach Motel. What’s a roach motel? This dark pattern is usually used for subscription services. It is easy to sign up for it, but it’s much harder to cancel it (i.e. you have to call customer support).
Sneak into Basket. What’s a sneak into basket? When buying something, during your checkout, a website adds some additional items to your cart, making you take the action of removing it from your cart.
Confirmshaming. What’s confirmshaming? When a product or a service is guilting or shaming a user for not signing up for some product or service.”
One case of Confirmshaming: the pop-up Microsoft presents when one goes to download Chrome through Edge. Been there. See the post for the complete list and check out the extensive examples. Use the information to protect yourself or the opposite.
Cynthia Murrell, November 13, 2025
US Government Procurement Changes: Like Silicon Valley, Really? I Mean For Sure?
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I learned about the US Department of War overhaul of its procurement processes by reading “The Department of War Just Shot the Accountants and Opted for Speed.” Rumblings of procurement hassles have been reaching me for years. The cherished methods of capture planning, statement of work consulting, proposal writing, and evaluating bids consumes many billable hours by consultants. The processes involve thousands of government professionals: Lawyers, financial analysts, technical specialists, administrative professionals, and consultants. I can’t omit the consultants.
According to the essay written by Steve Blank (a person unfamiliar to me):
Last week the Department of War finally killed the last vestiges of Robert McNamara’s 1962 Planning, Programming, and Budgeting System (PPBS). The DoW has pivoted from optimizing cost and performance to delivering advanced weapons at speed.
The write up provides some of the history of the procurement process enshrined in such documents as FAR or the Federal Acquisition Regulations. If you want the details, Mr. Blank provides I urge you to read his essay in full.
I want to highlight what I think is an important point to the recent changes. Mr. Bloom writes:
The war in Ukraine showed that even a small country could produce millions of drones a year while continually iterating on their design to match changes on the battlefield. (Something we couldn’t do.) Meanwhile, commercial technology from startups and scaleups (fueled by an immense pool of private capital) has created off-the-shelf products, many unmatched by our federal research development centers or primes, that can be delivered at a fraction of the cost/time. But the DoW acquisition system was impenetrable to startups. Our Acquisition system was paralyzed by our own impossible risk thresholds, its focus on process not outcomes, and became risk averse and immoveable.
Based on my experience, much of it working as a consultant on different US government projects, the horrific “special operation” delivered a number of important lessons about modern warfare. Reading between the lines of the passage cited above, two important items of information emerged from what I view as an illegal international event:
- Under certain conditions human creativity can blossom and then grow into major business operations. I would suggest that Ukraine’s innovations in the use of drones, how the drones are deployed in battle conditions, and how the basic “drone idea” reduce the effectiveness of certain traditional methods of warfare
- Despite disruptions to transportation and certain third-party products, Ukraine demonstrated that just-in-time production facilities can be made operational in weeks, sometimes days.
- The combination of innovative ideas, battlefield testing, and right-sized manufacturing demonstrated that a relatively small country can become a world-class leader in modern warfighting equipment, software, and systems.
Russia, with its ponderous planning and procurement process, has become the fall guy to a president who was a stand up comedian. Who is laughing now? It is not the perpetrators of the “special operation.” The joke, as some might say, is on individuals who created the “special operation.”
Mr. Blank states about the new procurement system:
To cut through the individual acquisition silos, the services are creating Portfolio Acquisition Executives (PAEs). Each Portfolio Acquisition Executive (PAE) is responsible for the entire end-to-process of the different Acquisition functions: Capability Gaps/Requirements, System Centers, Programming, Acquisition, Testing, Contracting and Sustainment. PAEs are empowered to take calculated risks in pursuit of rapidly delivering innovative solutions.
My view of this type of streamlining is that it will become less flexible over time. I am not sure when the ossification will commence, but bureaucratic systems, no matter how well designed, morph and become traditional bureaucratic systems. I am not going to trot out the academic studies about the impact of process, auditing, and legal oversight on any efficient process. I will plainly state that the bureaucracies to which I have been exposed in the US, Europe, and Asia are fundamentally the same.

Can the smart software helping enable the Silicon Valley approach to procurement handle the load and keep the humanoids happy? Thanks, Venice.ai. Good enough.
Ukraine is an outlier when it comes to the organization of its warfighting technology. Perhaps other countries if subjected to a similar type of “special operation” would behave as the Ukraine has. Whether I was giving lectures for the Japanese government or dealing with issues related to materials science for an entity on Clarendon Terrace, the approach, rules, regulations, special considerations, etc. were generally the same.
The question becomes, “Can a new procurement system in an environment not at risk of extinction demonstrate the speed, creativity, agility, and productivity of the Ukrainian model?”
My answer is, “No.”
Mr. Blank writes before he digs into the new organizational structure:
The DoW is being redesigned to now operate at the speed of Silicon Valley, delivering more, better, and faster. Our warfighters will benefit from the innovation and lower cost of commercial technology, and the nation will once again get a military second to none.
This is an important phrase: Silicon Valley. It is the model for making the US Department of War into a more flexible and speedy entity, particularly with regard to procurement, the use of smart software (artificial intelligence), and management methods honed since Bill Hewlett and Dave Packard sparked the garage myth.
Silicon Valley has been an model for many organizations and countries. However, who thinks much about the Silicon Fen? I sure don’t. I would wager a slice of cheese that many readers of this blog post have never, ever heard of Sophia Antipolis. Everyone wants to be a Silicon Valley and high-technology, move fast and break things outfit.
But we have but one Silicon Valley. Now the question is, “Will the US government be a successful Silicon Valley, or will it fizzle out?” Based on my experience, I want to go out on a very narrow limb and suggest:
- Cronyism was important to Silicon Valley, particularly for funding and lawyering. The “new” approach to Department of War procurement is going to follow a similar path.
- As the stakes go up, growth becomes more important than fiscal considerations. As a result, the cost of becoming bigger, faster, cheaper spikes. Costs for the majority of Silicon Valley companies kill off most start ups. The failure rate is high, and it is exacerbated by the need of the winners to continue to win.
- Silicon Valley management styles produce some negative consequences. Often overlooked are such modern management methods as [a] a lack of common sense, [b] decisions based on entitlement or short term gains, and [c] a general indifference to the social consequences of an innovation, a product, or a service.
If I look forward based on my deeply flawed understanding of this Silicon Valley revolution I see monopolistic behavior emerging. Bureaucracies will emerge because people working for other people create rules, procedures, and processes to minimize the craziness of doing the go fast and break things activities. Workers create bureaucracies to deal with chaos, not cause chaos.
Mr. Blank’s essay strikes me as generally supportive of this reinvention of the Federal procurement process. He concludes with:
Let’s hope these changes stick.
My personal view is that they won’t. Ukraine’s created a wartime Silicon Valley in a real-time, shoot-and-survive conflict. The urgency is not parked in a giant building in Washington, DC, or a Silicon Valley dream world. A more pragmatic approach is to partition procurement methods. Apply Silicon Valley thinking in certain classes of procurement; modify the FAR to streamline certain processes; and leave some of the procedures unchanged.
AI is a go fast and break things technology. It also hallucinates. Drones from Silicon Valley companies don’t work in Ukraine. I know because someone with first hand information told me. What will the new methods of procurement deliver? Answer: Drones that won’t work in a modern asymmetric conflict. With decisions involving AI, I sure don’t want to find myself in a situation about which smart software makes stuff up or operates on digital mushrooms.
Stephen E Arnold, November 12, 2025

