Meta: An AI Management Issue Maybe?
December 17, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I really try not to think about Facebook, Mr. Zuckerberg, his yachts, and Llamas. I mean the large language model, not the creatures I associate with Peru. (I have been there, and I did not encounter any reptilian snakes. Cuy chactado, si. Vibora, no.)
I read in the pay-walled orange newspaper online “Inside Mark Zuckerberg’s Turbulent Bet on AI.” Hmm. Turbulent. I was thinking about synonyms I would have suggested; for example, unjustifiable, really big, wild and crazy, and a couple of others. I am not a real journalist so I will happily accept turbulent. The word means, however, “relating to or denoting flow of a fluid in which the velocity at any point fluctuates irregularly and there is continual mixing rather than a steady or laminar flow pattern” according to the Google’s opaque system. I think the idea is that Meta is operating in a chaotic way. What about “juiced running fast and breaking things”? Yep. Chaos, a modern management method that is supposed to just work.
A young executive with oodles of money hears an older person, probably a blue chip consultant, asking one of those probing questions about a top dog’s management method. Will this top dog listen or just fume and keep doing what worked for more than a decade? Thanks, Qwen. Good enough.
What does the write up present? Please, sign up for the FT and read the original article. I want to highlight two snippets.
The first is:
Investors are also increasingly skittish. Meta’s 2025 capital expenditures are expected to hit at least $70bn, up from $39bn the previous year, and the company has started undertaking complex financial maneuverings to help pay for the cost of new data centers and chips, tapping corporate bond markets and private creditors.
Not RIFed employees, not users, not advertisers, and not government regulators. The FT focuses on investors who are skittish. The point is that when investors get skittish, an already unsettled condition becomes significant enough to increase anxiety. Investors do not want to be anxious. Has Mr. Zuckerberg mismanaged the investors who help keep his massive investments in yet-to-be technology chugging along? First, there was the metaverse. That may arrive in some form, but for Meta I perceive it as a dumpster fire for cash.
Now investors are anxious and the care and feeding of these entities is more important. The fact that the investors are anxious suggests that Mr. Zuckerberg has not managed this important category of professionals in a way that calms them down. I don’t think the FT’s article will do much to alleviate their concern.
The second snippet is:
But the [Meta] model performed worse than those by rivals such as OpenAI and Google on jobs including coding tasks and complex problem solving.
This suggests to me that Mr. Zuckerberg did not manage the process in an optimal way. Some wizards left for greener pastures. Others just groused about management methods. Regardless of the signals one receives about Meta, the message I receive is that management itself is the disruptive factor. Mismanagement is, I think, part of the method at Meta.
Several observations:
- Meta, like the other AI outfits with money to toss into the smart software dumpster fire, is in the midst of realizing that “if we think it, it will become reality” is not working. Meta is spinning off chunks of flaming money bundles, and some staff don’t want to get burned.
- Meta is a technology follower, and it may have been aced by its message and social media competitor Telegram. If Telegram’s approach is workable, Meta may be behind another AI eight ball.
- Mr. Zuckerberg is a wonder of American business. He began as a boy wonder. Now as an adult wonder, the question is, “Why are investors wondering about his current wonder-fulness?”
Net net: Meta faces a management challenge. The AI tech is embedded in that. Some of its competitors lack management finesse, but some of them are plugging along and not yet finding their companies presented in the Financial Times as outfits making investors “increasingly skittish.” Perhaps in the future, but right now the laser focus of the Financial Times is on Meta. The company is an easy target in my opinion.
Stephen E Arnold, December 17, 2025
Tech Whiz Wants to Go Fishing (No, Not Phishing), Hook, Link, Sinker Stuff
December 17, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
My deeply flawed service that feeds me links produced a rare gem. “I Work in Tech But I Hate Everything Big Tech Has Become” is interesting because it states clearly what I have heard from other Silicon Valley types recently. I urge you to read the essay because the discomfort the author feels jumps off the screen or printed page if you are a dinobaby. If the essay has a rhetorical weakness, it is that it offers no resolution. My hunch is that the author has found himself in a digital construct with No Exit signs on the door.

Thanks, Venice.ai. Good enough.
The essay states:
We try to build products that help people. We try to solve mostly problems we ourselves face using tech. We are nerds, misfits, borderline insane people driven by a passion to build. we could probably get a job in big tech if we tried as hard as we try building our own startup. but we don’t want to. in fact we can’t. we’d have to kill a little (actually a lot) of ourselves to do that.
This is an interesting comment. I interpreted it to mean that the tech workers and leadership who build “products that help people” have probably “killed” some of their inner selves. I never thought of the luminaries who head the outfits pushing AI or deploying systems that governments have to ban for users under a certain age as being dead inside. Is it true? I am not sure. Thought provoking notion? Yes.
The essay states:
I hate everything big tech stands for today. Facebook openly admitting they earn millions from scam ads. VCs funding straight up brain rot or gambling. Big tech is not even pretending to be good these days.
The word “hate” provides a glimpse of how the author is responding to the current business set up in certain sectors of the technology industry. What a dinobaby like me might call “ethical behavior” is viewed as abnormal by many people. My personal view is that this idea of doing whatever it takes to reach a goal operates across many demographics. Is this a-ethical behavior now the norm?
The essay states:
If tech loses people like us, all it’ll have left are psychopaths. Look I’m not trying to take a holier-than-thou stance here. I’m just saying objectively it seems insane what’s happening in mainstream tech these days.
I noted a number of highly charged words. These make sense in the context of the author’s personal situation. I noted “psychopaths” and “insane.” When many instances of a-ethical behavior bubble up from technical, financial, and political sectors, a-ethics means one cannot trust, rely on, or believe words. Actions alone must be scrutinized.
The author wants to “keep fighting,” but against whom or what system? Deception, trickery, double dealing, and criminal activity can be identified in most business interactions.
The author mentions going fishing. The caution I would offer is to make sure you are not charged a dynamic price based on your purchasing profile. Shop around if any fishing stores are open. If not, Amazon will deliver what you need.
Stephen E Arnold, December 17, 2025
Google: Trying Hard Not to Be Noticed in a Crypto Club
December 16, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Google continues to creep into crypto. Google has interacted with ANT Financial. Google has invested in some interesting compute services. And now Google will, if “Exclusive: YouTube Launches Option for U.S. Creators to Receive Stablecoin Payouts through PayPal” is on the money, give crypto a whirl among its creators.

A friendly creature warms up in a yoga studio. Few notice the suave green beast. But one person spots a subtle touch: Pink gym shoes purchased with PayPal crypto. Such a deal. Thanks, Venice.ai. Good enough.
The Fortune article reports as actual factual:
A spokesperson for Google, which owns YouTube, confirmed the video site has added payouts for creators in PayPal’s stablecoin but declined to comment further. YouTube is already an existing customer of PayPal’s and uses the fintech giant’s payouts service, which helps large enterprises pay gig workers and contractors.
How does this work?
Based on the research we did for our crypto lectures, a YouTuber in the US would have to have a PayPal account. Google puts the payment in PayPal’s crypto in the account. The YouTuber would then use PayPal to convert PayPal crypto into US dollars. Then the YouTuber could move the US dollars to his or her US bank account. Allegedly there would be no gas fee slapped on the transactions, but there is an opportunity to add service charges at some point. (I mean what self-respecting MBA angling for a promotion wouldn’t propose that money making idea?)
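The conversion chain described above can be sketched as a simple pipeline. This is an illustration of the flow, not PayPal’s API; the fee parameter models the service charge this post speculates could be added later, and all names are hypothetical:

```python
# Illustrative sketch of the payout chain: YouTube pays in PayPal's
# stablecoin -> creator converts to USD -> creator withdraws to a US
# bank account. The conversion_fee_pct models the hypothetical future
# service charge; per the report, no gas fee applies today.
def payout_to_bank(stablecoin_amount: float,
                   usd_per_coin: float = 1.00,
                   conversion_fee_pct: float = 0.0) -> float:
    usd = stablecoin_amount * usd_per_coin          # stablecoin pegged ~1:1 to USD
    usd_after_fee = usd * (1 - conversion_fee_pct)  # hypothetical service charge
    return round(usd_after_fee, 2)                  # amount landing in the bank

print(payout_to_bank(1000))               # 1000.0 (no fee today)
print(payout_to_bank(1000, 1.00, 0.015))  # 985.0 (if a 1.5% charge appeared)
```

The sketch makes the MBA’s opportunity visible: every hop in the chain is a place where a percentage can be shaved off.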
Several observations:
- In my new monograph “The Telegram Labyrinth” available only to law enforcement officials, we identified Google as one of the firms moving in what we call the “Telegram direction.” The Google crypto creeps plus PayPal reinforce that observation. Why? Money and information.
- Information about how Google’s activities in crypto will conform to assorted money-related rules and regulations is not clear to me. Furthermore, as we completed our “The Telegram Labyrinth” research in early September 2025, not too many people were thinking about Google as a crypto player. But that GOOGcoin does seem like something even the lowest level wizard at Alphabet could envision, doesn’t it?
- Google has a track record of doing what it wants. Therefore, in my opinion, more little tests, baby steps, and semi-low profile moves probably are in the wild. Hopefully someone will start looking.
Net net: Google does do pretty much what it wants to do. From gaining new training data from its mobile-to-ear-bud translation service to expanding its AI capabilities with its new silicon, the Google is a giant creature doing some low impact exercises. When the Google shifts to lifting big iron, a number of interesting challenges will arise. Are regulators ready? Are online fraud investigators ready? Is Microsoft ready?
What’s your answer?
Stephen E Arnold, December 16, 2025
The EU – Google Soap Opera Titled “What? Train AI?”
December 16, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Ka-ching. That’s the sound of the EU ringing up another fine for one of its favorite US big tech outfits. Once again it is Googzilla in the headlights of a restored 2CV. Here’s the pattern:
- EU fines
- Googzilla goes to court
- EU finds Googzilla guilty
- Googzilla appeals
- EU finds Googzilla guilty
- Googzilla negotiates and says, “We don’t agree but we will pay”
- Go back to item 1.
This version of the EU soap opera is called training Gemini on whatever content Google has.
The formal announcement of Googzilla’s re-run of a fan favorite is “Commission Opens Investigation into Possible Anticompetitive Conduct by Google in the Use of Online Content for AI Purposes.” I note the hedge word “possible,” but as soap opera fans we know the arc of this story. Can you hear the cackle of the legal eagles anticipating the billings? I can.

The mythical creature Googzilla apologizes to an august body for a mistake. Googzilla is very, very sincere. Thanks, MidJourney. Actually pretty good this morning. Too bad you are not consistent.
The cited show runner document says:
The European Commission has opened a formal antitrust investigation to assess whether Google has breached EU competition rules by using the content of web publishers, as well as content uploaded on the online video-sharing platform YouTube, for artificial intelligence (‘AI’) purposes. The investigation will notably examine whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage.
The EU is trying via legal process to alter the DNA of Googzilla. I am fond of pointing out that beavers do what beavers do. Similarly, Googzillas do exactly what the one and unique Googzilla does; that is, anything it wants to do. Why? Googzilla is now entering its prime. It has a small wound on its knee. Examined closely, the scar seems to spell out the word “monopoly.”
News flash: Filing legal motions against Googzilla will not change its DNA. The outfit is purpose built to keep control of its billions of users and keep the snoops from do gooder and regulatory outfits clueless about what happens to the [a] parsed and tagged data, [b] the metrics thereof, [c] the email, the messages, and the voice data, [d] the YouTube data, and [e] whatever data flows into the Googzilla’s maw from advertisers, ad systems, and ad clickers.
The EU does not get the message. I wrote three books about Google, and it was pretty evident in the first one (The Google Legacy) that baby Google, the equivalent of a young Maradona or Messi, was going to wear a jersey with “Googzilla 10” emblazoned on its comely yet spiky back.
The write up contains this statement from Teresa Ribera, Executive Vice-President for Clean, Just and Competitive Transition:
A free and democratic society depends on diverse media, open access to information, and a vibrant creative landscape. These values are central to who we are as Europeans. AI is bringing remarkable innovation and many benefits for people and businesses across Europe, but this progress cannot come at the expense of the principles at the heart of our societies. This is why we are investigating whether Google may have imposed unfair terms and conditions on publishers and content creators, while placing rival AI models developers at a disadvantage, in breach of EU competition rules.
Interesting idea as the EU and the US stumble to the side of the street where these ideas are not too popular.
Net net: Googzilla will not change for the foreseeable future. Furthermore, those who don’t understand this are unlikely to get a job at the company.
Stephen E Arnold, December 16, 2025
A Thought for the New Year: Be Techy
December 16, 2025
George Orwell wrote in 1984: “Who controls the past controls the future. Who controls the present controls the past.” The Guardian published an article that embodies this quote entitled: “How Big Tech Is Creating Its Own Friendly Media Bubble To ‘Win The Narrative Battle Online’.”
Big Tech billionaire CEOs aren’t cast in the best light these days. In order to counteract the negative attitudes towards their leaders, Big Tech companies are giving their CEOs Walt Disney makeovers. If you didn’t know, Disney wasn’t the congenial uncle figure his company likes to portray him as. Walt was actually an OCD micromanager with a short temper and tendencies reminiscent of bipolar disorder. Big Tech CEOs are portraying themselves as nice guys in cozy interviews via news outlets they own or that are copacetic with them.
Big Tech leaders are doing this because the public doesn’t trust them:
“The rise of tech’s new media is also part of a larger shift in how public figures are presenting themselves and the level of access they are willing to give journalists. The tech industry has a long history of being sensitive around media and closely guarded about their operations, a tendency that has intensified following scandals…”
The content they’re delivering isn’t that great though:
“The content that the tech industry is creating is frequently a reflection of how its elites see themselves and the world they want to build – one with less government regulation and fewer probing questions on how their companies are run. Even the most banal questions can also be a glimpse into the heads of people who exist primarily in guarded board rooms and gated compounds.”
The responses are typical of entitled, out-of-touch idiots. They’re smart in their corner of the world but can’t relate to the working individual. Happy New Year!
Whitney Grace, December 16, 2025
How Not to Get a Holiday Invite: The Engadget Method
December 15, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Sam AI-Man may not invite anyone from Engadget to a holiday party. I read “OpenAI’s House of Cards Seems Primed to Collapse.” The “house of cards” phrase gives away the game. Sam AI-Man built a structure that gravity or Google will pull down. How do I know? Check out this subtitle:
In 2025, it fell behind the one company it couldn’t lose ground to: Google.
The Google. The outfit that shifted into Red Alert or whatever the McKinsey playbook said to call an existential crisis klaxon. The Google. Adjudged a monopoly getting down to work other than running an online advertising system. The Google. An expert in reorganizing a somewhat loosely structured organization. The Google: the one everyone except the EU and some allegedly defunded YouTube creators absolutely loves. That Google.
Thanks, Venice.ai. I appreciate your telling me I cannot output an image with a “young programmer.” Plugging in “30 year old coder” worked. Very helpful. Intelligent too.
The write up points out:
It’s safe to say GPT-5 hasn’t lived up to anyone’s expectations, including OpenAI’s own. The company touted the system as smarter, faster and better than all of its previous models, but after users got their hands on it, they complained of a chatbot that made surprisingly dumb mistakes and didn’t have much of a personality. For many, GPT-5 felt like a downgrade compared to the older, simpler GPT-4o. That’s a position no AI company wants to be in, let alone one that has taken on as much investment as OpenAI.
Did OpenAI suck it up and crank out a better mouse trap? The write up reports:
With novelty and technical prowess no longer on its side though, it’s now on Altman to prove in short order why his company still deserves such unprecedented levels of investment.
Forget the problems a failed OpenAI poses to investors, employees, and users. Sam AI-Man now has an opportunity to become the highest profile technology professional to cause a national and possibly global recession. Short of war-mongering countries, Sam AI-Man will stand alone. He may end up in a museum if any remain open when funding evaporates. School kids could read about him in their history books; that is, if kids actually attend school and read. (Well, there’s always the possibility of a YouTube video if creators don’t evaporate like wet sidewalks when the sun shines.)
Engadget will have to find another festive event to attend.
Stephen E Arnold, December 15, 2025
AI Year in Review: The View from an Expert in France
December 11, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I suggest you read “Stanford, McKinsey, OpenAI: What the 2025 Reports Tell Us about the Present and Future of AI (and Autonomous Agents) in Business.” The document is in French. You can get an okay translation via the Google or Yandex.
I have neither the energy nor the inclination to do a blue chip consulting type of analysis of this fine synthesis of multiple source documents. What I will do in this blog post is highlight several statements and offer a comment or two. For context, I have read some of the sources the author Fabrice Frossard has cited. M. Frossard is a graduate of the Ecole Supérieure Libre des Sciences Commerciales Appliquées and, I think, the Ecole de Guerre Economique in Paris. Remember: I am a dinobaby and generally too lazy and inept to do “real” research. These are good places to learn how to think about business issues.
Let’s dive into his 2,000-word write up.
The first point that struck me is that he includes what I think is a point not given sufficient emphasis by the experts in the US. This theme is not forced down the reader’s throat, but it has significant implications for M. Frossard’s comments about the need to train people to use smart software. The social implication of AI and the training creates a new digital divide. Like the economic divide in the US and some other countries, crossing the border is not going to be possible for many people. Remember: these people have to be trained to use the smart software deployed. When one cannot get from ignorance to informed expertise, that person is likely to lose a job. Okay, here’s the comment from the source document:
To put it another way: if AI is now everywhere, its real mastery remains the prerogative of an elite.
Is AI a winner today? Not a winner, but it is definitely an up and comer in the commercial world. M. Frossard points out:
- McKinsey reveals that nearly two thirds of companies are still stuck in the experimentation or piloting phase.
- The elite escaping: only 7% of companies have successfully deployed AI in a fully integrated manner across the entire organization.
- Peak workers use coding or data analysis tools 17 times more than the median user.
These and similar facts support the point that “the ability to extract value creates a new digital divide, no longer based on access, but on the sophistication of use.” Keep this in mind when it comes to learning a new skill or mastering a new area of competence like smart software. No, typing a prompt is not expert use. Typing a prompt is like using an automatic teller machine to get money. Basic use is not expert level capabilities.

If Mary cannot “learn” AI and demonstrate exceptional skills, she’s going to be working as an Etsy.com reseller. Thanks, Venice.ai. Not what I prompted, but I understand that you are good enough, cash-strapped, and degrading.
The second point is that in 2025, AI does not pay for itself in every use case. M. Frossard offers:
EBIT impact still timid: only 39% of companies report an increase in their EBIT (earnings before interest and taxes) attributable to AI, and for the most part, this impact remains less than 5%.
One interesting use case comes from a McKinsey report where billability is an important concept. The idea is that a bit of Las Vegas-type thinking is needed when it comes to smart software. M. Frossard writes:
… the most successful companies [using artificial intelligence] are paradoxically those that report the most risks and negative incidents.
Take risks and win big seems to be one interpretation of this statement. The timid and inept will be pushed aside.
Third, I was delighted to see that M. Frossard picked up on some of the crazy spending for data centers. He writes:
The cost of intelligence is collapsing: A major accelerating factor noted by the Stanford HAI Index is the precipitous fall in inference costs. The cost to achieve performance equivalent to GPT-3.5 has been divided by 280 in 18 months. This commoditization of intelligence finally makes it possible to make profitable complex use cases which were economically unviable in 2023. Here is a paradox: the more capable and expensive artificial intelligence becomes to produce (exploding training costs), the less expensive it becomes to consume (free-falling inference costs). This mental model suggests that intelligence becomes an abundant commodity, leading not to a reduction, but to an explosion of demand and integration.
Several ideas bubble up from this passage. First, we are back to training. Second, we are back to having significant expertise. Third, the “abundant commodity” idea produces greater demand. The problem (in addition to not having power for data centers) will be finding people with exceptional AI capabilities.
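The “divided by 280 in 18 months” figure implies a striking decay rate. A few lines of arithmetic, assuming a smooth exponential decline (the report gives only the endpoints), show what that factor means month to month:

```python
import math

def implied_decay(total_factor: float, months: int) -> tuple[float, float]:
    """Given a total cost-reduction factor over a period, return the
    implied monthly reduction factor and the cost-halving time in months.
    Assumes a smooth exponential decline between the two endpoints."""
    monthly_factor = total_factor ** (1 / months)       # 280x spread over 18 months
    halving_months = months / math.log2(total_factor)   # months per 2x cost drop
    return monthly_factor, halving_months

monthly, halving = implied_decay(280, 18)
print(f"Cost falls roughly {(1 - 1 / monthly) * 100:.0f}% per month")
print(f"Cost halves roughly every {halving:.1f} months")
```

Under that assumption, inference cost was dropping about a quarter every month and halving in a bit over two months, which is the arithmetic behind the “abundant commodity” framing.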
Fourth, the replacement of some humans may not be possible. The essay reports:
the deployment of agents at scale remains rare (less than 10% in a given function according to McKinsey), hampered by the need for absolute reliability and data governance.
Data governance is like truth, love, and ethics. Easy to say and hard to define. The reliability angle is slightly less tricky. These two AI molecules require a catalyst like an expert human with significant AI competence. And this returns the essay to training. M. Frossard writes:
The transformation of skills: The 115K report emphasizes the urgency of training. The barrier is not technological, it is human. Businesses face a cultural skills gap. It’s not about learning to “prompt”, but about learning to collaborate with non-human intelligence.
Finally, the US has a China problem. M. Frossard points out:
… If the USA dominates investment and the number of models, China is closing the technical gap. On critical benchmarks such as mathematics or coding, the performance gap between the US and Chinese models has narrowed to nothing (less than 1 to 3 percentage points).
Net net: If an employee cannot be trained, that employee is likely to be starting a business at home. If the trained employees are not exceptional, those folks may be terminated. Elites like other elite things. AI may be good enough, but it provides an “objective” way to define and burn dead wood.
Stephen E Arnold, December 11, 2025
Social Media Companies: Digital Drug Pushers?
December 11, 2025
Social media is a drug. Let’s be real, it’s not a real drug but it affects the brain in the same manner as drugs and alcohol. Social media stimulates the pleasure centers of the brain, releases endorphins, and creates an immediate hit. Delayed gratification becomes a thing of the past as users are constantly seeking their thrills with instantaneous hits from TikTok, Snapchat, Instagram, Facebook, and YouTube.
Politico includes a quote from the recent lawsuit filed against Meta in Northern California that makes a great article title: “‘We’re Basically Pushers’: Court Filing Alleges Staff At Social Media Giants Compared Their Platforms To Drugs.” According to the lawsuit, Meta, Instagram, TikTok, Snapchat, and YouTube ignored their platforms’ potential dangers and hid them from users.
The lawsuit has been ongoing for years, and a federal judge ordered its contents to be opened in October 2025. Here are the details:
“The filing includes a series of detailed reports from four experts, who examined internal documents, research and direct communications between engineers and executives at the companies. Experts’ opinions broadly concluded that the companies knew their platforms were addictive but continued to prioritize user engagement over safety.”
It sounds like every big company ever. Money over consumer safety. We’re doomed.
Whitney Grace, December 11, 2025
Google Data Slurps: Never, Ever
December 11, 2025
Here’s another lie from Googleland via Techspot, “Google Denies Gmail Reads Your Emails And Attachments To Train AI, But Here’s How To Opt-Out Anyway.” Google claims that it doesn’t use emails and attachments to train AI, but we know that’s false. Google correctly claims that it uses user-generated data for personalization of its applications, like Gmail. We all know that’s a workaround to use that data for other purposes.
The article includes instructions on how to opt out of information being used to train AI and “personalize” experiences. Gmail users, however, have had bad experiences with that option, including the need to turn the feature off multiple times.
Google claims it is committed to privacy but:
“Google has flatly denied using user content to train Gemini, noting that Gmail has offered some of these features for many years. However, the Workspace menu refers to newly added Gemini functionality several times.
The company also denied automatically modifying user permissions, but some people have reported needing multiple attempts to turn off smart features.”
There are also security vulnerabilities:
“In addition to raising privacy concerns, Gmail’s AI functionality has exposed serious vulnerabilities. In March, Mozilla found that attackers could easily inject prompts that would cause the client’s AI generated summaries to become phishing messages.”
Imagine that one little digital switch protects your privacy and data. Methinks it is a placebo effect.
Whitney Grace, December 11, 2025
Google Gemini Hits Copilot with a Dang Block: Oomph
December 10, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Smart software is finding its way into interesting places. One of my newsfeeds happily delivered “The War Department Unleashes AI on New GenAI.mil Platform.” Please, check out the original document because it contains some phrasing which is difficult for a dinobaby to understand. Here’s an example:
The War Department today announced the launch of Google Cloud’s Gemini for Government as the first of several frontier AI capabilities to be housed on GenAI.mil, the Department’s new bespoke AI platform.
There are a number of smart systems with government wide contracts. Is the Google Gemini deal just one of the crowd or is it the cloud over the other players? I am not sure what a “frontier” capability is when it comes to AI. The “frontier” of AI seems to be shifting each time a performance benchmark comes out from a GenX consulting firm or when a survey outfit produces a statement that QWEN accounts for 30 percent of AI involving an open source large language model. The idea of a “bespoke AI platform” is fascinating. Is it like a suit tailored on Oxford Street or a vehicle produced by Chip Foose, or is it one of those enterprise software systems with extensive customization? Maybe like an IBM government systems solution?
Thanks, Google. Good enough. I wanted square and you did horizontal, but that’s okay. I understand.
And that’s just the first sentence. You are now officially on your own.
For me, the big news is that the old Department of Defense loved PowerPoint. If you have bumped into any old school Department of Defense professionals, the PowerPoint is the method of communication. Sure, there’s Word and Excel. But the real workhorse is PowerPoint. And now that old nag has Copilot inside.
The way I read this news release is that Google has pulled a classic blocking move or dang. Microsoft has been for decades the stallion in the stall. Now the old nag has some competition from Googzilla, er, excuse me, Google. Word of this deal was floating around for several months, but the cited news release puts Microsoft in general and Copilot in particular on notice that it is no longer the de facto solution to a smart Department of War’s digital needs. Imagine: a quarter century after screwing up a bid to index the US government servers, Google has emerged as a “winner” among “several frontier AI capabilities” and will reside on “the Department’s new bespoke AI platform.”
This is big news for Google, for Microsoft and its certified partners, and, of course, for the PowerPoint users at the DoW.
The official document says:
The first instance on GenAI.mil, Gemini for Government, empowers intelligent agentic workflows, unleashes experimentation, and ushers in an AI-driven culture change that will dominate the digital battlefield for years to come. Gemini for Government is the embodiment of American AI excellence, placing unmatched analytical and creative power directly into the hands of the world’s most dominant fighting force.
But what about Sage, Seerist, and the dozens of other smart platforms? Obviously these solutions cannot deliver “intelligent agentic workflows” or unleash the “AI driven culture change” needed for the “digital battlefield.” Let’s hope so. Because some of those smart drones from a US firm have failed real world field tests in Ukraine. Perhaps the smart drone folks can level up instead of doing marketing?
I noted this statement:
The Department is providing no-cost training for GenAI.mil to all DoW employees. Training sessions are designed to build confidence in using AI and give personnel the education needed to realize its full potential. Security is paramount, and all tools on GenAI.mil are certified for Controlled Unclassified Information (CUI) and Impact Level 5 (IL5), making them secure for operational use. Gemini for Government provides an edge through natural language conversation, retrieval-augmented generation (RAG), and is web-grounded against Google Search to ensure outputs are reliable and dramatically reduces the risk of AI hallucinations.
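The “retrieval-augmented generation” the announcement touts can be sketched in a few lines. This is a generic, hypothetical illustration of the pattern, not Gemini’s implementation: real systems use vector embeddings and an LLM call, not the crude keyword matching here.

```python
# Minimal RAG sketch: retrieve the documents most relevant to a query,
# then ground the prompt in them so the model answers from supplied
# context rather than from memory (the claimed hallucination-reducer).
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Score each document by naive keyword overlap with the query.
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    # Instruct the model to answer only from the retrieved context.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "IL5 covers Controlled Unclassified Information workloads.",
    "PowerPoint remains a common briefing format.",
    "RAG grounds model output in retrieved documents.",
]
print(build_grounded_prompt("What does RAG ground output in", corpus))
```

Swap the keyword scorer for an embedding search and send the grounded prompt to a model, and you have the basic shape of what “web-grounded against Google Search” describes.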
But wait, please. I thought Microsoft and Palantir were doing the bootcamps, demonstrating, teaching, and then deploying next generation solutions. Those forward deployed engineers and the Microsoft certified partners have been beavering away for more than a year. Who will be doing the training? Will it be Googlers? I know that YouTube has some useful instructional videos, but those are from third parties. Google’s training is — how shall I phrase it — less notable than some of its other capabilities like publicizing its AI prowess.
The last paragraph of the document does not address the questions I have, but it does have a stentorian ring in my opinion:
GenAI.mil is another building block in America’s AI revolution. The War Department is unleashing a new era of operational dominance, where every warfighter wields frontier AI as a force multiplier. The release of GenAI.mil is an indispensable strategic imperative for our fighting force, further establishing the United States as the global leader in AI.
Several observations:
- Google is now getting its chance to put Microsoft in its place from inside the Department of War. Maybe the Copilot can come along for the ride, but it could be put on leave.
- The challenge of training is interesting. Training is truly a big deal, and I am curious how that will be handled. The DoW has lots of people to teach about the capabilities of Gemini AI.
- Google may face some push back from its employees. The company has been working to stop the Googlers from getting out of the company prescribed lanes. Will this shift to warfighting create some extra work for the “leadership” of that estimable company? I think Google’s management methods will be exercised.
Net net: Google knows about advertising. Does it have similar capabilities in warfighting?
Stephen E Arnold, December 10, 2025

