AI Year in Review: The View from an Expert in France
December 11, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I suggest you read “Stanford, McKinsey, OpenAI: What the 2025 Reports Tell Us about the Present and Future of AI (and Autonomous Agents) in Business.” The document is in French. You can get an okay translation via the Google or Yandex.
I have neither the energy nor the inclination to do a blue chip consulting type of analysis of this fine synthesis of multiple source documents. What I will do in this blog post is highlight several statements and offer a comment or two. For context, I have read some of the sources the author Fabrice Frossard has cited. M. Frossard is a graduate of Ecole Supérieure Libre des Sciences Commerciales Appliquées and the Ecole de Guerre Economique in Paris, I think. These are good places to learn how to think about business issues. Remember: I am a dinobaby and generally too lazy and inept to do “real” research.
Let’s dive into his 2,000-word write up.
The first point that struck me is that he includes what I think is a point not given sufficient emphasis by the experts in the US. This theme is not forced down the reader’s throat, but it has significant implications for M. Frossard’s comments about the need to train people to use smart software. The social implications of AI and the training create a new digital divide. Like the economic divide in the US and some other countries, crossing the border is not going to be possible for many people. Remember: these people have been trained to use the smart software deployed, not to master it. When one cannot get from ignorance to informed expertise, that person is likely to lose a job. Okay, here’s the comment from the source document:
To put it another way: if AI is now everywhere, its real mastery remains the prerogative of an elite.
Is AI a winner today? Not a winner, but it is definitely an up-and-comer in the commercial world. M. Frossard points out:
- McKinsey reveals that nearly two thirds of companies are still stuck in the experimentation or piloting phase.
- The elite escaping: only 7% of companies have successfully deployed AI in a fully integrated manner across the entire organization.
- Peak workers use coding or data analysis tools 17 times more than the median user.
These and similar facts support the point that “the ability to extract value creates a new digital divide, no longer based on access, but on the sophistication of use.” Keep this in mind when it comes to learning a new skill or mastering a new area of competence like smart software. No, typing a prompt is not expert use. Typing a prompt is like using an automatic teller machine to get money. Basic use is not an expert-level capability.

If Mary cannot “learn” AI and demonstrate exceptional skills, she’s going to be working as an Etsy.com reseller. Thanks, Venice.ai. Not what I prompted but I understand that you are good enough, cash strapped, and degrading.
The second point is that in 2025, AI does not pay for itself in every use case. M. Frossard offers:
EBIT impact still timid: only 39% of companies report an increase in their EBIT (earnings before interest and taxes) attributable to AI, and for the most part, this impact remains less than 5%.
One interesting use case comes from a McKinsey report where billability is an important concept. The idea is that a bit of Las Vegas type thinking is needed when it comes to smart software. M. Frossard writes:
… the most successful companies [using artificial intelligence] are paradoxically those that report the most risks and negative incidents.
Take risks and win big seems to be one interpretation of this statement. The timid and inept will be pushed aside.
Third, I was delighted to see that M. Frossard picked up on some of the crazy spending for data centers. He writes:
The cost of intelligence is collapsing: A major accelerating factor noted by the Stanford HAI Index is the precipitous fall in inference costs. The cost to achieve performance equivalent to GPT-3.5 has been divided by 280 in 18 months. This commoditization of intelligence finally makes it possible to make complex use cases profitable which were economically unviable in 2023. Here is a paradox: the more efficient and expensive artificial intelligence becomes to produce (exploding training costs), the less expensive it is to consume (free-fall inference costs). This mental model suggests that intelligence becomes an abundant commodity, leading not to a reduction, but to an explosion of demand and integration.
Several ideas bubble up from this passage. First, we are back to training. Second, we are back to having significant expertise. Third, the “abundant commodity” idea produces greater demand. The problem (in addition to not having power for the data centers) will be finding people with exceptional AI capabilities.
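Out of curiosity, I did the arithmetic on that “divided by 280 in 18 months” claim. A back-of-the-envelope sketch; the 280 and the 18 months come from the quoted passage, the monthly framing is my own:

```python
# Back-of-the-envelope check of the Stanford HAI claim cited in the essay:
# inference cost for GPT-3.5-level performance divided by 280 in 18 months.
# The 280 and 18 come from the quoted passage; the monthly framing is mine.

total_drop = 280   # cost divided by this factor
months = 18

monthly_factor = (1 / total_drop) ** (1 / months)
print(f"Implied monthly cost multiplier: {monthly_factor:.3f}")
print(f"Implied monthly decline: {1 - monthly_factor:.1%}")
# Roughly a 27 percent price cut every month, compounding.
```

A 27 percent price cut every month, compounding. No wonder M. Frossard expects demand to explode rather than shrink.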
Fourth, the replacement of some humans may not be possible. The essay reports:
the deployment of agents at scale remains rare (less than 10% in a given function according to McKinsey), hampered by the need for absolute reliability and data governance.
Data governance is like truth, love, and ethics. Easy to say and hard to define. The reliability angle is slightly less tricky. These two AI molecules require a catalyst like an expert human with significant AI competence. And this returns the essay to training. M. Frossard writes:
The transformation of skills: The 115K report emphasizes the urgency of training. The barrier is not technological, it is human. Businesses face a cultural skills gap. It’s not about learning to “prompt”, but about learning to collaborate with non-human intelligence.
Finally, the US has a China problem. M. Frossard points out:
… If the USA dominates investment and the number of models, China is closing the technical gap. On critical benchmarks such as mathematics or coding, the performance gap between the US and Chinese models has narrowed to nothing (less than 1 to 3 percentage points).
Net net: If an employee cannot be trained, that employee is likely to be starting a business at home. If the trained employees are not exceptional, those folks may be terminated. Elites like other elite things. AI may be good enough, but it provides an “objective” way to define and burn dead wood.
Stephen E Arnold, December 11, 2025
Social Media Companies: Digital Drug Pushers?
December 11, 2025
Social media is a drug. Let’s be real, it’s not a real drug but it affects the brain in the same manner as drugs and alcohol. Social media stimulates the pleasure centers of the brain, releases endorphins, and creates an immediate hit. Delayed gratification becomes a thing of the past as users are constantly seeking their thrills with instantaneous hits from TikTok, Snapchat, Instagram, Facebook, and YouTube.
Politico includes a quote from the recent lawsuit filed against Meta in Northern California that makes a great article title: “‘We’re Basically Pushers’: Court Filing Alleges Staff At Social Media Giants Compared Their Platforms To Drugs.” According to the lawsuit, Meta, Instagram, TikTok, Snapchat, and YouTube ignored their platforms’ potential dangers and hid them from users.
The lawsuit has been ongoing for years, and a federal judge ordered its contents to be opened in October 2025. Here are the details:
“The filing includes a series of detailed reports from four experts, who examined internal documents, research and direct communications between engineers and executives at the companies. Experts’ opinions broadly concluded that the companies knew their platforms were addictive but continued to prioritize user engagement over safety.”
It sounds like every big company ever. Money over consumer safety. We’re doomed.
Whitney Grace, December 11, 2025
Google Data Slurps: Never, Ever
December 11, 2025
Here’s another lie from Googleland via Techspot, “Google Denies Gmail Reads Your Emails And Attachments To Train AI, But Here’s How To Opt-Out Anyway.” Google claims that it doesn’t use emails and attachments to train AI, but we know that’s false. Google correctly claims that it uses user-generated data for personalization of its applications, like Gmail. We all know that’s a workaround to use that data for other purposes.
The article includes instructions on how to opt out of information being used to train AI and “personalize” experiences. Gmail users, however, have had bad experiences with that option, including the need to turn the feature off multiple times.
Google claims it is committed to privacy but:
“Google has flatly denied using user content to train Gemini, noting that Gmail has offered some of these features for many years. However, the Workspace menu refers to newly added Gemini functionality several times.
The company also denied automatically modifying user permissions, but some people have reported needing multiple attempts to turn off smart features.”
There are also security vulnerabilities:
“In addition to raising privacy concerns, Gmail’s AI functionality has exposed serious vulnerabilities. In March, Mozilla found that attackers could easily inject prompts that would cause the client’s AI generated summaries to become phishing messages.”
Imagine that one little digital switch protects your privacy and data. Methinks it is a placebo effect.
Whitney Grace, December 11, 2025
Google Gemini Hits Copilot with a Dang Block: Oomph
December 10, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Smart software is finding its way into interesting places. One of my newsfeeds happily delivered “The War Department Unleashes AI on New GenAI.mil Platform.” Please, check out the original document because it contains some phrasing which is difficult for a dinobaby to understand. Here’s an example:
The War Department today announced the launch of Google Cloud’s Gemini for Government as the first of several frontier AI capabilities to be housed on GenAI.mil, the Department’s new bespoke AI platform.
There are a number of smart systems with government wide contracts. Is the Google Gemini deal just one of the crowd or is it the cloud over the other players? I am not sure what a “frontier” capability is when it comes to AI. The “frontier” of AI seems to be shifting each time a performance benchmark comes out from a GenX consulting firm or when a survey outfit produces a statement that QWEN accounts for 30 percent of AI involving an open source large language model. The idea of a “bespoke AI platform” is fascinating. Is it like a suit tailored on Oxford Street or a vehicle produced by Chip Foose, or is it one of those enterprise software systems with extensive customization? Maybe like an IBM government systems solution?
Thanks, Google. Good enough. I wanted square and you did horizontal, but that’s okay. I understand.
And that’s just the first sentence. You are now officially on your own.
For me, the big news is that the old Department of Defense loved PowerPoint. If you have bumped into any old school Department of Defense professionals, you know the PowerPoint is the method of communication. Sure, there’s Word and Excel. But the real workhorse is PowerPoint. And now that old nag has Copilot inside.
The way I read this news release is that Google has pulled a classic blocking move or dang. Microsoft has been for decades the stallion in the stall. Now, the old nag has some competition from Googzilla, er, excuse me, Google. Word of this deal was floating around for several months, but the cited news release puts Microsoft in general and Copilot in particular on notice that it is no longer the de facto solution to a smart Department of War’s digital needs. Imagine: a quarter century after screwing up a bid to index the US government servers, Google has emerged as a “winner” among “several frontier AI capabilities” and will reside on “the Department’s new bespoke AI platform.”
This is big news for Google, for Microsoft and its certified partners, and, of course, for the PowerPoint users at the DoW.
The official document says:
The first instance on GenAI.mil, Gemini for Government, empowers intelligent agentic workflows, unleashes experimentation, and ushers in an AI-driven culture change that will dominate the digital battlefield for years to come. Gemini for Government is the embodiment of American AI excellence, placing unmatched analytical and creative power directly into the hands of the world’s most dominant fighting force.
But what about Sage, Seerist, and the dozens of other smart platforms? Obviously these solutions cannot deliver “intelligent agentic workflows” or unleash the “AI driven culture change” needed for the “digital battlefield.” Let’s hope so. Because some of those smart drones from a US firm have failed real world field tests in Ukraine. Perhaps the smart drone folks can level up instead of doing marketing?
I noted this statement:
The Department is providing no-cost training for GenAI.mil to all DoW employees. Training sessions are designed to build confidence in using AI and give personnel the education needed to realize its full potential. Security is paramount, and all tools on GenAI.mil are certified for Controlled Unclassified Information (CUI) and Impact Level 5 (IL5), making them secure for operational use. Gemini for Government provides an edge through natural language conversation, retrieval-augmented generation (RAG), and is web-grounded against Google Search to ensure outputs are reliable and dramatically reduces the risk of AI hallucinations.
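For the dinobabies in the audience, “retrieval-augmented generation” means the model fetches relevant documents first and answers from those, which is how the press release can promise fewer hallucinations. Here is a minimal sketch of the pattern; nothing below is from GenAI.mil or Gemini for Government, and the function names are my own stand-ins:

```python
# A toy illustration of the RAG pattern named in the news release. The
# retriever and generator below are stand-ins, not any real system's API.
import string

def tokens(text: str) -> set[str]:
    """Lowercase and strip punctuation so 'IL5?' matches 'IL5'."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = tokens(query)
    return sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Stand-in for the model call; a real system would query an LLM here."""
    return f"[answer grounded in]: {prompt}"

corpus = [
    "CUI is Controlled Unclassified Information.",
    "IL5 is a DoD cloud security impact level.",
    "PowerPoint remains the workhorse briefing format.",
]

query = "What does IL5 mean?"
context = "\n".join(retrieve(query, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```

Whether web-grounding against Google Search “dramatically reduces” hallucinations is, of course, a claim, not a measurement.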
But wait, please. I thought Microsoft and Palantir were doing the bootcamps, demonstrating, teaching, and then deploying next generation solutions. Those forward deployed engineers and the Microsoft certified partners have been beavering away for more than a year. Who will be doing the training? Will it be Googlers? I know that YouTube has some useful instructional videos, but those are from third parties. Google’s training is — how shall I phrase it — less notable than some of its other capabilities like publicizing its AI prowess.
The last paragraph of the document does not address the questions I have, but it does have a stentorian ring in my opinion:
GenAI.mil is another building block in America’s AI revolution. The War Department is unleashing a new era of operational dominance, where every warfighter wields frontier AI as a force multiplier. The release of GenAI.mil is an indispensable strategic imperative for our fighting force, further establishing the United States as the global leader in AI.
Several observations:
- Google is now getting its chance to put Microsoft in its place from inside the Department of War. Maybe the Copilot can come along for the ride, but it could be put on leave.
- The challenge of training is interesting. Training is truly a big deal, and I am curious how that will be handled. The DoW has lots of people to teach about the capabilities of Gemini AI.
Google may face some pushback from its employees. The company has been working to stop the Googlers from getting out of the company prescribed lanes. Will this shift to warfighting create some extra work for the “leadership” of that estimable company? I think Google’s management methods will be exercised.
Net net: Google knows about advertising. Does it have similar capabilities in warfighting?
Stephen E Arnold, December 10, 2025
Google Presents an Innovative Way to Say, “Generate Revenue”
December 9, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
One of my contacts sent me a link to an interesting document. Its title is “A Pragmatic Vision for Interpretability.” I am not sure about the provenance of the write up, but it strikes me as an output from legal, corporate, and wizards. First impression: Very lengthy. I estimate that it requires about 11,000 words to say, “Generate revenue.” My second impression: A weird blend of consulting speak and nervousness.

A group of Googlers involved in advanced smart software ideation get a phone call clarifying they have to hit revenue targets. No one looks too happy. The esteemed leader is on the conference room wall. He provides a North Star to the wandering wizards. Thanks, Venice.ai. Good enough, just like so much AI system output these days.
The write up is too long to meander through its numerous sections, arguments, and arm waving. I want to highlight three facets of the write up and leave it up to you to print this puppy out, read it on a delayed flight, and consider how different this document is from the no output approach Google used when it was absolutely dead solid confident that its search-ad business strategy would rule the world forever. Well, forever seems to have arrived for Googzilla. Hence, be pragmatic. This, in my experience, is McKinsey speak for hit your financial targets or hit the road.
First, consider this selected set of jargon:
Comparative advantage (maybe keep up with the other guys?)
Load-bearing beliefs
“Mech Interp” / “mechanistic interpretability” (as opposed to “classic” interp)
Method minimalism
North Star (is it the person on the wall in the cartoon or just revenue?)
Proxy task
SAE (maybe sparse autoencoders?)
Steering against evaluation awareness (maybe avoiding real world feedback?)
Suppression of eval-awareness (maybe real-world feedback?)
Time-box for advanced research
The document tries too hard to avoid saying, “Focus on stuff that makes money.” I think that, however, is what the word choice is trying to present in very fancy, quasi-baloney jingoism.
Second, take a look at the three sets of fingerprints in what strikes me as a committee-written document.
- Researchers who want to follow their ideas about smart software just as they have done at Google for many years
- Lawyers and art history majors who want to cover their tailfeathers when Gemini goes off the rails
- Google leadership who want money or at the very least research that leads to products.
I can see a group meeting virtually, in person, and in the trenches of a collaborative Google Doc until this masterpiece of management weirdness is given the green light for release. Google has become artful in make-work, wordsmithing, and pretend reconciliation of the battles among the different factions, city states, and empires within Google. One can almost anticipate how the head of ad sales reacts to money pumped into data centers and research groups who speak a language familiar to Klingons.
Third, consider why Google felt compelled to crank out a tortured document to nail on the doors of an AI conference. When I interacted with Google over a number of years, I did not meet anyone reminding me of Martin Luther. Today, if I were to return to Shoreline Drive, I might encounter a number of deep fakes armed with digital hammers and fervid eyes. I think the Google wants to make sure that no more Loons and Waymos become the butt of stand up comedians on late night TV or (heaven forbid) TikTok. The dead cat in the Mission and the dead puppy in what’s called (I think) the Western Addition come to mind. (I used to live in Berkeley, and I never paid much attention to the idiosyncratic names slapped on undifferentiable areas of the City by the Bay.)
I think that Google leadership seeks in this document:
- To tell everyone it is focusing on stuff that sort of works. The crazy software that is just like Sundar is not on the to-do list
- To remind everyone at the Google that we have to pay for the big, crazy data centers in space, our own nuclear power plants, and the cost of the home brew AI chips. Ads alone are no longer going to be 24×7 money printing machines because of OpenAI
- To try to reduce the tension among the groups, cliques, and digital street gangs in the offices and the virtual spaces in which Googlers cogitate, nap, and use AI to be more efficient.
Net net: Save this document. It may become a historical artefact.
Stephen E Arnold, December 9, 2025
The Web? She Be Dead
December 9, 2025
Journalists, Internet experts, and everyone with a bit of knowledge have declared the World Wide Web dead for thirty years. The term “World Wide Web” officially died with the new millennium, but what about the Internet itself? Ernie Smith at Tedium wrote about the demise of the Web: “The Sky Is Falling, The Web Is Dead.” Smith noticed that experts have declared the Web dead many times, and he decided to investigate.
He turned to another expert: George Colony, the founder of Forrester Research. Forrester Research is one of the premier tech and business advisory firms in the world. Smith wrote this about Colony and his company:
“But there’s one area where the company—particularly Colony—gets it wrong. And it has to do with the World Wide Web, which Colony declared “dead” or dying on numerous occasions over a 30-year period. In each case, Colony was trying to make a bigger point about where online technology was going, without giving the Web enough credit for actually being able to get there.”
Smith strolls through instances of Colony declaring the Web dead. The first was in 1995, followed by many other declarations of the dead Web. Smith made another smart observation:
“Can you see the underlying fault with his commentary? He basically assumed that Web technology would never improve and would be replaced with something else—when what actually happened is that the Web eventually integrated everything he wanted, plus more.
Which is funny, because Forrester’s main rival, International Data Corp., essentially said this right in the piece. ‘The web is the dirt road, the basic structure,’ IDC analyst Michael Sullivan-Trainor said. ‘The concept that you can kill the Web and start from square one is ridiculous. We are talking about using the Web, evolving it.’”
The Web and Internet evolve. Technology evolves. Smith has an optimistic view that is true about the Web: “I, for one, think the Web will do what it always does: Democratize knowledge.”
Whitney Grace, December 9, 2025
Clippy, How Is Copilot? Oh, Too Bad
December 8, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
In most of my jobs, rewards landed on my desk when I sold something. Of the products rolled out by the firms silly enough to hire me, I cannot remember one that failed. The sales professionals were the early warning system for many of our consulting firm’s clients. Management provided money to a product manager or R&D whiz with a great idea. Then a product or new service idea emerged, often at a company event. Some were modest, but others featured bells and whistles. One such roll out featured a big name person who was a former adviser to several presidents. These firms were either lucky or well managed. Product dogs, diseased ferrets, and outright losers were identified early and the efforts redirected.

Two sales professionals realize that their prospects resist Microsoft’s agentic pawing. Mortgages must be paid. Sneakers must be purchased. Food has to be put on the table. Sales are needed, not push backs. Thanks, Venice.ai. Good enough.
But my employers were in tune with what their existing customer base wanted. Climbing a tall tree and going out on a limb were not common occurrences. Even Apple, which resides in a peculiar type of commercial bubble, recognizes a product that does not sell. A recent example is the itsy bitsy, teeny weenie mobile thingy. Apple bounced back with the Granny Scarf designed to hold any mobile phone. The thin and light model is not killed; it’s just not everywhere like the old reliable orange iPhone.
Sales professionals talk to prospects and customers. If something is not selling, the sales people report, “Problemo, boss.”
In the companies which employed me, the sales professionals knew what was coming and could mention it in appropriate terms to those in the target market. This happened before the product or service was in production or available to clients. My employers (Halliburton; Booz, Allen; and a couple of others held in high esteem) had the R&D, the market signals, the early warning system for bad ideas, and the refinement or improvement mechanism working in a reliable way.
I read “Microsoft Drops AI Sales Targets in Half after Salespeople Miss Their Quotas.” The headline suggested three things to me instantly:
- The pre-sales early warning radar system did not exist or it was broken
- The sales professionals said in numbers, “Boss, this Copilot AI stuff is not selling.”
- Microsoft committed billions of dollars and significant, expensive professional staff time to something that prospects and customers do not rush to write checks for, use, or tell their friends is the next big thing.
The write up says:
… one US Azure sales unit set quotas for salespeople to increase customer spending on a product called Foundry, which helps customers develop AI applications, by 50 percent. Less than a fifth of salespeople in that unit met their Foundry sales growth targets. In July, Microsoft lowered those targets to roughly 25 percent growth for the current fiscal year. In another US Azure unit, most salespeople failed to meet an earlier quota to double Foundry sales, and Microsoft cut their quotas to 50 percent for the current fiscal year. The sales figures suggest enterprises aren’t yet willing to pay premium prices for these AI agent tools. And Microsoft’s Copilot itself has faced a brand preference challenge: Earlier this year, Bloomberg reported that Microsoft salespeople were having trouble selling Copilot to enterprises because many employees prefer ChatGPT instead.
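A quick sketch of what the quota math means in dollars. The growth percentages come from the article; the baseline revenue figure is mine and purely hypothetical:

```python
# Quota arithmetic from the article, applied to a hypothetical sales unit.
# Only the growth percentages are from the report; the baseline is invented.

baseline = 10_000_000          # hypothetical: unit's Foundry revenue last year

old_target = baseline * 2.00   # "double Foundry sales" = 100% growth quota
new_target = baseline * 1.50   # quota cut to 50% growth
print(f"Old quota: ${old_target:,.0f}")
print(f"New quota: ${new_target:,.0f}")
print(f"Growth dollars no longer required: ${old_target - new_target:,.0f}")
```

Halving a growth quota does not halve the target; it halves the stretch. Even so, the stretch was apparently too much.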
Microsoft appears to have listened to the feedback. The adjustment, however, does not address the failure to implement the type of market probing process used by Halliburton and Booz, Allen: Microsoft implemented the “think it and it will become real” approach. The thinking in this case is that software can perform human work roles in a way that is equivalent to or better than a human’s execution.
I may be a dinobaby, but I figured out quickly that smart software has been for the last three years a utility. It is not quite useless, but it is not sufficiently robust to do the work that I do. Other people are on the same page with me.
My take away from the lower quotas is that Microsoft should have a rethink. The OpenAI bet, the AI acquisitions, the death march to put software that makes mistakes in applications millions use in quite limited ways, and the crazy publicity output to sell Copilot are sending Microsoft leadership both audio and visual alarms.
Plus, OpenAI has copied Google’s weird Red Alert. Since Microsoft has skin in the game with OpenAI, perhaps Microsoft should open its eyes and check out the beacons and listen to the klaxons ringing in Softieland sales meetings and social media discussions about Microsoft AI? Just a thought. (That Telegram virtual AI data center service looks quite promising to me. Telegram’s management is avoiding the Clippy-type error. Telegram may fail, but that outfit is paying GPU providers in TONcoin, not actual fiat currency. The good news is that MSFT can make Azure AI compute available to Telegram and get paid in TONcoin. Sounds like a plan to me.)
Stephen E Arnold, December 8, 2025
Telegram’s Cocoon AI Hooks Up with AlphaTON
December 5, 2025
[This post is a version of an alert I sent to some of the professionals for whom I have given lectures. It is possible that the entities identified in this short report will alter their messaging and delete their Telegram posts. However, the thrust of this announcement is directionally correct.]
Telegram, in its rapid expansion into decentralized artificial intelligence, announced a deal with AlphaTON Capital Corp. The Telegram post revealed that AlphaTON would be a flagship infrastructure and financial partner. The announcement was posted to the Cocoon Group within hours of AlphaTON getting clear of U.S. SEC “baby shelf” financial restrictions. AlphaTON promptly launched a $420.69 million securities push. Either Telegram and AlphaTON acted coincidentally, or Pavel Durov moved to make clear his desire to build a smart, Telegram-anchored financial service.
AlphaTON, a Nasdaq microcap formerly known as Portage Biotech, rebranded in September 2025. The “new” AlphaTON claims to be deploying Nvidia B200 GPU clusters to support Cocoon, Telegram’s confidential-compute AI network. The company’s pivot from oncology to crypto-finance and AI infrastructure was sudden. Plus, AlphaTON’s CEO Brittany Kaiser (best known for Cambridge Analytica) has allegedly interacted with Russian political and business figures during earlier data-operations ventures. If the allegations are accurate, Ms. Kaiser has connections to Russia-linked influence and financial networks. Telegram is viewed by some organizations like Kucoin as a reliable operational platform for certain financial activities.
Telegram has positioned AlphaTON as a partner and developer in the Telegram ecosystem. Firms like Huione Guarantee allegedly used Telegram for financial maneuvers that resulted in criminal charges. Other alleged uses of the Telegram platform have included other illegal activities identified in the more than a dozen criminal charges for which Pavel Durov awaits trial in France. Telegram’s instant promotion of AlphaTON, combined with the firm’s new ability to raise hundreds of millions, points to a coordinated strategy to build an AI-enabled financial services layer using Cocoon’s VAIC or virtual artificial intelligence complex.
The message seems clear. Telegram is not merely launching a distributed AI compute service; it is enabling a low latency, secrecy enshrouded AI-crypto financial construct. Telegram and AlphaTON both see an opportunity to profit from a fusion of distributed AI, cross jurisdictional operation, and a financial pay off from transactions at scale. For me and my research team, the AlphaTON tie-up signals that Telegram’s next frontier may blend decentralized AI, speculative finance, and actors operating far from traditional regulatory guardrails.
In my monograph “Telegram Labyrinth” (available only to law enforcement, US intelligence officers, and cyber attorneys in the US), I argue that Telegram requires close monitoring and a new generation of intelware software. Yesterday’s tools were not designed for what Telegram is deploying itself and with its partners. Thank you.
Stephen E Arnold, December 5, 2025, 10:34 am US Eastern time
Apple Misses the AI Boat Again
December 4, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Apple and Telegram have a characteristic in common. Both did not recognize the AI boomlet that began in 2020 or so. Apple was thinking about Granny scarfs that could hold an iPhone and working out ways to cope with its dependence on Chinese manufacturing. Telegram was struggling with the US legal system and trying to create a programming language that a mere human could use to code a distributed application.
Apple’s ship has sailed, and it may dock at Google’s Gemini private island or it could decide to purchase an isolated chunk of real estate and build its de-perplexing AI system at that location.

Thanks, MidJourney. Good enough.
I thought about missing a boat or a train. The reason? I read “Apple AI Chief John Giannandrea Retiring After Siri Delays.” I simply don’t know who has been responsible for Apple AI. Siri did not work when I looked at it on my wife’s iPhone many years ago. Apparently it doesn’t work today. Could that be a factor in the leadership changes at the Tim Apple outfit?
The write up states:
Giannandrea will serve as an advisor between now and 2026, with former Microsoft AI researcher Amar Subramanya set to take over as vice president of AI. Subramanya will report to Apple engineering chief Craig Federighi, and will lead Apple Foundation Models, ML research, and AI Safety and Evaluation. Subramanya was previously corporate vice president of AI at Microsoft, and before that, he spent 16 years at Google.
Apple will probably have a person who knows some people to call at Softie and Google headquarters. However, when will the next AI boat arrive? Apple excelled at announcing AI, but no boat arrived. Telegram has an excuse; for example, its owner Pavel Durov has been embroiled in legal hassles and arm wrestling with the reality that developing complex applications for the Telegram platform is too difficult. One would have thought that Apple could have figured out a way to improve Siri, but it apparently was lost in a reality distortion field. Telegram didn’t because Pavel Durov was in jail in Paris, then confined to the country, and had to report to the French judiciary like a truant school boy. Apple just failed.
The write up says:
Giannandrea’s departure comes after Apple’s major iOS 18 Siri failure. Apple introduced a smarter, “Apple Intelligence” version of Siri at WWDC 2024, and advertised the functionality when marketing the iPhone 16. In early 2025, Apple announced that it would not be able to release the promised version of Siri as planned, and updates were delayed until spring 2026. An exodus of Apple’s AI team followed as Apple scrambled to improve Siri and deliver on features like personal context, onscreen awareness, and improved app integration. Apple is now rumored to be partnering with Google for a more advanced version of Siri and other Apple Intelligence features that are set to come out next year.
My hunch is that grafting AI into the bizarro world of the iPhone and other Apple computing devices may be a challenge. Telegram’s solution is to not do hardware. Apple is now an outfit distinguishing itself by missing the boat. When does the next one arrive?
Stephen E Arnold, December 4, 2025
A New McKinsey Report Introduces New Jargon for Its Clients
December 3, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
I read “Agents, Robots, and Us: Skill Partnerships in the Age of AI.” The write up explains that lots of employees will be terminated. Think of machines displacing seamstresses. AI is going to do that to jobs, lots of jobs.
I want to focus on a different aspect of the McKinsey Global Institute Report (a PR and marketing entity not unlike Telegram’s TON Foundation in my view).

Thanks, Venice. Your cartoon contains neither females nor minorities. That’s definitely a good enough approach. But you have probably done a number on a few graphic artists.
First, the report offers you, the potential client, an opportunity to use McKinsey’s AI chatbot. The service is a test, but I have a hunch that it is not much of a test. The technology will be deployed so that McKinsey can terminate those who underperform in certain work related tasks. The criteria for keeping one’s job at a blue chip consulting firm vary from company to company. But those who don’t quit to find greener or at least less crazy pastures will now work under the watchful eye of McKinsey AI. It takes a great deal of time to write a meaningful analysis of a colleague’s job performance. Let AI do it, with exceptions made for “special” hires of course. Give it a whirl.
Second, the report presents what I call consultant facts. These are statements which link the painfully obvious with a rationale. Let me give you an example from this pre-Thanksgiving sales document. McKinsey states:
Two thirds of US work hours require only nonphysical capabilities
The painfully obvious: Most professional work is not “physical.” That means 67 percent of an employee’s fully loaded cost can be shifted to smart or semi-smart, good enough AI agentic systems. Then the obvious and the implication of financial benefits are supported by a truly blue chip chart. I know because, as you can see, the graphics are blue. Here’s a segment of the McKinsey graph:

Notice that the chart is presented so that a McKinsey professional can explain the nested bar charts and expand on such data as “5 percent of a health care workforce can be agentized.” Will that resonate with hospital administrators working for a roll up of individual hospitals? That’s big money. Get the AI working in one facility and then roll it out. Boom. An upside that seems credible. That’s the key to the consultant facts. Additional analysis is needed to tailor these initial McKinsey graph data to a specific use case. As a sales hook, this works and has worked for decades. Fish never understand hooks with plastic bait. Deer never quite master automobiles and headlights.
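Below is the kind of arithmetic the chart implies but never shows. The 5 percent agentization figure is McKinsey’s; the headcount and fully loaded cost are my own illustrative inputs:

```python
# The sales hook behind the "5 percent of a health care workforce can be
# agentized" chart. Only the 5 percent figure is McKinsey's; the headcount
# and fully loaded cost below are invented for illustration.

headcount = 2_000            # hypothetical hospital roll up staff
fully_loaded_cost = 95_000   # hypothetical average annual cost per employee, USD
agentizable_share = 0.05     # from the McKinsey chart discussed above

annual_payroll = headcount * fully_loaded_cost
exposed_cost = annual_payroll * agentizable_share
print(f"Annual fully loaded payroll: ${annual_payroll:,.0f}")
print(f"Cost exposed to agentization: ${exposed_cost:,.0f}")
# About $9.5 million a year per facility. Get it working in one facility,
# roll it out across the group. Boom.
```

That is the number a hospital roll up administrator sees. The report does not have to show the spreadsheet; the reader builds it in his head.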
Third, the report contains sales and marketing jargon for 2026 and possibly beyond. McKinsey hopes for decades to come I think. Here’s a small selection of the words that will be used, recycled, and surface in lectures by AI experts to quite large crowds of conference attendees:
AI adjacent capabilities
AI fluency
Embodied AI
HMC or human machine collaboration
High prevalence skills
Human-agent-robot roles
Technical automation potential
If you cannot define these, you need to hire McKinsey. If you want to grow as a big time manager, send that email or FedEx with a handwritten note on your engraved linen stationery.
Fourth, some humans will be needed. McKinsey wants to reassure its clients that software cannot replace the really valuable human. What do you think makes a really valuable worker beyond AI fluency? [a] A professional who signed off on a multi-million dollar McKinsey consulting contract? [b] A person who helped McKinsey consultants get the needed data and interviews from an otherwise secretive client with compartmentalized and secure operating units? [c] A former McKinsey consultant now working for the firm to which McKinsey is pitching an AI project.
Fifth, the report introduces a new global index. The data in this free report is unlikely to be free in the future. McKinsey clients can obtain these data. This new global index is called the Skills Change Index. Here’s an example. You can get a bit more marketing color in the cited report. Just feast your eyes on this consultant fact packed chart:

Several comments. The weird bubble in the right hand page’s margin is your link to the McKinsey AI system. Give it a whirl, please. Look at the wonderland of information in a single chart presented in true blue, “just the facts, ma’am” style. The hawk-eyed will see that “leadership” seems immune to AI. Obviously senior management smart enough to hire McKinsey will be AI fluent and know the score, or at least the projected financial payoff resulting from terminating staff who fail to up their game when robots do two thirds of the knowledge workers’ tasks.
Why has McKinsey gone to such creative lengths to create an item like this marketing collateral? Multiple teams labored on this online brochure. Graphic designers went through numerous versions of the sliding panels. McKinsey knows there is money in those AI studies. The firm will apply its intellectual method to the wizards who are writing checks to AI companies to build big data centers. Even Google is hedging its bets by packaging its data centers as providers to super wary customers like NATO. Any company can benefit from AI fluency-centric efficiency inputs. Well, not any. The reason is that only companies who can pay McKinsey fees qualify to be clients.
The 11 people identified as the authors have completed the equivalent of a death march. Congratulations. I applaud you. At some point in your future career, you can look back on this document and take pride in providing a road map for companies eager to dump human workers for good enough AI systems. Perhaps one of you will be able to carry a sign in a major urban area that calls attention to your skills? You can look back and tell your friends and family, “I was part of this revolution.” So happy holidays to you, McKinsey, and to the other blue chip firms exploiting good enough smart software.
Stephen E Arnold, December 3, 2025