Way to Go, Waymo: The Non-Googley Drivers Are Breaking the Law
December 26, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
To be honest, I like to capture Googley moments in the real world. Forget the outputs of Google’s wonderful Web search engine. (Hey, where did those pages of links go?) The Google-reality interface is a heck of a lot more fun.
Consider this article, which I assume, like everything I read on the Internet, to be rock-solid, capital-T truth. “Waymo Spotted Driving Wrong Way Down Busy Street.” The write up states as actual factual:
This week, one of Waymo’s fully driverless cabs was spotted blundering down the wrong side of a street in Austin, Texas, causing the human motorists driving in the correct direction to cautiously come to a halt, not unlike hikers encountering a bear.
That was no bear. That was a little Googzilla. These creatures, regardless of physical manifestation, operate by a set of rules and cultural traditions understandable only to those who have been in the Google environment.

Thanks to none of the AI image generators. I had to use three smart software systems to create a pink car driving the wrong way on a one-way street. Great proof of a tiny problem with today’s best: ChatGPT, Venice, and MidJourney. Keep up the meh work.
The cited article continues:
The incident was captured in footage uploaded to Reddit. For a split second, it shows the Waymo flash its emergency signal, before switching to its turn signal. The robotaxi then turns in the opposite direction indicated by its blinker and pulls into a gas station, taking its sweet time.
I beg to differ. Google does not operate on “sweet time.” Google time is a unique way to move toward its ultimate goal: Humans realizing that they are in the path of a little Googzilla. Therefore, adapt to the Googleplex. The Googleplex does not adapt to humanoids. Humanoids click and buy things. Google facilitates this by allowing humanoids to ride in little Googzilla vehicles and absorb Google advertisements.
The write up illustrates that its author fails to grasp the brilliance of the Googzilla’s smart software; to wit:
Waymo recalled a software patch after its robotaxis were caught blowing past stopped school buses with active warning lights and stop signs, including at least one incident where a Waymo drove right by students who were disembarking. Twenty of these incidents were reported in Austin alone, MySA noted, prompting the National Highway Traffic Safety Administration to open an investigation into the company. It’s not just school buses: the cabs don’t always stop for law enforcement, either. Earlier this month, a Waymo careened into an active police standoff, driving just a few feet away from a suspect who was lying facedown in the asphalt while cops had their guns trained on him.
These examples point out the low level of understanding that exists among the humanoids who consume advertising. Googzilla would replace humanoids if it could, but — for now — big and little Googzillas have to tolerate the inefficient friction humanoids introduce to the Google systems.
Let’s recap:
- Humans fail to understand Google rules
- Examples of Waymo “failures” identify the specific weaknesses Gemini can correct
- Little Googzillas define traffic rules
So what if a bodega cat goes to the big urban area with dark alleys in the sky. Study Google and learn.
Stephen E Arnold, December 26, 2025
Yep, Making the AI Hype Real Will Be Expensive. Hundreds of Billions, Probably More, Says Microsoft
December 26, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I really don’t want to write another “if you think it, it will become real.” But here goes. I read “Microsoft AI CEO Mustafa Suleyman Says It Will Cost Hundreds of Billions to Keep Up with Frontier AI in the Next Decade.”
What’s the pitch? The write up says:
Artificial general intelligence, or AGI, refers to AI systems that can match human intelligence across most tasks. Superintelligence goes a step further — systems that surpass human abilities.
So what’s the cost? Allegedly Mr. AI at Microsoft (aka Microsoft AI CEO Mustafa Suleyman) asserts:
it’s going to cost “hundreds of billions of dollars” to compete at the frontier of AI over the next five to 10 years….Not to mention the prices that we’re paying for individual researchers or members of technical staff.
Microsoft seems to have some “we must win” DNA. The company appears to be willing to ignore users’ requests for less of that Copilot goodness.

The vice president of AI finance seems shocked by an AI wizard’s request for additional funds… right now. Thanks, Qwen. Good enough.
Several observations:
- The assumption is that more money will produce results. When? Who knows?
- The mental orientation is that outfits like Microsoft are smart enough to convert dreams into reality. That is a certain type of confidence. A failure is a stepping stone, a learning experience. No big deal.
- The hype has triggered some non-AI consequences. The disappearance of entry-level jobs that AI will do is likely to derail careers. Remember the baloney that online learning was better than sitting in a classroom? Real world engagement is work. Short circuiting that work, in my opinion, is a problem not easily corrected.
Let’s step back. What’s Microsoft doing? First, the company caught Google by surprise in 2022. Now Google is allegedly as good as or better than OpenAI’s technology. Microsoft, therefore, is the follower instead of the pace setter. The result is mild concern with a chance of fear tomorrow. The company’s “leadership” is not stabilizing the company, its messages, and its technology offerings. Wobble wobble. Not good.
Second, Microsoft has demonstrated a “certain blindness” to the amount of money it has spent and apparently will continue to spend. With inputs from the financially adept Mr. Suleyman, the bean counters don’t have a chance. Sure, Microsoft can back out of some data center deals, and it can turn some knobs and dials to keep the company’s finances sparkling in the sun… for a while. How long? Who knows?
Third, even Microsoft fan boys are criticizing the idea of shifting from software that a user uses for a purpose to an intelligent operating system that uses its users. My hunch is that this bulldozing of user requests, preferences, and needs may be what some folks call a “moment.” Google’s Waymo killed a cat in the Mission District. Microsoft may be running over its customers. Is this risky? Who knows?
Fourth, can Microsoft deliver AI that is not like AI from other services; namely, the open source solutions that are available and the customer-facing apps built on Qwen, for example? AI is a utility, and it is not without errors. Some reports suggest that smart software is wrong two thirds of the time. It doesn’t matter what the “real” percentage is. People now associate smart software with mistakes, not with a rock-solid tool like a digital tire pressure gauge.
Net net: Mr. Suleyman will have an opportunity to deliver. For how long? Who knows?
Stephen E Arnold, December 26, 2025
Forget AI AI AI. Think Enron Enron Enron
December 25, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Happy holidays, AI industry. The Financial Times seems to be suggesting that lignite coal may be in the imitation stockings hanging on your mantels. In the nightmare-before-Christmas edition of the orange newspaper, the story “Tech Groups Shift $120bn of AI Data Centre Debt Off Balance Sheets” says:
Creative financing helps insulate Big Tech while binding Wall Street to a future boom or bust.
What’s this mean? The short answer in my opinion is, “Enron Enron Enron.” That was the online oil information shortcake that was inedible, choking a big accounting firm and lots of normal employees and investors. Some died. Houston and Wall Street had a problem. For years after the event, the smell of burning credibility could be detected by those with sensitive noses.
Thanks, Venice.ai. Good enough.
The FT, however, is not into Enron Enron Enron. The FT is into AI AI AI.
The write up says:
Financial institutions including Pimco, BlackRock, Apollo, Blue Owl Capital and US banks such as JPMorgan have supplied at least $120bn in debt and equity for these tech groups’ computing infrastructure, according to a Financial Times analysis.
So what? The FT says:
That money is channeled through special purpose holding companies known as SPVs. The rush of financings, which do not show up on the tech companies’ balance sheets, may be obscuring the risks that these groups are running — and who will be on the hook if AI demand disappoints. SPV structures also increase the danger that financial stress for AI operators in the future could cascade across Wall Street in unpredictable ways.
These sentences struck me as a little too limp. First, everyone knows what happens if AI works and creates the Big Rock Candy Mountain the tech bros will own. That’s okay. Lots of money. No worries. Second, the more likely outcome is [a] rain pours over the sweet treat and it melts gradually or [b] a huge thundercloud perches over the fragile peak and it goes away in a short time. One day a mountain, and the next a sticky mess.
How is this possible? The FT states:
Data center construction has become largely reliant on deep-pocketed private credit markets, a rapidly inflating $1.7tn industry that has itself prompted concerns due to steep rises in asset valuations, illiquidity and concentration of borrowers.
The FT does not mention the fact that there may be insufficient power, water, and people to pull off the data center boom. But that’s okay; the FT wants to make clear that “risky lending” seems to be the go-to approach for some of the hopefuls in the AI AI AI hoped-for boom.
What can make the use of financial engineering to do Enron Enron Enron maneuvers more tricky? How about this play:
A number of tech bankers said they had even seen securitization deals on AI debt in recent months, where lenders pool loans and sell slices of them, known as asset-backed securities, to investors. Two bankers estimated these deals currently numbered in the single-digit billions of dollars. These deals spread the risk of the data center loans to a much wider pool of investors, including asset managers and pension funds.
When playing Enron Enron Enron games, the idea is that “special purpose vehicles” or SPVs reduce financial risk. Just create a separate legal entity with its own balance sheet. If the SPV burns up (salute to Enron), the parent company’s assets are in theory protected. Enron’s money people cooked up some chrome trim for their SPVs; for example, they funded the SPVs with Enron stock. What could go wrong? Nothing, unless the stock tanked. It did. Bingo, another big flame out. Great idea as long as the rain clouds did not park over Big Rock Candy Mountain. But the rains came and stayed.
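Here is a toy calculation, with made-up numbers, of why stock-funded SPV collateral fails exactly when it is needed. The figures are hypothetical; only the mechanism comes from the Enron story above.

```python
# Toy illustration (hypothetical numbers) of an SPV collateralized with
# the parent company's own stock: the collateral and the trigger are
# the same asset, so coverage collapses just when the guarantee matters.
spv_debt = 1_000_000_000        # debt parked off the balance sheet
shares_pledged = 20_000_000     # parent shares backing the SPV

for share_price in (80, 40, 10):            # the stock slides, Enron-style
    collateral = shares_pledged * share_price
    coverage = collateral / spv_debt
    print(f"price ${share_price:>2}: collateral ${collateral:,} "
          f"covers {coverage:.0%} of SPV debt")
```

At $80 a share the SPV looks comfortably over-collateralized; at $10 it covers a fifth of its debt, and the losses snap back onto the parent.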
The result is that the use of these financial fancy dance moves suggests that some AI AI AI outfits are learning the steps to the Enron Enron Enron boogie.
Several observations:
- The “think it and it will work” folks in the AI AI AI business have some doubters among their troops
- The push back about AI leads to wild and crazy policies like those promulgated by Einstein’s old school. See ETH’s AI Policies. These indicate that no one knows exactly what to do with AI.
- Companies like Microsoft are experiencing what might be called post-AI AI AI digital Covid. If the disease spreads, trouble looms until herd immunity kicks in. Time costs money. Sick AI AI AI could be fatal.
Net net: The FT has sent an interesting holiday greeting to the AI AI AI financial engineers. 2026 will be exciting and perhaps a bit stressful for some in my opinion. AI AI AI.
Stephen E Arnold, December 25, 2025
Google Web Indexing: Some Think It Is Degrading. Impossible
December 25, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I think Google’s Web indexing is the cat’s pajamas. It is the best. It has the most Web pages. It digs deep into sites few people visit, like the Inter-American Foundation. Searches are quick, especially if you don’t use Gemini, which seems to hang in dataspace today (December 12, 2025).
Imagine my reaction when I read that a person using a third-party blog publishing service was de-indexed. Now the pointer is still there, but it is no longer displayed by the esteemed Google system. You can read the human’s version of the issue he encountered in “Google De-Indexed My Entire Bear Blog and I Don’t Know Why.”

Google has brightened the day of a blogger. Google understands advertising. Some other things are bafflers to Googzilla. Thanks, Qwen. Good enough, but you spelled “de-indexed” correctly.
The write up reveals the issue. The human clicked on something and Google just followed its rules. Here’s the “why” from the source document:
On Oct 14, as I was digging around GSC, I noticed that it was telling me that one of the URLs weren’t indexed. I thought that was weird, and not being very familiar with GSC, I went ahead and clicked the “Validate” button. Only after did I realized that URL was the RSS feed subscribe link,
https://blog.james-zhan.com/feed/?type=rss, which wasn’t even a page so it made sense that it hadn’t been indexed, but it was too late and there was no way for me to stop the validation.
The essay explains how Google’s well crafted system responded to this signal to index an invalid url. Google could have taken time to add a message like “Are you sure?” or maybe a statement saying, “Clicking okay will cause de-indexing of the content at the root url.” But Google — with its massive amounts of user behavior data — knows that its interfaces are crystal clear. The vast majority of human Googlers understand what happens when they click on options to delete images from the Google Cloud. Or, when a Gmail user tries to delete old email using the familiar from: command.
But the basic issue is that a human caused the de-indexing.
What’s interesting about the human’s work around is that those actions could be interpreted as a gray or black hat effort to fiddle with Google’s exceptional approach to indexing. Here’s what the human did:
I copied my blog over to a different subdomain (you are on it right now), moved my domain from GoDaddy to Porkbun for URL forwarding, and set up URL forwarding with paths so any blog post URLs I posted online will automatically be redirected to the corresponding blog post on this new blog. I also avoided submitting the sitemap of the new blog to GSC. I’m just gonna let Google naturally index the blog this time. Hopefully, this new blog won’t run into the same issue.
I would point out that “hope” is not often an operative concept at the Google.
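For readers who want to see what path-preserving forwarding of the sort described above looks like, here is a minimal sketch using only Python’s standard library. The old hostname comes from the post; the new hostname, port, and handler are hypothetical stand-ins, not Porkbun’s actual forwarding mechanism.

```python
# Minimal sketch of path-preserving URL forwarding: every request to the
# old host is redirected to the same path on the new host, so old post
# links keep resolving. The new hostname and port are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

OLD_HOST = "blog.james-zhan.com"   # the de-indexed blog (from the post)
NEW_HOST = "new-blog.example.com"  # hypothetical new subdomain

class PathForwarder(BaseHTTPRequestHandler):
    def do_GET(self):
        # Keep the original path so /posts/foo lands on the matching
        # post at the new host instead of on the home page.
        self.send_response(301)  # permanent redirect
        self.send_header("Location", f"https://{NEW_HOST}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), PathForwarder).serve_forever()
```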
What’s interesting about this essay about a human error is that it touched a nerve amongst the readers of Hacker News. Here are a few comments about this human error:
- PrairieFire offers this gentle observation: “Whether or not this specific author’s blog was de-indexed or de-prioritized, the issue this surfaces is real and genuine. The real issue at hand here is that it’s difficult to impossible to discover why, or raise an effective appeal, when one runs afoul of Google, or suspects they have. I shudder to use this word as I do think in some contexts it’s being overused, I think it’s the best word to use here though: the issue is really that Google is a Gatekeeper.”
- FuturisticLover is a bit more direct: “Google search results have gone sh*t. I am facing some deindexing issues where Google is citing a content duplicate and picking a canonical URL itself, despite no similar content. Just the open is similar, but the intent is totally different, and so is the focus keyword. Not facing this issue in Bing and other search engines.”
- Aldipower raises a question about excellence and domination of Web search technology: “Yeah, Google search results are almost useless. How could they have neglected their core competence so badly?”
Several observations are warranted:
- Don’t click on any Google button unless you have done your homework. Interpreting Google speak without having fluency in the lingo can result in some interesting downstream consequences
- Google is unlikely to change due to its incentive programs. One does not get promoted for fixing up a statement that could lead to a site being removed from public view. One gets the brass ring for doing AI, which hopefully works more reliably than Gemini today (December 12, 2025)
- Quite a few people posting to this Hacker News’ item don’t have the same level of affection I have for the scintillating Google search experience.
Net net: Get with the program. The courts have spoken in the US. The EU just collects money. Users consume ads. Adapt. My suggestion is to not screw around too much; otherwise, Bear Blogs might be de-indexed by an annoyed search administrator in Switzerland.
Stephen E Arnold, December 25, 2025
Telegram Notes: Manny, Snoop, and Millions in Minutes
December 24, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
In the mass of information my team and I gathered for my new study “The Telegram Labyrinth,” we saw several references to what may be an interesting intersection of Manuel (Manny) Stotz, a hookah company in the Middle East, Snoop Dogg (the musical luminary), and Telegram.
At some point in Mr. Stotz’s financial career, he acquired an interest in a company doing business as Advanced Inhalation Rituals or AIR. This firm owned or had an interest in a hookah manufacturer doing business as Al Fakher. By chance, Mr. Stotz interacted with Mr. Snoop Dogg. As the two professionals discussed modern business, Mr. Stotz suggested that Mr. Snoop Dogg check out Telegram.

Thanks, Venice.ai. I needed smoke coming out of the passenger side window, but smoke exiting through the roof is about right for smart software.
Telegram allowed its Messenger users to create non-fungible tokens. Mr. Snoop Dogg thought this was a very interesting idea. In July 2025, Mr. Snoop Dogg launched a digital collectibles drop on Telegram.
I found the anecdotal Manny Stotz information in social media and crypto centric online services suggestive but not particularly convincing and rarely verifiable.
One assertion did catch my attention. The Snoop Dogg NFT allegedly generated US$12 million in 30 minutes. Is the number in “Snoop Dogg Rakes in $12M in 30 Minutes with Telegram NFT Drop” on the money? I have zero clue. I don’t even know if the release of the NFT or drop took place. Let’s go to the write up:
Snoop Dogg is back in the web3 spotlight, this time partnering with Telegram to launch the messaging app’s first celebrity digital collectibles drop. According to Telegram CEO Pavel Durov, the launch generated $12 million in sales, with nearly 1 million items sold out in just 30 minutes. While the items aren’t minted yet, users purchased the collectibles internally on Telegram, with minting on The Open Network (TON) scheduled to go live later this month [July 2025].
Is this important? It depends on one’s point of view. As an 81 year old dinobaby, I find the comments online about this alleged NFT for a popular musician not too surprising. I have several other dinobaby observations to offer, of course:
- Mr. Stotz allegedly owns shares in a company (possibly 50 percent or more of the outfit) that does business in the UAE and other countries where hookahs are popular. That’s AIR.
- Mr. Stotz worked for a short time as a senior manager at the TON Foundation. That’s an organization allegedly 100 percent separate from Telegram. That’s the totally independent, Swiss-registered TON Foundation, not to be confused with the other TON Foundation in Abu Dhabi. (I wonder why there are two Telegram-linked foundations. Maybe someone will look into that? Perhaps these are legal conventions or something akin to Trojan horses? This dinobaby does not know.)
- By happenstance, Mr. Snoop Dogg learned about Telegram NFTs at the same time Mr. Stotz was immersed in activities related to the Foundation and its new NASDAQ-listed property TON Strategy Company. The NFT spun up and then moved forward, allegedly.
- Does a regulatory entity monitor and levy tax on the sale of NFTs within Telegram? I mean, Mr. Snoop Dogg resides in America. Mr. Stotz resides allegedly in London. The TON Foundation which “runs” the TON blockchain is in the United Arab Emirates, and Mr. Pavel Durov is an AirBnB type of entrepreneur — this question of paying taxes is probably above my pay grade, which is US$0.00.
One simple question I have is, “Does Mr. Snoop Dogg have an Al Fakher hookah?”
This is an example of one semi-interesting activity in which Mr. Stotz, his companies (Koenigsweg Holdings Ltd and its limited liability unit Kingsway Capital), and the Telegram / TON Foundation interactions cross borders, business types, and cultural boundaries. Crypto seems to be a magnetic agent.
As Mr. Snoop Dogg sang in 1994:
“With so much drama in the LBC, it’s kinda hard being Snoop D-O-double-G.” (“Gin and Juice,” 1994)
For those familiar with NFT but not LBC, the “LBC” refers to Long Beach, California. There is much mystery surrounding many words and actions in Telegram-related activities.
PS. My team and I are starting an information service called “Telegram Notes.” We have a url; some of the items will be posted to LinkedIn and to the cyber crime groups which allowed me to join. We are not sure what other outlets will accept these Telegram-related essays. It’s kinda hard being a double DINO-B-A-BEEE.
Stephen E Arnold, December 24, 2025
All I Want for Xmas Is Crypto: Outstanding Idea GenZ
December 24, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I wish I knew an actual GenZ person. I would love to ask, “What do you want for Christmas?” Because I am a dinobaby, I expect an answer like cash, a sweater, a new laptop, or a job. Nope, wrong.
According to the most authoritative source of real “news” to which I have access, the answer is crypto. “45% of Gen Z Wants This Present for Christmas—Here’s What Belongs on Your Gift List” explains:
[A] Visa survey found that 45% of Gen Z respondents in the United States would be excited to receive cryptocurrency as their holiday gift. (That’s way more than Americans overall, which was only 28%.)

Two geezers try to figure out what their grandchildren want for Xmas. Thanks, Qwen. Good enough.
Why? Here’s the answer from Jonathan Rose, CEO of BlockTrust IRA, a cryptocurrency-based individual retirement account (IRA) platform:
“Gen Z had a global pandemic and watched inflation eat away at the power of the dollar by around 20%. Younger people instinctively know that $100 today will buy them significantly less next Christmas. Asking for an asset that has a fixed supply, such as bitcoin, is not considered gambling to them—it is a logical decision…. We say that bull markets make you money, but bear markets get you rich. Gen Z wants to accumulate an asset that they believe will define the future of finance, at an affordable price. A crypto gift is a clear bet that the current slump is temporary while the digital economy is permanent.”
I like that line “a logical decision.”
The world of crypto is an interesting one.
The Reader’s Digest explains how to obtain crypto. Here’s the explanation for a dinobaby like me:
One easy way to gift crypto is by using a major exchange or crypto-friendly trading app like Robinhood, Kraken or Crypto.com. Kraken’s app, for example, works almost like Venmo for digital assets. You buy a cryptocurrency—such as bitcoin—and send it to someone using a simple pay link. The recipient gets a text message, taps the link, verifies their account, and the crypto appears in their wallet. It’s a straightforward option for beginners.
What will those GenZ folks do with their funds? Gig tripping. No, I don’t know what that means.
Several observations:
- I liked getting practical gifts, and I like giving practical gifts. Crypto is not practical. It is, in my opinion, ideal for money laundering, not buying sweaters.
- GenZ does have an uncertain future. Not only do those basic skill scores not make someone like me eager to spend time with “units” from this cohort, but I am also not sure I know how to speak to a GenZ entity. Is that why so many of these young people prefer talking to chatbots? Do dinobabies make them uncomfortable?
- When the Reader’s Digest explains how to buy crypto, the good old days of a homey anecdote and a summary of an article from a magazine with a reading level above the sixth grade are officially over.
Net net: I am glad I am old.
Stephen E Arnold, December 24, 2025
Way More Goofs for Waymo: Power Failure? Humans at Fault!
December 23, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I found the stories about Google’s smart Waymo self-driving vehicles in the recent San Francisco power failure amusing. Dangerous, yes. A hoot? Absolutely. Google, as its smart software wizards remind us on a PR cadence that would make Colgate toothpaste envious, is the big dog of smart software. ChatGPT, a loser. Grok, a crazy loser. Chinese open source LLMs? Losers all.
State of the art artificial intelligence makes San Francisco residents celebrate true exceptionalism. Thanks, Venice.ai. Good enough.
I read “Waymo Robotaxis Stop in the Streets during San Francisco Power Outage.” Okay. Google’s Waymo can’t go. Are they electric vehicles suddenly deprived of power? Nope. The smart software did not operate in a way that thrilled riders and motorists when most of San Francisco lost power. The BBC says a very Googley expert said:
"While the Waymo Driver is designed to treat non-functional signals as four-way stops, the sheer scale of the outage led to instances where vehicles remained stationary longer than usual to confirm the state of the affected intersections," a Waymo spokesperson said in a statement provided to the BBC. That "contributed to traffic friction during the height of the congestion," they added.
What other minor issues do the Googley Waymos offer? I noticed the omission of the phrase “out of an abundance of caution.” It is probably out there in some Google quote.
Several observations:
- Google’s wizards will talk about this unlikely event and figure out how to have its cars respond when traffic lights go on the fritz. Will Google be able to fix the problem? Sure. Soon.
- What caused the problem? From Google’s point of view, it was the person responsible for the power failure. Google had nothing to do with that because Google’s Gemini was not autonomously operating the San Francisco power generation system. Someone get on that, please.
- After how many years of testing and how many safe miles (except, of course, for the bodega cat) will Google Waymo do what normal humans would do? Pull over. Or head to a cul-de-sac in Cow Hollow.
Net net: Google and its estimable wizards can overlook some details. Leadership will take action.
Consider one report of what Elon Musk allegedly said:
"Tesla Robotaxis were unaffected by the SF power outage," Musk posted on X, along with a repost of video showing Waymo vehicles stopped at an intersection with down traffic lights as a line of cars honk and attempt to go around them. Musk also reposted a video purportedly showing a Tesla self-driving car navigating an intersection with non-functioning traffic lights.
Does this mean that Grok is better than Gemini?
Stephen E Arnold, December 23, 2025
France Arrested Pavel. The UK Signals Signal: What Might Follow?
December 23, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
As a dinobaby, I play no part in the machinations of those in the encrypted messaging arena. On one side are those who argue that encryption helps preserve a human “right” to privacy. On the other side are those who say, “Money laundering, kiddie pix, drugs, and terrorism threaten everything.” You will have to pick your side. That decision will dictate how you interpret the allegedly actual factual information in “Creating Apps Like Signal or WhatsApp Could Be Hostile Activity, Claims UK Watchdog.”

An American technology company leader looks at a UK prison and asks the obvious question. The officers escort the tech titan to the new cell. Thanks, Venice.ai. Close enough for a single horse shoe.
“Hostile activity” suggests bad things will happen if a certain behavior persists. These include:
- Fines
- Prohibitions on an online service in a country (this is popular in Iran among other nation states)
- Potential legal hassles (a Heathrow holding cell is probably the Ritz compared to HMP Woodhill)
The write up reports:
Developers of apps that use end-to-end encryption to protect private communications could be considered hostile actors in the UK.
That is the stark warning from Jonathan Hall KC, the government’s Independent Reviewer of State Threats Legislation and Independent Reviewer of Terrorism Legislation.
I interpret this as a helpful summary of a UK government brief titled State Threats Legislation in 2024. The timing of Mr. Hall’s observation may “signal” an overt action. That step may not be on the scale of the French arrest of a Russian with a French passport, but it will definitely create a bit of a stir in the American encrypted messaging sector. Believe it or not, the UK is not thrilled with some organizations’ reluctance to provide information relevant to certain UK legal matters.
In my experience, applying the standard “oh, we didn’t get the email” or “we’ll get back to you, thanks” is unlikely to work for certain UK government entities. Although unfailingly polite, there are some individuals who learned quite particular skills in specialized training. The approach, like the French action, can cause surprise among the individuals identified as problematic.
With certain international tensions rising, the UK may seize an opportunity to apply both PR and legal pressure to overcome what may be seen as impolite and ill-advised behavior by certain American companies in the end-to-end encrypted messaging business.
The article “Creating Apps Like Signal” points out:
In his independent review of the Counter-Terrorism and Border Security Act and the newly implemented National Security Act, Hall KC highlights the incredibly broad scope of powers granted to authorities.
The article adds:
While the report’s strong wording may come as a shock, it doesn’t exist in a vacuum. Encrypted apps are increasingly in the crosshairs of UK lawmakers, with several pieces of legislation targeting the technology. Most notably, Apple was served with a technical capability notice under the Investigatory Powers Act (IPA) demanding it weaken the encryption protecting iCloud data. That legal standoff led the tech giant to disable its Advanced Data Protection instead of creating a backdoor.
What will the US companies do? I learned from the write up:
With the battle lines drawn, we can expect a challenging year ahead for services like Signal and WhatsApp. Both companies have previously pledged to leave the UK market rather than compromise their users’ privacy and security.
My hunch is that more European countries may look at France’s action and the “signals” emanating from the UK and conclude, “We too can take steps to deal with the American companies.”
Stephen E Arnold, December 23, 2025
AI Training: The Great Unknown
December 23, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Deloitte used to be an accounting firm. Then the company decided it could do so much more. Normal people ask accountants for their opinions. Deloitte, like many other service firms, decided it could just become a general management consulting firm, an information technology company, a conference and event company, and also do the books.

A professional training program for business professionals at a blue chip consulting firm. One person speaks up, but the others keep their thoughts to themselves. How many are updating their LinkedIn profile? How many are wondering if AI will put them out of a job? How many don’t care because the incentives emphasize selling and upselling engagements? Thanks, Venice.ai. Good enough but you are AI and that’s a mark of excellence for some today.
I read an article that suggests a firm like Deloitte is not able to do much of the self-assessment and introspection required to make informed decisions. That failure makes surprises part of some firms’ standard operating procedure.
This insight appears in “Deloitte’s CTO on a Stunning AI Transformation Stat: Companies Are Spending 93% on Tech and Only 7% on People.” This headline suggests that Deloitte itself is making this error. [Note: This is a wonky link from my feed system. If it disappears, good luck.]
The write up in Fortune Magazine said:
According to Bill Briggs, Deloitte’s chief technology officer, as we move from AI experimentation to impact/value at scale, that fear is driving a lopsided investment strategy where companies are pouring 93% of their AI budget into technology and only 7% into the people expected to use it.
The question that popped into my mind was, “How much money is Deloitte spending relative to smart software on training its staff in AI?” Perhaps the not-so-surprising MBA type “fact” reflects what some Deloitte professionals realize is happening at the esteemed “we can do it in any business discipline” consulting firm?
The explanation is that “the culture, workflow, and training” of a blue chip consulting firm is not extensive. Now, with AI finding its way from word processing to looking up a fact, educating employees about AI is given lip service. But is “training” even possible? Remember, please, that some consulting firms want those over 55 to depart into retirement. And what about highly paid experts whose core competencies are being friendly and wordsmithing? Can they learn how, when, and when not to rely on smart software? Do these “best of the best” from MBA programs have the ability to learn, or are they situational thinkers; that is, their skill is to be spontaneously helpful, to connect the dots, and to reframe what a client tells them so it appears sage-like?
The Deloitte expert says:
“This incrementalism is a hard trap to get out of.”
Is Deloitte out of this incrementalism?
The Deloitte expert (apparently not asked the question by the Fortune reporter) says:
As organizations move from “carbon-based” to “silicon-based” employees (meaning a shift from humans to semiconductor chips, or robots), they must establish the equivalent of an HR process for agents, robots, and advanced AI, and complex questions about liability and performance management. This is going to be hard, because it involves complex questions. He brought up the hypothetical of a human creating an agent, and that agent creating five more generations of agents. If wrongdoing occurs from the fifth generation, whose fault is that? “What’s a disciplinary action? You’re gonna put your line robot…in a timeout and force them to do 10 hours of mandatory compliance training?”
I want to point out that blue chip consulting is a soft skill business. The vaunted analytics and other parade float decorations come from Excel, from third parties, or from recent hires doing the equivalent of college research.
Fortune points to Deloitte and says:
The consequences of ignoring the human side of the equation are already visible in the workforce. According to Deloitte’s TrustID report, released in the third quarter, despite increasing access to GenAI in the workplace, overall usage has actually decreased by 15%. Furthermore, a “shadow AI” problem is emerging: 43% of workers with access to GenAI admit to noncompliance, bypassing employer policies to use unapproved tools. This aligns with previous Fortune reporting on the scourge of shadow AI, as surveys show that workers at up to 90% of companies are using AI tools while hiding that usage from their IT departments. Workers say these unauthorized tools are “easier to access” and “better and more accurate” than the approved corporate solutions. This disconnect has led to a collapse in confidence, with corporate worker trust in GenAI declining by 38% between May and July 2025. The data supports this need for a human-centric approach. Workers who received hands-on AI training and workshops reported 144% higher trust in their employer’s AI than those who did not.
Let’s get back to the question. Is Deloitte training its employees in AI so the “information” sticks and then finds its way into engagements? This passage seems to suggest that the answer is, “No for Deloitte. No for its clients. And no for most organizations.” Judge for yourself:
For Briggs [the Deloitte wizard], the message to the C-suite is clear: The technology is ready, but unless leaders shift their focus to the human and cultural transformation, they risk being left with expensive technology that no one trusts enough to use.
My take is that the blue chip consulting firms are:
- Trying to make AI good enough so headcount can be cut and other costs like health care can be reduced
- Selling AI consulting to their clients before knowing what will and won’t work in a context different from the consulting firms’
- Developing an understanding that AI cannot do what humans can do; that is, build relationships and sell engagements.
Sort of a pickle.
Stephen E Arnold, December 23, 2025
How to Get a Job in the Age of AI?
December 23, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Two interesting employment related articles appeared in my newsfeeds this morning. Let’s take a quick look at each. I will try to add some humor to these write ups. Some may find them downright gloomy.
The first is “An OpenAI Exec Identifies 3 Jobs on the Cusp of Being Automated.” I want to point out that the OpenAI wizard’s own job seems to be secure from his point of view. The write up points out:
Olivier Godement, the head of product for business products at the ChatGPT maker, shared why he thinks a trio of jobs — in life sciences, customer service, and computer engineering — is on the cusp of automation.
Let’s think about each of these broad categories. I am not sure what life sciences means in the OpenAI world. The term is like a giant umbrella. Customer service makes some sense. Companies have spent years trying to ignore, terminate, and prevent any money-sucking operation related to answering customers’ questions and complaints. No matter how lousy an AI model is, my hunch is that it will be slapped into a customer service role even if it is arguably worse than trying to understand the accent of a person who speaks English as a second or third language.
Young members of “leadership” realize that the AI system used to replace lower-level workers has taken their jobs. Selling crafts on Etsy.com is a career option. Plus, there is politics and maybe Epstein, Epstein, Epstein related careers for some. Thanks, Qwen, you just output a good enough image but you are free at this time (December 13, 2025).
Now we come to computer engineering. I assume the OpenAI person will position himself as an AI adept, which fits under the umbrella of computer engineering. My hunch is that the reference is to coders who do grunt work. The only problem is that the large language model approach to pumping out software can be problematic in some situations. That’s why the OpenAI person is probably not worrying about his job. An informed human has to be in the loop for machine-generated code. LLMs do make errors. If the software is autogenerated for one of those newfangled portable nuclear reactors designed to power football-field-sized data centers, someone will want to have a human check that software. Traditional or next generation nuclear reactors can create some excitement if the software makes errors. Do you want a thorium reactor next to your domicile? What about one run entirely by smart software?
What’s amusing about this write up is that the OpenAI person seems blissfully unaware of the precarious financial situation that Sam AI-Man has created. When and if OpenAI experiences a financial hiccup, will those involved in business products keep their jobs? Olivier might want to consider that eventuality. Some investors are thinking about their options for Sam AI-Man related activities.
The second write up is the type I absolutely get a visceral thrill writing. A person with a connection (probably accidental or tenuous) lets me trot out my favorite trope — Epstein, Epstein, Epstein — as a way to capture the peculiarity of modern America. This article is “Bill Gates Predicts That Only Three Jobs Will Be Safe from Being Replaced by AI.” My immediate assumption upon spotting the article was that the type of work Epstein, Epstein, Epstein did would not be replaced by smart software. I think that impression is accurate, but, alas, the write up did not include Epstein, Epstein, Epstein work in its story.
What are the safe jobs? The write up identifies three:
- Biology. Remember OpenAI thinks life sciences are toast. Okay, which is correct?
- Energy expertise
- Work that requires creative and intuitive thinking. (Do you think that this category embraces Epstein, Epstein, Epstein work? I am not sure.)
The write up includes a statement from Bill Gates:
“You know, like baseball. We won’t want to watch computers play baseball,” he said. “So there’ll be some things that we reserve for ourselves, but in terms of making things and moving things, and growing food, over time, those will be basically solved problems.”
Several observations:
- AI will cause many people to lose their jobs
- Young people will have to make knick knacks to sell on Etsy or find equally creative ways of supporting themselves
- The assumption that people will have “regular” jobs, buy houses, go on vacations, and do the other stuff that organization man type thinking assumed was operative is a goner.
Where’s the humor in this? Epstein, Epstein, Epstein and OpenAI debt, OpenAI debt, and OpenAI debt. Ho ho ho.
Stephen E Arnold, December 23, 2025

