Meta: An AI Management Issue Maybe?
December 17, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I really try not to think about Facebook, Mr. Zuckerberg, his yachts, and Llamas. I mean the large language model, not the creatures I associate with Peru. (I have been there, and I did not encounter any reptilian snakes. Cuy chactado, si. Vibora, no.)
I read in the pay-walled orange newspaper online “Inside Mark Zuckerberg’s Turbulent Bet on AI.” Hmm. Turbulent. I was thinking about synonyms I would have suggested; for example, unjustifiable, really big, wild and crazy, and a couple of others. I am not a real journalist so I will happily accept turbulent. The word means, however, “relating to or denoting flow of a fluid in which the velocity at any point fluctuates irregularly and there is continual mixing rather than a steady or laminar flow pattern” according to the Google’s opaque system. I think the idea is that Meta is operating in a chaotic way. What about a juiced-up “move fast and break things”? Yep. Chaos, a modern management method that is supposed to just work.
A young executive with oodles of money hears an older person, probably a blue chip consultant, asking one of those probing questions about a top dog’s management method. Will this top dog listen or just fume and keep doing what worked for more than a decade? Thanks, Qwen. Good enough.
What does the write up present? Please, sign up for the FT and read the original article. I want to highlight two snippets.
The first is:
Investors are also increasingly skittish. Meta’s 2025 capital expenditures are expected to hit at least $70bn, up from $39bn the previous year, and the company has started undertaking complex financial maneuverings to help pay for the cost of new data centers and chips, tapping corporate bond markets and private creditors.
Not RIFed employees, not users, not advertisers, and not government regulators. The FT focuses on investors who are skittish. The point is that when investors get skittish, an already unsettled condition is sufficiently significant to increase anxiety. Investors do not want to be anxious. Has Mr. Zuckerberg mismanaged the investors who help keep his massive investments in to-be technology chugging along? First, there was the metaverse. That may arrive in some form, but for Meta I perceive it as a dumpster fire for cash.
Now investors are anxious and the care and feeding of these entities is more important. The fact that the investors are anxious suggests that Mr. Zuckerberg has not managed this important category of professionals in a way that calms them down. I don’t think the FT’s article will do much to alleviate their concern.
The second snippet is:
But the [Meta] model performed worse than those by rivals such as OpenAI and Google on jobs including coding tasks and complex problem solving.
This suggests to me that Mr. Zuckerberg did not manage the process in an optimal way. Some wizards left for greener pastures. Others just groused about management methods. Regardless of the signals one receives about Meta, the message I receive is that management itself is the disruptive factor. Mismanagement is, I think, part of the method at Meta.
Several observations:
- Meta, like the other AI outfits with money to toss into the smart software dumpster fire, is in the midst of realizing that “if we think it, it will become reality” is not working. Meta is spinning off chunks of flaming money bundles, and some staff don’t want to get burned.
- Meta is a technology follower, and it may have been aced by its message and social media competitor Telegram. If Telegram’s approach is workable, Meta may be behind another AI eight ball.
- Mr. Zuckerberg is a wonder of American business. He began as a boy wonder. Now as an adult wonder, the question is, “Why are investors wondering about his current wonder-fulness?”
Net net: Meta faces a management challenge. The AI tech is embedded in that. Some of its competitors lack management finesse, but some of them are plugging along and not yet finding their companies presented in the Financial Times as outfits making investors “increasingly skittish.” Perhaps in the future, but right now, the laser focus of the Financial Times is on Meta. The company is an easy target in my opinion.
Stephen E Arnold, December 17, 2025
How Not to Get a Holiday Invite: The Engadget Method
December 15, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Sam AI-Man may not invite anyone from Engadget to a holiday party. I read “OpenAI’s House of Cards Seems Primed to Collapse.” The “house of cards” phrase gives away the game. Sam AI-Man built a structure that gravity or Google will pull down. How do I know? Check out this subtitle:
In 2025, it fell behind the one company it couldn’t lose ground to: Google.
The Google. The outfit that shifted into Red Alert or whatever the McKinsey playbook said to call an existential crisis klaxon. The Google. Adjudged a monopoly and getting down to work other than running an online advertising system. The Google. An expert in reorganizing a somewhat loosely structured organization. The Google: the outfit everyone except the EU and some allegedly defunded YouTube creators absolutely loves. That Google.
Thanks Venice.ai. I appreciate your telling me I cannot output an image with a “young programmer.” Plugging in “30 year old coder” worked. Very helpful. Intelligent too.
The write up points out:
It’s safe to say GPT-5 hasn’t lived up to anyone’s expectations, including OpenAI’s own. The company touted the system as smarter, faster and better than all of its previous models, but after users got their hands on it, they complained of a chatbot that made surprisingly dumb mistakes and didn’t have much of a personality. For many, GPT-5 felt like a downgrade compared to the older, simpler GPT-4o. That’s a position no AI company wants to be in, let alone one that has taken on as much investment as OpenAI.
Did OpenAI suck it up and crank out a better mouse trap? The write up reports:
With novelty and technical prowess no longer on its side though, it’s now on Altman to prove in short order why his company still deserves such unprecedented levels of investment.
Forget the problems a failed OpenAI poses to investors, employees, and users. Sam AI-Man now has an opportunity to become the highest profile technology professional to cause a national and possibly global recession. Short of war mongering countries, Sam AI-Man will stand alone. He may end up in a museum if any remain open when funding evaporates. School kids could read about him in their history books; that is, if kids actually attend school and read. (Well, there’s always the possibility of a YouTube video if creators don’t evaporate like wet sidewalks when the sun shines.)
Engadget will have to find another festive event to attend.
Stephen E Arnold, December 15, 2025
A Job Bright Spot: RAND Explains Its Reality
December 10, 2025
Optimism On AI And Job Market
Remember when banks installed automatic teller machines at their locations? They’re better known by the acronym ATM. ATMs didn’t take away jobs; instead, they increased the number of bank branches and created more jobs. AI will certainly take away jobs, but the technology will also create more. Rand.org investigates how AI is affecting the job market in the article, “AI Is Making Jobs, Not Taking Them.”
What I love about this article is that it tells the truth about AI technology: no one knows what will happen with it. We have theories, explored in science fiction, about what AI will do: from the total collapse of society to humdrum, normal societal progress. What RAND’s article says is that the research shows AI adoption is uneven and much slower than Wall Street and Silicon Valley say. RAND conducted some research:
“At RAND, our research on the macroeconomic implications of AI also found that adoption of generative AI into business practices is slow going. By looking at recent census surveys of businesses, we found the level of AI use also varies widely by sector. For large sectors like transportation and warehousing, AI adoption hovered just above 2 percent. For finance and insurance, it was roughly 10 percent. Even in information technology—perhaps the most likely spot for generative AI to leave its mark—only 25 percent of businesses were using generative AI to produce goods and services.”
Most of the fear related to AI stems from automation of job tasks. Here are some statistics from OpenAI:
“In a widely referenced study, OpenAI estimated that 80 percent of the workforce has at least 10 percent of their tasks exposed to LLM-driven automation, and 19 percent of workers could have at least 50 percent of their tasks exposed. But jobs are more than individual tasks. They are a string of tasks assembled in a specific way. They involve emotional intelligence. Crude calculations of labor market exposure to AI have seemingly failed to account for the nuance of what jobs actually are, leading to an overstated risk of mass unemployment.”
AI is a wondrous technology, but it’s still infantile and stupid. Humans will adapt and continue to have jobs.
Whitney Grace, December 10, 2025
Google Presents an Innovative Way to Say, “Generate Revenue”
December 9, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
One of my contacts sent me a link to an interesting document. Its title is “A Pragmatic Vision for Interpretability.” I am not sure about the provenance of the write up, but it strikes me as an output from legal, corporate, and wizards. First impression: Very lengthy. I estimate that it requires about 11,000 words to say, “Generate revenue.” My second impression: A weird blend of consulting speak and nervousness.

A group of Googlers involved in advanced smart software ideation get a phone call clarifying they have to hit revenue targets. No one looks too happy. The esteemed leader is on the conference room wall. He provides a North Star to the wandering wizards. Thanks, Venice.ai. Good enough, just like so much AI system output these days.
The write up is too long to meander through its numerous sections, arguments, and arm waving. I want to highlight three facets of the write up and leave it up to you to print this puppy out, read it on a delayed flight, and consider how different this document is from the no output approach Google used when it was absolutely dead solid confident that its search-ad business strategy would rule the world forever. Well, forever seems to have arrived for Googzilla. Hence, be pragmatic. This, in my experience, is McKinsey speak for hit your financial targets or hit the road.
First, consider this selected set of jargon:
- Comparative advantage (maybe keep up with the other guys?)
- Load-bearing beliefs
- “Mech interp” / “mechanistic interpretability” (as opposed to “classic” interp)
- Method minimalism
- North Star (is it the person on the wall in the cartoon or just revenue?)
- Proxy task
- SAE (maybe sparse autoencoders?)
- Steering against evaluation awareness (maybe avoiding real world feedback?)
- Suppression of eval-awareness (maybe real-world feedback?)
- Time-box for advanced research
The document tries too hard to avoid saying, “Focus on stuff that makes money.” I think that, however, is what the word choice is trying to present in very fancy, quasi-baloney jargon.
Second, take a look at the three sets of fingerprints in what strikes me as a committee-written document.
- Researchers want to just follow their ideas about smart software just as we have done at Google for many years
- Lawyers and art history majors who want to cover their tailfeathers when Gemini goes off the rails
- Google leadership who want money or at the very least research that leads to products.
I can see a group meeting virtually, in person, and in the trenches of a collaborative Google Doc until this masterpiece of management weirdness is given the green light for release. Google has become artful in make work, wordsmithing, and pretend reconciliation of the battles among the different factions, city states, and empires within Google. One can almost anticipate how the head of ad sales reacts to money pumped into data centers and research groups who speak a language familiar to Klingons.
Third, consider why Google felt compelled to crank out a tortured document to nail on the doors of an AI conference. When I interacted with Google over a number of years, I did not meet anyone reminding me of Martin Luther. Today, if I were to return to Shoreline Drive, I might encounter a number of deep fakes armed with digital hammers and fervid eyes. I think the Google wants to make sure that no more Loons and Waymos become the butt of stand up comedians on late night TV or (heaven forbid, TikTok). The dead cat in the Mission and the dead puppy in what’s called (I think) the Western Addition. (I used to live in Berkeley, and I never paid much attention to the idiosyncratic names slapped on undifferentiable areas of the City by the Bay.)
I think that Google leadership seeks in this document:
- To tell everyone it is focusing on stuff that sort of works. The crazy software that is just like Sundar is not on the to do list
- To remind everyone at the Google that we have to pay for the big, crazy data centers in space, our own nuclear power plants, and the cost of the home brew AI chips. Ads alone are no longer going to be 24×7 money printing machines because of OpenAI
- To try to reduce the tension among the groups, cliques, and digital street gangs in the offices and the virtual spaces in which Googlers cogitate, nap, and use AI to be more efficient.
Net net: Save this document. It may become a historical artefact.
Stephen E Arnold, December 9, 2025
Telegram’s Cocoon AI Hooks Up with AlphaTON
December 5, 2025
[This post is a version of an alert I sent to some of the professionals for whom I have given lectures. It is possible that the entities identified in this short report will alter their messaging and delete their Telegram posts. However, the thrust of this announcement is directionally correct.]
Telegram, rapidly expanding into decentralized artificial intelligence, announced a deal with AlphaTON Capital Corp. The Telegram post revealed that AlphaTON would be a flagship infrastructure and financial partner. The announcement was posted to the Cocoon Group within hours of AlphaTON getting clear of U.S. SEC “baby shelf” financial restrictions. AlphaTON promptly launched a $420.69 million securities push. Either Telegram and AlphaTON acted coincidentally, or Pavel Durov moved to make clear his desire to build a smart, Telegram-anchored financial service.
AlphaTON, a Nasdaq microcap formerly known as Portage Biotech, rebranded in September 2025. The “new” AlphaTON claims to be deploying Nvidia B200 GPU clusters to support Cocoon, Telegram’s confidential-compute AI network. The company’s pivot from oncology to crypto-finance and AI infrastructure was sudden. Plus, AlphaTON’s CEO Brittany Kaiser (best known for Cambridge Analytica) has allegedly interacted with Russian political and business figures during earlier data-operations ventures. If the allegations are accurate, Ms. Kaiser has connections to Russia-linked influence and financial networks. Telegram is viewed by some organizations like Kucoin as a reliable operational platform for certain financial activities.
Telegram has positioned AlphaTON as a partner and developer in the Telegram ecosystem. Firms like Huione Guarantee allegedly used Telegram for financial maneuvers that resulted in criminal charges. Other alleged uses of the Telegram platform have included other illegal activities identified in the more than a dozen criminal charges for which Pavel Durov awaits trial in France. Telegram’s instant promotion of AlphaTON, combined with the firm’s new ability to raise hundreds of millions, points to a coordinated strategy to build an AI-enabled financial services layer using Cocoon’s VAIC or virtual artificial intelligence complex.
The message seems clear. Telegram is not merely launching a distributed AI compute service; it is enabling a low latency, secrecy enshrouded AI-crypto financial construct. Telegram and AlphaTON both see an opportunity to profit from a fusion of distributed AI, cross jurisdictional operation, and a financial pay off from transactions at scale. For me and my research team, the AlphaTON tie-up signals that Telegram’s next frontier may blend decentralized AI, speculative finance, and actors operating far from traditional regulatory guardrails.
In my monograph “Telegram Labyrinth” (available only to law enforcement, US intelligence officers, and cyber attorneys in the US), I argue that Telegram requires close monitoring and a new generation of intelware software. Yesterday’s tools were not designed for what Telegram is deploying itself and with its partners. Thank you.
Stephen E Arnold, December 5, 2025, 10:34 am US Eastern time
AI Bubble? What Bubble? Bubble?
December 5, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I read “JP Morgan Report: AI Investment Surge Backed by Fundamentals, No Bubble in Sight.” The “report” angle is interesting. It implies unbiased, objective information compiled and synthesized by informed individuals. The content, however, strikes me as a bit of fancy dancing.
Here’s what strikes me as the main point:
A recent JP Morgan report finds the current rally in artificial intelligence (AI) related investments to be justified and sustainable, with no evidence of a bubble forming at this stage.
Feel better now? I don’t. The report strikes me as bank marketing with a big dose of cooing sounds. You know, cooing like a mother to her month-old baby. Does the mother make sense? Nope. The point is that warm, cozy feeling that the cooing imparts. The mother knows she is doing what is necessary to reduce the likelihood of the baby making noises for sustained periods. The baby knows that mom’s heart is thudding along and the comfort speaks volumes.

Financial professionals in Manhattan enjoy the AI revolution. They know there is no bubble. I see bubbles (plural). Thanks, MidJourney. Good enough.
Sorry. The JP Morgan cooing is not working for me.
The write up says, quoting the estimable financial institution:
“The ingredients are certainly in place for a market bubble to form, but for now, at least, we believe the rally in AI-related investments is justified and sustainable. Capex is massive, and adoption is accelerating.”
What about this statement in the cited article?
JP Morgan contrasts the current AI investment environment to previous speculative cycles, noting the absence of cheap speculative capital or financial structures that artificially inflate prices. As AI investment continues, leverage may increase, but current AI spending is being driven by genuine earnings growth rather than assumptions of future returns.
After stating the “no bubble” argument three times, I think I understand.
Several observations:
- JP Morgan needed to make a statement that the AI data center thing, the depreciation issue, the power problem, and the potential for an innovation that derails the current LLM-type of processing are not big deals. These issues play no part in the non-bubble environment.
- The report is a rah rah for AI. Because there is no bubble, organizations should go forward and implement the current versions of smart software despite their proven “feature” of making up answers and failing to handle many routine human-performed tasks.
- The timing is designed to allow high net worth people a moment to reflect upon the wisdom of JP Morgan and consider moving money to the estimable financial institution for shepherding in what others think are effervescent moments.
My view: Consider the problems OpenAI has: [a] A need for something that knocks Googzilla off the sidewalk on Shoreline Drive and [b] more cash. Amazon — ever the consumer’s friend — is involved in making its own programmers use its smart software, not code cranked out by a non-Amazon service. Plus, Amazon is in the building mode, but it allegedly has government money to spend, a luxury some other firms are denied. Oracle is looking less like a world beater in databases and AI and more like a media-type outfit. Perplexity is probably perplexed because there are rumors that it may be struggling. Microsoft is facing some backlash because of its [a] push to make Copilot everyone’s friend and [b] flawed updates to its vaunted Windows 11 software. Gee, why is File Explorer not working? Let’s ask Copilot. On the other hand, let’s not.
Net net: JP Morgan is marketing too hard, and I am not sure it is resonating with me as unbiased and completely objective. As sales collateral, the report is good. As evidence there is no bubble, nope.
Stephen E Arnold, December 5, 2025
AI Breaks Career Ladders
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
My father used to tell me that it was important to work at a company and climb the career ladder. I did not understand the concept. In my first job, I entered at a reasonably elevated level. I reported to a senior vice president and was given a big, messy project to fix up and make successful. In my second job, I was hired to report to the president of a “group” of companies. I don’t think I had a title. People referred to me as Mr. X’s “get things done person.” My father continued to tell me about the career ladder, but it did not resonate with me.
Thanks Venice.ai. I fired five prompts before you came close to what I specified. Good work, considering.
Only later, when I ran my own small consulting firm, did the concept connect. I realized that as people worked on tasks, some demonstrated exceptional skill. I tried to find ways to expand those individuals’ capabilities. I think I succeeded, and several have contacted me years after I retired to tell me they were grateful for the opportunities I provided.
Imagine my surprise when I read “The Career Ladder Just Got Terminated: AI Kills Jobs Before They’re Born.” I understand. Workers have no way to learn and earn the right to pursue different opportunities in order to grow their capabilities.
The write up says:
Artificial intelligence isn’t just taking jobs. It’s removing the rungs of the ladder that turn rookies into experts.
Here’s a statement from the rock and roll magazine that will make some young, bright eyed overachievers nervous:
In addition to making labor more efficient, it [AI] actually makes labor optional. And the disruption won’t unfold over generations like past revolutions; it’s happening in real time, collapsing decades of economic evolution into a few short years.
Forget optional. If software can replace hard to manage, unpredictable, and good enough humans, AI will get the nod. The goal of most organizations is to generate surplus cash. Then that cash is disbursed to stakeholders, deserving members of the organization’s leadership, and lavish off site meetings, among other important uses.
Here’s another passage that unintentionally will make art history majors, programmers, and, yes, even some MBAs with the right stuff think about becoming plumbers:
And this AI job problem isn’t confined to entertainment. It’s happening in law, medicine, finance, architecture, engineering, journalism — you name it. But not every field faces the same cliff. There’s one place where the apprenticeship still happens in real time: live entertainment and sports.
Perhaps there will be an MBA Comedy Club? Maybe some computer scientists will lean into their athletic prowess for table tennis or quoits?
Here’s another cause of heart burn for the young job hunter:
Today, AI isn’t hunting our heroes; it’s erasing their apprentices before they can exist. The bigger danger is letting short-term profits dictate our long-term cultural destiny. If the goal is simply to make the next quarter’s numbers look good, then automating and cutting is the easy answer. But if the goal is culture, originality and progress, then the choice is just as clear: protect the training grounds, take risks on the unknown and invest in the people who will surprise us.
I don’t see the BAIT (big AI technology companies) leaning into altruistic behavior for society. These outfits want to win, knock off the competition, and direct the masses to work within the bowling alley of life between two gutters. Okay, job hunters, have at it. As a dinobaby, I have no idea what impact AI will have on job hunting in these early days. Did I mention plumbing?
Stephen E Arnold, December 2, 2025
What Can a Monopoly Type Outfit Do? Move Fast and Break Things Not Yet Broken
November 26, 2025
This essay is the work of a dumb dinobaby. No smart software required.
CNBC published “Google Must Double AI Compute Every 6 Months to Meet Demand, AI Infrastructure Boss Tells Employees.”
How does the math work out? Big numbers result, along with big power demands, pressure on suppliers, and, I think, an incentive to enter hyper-hype marketing mode.
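The compounding behind “double every six months” is easy to sketch. A minimal back-of-the-envelope check (the baseline is normalized to 1.0, since Google has not published an actual capacity figure):

```python
# Compounding check on "double compute every six months."
# Baseline normalized to 1.0x; real capacity numbers are not public.
def capacity_after(years: int) -> int:
    """Capacity multiple after `years`, assuming two doublings per year."""
    return 2 ** (2 * years)

for y in range(1, 6):
    print(f"After year {y}: {capacity_after(y):>5}x today's compute")
# After year 5: 1024x today's compute
```

Five years of that cadence implies a roughly thousand-fold build-out, which is where the big power demands and supplier pressure come from.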

Thanks, Venice.ai. Good enough.
The write up states:
Google’s AI infrastructure boss [maybe a fellow named Amin Vahdat, the executive responsible for Machine Learning, Systems and Cloud AI?] told employees that the company has to double its compute capacity every six months in order to meet demand for artificial intelligence services.
Whose demand exactly? Commercial enterprises, Google’s other leadership, or people looking for a restaurant in an unfamiliar town?
The write up notes:
Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.
Faced with this robust demand, what differentiates the Google from other monopoly-type companies? CNBC delivers a bang-up answer to my question:
Google’s “job is of course to build this infrastructure but it’s not to outspend the competition, necessarily,” Vahdat said. “We’re going to spend a lot,” he said, adding that the real goal is to provide infrastructure that is far “more reliable, more performant and more scalable than what’s available anywhere else.” In addition to infrastructure buildouts, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018. Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years.
I read this as: spend as much as a competitor but, because Google is Googley, deliver more reliable, faster, and more scalable AI than the non-Googley competition. Google is focused on efficiency. To me, Google bets that its engineering and programming expertise will give it an unbeatable advantage. The VP of Machine Learning, Systems and Cloud AI does not mention that Google has its magical advertising system and about 85 percent of the global Web search market via its assorted search-centric services. Plus, one must not overlook the fact that the Google is vertically integrated: chips, data centers, data, smart people, money, and smart software.
The write up points out that Google knows there are risks with its strategy. But FOMO is more important than worrying about costs and technology. But what about users? Sure, okay, eyeballs, but I think Google means humanoids who have time to use Google whilst riding in Waymos and hanging out waiting for a job offer to arrive on an Android phone. Google doesn’t need to worry. Plus it can just bump up its investments until competitors are left dying in the desert known as Death Vall-AI.
After being beaten to the draw in the PR battle with Microsoft, the Google thinks it can win the AI jackpot. But what if it fails? No matter. The AI folks at the Google know that the automated advertising system that collects money at numerous touch points is for now churning away 24×7. Googzilla may just win because it is sitting on the cash machine of cash machines. Even counterfeiters in Peru and Vietnam cannot match Google’s money spinning capability.
Is it game over? Will regulators spring into action? Will Google win the race to software smarter than humans? Sure. Even if part of the push to own the next big thing is puffery, the Google is definitely confident that it will prevail, just as Superman and truth, justice, and the American way have. The only hitch in the git along may be having captured enough electrical service to keep the lights on and the power flowing. Lots of power.
Stephen E Arnold, November 26, 2025
Big AI Tech: Bait and Switch with Dancing Numbers?
November 20, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The BAIT outfits are a mix of public and private companies. Financial reports — even for staid outfits — can give analysts some eye strain. Footnotes in dense text can contain information relevant to a paragraph or a number that appears elsewhere in a document. I have been operating on a simple idea: The money flowing into AI is part of the “we can make it work” approach of many high technology companies. For the BAIT outfits, each enhancement has delivered a system which is not making big strides. Incrementalism or failure seems to be what the money has been buying. Part of the reason is that BAIT outfits need and lust for a play that will deliver a de facto monopoly in smart software.
Right now, the BAIT outfits depend on the Googley transformer technology. That method is undergoing enhancements, tweaks, refinements, and other manipulations to deliver more. The effort is expensive, and — based on my personal experience — not delivering what I expect. For example, You.com (one of the interface outfits that puts several models under one browser “experience”) told me I had been doing too many queries. When I contacted the company, I was told to fiddle with my browser and take other steps unrelated to the error message You.com’s system generated. I told You.com to address their error message, not tell me what to do with a computer that works with other AI services. I have Venice.ai ignoring prompts. Prior to updates, the Venice.ai system did a better job of responding to prompts. ChatGPT is now unable to concatenate three or four responses to quite specific prompts and output a Word file. I got a couple-hundred-word summary instead of the outputs, several of which were wild and crazy.

Thanks, Venice.ai. Close enough for horse shoes.
When I read “Michael Burry Doubles Down On AI Bubble Claims As Short Trade Backfires: Says Oracle, Meta Are Overstating Earnings By ‘Understating Depreciation’,” I realized that others are looking more carefully at the BAIT outfits and what they report. [Mr. Burry is the head of Scion, an investment firm that is into betting certain stock prices will crater.] The article says:
In a post on X, Burry accused tech giants such as Meta Platforms Inc. and Oracle Corp. of “understating depreciation” by extending the useful life of assets, particularly chips and AI infrastructure.
This is an MBA way of saying, “These BAIT outfits are ignoring that the value of their fungible stuff like chips, servers, and data center plumbing is cratering.” Software executes processes, usually mindlessly. But when one pushes zeros and ones through software, the problems appear. These can be as simple as nothing happening or a server just sitting there and blinking. Yikes, bottlenecks. The fix is usually to reboot and get the system up and running. The next step is to buy more of whatever hardware appeared to be the problem. Sure, the software wizards will look at their code, but that takes time. The approach is to spend more on compute or bandwidth and then slipstream the software fix into the workflow.
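The “understating depreciation” mechanics Mr. Burry describes are simple arithmetic. A toy straight-line example (the dollar figures and the fleet are invented for illustration, not taken from any company’s filings):

```python
# Straight-line depreciation: annual expense = asset cost / assumed useful life.
# Stretching the assumed life of the same GPU fleet halves the expense hitting
# the income statement, lifting reported profit by the same amount.
# All figures are hypothetical.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    return cost / useful_life_years

fleet_cost = 30e9  # hypothetical $30B of chips and data center gear

short_life = annual_depreciation(fleet_cost, 3)  # 3-year life: $10B/year
long_life = annual_depreciation(fleet_cost, 6)   # 6-year life: $5B/year
print(f"Reported profit lifted by ${(short_life - long_life) / 1e9:.0f}B per year")
# Reported profit lifted by $5B per year
```

If chips actually wear out or become obsolete on the shorter schedule, the longer schedule is booking profit today and deferring the expense to later years.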
In parallel with the "spend to get going" approach, the vendors of processing chips suitable for handling flows of data keep improving their products. The cadence is measured in months. When new chips become available, BAIT outfits want them. It is like building highways: a new highway does not solve a traffic problem. The flow of traffic increases until the new highway is just as slow as the old one. The fix, which mirrors the BAIT outfits' approach, is to build more highways. Meanwhile, software fixes are slow and the chip cadence marches along.
Thus, understating depreciation and probably some other financial fancy dancing disguise how much cash is needed to keep those less and less impressive incremental AI innovations coming. The idea is that someone, somewhere in BAIT world will crack the problem. A transformer-type breakthrough will solve the problems AI presents. Well, that's the hope.
The article says:
Burry referred to this as one of the “more common frauds of the modern era,” used to inflate profits, and is something that he said all of the hyperscalers have since resorted to. “They will understate depreciation by $176 billion” through 2026 and 2028, he said.
Mr. Burry is a contrarian, and contrarians are not as popular as those who say, “Give me money. You will make a bundle.”
There are three issues involved with BAIT and the somewhat fluffy financial situation AI companies in general face:
- China continues to put pressure on for-profit outfits in the US. At the same time, China has been forced to find ways to "do" AI with less potent processors.
- China has more power generation tricks up its sleeve. Examples range from the wild and crazy mile-wide hydro dam to solar power, among other options. The US is lagging in power generation and alternative energy solutions. The cost of AI's power is going to be a factor forcing BAIT outfits to do some financial two-steps.
- China wants to put pressure on the US BAIT outfits as part of its long-term plan to become the Big Dog in global technology and finance.
So what do we have? We have technical debt. We have a need to buy more expensive chips and data centers to house them. We have financial frippery to make the AI business look acceptable.
Is Mr. Burry correct? Those in the AI is okay camp say, “No. He’s the GameStop guy.”
Maybe Microsoft’s hiring of social media influencers will resolve the problem and make Microsoft number one in AI? Maybe Google will pop another transformer type innovation out of its creative engineering oven? Maybe AI will be the next big thing? How patient will investors be?
Stephen E Arnold, November 20, 2025
AI Is a Winner: The Viewpoint of an AI Believer
November 13, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Bubble, bubble, bubble. This is the Silicon Valley version of epstein, epstein, epstein. If you are worn out from the doom and gloom of smart software’s penchant for burning cash and ignoring the realities of generating electric power quickly, you will want to read “AI Is Probably Not a Bubble: AI Companies Have Revenue, Demand, and Paths to Immense Value.” [Note: You may encounter a paywall when you attempt to view this article. Don’t hassle me. Contact those charming visionaries at Substack, the new new media outfit.]
The predictive impact of the analysis has been undercut by a single word in the title “Probably.” A weasel word appears because the author’s enthusiasm for AI is a bit of contrarian thinking presented in thought leader style. Probably a pig can fly somewhere at some time. Yep, confidence.
Here’s a passage I found interesting:
… unlike dot-com companies, the AI companies have reasonable unit economics absent large investments in infrastructure and do have paths to revenue. OpenAI is demonstrating actual revenue growth and product-market fit that Pets.com and Webvan never had. The question isn’t whether customers will pay for AI capabilities — they demonstrably are — but whether revenue growth can match required infrastructure investment. If AI is a bubble and it pops, it’s likely due to different fundamentals than the dot-com bust.
Ah, ha, another weasel word: “Whether.” Is this AI bubble going to expand infinitely or will it become a Pets.com?
The write up says:
Instead, if the AI bubble is a bubble, it’s more likely an infrastructure bubble.
I think the ground beneath the argument has shifted. The "AI" is a technology like "the Internet." The "Internet" became a big deal. AI is not "infrastructure." That's a data center with fungible objects like machines and connections to cables. Plus, the infrastructure gets "utilized immediately upon completion." But what if [a] demand decreases due to lousy AI value, [b] AI becomes a net inflater of ancillary costs like a Microsoft subscription to Word, or [c] electrical power is either not available or too costly to keep a couple of football fields of servers running 24×7?
I liked this statement, although I am not sure some of the millions of people who cannot find jobs will agree:
As weird as it sounds, an AI eventually automating the entire economy seems actually plausible, if current trends keep continuing and current lines keep going up.
Weird. Cost cutting is a standard operating tactic. AI is an excuse to dump expensive and hard-to-manage humans. Whether AI can do the work is another question. Shifting from AI delivering value to server infrastructure shows one weakness in the argument. Ignoring the societal impact of unhappy workers seems to me illustrative of taking finance classes, not 18th century history classes.
Okay, here’s the wind up of the analysis:
Unfortunately, forecasting is not the same as having a magic crystal ball and being a strong forecaster doesn’t give me magical insight into what the market will do. So honestly, I don’t know if AI is a bubble or not.
The statement is a combination of weasel words, crawfishing away from the thesis of the essay, and an admission that this is a marketing thought leader play. That’s okay. LinkedIn is stuffed full of essays like this big insight:
So why are industry leaders calling AI a bubble while spending hundreds of billions on infrastructure? Because they’re not actually contradicting themselves. They’re acknowledging legitimate timing risk while betting the technology fundamentals are sound and that the upside is worth the risk.
The AI giants are savvy cats, are they not?
Stephen E Arnold, November 13, 2025

