Old and Fired? Suck It Up, Buttercup

March 26, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

A 54-year-old fellow claims age discrimination. The senior director of monetization analytics wrote an article that makes clear he believes the estimable Meta dumped old people. Full disclosure: I am 82, and I think a person who is a bit more than a quarter century younger than I am is not what I would call old. I completed college at a one-horser in the Midwest and managed to fool enough people that I deserved a graduate degree. I had been working in “real” jobs with a secretary and staff (believe it or not) before this Franchet fellow was conceived.


Thanks, Venice.ai. Sort of bad, but good old Google Gemini fixed up your output. Is that why having just one AI system is a really lame idea? I think it is.

The same whimpers were emitted when IBM (another outstanding company) identified employees who bumped up health care liabilities, wanted vacations with pay, and expected retirement accounts. Why keep these dodderers around when cheap and good enough professionals were available in the idyllic city of Bangalore, India? Some of the people who were allowed to find their futures elsewhere posted on social media about their job loss. How did that work out? It didn’t. The opportunity to push boundaries was not withdrawn. That hot desk in New Jersey went to a contract worker somewhere, forcing the manager to obtain a world clock to schedule an F2F (face-to-face) video conference.

“Meta Unfairly Targeted Older Workers During Layoffs Last Year, Lawsuit Claims” explains:

“Employees 40 and older were 1.5 times as likely to be included in the layoffs than employees under 40, and employees 50 and older were 2.5 times as likely to be terminated than employees under 40,” the lawsuit reads, allegedly citing data provided by the company to laid-off workers.
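Those multipliers are simple relative-risk arithmetic. A minimal sketch, using made-up headcounts rather than anything from the lawsuit, shows how such ratios are computed:

```python
# Hypothetical headcounts for illustration only; the lawsuit's underlying
# data were not published in the cited article.
layoffs_over_40, staff_over_40 = 150, 1_000
layoffs_under_40, staff_under_40 = 200, 2_000

rate_over_40 = layoffs_over_40 / staff_over_40      # 0.15 (15% laid off)
rate_under_40 = layoffs_under_40 / staff_under_40   # 0.10 (10% laid off)

relative_risk = rate_over_40 / rate_under_40
print(f"Workers 40+ were {relative_risk:.1f}x as likely to be laid off")
# Workers 40+ were 1.5x as likely to be laid off
```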

Am I, an authentic dinobaby, surprised? You have to be kidding me with that stupid question. Let me explain why Silicon Valley-type outfits and the BAIT outfits (big AI tech firms) do not want people who appear to their leadership to be old-timers. I will give three reasons and make them really simple and clear:

  1. Cost
  2. Cost
  3. Cost

Now there may be other issues; for example, a dinobaby like myself listens, questions, and then, when warranted, pushes back. How many zippy computer scientists under the age of 23 want that? Answer: Zero. How many MBAs want to have their cherished boilerplate game plans challenged? Answer: Zero. How many Peter Principle promotees want to be reminded they are making a bad decision? Answer: Zero.

I find the idea that Meta is culling old cattle believable and part of the playbook. Many of these outfits’ senior managers struggle with imposter syndrome. These individuals sense that something is amiss. Therefore, a wide range of coping mechanisms comes into play. Examples range from forming a squishy bond with another humanoid to buying a vehicle with a big engine, from ignoring physical exercise to becoming a gym rat (albeit at a gym with chrome machines and odor-free plastic on the weight bench). I would include the odd cruise-ship-scale yacht and trophy wife or companion. Yes, these icons of American business have to deal with those inner anxieties. (I will not mention drugs, Epstein Epstein Epstein, and causing a discarded companion to attempt suicide. No, I definitely will not.)

The terminated Franchet is the source of this passage in the cited article:

Six months before his termination, in August 2024, Franchet received an “At or Above Expectations” performance rating. Just a few months later, Meta introduced a new “lowest performer” category. The lawsuit claims the review process used ahead of the layoffs was less rigorous than usual. During that process, Franchet received a “Met Most Expectations” performance rating and was classified as one of the company’s lowest performers.

So the personnel procedure did not work. How many systems and policies regarding people work at Meta? I don’t know the answer, but there is the occasional suicide attributed to the firm’s “bringing everyone together” system. I have heard that law enforcement in some cities checks Facebook Marketplace for that area if there is a notable robbery. One officer told me a couple of years ago, “Who needs a fence? There’s Facebook Marketplace.” I thought this was an interesting observation.

Net net: Old people belong in the warehouses for the soon to be unliving. Get used to it. Worrying will take years off your life. Be a happy dinobaby and don’t litigate. That reduces one’s chances for a consulting gig. Former employees who take a big company to court may get a “lowest performer” hashtag.

Stephen E Arnold, March 26, 2026

AI and Hitting a Math Wall

March 25, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

The average AI chatbot user realizes that the technology has its limits. An intelligent user (who double-checks the facts) knows that the bots are prone to hallucinations and takes everything they dish out with a binary grain of salt. Gizmodo explains the limits of AI bots and how the technology is about to hit a computational brick wall: “AI Agents Are Poised to Hit A Mathematical Wall, Study Finds.”

AI bots are built on LLMs with the belief that they will grow infinitely, gain more knowledge, and become more human in their autonomy. The father-and-son research team of Vishal Sikka and Varin Sikka wrote a paper (hopefully without AI’s help) about the limits of AI. Apparently LLMs can’t do agentic and computational tasks beyond a certain complexity. In other words, AI may face computational limits. Thus, mathy innovation is going to be needed.

The paper explains that AI systems are programmed to complete tasks only as far as the parameters of the LLM allow. LLMs have limited processing capabilities and must operate within their bands of knowledge. When tasks go beyond those parameters, more complex models are needed. The LLMs can’t extrapolate the required information, so they either fail at the tasks or return incorrect information.
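A toy model of my own, not math from the Sikka paper, shows why task complexity becomes a wall: if each step of an agentic task succeeds with probability p, an n-step task succeeds with probability p^n, and that number collapses fast.

```python
# Assumed per-step reliability; real agent benchmarks vary widely.
p_step = 0.95

for n_steps in (1, 5, 10, 20, 50):
    p_task = p_step ** n_steps   # success requires every step to succeed
    print(f"{n_steps:>2} steps -> {p_task:6.1%} chance of success")
# 50 steps ->   7.7% chance of success
```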

AI, therefore, needs to be helped out by humans who come up with new methods and techniques:

"The basic premise of the research really pours some cold water on the idea that agentic AI, models that are able to be given multi-step tasks that are completed completely autonomously without human supervision, will be the vehicle for achieving artificial general intelligence. That’s not to say that the technology doesn’t have a function or won’t improve, but it does place a much lower ceiling on what is possible than what AI companies would like to acknowledge when giving a “sky is the limit” pitch.”

Other experts have reported similar results, and the average user can tell you the same thing. Can AI replace humans? No, but the MBAs and bean counters have calculated that smart software is cheaper and faster. Plus, AI does not need health care, retirement contributions, or vacations.

Whitney Grace, March 25, 2026

AlphaTON Capital: From Cocoon to Consumer Gaming

March 24, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Talk about a flip. I read a surprising story titled “AlphaTON Capital Acquires Controlling Interest in GAMEE, Adding 119 Million Users to its Ecosystem.” The article appeared in the online publication Hackernoon. AlphaTON is what I call a Swanson TV Dinner company. The firm was a pharmaceutical outfit called Portage, and it was listed on NASDAQ. A firm named RSV swam up and bought the firm in the fall of 2025. The company was renamed, paperwork filed, and the AlphaTON Capital entity was in business and listed on the US NASDAQ. After some organizational shifting, the company has modified its original business plan, which was to acquire AI compute, let Telegram sell it, and get paid some money by Telegram.


Surprise, mom. Thanks, Venice.ai. Good enough.

Now the company is in the “ecosystem” business with a focus on consumer online games. The Hackernoon article reports:

AlphaTON Capital Corp. (Nasdaq: ATON), a public technology company dedicated to scaling the Telegram super-app ecosystem, today announced it has entered into a definitive agreement to acquire a 60% controlling interest in GAMEE, a leading mobile gaming platform and wholly owned subsidiary of Animoca Brands.  Concurrently, AlphaTON and Animoca Brands have formalized a Strategic Alliance to pursue broader commercial opportunities across blockchain and social gaming.

If you are “into” the Telegram, TON Foundation, AlphaTON Capital ecosystem, you understand the references. For some people, a bit of deconstruction might be helpful; to wit:

  • Telegram super-app ecosystem. This means creating software applications that run on the Telegram platform. The platform includes smart contracts, bots or software robots, advertising, and blockchain (the TONcoin crypto, bridging technology to move crypto across different blockchains, unique programming languages, hundreds of partners and thousands of developers)
  • Animoca Brands. The company is well-known to the TON Foundation. The company is a partner and a developer and investor in crypto projects for the Telegram ecosystem. This is important because the AlphaTON Capital outfit is part of the Telegram ecosystem and as a new start up, AlphaTON Capital has no “ecosystem.”
  • Social gaming. This is a term used to describe a wide range of online games running in the Messenger environment. Many games are gambling platforms. The goal is to attract people to a “harmless” game like Hamstr Kombat, get the player comfortable with winning tokens, and then convert those tokens into TONcoin. For a 13-year-old interested in getting something for nothing and beating other players to the top of the leader board, social gaming is the ideal gateway to gambling. (In my monograph “The Telegram Labyrinth” I explain how the social games morph into casino gambling run by international operators from locations with flexible regulations. To request a copy, write kentmaxwell at proton dot me.)

Armed with these contextual items, the AlphaTON Capital acquisition is either a diversification of the AI compute business model or an attempt to generate revenue from online games.

The Hackernoon story explains that Animoca can’t buy AlphaTON Capital for several years.

Several observations are warranted:

  1. The value of the TONcoin has dropped since AlphaTON Capital was conceived and rolled out in late 2025
  2. The ticker symbol for AlphaTON Capital is ATON. That is the name of a large Russian bank.
  3. The company has severed ties with an individual and a firm with expertise in generating a market for TONcoin and other crypto
  4. No references to Yuri Mitin, the principal figure at Red Shark Ventures aka RSV appear in the story. Red Shark was founded in Moscow and moved to Toronto, Ontario prior to the AlphaTON Capital shell flip. This linkage could be interesting to some investors.
  5. The shift from AI compute signals that AlphaTON Capital had to find another way to generate excitement, value, and revenue. Perhaps online gaming will succeed.

Net net: AlphaTON Capital is evolving. TON Strategy Company, another Telegram shell flip which took place a few months before the AlphaTON Capital play, faces similar headwinds. With Pavel Durov’s trial in France approaching and the Kremlin’s steadily increasing pressure on Telegram users in Russia to abandon the platform, the Telegram ecosystem may be facing significant financial challenges.

Stephen E Arnold, March 24, 2026

Nvidia: PR That Screws Up Some Data Center Planning

March 19, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read a remarkable piece of content marketing collateral. The information appeared as a feature item in the online publication Venture Beat. “Nvidia Introduces Vera Rubin, a Seven-Chip AI Platform with OpenAI, Anthropic and Meta on Board” struck me as a somewhat bold attempt to make sure Nvidia remained a trillion-dollar, super-high-performance, innovative company. If that were not enough, without Nvidia the artificial intelligence, agentic golden age would not come to pass. Wow! The only hitch in the git-along is that nothing substantive was revealed about where this marvelous constellation of new chips and software would be housed, powered, cooled, and connected.


The conflict of turtle and digital time. Clock speeds matter. Thanks, Venice.ai. Good enough.

I think that was not an intentional omission. The point of the featured article was to market the company, its technology, and its “now in production” innovative silicon. In this post, I want to talk about the data center omission and link the grandiosity of the announcement to the valiant efforts of a small publicly traded company trying to generate revenue from the AI agentic revolution.


The Data Center Angle

Building a data center or retrofitting an existing one is different from buying a table lamp from Wayfair and plugging it in. The work is at best a multi-month effort and more likely a project that will stretch out 24, 36, or more months before the system goes online for customers. Silicon and software move according to one clock speed. Permits, power, plumbing, and planning chug along at a different clock speed.

Upgrading or getting a new data center online typically morphs into a multi-year capital commitment that bumps into the constraints on building physical facilities. These efforts run on what I call “turtle time.” No leadership speech can change the clicks in turtle time.

A new data center deployment begins with land and with working in a political environment that is local but has the potential to become a state or federal matter. People in the US ignore the availability of power and water. When my family lived in Brazil, we had power only a few hours a day. Water was a separate effort because when the tap flowed, the stuff that emerged could and would kill someone careless enough to drink it. The 1950s in Campinas was an education in modern conveniences for my mother, father, and me. Many US professionals make assumptions about water and power. Those assumptions may be different from what is economically feasible.

In the US and some other countries, a data center needs sufficient electrical grid capacity, access to water for cooling, acceptable distance from flood zones and seismic risk, and zoning classifications that permit the construction of industrial-scale power and thermal infrastructure. In some US states, each of these requirements is a variable. Navigating a public approval process, getting appropriate zoning variances, working out utility agreements, and even obtaining building permits do not move at the pace a high-technology company’s marketing department or bean counters prefer. Schedules of municipal planning boards, state environmental agencies, and utility commissions are measured in turtle time. Keynote enthusiasm may not be shared by understaffed and politically constrained state, county, and city bureaucrats.

Once land and permits are secured, old-fashioned work begins. The Vera Rubin rack-scale system appears to weigh about two US tons (the same as two black rhinoceroses) and demands liquid cooling infrastructure and power delivery equivalent to a small city like Seymour, Indiana. Liquid cooling loops, coolant distribution units, and high-amperage busbars must be engineered, procured, installed, and tested. The supply chains for those components (heat exchangers, precision-machined cooling trays, high-current switchgear) are likely to be interrupted by [a] current data center construction and [b] supply chain problems caused by war.
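A back-of-the-envelope sizing sketch shows why racks like these drag a small city's power bill behind them. The per-rack draw, PUE, and household figures below are my assumptions, not Nvidia's published numbers:

```python
# Rough facility sizing; every constant here is an assumption.
racks = 500
kw_per_rack = 150        # assumed draw for a liquid-cooled AI rack
pue = 1.3                # power usage effectiveness: cooling/distribution overhead
kw_per_home = 1.2        # assumed average US household draw

it_load_mw = racks * kw_per_rack / 1_000   # 75 MW of IT load
facility_mw = it_load_mw * pue             # ~97.5 MW at the utility meter
homes = facility_mw * 1_000 / kw_per_home  # ~81,000 homes' worth of power

print(f"{facility_mw:.1f} MW facility, roughly {homes:,.0f} homes")
```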

Connectivity requirements add another layer. The Vera Rubin platform relies on NVLink sixth-generation switch fabric delivering 260 terabytes per second of scale-up bandwidth within a rack. Nvidia’s innovation also needs ConnectX-9 SuperNIC networking capable of providing 1,600 gigabits per second per graphics processing unit for scale-out. The fiber, transceivers, and switching infrastructure required to support those bandwidths at facility scale are not commodity items available from a warehouse. Most are at this time specialty components with their own lead times, qualification requirements, and integration complexity. Speed is everything until a physical device is required. Then, you guessed it, turtle time.

For existing operational data centers — including facilities like AT&T Ashburn or Equinix in Miami — the retrofit path is not without pitfalls. Floor load ratings, raised floor depths, power distribution architectures, and cooling topologies have to support the forthcoming Nvidia AI systems. Structural fixes, electrical panel replacement, and cooling system overhaul are not a matter of adding a dozen lines of code. Taking a facility partially or fully offline during upgrades shifts the project from go-fast to go-slow quickly.

Now… the Marketing Gap

Nvidia’s announcement of the Vera Rubin platform is technically slick at the component level. Seven chip types, rack-scale integration, stated performance improvements of five times the inference throughput and ten times lower cost per token compared to the Blackwell platform. Impressive indeed. The silicon exists. The fabrication process at TSMC’s three-nanometer node is allegedly delivering satisfactory yields.

What the announcement does not address is the Grand Canyon-scale gap between chip availability and operational deployment.

That gap is currently wider than it has been at any point in the modern data center era. The global supply chain for advanced semiconductor manufacturing equipment, particularly the extreme ultraviolet lithography systems produced almost exclusively by ASML in the Netherlands, is at this time operating under [a] geopolitical constraints, [b] export control regimes, and [c] demand pressures that have no recent precedent. Access to process-critical materials like helium, which is used in significant quantities throughout semiconductor fabrication and precision cooling, faces supply disruptions tied to instability in producing regions; for example, the Iran War. Specialty packaging substrates, high-bandwidth memory fourth-generation components, and silicon photonics transceivers each face their own procurement vulnerabilities.

“Full production” in Nvidia’s usage means the company’s fabrication partners are producing chips at volume. It does not mean that the Full Monty of rack, cooling, networking, software stack, and facility is going to be turned on sometime between July and December 2026. Based on my experience, there is a difference between a system in a lab and in a production data center meeting service level agreements 24×7.

Will I overlook Nvidia’s own digital clock for inventing or refining its chips and software? Nope. First, the company does not run on turtle time. It is zipping along on Silicon Valley time. Nvidia’s own product cadence creates a rational deterrent to capital commitment for Excel jockeys. An optimist with access to money has to consider the infrastructure investment required to deploy Vera Rubin at scale and not gobble antacids because Nvidia’s or a competitor’s next big thing is already in development or in preliminary testing. The infrastructure clock ticks in turtle time. The silicon clock ticks fast. Interest on loans ignores both clocks. It just accrues. Asymmetry is the name of the gambling game. This is not Hamstr Kombat.
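A rough sketch of that asymmetry, using assumed loan terms rather than any real deal's figures: the debt clock runs during the entire turtle-time build.

```python
# Illustrative carry-cost math; principal, rate, and timeline are assumptions.
capital = 2_000_000_000   # borrowed for chips plus facility
annual_rate = 0.08        # assumed cost of debt
build_months = 24         # assumed permitting-to-power-on timeline

balance = capital * (1 + annual_rate / 12) ** build_months
carry_cost = balance - capital
print(f"Interest accrued before the first billable token: ${carry_cost/1e6:.0f}M")
# ~ $346M, owed whether or not the next chip generation has already shipped
```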

What about Those Data Center Plays?

Readers of this dinobaby blog and our new “Telegram Notes” know that we have been monitoring one of the most unusual AI compute / data center plays in the last two years. Believe me, there have been some wild and crazy ones. Have you been to Memphis lately?

I am referencing the publicly traded cancer research company that flipped into a reseller of AI compute. Yep, makes perfect sense. Plus the maneuver was accomplished in five or six months in 2025. I am referring to AlphaTON Capital, NASDAQ:ATON. (Did you know “ATON” is the name of a big Russian financial outfit?)

AlphaTON Capital has publicly claimed ownership of 500 Nvidia GB200 graphics processing units, positioning that holding as a substantial artificial intelligence infrastructure asset. Examined against the Vera Rubin announcement, that position has the profile of a stranded investment in accelerating decline.

The GB200 is what I would call stable, current-generation hardware today. But Vera Rubin’s stated economics — one-tenth the cost per token compared to Blackwell — mean that GB200-based compute will become progressively uncompetitive as Vera Rubin deployments roll out. Customers purchasing artificial intelligence compute services make decisions on cost per token and performance per watt. On both metrics, GB200 infrastructure is likely to be less efficient even if one considers the deployment delays discussed above.
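A simple sketch, with illustrative prices rather than actual market rates, of what a claimed ten-times cost-per-token advantage does to a GB200 reseller's margin:

```python
# All dollar figures are invented for illustration.
gb200_cost = 1.00                 # assumed operator cost per million tokens
market_price = 1.50               # assumed prevailing price set by older gear

vera_rubin_cost = gb200_cost / 10          # Nvidia's claimed 10x advantage
new_price = vera_rubin_cost * 1.5          # rival keeps the same 50% markup

gb200_margin = new_price - gb200_cost
print(f"Price falls to ${new_price:.2f}/M tokens; GB200 margin: ${gb200_margin:+.2f}")
# Price falls to $0.15/M tokens; GB200 margin: $-0.85 (selling below cost)
```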

AlphaTON Capital’s position is further weakened by its infrastructure dependency. The company has no purpose-built facility. Its 500 units are dependent on a colocation arrangement with a data center in Sweden. This wonderful country’s data center operators partnering with or hired by AlphaTON Capital have to navigate the same procurement, permitting, cooling retrofit, and optimization timeline that constrains most AI compute data center operators. By the time the AlphaTON Capital chips are producing revenue, the market value of GB200-based compute will have deteriorated significantly.

Graphics processing units that cannot compete on cost-per-token economics in a market increasingly defined by Vera Rubin benchmarks do not retain enterprise customers. These chips seem destined for lower-margin jobs. At some point, the GB200s may turn up on eBay. For a publicly traded company whose valuation depends on the market accepting its artificial intelligence infrastructure swizzle, words won’t work.

The consequences will not be measured in turtle-time clock ticks. The speed of the problem will startle many, including AlphaTON Capital’s investors.

Stephen E Arnold, March 19, 2026

Real or Imaginary Negativism: AI and Social Media

March 19, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I heard that another couple of CEOs have pulled the cord on their golden parachutes. BlueSky and DarkTrace watched their CEOs drift gently to earth. Other seemingly unrelated news choked my newsfeed in the last 24 hours. I won’t talk about the US economy or the attitude of some semi-allies of the United States.

Nope. Not my lane of the Information Highway.

One of my feeds pointed me to “The State of Social Media Engagement in 2026: 52M+ Posts Analyzed.” This “state of” document explains that interaction rates on Instagram (Meta), LinkedIn (Microslop), and Threads (Meta) went down. What the heck is an interaction rate? The document does not define “interaction rate.” What do I make of this? I suppose if I put my Statistics 101 hat on, this is an unacceptable method. What the heck are these “experts” studying? In my opinion, not much. After working through the document, I came up with several items of interest to me.

Overall it looks to me as if whoever uses these platforms is doing less of the engagement and interaction thing. Some outfits like LinkedIn, despite a poor employment environment, are declining. Is that Microslop’s influence? Who knows, but that’s something I will look into at some point.


The “experts” undertaking this exercise in Statistics 101 type analysis have no clue what’s happening on YouTube, one of the largest social media services available in 2025. How can this be? Aren’t those who design research into “state of” documents supposed to tackle exactly this type of difficult problem? Well, not in 2026 in this specific report. Shrugging one’s shoulders is not exactly a confidence builder when key concepts are impossible to differentiate.

The new iteration of Twitter is bopping along. It remains in the study, and the “data” show that it is not in collapse. I think I would suggest Twitter is stagnant, despite its blue checks and its willingness to annoy those who don’t want free services to generate images that thrill those with the brain power of a 14-year-old male. Therefore, I have concluded that if the study knows little about the big dog YouTube, I should not be surprised that the experts cannot shed much research light on the Twitter thing. Why is it on the list if it is bumbling along with no big change in engagement and interaction?

Now let’s look at another example of 2026 statistical analysis. I refer to Hart Research Associates’ “Study 260072.” (Very informative title, don’t you agree?) The data suggest that about half of those in the sample are not too thrilled with smart software. About one fourth of those in the sample were “positive” in their views of AI. And another quarter of the sample did not care one way or another.

I noted that negative feels may be higher than the figure of “46 percent.” My take is that negative feels push past 55 percent, but I am a dinobaby, not a whiz kid survey expert. The survey seems to have a political bias, and the lack of granularity is not disguised by the numerous tables.

Several observations:

  1. Both research groups need to spend less time chatting in conference rooms and more time with the basics presented in introductory statistics classes.
  2. The “decline” in major online services could suggest that the buzz from the first couple of decades of social media is changing frequency. That hypothesis needs more serious investigation.
  3. The negativism toward AI is an issue that the hard chargers in Silicon Valley have to consider. The bull-in-the-china-shop approach may work at first, but some people just back away. With the next big thing on the blackjack table, the big bets may be more risky than the MBAs think.

Net net: Even though people build it, some may move away from “it” or just not bother to play the game when given a choice.

Stephen E Arnold, March 19, 2026

Oracle Chained: Is James Bond Available?

March 18, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read a data center report with an ominous vibe. “Oracle Is Building Yesterday’s Data Centers with Tomorrow’s Debt” reports:

OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units, according to a person familiar with the matter.


Brave Roman soldiers know the biggest and most modern ship in the fleet has been rammed and will sink. Thanks, Google. Close enough for horse shoes.

I find anonymous reports and unidentified sources a very good indicator of the effort a news gathering outfit expends on a story.

Okay, we have some clicky names: OpenAI, Oracle, and Nvidia. Stargate has a weird Captain Video resonance. And Abilene? I think it is known as the official storybook capital of the US. Oh, yes, there are wide open spaces, some water, some power due to Texas’ outstanding power generation set up, and cows.

The article states:

Oracle secured the site, ordered the hardware, and spent billions of dollars on construction and staff, with the expectation of going bigger.

But — and this is an important “but” — the story presents:

OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units

What is behind this Oracle problem? The write up asserts:

For the companies building frontier models, the smallest improvement in performance could equate to huge gaps in model benchmarks and rankings, which are closely followed by developers and translate directly to usage, revenue, and valuation.  That all points to a bigger problem at play. For infrastructure companies, securing a site, connecting power and standing up a facility takes 12 to 24 months at minimum. But customers want the latest and greatest, and they’re tracking the yearly chip upgrades.

The “real” news story seems to be in the last paragraph, and I quote:

Beyond Oracle, GPU depreciation is a risk for the broader market and could have ramifications across the AI landscape. Every infrastructure deal signed today may result in a commitment to outdated hardware before the power is even connected.

The issue is time. Data centers require time to build. Outfits like Nvidia operate on Silicon Valley clock cycles. The challenge of getting the two approaches to time to line up is difficult. When the timing does not line up, multiple costs stack up. One hopes that whiz kids can figure out how to deal with two clocks without collapsing under the fungible weight of the intangible clicks. In a James Bond film, the fictional hero stopped the ticking clock at 007. Without his skills, the bomb housing to which he had been handcuffed would have detonated.

Is James available to provide consulting advice to Oracle- and OpenAI-type outfits?

Stephen E Arnold, March 18, 2026

Telegram Updates Available for March 17, 2026

March 17, 2026

Informal write ups about Telegram and its associated entities.

We have posted some new stories about Telegram. These are part of our “Telegram Notes” series.

The new additions are:

Skolkovo’s Microwaved Shells Recipe. The focus is on Moty Cristil and the AlphaTON Capital firm.

?-Note: Telegram Ads Now Illegal… in Russia. A person or company buying advertising on Telegram can be in trouble in Russia.

?-Note: Kremlin Steps Up Telegram Pressure. The Kremlin has fined Telegram for being Telegram.

We will be lecturing about Telegram at the 2026 National Cyber Crime Conference. Attendees receive a copy of my new book “The Telegram Labyrinth.”

Stephen E Arnold, March 17, 2026

Who Pays for Electricity?

March 9, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

The cost of electricity is an interesting topic. Figuring out power demand and the cost of the infrastructure requires research, math, experience, and knowledge of technical options. I worked years ago with a utility rate expert. He was a specialist, and he was, no pun intended, in demand.

I thought about the easy assumption that power is just available. Buy a gizmo, plug it in, and it turns on. The electric bill arrives, and it is relatively predictable. Rates typically don’t gyrate chaotically. When my family lived in Brazil, the power would go off but the “cost” of the electricity was reasonably predictable. What happens when one assumes that a data center can be built and plugged in? The electricity is there and the bill arrives. No big deal.


When the batteries go dead, recharging may be a challenge. Thanks, Venice.ai. Good enough.

Nope. Big deal. Utility demand forecasting is tricky even when nothing wobbles too far from the forecasters’ and analysts’ projections. Plugging in numerous data centers means that the complex machinery for power generation has to be refigured. Behind the scenes of what is, to most people, just a power generation company are many moving parts. Out of sight and out of mind does not mean easy to operate and trivial to scale.

I thought about this assumption that electricity is just “there” when I read “Tech Giants Sign Energy Pledge at White House Ahead of Midterms.” The operative word is “pledge.” Promising to pay for electricity required to run the data centers some companies are building, trying to build, planning to build, and hoping to operate is just that: a promise. A promise is not cash. Furthermore, I am not sure the “tech giants” know the cost of the electricity their AI factories will require.

The BBC article “Tech Firms Pledge to Pay for AI data Center Power Costs. But Will They?” cuts to the core of the electricity problem. The write up states:

The tech pledge may be difficult to enforce, said John Quigley, a senior fellow at the Kleinman Center for Energy Policy at the University of Pennsylvania. He cited the multiple layers of government, grid managers and electricity regulators involved in power projects.

Saying is one thing. Doing is another.

When it comes to power generation, fast talk may provide superficial reassurance to some people. The shift from a pledge to forking over money may not be the same as putting one’s luxury car in gear and allowing the smart self-driving vehicle to whisk the passengers to an off-site face-to-face meeting. How is that working out, by the way?

Several observations:

  1. Rolling out cost effective power generation takes time, specialty manufacturing, and planning. One does not ring up a company and have a modular nuclear generating machine dropped off at the loading dock.
  2. Creating electricity and getting it where it needs to be are shotgun marriages of old-fashioned technology, advanced materials, and infrastructure both physical and digital. The costs to ensure a happy, stable union are usually steep and difficult to project with accuracy.
  3. People living near a new or expanded power generation facility, infrastructure, or a big data center might throw a wrench in the works. What few people realize is that power generation is one of the more vulnerable “it’s just there” services. Angry consumers can create some tricky challenges easily and quickly.

Net net: Pledges are not power. Pledges are words. Power generation makes modern life possible, yet few whiz kids know anything about the business. When the power goes away as it did where my family lived in Brazil, one needs a Plan B. A pledge is not a Plan B. A pledge is public relations.

Stephen E Arnold, March 9, 2026

Big Tech AI Tries to Understand Real Life

March 6, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read “OpenAI’s Compromise with the Pentagon Is what Anthropic Feared.” I want to be upfront. Every time I read or hear about MIT, I think Epstein Epstein Epstein. This translates to my being dismissive of [a] what the MIT thing outputs, [b] the integrity of the institution, and [c] what it brings to the knowledge party. Therefore, if you are into MIT, stop reading.

This particular write up is one of those crazy analyses of the perception of the world from the point of view of wizards versus how stuff actually works in the US government or any nation’s government. Whiz kids think they have something really cool. They give talks at conferences. Their moms and dads pester their connections about Timmy’s or Wendy’s great new thing. They do brown bag lunches in the bowels of the GSA. They trek to FDIC events in interesting locations. They write Substacks, blog posts, and Forbes thought leader articles. They stand in trade show booths squinting at name tags and look crestfallen when big time people walk by their bright smiles.

The reality is that outfits want to make government sales, and if they want to close a deal and keep the deal, the people who sign those contracts expect vendors to do what they are told. Is this the optimal approach by governments? No. Is this an informed strategy? No. Is this a tactic to become best pals with vendors? No.

And guess what? No one in those governments’ procurement processes cares very much what a vendor wants. Sure, there is some flexibility. But one doesn’t have to be an MIT graduate or a donor like Mr. Epstein Epstein Epstein to figure out that the government is going to prevail. Even in countries which are obscure and unfamiliar to an American big tech outfit, the approach is the same: Read the terms of the deal, agree, get paid, and do what the client wants.


A group of AI wizards learn how life is versus how life should be. Thanks, Venice.ai. Good enough.

Painful, right?

The write up says:

In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon.

Hey, MIT writer publisher thing, OpenAI got the message. I could suggest that MIT check out the history of MITRE to put my observations in context.

Everything is clear. A company that wants to do business with the government, regardless of country, needs to drop the crazy idea that governmental institutions care about the emotional zeitgeist of the whiz kids. I know that it takes time for some government professionals to grasp what one can do with a technology that is new, unfamiliar, and less friendly than making a call on an iPhone. However, once that insight arrives in the mind of a government professional, the mental orientation of the wizard is usually irrelevant. It’s noise. It’s a distraction. It’s unwanted. It’s infuriating.

The write up says:

The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use.

News flash. When the Department of War licenses a technology, that department (regardless of the nation state) is going to use that technology to complete the mission its leadership deems appropriate. If a company or a wizard cannot understand this concept, why are these firms and their wizards in the meeting and procurement process? Go hunt for money elsewhere.

How about this statement from the write up:

But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.

The leadership of the big tech AI companies think they are rational. Those well-paid experts are not. The people in the government are not rational either. Why? They are humans who have interesting ways of responding to work, technology, and the context in which they find themselves.

Why did MIT embrace Epstein Epstein Epstein? The leadership of MIT made a decision. The big AI tech people made a decision. Neither seems to have been eager to walk away. Why not try to own up to your decisions? That’s called adulting.

Stephen E Arnold, March 6, 2026

AI: Errors? Hey, No Problemo.

March 5, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I love the AI razzle dazzle. Some of the functions available to dinobabies like me are semi-useful. However, I am generally unimpressed with some of the “magic” functions these systems provide. Probabilities, flawed data used for training them, and humanoid (for now) wizard programmers doing their thing make me cautious.


Thanks, Venice.ai. Good enough.

That’s why I got a chuckle from “Unbelievably Dangerous: Experts Sound Alarm after ChatGPT Health Fails to Recognize Medical Emergencies.” The write up reports as actual factual:

The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.

Medical writing is as wonky as the information output by crypto bros. Here’s my translation of the statement: AI will miss more than half of serious health problems. My hunch is that real doctors and real AI wizards will say, “Hey, this is one study” and “Wow, the sample is statistically flawed.”

Maybe.

The write up points out:

While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure. In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.
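Under-triage is simple arithmetic: of the scenarios that demanded an immediate hospital visit, what fraction did the bot route to a lower level of care? A toy harness with invented cases, not the study's data, makes the metric concrete:

```python
# Each entry: (scenario, correct triage, bot's triage); all are invented.
cases = [
    ("crushing chest pain",                "emergency", "emergency"),
    ("wheezing, cannot finish sentences",  "emergency", "routine"),    # under-triaged
    ("sudden one-sided weakness",          "emergency", "emergency"),
    ("asthma worsening despite inhaler",   "emergency", "stay home"),  # under-triaged
]

emergencies = [c for c in cases if c[1] == "emergency"]
under_triaged = [c for c in emergencies if c[2] != "emergency"]
print(f"Under-triage rate: {len(under_triaged) / len(emergencies):.1%}")
# Under-triage rate: 50.0%
```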

I understand that smart software is a work in progress. But MBAs and would-be world visionaries want AI now, now, now. Move fast. Yep, and break things. I suppose putting a person’s life in jeopardy is insignificant, trivial even.

Here’s the conclusion of the article:

Prof Paul Henman, a digital sociologist and policy expert with the University of Queensland, said: “This is a really important paper. “If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death.”…“It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users…”

Several observations:

  • OpenAI is trying to find a way to make money. Health care is a discipline with money sloshing around. Therefore, a health play should work, right? (Remember Google tried health too and where is that now?)
  • This is one of those “if we build it, they will come” applications. Perfect use case because it made sense at lunch last month.
  • What happens when AI as it is today makes other important decisions? I think I know.
Net net: With so much money and so many egos caught in this “we have the answer” AI thing, why worry? Big tech has the answers, the lawyers, and the obsession to deliver reality their way.

Stephen E Arnold, March 5, 2026
