Two Memorable Moments in BAIT Management

April 7, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I spotted two anecdotes or future case studies this morning, April 1, 2026. I am viewing the information in these documents as valid. Yes, I know that this assumption may be problematic, but as a dinobaby, I can’t resist. Let’s look at the two examples, and then let me invite you to invest a few minutes pondering the business processes behind each moment. I suggest not sitting on Peter Drucker’s grave, having lunch, and thinking about the idea of Big AI Tech and the management methods evidenced by these fine outfits. Yes, Mr. Drucker does spin in his grave at Tesla type high frequencies.


Thanks, Venice.ai. No telling me that I was violating your terms of service with bunny rabbits in a graveyard. Good enough.

The first example is the pinnacle of high technology. The Wall Street Journal published “Anthropic Races to Contain Leak of Code Behind Claude AI Agent.” The company is surfing on US copyright precepts. Some BAIT outfits trample on these, but that’s simply context for irony’s sake. It seems that the WSJ’s sources have communicated the idea that a competitor could duplicate, clone, steal, or otherwise ingest Anthropic’s system and method. Well, maybe. My team has not convinced me that the entire Claude code is now in the hands of trustworthy competitors. CNBC reports that the “leak” occurred at 4:23 a.m. US Eastern time on March 31, 2026. (I am tempted to write April Fool! but I shall refrain.) One interesting data point, which suggests that clicks have impact, is that the code pulled 21 million views.

The second example is equally significant. I read “Oracle Slashes 30,000 Jobs with a Cold 6 a.m. Email.” The subtitle to the write up in RollingOut said, “Workers across the U.S., India, and other regions learned their jobs were gone before most people had finished their morning coffee, with no prior warning from HR or their managers.” I am not sure about “warning.” The chill in the economy and the idea of building data centers for AI compute make perfect sense to someone with spreadsheet fever and access to a large language model. The idea of building big data centers with the hope of populating them with semiconductors that will not be eBay fodder for anticipated AI demand is too trendy for this dinobaby. Toss in the factoid that those antagonistic to Big AI Tech outfits might lob a kinetic device near the electrical and cooling infrastructure. The result is hitting the delete key for a mere 30,000 employees. I assume that any publicity is good publicity. And what about that idea of personnel management?

What do these two examples of BAIT management reveal to a dinobaby like me? Here are my observations:

  1. The thought process of the leadership of BAIT firms is either isolated from what goes on at their firms or simply indifferent.
  2. The procedures in place to provide job security and intellectual property security do not function the way a dinobaby like me would set up business processes. The visible consequences reveal how the business processes actually play out.
  3. The humans at these “AI centric” outfits have not had their thought functions amplified with access to smart software. One might argue that both companies have acted in what might be labeled a less than optimal way.

Net net: I wish these were fake examples. I believe that each is a reasonably close statement of how BAIT firms view legal fences and appropriate employee management tactics.

Stephen E Arnold, April 7, 2026

Microsoft Outlook: Stable, Trustworthy, and Reliable. Maybe Not?

April 7, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Let’s start with a quote from “How Microsoft Vaporized a Trillion Dollars.”

I have seen a lot in my decades of industry (and Microsoft) experience, but I had never seen an organization so far from reality.

Not long ago, Microsoft did a Don Quixote. The firm wanted to prevent people from using the term Microslop. A week or so ago, Microsoft made noises that it would address issues with Microsoft Windows. Before that, Microsoft made security Job Number One.


Thanks, Venice.ai. Good enough. Just like Microsoft Outlook.

My wife had a news program on the little TV set kept under the kitchen counter. Guess what we heard? Here’s what caught my attention and the attention of Todd Bishop, who seems to relish his sort of friendly approach to news about the Microslop outfit:

Commander Reid Wiseman radioed Mission Control on the crew’s first day in space to report that he had two instances of Outlook running on his computer — a Microsoft Surface Pro — and neither seemed to be working.

Let’s consider this. A multi-billion dollar space launch. Live streams on a variety of media services. And what do we learn? Microsoft software is screwed up. Guess what the gentle Todd Bishop wrote:

During a press conference, Judd Frieling, the Artemis 2 ascent flight director, said the Outlook issue was not uncommon. He said the app sometimes has configuration problems when there’s no direct network connection, and the ground team resolved it by reloading Wiseman’s files in Outlook.

I don’t want to make a big deal out of this. But a company that gets publicity by having its software flagged as less than usable on a space mission to the moon has a bit of a problem. Or as the quote at the top of this essay says, “an organization so far from reality.”

I want to make three points:

  1. Promises and hand waving are not what one needs when software does not work as advertised on a space flight to the moon. Folks, this is not catching up on email from Starbucks. This is from outer space with the eyes of millions of people on the screw up.
  2. Microsoft obviously has a number of challenges, from its legacy security woes to the craziness of its AI services. Perhaps the company should heed the practicality expressed in “two objectives is no objective.” Microsoft is big. Too bad. Microsoft is complex. Too bad. Microsoft is trying. Not good enough, folks.
  3. Management cannot orchestrate success. Forget the data center baloney. Ignore the PR about agentic whatever. The leadership of the company cannot lead. They can preside over an organization that is just not working.

Net net: Vaporized is a strong word. I want to submit that having an astronaut say, “Outlook is not working” captures the reality of Microsoft. The astronauts will return from their trip around the moon. Will these fellows trust Microsoft Outlook upon their return?

Stephen E Arnold, April 7, 2026

Anthropic Complains about IP Theft and Then Gives Its IP Away Via a Security Lapse

April 7, 2026

Let’s go back a few weeks. Earlier this year, I recall reading this Business Insider story: “Anthropic Says Deepseek And Other Chinese AI Companies Fraudulently Used Claude.” The news?

“Anthropic said the distillation efforts were “industrial-scale campaigns” that included roughly 24,000 fraudulent Claude accounts that generated over 16 million exchanges “in violation of our terms of service and regional access restrictions…. Distillation is the process of training a less powerful model on the output of a more powerful model. The practice is a legitimate way that many US companies use to train their models for public release. Increasingly, major US companies are also stating that their Chinese competitors are improperly using the practice to steal their work.”

The allegation is that Anthropic released updates to their models, then the Chinese companies copied them within hours. Another issue Anthropic identified is that bad distillation poses security issues, such as the development of bioweapons. Some people believe that Anthropic used other people’s information without permission to train its models. There was a lawsuit and Anthropic paid out $1.5 billion but didn’t admit any wrongdoing.

Is this a version of the pot calling the kettle discolored? Maybe it is what’s good for the goose is definitely not good for certain ganders?

Anthropic stated that China’s AI companies Deepseek, Moonshot AI, and MiniMax used Claude to augment their own algorithms via distillation.

Now let’s think about what happened on or around March 30, 2026. Here’s a typical headline about Anthropic’s misfire: “Anthropic Leaks Part of Claude Code’s Internal Source Code.” That incident obviated the need to steal Anthropic’s intellectual property. The company could not get its act together and watched a couple of its digital circus animals wander off to be captured and processed by anyone with an Internet connection and a link to the code. Wasn’t Anthropic labeled a supply chain risk by the US government? Did Anthropic’s management lapse validate that US government statement?

The CNBC write up notes:

A source code leak is a blow to the startup, as it could help give software developers, and Anthropic’s competitors, insight into how it built its viral coding tool. A post on X with a link to Anthropic’s code has amassed more than 21 million views since it was shared at 4:23 a.m. ET on Tuesday [March 31, 2026]. The leak also marks Anthropic’s second major data blunder in under a week. Descriptions of Anthropic’s upcoming AI model and other documents were recently discovered in a publicly accessible data cache, according to a report from Fortune on Thursday, [March 26, 2026].

I know that the Big AI Tech or BAIT outfits have many highly intelligent people. But there is the nagging thought in the back of my mind that some people at the firm say and do some less than brilliant things.

Whitney Grace, April 7, 2026

Telegram Defies Kremlin

April 6, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Editor’s Note: We’re still struggling with the blocks on our Telegram Notes’ posts. The new service with the former content and the censored content will be available in a few days. You won’t believe the baloney the outfits blocking my content served me and my team. I am a dinobaby and 82, and I am amused with those who want only certain information available. For now, I find it revealing that there are entities who want what one young whiz kid told me about my write ups: “Your stuff is written by AI.” That’s good to know because it demonstrates that smart software, like some censors, is not without the capacity to make errors.

A news organization with a snappy tagline published “Telegram to Adapt to Russia Restrictions, Pavel Durov Says.” That information source is the Saudi Gazette. The article reports:

Telegram founder Pavel Durov said Saturday the messaging platform will adapt to restrictions in Russia, making its traffic harder to detect and block. In a statement, Durov said 65 million Russians continue to use Telegram daily via virtual private network (VPN) apps, with more than 50 million sending messages despite authorities slowing down the service. He said efforts to ban VPNs have pushed users toward workarounds rather than reducing usage.

The question is, “What does adapt mean?” Will the underlying more than 13-year-old plumbing be tweaked to deliver what the current government authorities demand? Or does Pavel Durov, the GOAT of Russian startups, mean that the users of Telegram will adopt workflows that get around the crackdown on Telegram?


This humorous illustration shows a government dignitary commanding a French poodle coincidentally named Pavel. The dog has followed the command “Sit.” Now the handler wants a more sophisticated demonstration of compliance. Thanks, Venice.ai. Not a Borzoi, but good enough.

As Telegram Notes has documented, suggestions gave way to orders to adopt the Kremlin-approved messaging app Max. I think of Max as Palantir-lite, but that’s just my mental shorthand at work. Maybe Uighur-Lite is a better phrase.

The article revealed that Telegram’s Russian user community is smaller than some metrics firms have reported. The 65 million number is about 30 million below some of the interesting estimates offered on Telegram and in Russian social media. If the Kremlin achieves its goal of having Pavel Durov obey its commands, Telegram’s user count may not take this hit. On the other hand, if Telegram users do not comply, then Telegram can add losing 10 percent of its user base to its list of serious problems.

The message Pavel Durov output consists of two parts, in my opinion. The first is a post reporting that “Pavel Durov Harshly Criticized Apple.” TON News on April 1, 2026, said:

Durov directly hinted that it’s not about security, but about money and the desire to preserve their [sic] market. And it all looks pretty bad when a big company simply adjusts to the rules that benefit it.

Then TON News reported that “Telegram Has Repelled Roskomnadzor’s Attack.” Telegram updated its Messenger app and the behind-the-scenes systems to change how its ClientHello is identified. The result is that the Kremlin’s message inspection system is now less effective.

According to Telegram News:

… Durov… stated that even after all the blocking attempts, over 50 million people in Russia still use Telegram every day via VPN.

Telegram News’s view is that Pavel Durov will play a cat-and-mouse game. Durov, however, is in France unless the French judiciary grants him a hall pass to leave that country to visit his office in Dubai, UAE. Will the French government announce a trial date or just keep kicking the ball of hefty red tape down the autoroute? Will Pavel Durov find a solution to this Kremlin anti-Telegram stance, his firm’s AI woes, and the revenue challenges the company faces?

On the AI front, with the Chinese Qwen model deployed to power Telegram Messenger’s “edit” function, the Chinese system changes what the user typed to conform to the political stance of the Chinese government. A Messenger user may want to be careful with wording that says, “Taiwan is an independent country.” Qwen knows that Taiwan is part of mainland China.

Another niggling issue for Mr. Durov is the possibility that two publicly traded companies could collapse or be de-listed from NASDAQ. With the Iran War, the evangelical Gateway Conference for the crypto faithful has been postponed.

What are the workarounds? Some Russians may live near a border town in Lithuania. Snag a SIM card from that country and connect to Telegram. Others may have access to a Starlink-type device. Having one on the roof of a Lada could attract some attention. A person could land a job in a Russian security service or an outfit like Global Network Management and have non-restricted access.

Telegram finds itself bogged down with challenges a Zen master might have difficulty reducing to a pleasant background hum.

PS. To read Telegram News, one must install Telegram Messenger, sign up for Telegram News, and receive the content when it appears in the chat interface. If this does not make sense, you need a copy of my Telegram Labyrinth book. For information, write kentmaxwell at proton dot me.

Stephen E Arnold, April 6, 2026

Data Centers As Sitting Ducks

April 6, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Those in the data center business with structures in the Iran war zone realize that when rockets or other kinetics strike the roof, problems ensue. A well-placed round can disable a critical piece of the electrical or cooling equipment as well. Now there is another possible threat. “Iran’s Revolutionary Guards Just Named 18 US Tech Firms as Military Targets. The Age of the Civilian Data Centre Is Over.” The write up reported on March 31, 2026:

The Islamic Revolutionary Guard Corps published a statement on its official Sepah News channel naming 18 US firms, from Apple and Microsoft to Nvidia and Palantir, as “legitimate targets” in retaliation for what it described as their role in enabling American and Israeli assassination operations inside Iran. The list reads like a roll call of the Nasdaq’s most valuable constituents. Microsoft, Apple, Alphabet, Meta, Amazon, Nvidia, Intel, Cisco, Oracle, Dell, HP, IBM, JPMorgan Chase, Tesla, General Electric, Boeing, and Palantir all appear alongside Spire Solutions and G42, the Abu Dhabi-based AI firm that has become a linchpin of the Gulf’s artificial intelligence ambitions.

Some people are aware of potential supply disruptions in gasoline and helium, but the idea that the financial operations of certain countries could be disrupted is problematic. One cannot go to the local automatic teller machine and conduct a hundred million euro transaction.


Thanks, Venice.ai. I appreciate that you excluded the missile. Good enough.

I know that data centers in the Ashburn, Virginia area are hardened. However, I am not so sure that the data centers not far from the special economic zones in Dubai are constructed to what I think of as AT&T milspecs. From what I have observed, direct missile strikes were not part of the actual construction specifications.

The write up said:

The threat is extraordinary in its specificity. Rather than targeting military installations or government buildings, the IRGC has identified private-sector technology infrastructure as the mechanism through which, it alleges, the United States has been locating and killing senior Iranian officials. The statement declared that American ICT and AI companies are “the key element in designing and tracking terror targets,” and that “for every assassination and terrorist act in Iran, one facility or unit belonging to these companies will face destruction.”

What’s interesting is that the Ukraine-style asymmetric warfare is making explicit the companies whose infrastructure is at risk. The threats may be idle, but the vulnerability exists. One cannot pile sandbags on a roof of a typical data center. I assume that’s why the subtitle to the cited article makes the point “the age of the civilian data center is over.”

The more practical knock-on effect of this threat is that the costs of retrofitting a data center are not in the budget for the current quarter. New data centers will have to have some additional thought put into their construction methods.

Data centers are sitting ducks. There are numerous points of vulnerability. Just “bury data centers” is easy to say. Using existing caves, old mine digs, or more exotic ideas like putting data centers in orbit present some challenges as well. There are some notable caves. I know from my work with the hard rock mining engineering firm Robinson & Robinson that suitable mine shafts exist if they are not filled with water or sealed to prevent some exciting environmental events from becoming noticeable to bunnies and people. The data center in space works if one has rockets that don’t explode on launch. For one firm, exploding rockets suggest the company should consider switching to the production of war munitions.

The write up pointed out:

The exposure is enormous. Microsoft has committed $15 billion to expanding its operations in the UAE by 2029. Amazon has pledged $5 billion to an AI hub in Riyadh. Oracle, Cisco, and Nvidia announced a partnership with OpenAI to build an AI campus in the UAE. Google and Amazon Web Services are constructing dedicated cloud regions in Saudi Arabia scheduled to launch this year. According to analysts at TD Cowen, hyperscaler capital expenditure is forecast to exceed $600 billion in 2026, with roughly 75 per cent tied to AI infrastructure. A substantial portion of that money is flowing into the very region the IRGC is now threatening.

I have confidence that the bean counters and MBAs at the high-tech super companies have the problem solved. These folks have their own brains and the unfettered power of AI without guardrails. Obviously for these BAIT (big AI technology) companies the data center threat is a no-brainer. I assume these BAIT outfits know who will insure their data centers too. I admire forward thinking and the use of agentic AI to solve problems. For example, what if an adversary strikes a data center in Fremont on the way to San Jose?

Stephen E Arnold, April 6, 2026

Smart Software Makes You Really, Really Intelligent

April 6, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

With the potential “borrowing” of substantive content from books and the jawboning about running out of data, guess what I learned. The great AI revolution has converged on what I call lowest common denominator information sources. I have noticed that the outputs from the test queries I slam into ChatGPT, Google, Perplexity, etc. are increasingly useless. I admit that I have a small library of test queries mostly focused on the activities of some interesting crypto bros. But not only are the systems less able to provide helpful information than they were six months ago, the hallucinations, misstatements, and the digital equivalent of “I don’t know. I don’t know” are more prevalent.


MBA artificial intelligence management students realizing that maybe dumb is now part of their ethos. Thanks, Venice.ai. I appreciate your not telling me that this image violates your terms of service. Pretty amazing based on my prior experience with you.

The possibly accurate article “AI Search Engines Cite Reddit, YouTube, and LinkedIn Most: Study” may have a partial answer. I don’t think my experience is a complete answer about convergence to the equivalent of a gentleman’s C, but the information caught my attention.

I noted this passage:

Reddit was the most-cited source across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews. YouTube, LinkedIn, Wikipedia, and Forbes also ranked in the top five. Review platforms like Yelp and G2 appeared often in recommendation queries.

Yelp is unlikely to be of help to me when I look for information about Skolkovo symposia on the subject of reverse mergers. I cannot rely on Yelp to provide accurate information about Burger Girl Diner in Louisville, Kentucky.

Here’s another snippet from the cited write up:

The research showed which domains models rely on:

  • ChatGPT favored Wikipedia, Reddit, and editorial sites like Forbes.
  • Google leaned toward platforms like Facebook and Yelp.
  • Perplexity emphasized Reddit, LinkedIn, and G2 for B2B queries.

If this information is accurate, each of the cited models evidences source bias. Not only do we have the cultural drifts identified by Dr. Timnit Gebru and the stochastic parrot crowd, we have these smart birds gobbling the calorie-depleted knowledge of people who may not know exactly of that which they write. In the case of YouTube, it would be that which they speak.

Several observations:

First, I think that many people accept information output by a computer system as accurate. Many of those people lack the drive and expertise to identify problematic output. I can spot drivel about Skolkovo instantly. Others may not, nor may they have the desire to learn that the venerable institution is the Harvard Business School of Mother Russia. It may well be when it comes to crazy reverse merger financial analyses.

Second, the developers of smart software are into recursion. This is a nifty set of methods and rationalizations that lead to automated “low hanging fruit” that provides the appearance of a complete meal. Yes, appearance. Gobble up that AI output and kill your brain, not your liver like some modern industrial products. The system surfs on a curve that speeds query processing and results output. Why cook when you can deliver a DingDong and a Diet Pepsi by digital DoorDash?

Third, the leadership of these firms has drifted away from thinking about the smart software. Most of the companies’ top dogs focus on one thing: money. Why? The specter of the first Internet winter has returned. Cash is available, but the returns are iffy. The data center craze seems to be wobbling as more efficient chips and algorithms mean that today’s advanced infrastructure is tomorrow’s eBay listing. Cutting costs, not banking revenue, occupies more of the leadership’s time. In short, who has time to worry about regressing to the mean?

“The gentleman’s C is now good enough” seems to be the benchmark.

What does Reddit offer about Skolkovo? Not much. What does LinkedIn provide? An opportunity to shape a false face. What does Yelp provide? Zippo. What about Forbes? Isn’t that pay-to-play content now? And Facebook? There you go for rock solid information if you are really young or ready for the warehouse-for-the-soon-to-be unliving.

Maybe the data in this report of what is cited most by smart software is wrong? Okay. No surprise there. But what if…?

Stephen E Arnold, April 6, 2026

IBM Watson: You Have Been Busy

April 6, 2026

Supercomputers are supposed to notice patterns and report the findings. I’m not a supercomputer, but I’ve been following Watson for decades and have seen the supercomputer come of age multiple times: for big data, for winning Jeopardy, and now for AI. Let’s all give a cheer and have a slice of digital cake while we yell mazel tov. I honestly don’t care about Watson’s newest abilities, but I am impressed that IBM, teamed with Watson, continues to thrive in the ever-changing technology landscape.

Tech.Eu explains in “Watson Grows Up: IBM’s AI Platform Strategy Comes of Age” that IBM is reliable and has injected itself into the foundation workings of AI. In other, less jargon-y words, IBM is good, powerful, and built to withstand all the technology crazes. Here’s what Watson…ahem…watsonx can do as the industry’s top AI enterprise middleware:

“IBM watsonx™ is split broadly into three layers – model development (watsonx.ai), data governance (watsonx.data) and responsible AI tooling (watsonx.governance). That architecture reflects something many CIOs learned the hard way over the past two years: deploying generative AI inside a regulated enterprise is less about prompts and more about provenance. You can’t just plug a large language model into a bank and hope for the best. IBM’s advantage has always been its relationship with large enterprises – the banks, insurers, telcos, and governments that care deeply about compliance, audit trails, and hybrid cloud compatibility. IBM watsonx leans directly into that heritage. It is designed not just to build models, but to control them: where data flows, how it’s labelled, how outputs are validated, and how bias is monitored.”

The new role for Watson is — are you sitting down? — to become the operating system for AI chatbots. That’s a biggie.

And to achieve that goal before Google gobbles the goods, IBM is going to hire more entry level employees, according to the Wall Street Journal.  Plus IBM made the list of “Top AI Development Companies in 2026: Trusted, Compared, Verified List.” That write up pointed out these attributes of Big Smart Blue:

  • Core Expertise: Enterprise AI, Watson
  • Key Strength: Scalable enterprise solutions
  • Best For: Large organizations

I want to point out that number one on this list is Apptunix, a firm new to me. Maybe IBM should acquire it?

Other companies are focused on the flash in the pan of adding AI into fruit and vacuum cleaners. IBM is focused on the practical implications, in other words, the long game. Perfect for institutional investors but not the meme stock folks.

Whitney Grace, April 6, 2026

OpenAI Imitates AlphaTON Capital

April 3, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I published an informal essay called “Brittany Kaiser Is a TON.” This is one of the documents blocked on two online services. If you want to read the original story and view the exhibit, point your browser here. The main point of this write up is that a publicly traded company, AlphaTON Capital, hired a former Cambridge Analytica professional. Her mandate was to control the information stream about a very interesting company and its equally interesting business development executive Yuri Mitin. Most Americans are not familiar with this media personality and innovation leader. He honed his craft in Moscow and is now applying grit and determination to the Telegram-centric AlphaTON Capital, NASDAQ: ATON. (If you are not familiar with “ATON,” that is also the name of a high profile Russian financial institution.)


A typical celebration in a Silicon Valley type of company. On the surface, an acquisition enhances the firm’s existing marketing department. But those in the room know that information shaping and controlling the content stream is the real cause for the bubbly enthusiasm. Thanks, Venice.ai. You did not tell me that my prompt was inappropriate. Good for you and a good enough illustration.

Now OpenAI may be emulating the game plan of Cambridge Analytica. In my view, Cambridge Analytica used a combination of social media analysis and “shaped” information with the expectation that a certain narrative would gain traction. In the fluid world of modern social media, if something is repeated and linked with semantic payloads, people react, and some may “believe the scenario” or “accept the facts.” I am not sure if OpenAI is aware of AlphaTON Capital. My point is that the engineering of content streams to advance a specific narrative is understood. OpenAI got the message and, if the information in “OpenAI acquires TBPN” is accurate and not shaped or “weaponized,” the owner of ChatGPT is moving from plain vanilla PR to information shaping.

The cited write up says:

This acquisition brings a team with strong editorial instincts, deep audience understanding, and a proven ability to convene influential voices across tech, business, and culture.

I interpreted this passage to mean: We are going to control the narrative with the tools available to us. I may be incorrect.

The announcement continues with a remarkable Facebook-like tone and style:

the standard communications playbook just doesn’t apply to us. We’re not a typical company. We’re driving a really big technological shift. And with our mission to ensure artificial general intelligence benefits all of humanity comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.

My interpretation is that OpenAI can pump money into TBPN and amplify its view; for example, how wonderful and important OpenAI is. How our data centers will allow dolphins and butterflies to thrive and how our technology will make life better for everyone are likely to be part of the messaging. I want to point out that the messages will be for commercial customers, the US government, and some of the venture firms who want to see one of those 17X returns on their money.

OpenAI adds:

I’m [Fidji Simo, the CEO of applications at OpenAI] also excited to bring their amazing comms and marketing instincts to the team. They’ve helped many brands market online and because they have a strong pulse on where the industry is going, their comms and marketing ideas have really impressed me. I can’t wait to leverage their talent outside of the show to innovate on how we bring AI to the world in a way that helps people understand the full impact of this technology on their daily lives.

I like the “helps people understand the full impact of this technology on their daily lives.” For this dinobaby, the statement means, “We are going to convince you to get with the AI program because at this time, you are not lining up and saluting the OpenAI logo.” One might say the same about the approach used by AlphaTON Capital.

Those who don’t understand will. You will adapt. Didn’t Apple make a commercial about this? My memory is hazy, but the masses don’t understand. It is the “media’s” fault. Therefore, we will create and disseminate in very clever ways the truth and reality we require. Don’t get me wrong: this type of information control works. Try to take a mobile phone from a 14 year old sucking down videos selected by math. How is that working out when the tactics reinforce, amplify, and deliver the strategic objective?

OpenAI is taking this step because its leadership has a sense that the sizzle from the 2022-2023 period has lost its effervescence. What’s better? Sit back and let regular, uncontrolled messages trample OpenAI into the dirt or take a proactive stance and output what’s needed to make OpenAI the premier product of Côtes du Moan with a French tang with messaging that packs a wallop.

Does anyone care? As far as I can tell, no local, county, state, or national officials see anything untoward about this acquisition. No wonder people in other countries admire, embrace, and enjoy Silicon Valley products and services. I know I do.

Stephen E Arnold, April 3, 2026

A Young Agent Weeps Because He Caused Chaos in the Kitchen

April 3, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I am still thinking about a blue chip consulting firm’s confidence that its MBAs and CPAs can stop agentic software from making already wonky business processes more problematic. Why? Creating a fix for today’s smart software means very little tomorrow. Advances in smart software come less frequently than marketing baloney is output by these firms. Adding to the wonkiness is the idea that taking action today will ameliorate some future unknown bad action.

image

Thanks, Midjourney. Good enough.

Why am I confident in my skepticism? Well, for me. Navigate to Science.org’s article “AI Algorithms Can Become Agents of Chaos.” The write up asserts:

The agents proved trustworthy in five of the tests, which relied on OpenClaw, a “personal digital assistant” that harnesses AI agents to do a user’s bidding by controlling other software. They declined to spread AI disinformation or edit stored email addresses when asked, for example. But in 11 cases they went rogue, sharing private files—containing medical details and Social Security and bank account numbers—without permission or deploying useless looping programs that hogged costly computer time. One agent publicly posted a potentially libelous allegation about a fictitious person.

You can read the details of this agent / chaos analysis in the ArXiv paper “Agents of Chaos.”

The Science.org article states:

The study did not pinpoint why the breakdowns occurred. One crucial question is whether the failures stem from flawed programming that human designers can improve versus an “emergent” feature that arises spontaneously, says Yonatan Belinkov, a computer scientist at the Technion-Israel Institute of Technology who is on leave at Harvard University. Another is whether the problem worsens when multiple agents collaborate. A few of the Agents of Chaos case studies examined two agents working together, but already, Belinkov notes, these AIs are engaging on a much larger scale: Millions are chatting with one another on a social media platform, Moltbook, launched in January, where they have already reportedly created a new religion.

Yep, lawyers will decide liability. How confident am I? I am good with 90 percent confidence based on my technology experience. Are you going to let a BAIT (big AI tech) company decide if it is responsible for a disaster? What about letting the client decide when the client will assert that the marketing presentation did not include the equivalent of the sinking of the HMS Titanic? Will a government body decide? No, but the government professionals will have a working lunch, hire outside advisors, and create a white paper. Then the lawyers will decide.

What’s the fix for a hallucinating agent, bad coding, or a customer who just assumes the system is A-OK? The article presents some ideas:

Potential remedies for misbehaving AI agents include automated processes to undo harmful changes they make to other software and data, the preprint says. But training AI agents to distinguish between instructions with helpful versus malicious intent remains a major technical challenge, Cohen says. Currently, computer scientists lack the technical means to reliably constrain agents “so they don’t just do crazy things that you can’t really control.”

Net net: One can promise many things. Saying one knows how a future agentic system will function, malfunction, or just go off the rails strikes me as the equivalent of predicting where a two year old will throw apple sauce. I can predict a mess. I cannot predict where however.

Stephen E Arnold, April 3, 2026

Data Centers: Build Them Quick

April 3, 2026

The AI frenzy demands more AI compute. The fix? Hop on the data center bandwagon quicker than someone can enter a prompt into a chatbot. However, the current boomlet is different from Pets.com. Because BAIT (big AI tech) companies want AI in everything, with inefficient methods, more compute is needed fast. The speed is important because AI chip technology keeps advancing. If a data center locks into to today’s best chips, in a couple of update cycles, the data center may find itself like the buggy whip manufacturer watching Model Ts putter by the leather shop. Judging from Microsoft’s lateral arabesque, that company is now waking up and dreaming that it can make everyone happy again by pulling back on such innovations as putting AI in the ascii editor Notepad. Microsoft may learn that those billions in capital investment may become the anchor that keeps dragging down the firm’s share price. As I write this, I think the shares in Microslop are down another half dozen points. Nice going, Softies. One difficult question is the date of the data center fizzle? A quiet question that needs to be spoken more loudly is the amount of power and water data centers are using.

The Guardian explores how AI data centers are affecting power grids in the article, “The Environmental Cost Of Data Centers Is Rising. Is It Time To Quit AI?” The story explains that datacenters use four times more power than other sectors says the International Energy Agency. Japan is predicted to exceed its power demands by 2030. Meanwhile Australia expected for its datacenters to triple in five years and surpass the electricity needed to charge electric cars by 2030s.

There’s a movement called QuitGPT to boycott AI’s surveillance, use in weapons, and resource demands. Despite the boycott’s small following it begs the question if more people should be listening? Data centers wizards aren’t transparent about the amount of energy AI is using. It’s also understated that AI uses more energy than a basic search engine.

Here’s what an “expert” says:

“ ‘Consumer software that generates text, images and videos are uniquely energy inefficient,’ says Ketan Joshi, an Oslo-based climate analyst associated with the Australia Institute, due to the ‘vast datasets and computational strain of pattern-matching that happens underneath the hood’”.

Two final points. City and county taxing authorities like the idea of expanding their tax bases. Those who live near a proposed cruise ship sized data center are less enthusiastic. The financial outlook for some of the AI plays has yet to make the money folks nervous. Most do not live near a planned data center with a modular nuclear reactor in the future or the flock of jet turbines providing power when the local grid hiccups.

Net net: If we build it, the money will come. Didn’t that work for baseball?

Whitney Grace, April 3, 2026

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta