Titanic AI Goes Round and Round: Are You Dizzy Yet?
September 23, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Nvidia to Invest Up to $100 Billion in OpenAI, Linking Two Artificial Intelligence Titans.” The headline makes an important point. The words “big” and “huge” are not sufficiently monumental. Now we have “titans.” As you may know, a “titan” is a person of great power. I will leave out the Greek mythology. I do want to point out that “titans” were the kiddies produced by Uranus and Gaea. Titans were big dogs until Zeus and a few other Olympian gods forced them to live in what is now Newark, New Jersey.
An AI-generated diagram of a simple circular deal. Regulators and IRS professionals enjoy challenges. What are those people doing to make the process work? Thanks, MidJourney.com. Good enough.
The write up from the outfit that is into trust explains how two “titans” are now intertwined. No, I won’t bring up the issue of incestuous behavior. Let’s stick to the “real” news story:
Nvidia will invest up to $100 billion in OpenAI and supply it with data center chips… Nvidia will start investing in OpenAI for non-voting shares once the deal is finalized, then OpenAI can use the cash to buy Nvidia’s chips.
I am not a finance, tax, or money wizard. On the surface, it seems to me that I give a person some money, and that person hands the money back to me as payment for my products and services. I may have this wrong, but I thought a similar arrangement landed one of the once-famous enterprise search companies in a world of hurt and a member of the firm’s leadership in prison.
Reuters includes this statement:
Analysts said the deal was positive for Nvidia but also voiced concerns about whether some of Nvidia’s investment dollars might be coming back to it in the form of chip purchases. "On the one hand this helps OpenAI deliver on what are some very aspirational goals for compute infrastructure, and helps Nvidia ensure that that stuff gets built. On the other hand the ‘circular’ concerns have been raised in the past, and this will fuel them further," said Bernstein analyst Stacy Rasgon.
“Circular” — That’s an interesting word. Some of the financial transactions my team and I examined during our Telegram (the messaging outfit) research used similar methods. One of the organizations apparently aware of “circular” transactions was Huione Guarantee. No big deal, but the company has been in legal hot water for some of its circular functions. Will OpenAI and Nvidia experience similar problems? I don’t know, but the circular thing means that money goes round and round, and at each touch point magical number things can occur. Money deals are rarely hallucinatory, unlike AI outputs and semiconductor marketing.
What’s this mean to companies eager to compete in smart software and Fancy Dan chips? In my opinion, I hear my inner voice saying, “You may be behind a great big circular curve. Better luck next time.”
Stephen E Arnold, September 23, 2025
Pavel Durov Was Arrested for Online Stubbornness: Will This Happen in the US?
September 23, 2025
Written by an unteachable dinobaby. Live with it.
In August 2024, the French judiciary arrested Pavel Durov, the founder of VKontakte and then Telegram, a robust but non-AI platform. Why? The French government identified more than a dozen transgressions by Pavel Durov, who holds French citizenship as a special tech bro. Now he has to report to his French mom every two weeks or experience more interesting French legal action. Is this an example of a failure to communicate?
Will the US take similar steps toward US companies? I raise the question because I read an allegedly accurate “real” news write up called “Anthropic Irks White House with Limits on Models’ Use.” (Like many useful online resources, this story requires the curious to subscribe, pay, and get on a marketing list.) These “models,” of course, are the zeros and ones which comprise the next big thing in technology: artificial intelligence.
The write up states:
Anthropic is in the midst of a splashy media tour in Washington, but its refusal to allow its models to be used for some law enforcement purposes has deepened hostility to the company inside the Trump administration…
The write up says as actual factual:
Anthropic recently declined requests by contractors working with federal law enforcement agencies because the company refuses to make an exception allowing its AI tools to be used for some tasks, including surveillance of US citizens…
I found the write up interesting. If France can take action against an upstanding citizen like Pavel Durov, what about the tech folks at Anthropic or other outfits? These firms allegedly have useful data and the tools to answer questions. I recently fed the output of one AI system (ChatGPT) into another AI system (Perplexity), and I learned that Perplexity did a good job of identifying the weirdness in the ChatGPT output. Would these systems provide similar insights into prompt patterns on certain topics, for instance, the charges against Pavel Durov or data obtained by people looking for information about nuclear fuel cask shipments?
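The cross-check is easy to reproduce if you want to try it. A minimal sketch, assuming current model names and Perplexity’s OpenAI-compatible endpoint (both are assumptions that drift over time; verify before use):

```python
# Minimal sketch: have one AI system critique another's output.
import os
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
pplx_client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",  # Perplexity's OpenAI-compatible API
)

draft = openai_client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user",
               "content": "Summarize the 2024 French charges against Pavel Durov."}],
).choices[0].message.content

critique = pplx_client.chat.completions.create(
    model="sonar",  # assumed model name
    messages=[{"role": "user",
               "content": f"Identify factual errors or weirdness in this text:\n\n{draft}"}],
).choices[0].message.content

print(critique)
```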
With France’s action, is the door open to take direct action against people and their organizations which cooperate reluctantly or not at all when a government official makes a request?
I don’t have an answer. Dinobabies rarely do, and if they do have a response, no one pays attention to these beasties. However, some of those wizards at AI outfits might want to ponder the question about cooperation with a government request.
Stephen E Arnold, September 23, 2025
Google Tactic: Blame Others for an Issue
September 23, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
The shift from irrelevant, SEO-corrupted results to the hallucinating world of Google smart software is allegedly affecting traffic to some sites. Google is on top of the situation. Its tireless wizards and even more tireless smart software are keeping track of ads. Oh, sorry, I mean Web site traffic. With AI making everything better for users, the complaints about declining referral traffic are annoying.
Google has an answer. “YouTube Addresses Lower View Counts Which Seem to Be Caused by Ad Blockers” reports:
Since mid-August, many YouTubers have noticed their view counts are considerably lower than they were before, in some cases with very drastic drops. The reason for the drop, though, has been shrouded in mystery for many creators.
Then adds:
The most likely explanation seems to be that YouTube is not counting views properly for users with an ad blocker enabled, another step in the platform’s continued war on ad blockers.
Yeah, maybe.
My view is that Google is steering traffic across its platform to extract as much revenue as possible. The model is the one used by olive oil producers. The good stuff is golden, but the processes to squeeze the juice produce results that are unsatisfactory to some. The broken recommendations system, the smart summaries in search, and the other quantumly supreme innovations have fouled the gears in the Google advertising machine. That means Google has to up the amount of money it obtains by squeezing harder.
How does one fix something that is the equivalent of an electric generator that once whizzed at Niagara Falls? That’s easy: One wraps it in something better, faster, and cheaper. Oh, I forgot: easier to do. Googlers are busy people. Easy is efficient if it is cheap or produces additional revenue. The better outcome is to do both: Lower costs and boost revenue.
Google is going to have the same experience reinventing itself that IBM and Intel are experiencing. You may think I am just a bonkers dinobaby. Yeah, maybe. But, maybe not.
Stephen E Arnold, September 23, 2025
UAE: Will It Become U-AI?
September 23, 2025
Written by an unteachable dinobaby. Live with it.
UAE is moving forward in smart software, not just crypto. “Industry Leading AI Reasoning for All” reports that the Institute of Foundation Models has “industry leading AI reasoning for all.” The news item reports:
Built on six pillars of innovation, K2 Think represents a new class of reasoning model. It employs long chain-of-thought supervised fine-tuning to strengthen logical depth, followed by reinforcement learning with verifiable rewards to sharpen accuracy on hard problems. Agentic planning allows the model to decompose complex challenges before reasoning through them, while test-time scaling techniques further boost adaptability.
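A quick aside: “reinforcement learning with verifiable rewards” sounds grand, but the core trick is small. Instead of a learned reward model, the reward is a programmatic check against a known answer. Here is a minimal sketch in Python; the \boxed{} final-answer convention is a common one in math-reasoning training, not something the announcement specifies:

```python
import re

def verifiable_reward(model_output: str, reference_answer: str) -> float:
    """Reward 1.0 only if the model's final \\boxed{} answer matches the
    reference exactly; anything else earns 0.0. No learned judge involved."""
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    if match is None:
        return 0.0  # no final answer found, so no reward
    return 1.0 if match.group(1).strip() == reference_answer.strip() else 0.0
```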
I am not sure what the six pillars of innovation are, particularly after looking at some of the UAE’s crypto plays, but there is more. Here’s another passage which suggests that Intel and Nvidia may not be in the k2think.ai technology road map:
K2 Think will soon be available on Cerebras’ wafer-scale, inference-optimized compute platform, enabling researchers and innovators worldwide to push the boundaries of reasoning performance at lightning-fast speed. With speculative decoding optimized for Cerebras hardware, K2 Think will achieve unprecedented throughput of 2,000 tokens per second, making it both one of the fastest and most efficient reasoning systems in existence.
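If “speculative decoding” is new to you, the idea is simple: a small, cheap draft model guesses several tokens ahead, and the big model verifies the guesses in one pass, keeping the prefix it agrees with. Below is a toy sketch of the greedy version, under the assumption that both models are simple callables; this illustrates the general technique, not Cerebras’ implementation:

```python
def speculative_decode(target, draft, prompt, k=4, max_new=32):
    """target and draft map a token list to the next token id (greedy)."""
    tokens = list(prompt)
    goal = len(prompt) + max_new
    while len(tokens) < goal:
        # 1. The cheap draft model proposes k tokens autoregressively.
        proposal, ctx = [], list(tokens)
        for _ in range(k):
            nxt = draft(ctx)
            proposal.append(nxt)
            ctx.append(nxt)
        # 2. The expensive target model checks the proposals. In a real
        #    system all k positions are verified in one batched forward
        #    pass, which is where the throughput win comes from.
        accepted, ctx = [], list(tokens)
        for nxt in proposal:
            if target(ctx) != nxt:
                break
            accepted.append(nxt)
            ctx.append(nxt)
        tokens.extend(accepted)
        # 3. On the first mismatch, take one token from the target model
        #    so every loop iteration makes progress.
        if len(accepted) < k:
            tokens.append(target(tokens))
    return tokens[:goal]
```

The output matches plain greedy decoding from the target model; the draft model only changes how fast you get there.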
If you want to kick its tires (tAIres?), the system is available at k2think.ai and on Hugging Face. Oh, the write up quotes two people with interesting names: Eric Xing and Peng Xiao.
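And if tire kicking means running it yourself, a minimal sketch with Hugging Face transformers follows. The repo id is my guess from the announcement, so verify it on Hugging Face before copying anything:

```python
# Minimal sketch: load and query K2 Think via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/K2-Think"  # assumed repo id; check Hugging Face for the real one
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Prove that the sum of two odd integers is even."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```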
Stephen E Arnold, September 23, 2025
Can Meta Buy AI Innovation and Functioning Demos?
September 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
That “move fast and break things” approach has done a bang-up job. Mark Zuckerberg, famed for making friends in Hawaii, demonstrated how “think and it becomes real” works in the real world. “Bad Luck for Zuckerberg: Why Meta Connect’s Live Demos Flopped” reported:
two of Meta’s live demos epically failed. (A third live demo took some time but eventually worked.) During the event, CEO Mark Zuckerberg blamed it on the Wi-Fi connection.
Yep, blame the Wi-Fi. Bad Wi-Fi, not bad management or bad planning or bad prepping or bad decision making. No, it is bad Wi-Fi. Okay, I understand: A modern management method in action at Meta, Facebook, WhatsApp, and Instagram. Or, bad luck. No, bad Wi-Fi.
Thanks Venice.ai. You captured the baffled look on the innovator’s face when I asked Ron K., “Where did you get the idea for the hair dryer, the paper bag, and popcorn?”
Let’s think about another management decision. Navigate to the weirdly named write up “Meta Gave Millions to New AI Project Poaches, Now It Has a Problem.” That write up reports that Meta has paid some employees as much as $300 million to work on AI. The write up adds:
Such disparities appear to have unsettled longer-serving Meta staff. Employees were said to be lobbying for higher pay or transfers into the prized AI lab. One individual, despite receiving a grant worth millions, reportedly quit after concluding that newcomers were earning multiples more…
My recollection is that there is research suggesting pay is important, but other factors enter into a decision to go to work for a particular organization. I left the blue chip consulting game decades ago, but I recall my boss (Dr. William P. Sommers) explaining to me that paying for innovation is hoped to work but is not guaranteed. I saw that first hand when I visited the firm’s research and development unit in a rust belt city.
This outfit was cranking out innovations still able to wow people. A good example is the hot air popcorn pumper. Let that puppy produce popcorn for a group of six-year-olds at a birthday party, and I know it will attract some attention.
Here’s the point of the story. The fellow who came up with the idea for this innovation was an engineer, but not a top dog at the time. His wife organized a birthday party for a dozen six- and seven-year-olds to celebrate their daughter’s birthday. But just as the girls arrived, the wife had to leave for a family emergency. As his wife swept out the door, she said, “Find some way to keep them entertained.”
The hapless engineer looked at the group of young girls, and his daughter asked, “Daddy, will you make some popcorn?” Stress overwhelmed the pragmatic engineer. He mumbled, “Okay.” He went into the kitchen and found the popcorn. Despite his engineering degree, he did not know where the popcorn pan was. The noise from the girls rose a notch.
He poked his head from the kitchen and said, “Open your gifts. Be there in a minute.”
Adrenaline pumping, he grabbed the bag of popcorn, took a brown paper sack from the counter, and dashed into the bathroom. He poked a hole in the paper bag. He dumped in a handful of popcorn. He stuck the nozzle of the hair dryer through the hole and turned it on. Ninety seconds later, the kernels began popping.
He went into the family room and said, “Let’s make popcorn in the kitchen.” He turned on the hair dryer and popped corn. The kids were enthralled. He let his daughter handle the hair dryer. The other kids scooped out the popcorn and added more kernels. Soon popcorn was everywhere.
The party was a success even though his wife was annoyed at the mess he and the girls made.
I asked the engineer, “Where did you get the idea to use a hair dryer and a paper bag?”
He looked at me and said, “I have no idea.”
That idea became a multi-million dollar product.
Money would not have caused the engineer to “innovate.”
Maybe Mr. Zuckerberg, once he has resolved his demo problems, should ponder whether the assumption that one can pay a person to innovate is just more “think it and it will happen” digital baloney.
Stephen E Arnold, September 22, 2025
AI and the Media: AI Is the Answer for Some Outfits
September 22, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
I spotted a news item about Russia’s Ministry of Defense. The estimable defensive outfit now has an AI-generated news program. Here’s the paywalled source link. I haven’t seen it yet, but the statistics for viewership and the Telegram comments will be interesting to observe. Gee, do you think that bright Russian developers have found a way to steer the output to represent the political views of the Russian government? Did you say, “No”? Congratulations, you may qualify for a visa to homestead in Norilsk. Check it out on Google Maps.
Back in Germany, Axel Springer SE is definitely into AI as well. I noted that its Business Insider will allow its real and allegedly human journalists to use AI to write “drafts” of news stories. Here’s the paywalled source link. Hey, Axel, aren’t your developers able to pipe the AI output into slamma jamma banana and produce complete TikTok-type news videos via AI? Russia’s Ministry of Defense has this angle figured out. YouTube may be in the MoD’s plans. One has to fund that “defensive” special operation in Ukraine somehow.
Several observations:
- Steering or weaponizing large language models is a feature of the systems. Can one trust AI-generated news? Can one trust any AI output from a large organization? You may. I don’t.
- The economics of producing Walter Cronkite-type news make “real” news expensive. Therefore, say hello to AI-written news and AI-delivered news. GenX and GenY will love this approach to information, in my opinion.
- How will government regulators respond to AI news? In Russia, government controlled AI news will get a green light. Elsewhere, the shift may be slightly more contentious.
Net net: AI is great.
Stephen E Arnold, September 22, 2025
OpenAI Says Hallucinations Are Here to Stay?
September 22, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
I read “OpenAI Admits AI Hallucinations Are Mathematically Inevitable, Not Just Engineering Flaws.” I am not sure the information in the write up will make people happy, especially those getting smart software whether they want it or not. Even less thrilled will be the big outfits implementing AI with success ranging from five percent to 90 percent hoorahs. Close enough for horseshoes works for putting shoes on equines. I am not sure how that will work out for medical and financial applications. I won’t comment on the kinetic applications of smart software, but hallucination may not be a plus in some situations.
The write up begins with what may make some people — how shall I say it? — nervous, frightened, squeamish. I quote:
… OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.
I quite liked the word always. It is obviously a statement that must persist for eternity, which, to a dinobaby like me, is quite a long time. I found the distinction between plausible and false delicious. The burden of figuring out what is “correct,” “wrong,” slightly wonky, and false shifts to the user of smart software. But there is another word that struck me as significant: Perfect. Now that is another logical tar pit.
After this, I am not sure where the write up is going. I noted this passage:
OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.
There you go. The fundamental method in use today and believed to be the next big thing is always going to produce incorrect information. Always.
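For the curious, my loose, dinobaby-grade reading of the underlying paper is that it reduces generating valid text to an “Is-It-Valid” classification problem and then bounds the generative error from below, roughly (ignoring the calibration terms the authors track carefully):

\[
\text{err}_{\text{generation}} \;\gtrsim\; 2 \cdot \text{err}_{\text{Is-It-Valid}}
\]

If no classifier can perfectly separate valid outputs from plausible junk, then no generator built on the same statistics can either. Hence the “always.”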
The Computerworld story points to the “research paper.” Computerworld points out that industry evaluations of smart software are slippery fish. Computerworld reminds its readers that “enterprises must adapt strategies.” (I would imagine. If smart software gets a chemical formula wrong or outputs information that leads to a substantial loss of revenue, problems might arise, might they not?) Computerworld concludes with a statement that left me baffled; to wit: “Market already adapting.”
Okay.
I wonder how many Computerworld readers will consume this story standing next to a burning pile of cash tossed into the cost black holes of smart software.
Stephen E Arnold, September 22, 2025
Google Emits a Tiny Signal: Is It Stress or Just a Brilliant Management Move?
September 22, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
Google is chock full of technical and management wizards. Anything the firm does is a peak action. With the Google doing so many forward-leaning things each day, it is possible for certain staggeringly insightful moments to be lost in the blitz of scintillating breakthroughs.
Tom’s Hardware spotted one sparkling decider diamond. “Google Terminates 200 AI Contractors — Ramp-Down Blamed, But Workers Claim Questions Over Pay and Job Insecurity Are the Real Reason Behind Layoffs” says:
Some believe they were let go because of complaints over working conditions and compensation.
Does Google have a cancel culture?
The write up notes:
For the first half of 2025, AI growth was everywhere, and all the major companies were spending big to try to get ahead. Meta was offering individuals hundreds of millions to join its ranks … But while announcements of enormous industry deals continue, there’s also a lot of talk of contraction, particularly when it comes to lower-level positions like data annotation and AI response rating.
The individuals who are now free to find their future elsewhere have some ideas about why they were deleted from Google and promoted to Xooglers (former Google employees). The write up reports:
… many of them [the terminated with extreme Googliness] believe that it is their complaints over compensation that lead to them being laid off…. [Some] workers “attempted to unionize” earlier in the year to no avail. According to the report, “they [the future finders] allege that the company has retaliated against them.” … For its part, Google said in a statement that GlobalLogic is responsible for the working conditions of its employees.
See the brilliance of the management move. Google blames another outfit. Google reduces costs. Google makes it clear that grousing is not a path to the Google leadership enclave. Google AI is unscathed.
Google is A Number One in management in my opinion.
Stephen E Arnold, September 22, 2025
AI Poker: China Has Three Aces. Google, Your Play
September 19, 2025
No smart software involved. Just a dinobaby’s work.
TV poker seems to be a thing on free or low cost US television streams. A group of people squint, sigh, and fiddle as each tries to win the big pile of cash. Another poker game is underway in the “next big thing” of smart software or AI.
Google released the Nano Banana image generator. Social media hummed. Okay, that looks like a winning hand. But another player dropped some coin on the table, squinted at the Google, and smirked just a tiny bit.
“ByteDance Unveils New AI Image Model to Rival DeepMind’s Nano Banana” explains the poker play this way:
TikTok-owner ByteDance has launched its latest image generation artificial intelligence tool Seedream 4.0, which it said surpasses Google DeepMind’s viral “Nano Banana” AI image editor across several key indicators.
While the cute jargon may make the poker hand seem friendly, there is menace behind the terminology. The write up states:
ByteDance claims that Seedream 4.0 beat Gemini 2.5 Flash Image for image generation and editing on its internal evaluation benchmark MagicBench, with stronger performance in prompt adherence, alignment and aesthetics.
Okay: prompt adherence, alignment (what the heck is that?), and aesthetics. That’s three aces, right?
Who has the cost advantage? The write up says:
On Fal.ai, a global generative media hosting platform, Seedream 4.0 costs US$0.03 per generated image, while Gemini 2.5 Flash Image is priced at US$0.039.
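Run the arithmetic out: at one million generated images, that is $30,000 for Seedream 4.0 versus $39,000 for Gemini 2.5 Flash Image, a saving of roughly 23 percent. Fractions of a cent per image add up when the furnace runs around the clock.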
I thought in poker one raised the stakes. Well, in AI poker one lowers the price in order to raise the stakes. These players are betting the money burned in the AI furnace will be “won” as the game progresses. Will AI poker turn up on the US free TV services? Probably. Burning cash makes for wonderful viewing, especially for those who are writing the checks.
What’s China’s view of this type of gambling? The write up says:
The state has signaled its support for AI-generated content by recognizing their copyright in late 2023, but has also recently introduced mandatory labelling of such content.
The game is not over. (Am I the only person who thinks that the name “nana banana” would have been better than “nano banana”?)
Stephen E Arnold, September 19, 2025
AI: The Tool for Humanity. Do Not Laugh.
September 19, 2025
Both sides of the news media are lamenting that AI is automating jobs and putting humans out of work. Conservatives and liberals remain separated on how and why AI is “stealing” jobs, but the fear remains that humans are headed for obsolescence… again. Humans have faced this issue since the start of human ingenuity. The key is to adapt and realize what AI truly is. Elizabeth Mathew of Signoz.io wrote: “I Built An MCP Server For Observability. This Is My Unhyped Take.”
If you’re unfamiliar with MCP, it is an open standard that defines how LLMs or AI agents (e.g., Claude) uniformly connect to external tools and data sources. Like a USB-C port, an MCP server can be decoupled from one agent and reused with any other.
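To make that concrete, here is a minimal sketch of an MCP server using the official Python SDK (the “mcp” package). The tool and its output are hypothetical stand-ins, not Signoz’s actual API:

```python
# Minimal sketch of an MCP server exposing one observability-flavored tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("observability-demo")

@mcp.tool()
def error_rate(service: str, minutes: int = 15) -> str:
    """Return a (stubbed) error-rate summary for a service."""
    # A real server would query a metrics backend here; this one pretends.
    return f"{service}: 0.4% errors over the last {minutes} minutes"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so an agent such as Claude can attach
```

Any MCP-capable agent can discover and call error_rate without bespoke glue code, which is the USB-C point above.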
After explaining some issues with MCP servers and why they are “schizophrenic,” Mathew concludes with this:
“Ultimately, MCP-powered agents are not bringing us closer to automated problem-solving. They are giving us sophisticated hypothesis generators. They excel at exploring the known, but the unknown remains the domain of the human engineer. We’re not building an automated SRE; we’re building a co-pilot that can brainstorm, but can’t yet reason. And recognizing that distinction is the key to using these tools effectively without falling for the hype.”
She might be right from an optimistic and expert perspective, but that doesn’t prevent CEOs from implementing AI to replace their workforces, or young adults from being steered away from coding careers. Recent college graduates, do you have a job, any job?
Whitney Grace, September 19, 2025