The Skill for the AI World As Pronounced by the Google

September 24, 2025

Written by an unteachable dinobaby. Live with it.

Worried about a job in the future? The next minute, day, decade? The secret of constant employment, big bucks, and even larger volumes of happiness has been revealed. “Google’s Top AI Scientist Says Learning How to Learn Will Be Next Generation’s Most Needed Skill” says:

the most important skill for the next generation will be “learning how to learn” to keep pace with change as Artificial Intelligence transforms education and the workplace.

Well, that’s the secret: Learn how to learn. Why? Surviving in the chaos of an outfit like Google means one has to learn. What should one learn? Well, the write up does not provide that bit of wisdom. I assume a Google search will provide the answer in a succinct AI-generated note, right?

The write up presents this chunk of wisdom from a person keen on getting lots of AI people aware of Google’s AI prowess:

The neuroscientist and former chess prodigy said artificial general intelligence—a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can—could arrive within a decade…. Hassabis emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities.

This means reading poetry, preferably Greek poetry. The Google super wizard’s father is “Greek Cypriot.” (Cyprus is home base for a number of interesting financial operations and the odd intelware outfit. Which part of Cyprus is which? Google Maps may or may not answer this question. Ask your Google Pixel smart phone to avoid an unpleasant mix up.)

The write up adds this courteous note:

[Greek Prime Minister Kyriakos] Mitsotakis rescheduled the Google Big Brain to “avoid conflicting with the European basketball championship semifinal between Greece and Turkey. Greece later lost the game 94-68.”

Will lifelong learning skills help the Greek basketball team win against a formidable team like Turkey?

Sure, if Google says it, you know it is true just like eating rocks or gluing cheese on pizza. Learn now.

Stephen E Arnold, September 24, 2025

Titanic AI Goes Round and Round: Are You Dizzy Yet?

September 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “Nvidia to Invest Up to $100 Billion in OpenAI, Linking Two Artificial Intelligence Titans.” The headline makes an important point. The words “big” and “huge” are not sufficiently monumental. Now we have “titans.” As you may know, a “titan” is a person of great power. I will leave out the Greek mythology. I do want to point out that “titans” were the kiddies produced by Uranus and Gaea. Titans were big dogs until Zeus and a few other Olympian gods forced them to live in what is now Newark, New Jersey.


An AI-generated diagram of a simple circular deal. Regulators and IRS professionals enjoy challenges. What are those people doing to make the process work? Thanks, MidJourney.com. Good enough.

The write up from the outfit that is into trust explains how two “titans” are now intertwined. No, I won’t bring up the issue of incestuous behavior. Let’s stick to the “real” news story:

Nvidia will invest up to $100 billion in OpenAI and supply it with data center chips… Nvidia will start investing in OpenAI for non-voting shares once the deal is finalized, then OpenAI can use the cash to buy Nvidia’s chips.

I am not a finance, tax, or money wizard. On the surface, it seems to me that I loan a person some money and then that person gives me the money back in exchange for products and services. I may have this wrong, but I thought a similar arrangement landed one of the once-famous enterprise search companies in a world of hurt and a member of the firm’s leadership in prison.

Reuters includes this statement:

Analysts said the deal was positive for Nvidia but also voiced concerns about whether some of Nvidia’s investment dollars might be coming back to it in the form of chip purchases. “On the one hand this helps OpenAI deliver on what are some very aspirational goals for compute infrastructure, and helps Nvidia ensure that that stuff gets built. On the other hand the ‘circular’ concerns have been raised in the past, and this will fuel them further,” said Bernstein analyst Stacy Rasgon.

“Circular” is an interesting word. Some of the financial transactions my team and I examined during our Telegram (the messaging outfit) research used similar methods. One of the organizations apparently aware of “circular” transactions was Huione Guarantee. No big deal, but the company has been in legal hot water for some of its circular functions. Will OpenAI and Nvidia experience similar problems? I don’t know, but the circular thing means that money goes round and round. In circular transactions, at each touch point magical number things can occur. Money deals are rarely hallucinatory, unlike AI outputs and semiconductor marketing.
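
For the arithmetic-minded, here is a toy ledger of why “circular” makes analysts twitch. The $100 billion is the write up’s number; the portion flowing back as chip purchases is my assumption, not a disclosed deal term:

```python
# A toy ledger of the deal's loop (a sketch, not an accounting opinion).
# The $100 billion figure is the article's; the share of cash flowing
# back to Nvidia as chip purchases is my assumption, not a deal term.

investment = 100_000_000_000   # Nvidia's pledged investment in OpenAI (USD)
chip_spend_share = 0.8         # assumed fraction OpenAI spends on Nvidia chips

nvidia_cash_out = investment
openai_chip_purchases = investment * chip_spend_share
nvidia_revenue_booked = openai_chip_purchases  # the same dollars, now "revenue"
nvidia_equity_stake = investment               # and also a balance sheet asset

print(f"Nvidia cash out:       ${nvidia_cash_out:,}")
print(f"OpenAI chip purchases: ${openai_chip_purchases:,.0f}")
print(f"Nvidia revenue booked: ${nvidia_revenue_booked:,.0f}")
# One pool of dollars appears twice: as an equity stake and as chip revenue.
```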

What’s this mean to companies eager to compete in smart software and Fancy Dan chips? In my opinion, I hear my inner voice saying, “You may be behind a great big circular curve. Better luck next time.”

Stephen E Arnold, September 23, 2025

Pavel Durov Was Arrested for Online Stubbornness: Will This Happen in the US?

September 23, 2025

Written by an unteachable dinobaby. Live with it.

In August 2024, the French judiciary arrested Pavel Durov, the founder of VKontakte and then Telegram, a robust but non-AI platform. Why? The French government identified more than a dozen transgressions by Pavel Durov, who holds French citizenship as a special tech bro. Now he has to report to his French mom every two weeks or experience more interesting French legal action. Is this an example of a failure to communicate?

Will the US take similar steps toward US companies? I raise the question because I read an allegedly accurate “real” news write up called “Anthropic Irks White House with Limits on Models’ Use.” (Like many useful online resources, this story requires the curious to subscribe, pay, and get on a marketing list.) These “models,” of course, are the zeros and ones which comprise the next big thing in technology: artificial intelligence.

The write up states:

Anthropic is in the midst of a splashy media tour in Washington, but its refusal to allow its models to be used for some law enforcement purposes has deepened hostility to the company inside the Trump administration…

The write up says as actual factual:

Anthropic recently declined requests by contractors working with federal law enforcement agencies because the company refuses to make an exception allowing its AI tools to be used for some tasks, including surveillance of US citizens…

I found the write up interesting. If France can take action against an upstanding citizen like Pavel Durov, what about the tech folks at Anthropic or other outfits? These firms allegedly have useful data and the tools to answer questions. I recently fed the output of one AI system (ChatGPT) into another AI system (Perplexity), and I learned that Perplexity did a good job of identifying the weirdness in the ChatGPT output. Would these systems provide similar insights into prompt patterns on certain topics; for instance, the charges against Pavel Durov or data obtained by people looking for information about nuclear fuel cask shipments?
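
Anyone who wants to replicate my little experiment can do so with a few lines of Python. This sketch assumes the OpenAI SDK and Perplexity’s OpenAI-compatible endpoint; the model names are my picks and may change:

```python
# A minimal sketch of the experiment: one model drafts, another critiques.
# Assumes the OpenAI Python SDK and Perplexity's OpenAI-compatible API;
# the model names ("gpt-4o", "sonar") are my picks and may change.
import os
from openai import OpenAI

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
pplx_client = OpenAI(api_key=os.environ["PERPLEXITY_API_KEY"],
                     base_url="https://api.perplexity.ai")

# Step 1: get a draft answer from ChatGPT's model family.
draft = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Summarize the French charges against Pavel Durov."}],
).choices[0].message.content

# Step 2: hand the draft to Perplexity and ask it to find the weirdness.
critique = pplx_client.chat.completions.create(
    model="sonar",
    messages=[{"role": "user",
               "content": f"Identify errors or oddities in this text:\n\n{draft}"}],
).choices[0].message.content

print(critique)
```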

With France’s action, is the door open to take direct action against people and their organizations which cooperate reluctantly or not at all when a government official makes a request?

I don’t have an answer. Dinobabies rarely do, and if they do have a response, no one pays attention to these beasties. However, some of those wizards at AI outfits might want to ponder the question about cooperation with a government request.

Stephen E Arnold, September 23, 2025

UAE: Will It Become U-AI?

September 23, 2025

Written by an unteachable dinobaby. Live with it.

UAE is moving forward in smart software, not just crypto. “Industry Leading AI Reasoning for All” reports that the Institute of Foundation Models has “industry leading AI reasoning for all.” The news item reports:

Built on six pillars of innovation, K2 Think represents a new class of reasoning model. It employs long chain-of-thought supervised fine-tuning to strengthen logical depth, followed by reinforcement learning with verifiable rewards to sharpen accuracy on hard problems. Agentic planning allows the model to decompose complex challenges before reasoning through them, while test-time scaling techniques further boost adaptability. 

I am not sure what the six pillars of innovation are, particularly after looking at some of the UAE’s crypto plays, but there is more. Here’s another passage which suggests that Intel and Nvidia may not be in the k2think.ai technology road map:

K2 Think will soon be available on Cerebras’ wafer-scale, inference-optimized compute platform, enabling researchers and innovators worldwide to push the boundaries of reasoning performance at lightning-fast speed. With speculative decoding optimized for Cerebras hardware, K2 Think will achieve unprecedented throughput of 2,000 tokens per second, making it both one of the fastest and most efficient reasoning systems in existence.

If you want to kick its tires (tAIres?), the system is available at k2think.ai and on Hugging Face. Oh, the write up quotes two people with interesting names: Eric Xing and Peng Xiao.
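
Tire kickers with a GPU can try something like this sketch. The transformers calls are standard; the repository id is my assumption from the announcement, so verify it on Hugging Face:

```python
# A sketch for tire kickers with a GPU. The transformers calls are
# standard; the repository id is my guess from the announcement, so
# check Hugging Face for the actual name before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/K2-Think"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Prove that the square root of 2 is irrational."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```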

Stephen E Arnold, September 23, 2025

AI and the Media: AI Is the Answer for Some Outfits

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I spotted a news item about Russia’s government Ministry of Defense. The estimable defensive outfit now has an AI-generated news program. Here’s the paywalled source link. I haven’t seen it yet, but the statistics for viewership and the Telegram comments will be interesting to observe. Gee, do you think that bright Russian developers have found a way to steer the output to represent the political views of the Russian government? Did you say, “No.” Congratulations, you may qualify for a visa to homestead in Norilsk. Check it out on Google Maps.

Back in Germany, Axel Springer SE is definitely into AI as well. I noted that its Business Insider will allow its real and allegedly human journalists to use AI to write “drafts” of news stories. Here’s the paywalled source link. Hey, Axel, aren’t your developers able to pipe the AI output into slamma jamma banana and produce complete TikTok-type news videos via AI? Russia’s Ministry of Defense has this angle figured out. YouTube may be in the MoD’s plans. One has to fund that “defensive” special operation in Ukraine somehow.

Several observations:

  1. Steering or weaponizing large language models is a feature of the systems. Can one trust AI-generated news? Can one trust any AI output from a large organization? You may. I don’t.
  2. The economics of producing Walter Cronkite-type news make “real” news expensive. Therefore, say hello to AI-written news and AI-delivered news. GenX and GenY will love this approach to information in my opinion.
  3. How will government regulators respond to AI news? In Russia, government-controlled AI news will get a green light. Elsewhere, the shift may be slightly more contentious.

Net net: AI is great.

Stephen E Arnold, September 22, 2025

OpenAI Says, Hallucinations Are Here to Stay?

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I read “OpenAI Admits AI Hallucinations Are Mathematically Inevitable, Not Just Engineering Flaws.” I am not sure the information in the write up will make the people who are getting smart software, whether they want it or not, happy. Even less thrilled will be the big outfits implementing AI with success ranging from five percent to 90 percent hoorahs. Close enough for horseshoes works for putting shoes on equines. I am not sure how that will work out for medical and financial applications. I won’t comment on the kinetic applications of smart software, but hallucination may not be a plus in some situations.

The write up begins with what may make some people — how shall I say it? — nervous, frightened, squeamish. I quote:

… OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

I quite liked the word always. It is obviously a statement that must persist for eternity, which, to a dinobaby like me, is quite a long time. I found the distinction between plausible and false delicious. The burden to figure out what is “correct,” “wrong,” slightly wonky, and false shifts to the user of smart software. But there is another word that struck me as significant: Perfect. Now that is another logical tar pit.

After this, I am not sure where the write up is going. I noted this passage:

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

There you go. The fundamental method in use today and believed to be the next big thing is always going to produce incorrect information. Always.
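
Here is a toy illustration of the intuition. It is mine, not the paper’s proof: a model that faithfully matches uninformative training data must spread probability across every plausible answer, true and false alike:

```python
# A toy illustration, mine and not the paper's proof: if the training
# data cannot distinguish a rare true fact from equally plausible
# look-alikes, a model that faithfully matches the data must spread its
# probability across all of them, including the false ones.
import random

random.seed(0)

true_answer = "March 3"  # the one true birthday, seen once in training
plausible_answers = [f"March {d}" for d in range(1, 11)]  # ten look-alikes

def calibrated_model():
    # Matches the (uninformative) training distribution exactly.
    return random.choice(plausible_answers)

trials = 10_000
wrong = sum(calibrated_model() != true_answer for _ in range(trials))
print(f"Hallucination rate: {wrong / trials:.1%}")  # roughly 90% by construction
```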

The Computerworld story points to the “research paper.” Computerworld points out that industry evaluations of smart software are slippery fish. Computerworld reminds its readers that “enterprises must adapt strategies.” (I would imagine. If smart software gets chemical formula wrong or outputs information that leads to a substantial loss of revenue, problems might arise, might they not?) Computerworld concludes with a statement that left me baffled; to wit: “Market already adapting.”

Okay.

I wonder how many Computerworld readers will consume this story standing next to a burning pile of cash tossed into the cost black holes of smart software.

Stephen E Arnold, September 22, 2025

Google Emits a Tiny Signal: Is It Stress or Just a Brilliant Management Move?

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Google is chock full of technical and management wizards. Anything the firm does is a peak action. With the Google doing so many forward-leaning things each day, it is possible for certain staggeringly insightful moments to be lost in the blitz of scintillating breakthroughs.

Tom’s Hardware spotted one sparkling decider diamond. “Google Terminates 200 AI Contractors — Ramp-Down Blamed, But Workers Claim Questions Over Pay and Job Insecurity Are the Real Reason Behind Layoffs” says:

Some believe they were let go because of complaints over working conditions and compensation.

Does Google have a cancel culture?

The write up notes:

For the first half of 2025, AI growth was everywhere, and all the major companies were spending big to try to get ahead. Meta was offering individuals hundreds of millions to join its ranks … But while announcements of enormous industry deals continue, there’s also a lot of talk of contraction, particularly when it comes to lower-level positions like data annotation and AI response rating.

The individuals who are now free to find their future elsewhere have some ideas about why they were deleted from Google and promoted to Xooglers (former Google employees). The write up reports:

… many of them [the terminated with extreme Googliness] believe that it is their complaints over compensation that lead to them being laid off…. [Some] workers “attempted to unionize” earlier in the year to no avail. According to the report, “they [the future finders] allege that the company has retaliated against them.” … For its part, Google said in a statement that GlobalLogic is responsible for the working conditions of its employees.

See the brilliance of the management move. Google blames another outfit. Google reduces costs. Google makes it clear that grousing is not a path to the Google leadership enclave. Google AI is unscathed.

Google is A Number One in management in my opinion.

Stephen E Arnold, September 22, 2025

AI Poker: China Has Three Aces. Google, Your Play

September 19, 2025

No smart software involved. Just a dinobaby’s work.

TV poker seems to be a thing on free or low cost US television streams. A group of people squint, sigh, and fiddle as each tries to win the big pile of cash. Another poker game is underway in the “next big thing” of smart software or AI.

Google released the Nano Banana image generator. Social media hummed. Okay, that looks like a winning hand. But another player dropped some coin on the table, squinted at the Google, and smirked just a tiny bit.

“ByteDance Unveils New AI Image Model to Rival DeepMind’s Nano Banana” explains the poker play this way:

TikTok-owner ByteDance has launched its latest image generation artificial intelligence tool Seedream 4.0, which it said surpasses Google DeepMind’s viral “Nano Banana” AI image editor across several key indicators.

While the cute jargon may make the poker hand friendly, there is menace behind the terminology. The write up states:

ByteDance claims that Seedream 4.0 beat Gemini 2.5 Flash Image for image generation and editing on its internal evaluation benchmark MagicBench, with stronger performance in prompt adherence, alignment and aesthetics.

Okay, prompt adherence, alignment (what the heck is that?), and aesthetics. That’s three aces, right?

Who has the cost advantage? The write up says:

On Fal.ai, a global generative media hosting platform, Seedream 4.0 costs US$0.03 per generated image, while Gemini 2.5 Flash Image is priced at US$0.039.
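
A little arithmetic with the quoted prices shows the size of the pot. The million-image volume is mine, for illustration only:

```python
# Quick arithmetic with the quoted Fal.ai prices. The per-image rates
# are from the article; the one-million-image volume is mine.
seedream = 0.030  # USD per generated image, Seedream 4.0
gemini = 0.039    # USD per generated image, Gemini 2.5 Flash Image
images = 1_000_000

print(f"Seedream 4.0:     ${seedream * images:,.0f}")
print(f"Gemini 2.5 Flash: ${gemini * images:,.0f}")
print(f"Discount: {1 - seedream / gemini:.0%}")  # about 23 percent cheaper
```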

I thought in poker one raised the stakes. Well, in AI poker one lowers the price in order to raise the stakes. These players are betting the money burned in the AI furnace will be “won” as the game progresses. Will AI poker turn up on the US free TV services? Probably. Burning cash makes for wonderful viewing, especially for those who are writing the checks.

What’s China’s view of this type of gambling? The write up says:

The state has signaled its support for AI-generated content by recognizing their copyright in late 2023, but has also recently introduced mandatory labelling of such content.

The game is not over. (Am I the only person who thinks that the name “nana banana” would have been better than “nano banana”?)

Stephen E Arnold, September 19, 2025

AI: The Tool for Humanity. Do Not Laugh.

September 19, 2025

Both sides of the news media are lamenting that AI is automating jobs and putting humans out of work. Conservatives and liberals remain divided on how and why AI is “stealing” jobs, but the fear remains that humans are headed toward obsolescence… again. Humans have faced this issue since the start of human ingenuity. The key is to adapt and realize what AI truly is. Elizabeth Mathew of Signoz.io wrote: “I Built An MCP Server For Observability. This Is My Unhyped Take.”

If you’re unfamiliar with MCP, it is an open standard that defines how LLMs or AI agents (e.g., Claude) uniformly connect to external tools and data sources. Like a USB-C port, an MCP server is decoupled from any one agent and can be plugged into any of them.
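
For the curious, a minimal MCP server looks something like this sketch, which uses the official Python SDK. The observability tool is a stub I made up, not Signoz.io’s code:

```python
# A minimal sketch of an MCP server using the official Python SDK's
# FastMCP helper. The observability "tool" is a stub of my devising,
# not Signoz.io's code; a real server would query a metrics backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("observability-demo")

@mcp.tool()
def error_rate(service: str, minutes: int = 15) -> str:
    """Return the recent error rate for a service (stubbed)."""
    return f"{service}: 0.4% errors over the last {minutes} minutes"

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio so an agent such as Claude can connect
```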

After explaining some issues with MCP servers and why they are “schizophrenic,” Mathew concludes with this:

“Ultimately, MCP-powered agents are not bringing us closer to automated problem-solving. They are giving us sophisticated hypothesis generators. They excel at exploring the known, but the unknown remains the domain of the human engineer. We’re not building an automated SRE; we’re building a co-pilot that can brainstorm, but can’t yet reason. And recognizing that distinction is the key to using these tools effectively without falling for the hype.”

She might be right from an optimistic and expert perspective, but that doesn’t prevent CEOs from implementing AI to replace their workforces or young adults from being steered away from coding careers. Recent college graduates, do you have a job, any job?

Whitney Grace, September 19, 2025

AI Search Is Great. Believe It. Now!

September 18, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Cheerleaders are necessary. The idea is that energetic people lead other people to chant: Stand Up, Sit Down, Fight! Fight! Fight! If you get with the program, you stand up. You sit down. You shout, of course, fight, fight, fight. Does it help? I don’t know because I don’t cheer at sports events. I say, “And again” or some other statement designed to avoid getting dirty looks or caught up in standing, sitting, and chanting.

Others are different. “GPT-5 Thinking in ChatGPT (aka Research Goblin) Is Shockingly Good at Search” states:

“Don’t use chatbots as search engines” was great advice for several years… until it wasn’t. I wrote about how good OpenAI’s o3 was at using its Bing-backed search tool back in April. GPT-5 feels even better.

The idea is that instead of working with a skilled special librarian and participating in a reference interview, people started using online Web indexes. Now we have moved from entering a query to asking a smart software system for an answer.

Consider the trajectory. A person seeking information works with a professional with knowledge of commercial databases, traditional (book) reference tools, and specific ways of tracking down and locating information needed to answer the user’s question. When the user was not sure, the special librarian would ask, “What specific information do you need?” Some users would reply, “Get me everything about subject X.” The special librarian would ask other questions until a particular item could be identified. In the good old days, special librarians would seek the information and provide selected items to the person with the question. Ellen Shedlarz at Booz, Allen & Hamilton did this type of work when I was a lowly peon, as did Dominque Doré at Halliburton NUS (a nuclear outfit).

We then moved to the era of PCs and do-it-yourself research. Everyone became an expert. Google just worked. Then mobile phones arrived so research on the go was a thing. But keying words into a search box and fiddling with links was a drag. Now just tell the smart software your problem. The solution is just there like instant oatmeal.

The Stone Age process was knowledge work. Most people seeking information did not ask, preferring, as one study found, to look through trade publications in an old-fashioned in-box or to pick up the telephone and ask a person whom one assumed knew something about a particular subject. The process was slow, inefficient, and fraught with delays. Let’s be efficient. Let’s let software do everything.

Flash forward to the era of smart software or seemingly smart software. The write up reports:

I’ve been trying out hints like “go deep” which seem to trigger a more thorough research job. I enjoy throwing those at shallow and unimportant questions like the UK Starbucks cake pops one just to see what happens! You can throw questions at it which have a single, unambiguous answer—but I think questions which are broader and don’t have a “correct” answer can be a lot more fun. The UK supermarket rankings above are a great example of that. Since I love a questionable analogy for LLMs Research Goblin is… well, it’s a goblin. It’s very industrious, not quite human and not entirely trustworthy. You have to be able to outwit it if you want to keep it gainfully employed.

The reference / special librarians are an endangered species. The people seeking information use smart software. Instead of a back-and-forth and human-intermediated interaction between a trained professional and a person with a question, we get “trying out” and “accepting the output.”

I think there are three issues inherent in this cheerleading:

  1. Knowledge work is short circuited. Instead of information-centric discussion, users accept the output. What if the output is incorrect, biased, incomplete, or made up? Cheerleaders shout more enthusiastically until a really big problem occurs.
  2. The conditioning process of accepting outputs makes even intelligent people susceptible to mental shortcuts. These are good, but accuracy, nuance, and a sense of understanding the information may be pushed to the side of the information highway. Sometimes those backroads deliver unexpected and valuable insights. Forget that. Grab a burger and go.
  3. The purpose of knowledge work is to make certain that an idea, diagnosis, research study can be trusted. The mechanisms of large language models are probabilistic. Think close enough for horseshoes. Cheering loudly does not deliver accuracy of output, just volume.

Net net: Inside each large language model lurks a system capable of suggesting glue to hold the cheese on pizza, declaring the gray mass is cancer, and advising people to eat rocks.

What’s been lost? Knowledge value from the process of obtaining information the Stone Age way. Let’s work in caves with fire provided by burning books. Sounds like a plan, Sam AI-Man. Use GPT-5, use GPT-5, use GPT-5.

Stephen E Arnold, September 18, 2025
