AI and the Media: AI Is the Answer for Some Outfits

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I spotted a news item about Russia’s Ministry of Defense. The estimable defensive outfit now has an AI-generated news program. Here’s the paywalled source link. I haven’t seen it yet, but the statistics for viewership and the Telegram comments will be interesting to observe. Gee, do you think that bright Russian developers have found a way to steer the output to represent the political views of the Russian government? Did you say, “No”? Congratulations, you may qualify for a visa to homestead in Norilsk. Check it out on Google Maps.

Back in Germany, Axel Springer SE is definitely into AI as well. I noted that its Business Insider will allow its real and allegedly human journalists to use AI to write “drafts” of news stories. Here’s the paywalled source link. Hey, Axel, aren’t your developers able to pipe the AI output into slamma jamma banana and produce complete TikTok-type news videos via AI? Russia’s Ministry of Defense has this angle figured out. YouTube may be in the MoD’s plans. One has to fund that “defensive” special operation in Ukraine somehow.

Several observations:

  1. Steering or weaponizing large language models is a feature of the systems. Can one trust AI-generated news? Can one trust any AI output from a large organization? You may. I don’t.
  2. The economics of producing Walter Cronkite-type news make “real” news expensive. Therefore, say hello to AI-written news and AI-delivered news. GenX and GenY will love this approach to information in my opinion.
  3. How will government regulators respond to AI news? In Russia, government controlled AI news will get a green light. Elsewhere, the shift may be slightly more contentious.

Net net: AI is great.

Stephen E Arnold, September 22, 2025

OpenAI Says Hallucinations Are Here to Stay?

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I read “OpenAI Admits AI Hallucinations Are Mathematically Inevitable, Not Just Engineering Flaws.” I am not sure the information in the write up will make people who are getting smart software, whether they want it or not, happy. Even less thrilled will be the big outfits implementing AI with success ranging from five percent to 90 percent hoorahs. Close enough for horseshoes works for putting shoes on equines. I am not sure how that will work out for medical and financial applications. I won’t comment on the kinetic applications of smart software, but hallucination may not be a plus in some situations.

The write up begins with what may make some people — how shall I say it? — nervous, frightened, squeamish. I quote:

… OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

I quite liked the word always. It is obviously a statement that must persist for eternity, which, to a dinobaby like me, is quite a long time. I found the distinction between plausible and false delicious. The burden of figuring out what is “correct,” “wrong,” slightly wonky, and false shifts to the user of smart software. But there is another word that struck me as significant: Perfect. Now that is another logical tar pit.

After this, I am not sure where the write up is going. I noted this passage:

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

There you go. The fundamental method in use today and believed to be the next big thing is always going to produce incorrect information. Always.
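The arithmetic behind that “always” is easy to sketch. Here is a toy illustration with my own numbers, not OpenAI’s research: if each answer carries even a small independent chance of being plausible but false, the probability of at least one hallucination approaches certainty as usage grows.

# Toy numbers (mine, not OpenAI's): a small per-answer error rate
# compounds toward certainty over many generations.
def p_at_least_one_hallucination(p_error, n_answers):
    # Probability of >= 1 false-but-plausible output in n independent answers.
    return 1.0 - (1.0 - p_error) ** n_answers

for n in (10, 100, 1_000):
    print(n, round(p_at_least_one_hallucination(0.01, n), 5))
# prints ~0.09562, ~0.63397, ~0.99996 — "always" shows up fast at scale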

The Computerworld story points to the “research paper.” Computerworld points out that industry evaluations of smart software are slippery fish. Computerworld reminds its readers that “enterprises must adapt strategies.” (I would imagine so. If smart software gets a chemical formula wrong or outputs information that leads to a substantial loss of revenue, problems might arise, might they not?) Computerworld concludes with a statement that left me baffled; to wit: “Market already adapting.”

Okay.

I wonder how many Computerworld readers will consume this story standing next to a burning pile of cash tossed into the cost black holes of smart software.

Stephen E Arnold, September 22, 2025

Google Emits a Tiny Signal: Is It Stress or Just a Brilliant Management Move?

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Google is chock full of technical and management wizards. Anything the firm does is a peak action. With the Google doing so many forward-leaning things each day, it is possible for certain staggeringly insightful moments to be lost in the blitz of scintillating breakthroughs.

Tom’s Hardware spotted one sparkling decider diamond. “Google Terminates 200 AI Contractors — Ramp-Down Blamed, But Workers Claim Questions Over Pay and Job Insecurity Are the Real Reason Behind Layoffs” says:

Some believe they were let go because of complaints over working conditions and compensation.

Does Google have a cancel culture?

The write up notes:

For the first half of 2025, AI growth was everywhere, and all the major companies were spending big to try to get ahead. Meta was offering individuals hundreds of millions to join its ranks … But while announcements of enormous industry deals continue, there’s also a lot of talk of contraction, particularly when it comes to lower-level positions like data annotation and AI response rating.

The individuals who are now free to find their future elsewhere have some ideas about why they were deleted from Google and promoted to Xooglers (former Google employees). The write up reports:

… many of them [the terminated with extreme Googliness] believe that it is their complaints over compensation that led to them being laid off…. [Some] workers “attempted to unionize” earlier in the year to no avail. According to the report, “they [the future finders] allege that the company has retaliated against them.” … For its part, Google said in a statement that GlobalLogic is responsible for the working conditions of its employees.

See the brilliance of the management move. Google blames another outfit. Google reduces costs. Google makes it clear that grousing is not a path to the Google leadership enclave. Google AI is unscathed.

Google is A Number One in management in my opinion.

Stephen E Arnold, September 22, 2025

AI Poker: China Has Three Aces. Google, Your Play

September 19, 2025

No smart software involved. Just a dinobaby’s work.

TV poker seems to be a thing on free or low cost US television streams. A group of people squint, sigh, and fiddle as each tries to win the big pile of cash. Another poker game is underway in the “next big thing” of smart software or AI.

Google released the Nano Banana image generator. Social media hummed. Okay, that looks like a winning hand. But another player dropped some coin on the table, squinted at the Google, and smirked just a tiny bit.

“ByteDance Unveils New AI Image Model to Rival DeepMind’s Nano Banana” explains the poker play this way:

TikTok-owner ByteDance has launched its latest image generation artificial intelligence tool Seedream 4.0, which it said surpasses Google DeepMind’s viral “Nano Banana” AI image editor across several key indicators.

While the cute jargon may make the poker hand seem friendly, there is menace behind the terminology. The write up states:

ByteDance claims that Seedream 4.0 beat Gemini 2.5 Flash Image for image generation and editing on its internal evaluation benchmark MagicBench, with stronger performance in prompt adherence, alignment and aesthetics.

Okay, prompt adherence, alignment (what the heck is that?), and aesthetics. That’s three aces, right?

Who has the cost advantage? The write up says:

On Fal.ai, a global generative media hosting platform, Seedream 4.0 costs US$0.03 per generated image, while Gemini 2.5 Flash Image is priced at US$0.039.

I thought in poker one raised the stakes. Well, in AI poker one lowers the price in order to raise the stakes. These players are betting the money burned in the AI furnace will be “won” as the game progresses. Will AI poker turn up on the US free TV services? Probably. Burning cash makes for wonderful viewing, especially for those who are writing the checks.
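The per-image gap looks trivial until volume enters the game. A back-of-the-envelope check, my arithmetic using only the two list prices quoted above:

# Back-of-envelope comparison at the quoted Fal.ai list prices.
SEEDREAM_PER_IMAGE = 0.030   # US$ per image (quoted above)
GEMINI_PER_IMAGE = 0.039     # US$ per image (quoted above)

images = 1_000_000           # hypothetical monthly volume
seedream = images * SEEDREAM_PER_IMAGE   # $30,000
gemini = images * GEMINI_PER_IMAGE       # $39,000
saving = gemini - seedream               # $9,000, roughly 23% cheaper
print(f"Seedream ${seedream:,.0f} vs Gemini ${gemini:,.0f}; save ${saving:,.0f}")

At a hypothetical million images a month, the “small” US$0.009 difference becomes US$9,000, roughly a 23 percent discount. That is how one lowers the price to raise the stakes.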

What’s China’s view of this type of gambling? The write up says:

The state has signaled its support for AI-generated content by recognizing their copyright in late 2023, but has also recently introduced mandatory labelling of such content.

The game is not over. (Am I the only person who thinks that the name “nana banana” would have been better than “nano banana”?)

Stephen E Arnold, September 19, 2025

AI: The Tool for Humanity. Do Not Laugh.

September 19, 2025

Both sides of the news media are lamenting that AI is automating jobs and putting humans out of work. Conservatives and liberals remain divided on how and why AI is “stealing” jobs, but the fear remains that humans are headed toward obsolescence… again. Humans have faced this issue since the start of human ingenuity. The key is to adapt and to recognize what AI truly is. Elizabeth Mathew of Signoz.io wrote: “I Built An MCP Server For Observability. This Is My Unhyped Take.”

If you’re unfamiliar with MCP, it is an open standard that defines how LLMs or AI agents (e.g., Claude) uniformly connect to external tools and data sources. An MCP server can be decoupled and reused, much like a USB-C device, by any agent.
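For the curious, here is a minimal sketch of what such a server can look like, using the Python MCP SDK’s FastMCP helper. This is an illustration, not Mathew’s code: the server name and the get_error_rate tool are hypothetical, and SDK names may shift between versions.

# Minimal MCP server sketch (assumes: pip install mcp; API may vary by version).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("observability-demo")  # hypothetical server name

@mcp.tool()
def get_error_rate(service: str, minutes: int = 15) -> float:
    """Hypothetical tool: return a service's recent error rate (stubbed)."""
    return 0.02  # a real server would query a metrics backend here

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio; any MCP-aware agent can connect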

After explaining some issues with MCP servers and why they are “schizophrenic,” Mathew concludes with this:

“Ultimately, MCP-powered agents are not bringing us closer to automated problem-solving. They are giving us sophisticated hypothesis generators. They excel at exploring the known, but the unknown remains the domain of the human engineer. We’re not building an automated SRE; we’re building a co-pilot that can brainstorm, but can’t yet reason. And recognizing that distinction is the key to using these tools effectively without falling for the hype.”

She might be right from an optimistic and expert perspective, but that doesn’t prevent CEOs from implementing AI to replace their workforces or young adults from being steered away from coding careers. Recent college graduates, do you have a job, any job?

Whitney Grace, September 19, 2025

AI Search Is Great. Believe It. Now!

September 18, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Cheerleaders are necessary. The idea is that energetic people lead other people to chant: Stand Up, Sit Down, Fight! Fight! Fight! If you get with the program, you stand up. You sit down. You shout, of course, fight, fight, fight. Does it help? I don’t know because I don’t cheer at sports events. I say, “And again” or some other statement designed to avoid getting dirty looks or caught up in standing, sitting, and chanting.

Others are different. “GPT-5 Thinking in ChatGPT (aka Research Goblin) Is Shockingly Good at Search” states:

“Don’t use chatbots as search engines” was great advice for several years… until it wasn’t. I wrote about how good OpenAI’s o3 was at using its Bing-backed search tool back in April. GPT-5 feels even better.

The idea is that instead of working with a skilled special librarian and participating in a reference interview, people started using online Web indexes. Now we have moved from entering a query to asking a smart software system for an answer.

Consider the trajectory. A person seeking information works with a professional who knows commercial databases, traditional (book) reference tools, and specific ways of tracking down and locating the information needed to answer the user’s question. When the user was not sure, the special librarian would ask, “What specific information do you need?” Some users would reply, “Get me everything about subject X.” The special librarian would ask other questions until a particular item could be identified. In the good old days, special librarians would seek the information and provide selected items to the person with the question. Ellen Shedlarz at Booz, Allen & Hamilton did this type of work when I was a lowly peon, as did Dominque Doré at Halliburton NUS (a nuclear outfit).

We then moved to the era of PCs and do-it-yourself research. Everyone became an expert. Google just worked. Then mobile phones arrived so research on the go was a thing. But keying words into a search box and fiddling with links was a drag. Now just tell the smart software your problem. The solution is just there like instant oatmeal.

The Stone Age process was knowledge work. Most people seeking information did not ask, preferring, as one study found, to look through trade publications in an old-fashioned in box or pick up the telephone and ask a person whom one assumed knew something about a particular subject. The process was slow, inefficient, and fraught with delays. Let’s be efficient. Let’s let software do everything.

Flash forward to the era of smart software or seemingly smart software. The write up reports:

I’ve been trying out hints like “go deep” which seem to trigger a more thorough research job. I enjoy throwing those at shallow and unimportant questions like the UK Starbucks cake pops one just to see what happens! You can throw questions at it which have a single, unambiguous answer—but I think questions which are broader and don’t have a “correct” answer can be a lot more fun. The UK supermarket rankings above are a great example of that. Since I love a questionable analogy for LLMs Research Goblin is… well, it’s a goblin. It’s very industrious, not quite human and not entirely trustworthy. You have to be able to outwit it if you want to keep it gainfully employed.

The reference / special librarians are an endangered species. The people seeking information use smart software. Instead of a back-and-forth and human-intermediated interaction between a trained professional and a person with a question, we get “trying out” and “accepting the output.”

I think there are three issues inherent in this cheerleading:

  1. Knowledge work is short circuited. Instead of information-centric discussion, users accept the output. What if the output is incorrect, biased, incomplete, or made up? Cheerleaders shout more enthusiastically until a really big problem occurs.
  2. The conditioning process of accepting outputs makes even intelligent people susceptible to mental shortcuts. These are convenient, but accuracy, nuance, and a sense of understanding the information may be pushed to the side of the information highway. Sometimes those backroads deliver unexpected and valuable insights. Forget that. Grab a burger and go.
  3. The purpose of knowledge work is to make certain that an idea, diagnosis, research study can be trusted. The mechanisms of large language models are probabilistic. Think close enough for horseshoes. Cheering loudly does not deliver accuracy of output, just volume.

Net net: Inside each large language model lurks a system capable of suggesting glue to hold the cheese on pizza, calling the gray mass cancer, and recommending a diet of rocks.

What’s been lost? Knowledge value from the process of obtaining information the Stone Age way. Let’s work in caves with fire provided by burning books. Sounds like a plan, Sam AI-Man. Use GPT5, use GPT5, use GPT5.

Stephen E Arnold, September 18, 2025

AI Maggots: Are These Creatures Killing the Web?

September 18, 2025

The short answer is, “Yep.”

The early days of the free, open Web held such promise. Alas, AI is changing the Internet and there is, apparently, nothing we can do about it. The Register laments, “AI Web Crawlers Are Destroying Websites in Their Never-Ending Hunger for Any and All Content: But the Cure May Ruin the Web…” Writer Steven J. Vaughan-Nichols tells us a whopping 30% of traffic is now bots, according to Cloudflare. And 80% of that, reports Fastly, comes from AI-data fetcher bots. Web crawlers have been around since 1993, of course, but this volume is something new. And destructive. Vaughan-Nichols writes:

“Fastly warns that [today’s AI crawlers are] causing ‘performance degradation, service disruption, and increased operational costs.’ Why? Because they’re hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes. Moreover, AI crawlers are much more aggressive than standard crawlers. As the InMotionhosting web hosting company notes, they also tend to disregard crawl delays or bandwidth-saving guidelines and extract full page text, and sometimes attempt to follow dynamic links or scripts. The result? If you’re using a shared server for your website, as many small businesses do, even if your site isn’t being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site’s performance drops through the floor even if an AI crawler isn’t raiding your website. Smaller sites, like my own Practical Tech, get slammed to the point where they’re simply knocked out of service. Thanks to Cloudflare Distributed Denial of Service (DDoS) protection, my microsite can shrug off DDoS attacks. AI bot attacks – and let’s face it, they are attacks – not so much.”

Even big websites are shelling out for more processor, memory, and network resources to counter the slowdown. And no wonder: According to Web hosting firms, most visitors abandon a site that takes more than three seconds to load. Site owners have some tools to try mounting a defense, like paywalls, logins, and annoying CAPTCHA games. Unfortunately, AI is good at getting around all of those. As for the tried and true, honor-system based robots.txt files, most AI crawlers breeze right on by. Hey, love maggots.
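For reference, the honor-system defense mentioned above is just a plain text file. A minimal robots.txt asking known AI fetchers to stay out might look like the sketch below; GPTBot and CCBot are documented crawler user-agents, and compliance, as noted, is entirely voluntary.

# robots.txt — a polite request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Disallow:

The maggots, as the piece makes clear, mostly do not read the menu before eating.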

Cynthia Murrell, September 18, 2025

AI and Security? What? Huh?

September 18, 2025

As technology advances, so do bad actors and their devious actions. Bad actors are so up to date with the latest technology that it takes white hat hackers and cyber security engineers a while to catch up to them. AI has made bad actors smarter, and EWeek explains that we are facing a banking security crisis: “Altman Warns Of AI-Powered Fraud Crisis in Banking, Urges Stronger Security Measures.”

OpenAI CEO Sam Altman warned that AI voice technology is a danger to society. He told Federal Reserve Vice Chair for Supervision Michelle Bowman that US banks are lagging behind on AI voice security because many financial institutions still rely on voiceprint technology to verify customers’ identities.

Altman warned that AI voice technology can easily replicate humans and that deepfake videos become even scarier as they grow indistinguishable from reality. Bowman mentioned potentially partnering with tech companies to create solutions.

Despite sounding the warning bells, Altman didn’t offer much help:

“Despite OpenAI’s prominence in the AI industry, Altman clarified that the company is not creating tools for impersonation. Still, he stressed that the broader AI community must take responsibility for developing new verification systems, such as “proof of human” solutions.

Altman is supporting tools like The Orb, developed by Tools for Humanity. The device aims to provide “proof of personhood” in a digital world flooded with fakes. His concerns go beyond financial fraud, extending to the potential for AI superintelligence to be misused in areas such as cyberwarfare or biological threats.”

Proof of personhood? It’s like the blue check on verified X/Twitter accounts. Altman might be helping to make the future, but he’s definitely also part of the problem.

Whitney Grace, September 18, 2025

Qwen: Better, Faster, Cheaper. Sure, All Three

September 17, 2025

No smart software involved. Just a dinobaby’s work.

I spotted another China Smart, US Dumb write up. Analytics India published “Alibaba Introduces Qwen3-Next as a More Efficient LLM Architecture.” The story caught my attention because it is a high five to the China-linked Alibaba outfit and a signal that India and China are on the path to BFF bliss.

The write up says:

Alibaba’s Qwen team has introduced Qwen3-Next, a new large language model architecture designed to improve efficiency in both training and inference for ultra-long context and large-parameter settings.

The sentence reinforces the better, faster, cheaper sales mantra, one beloved by Crazy Eddie.

Here’s another sentence catching my attention:

At its core, Qwen3-Next combines a hybrid attention mechanism with a highly sparse mixture-of-experts (MoE) design, activating just three billion of its 80 billion parameters during inference.  The announcement blog explains that the new mechanism allows the base model to match, and in some cases outperform, the dense Qwen3-32B, while using less than 10% of its training compute. In inference, throughput surpasses 10x at context lengths beyond 32,000 tokens.

This passage emphasizes the value of the mixture of experts approach in the faster and cheaper assertions.
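To make the “sparse” part concrete, here is a toy sketch of top-k mixture-of-experts routing in Python with NumPy. This is my illustration of the general technique, not Qwen3-Next’s actual implementation: only the top-k experts run per token, so compute scales with activated parameters rather than total parameters.

# Toy top-k mixture-of-experts routing (an illustration of the general idea,
# not Qwen3-Next's implementation).
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, top_k = 8, 16, 2                    # tiny numbers for the sketch

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # routing weights

def moe_forward(x):
    logits = x @ router                            # score each expert for this token
    top = np.argsort(logits)[-top_k:]              # keep only the top-k experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over winners
    # Only top_k of n_experts weight matrices are touched: sparse activation.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d)
print(moe_forward(token).shape)   # (16,) with roughly top_k / n_experts of the compute

The same arithmetic is what lets Qwen3-Next activate three billion of 80 billion parameters: the router, not the user, decides which slice of the model wakes up.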

Do I believe the data?

Sure, I believe every factoid presented in the better, faster, cheaper marketing of large language models. Personally, I find that these models, regardless of development group, are useful for some specific functions. The hallucination issue is the deal breaker. Who wants to kill a person because a smart medical system declares a malignancy benign? Who wants an autonomous AI underwater drone to take out college students instead of the adversary’s stealth surveillance boat?

Where can you get access to this better, faster, cheaper winner? The write up says, “Hugging Face, ModelScope, Alibaba Cloud Model Studio and NVIDIA API Catalog, with support from inference frameworks like SGLang and vLLM.”

Stephen E Arnold, September 17, 2025

Professor Goes Against the AI Flow

September 17, 2025

One thing has Cornell professor Kate Manne dreading the upcoming school year: AI. On her Substack, “More to Hate,” the academic insists, “Yes, It Is Our Job as Professors to Stop our Students Using ChatGPT.” Good luck with that.

Manne knows even her students who genuinely love to learn may give in to temptation when faced with an unrelenting academic schedule. She cites the observations of sociologist Tressie McMillan Cottom as she asserts young, stressed-out students should not bear that burden. The responsibility belongs, she says, to her and her colleagues. How? For one thing, she plans to devote precious class time to having students hand-write essays. See the write-up for her other ideas. It will not be easy, she admits, but it is important. After all, writing assignments are about developing one’s thought processes, not the finished product. Turning to ChatGPT circumvents the important part. And it is sneaky. She writes:

“Again, McMillan Cottom crystallized this perfectly in the aforementioned conversation: learning is relational, and ChatGPT fools you into thinking that you have a relationship with the software. You ask it a question, and it answers; you ask it to summarize a text, and it offers to draft an essay; you request it respond to a prompt, using increasingly sophisticated constraints, and it spits out a response that can feel like your own achievement. But it’s a fake relationship, and a fake achievement, and a faulty simulacrum of learning. It’s not going to office hours, and having a meeting of the minds with your professor; it’s not asking a peer to help you work through a problem set, and realizing that if you do it this way it makes sense after all; it’s not consulting a librarian and having them help you find a resource you didn’t know you needed yet. Your mind does not come away more stimulated or enriched or nourished by the endeavor. You yourself are not forging new connections; and it makes a demonstrable difference to what we’ve come to call ‘learning outcomes.’”

Is it even possible to keep harried students from handing in AI-generated work? Manne knows she is embarking on an uphill battle. But to her, it is a fight worth having. Saddle up, Donna Quixote.

Cynthia Murrell, September 17, 2025
