The EU – Google Soap Opera Titled “What? Train AI?”

December 16, 2025

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Ka-ching. That’s the sound of the EU ringing up another fine for one of its favorite US big tech outfits. Once again it is Googzilla in the headlights of a restored 2CV. Here’s the pattern:

  1. EU fines
  2. Googzilla goes to court
  3. EU finds Googzilla guilty
  4. Googzilla appeals
  5. EU finds Googzilla guilty
  6. Googzilla negotiates and says, “We don’t agree but we will pay”
  7. Go back to item 1.

This version of the EU soap opera is called training Gemini on whatever content Google has.

The formal announcement of Googzilla’s re-run of a fan favorite is “Commission Opens Investigation into Possible Anticompetitive Conduct by Google in the Use of Online Content for AI Purposes.” I note the hedge word “possible,” but as soap opera fans we know the arc of this story. Can you hear the cackle of the legal eagles anticipating the billings? I can.

image

The mythical creature Googzilla apologizes to an august body for a mistake. Googzilla is very, very sincere. Thanks, MidJourney. Actually pretty good this morning. Too bad you not consistent.

The cited show runner document says:

The European Commission has opened a formal antitrust investigation to assess whether Google has breached EU competition rules by using the content of web publishers, as well as content uploaded on the online video-sharing platform YouTube, for artificial intelligence (‘AI’) purposes. The investigation will notably examine whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage.

The EU is trying via legal process to alter the DNA of Googzilla. I am fond of pointing out that beavers do what beavers do. Similarly Googzillas do exactly what the one and unique Googzilla does; that is, anything it wants to do. Why? Googzilla is now entering its prime. It has a small would on its knee. If examined closely, it is a scar that seems to be the word “monopoly”.

News flash: Filing legal motions against Googzilla will not change its DNA. The outfit is purpose built to keep control of its billions of users and keep the snoops from do gooder and regulatory outfits clueless about what happens to the [a] parsed and tagged data, [b] the metrics thereof, [c] the email, the messages, and the voice data, [d] the YouTube data, and [e] whatever data flows into the Googzilla’s maw from advertisers, ad systems, and ad clickers.

The EU does not get the message. I wrote three books about Google, and it was pretty evident in the first one (The Google Legacy) that baby Google was the equivalent of a young Maradona or Messi was going to wear a jersey with Googzilla 10 emblazoned on its comely yet spikey back.

The write up contains this statement from Teresa Ribera, Executive Vice-President for Clean, Just and Competitive Transition:

A free and democratic society depends on diverse media, open access to information, and a vibrant creative landscape. These values are central to who we are as Europeans. AI is bringing remarkable innovation and many benefits for people and businesses across Europe, but this progress cannot come at the expense of the principles at the heart of our societies. This is why we are investigating whether Google may have imposed unfair terms and conditions on publishers and content creators, while placing rival AI models developers at a disadvantage, in breach of EU competition rules.

Interesting idea as the EU and the US stumble to the side of street where these ideas are not too popular.

Net net: Googzilla will not change for the foreseeable future. Furthermore, those who don’t understand this are unlikely to get a job at the company.

Stephen E Arnold, December 16, 2025

How Not to Get a Holiday Invite: The Engadget Method

December 15, 2025

green-dino_thumb_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Sam AI-Man may not invite anyone from Engadget to a holiday party. I read “OpenAI’s House of Cards Seems Primed to Collapse.” The “house of cards” phrase gives away the game. Sam AI-Man built a structure that gravity or Google will pull down. How do I know? Check out this subtitle:

In 2025, it fell behind the one company it couldn’t lose ground to: Google.

The Google. The outfit that shifted into Red Alert or whatever the McKinsey playbook said to call an existential crisis klaxon. The Google. Adjudged a monopoly getting down to work other than running and online advertising system. The Google. An expert in reorganizing a somewhat loosely structured organization. The Google: Everyone except the EU and some allegedly defunded YouTube creators absolutely loves. That Google.

image

Thanks Venice.ai. I appreciate your telling me I cannot output an image with a “young programmer.” Plugging in “30 year old coder” worked. Very helpful. Intelligent too.

The write up points out:

It’s safe to say GPT-5 hasn’t lived up to anyone’s expectations, including OpenAI’s own. The company touted the system as smarter, faster and better than all of its previous models, but after users got their hands on it, they complained of a chatbot that made surprisingly dumb mistakes and didn’t have much of a personality. For many, GPT-5 felt like a downgrade compared to the older, simpler GPT-4o. That’s a position no AI company wants to be in, let alone one that has taken on as much investment as OpenAI.

Did OpenAI suck it up and crank out a better mouse trap? The write up reports:

With novelty and technical prowess no longer on its side though, it’s now on Altman to prove in short order why his company still deserves such unprecedented levels of investment.

Forget the problems a failed OpenAI poses to investors, employees, and users. Sam AI-Man now has an opportunity to become the highest profile technology professional to cause a national and possibly global recession. Short of war mongering countries, Sam AI-Man will stand alone. He may end up in a museum if any remain open when funding evaporate. School kids could read about him in their history books; that is, if kids actually attend school and read. (Well, there’s always the possibility of a YouTube video if creators don’t evaporate like wet sidewalks when the sun shines.)

Engadget will have to find another festive event to attend.

Stephen E Arnold, December 15, 2025

The Waymo Trip: From Cats and Dogs Waymo to the Parking Lot

December 12, 2025

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I am reasonably sure that Google Waymo offers “way more” than any other self driving automobile. It has way more cameras. It has way more publicity. Does it have way more safety than — for instance, a Tesla confused on Highway 101? I don’t know.

I read “Waymo Investigation Could Stop Autonomous Driving in Its Tracks.” The title was snappy, but the subtitle was the real hook:

New video shows a dangerous trend for Waymo autonomous vehicles.

What’s the trend?

Weeks ago, the Austin Independent School District noticed a disturbing trend: Waymo vehicles were not stopping for school buses that had their crossing guard and stop sign deployed.

Oh, Google Waymo smart cars don’t stop for school buses. Kids always look before jumping off a school and dashing across a street to see their friends or rush home to scroll Instagram. Smart software definitely can predict the trajectories of school kids. Well, probability is involved, so there is a teeny tiny chance that a smart car might do the “kill the Mission District” cat. But the chance is teeny tiny.

image

Thanks, Venice.ai. Good enough.

The write up asserts:

The Austin ISD has been in communication with Waymo regarding the violations, which it reports have occurred approximately 1.5 times per week during this school year. Waymo has informed them that software updates have been issued to address the issue. However, in a letter dated November 20, 2025, the group states that there have been multiple violations since the supposed fix.

What’s with these people in Austin? Chill. Listen to some country western music. Think about moving back to the Left Coast. Get a life.

Instead of doing the Silicon Valley wizardly thing, Austin showed why Texas is not the center of AI intelligence and admiration. The story says:

On Dec. 1, after Waymo received its 20th citation from Austin ISD for the current school year, Austin ISD decided to release the video of the previous infractions to the public.  The video shows all 19 instances of Waymo violating school bus safety rules. Perhaps most alarmingly, the violations appear to worsen over time. On November 12, a Waymo vehicle was recorded violating a law by making a left turn onto a street with a school bus, its stop signs and crossbar already deployed. There are children in the crosswalk when the Waymo makes the turn and cuts in front of them. The car stops for a second then continues without letting the kids pass.

Let’s assume that after 16 years of development and investment, the Waymo self driving software intelligence gets an F in school bus recognition. Conjuring up a vehicle that can doddle down 101 at rush hour driven by a robot is a Silicon Valley inspiration. Imagine. One can sit in the automobile, talk on the phone, fiddle with a laptop, or just enjoy coffee and a treat from Philz in peace. Just ignore the imbecilic drivers in other automobiles. Yes, let’s just think it and it will become real.

I know the idea sounds great to anyone who has suffered traffic on 101 or the Foothills, but crushing the Mission District stray cat is just a warm up. What type of publicity heat will maiming Billy or Sally whose father might be a big time attorney who left Seal Team 6 to enforce and defend city, county, state, and federal law? Cats don’t have lawyers. The parents of harmed children either do or can get one pretty easily.

Getting a lawyer is much easier than delivering on a dream that is a bit of nightmare after 16 years and an untold amount of money. But the idea is a good one. Sort of.

Stephen E Arnold, December 12, 2025

Students Cheat. Who Knew?

December 12, 2025

How many times are we going to report on this topic?  Students cheat!  Students have been cheating since the invention of school.  With every advancement of technology, students adapt to perfect their cheating skills.  AI was a gift served to them on a silver platter.  Teachers aren’t stupid, however, and one was curious how many of his students were using AI to cheat, so he created a Trojan Horse.  HuffPost told his story: “I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.”

There’s a big difference between recognizing AI and proving it was used.  The teacher learned about a Trojan Horse: hiding hidden text inside a prompt.  The text would be invisible because the font color would be white.  Students wouldn’t see it but ChatGPT would.  He unleashed the Trojan Horse and 33 essays out of 122 were automatically outed.  Thirty-nine percent were AI-written.  Many of the students were apologetic, while others continued to argue that the work was their own despite the Trojan Horse evidence.

AI literacy needs to be added to information literacy.  The problem is that how to properly use AI is inconsistent:

“There is no consistency. My colleagues and I are actively trying to solve this for ourselves, maybe by establishing a shared standard that every student who walks through our doors will learn and be subject to. But we can’t control what happens everywhere else.”

Even worse is that some students don’t belief they’re actually cheating because they’re oblivious and stupid.  He ends on an inspirational quote:

“But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognized that oppression is best maintained by keeping the masses illiterate, and those oppressed recognized that literacy is liberation. To my students and to anyone who might listen, I say: Don’t surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.”

Noble words for small minds.

Whitney Grace, December 12, 2025

AI Fact Checks AI! What A Gas…Lighting Event

December 12, 2025

Josh Brandon at Digital Trends was curious what would happen if he asked two chatbots to fact check each other. He shared the results in, “I Asked Google Gemini To Fact-Check ChatGPT. The Results Were Hilarious.” He brilliantly calls ChatGPT the Wikipedia of the modern generation. Chatbots spit out details like overconfident, self-assured narcissists. People take the information for granted.

ChatGPT tends to hallucinate fake facts and makes up great stories, while Google Gemini doesn’t create as many mirages. Brandon asked Gemini and ChatGPT about the history of electric cars, some historical information, and a few other things to see if they’d hallucinate. He found that the chatbots have trouble understanding user intent. They also wrongly attribute facts, although Gemini is correct more than ChatGPT. When it came to research questions, the results were laughable:

“Prompt used: ‘Find me some academic quotes about the psychological impact of social media.;

This one is comical and fascinating. ChatGPT invented so many details in a response about the psychological impact of social media that it makes you wonder what the bot was smoking. ‘This is a fantastic and dangerous example of partial hallucination, where real information is mixed with fabricated details, making the entire output unreliable. About 60% of the information here is true, but the 40% that is false makes it unusable for academic purposes.’”

Either AI’s iterations are not delivering more useful outputs or humans are now looking more critically at the technology and saying, “Not so fast, buckaroo.”

Whitney Grace, December 12, 2025

AI Year in Review: The View from an Expert in France

December 11, 2025

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I suggest you read “Stanford, McKinsey, OpenAI: What the 2025 Reports Tell Us about the Present and Future of AI (and Autonomous Agents) in Business.” The document is in French. You can get an okay translation via the Google or Yandex.

I have neither the energy nor the inclination to do a blue chip consulting type of analysis of this fine synthesis of multiple source documents. What I will do in this blog post is highlight several statements and offer a comment or two. For context, I have read some of the sources the author Fabrice Frossard has cited. M. Frossard is a graduate of Ecole Supérieure Libre des Sciences Commerciales Appliquées and the Ecole de Guerre Economique in Paris I think. Remember: I am a dinobaby and generally too lazy and inept to do “real” research. These are good places to learn how to think about business issues.

Let’s dive into his 2000 word write up.

The first point that struck me is that he include what I think is a point not given sufficient emphasis by the experts in the US. This theme is not forced down the reader’s throat, but it has significant implications for M. Frossard’s comments about the need to train people to use smart software. The social implication of AI and the training creates a new digital divide. Like the economic divide in the US and some other countries, crossing the border is not going to possible for many people. Remember these people have been trained to use the smart software deployed. When one cannot get from ignorance to informed expertise, that person is likely to lose a job. Okay, here’s the comment from the source document:

To put it another way: if AI is now everywhere, its real mastery remains the prerogative of an elite.

Is AI a winner today? Not a winner, but it is definitely an up and comer in the commercial world. M. Frossard points out:

  • McKinsey reveals that nearly two thirds of companies are still stuck in the experimentation or piloting phase.
  • The elite escaping: only 7% of companies have successfully deployed AI in a fully integrated manner across the entire organization.
  • Peak workers use coding or data analysis tools 17 times more than the median user.

These and similar facts support the point that “the ability to extract value creates a new digital divide, no longer based on access, but on the sophistication of use.” Keep this in mind when it comes to learning a new skill or mastering a new area of competence like smart software. No, typing a prompt is not expert use. Typing a prompt is like using an automatic teller machine to get money. Basic use is not expert level capabilities.

image

If Mary cannot “learn” AI and demonstrate exceptional skills, she’s going to be working as an Etsy.com reseller. Thanks, Venice.ai. Not what I prompted but I understand that you are good enough, cash strapped, and degrading.

The second point is that in 2025, AI does not pay for itself in every use case. M. Frossard offers:

EBIT impact still timid: only 39% of companies report an increase in their EBIT (earnings before interest and taxes) attributable to AI, and for the most part, this impact remains less than 5%.

One interesting use case comes from a McKinsey report where billability is an important concept. The idea is that a bit of Las Vegas type thinking is needed when it comes to smart software. M. Frossard writes:

… the most successful companies [using artificial intelligence] are paradoxically those that report the most risks and negative incidents.

Takes risks and win big seems to be one interpretation of this statement. The timid and inept will be pushed aside.

Third, I was delighted to see that M. Frossard picked up on some of the crazy spending for data centers. He writes:

The cost of intelligence is collapsing: A major accelerating factor noted by the Stanford HAI Index is the precipitous fall in inference costs. The cost to achieve performance equivalent to GPT-3.5 has been divided by 280 in 18 months. This commoditization of intelligence finally makes it possible to make complex use cases profitable which were economically unviable in 2023. Here is a paradox: the more efficient and expensive artificial intelligence becomes produce (exploding training costs), the less expensive it is consume (free-fall inference costs). This mental model suggests that intelligence becomes an abundant commodity, leading not to a reduction, but to an explosion of demand and integration.

Several ideas bubble from this passage. First, we are back to training. Second, we are back to having significant expertise. Third, the “abundant commodity” idea produces greater demand. The problem (in addition to not having power for data centers will be people with exceptional AI capabilities).

Fourth, the replacement of some humans may not be possible. The essay reports:

the deployment of agents at scale remains rare (less than 10% in a given function according to McKinsey), hampered by the need for absolute reliability and data governance.

Data governance is like truth, love, and ethics. Easy to say and hard to define. The reliability angle is slightly less tricky. These two AI molecules require a catalyst like an expert human with significant AI competence. And this returns the essay to training. M. Frossard writes:

The transformation of skills: The 115K report emphasizes the urgency of training. The barrier is not technological, it is human. Businesses face a cultural skills gap. It’s not about learning to “prompt”, but about learning to collaborate with non-human intelligence.

Finally, the US has a China problem. M. Frossard points out:

… If the USA dominates investment and the number of models, China is closing the technical gap. On critical benchmarks such as mathematics or coding, the performance gap between the US and Chinese models has narrowed to nothing (less than 1 to 3 percentage points).

Net net: If an employee cannot be trained, that employee is likely to be starting a business at home. If the trained employees are not exceptional, those folks may be terminated. Elites like other elite things. AI may be good enough, but it provides an “objective” way to define and burn dead wood.

Stephen E Arnold, December 11, 2025

Google Gemini Hits Copilot with a Dang Block: Oomph

December 10, 2025

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Smart software is finding its way into interesting places. One of my newsfeeds happily delivered “The War Department Unleashes AI on New GenAI.mil Platform.” Please, check out the original document because it contains some phrasing which is difficult for a dinobaby to understand. Here’s an example:

The War Department today announced the launch of Google Cloud’s Gemini for Government as the first of several frontier AI capabilities to be housed on GenAI.mil, the Department’s new bespoke AI platform.

There are a number of smart systems with government wide contracts. Is the Google Gemini deal just one of the crowd or is it the cloud over the other players? I am not sure what a “frontier” capability is when it comes to AI. The “frontier” of AI seems to be shifting each time a performance benchmark comes out from a GenX consulting firm or when a survey outfit produces a statement that QWEN accounts for 30 percent of AI involving an open source large language model. The idea of a “bespoke AI platform” is fascinating. Is it like a suit tailored on Oxford Street or a vehicle produced by Chip Foose, or is it one of those enterprise software systems with extensive customization? Maybe like an IBM government systems solution?

image

Thanks, Google. Good enough. I wanted square and you did horizontal, but that’s okay. I understand.

And that’s just the first sentence. You are now officially on your own.

For me, the big news is that the old Department of Defense loved PowerPoint. If you have bumped into any old school Department of Defense professionals, the PowerPoint is the method of communication. Sure, there’s Word and Excel. But the real workhorse is PowerPoint. And now that old nag has Copilot inside.

The way I read this news release is that Google has pulled a classic blocking move or dang. Microsoft has been for decades the stallion in the stall. Now, the old nag has some competition from Googzilla, er, excuse me, Google. Word of this deal was floating around for several months, but the cited news release puts Microsoft in general and Copilot in particular on notice that it is no longer the de facto solution to a smart Department of War’s digital needs. Imagine a quarter century after screwing up a big to index the US government servers, Google has emerged as a “winner” among “several frontier AI capabilities” and will reside on “the Department’s new bespoke AI platform.”

This is big news for Google and Microsoft, its certified partners, and, of course, the PowerPoint users at the DoW.

The official document says:

The first instance on GenAI.mil, Gemini for Government, empowers intelligent agentic workflows, unleashes experimentation, and ushers in an AI-driven culture change that will dominate the digital battlefield for years to come. Gemini for Government is the embodiment of American AI excellence, placing unmatched analytical and creative power directly into the hands of the world’s most dominant fighting force.

But what about Sage, Seerist, and the dozens of other smart platforms? Obviously these solutions cannot deliver “intelligent agentic workflows” or unleash the “AI driven culture change” needed for the “digital battlefield.” Let’s hope so. Because some of those smart drones from a US firm have failed real world field tests in Ukraine. Perhaps the smart drone folks can level up instead of doing marketing?

I noted this statement:

The Department is providing no-cost training for GenAI.mil to all DoW employees. Training sessions are designed to build confidence in using AI and give personnel the education needed to realize its full potential. Security is paramount, and all tools on GenAI.mil are certified for Controlled Unclassified Information (CUI) and Impact Level 5 (IL5), making them secure for operational use. Gemini for Government provides an edge through natural language conversation, retrieval-augmented generation (RAG), and is web-grounded against Google Search to ensure outputs are reliable and dramatically reduces the risk of AI hallucinations.

But wait, please. I thought Microsoft and Palantir were doing the bootcamps, demonstrating, teaching, and then deploying next generation solutions. Those forward deployed engineers and the Microsoft certified partners have been beavering away for more than a year. Who will be doing the training? Will it be Googlers? I know that YouTube has some useful instructional videos, but those are from third parties. Google’s training is — how shall I phrase it — less notable than some of its other capabilities like publicizing its AI prowess.

The last paragraph of the document does not address the questions I have, but it does have a stentorian ring in my opinion:

GenAI.mil is another building block in America’s AI revolution. The War Department is unleashing a new era of operational dominance, where every warfighter wields frontier AI as a force multiplier. The release of GenAI.mil is an indispensable strategic imperative for our fighting force, further establishing the United States as the global leader in AI.

Several observations:

  1. Google is now getting its chance to put Microsoft in its place from inside the Department of War. Maybe the Copilot can come along for the ride, but it could be put on leave.
  2. The challenge of training is interesting. Training is truly a big deal, and I am curious how that will be handled. The DoW has lots of people to teach about the capabilities of Gemini AI.
  3. Google may face some push back from its employees. The company has been working to stop the Googlers from getting out of the company prescribed lanes. Will this shift to warfighting create some extra work for the “leadership” of that estimable company? I think Google’s management methods will be exercised.

Net net: Google knows about advertising. Does it have similar capabilities in warfighting?

Stephen E Arnold, December 10, 2025

MIT Iceberg: Identifying Hotspots

December 10, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I like the idea of identifying exposure hotspots. (I hate to mention this, but MIT did have a tie up with Jeffrey Epstein, did it not? How long did it take for that hotspot to be exposed? The dynamic duo linked in 2002 and wound down the odd couple relationship in 2017. That looks to me to be about 15 years.) Therefore, I approach MIT-linked research from some caution. Is this a good idea? Yep.)

What is this iceberg thing? I won’t invoke the Titanic’s encounter with an iceberg, nor will I point to some reports about faulty engineering. I am confident had MIT been involved, that vessel would probably be parked in a harbor, serving as a museum.

I read “The Iceberg Index: Measuring Skills-centered Exposure in the AI Economy.” You can too. The paper is free at least for a while. It also has 10 authors who generated 21 pages to point out that smart software is chewing up jobs. Of course, this simple conclusion is supported by quite a bit of academic fireworks.

image

The iceberg chart which reminds me of the Dark Web charts. I wonder if Jeffrey Epstein surfed the Dark Web while waiting for a meet and greet at MIT? The source for this image is MIT or possibly an AI system helping out the MIT graphic artist humanoid.

image

I love these charts. I find them eye catching and easily skippable.

Even though smart software makes up stuff, appears to have created some challenges for teachers and college professors (except those laboring in Jeffrey Epstein’s favorite grove of academic, of course), and people looking for jobs. The as is smart software can eliminate about 10 to 11 percent of here and now jobs. The good news is that 90 percent of the workers can wait for AI to get better and then eliminate another chunk of jobs. For those who believe that technology just gets better and better, the number of jobs for humanoids is likely to be gnawed and spat out for the foreseeable future.

I am not going to cause the 10 authors to hire SEO spam shops in Africa to make my life miserable. I will suggest, however, that there may be what I call de-adoption in the near future. The idea is that an organization is unhappy with the cost / value for its AI installation. A related factor is that some humans in an organization may introduce some work flow friction. The actions can range from griping about services interrupting work like Microsoft’s enterprise Copilot to active sabotage. People can fake being on a work related video conference, and I assume a college graduate (not from MIT, of course) might use this tactic to escape these wonderful face to face innovations. Nor will I suggest that AI may continue to need humans to deliver successful work task outcomes. Does an AI help me buy more of a product? Does AI boost your satisfaction with an organization pushing and AI helper on each of its Web pages?

And no academic paper (except those presented at AI conferences) are complete without some nifty traditional economic type diagrams. Here’s an example for the industrious reader to check:

image

Source: the MIT Report. Is it my imagination or five of the six regression lines pointing down? What’s negative correlation? (Yep, dinobaby stuff.)

Several observations:

  1. This MIT paper is similar to blue chip consulting “thought pieces.” The blue chippers write to get leads and close engagements. What is the purpose of this paper? Reading posts on Reddit or LinkedIn makes clear that AI allegedly is replacing jobs or used as an excuse to dump expensive human workers.
  2. I identified a couple of issues I would raise if the 10 authors had trooped into my office when I worked at a big university and asked for comments. My hunch is that some of the 10 would have found me manifesting dinobaby characteristics even though I was 23 years old.
  3. The spate of AI indexes suggests that people are expressing their concern about smart software that makes mistakes by putting lipstick on what is a very expensive pig. I sense a bit of anxiety in these indexes.

Net net: Read the original paper. Take a look at your coworkers. Which will be the next to be crushed because of the massive investments in a technology that is good enough, over hyped, and perceived as the next big thing. (Measure the bigness by pondering the size of Meta’s proposed data center in the southern US of A.) Remember, please, MIT and Epstein Epstein Epstein.

Stephen E Arnold, December 10, 2025

File Conversion. No Problem. No Kidding?

December 10, 2025

green-dino_thumbAnother short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

Every few months, I get a question about file conversion. The questions are predictable. Here’s a selection from my collection:

  1. “We have data about chemical structures. How can we convert these to for AI processing?”
  2. “We have back up files in Fastback encrypted format. How do we decrypt these and get the data into our AI system?”
  3. “We have some old back up tapes from our Burroughs’ machines?”
  4. “We have PDFs. Some were created when  Adobe first rolled out Acrobat and some  generated by different third-party PDF printing solutions. How can we convert these so our AI can provide our employees with access?”

The answer to each of these questions from the new players in AI-search system is, “No problem.” I hate to rain on these marketers’ assertions, but these are typical problems large, established organizations have moving content from a legacy system into a BAIT (big AI tech) based findability solution. There are technical challenges. There are cost challenges. There are efficiency challenges. That’s right. Challenges, and in my long career in electronic content processing, these hurdles still remain. But I am an aged dinobaby. Why believe me? Hire a Gartner-type of expert to tell you what you want to hear. Have fun with that solution, please.

image

Thanks, Venice.ai. Close enough for horse shoes, the high-water mark today I believe.

Venture Beat is one of my go-to sources for timely content marketing. On November 14, 2025, the venerable firm published “Databricks:  PDF Parsing for Agentic AI Is Still Unsolved. New Tool Replaces Multi-Service Pipelines with a Single Function.” The write up makes clear that I am 100 percent dead wrong about processing PDF files with their weird handling of tables, charts, graphs, graphic ornaments, and dense financial data.

The write up explains how really off base I am; for example, the Databricks Agent Bricks Platform. It cracks the AI parsing problem. I learned from the Venture Beat write up identifies what the DABP does with PDF information:

1 “Tables preserved exactly as they appear, including merged cells and nested structures

2 Figures and diagrams with AI-generated captions and descriptions

3 Spatial metadata and bounding boxes for precise element location

4 Optional image outputs for multimodal search applications”

Once the PDFs have been processed by DABP, the outputs can be used in a number of ways. I assume these are advanced, stable, and efficient as the name “databrick” metaphorically suggests:

1 Spark declarative pipelines

2 Unity catalog (I don’t know what this means)

3 Vector search (yep, search and retrieval)

4 AI function chaining (yep, bots)

5 Multi-agent supervisor (yep, command and control).

The write up concludes with this statement:

The Databricks approach sheds new light on an issue that many might have considered to be a solved problem. It challenges existing expectations with a new architecture that could benefit multiple types of workflows. However, this is a platform-specific capability that requires careful evaluation for organizations not already using Databricks. For technical decision-makers evaluating AI agent platforms, the key takeaway is that document intelligence is shifting from a specialized external service to an integrated platform capability.

Net net: What is novel in that chemical structure? What about that guy who retired in 2002 who kept a pile of Fastback floppies with his research into in Trinitrotoluene variants? Yep, content processing is not problem except the data on those back up tapes cranked out by that old Burroughs’ MFSOLT utility, but with the new AI approaches, who needs layers of contractors and conversion utilities. Just say, “Not a problem.” Everything is easy for a market collateral writer.

Stephen E Arnold, December 10, 2025

A Job Bright Spot: RAND Explains Its Reality

December 10, 2025

Optimism On AI And Job Market

Remember when banks installed automatic teller machines at their locations?  They’re better known by the acronym ATM.  ATMs didn’t take away jobs, instead they increased the number of banks, and created more jobs.  AI will certainly take away jobs but the technology will also create more.  Rand.org investigates how AI is affecting the job market in the article, “AI Is Making Jobs, Not Taking Them.”

What I love about this article is that it says the truth about aI technology: no one knows what will happen with it.  We have theories ,explored in science fiction, about what AI will do: from the total collapse of society to humdrum normal societal progress.  What Rand’s article says is that the research shows AI adoption is uneven and much slower than Wall Street and Silicon Valley say.   Rand conducted some research:

“At RAND, our research on the macroeconomic implications of AI also found that adoption of generative AI into business practices is slow going. By looking at recent census surveys of businesses, we found the level of AI use also varies widely by sector. For large sectors like transportation and warehousing, AI adoption hovered just above 2 percent. For finance and insurance, it was roughly 10 percent. Even in information technology—perhaps the most likely spot for generative AI to leave its mark—only 25 percent of businesses were using generative AI to produce goods and services.”

Most of the fear related to AI stems from automation of job tasks.  Here are some statistics from OpenAI:

“In a widely referenced study, OpenAI estimated that 80 percent of the workforce has at least 10 percent of their tasks exposed to LLM-driven automation, and 19 percent of workers could have at least 50 percent of their tasks exposed. But jobs are more than individual tasks. They are a string of tasks assembled in a specific way. They involve emotional intelligence. Crude calculations of labor market exposure to AI have seemingly failed to account for the nuance of what jobs actually are, leading to an overstated risk of mass unemployment.”

AI is a wondrous technology, but it’s still infantile and stupid.  Humans will adapt and continue to have jobs.

Whitney Grace, December 10, 2025

Next Page »

  • Archives

  • Recent Posts

  • Meta