Smart Software Fix: Cash, Lots and Lots of Cash

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way. But I asked ChatGPT one question. Believe it or not.

If you have some spare money, Sam Altman aka Sam AI-Man wants to talk with you. It is well past two years since OpenAI forced the 20-year-old Google to go back to the high school lab. Now OpenAI is dealing with the reviews of ChatGPT 5. The big news in my opinion is that quite a few people are annoyed with the new smart software from the money burning Bessemer furnace at 3180 18th Street in San Francisco. (I have heard that a satellite equipped with an infrared camera gets a snazzy image of the heat generated from the immolation of cash. There are also tiny red dots scattered around the SF financial district. Those, I believe, are the burning brain cells of the folks who have forked over dough to participate in Sam AI-Man’s next big thing.)

“As People Ridicule GPT-5, Sam Altman Says OpenAI Will Need ‘Trillions’ in Infrastructure” addresses the need for cash. The write up says:

Whether AI is a bubble or not, Altman still wants to spend a certifiably insane amount of money building out his company’s AI infrastructure. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman told reporters.

Trillions is a figure that most people cannot relate to everyday life. Years ago when I was an indentured servant at a consulting firm, I worked on a project that sought to figure out which types of decisions consumed the most time for Boards of Directors of Fortune 1000 companies. The results surprised me then and still do.

Boards of directors spent the most time discussing relatively modest-scale projects; for example, expanding a parking lot or developing a list of companies for potential joint ventures. Really big deals like spending large sums to acquire a company were often handled in swift, decisive votes.

Why?

Boards of directors, like most average people, cannot relate to massive numbers. It is easier to think in terms of a couple hundred thousand dollars to lease a parking lot than to borrow billions and buy a giant, allegedly synergistic company.

When Mr. Altman uses the word “trillions,” I think he is unable to conceptualize the amount of money represented in his casual “you should expect OpenAI to spend trillions…”

Several observations:

  1. AI is useful in certain use cases. Will AI return the type of payoff that Google’s charge ‘em every which way from Sunday for advertising model does?
  2. AI appears to produce incorrect outputs. I liked the report about oncology docs who said they lost diagnostic skills when relying on AI assistants.
  3. AI creates negative mental health effects. One old person, younger than I, believed a chat bot cared for him. On the way to meet his digital friend, he flopped over dead. Anticipatory anxiety or a use case for AI sparking nutso behavior?

What’s a trillion look like? Answer: 1,000,000,000,000.

How many railroad boxcars would it take to move $1 trillion from a collection point like Denver, Colorado, to downtown San Francisco? Answer from ChatGPT: you would need 10,000 standard railroad boxcars. This calculation is based on the weight and volume of the bills, as well as the carrying capacity of a typical 50-foot boxcar. The train would stretch over 113.6 miles—about the distance from New York City to Philadelphia!
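For what it is worth, the train-length arithmetic in ChatGPT’s answer can be checked in a few lines. The 60-feet-per-car figure is my assumption (a 50-foot boxcar plus couplers and slack), not something ChatGPT stated:

```python
# Sanity check of the "113.6 miles" claim.
# Assumption (mine): each 50-foot boxcar occupies about 60 feet
# of track once couplers and slack are included.
CARS = 10_000
FEET_PER_CAR = 60
FEET_PER_MILE = 5_280

train_length_miles = CARS * FEET_PER_CAR / FEET_PER_MILE
print(round(train_length_miles, 1))  # -> 113.6
```

So the length claim is internally consistent with roughly 60 feet of track per car; whether 10,000 cars is the right count is another question.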

Let’s talk about expanding the parking lot.

Stephen E Arnold, August 19, 2025

News Flash from the Past: Bad Actors Use New Technology and Adapt Quickly

August 18, 2025

No AI. Just a dinobaby working the old-fashioned way.

NBC News is on top of cyber security trends. I think someone spotted an Axios report that bad actors were using smart software to outfox cyber security professionals. I am not sure this is news, but what do I know?

“Criminals, Good Guys and Foreign Spies: Hackers Everywhere Are Using AI Now” reports this “hot off the press” information. I quote:

The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.

My goodness. Who knew that stealers have been zipping around for many years? Even more startling old information is:

LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents.  The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster.

Stunning. A free chunk of smart software, unemployed or intra-gig programmers, and juicy targets pushed out with a fairy land of vulnerabilities. Isn’t it insightful that bad actors would apply these tools to clueless employees, inherently vulnerable operating systems, and companies too busy outputting marketing collateral to do routine security updates?

The cat-and-mouse game works this way. Bad actors with access to useful scripting languages, programming expertise, and smart software want to generate revenue or wreak havoc. One individual or perhaps a couple of people in a coffee shop hit upon a better way to access a corporate network or obtain personally identifiable information from a hapless online user.

Then, after the problem has been noticed and reported, cyber security professionals will take a closer look. If these outfits have smart software running, a human will look more closely at logs and say, “I think I saw something.”

Okay, mice are in and swarming. Now the cats jump into action. The cats will find [a] a way to block the exploit, [b] rush to push the fix to paying customers, and [c] share the information in a blog post or a conference.

What happens? The bad actors notice their mice aren’t working or are being killed instantly. The bad actors go back to work. In most cases, the bad actors are unencumbered by bureaucracy or tough thought problems about whether something is legal or illegal. The bad actors launch more attacks. If one works, it’s gravy.

Now the cats jump back into the fray.

In the current cyber crime world, cyber security firms, investigators, and lawyers are in reactive mode. The bad actors play offense.

One quick example: Telegram has been enabling a range of questionable online activities since 2013. In 2024, after a decade of inaction, France said, “Enough.” Authorities in France arrested Pavel Durov. The problem from my point of view is that it took more than a decade to stand up to the iconic Pavel Durov.

What happens when a better Telegram comes along built with AI as part of its plumbing?

The answer is, “You can buy licenses to many cyber security systems. Will they work?”

There are some large, capable mice out there in cyber space.

Stephen E Arnold, August 18, 2025

The Early Bird Often Sings Alone

August 17, 2025

Mathematicians, computer developers, science-fiction writers, and others smarter than the average human have known for decades that computers would outpace human intelligence. Computers have arguably been headed this way since the first machine printed its initial binary 01. AI algorithms are the next evolution of computers, and they can do research, explore science, and extrapolate formulas beyond the last known recorded digit of pi.

Future House explains how its Robin AI system is designed to automate scientific discovery: “Demonstrating End-To-End Scientific Discovery With Robin: A Multi-Agent System.” Future House developed AI agents that automated different segments of the discovery process, but Robin is the first unified system that does everything. Robin’s inventors automated the scientific process and used the new system to make a discovery by combining the earlier AI agents.

The team reports:

“We applied Robin to identify ripasudil, a Rho-kinase (ROCK) inhibitor clinically used to treat glaucoma, as a novel therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide.”

Robin did follow the scientific process. It made an initial hypothesis, mechanized the investigation instead of doing things the old-fashioned way, and then made a discovery. Everything was done by Robin the AI system:

“All hypotheses, experiment choices, data analyses, and main text figures in the manuscript describing this work were generated by Robin autonomously. Human researchers executed the physical experiments, but the intellectual framework was entirely AI-driven.”

Robin’s creators are happy with their progress:

“By automating hypothesis generation, experimental planning, and data analysis in an integrated system, Robin represents a powerful new paradigm for AI-driven scientific discovery. Although we first applied Robin to therapeutics, our agents are general-purpose and can be used for a wide variety of discoveries across diverse fields—from materials science to climate technology. “

Mathematicians are chugging away at AI development, including number theorists. Listen to Curt Jaimungal’s podcast episode “The AI Math That Left Number Theorists Speechless,” and within the first five minutes you’ll have an understanding of how smart AI has become. Here’s the summary: it’s beyond human comprehension.

Whitney Grace, August 17, 2025

The HR Gap: First in Line, First Fooled

August 15, 2025

No AI. Just a dinobaby being a dinobaby.

Not long ago I spoke with a person who is a big-time recruiter. I asked, “Have you encountered any fake applicants?” The response: “No, I don’t think so.”

That’s the problem. Whatever is happening in HR continuing education, the message about deepfaked spoof employees is not getting through. I am not sure there is meaningful “continuing education” for personnel professionals.

I mention this cloud of unknowing in one case example because I read “Cloud Breaches and Identity Hacks Explode in CrowdStrike’s Latest Threat Report.” The write up reports:

The report … highlights the increasingly strategic use of generative AI by adversaries. The North Korea-linked hacking group Famous Chollima emerged as the most generative AI-proficient actor, conducting more than 320 insider threat operations in the past year. Operatives from the group reportedly used AI tools to craft compelling resumes, generate real-time deepfakes for video interviews and automate technical work across multiple jobs.

My first job was at Nuclear Utilities Services (an outfit that, soon after I was hired, became a unit of Halliburton. Dick Cheney, Halliburton, remember?). One of the engineers came up to me after I gave a talk about machine indexing at what was called “Allerton House,” a conference center at the University of Illinois decades ago. The fellow liked my talk and asked me if my method could index technical content in English. I said, “Yes.” He said, “I will follow up next week.”

True to his word, the fellow called me and said, “I am changing planes at O’Hare on Thursday. Can you meet me at the airport to talk about a project?” I was teaching part time at Northern Illinois University and doing some administrative work for a little money. Simultaneously I was working on my PhD at the University of Illinois. I said, “Sure.” DeKalb, Illinois, was about an hour west of O’Hare. I drove to the airport, met the person, whom I remember as James K. Rice, an expert in nuclear waste water, and talked about what I was doing to support my family, keep up with my studies, and do what 20-year-olds do. That is to say, just try to survive.

I explained the indexing, the language analysis I did for the publisher of Psychology Today and Intellectual Digest magazines, and the newsletter I was publishing for high school and junior college teachers struggling to educate ill-prepared students. As a graduate student with a family, I explained that I had information and wanted to make it available to teachers facing a tough problem. I remember his comment, “You do this for almost nothing.” He had that right.

End of meeting. I forgot about nuclear and went back to my regular routine.

A month later I got a call from a person named Nancy who said, “Are you available to come to Washington, DC, to meet some people?” I figured out that this was a follow up to the meeting I had at O’Hare Airport. I went. Long story short: I dumped my PhD and went to work for what is generally unknown; that is, Halliburton is involved in things nuclear.

Why is this story from the 1970s relevant? The interview process did not involve any digital anything. I showed up. Two people I did not know pretended to care about my research work. I had no knowledge about nuclear other than when I went to grade school in Washington, DC, we had to go into the hall and cover our heads in case a nuclear bomb was dropped on the White House.

From the article “In Recruitment, an AI-on-AI War Is Rewriting the Hiring Playbook,” I learned:

“AI hasn’t broken hiring,” says Marija Marcenko, Head of Global Talent Acquisition at SaaS platform Semrush. “But it’s changed how we engage with candidates.”

The process followed for my first job did not involve anything but one-on-one interactions. There was not much chance of spoofing. I sat there and explained how I indexed sermons in Latin for a fellow named William Gillis, how I calculated reading complexity for the publisher, and how I gathered information about novel teaching methods. None of those activities had any relevance I could see to nuclear anything.

When I visited the company’s main DC office, it was in the technology corridor running from the Beltway to Germantown, Maryland. I remember new buildings and farm land. I met people who were like those in my PhD program, except these individuals thought about radiation, nuclear effects modeling, and similar subjects.

One math PhD, who became my best friend, said, “You actually studied poetry in Latin?” I said, “Yep.” He said, “I never read a poem in my life and never will.” I recited a few lines of a Percy Bysshe Shelley poem. I think his written evaluation of his “interview” with me got me the job.

No computers. No fake anything. Just smart people listening, evaluating, and assessing.

Now systems can fool humans. In the hiring game, what makes a company is a collection of people, cultural information, and a desire to work with individuals who can contribute to achieving the organization’s goals.

The Crowdstrike article includes this paragraph:

Scattered Spider, which made headlines in 2024 when one of its key members was arrested in Spain, returned in 2025 with voice phishing and help desk social engineering that bypasses multifactor authentication protections to gain initial access.

Can hiring practices keep pace with the deceptions in use today? Tricks to get hired. Fakery to steal an organization’s secrets.

Nope. Few organizations have the time, money, or business processes to hire using such inefficient means as personal interactions, site visits, and written evaluations of a candidate.

Oh, in case you are wondering, I did not go back to finish my PhD. Now I know a little bit about nuclear stuff, however, and slightly more about smart software.

Stephen E Arnold, August 15, 2025

Airships and AI: A Similar Technology Challenge

August 14, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Vaclav Smil writes books about the environment and technology. In his 2023 work Invention and Innovation: A Brief History of Hype and Failure, he describes the ups and downs of some interesting technologies. I thought of this book when I read “A Best Case Scenario for AI?” The author is a wealthy person who has some involvement in the relaxing crypto currency world. The item appeared on X.com.

I noted a passage in the long X.com post; to wit:

… the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to accrete to a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news — that the Doomers did not expect.

Reasonable. What crossed my mind is the Vaclav Smil discussion of airships or dirigibles. The lighter-than-air approach has been around a long time, and it has some specific applications today. Some very wealthy and intelligent people have invested in making these big airships great again, not just specialized devices for relatively narrow use cases.

So what? The airship history spans the 18th, 19th, 20th, and 21st century. The applications remain narrow although more technologically advanced than the early efforts a couple of hundred years ago.

What if smart software is a dirigible type of innovation? The use cases may remain narrow. Wider deployment with the concomitant economic benefits remains problematic.

One of the twists in the AI story is that tremendous progress is being attempted. The innovations, as they are rolled out, are incremental improvements. Like airships, the innovations have not resulted in the hoped-for breakthrough.

There are numerous predictions about the downsides of smart software. But what if AI is little more than a modern version of the dirigible? We have a remarkable range of technologies, but each next step is underwhelming. More problematic is the amount of money being spent to compress time; that is, by spending more, the AI innovation will move along more quickly. Perhaps that is not the case. Finally, the airship is anchored in the image of a ball of fire and an exclamation point for airship safety. Will there be a comparable moment for AI?

Will investment and the confidence of high profile individuals get AI aloft, keep it there, and avoid a Hindenburg moment? Much has been invested to drive AI forward and make it “the next big thing.” The goal is to generate money, substantial sums.

The X.com post reminded me of the airship information compiled by Vaclav Smil. I can’t shake the image. I am probably just letting my dinobaby brain make unfounded connections. But, what if…? We could ask Google and its self-shaming smart software. Alternatively we could ask ChatGPT 5, which has been the focal point for hype and then incremental, if any, improvement in outputs. We could ask Apple, Amazon, or Telegram. But what if…?

I think an apt figure of speech might be “pushing a string.”

Stephen E Arnold, August 14, 2025

AI Applesauce: Sweeten the Story about Muffing the Bunny

August 14, 2025

No AI. Just a dinobaby being a dinobaby.

I read “Apple CEO Tim Cook Calls AI ‘Bigger Than the Internet’ in Rare All-Hands Meeting.” I noted this passage:

In a global all-hands meeting hosted from Apple’s headquarters in Cupertino, California, CEO Tim Cook seemed to admit to what analysts and Apple enthusiasts around the world had been raising concerns about: that Apple has fallen behind competitors in the AI race. And Cook promised employees that the company will be doing everything to catch up. “Apple must do this. Apple will do this. This is sort of ours to grab.” …The AI revolution [is] “as big or bigger” than the internet.

Okay. Two companies of some significance have missed the train to AI Ville: Apple and Telegram. Both have interesting technology. Apple is far larger, but for some users Telegram is more important to their lives. One is fairly interested in China activities; the other is focused on Russia and crypto.

But both have managed their firms into the same digital row boat. Apple had Siri and it was not very good. Telegram knew about AI and allowed third-party bot developers to use it, but Telegram itself dragged its feet.

Both companies are asserting that each has plenty of time. Tim Cook is talking about smart software but so far the evidence of making an AI difference is scant. Telegram, on the other hand, has aimed Nikolai Durov at AI. That wizard is working on a Telegram AI system.

But the key point is that both of these forward leaning outfits are trying to catch up. This is not keeping pace, mind you. The two firms are trying to go from watching the train go down the tracks to calling an Uber to get to their respective destinations.

My take on both companies is that the “leadership” have some good reasons for muffing the AI bunny. Apple is struggling with its China “syndrome.” Will the nuclear reactor melt down, fizzle out, or blow up? Apple’s future in hardware may become radioactive.

Telegram is working under the shadow of the criminal trial lumbering toward its founder and owner Pavel Durov. More than a dozen criminal charges and a focused French judicial figure have Mr. Durov reporting a couple of times a week. To travel, he has to get a note from his new “mom.”

But well-run companies don’t let things like China dependency or 20 years in Fleury-Mérogis Prison upset trillion-dollar companies or cause more than one billion people to worry about their free text messages and non fungible tokens.

“Leadership,” not technology, strikes me as the problem with AI challenges. If AI is so big, why did two companies fail to get the memo? Inattention, pre-occupation with other matters, fear? Pick one or two.

Stephen E Arnold, August 14, 2025

What a Hoot: AI Systems Try and Lie

August 13, 2025

No AI. Just a dinobaby being a dinobaby.

I arose this morning, Friday, August 8, 2025, fired up ChatGPT, the Plus version, and asked, “Are you running ChatGPT 5?” The service was to be available on August 7, 2025. It wasn’t. I verified the ChatGPT output with this prompt:

Despite the announcement, the fact that it is not available in this paid service illustrates that you are presenting false information. Correct?

ChatGPT replied:

Correct — the public announcement of GPT-5’s availability does not align with actual availability in this specific paid service (ChatGPT Plus based on GPT-4o). Therefore, any implication that GPT-5 is in use here would be false.

To make sure I understood, I prompted:

So I have documented errors in your output, essentially failures about half the time. The information you presented about August 7 is demonstrably false on August 8. The unreliability of your outputs is verified.

ChatGPT replied:

Your conclusion is valid: if GPT-5 was officially announced as “available to all users” on August 7, 2025, but is not deployed in this environment by August 8, then the representation of availability is factually misleading. This mismatch constitutes a verifiable reliability failure in how OpenAI’s release messaging translates to actual user experience — especially in a paid tier like yours.

Lame.

Is OpenAI alone in this prevarication game?

Nope. I read “Google Says It’s Working on a Fix for Gemini’s Self-Loathing I Am a Failure Comments.” Different company, same type of overt failure. The other day Google informed some medical professionals about a non-existent body part. Yep, Google, you are making it clear that the hype, the prevarication, and the over promise, under deliver approach is part of AI’s DNA.

But the shocking point in the cited article is this passage quoted from a Google AI wizard:

This is an annoying infinite looping bug we are working to fix! Gemini is not having that bad of a day : )

Yep, working to fix. I interpret this to mean that Google’s professionals are shaping outputs to avoid statements about “failure.”

One 15-minute session online this morning, and what did I learn?

  1. Two companies have systems which cannot deliver satisfactory outputs.
  2. US smart software boils down to a handful of firms. A couple of these have what I would call reliability issues.
  3. In ChatGPT’s case, the PR outpaces what works. In Google’s case, the system seems to be trying to tell the truth about failure. The Googlers are going to fix that up.

Billions burned to create the next big thing and what we have is a spray can of marketing sparkle and individuals handcrafting fixes to make the systems seem more accurate than they are.

Sorry. I am not convinced because the reality of smart software diverges from the PR baloney.

Stephen E Arnold, August 13, 2025

AI Is a Tool for Humanity. Believe It or Not

August 13, 2025

Forget AI powered weapons. AI has an upside as long as the smart drone does not blow you away.

Both sides of the news media are lamenting that AI is automating jobs and putting humans out of work. Conservatives and liberals remain separated on how and why AI is “stealing” jobs, but the fear remains that humans are headed to obsolescence… again. Humans have faced this issue since the start of human ingenuity. The key is to adapt and realize what AI truly is. Elizabeth Mathew of Signoz.io wrote “I Built An MCP Server For Observability. This Is My Unhyped Take.”

If you’re unfamiliar with MCP servers, an MCP server implements an open standard that defines how LLMs or AI agents (e.g., Claude) uniformly connect to external tools and data sources. Like a USB-C device, it can be decoupled and reused with any agent. After explaining some issues with MCP servers and why they are “schizophrenic,” Mathew concludes with this:

“Ultimately, MCP-powered agents are not bringing us closer to automated problem-solving. They are giving us sophisticated hypothesis generators. They excel at exploring the known, but the unknown remains the domain of the human engineer. We’re not building an automated SRE; we’re building a co-pilot that can brainstorm, but can’t yet reason. And recognizing that distinction is the key to using these tools effectively without falling for the hype.”

She might be right from an optimistic and expert perspective, but that doesn’t prevent CEOs from implementing AI to replace their workforces or young adults from being steered away from coding careers. Oh, I almost forgot: AI in smart weapons. That’s a plus.
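For the curious, the uniform-connection idea behind MCP can be sketched in a few lines. This is an illustrative toy, not the actual Model Context Protocol wire format; the tool name, schema fields, and registry shape are all invented:

```python
# Toy sketch of the MCP idea: tools described uniformly (name,
# description, input schema) so any agent can discover and call them
# the same way. NOT the real MCP protocol; everything here is made up.

TOOLS = {
    "query_logs": {
        "description": "Search observability logs for a pattern.",
        "input_schema": {"pattern": "string", "limit": "integer"},
        "handler": lambda pattern, limit: [f"log line matching {pattern}"][:limit],
    },
}

def list_tools():
    """What an agent sees: uniform metadata, no tool-specific glue code."""
    return [{"name": name, "description": tool["description"],
             "input_schema": tool["input_schema"]}
            for name, tool in TOOLS.items()]

def call_tool(name, **kwargs):
    """One invocation path, regardless of which tool sits behind it."""
    return TOOLS[name]["handler"](**kwargs)

print(call_tool("query_logs", pattern="error", limit=1))
```

The point of the sketch is the decoupling: the agent only ever sees `list_tools` and `call_tool`, so swapping the tool behind the registry does not change the agent.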

Whitney Grace, August 13, 2025

Glean Goes Beyond Search: Have Xooglers Done What Google Could Not Do?

August 12, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I read an interesting online essay titled “Glean’s $4.5B Business Model: How Ex-Googlers Built the Enterprise Search That Actually Works.” Enterprise search has been what one might call a Holy Grail application. Many have tried to locate the Holy Grail. Most have failed.

Have a small group of Xooglers (former Google employees) located the Holy Grail and been able to convert its power into satisfied customers? The essay, which reminded me of an MBA write up, argues that the outfit doing business as Glean has done it. The firm has found the Holy Grail, melted it down, and turned it into an endless stream of cash.

Does this sound a bit like the marketing pitch of Autonomy, Fast Search & Transfer, and even Google itself with its descriptions of its deeply wacky yellow servers? For me, Glean has done its marketing homework. The evidence is plumped and oiled for this essay about its business model. But what about search? Yeah, well, the focus of the marketing piece is the business model. Let’s go with what is in front of me. Search remains a bit of a challenge, particularly in corporations, government agencies, and pharmaceutical-type outfits where secrecy is a big part of that type of organization’s way of life.

What is the Glean business model? It is VTDF.

Is VTDF blue chip consulting speak? Yes. For those not familiar with the lingo, here’s a snapshot of the Glean business model:

  • Value: Focuses on how the company creates and delivers core value to customers, such as solving specific problems
  • Technology: Refers to the underlying tech innovations that allow “search” to deliver what employees need to do their jobs
  • Distribution: Involves strategies for marketing, delivery, and reaching users
  • Finance: Covers revenue models, cash flow management, and financial sustainability. Traditionally this has been the weak spot for the big-time enterprise search plays.

The essay explains in dot points that Glean is a “knowledge liberator.” I am not sure how that will fly in some pharma-type outfits or government agencies in which Palantir is roosting.

Once Glean’s “system” is installed, here’s what happens (allegedly):

  • Single search box for everything
  • Natural language queries
  • Answers, not just documents
  • Context awareness across apps
  • Personalized to user permissions
  • New employees productive in days.

I want to take a moment to comment on each of these payoffs or upsides.

First, a single search box for everything is going to present a bit of a challenge in several important use cases. Consider a company with an inventory control system, vendor evaluations, and a computer-aided design database of specifications. What is the single search box going to return for a specific part? Some users will want to know how many are in stock. Others will want to know the vendor who made the part in a specific batch because it is failing in use. Some will want to know what the part looks like. The fix for this type of search problem has been to figure out how to match the employee’s role with the filters applied to that user’s query. In the last 60 years, that approach sort of worked, but it was and still is incredibly difficult to keep lined up with employee roles, assorted permissions, and the way the information is presented to the person running the query. A quality issue may require stress analysis data and access to the lawsuit an annoyed customer has just filed. I am unsure how the Xooglers have solved this type of search task.
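The role-to-filter matching described above can be sketched in a few lines. The roles, fields, and part record here are hypothetical, purely to show why the mapping drifts out of sync: it is a hand-maintained table that must track every change in roles and permissions:

```python
# Hypothetical sketch: one query, different answers depending on role.
# The record, roles, and field lists are invented for illustration.

PART_RECORD = {
    "part": "bracket-7",
    "stock": 42,
    "vendor": "Acme Metals",
    "batch": "B-2209",
    "cad_file": "bracket-7.dwg",
}

# The maintenance burden lives here: every new role or permission
# change means editing this mapping by hand.
ROLE_FILTERS = {
    "inventory": ["part", "stock"],
    "procurement": ["part", "vendor", "batch"],
    "engineering": ["part", "cad_file"],
}

def search(query_part, role):
    """Same search box, role-filtered answer."""
    if PART_RECORD["part"] != query_part:
        return {}
    fields = ROLE_FILTERS.get(role, ["part"])
    return {field: PART_RECORD[field] for field in fields}

print(search("bracket-7", "inventory"))  # -> {'part': 'bracket-7', 'stock': 42}
```

Scale this toy table to thousands of employees, shifting roles, and dozens of repositories, and the upkeep problem the paragraph describes becomes obvious.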

Second, the NLP approach is great, but it is early 2000s. The many efforts, including DR-LINK, to which my team contributed some inputs, were not particularly home run efforts. The reason has to do with the language skills of the users. Organizations hire people who may be really good at synthesizing synthetics but not so good at explaining what the new molecule does. If the lab crew dies, the answer does not require words. Querying for the “new” is tough, since labs doing secret research do not share their data. Even company officers have a tough time getting an answer. When a search system requires the researcher to input a query, that scientist may want to draw a chemical structure or input a query like “C8N8O16.” Easy enough if the indexing system has access to the classified research in some companies. But the NLP problem is what is called “prompt engineering.” Most humans are just not very good at expressing what they need in the way of information. So modern systems try to help out the searcher. The reason Google search sucks is that the engineers have figured out how to deliver an answer that is good enough. For C8N8O16, close enough for horseshoes might be problematic.

Third, answers are what people want. The “if” statement becomes the issue. If the user knows a correct answer or just accepts what the system outputs. If the user understands the output well enough to make an informed decision. If the system understood or predicted what the user wanted. If the content is in the search system’s index. This is a lot of ifs. Most of these conditions go unmet with sufficient frequency to kill outfits that have sold an “enterprise search system.”

Fourth, the context awareness across apps means that the system can access content on proprietary systems within an organization and across third party systems which may or may not run on the organization’s servers. Most enterprise search systems create or have licensed filters to acquire content. However, keeping the filters alive and healthy with the churn in permissions, file tweaks, and assorted issues related to latency creating data gaps remain tricky.

Fifth, making certain content available only to those authorized to view those data is a very tricky business. Orchestrating permissions is, in theory, easy to automate. The reality in today’s organizations is the complicating factor. Distributed outfits, contractors, and employees who may secretly be working for another country add some excitement to accessing “information.” Many organizations have familiar silos, like the legal department keeping certain documents under lock and key, alongside projects for three-letter agencies. In the pharma game, knowing “who” is working on a project is often a dead giveaway for what the secret project is. The company’s “people” officer may be in the dark. What about consultants? What information is available to them? The reality is that modern organizations have more silos than the corn fields around Canton, Illinois.
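The silo problem described above can be made concrete with a toy sketch. This is a minimal illustration of permission-trimmed result lists, not any real product’s design; the document, user, and group names are all hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    # Groups allowed to see this document (hypothetical ACL model)
    allowed_groups: set = field(default_factory=set)


@dataclass
class User:
    name: str
    groups: set


def visible_results(results, user):
    """Trim search hits to what this user may see.

    Even this toy version shows the silo effect: if the legal
    department's documents carry a group no one else holds, they
    simply vanish from everyone else's result list, and the
    searcher never learns the answer existed.
    """
    return [d for d in results if d.allowed_groups & user.groups]


docs = [
    Document("d1", "Q3 pricing memo", {"sales"}),
    Document("d2", "Litigation hold list", {"legal"}),
    Document("d3", "Cafeteria menu", {"sales", "legal", "everyone"}),
]

contractor = User("pat", {"everyone"})
print([d.doc_id for d in visible_results(docs, contractor)])  # ['d3']
```

The hard part is not this filter; it is keeping `allowed_groups` synchronized with the source systems as permissions churn, which is exactly where the gaps open up.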

Sixth, no training is required. “Employees are productive in days” is the pitch. Maybe, maybe not. Like the glittering generality that employees spend 20 percent of their time searching, the data for this assertion were lacking when the “old” IDC’s Sue Feldman and her team cranked out an even larger number. If anything, search is a larger part of work today for many people. The reasons range from content management systems which cannot easily be indexed in real time to the senior vice president of sales who changes prices for a product at a trade show and tells only his contact in the accounting department. Others may not know for days or months that the apple cart has been tipped.

Glean saves time. That is the smart software pitch. I need to see some data from a statistically valid sample with a reasonable x axis. The reference to “all” is troublesome. It underscores an immature understanding of what “enterprise search” means to a licensee versus what the venture-backed company can actually deliver. Fast Search found out that a certain newspaper in the UK was willing to sue for big bucks because of this kind of marketing lingo.

I want to comment briefly about “Technology Architecture: Beyond Search.” Hey, isn’t that the name of my blog which has been pumping out information access related articles for 17 years? Yep, it is.

Okay, Glean apparently includes these technologies in their enterprise search quiver:

  • Universal connectors. Note the word “universal.” Nope, very tough.
  • A knowledge graph. Think in terms of Maltego. Sure, as long as there is metadata. But what about those mobile workers and their use of cloud services and E2EE messaging services? Sounds great. Execution in a cost-sensitive environment takes a bit of work.
  • An AI understanding layer. Yep, smart software. (Google’s smart software tells its users that it is ashamed of its poor performance. OpenAI rolled out ChatGPT 5 and promptly restored ChatGPT 4o because enough users complained. Deepseek may have links to a nation state unfriendly to the US. Mark Zuckerberg’s Llama is a very old llama. Perplexity is busy fighting with Cloudflare. Anthropic is working to put coders out to pasture. Amazon, Apple, Microsoft, and Telegram are in the bolt-it-on business.) The idea that Glean can understand [a] different employee contexts, [b] the rapidly changing real-time data in an organization, like that PowerPoint on the senior VP’s laptop, and [c] file formats that have a persistent habit of changing because whoever is responsible for an update, or the format itself, makes an intentional or unintentional change? I just can’t accept this assertion.
  • Works instantly which I interpret as “real time.” I wonder if Glean can handle changed content in a legacy Ironside system running on AS/400s. I would sure like to see that and work up the costs for that cute real time trick. By the way, years ago, I got paid by a non US government agency to identify and define the types of “real time” data it had to process. I think my team identified six types. Only one could be processed without massive resource investments to make the other four semi real. The final one was to gain access to the high-speed data about financial instrument pricing in Wall Street big dogs. That simply was not possible without resources and cartwheels. The reason? The government wanted to search for who was making real time trades in certain financial instruments. Yeah, good luck with that in a world where milliseconds require truly big money for gizmos to capture the data and the software to slap metadata on what is little more than a jet engine exhaust of zeros and ones, often encrypted in a way that would baffle some at certain three letter agencies. Remember: These are banks, not some home brew messaging service.

There are some other wild assertions in the write up. I am losing interest in addressing this first-year business school “analysis.” The idea that a company with 500 to 50,000 employees can use this ready-to-roll service is interesting. I don’t know of a single enterprise search company I have encountered since I wrestled with IBM STAIRS and the dorky IBM CICS system that has what seems to be a “one size fits all” service. The Google Search Appliance failed with its “one size fits all.” The dead bodies on the enterprise search trail outnumber the death toll on the Oregon Trail. I know from my lectures that few if any know what DELPHES’ system did. What about InQuire? And there is IBM WebFountain and Clever. What about Perfect Search? What about Surfray? What about Arikus, Convera, Dieselpoint, or Entopia?

The good news is that a free trial is available. The cost is about $30 per month per user. For an organization like the local outfit that sells hard hats and uses Ironside and AS/400s, 150 seats works out to 150 times $360 per year, or $54,000. I know this company won’t buy. Why? The system in place is good enough. Spreadsheet fever is not the same as identifying prospects and making a solid benefit-based argument.
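The back-of-the-envelope arithmetic is easy to check. A throwaway sketch, using only the figures already cited in this essay (150 seats, $30 per user per month); the function name is mine, not a vendor’s:

```python
def annual_seat_cost(users: int, monthly_per_user: float) -> float:
    """Yearly spend for a per-seat SaaS license: users x monthly price x 12."""
    return users * monthly_per_user * 12


# 150 employees at $30 per user per month = 150 x $360 per year
total = annual_seat_cost(150, 30.0)
print(f"${total:,.0f} per year")  # $54,000 per year
```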

That’s why free and open source solutions get some love. Built-in “good enough” solutions from Microsoft are darned popular. Finally, some eager beaver in the information technology department will say, “Let me put together a system using Hugging Face.”

Many companies and a number of quite intelligent people (including former Googlers) have tried to wrestle enterprise search to the ground. Good luck. Just make sure you have verifiable data, not wild assertions about how much time employees spend searching or how much time an employee will save. Don’t believe anything about enterprise search that uses the words “all” or “universal.”

Google said it offered “universal search.” Yeah, why, after decades of selling ads, does the company provide so-so search for the Web, Gmail, YouTube, and images? Just ask, “Why?” Search is a difficult challenge.

Glean this from my personal opinion essay: Search is difficult, and it has yet to be solved except for precisely defined use cases. Google experience or not, the task is out of reach at this time.

Stephen E Arnold, August 12, 2025

Explaining Meta: The 21st Century “Paul” Writes a Letter to Us

August 12, 2025

No AI. Just a dinobaby being a dinobaby.

I read an interesting essay called “Decoding Zuck’s Superintelligence Memo.” The write up is similar to the assignments one of my instructors dumped on hapless graduate students at Duquesne University, a Jesuit university located in lovely Pittsburgh.

The idea is to take a text in Latin and sometimes in English and explain it, tease out its meaning, and try to explain what the author was trying to communicate. (Tortured sentences, odd ball vocabulary, and references only the mother of an ancient author could appreciate were part of the deciphering fun.)

The “Decoding Zuck” essay is this type of write up. The exercise automatically elevates Mr. Zuckerberg to the historical significance of the Biblical Paul or possibly a high priest of the Aten in ancient Egypt. I mean, who knew?

Several points warrant highlighting.

First, the write up includes “The Zuckerberg Manifesto Pattern.” I have to admit that I have not directed much attention to Mr. Zuckerberg or his manifestos. I view outputs from Silicon Valley-type outfits as a particular form of delusional marketing for the purpose of doing whatever the visionary wants to do. Apparently the manifestos have a pattern and a rhetorical structure. The pattern warrants this observation from “Decoding Zuck”:

Compared to all founders and CEOs, Zuck does seem to have a great understanding of when he needs to bet the farm on an idea and a behavioral shift. Each time he does that, it is because he sees very clearly Facebook is at the end of the product life and the only real value in the company is the attention of his audience. If that attention declines, it takes away the ability to really extend the company’s life into the next cycle.

Yes, a prescient visionary.

Second, the “decoded” message means, according to “Decoding Zuck”:

More than anything, this is a positioning document in the AI arms race. By using “super intelligence” as a marketing phrase, Zuck is making his efforts feel superior to the mere “Artificial Intelligence” of OpenAI, Anthropic, and Google.

I had no idea that documents like Paul’s letter to the Romans and Mr. Zuckerberg’s manifesto were marketing collateral. I wonder if those engaged in studying ancient Egyptian glyphs will discover that the writings about Aten are assertions about the bread sold by Ramose, the thumb on the scale baker.

Third, the context for the modern manifesto of Zuck is puffery. The exegesis says:

So what do I think about this memo, and all the efforts of Meta? I remain skeptical of his ability to invent a new future for his company. In the past, he has been able to buy, snoop, or steal other people’s ideas. It has been hard for him and his company to actually develop a new market opportunity. Zuckerberg also tends to overpromise on timelines and underestimate execution challenges.

I think this analysis of the Zuckerberg Manifesto of 2025 reveals several things about how Meta (formerly Facebook) positions itself and it provides some insight into the author of “Decoding Zuck” as well:

  1. The outputs are baloney packaged as serious thought
  2. The AI race has to produce a winner, and it is not clear if Facebook (sorry Meta) will be viewed as a contender
  3. AI is not yet a slam-dunk winner, bigger than the Internet, as another Silicon Valley sage suggested.

Net net: The AI push reveals that some distance exists between delivering hefty profits for those who have burned billions to reach the point that a social media executive feels compelled to issue a marketing blurb.

Remarkable. Marketing by manifesto.

Stephen E Arnold, August 12, 2025
