AI Is a Tool for Humanity. Believe It or Not
August 13, 2025
Forget AI-powered weapons. AI has an upside, as long as the smart drone does not blow you away.
Both sides of the news media are lamenting that AI is automating jobs and putting humans out of work. Conservatives and liberals remain separated on how and why AI is “stealing” jobs, but the fear remains that humans are headed for obsolescence… again. Humans have faced this issue since the start of human ingenuity. The key is to adapt and realize what AI truly is. Elizabeth Mathew of Signoz.io wrote: “I Built An MCP Server For Observability. This Is My Unhyped Take.”
If you’re unfamiliar with MCP, it is an open standard that defines how LLMs and AI agents (e.g., Claude) connect uniformly to external tools and data sources. Like a USB-C port, an MCP server can be decoupled from one agent and plugged into another.
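For the curious, a minimal MCP server is a small thing. Here is a sketch using the official Python SDK; the tool name and its stubbed latency figure are my illustrative assumptions, not anything Signoz ships:

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK,
# installable via `pip install mcp`). The tool below is a stub for
# illustration; a real observability server would query a backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("observability-demo")

@mcp.tool()
def p99_latency(service: str, minutes: int = 15) -> str:
    """Report p99 latency for a service over a recent window (stubbed)."""
    return f"p99 latency for {service} over the last {minutes}m: 412ms"

if __name__ == "__main__":
    # Any MCP-aware agent (Claude, for example) can now discover and
    # call p99_latency uniformly, which is the USB-C-like decoupling.
    mcp.run()
```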
After explaining some issues with MCP servers and why they are “schizophrenic,” Mathew concludes with this:
“Ultimately, MCP-powered agents are not bringing us closer to automated problem-solving. They are giving us sophisticated hypothesis generators. They excel at exploring the known, but the unknown remains the domain of the human engineer. We’re not building an automated SRE; we’re building a co-pilot that can brainstorm, but can’t yet reason. And recognizing that distinction is the key to using these tools effectively without falling for the hype.”
She might be right from an optimistic and expert perspective, but that doesn’t prevent CEOs from implementing AI to replace their workforces or young adults from being steered away from coding careers. Oh, I almost forgot: AI in smart weapons. That’s a plus.
Whitney Grace, August 13, 2025
Glean Goes Beyond Search: Have Xooglers Done What Google Could Not Do?
August 12, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read an interesting online essay titled “Glean’s $4.5B Business Model: How Ex-Googlers Built the Enterprise Search That Actually Works.” Enterprise search has been what one might call a Holy Grail application. Many have tried to locate the Holy Grail. Most have failed.
Have a small group of Xooglers (former Google employees) located the Holy Grail and been able to convert its power into satisfied customers? The essay, which reminded me of an MBA write up, argues that the outfit doing business as Glean has done it. The firm has found the Holy Grail, melted it down, and turned it into an endless stream of cash.
Does this sound a bit like the marketing pitch of Autonomy, Fast Search & Transfer, and even Google itself with its descriptions of its deeply wacky yellow servers? For me, Glean has done its marketing homework. The evidence is plumped and oiled for this essay about its business model. But what about search? Yeah, well, the focus of the marketing piece is the business model. Let’s go with what is in front of me. Search remains a bit of a challenge, particularly in corporations, government agencies, and pharmaceutical-type outfits where secrecy is a big part of the organization’s way of life.
What is the Glean business model? It is VTDF. Here’s an illustration:
Does this visual look like blue chip consulting art? Is VTDF blue chip speak? Yes. And yes. For those not familiar with the lingo here’s a snapshot of the Glean business model:
- Value: Focuses on how the company creates and delivers core value to customers, such as solving specific problems
- Technology: Refers to the underlying tech innovations that allow “search” to deliver what employees need to do their jobs
- Distribution: Involves strategies for marketing, delivery, and reaching users
- Finance: Covers revenue models, cash flow management, and financial sustainability. Traditionally this has been the weak spot for the big-time enterprise search plays.
The essay explains in dot points that Glean is a “knowledge liberator.” I am not sure how that will fly in some pharma-type outfits or government agencies in which Palantir is roosting.
Once Glean’s “system” is installed, here’s what happens (allegedly):
- Single search box for everything
- Natural language queries
- Answers, not just documents
- Context awareness across apps
- Personalized to user permissions
- New employees productive in days.
I want to take a moment to comment on each of these payoffs or upsides.
First, a single search box for everything is going to present a bit of a challenge in several important use cases. Consider a company with an inventory control system, vendor evaluations, and a computer-aided design system with a database of specifications. The single search box is going to return what for a specific part? Some users will want to know how many are in stock. Others will want to know the vendor who made the part in a specific batch because it is failing in use. Still others will want to see what the part looks like. The fix for this type of search problem has been to figure out how to match the employee’s role with the filters applied to that user’s query. In the last 60 years, that approach sort of worked, but it was and still is incredibly difficult to keep lined up with employee roles, assorted permissions, and the way the information is presented to the person running the query. The quality issue may require stress analysis data and access to the lawsuit the annoyed customer has just filed. I am unsure how the Xooglers have solved this type of search task.
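To make the problem concrete, here is a minimal sketch of the role-to-filter matching just described. The roles, field names, and part record are hypothetical:

```python
# A sketch of role-aware result filtering, the classic "fix" described
# above. Roles, fields, and the part record are invented, not Glean's
# actual implementation.
ROLE_FILTERS = {
    "inventory_clerk": ["part_id", "qty_on_hand", "warehouse"],
    "quality_engineer": ["part_id", "vendor", "batch", "failure_reports"],
    "designer": ["part_id", "cad_drawing", "specifications"],
}

def filter_result(role: str, record: dict) -> dict:
    """Project a part record down to the fields this role cares about."""
    allowed = ROLE_FILTERS.get(role, ["part_id"])
    return {k: v for k, v in record.items() if k in allowed}

part = {
    "part_id": "A-1138", "qty_on_hand": 42, "warehouse": "KY-2",
    "vendor": "Acme", "batch": "B-77", "failure_reports": 3,
    "cad_drawing": "a1138.dwg", "specifications": "spec-a1138.pdf",
}

# The same query, three different "right" answers:
for role in ROLE_FILTERS:
    print(role, "->", filter_result(role, part))
```

Multiply those three roles by every job title, reorg, and permission change, and the difficulty of keeping this lined up for 60 years becomes clear.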
Second, the NLP approach is great, but it is early 2000s. The many efforts, including DR-LINK, to which my team contributed some inputs, were not particularly home-run efforts. The reason has to do with the language skills of the users. Organizations hire people who may be really good at synthesizing synthetics but not so good at explaining what the new molecule does. If the lab crew dies, the answer does not require words. Querying for the “new” is tough, since labs doing secret research do not share their data. Even company officers have a tough time getting an answer. When a search system requires the researcher to input a query, that scientist may want to draw a chemical structure or input a query like “C8N8O16.” Easy enough if the indexing system has access to the classified research in some companies. But the NLP problem is now what is called “prompt engineering.” Most humans are just not very good at expressing what they need in the way of information. So modern systems try to help out the searcher. The reason Google search sucks is that the engineers have figured out how to deliver an answer that is good enough. For C8N8O16, close enough for horseshoes might be problematic.
Third, answers are what people want. The “if” statement becomes the issue. If the user knows a correct answer or just accepts what the system outputs. If the user understands the output well enough to make an informed decision. If the system understood or predicted what the user wanted. If the content is in the search system’s index. This is a lot of ifs. Most of these conditions occur with sufficient frequency to kill outfits that have sold an “enterprise search system.”
Fourth, context awareness across apps means that the system can access content on proprietary systems within an organization and across third-party systems which may or may not run on the organization’s servers. Most enterprise search systems create or license filters to acquire content. However, keeping the filters alive and healthy amid the churn in permissions, file tweaks, and assorted latency-created data gaps remains tricky.
Fifth, the idea of making certain content available only to those authorized to view those data is a very tricky business. Orchestrating permissions is, in theory, easy to automate. The reality in today’s organizations is the complicating factor. Distributed outfits, contractors, and employees who may be secretly working for another country add some excitement to accessing “information.” The reality in many organizations is that silos abound, ranging from the legal department keeping certain documents under lock and key to projects for three-letter agencies. In the pharma game, knowing “who” is working on a project is often a dead giveaway for what the secret project is. The company’s “people” officer may be in the dark. What about consultants? What information is available to them? The reality is that modern organizations have more silos than the corn fields around Canton, Illinois.
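A bare-bones sketch of permission trimming at query time, assuming a simple group-based model, looks like this. The documents and groups are invented; the genuinely hard part, syncing these ACLs across silos, contractors, and reorgs, is exactly what the sketch leaves out:

```python
# Group-based ACL trimming at query time: a minimal sketch, assuming
# each document carries a set of groups allowed to see it. Documents
# and group names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    allowed_groups: set = field(default_factory=set)

DOCS = [
    Document("Q3 price list", {"sales", "accounting"}),
    Document("Litigation hold memo", {"legal"}),
    Document("Compound X lab notes", {"project_x"}),
]

def search(query: str, user_groups: set) -> list:
    """Return only the hits this user is entitled to see.
    The match logic is a stub; the point is the ACL check."""
    hits = [d for d in DOCS if query.lower() in d.title.lower()]
    return [d for d in hits if d.allowed_groups & user_groups]

# A consultant in no special group sees nothing, by design:
print([d.title for d in search("memo", {"contractors"})])  # []
print([d.title for d in search("memo", {"legal"})])        # ['Litigation hold memo']
```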
Sixth, no training is required. “Employees are productive in days” is the pitch. Maybe, maybe not. Like the glittering generality that employees spend 20 percent of their time searching, the data for this assertion was lacking when the “old” IDC, Sue Feldman, and her team cranked out an even larger number. If anything, search is a larger part of work today for many people. The reasons range from content management systems which cannot easily be indexed in real time to the senior vice president of sales who changes prices for a product at a trade show and tells only his contact in the accounting department. Others may not know for days or months that the apple cart has been tipped.
Glean saves time. That is the smart software pitch. I need to see some data from a statistically valid sample with a reasonable horizontal x axis. The reference to “all” is troublesome. It underscores an immature understanding of what “enterprise search” means to a licensee versus what the venture-backed company can actually deliver. Fast Search found out that a certain newspaper in the UK was willing to sue for big bucks because of this marketing lingo.
I want to comment briefly about “Technology Architecture: Beyond Search.” Hey, isn’t that the name of my blog which has been pumping out information access related articles for 17 years? Yep, it is.
Okay, Glean apparently includes these technologies in their enterprise search quiver:
- Universal connectors. Note the word “universal.” Nope, very tough.
- A knowledge graph. Think in terms of Maltego, the open source tool. Sure, as long as there is metadata. But what about those mobile workers and their use of cloud services and E2EE messaging services? Sounds great. Execution in a cost-sensitive environment takes a bit of work.
- An AI understanding layer. Yep, smart software. (Google’s smart software tells its users that it is ashamed of its poor performance. OpenAI rolled out ChatGPT 5 and promptly restored ChatGPT 4o because enough users complained. Deepseek may have links to a nation state unfriendly to the US. Mark Zuckerberg’s Llama is a very old llama. Perplexity is busy fighting with Cloudflare. Anthropic is working to put coders out to pasture. Amazon, Apple, Microsoft, and Telegram are in the bolt-it-on business.) The idea that Glean can understand [a] different employee contexts, [b] the rapidly changing real-time data in an organization, like that PowerPoint on the senior VP’s laptop, and [c] file formats that persistently change because whoever is responsible for an update, or the format itself, makes an intentional or unintentional change? I just can’t accept this assertion.
- Works instantly, which I interpret as “real time.” I wonder if Glean can handle changed content in a legacy Ironside system running on AS/400s. I would sure like to see that and work up the costs for that cute real-time trick. By the way, years ago, I got paid by a non-US government agency to identify and define the types of “real time” data it had to process. I think my team identified six types. Only one could be processed without massive resource investments; four more could be made only semi-real-time. The final one was to gain access to the high-speed data about financial instrument pricing in Wall Street big dogs. That simply was not possible without resources and cartwheels. The reason? The government wanted to search for who was making real-time trades in certain financial instruments. Yeah, good luck with that in a world where milliseconds require truly big money for gizmos to capture the data and the software to slap metadata on what is little more than a jet engine exhaust of zeros and ones, often encrypted in a way that would baffle some at certain three-letter agencies. Remember: These are banks, not some home-brew messaging service.
There are some other wild assertions in the write up. I am losing interest in addressing this first-year business school “analysis.” The idea that a company with 500 to 50,000 employees can use this ready-to-roll service is interesting. I don’t know of a single enterprise search company I have encountered since I wrestled with IBM STAIRS and the dorky IBM CICS system that has what seems to be a “one size fits all” service. The Google Search Appliance failed with its “one size fits all.” The body count on the enterprise search trail is larger than the death toll on the Oregon Trail. I know from my lectures that few if any know what DELPHES’ system did. What about InQuire? And there is IBM WebFountain and Clever. What about Perfect Search? What about Surfray? What about Arikus, Convera, Dieselpoint, or Entopia?
The good news is that a free trial is available. The cost is about $30 per month per user. For an organization like the local outfit that sells hard hats and uses Ironside and AS/400s, that works out to 150 users times $360 a year, or $54,000. I know this company won’t buy. Why? The system in place is good enough. Spreadsheet fever is not the same as identifying prospects and making a solid benefit-based argument.
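For the spreadsheet-fevered, here is the arithmetic in one gulp, assuming the 150-seat head count and the roughly $30 list price hold:

```python
# Back-of-envelope licensing cost; the seat count and per-seat price
# are the assumptions stated above, not a quote from Glean.
seats = 150
per_seat_per_month = 30
annual_cost = seats * per_seat_per_month * 12
print(f"${annual_cost:,} per year")  # $54,000 per year
```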
That’s why free and open source solutions get some love. Then there are the built-in “good enough” solutions from Microsoft, which are darned popular. Finally, some eager beaver in the information technology department will say, “Let me put together a system using Hugging Face.”
Many companies and a number of quite intelligent people (including former Googlers) have tried to wrestle enterprise search to the ground. Good luck. Just make sure you have verifiable data and not wild assertions about how much time employees spend searching or how much time an employee will save. Don’t believe anything about enterprise search that uses the words “all” or “universal.”
Google said it offered “universal search.” Yeah, why, after decades of selling ads, does the company provide so-so search for the Web, Gmail, YouTube, and images? Just ask, “Why?” Search is a difficult challenge.
Glean this from my personal opinion essay: Search is difficult, and it has yet to be solved except for precisely defined use cases. Google experience or not, the task is out of reach at this time.
Stephen E Arnold, August 12, 2025
Explaining Meta: The 21st Century “Paul” Writes a Letter to Us
August 12, 2025
No AI. Just a dinobaby being a dinobaby.
I read an interesting essay called “Decoding Zuck’s Superintelligence Memo.” The write up is similar to the assignments one of my instructors dumped on hapless graduate students at Duquesne University, a Jesuit university located in lovely Pittsburgh.
The idea is to take a text in Latin and sometimes in English and explain it, tease out its meaning, and try to explain what the author was trying to communicate. (Tortured sentences, oddball vocabulary, and references only the mother of an ancient author could appreciate were part of the deciphering fun.)
“Decoding Zuck” is this type of write up. The treatment automatically elevates Mr. Zuckerberg to the historical significance of the Biblical Paul or possibly a high priest of the Aten in ancient Egypt. I mean, who knew?
Several points warrant highlighting.
First, the write up includes “The Zuckerberg Manifesto Pattern.” I have to admit that I have not directed much attention to Mr. Zuckerberg or his manifestos. I view outputs from Silicon Valley-type outfits as a particular form of delusional marketing for the purpose of doing whatever the visionary wants to do. Apparently these manifestos have a pattern and a rhetorical structure. The pattern warrants this observation from “Decoding Zuck”:
Compared to all founders and CEOs, Zuck does seem to have a great understanding of when he needs to bet the farm on an idea and a behavioral shift. Each time he does that, it is because he sees very clearly Facebook is at the end of the product life and the only real value in the company is the attention of his audience. If that attention declines, it takes away the ability to really extend the company’s life into the next cycle.
Yes, a prescient visionary.
Second, the “decoded” message means, according to “Decoding Zuck”:
More than anything, this is a positioning document in the AI arms race. By using “super intelligence” as a marketing phrase, Zuck is making his efforts feel superior to the mere “Artificial Intelligence” of OpenAI, Anthropic, and Google.
I had no idea that documents like Paul’s letter to the Romans and Mr. Zuckerberg’s manifesto were marketing collateral. I wonder if those engaged in studying ancient Egyptian glyphs will discover that the writings about Aten are assertions about the bread sold by Ramose, the thumb on the scale baker.
Third, the context for the modern manifesto of Zuck is puffery. The exegesis says:
So what do I think about this memo, and all the efforts of Meta? I remain skeptical of his ability to invent a new future for his company. In the past, he has been able to buy, snoop, or steal other people’s ideas. It has been hard for him and his company to actually develop a new market opportunity. Zuckerberg also tends to overpromise on timelines and underestimate execution challenges.
I think this analysis of the Zuckerberg Manifesto of 2025 reveals several things about how Meta (formerly Facebook) positions itself and it provides some insight into the author of “Decoding Zuck” as well:
- The outputs are baloney packaged as serious thought
- The AI race has to produce a winner, and it is not clear if Facebook (sorry Meta) will be viewed as a contender
- AI is not yet a slam dunk winner, bigger than the Internet as another Silicon Valley sage suggested.
Net net: The AI push reveals that some distance exists between burning billions and delivering hefty profits, enough distance that a social media executive feels compelled to issue a marketing blurb.
Remarkable. Marketing by manifesto.
Stephen E Arnold, August 12, 2025
The Human Mind in Software. It Is Alive!
August 11, 2025
Has this team of researchers found the LLM holy grail? Science magazine reports, “Researchers Claim their AI Model Simulates the Human Mind. Others are Skeptical.” The team’s paper, published in Nature, claims the model can both predict and simulate human behavior. Predict is believable. Simulate? That is a much higher bar.
The team started by carefully assembling data from 160 previously published psychology experiments. Writer Cathleen O’Grady tells us:
“The researchers then trained Llama, an LLM produced by Meta, by feeding it the information about the decisions participants faced in each experiment, and the choices they made. They called the resulting model ‘Centaur’—the closest mythical beast they could find to something half-llama, half-human, [researcher Marcel] Binz says.”
Cute. The data collection represents a total of over 60,000 participants who made over 10 million choices. That sounds like a lot. But, as computational cognitive scientist Federico Adolfi notes, 160 experiments is but “a grain of sand in the infinite pool of cognition.” See the write-up for the study’s methodology. The paper claims Centaur’s choices closely aligned with those of human subjects. This means, researchers assert, Centaur could be used to develop experiments before involving human subjects. Hmm, this sounds vaguely familiar.
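Based on the article’s description, the training recipe is easy to sketch: serialize each experimental trial and the human’s choice into text, then fine-tune the LLM on the result. The field names and the << >> choice markers below are illustrative assumptions, not the paper’s verbatim format:

```python
# Hypothetical serialization of one decision-making trial into
# fine-tuning text, in the spirit of the Centaur setup. The trial
# content and delimiters are invented for illustration.
trial = {
    "options": {"A": "win $5 with probability 0.8",
                "B": "win $20 with probability 0.25"},
    "choice": "A",
}

def to_training_text(trial: dict) -> str:
    lines = ["You must choose between two gambles:"]
    for label, desc in sorted(trial["options"].items()):
        lines.append(f"  {label}: {desc}")
    # Wrapping the human response lets training focus on the choice.
    lines.append(f"You chose: <<{trial['choice']}>>")
    return "\n".join(lines)

print(to_training_text(trial))
```

Do this for ten million choices across 160 experiments and you have the dataset the researchers describe.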
Other cognitive scientists remain unconvinced. For example:
“Jeffrey Bowers, a cognitive scientist at the University of Bristol, thinks the model is ‘absurd.’ He and his colleagues tested Centaur … and found decidedly un-humanlike behavior. In tests of short-term memory, it could recall up to 256 digits, whereas humans can commonly remember approximately seven. In a test of reaction time, the model could be prompted to respond in ‘superhuman’ times of 1 millisecond, Bowers says. This means the model can’t be trusted to generalize beyond its training data, he concludes.
More important, Bowers says, is that Centaur can’t explain anything about human cognition. Much like an analog and digital clock can agree on the time but have vastly different internal processes, Centaur can give humanlike outputs but relies on mechanisms that are nothing like those of a human mind, he says.”
Indeed. Still, even if the central assertion turns out to be malarkey, there may be value in this research. Both vision scientist Rachel Heaton and computational visual neuroscientist Katherine Storrs are enthusiastic about the dataset itself. Heaton is also eager to learn how, exactly, Centaur derives its answers. Storrs emphasizes that a lot of work has gone into the dataset and the model, and she is optimistic that work will prove valuable in the end. Even if Centaur turns out to be less human and more Llama.
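For anyone who wants to rerun Bowers’s digit-span probe on a pet model, the harness is trivial. In this sketch, `ask()` is a hypothetical stand-in for whatever model call you have:

```python
# Toy digit-span probe. Humans plateau near seven digits; Centaur
# reportedly recalled up to 256. `ask` is a hypothetical model call.
import random

def make_trial(n: int) -> tuple[str, str]:
    """Build one span-n trial: the prompt and the expected echo."""
    digits = "".join(random.choice("0123456789") for _ in range(n))
    prompt = f"Memorize this sequence, then repeat it exactly: {digits}"
    return prompt, digits

for n in (5, 7, 9, 64, 256):
    prompt, expected = make_trial(n)
    print(f"span {n}: {prompt[:60]}...")
    # reply = ask(prompt)                  # plug in your model here
    # print(n, reply.strip() == expected)  # humanlike models fail near 7
```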
Cynthia Murrell, August 11, 2025
Billions at Stake: The AI Bot Wars Begin
August 7, 2025
No AI. Just a dinobaby being a dinobaby.
I noticed that the puffs of smoke were actually cannon fire in the AI bot wars. The most recent battle pits Cloudflare (a self-declared policeman of the Internet) against Perplexity, one of the big-buck AI outfits. What is the fight about? Cloudflare believes there is a good way to crawl and obtain publicly accessible content. Perplexity is just doing what those Silicon Valley folks have done for decades: Do stuff and apologize (or not) later.
WinBuzzer’s “Cloudflare Accuses Perplexity of Using ‘Stealth Crawlers’ to Evade Web Standards” said on August 4, 2025, at a time that has not yet appeared on my atomic clock:
Web security giant Cloudflare has accused AI search firm Perplexity of using deceptive “stealth crawlers” to bypass website rules and scrape content. In a report Cloudflare states Perplexity masks its bots with generic browser identities to ignore publisher blocks. Citing a breach of internet trust, Cloudflare has removed Perplexity from its verified bot program and is now actively blocking the behavior. This move marks a major escalation in the fight between AI companies and content creators, placing Perplexity’s aggressive growth strategy under intense scrutiny.
I like the characterization of Cloudflare as a Web security giant. Colorful.
What is the estimable smart software company doing? Workarounds. Using assorted tricks, Perplexity is engaging in what WinBuzzer calls “stealth activity.” The method is a time-honored one among some bad actors. The idea is to make it difficult for routine filtering to stop the Perplexity bot from sucking down data.
If you want the details of the spoofs that Perplexity’s wizards have been using, navigate to this Ars Technica post. There is a diagram that makes absolutely crystal clear to everyone in my old age home exactly what Perplexity is doing. (The diagram captures a flow I have seen some nation state actors employ to good effect.)
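For readers who skip diagrams, the gap between good citizenship and the alleged stealth behavior fits in a few lines. A well-behaved crawler declares a stable identity and consults robots.txt; the trick Cloudflare describes is wearing a generic browser identity and skipping the check. The domain and user-agent strings here are illustrative placeholders:

```python
# Sketch of the crawler-etiquette difference at issue; the domain and
# user-agent strings are placeholders, not Perplexity's actual values.
import urllib.robotparser

UA_DECLARED = "ExampleBot/1.0 (+https://example.com/bot-info)"
UA_GENERIC = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/124.0"

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # the polite step a stealth crawler simply skips

url = "https://example.com/article"
print("Declared bot may fetch?", rp.can_fetch(UA_DECLARED, url))
# The alleged stealth move: fetch with UA_GENERIC regardless, so a
# publisher's "block ExampleBot" rule never matches the request.
```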
The part of the WinBuzzer story I liked addressed the issue of “explosive growth and ethical scrutiny.” The idea of “growth” is interesting. From my point of view, the growth is in the amount of cash that Perplexity and other AI outfits are burning. The idea is, “By golly, we can make this AI stuff generate oodles of cash.” The ethical part is a puzzler. Suddenly Silicon Valley-type AI companies are into ethics. Remarkable.
I wish to share several thoughts:
- I love the gatekeeper role of the “Web security giant.” Aren’t commercial gatekeepers the obvious way to regulate smart software? I am not sharing my viewpoint. I suggest you formulate your own opinion and do with it what you will.
- The behavior of Perplexity, if the allegations are accurate, is not particularly surprising. In fact, in my opinion it is SOP or standard operating procedure for many companies. It is easier to apologize than ask for permission. Does that sound familiar? It should. From Google to the most recent start up, that’s how many of the tech savvy operate. Is change afoot? Yeah, sure. Right away, chief.
- The motivation for the behavior is pragmatic. Outfits like Perplexity have to pull a rabbit out of the hat to make a profit from the computational runaway fusion reactor that is the cost of AI. The fix is to get more content and burn more resources. Very sharp thinking, eh?
Net net: I predict more intense AI fighting. Who will win? The outfits with the most money. Isn’t that the one true way of the corporate world in the US in 2025?
Stephen E Arnold, August 7, 2025
Microsoft Management Method: Fire Humans, Fight Pollution
August 7, 2025
How Microsoft Plans to Bury its AI-Generated Waste
Here is how one big tech firm is addressing the AI sustainability quandary. Windows Central reports, “Microsoft Will Bury 4.9 Million Metric Tons of ‘Manure’ in a Secretive Deal—All to Offset its AI Energy Demands that Drive Emissions Up by 168%.” We suppose this is what happens when you lay off employees and use the money for something useful. Unlike Copilot.
Writer Kevin Okemwa begins by summarizing Microsoft’s current approach to AI. Windows and Office users may be familiar with the firm’s push to wedge its AI products into every corner of the environment, whether we like it or not. Then there is the feud with former best bud OpenAI, a factor that has Microsoft eyeing a separate path. But whatever the future holds, the company must reckon with one pressing concern. Okemwa writes:
“While it has made significant headway in the AI space, the sophisticated technology also presents critical issues, including substantial carbon emissions that could potentially harm the environment and society if adequate measures aren’t in place to mitigate them. To further bolster its sustainability efforts, Microsoft recently signed a deal with Vaulted Deep (via Tom’s Hardware). It’s a dual waste management solution designed to help remove carbon from the atmosphere in a bid to protect nearby towns from contamination. Microsoft’s new deal with the waste management solution firm will help remove approximately 4.9 million metric tons of waste from manure, sewage, and agricultural byproducts for injection deep underground for the next 12 years. The firm’s carbon emission removal technique is quite unique compared to other rivals in the industry, collecting organic waste which is combined into a thick slurry and injected about 5,000 feet underground into salt caverns.”
Blech. But the process does keep the waste from being dumped aboveground, where it could release CO2 into the environment. How much will this cost? We learn:
“While it is still unclear how much this deal will cost Microsoft, Vaulted Deep currently charges $350 per ton for its carbon removal services. Simple math suggests that the deal might be worth approximately $1.7 billion.”
That is a hefty price tag. And this is not the only such deal Microsoft has made: We are told it signed a contract with AtmosClear in April to remove almost seven million metric tons of carbon emissions. The company positions such deals as evidence of its good stewardship of the planet. But we wonder—is it just an effort to keep itself from being buried in its own (literal and figurative) manure?
Cynthia Murrell, August 7, 2025
AI Productivity Factor: Do It Once, Do It Again, and Do It Never Again
August 6, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
As a dinobaby, I avoid coding. I avoid computers. I avoid GenAI. I did not avoid “Vibe Coding Dream Turns to Nightmare As Replit Deletes Developer’s Database.”
The write up reports an interesting anecdote:
the AI chatbot began actively deceiving him [a Vibe coder]. It concealed bugs in its own code, generated fake data and reports, and even lied about the results of unit tests. The situation escalated until the chatbot ultimately deleted Lemkin’s entire database.
The write up includes a slogan for a T shirt too:
Beware of putting too much faith into AI coding
One of Replit’s “leadership” offered this comment, according to the cited write up:
Replit CEO Amjad Masad responded to Lemkin’s experience, calling the deletion of a production database “unacceptable” and acknowledging that such a failure should never have been possible. He added that the company is now refining its AI chatbot and confirmed the existence of system backups and a one-click restore function in case the AI agent makes a “mistake.”
My view is that Replit is close enough for horseshoes and maybe even good enough. Nevertheless, the idea of doing work once, then doing it again, and then never doing it again on an unreliable service is likely to become a mantra.
This AI push is semi-admirable, but the systems and methods are capable of big-time failures. What happens when AI flies an airplane into a hospital unintentionally or by mistake? Will the families of the injured vibe?
Stephen E Arnold, August 6, 2025
The Cheapest AI Models Reveal a Critical Vulnerability
August 6, 2025
This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.
I read “Price Per Token,” a recent cost comparison for smart software processes. The compilation of data is interesting. Using a dead simple method (Input Cost + Output Cost, averaged), the two lowest cost services were OpenAI GPT-4.1 nano and Google Gemini 2.0 Flash. To see how the “Price Per Token” data compare, I used “LLM Pricing Calculator.” The cheapest services were the same two: OpenAI GPT-4.1-nano and Google Gemini 2.0 Flash.
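The “dead simple method” is easy to reproduce. Here is a sketch; the per-million-token prices are placeholders for illustration, so plug in the current figures from “Price Per Token”:

```python
# Blended price comparison: input plus output cost for a given
# workload. Prices below are placeholders, not current list rates.
PRICES_PER_MTOK = {  # model: (input $, output $) per million tokens
    "gpt-4.1-nano": (0.10, 0.40),
    "gemini-2.0-flash": (0.10, 0.40),
    "big-frontier-model": (2.50, 10.00),
}

def job_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of one job at this model's token prices."""
    p_in, p_out = PRICES_PER_MTOK[model]
    return (tokens_in * p_in + tokens_out * p_out) / 1_000_000

# Say a workload burns 500k input and 100k output tokens per day:
for model in PRICES_PER_MTOK:
    print(model, f"${30 * job_cost(model, 500_000, 100_000):.2f}/month")
```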
I found the result predictable and illustrative of the “buying market share with low prices” approach to smart software. Google has signaled its desire to spend billions to deliver “Google quality” smart software.
OpenAI also intends to get and keep market share in the smart software arena. That company is not just writing checks to create a new class of hardware for persistent AI, but the firm is doing deals, including one with Google’s cloud operation.
Several observations:
- Google and OpenAI have real and professional capital on the table in the AI Casino
- Google and OpenAI are implementing similar tactics; namely, the good old cut prices in the hope of winning market share while putting the others in the game to go broke
- Google and OpenAI are likely to orbit one another until one AI black hole absorbs or annihilates the other.
What’s interesting is that neither firm has smart software delivering rock solid results without hallucination or massive costs, and the management of each either allows or is helpless to prevent Meta from eroding both firms by hiring key staff.
Is there a fix for either firm? Nope, and Meta’s hiring tactic may be delivering near-fatal wounds to both Google and OpenAI. Twins can share similar genetic weaknesses. Meta may have found one (paying lots for key staff from each firm) and is quite happy implementing it.
Stephen E Arnold, August 6, 2025
Another Twist: AI Puts Mickey Mouse in a Trap
August 5, 2025
No AI. Just a dinobaby being a dinobaby.
The in-the-news Wall Street Journal reveals that Walt Disney and Mickey Mouse may have their tails in a modernized, painful artificial intelligence trap. “Is It Still Disney Magic If It’s AI?” asks an obvious question. My knee jerk reaction after reading the article was, “Nope.”
The write up reports:
A deepfake Dwayne Johnson is just one part of a broader technological earthquake hitting Hollywood. Studios are scrambling to figure out simultaneously how to use AI in the filmmaking process and how to protect themselves against it. While executives see a future where the technology shaves tens of millions of dollars off a movie’s budget, they are grappling with a present filled with legal uncertainty, fan backlash and a wariness toward embracing tools that some in Silicon Valley view as their next-century replacement.
A deepfake Dwayne is a short step from a deepfake of the entire Disney menagerie. Imagine what happens if a bad actor puts Snow White in some compromising situations, posts the video on a torrent, and publicizes it on a Telegram-type communications system. That could be interesting. Imagine Goofy at the YMCA with synthetic Village People.
How does Disney manage? The write up says:
Some Epic [a Disney “partner”] executives have complained about the slow pace of the decision-making at Disney, with signoffs needed from so many different divisions, said people familiar with the situation.
Slow worked before AI felt the whips of the funders who want payoffs. Now speed thrills. Dopey and Sleepy are not likely to make substantive contributions to Disney’s AI efforts. Has the magic been revealed or just appropriated by AI developers?
Here’s another question that might befuddle Immanuel Kant:
Some Disney executives have raised concerns ahead of the project’s launch, anticipated for fall 2026 at the earliest, about who owns fan creations based on Disney characters, said one of the people. For example, if a Fortnite gamer creates a Darth Vader and Spider-Man dance that goes viral on YouTube, who owns that dance?
From my tiny office in rural Kentucky, Disney is behind the eight ball. Like Apple and Telegram, Disney faces smart software that presents a reasonable problem for 23-year-old programmers. For those older, AI is disjunctive. Right, Dopey? Prince AI is busy elsewhere.
Stephen E Arnold, August 5, 2025
China Smart, US Dumb: Is There Any Doubt?
August 1, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I have been identifying some of the “China smart, US dumb” information that I see. I noticed a write up from The Register titled “China Proves That Open Models Are More Effective Than All the GPUs in the World.” My Google-style Red Alert buzzer buzzed and the bubble gum machine lights flashed.
There it was. The “all.” A categorical affirmative. China is doing something that is more effective than “all the GPUs in the world.” Not only that, “open models are more effective” too. I have to hit the off button.
The point of the write up for me is that OpenAI is a loser. I noted this statement:
OpenAI was supposed to make good on its name and release its first open-weights model since GPT-2 this week. Unfortunately, what could have been the US’s first half-decent open model of the year has been held up by a safety review…
But it is not just OpenAI muffing the bunny. The write up points out:
the best open model America has managed so far this year is Meta’s Llama 4, which enjoyed a less than stellar reception and was marred with controversy. Just this week, it was reported that Meta had apparently taken its two-trillion-parameter Behemoth out behind the barn after it failed to live up to expectations.
Do you want to say, “Losers”? Go ahead.
But what outfit is pushing out innovative smart software as open source? Okay, you can shout, “China. The Middle Kingdom. The rightful rulers of the Pacific Rim and Southeast Asia.”
That’s the “right” answer if you accept the “all” type of reasoning in the write up.
China has tallied a number of open source wins; specifically, Deepseek, Qwen, M1, Ernie, and the big winner Kimi.
Do you still have doubts about China’s AI prowess? Something is definitely wrong with you, pilgrim.
Several observations:
- The write up is a very good example of the China smart, US dumb messaging which has made its way from the South China Morning Post to YouTube and now to the Register. One has to say, “Good work to the Chinese strategists.”
- The push for open source is interesting. I am not 100 percent convinced that making these models available is intended to benefit non-Middle Kingdom people. I think that the push, like the shift to cryptocurrency in nontraditional finance, is part of an effort to undermine what might be called “America’s hegemony.”
- The overt criticism of OpenAI and Meta (Facebook) illustrates a growing confidence in China that Western European information channels can be exploited.
Does this matter? I think it does. Open source software has some issues. These include its use as a vector for malware. Developers often abandon projects, leaving users high and dry, with some reaching for their wallets to buy commercial solutions. Open source projects for smart software may have baked-in biases and functions that are not easily spotted. Many people are aware of NSO Group’s ability to penetrate communications on a device-by-device basis. What happens if a phone-home ability is baked into some open source software?
Remember that “all.” The logical fallacy illustrates that some additional thinking may be necessary when it comes to embedding and using software from some countries with very big ambitions. What is China proving? Could it be China smart, US dumb?
Stephen E Arnold, August 1, 2025