The Future: Humans in Lawn Chairs. Robots Do the Sports Thing
May 8, 2025
Can a fast robot outrun a fast human? Not yet, apparently. MSN’s Interesting Engineering reports, “Humanoid ‘Tiangong Ultra’ Dons Winning Boot in World’s First Human Vs Robot Marathon.” In what appears to be the first event of its kind, a recent 13-mile half marathon pitted robots against humans in Beijing. Writer Christopher McFadden reports:
“Around 21 humanoid robots officially competed alongside human marathoners in a 13-mile (21 km) endurance race in Beijing on Saturday, April 19th. According to reports, this is the first time such an event has been held. Competitor robots varied in size, with some as short as 3 feet 9 inches (1.19 m) and others as tall as 5 feet 9 inches (1.8 m). Wheeled robots were officially banned from the race, necessitating that any entrants be able to walk or run similarly to humans.”
The winner was one of the tallest at 5 feet 9 inches and weighed 114 pounds. It took Tiangong Ultra two hours and forty minutes to complete the course. Despite its impressive performance, it lagged considerably behind the first-place human, who finished in one hour and two minutes. The robots’ lane of the course was designed to test the machines’ capabilities, mixing inclines and both left and right turns with flat stretches.
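For scale, here is a back-of-the-envelope average-speed comparison. The arithmetic is mine, not the article’s, and it assumes both finish times cover the full 21 km course:

```latex
% Average speeds over the 21 km course (finish times rounded to two decimals)
v_{\text{robot}} = \frac{21\ \text{km}}{2.67\ \text{h}} \approx 7.9\ \text{km/h}
\qquad
v_{\text{human}} = \frac{21\ \text{km}}{1.03\ \text{h}} \approx 20.3\ \text{km/h}
```

In other words, the winning human covered ground roughly two and a half times as fast as Tiangong Ultra.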
See the article for a short video of the race. Most of it features the winner, but there is a brief shot of one smaller, cuter robot. The article continues:
“According to the robot’s creator, Tang Jian, who is also the chief technology officer behind the Beijing Innovation Centre of Human Robotics, the robot’s long legs and onboard software both aided it in its impressive feat. … Jian added that the robot’s battery needed to be changed only three times during the race. As for other robot entrants, many didn’t perform as well. In particular, one robot fell at the starting line and lay on the ground for a few minutes before getting up and joining the race. Yet another crashed into a railing, causing its human operator to fall over.”
Oops. Sadly, those incidents do not appear in the video. The future is clear: Wizards will sit in lawn chairs and watch their robots play sports. I wonder if my robot will go to the gym and exercise for me?
Cynthia Murrell, May 8, 2025
IBM: Making the Mainframe Cool Again
May 7, 2025
No AI, just the dinobaby expressing his opinions to Zellenials.
I read a ZDNet Tech Today article titled “IBM Introduces a Mainframe for AI: The LinuxONE Emperor 5.” Years ago, I had three IBM PC 704s, each with the eight-drive SCSI chassis and that wonderful ServeRAID software. I suppose I should tell you, I want a LinuxONE Emperor 5 because the capitalization reminds me of the IBM ServeRAID software. Imagine. A mainframe for artificial intelligence. No wonder that IBM stock looks like a winner in 2025.
The write up says:
IBM’s latest mainframe, the LinuxONE Emperor 5, is not your grandpa’s mainframe
The CPU for this puppy is the IBM Telum II processor, a five nanometer chip announced in 2024. If you want some information about this, navigate to “IBM’s Newest Chip Is More Than Meets the AI.”
The ZDNet write up says:
Manufactured using Samsung’s 5 nm process technology, Telum II features eight high-performance cores running at 5.5GHz, a 40% increase in on-chip cache capacity (with virtual L3 and L4 caches expanded to 360MB and 2.88GB, respectively), and a dedicated, next-generation on-chip AI accelerator capable of up to 24 trillion operations per second (TOPS) — four times the compute power of its predecessor. The new mainframe also supports the IBM Spyre Accelerator for AI users who want the most power.
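A quick sanity check on that “four times” claim. The arithmetic below is mine, not ZDNet’s:

```latex
% If 24 TOPS is four times the predecessor's accelerator,
% the first-generation Telum works out to roughly
\frac{24\ \text{TOPS}}{4} = 6\ \text{TOPS}
```

That implies roughly 6 TOPS for the original Telum’s on-chip AI accelerator, which appears consistent with the figures IBM published for that chip.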
The ZDNet write up delivers a bumper crop of IBM buzzwords about security, but there is one question that crossed my mind, “What makes this a mainframe?”
The answer, in my opinion, is IBM marketing. The Emperor should be able to run legacy IBM mainframe applications. However, before placing an order, a customer may want to consider:
- Snapping these machines into a modern cloud or hybrid environment might take a bit of work. Never fear, however, IBM consulting can help with this task.
- The reliance on the Telum CPU and its on-chip accelerator for AI might put the system at a performance disadvantage relative to GPU-centric solutions like Nvidia’s.
- The security pitch is accurate provided the system is properly configured and set up. Once again, IBM provides the for-fee services necessary to allow Z-llenial IT professionals to sleep easy on weekends.
- Mainframes in the cloud are time-sharing oriented; making these work in a hybrid environment can be an interesting technical challenge. Remember: IBM consulting and engineering services can smooth the bumps in the road.
Net net: Interesting system, surprising marketing, and definitely something that will catch a bean counter’s eye.
Stephen E Arnold, May 7, 2025
Microsoft Explains that Its AI Leads to Smart Software Capacity Gap Closing
May 7, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
I read a content marketing write up with two interesting features: [1] new jargon about smart software and [2] a direct response to Google’s increasingly urgent suggestions that Googzilla has won the AI war. The article appears in Venture Beat with the title “Microsoft Just Launched Powerful AI ‘Agents’ That Could Completely Transform Your Workday — And Challenge Google’s Workplace Dominance.” The title suggests that Google is the leader in smart software in the lucrative enterprise market. But isn’t Microsoft’s “flavor” of smart software in products from the much-loved Teams to the lowly Notepad application? Isn’t Word, like Excel, at the top of the heap when it comes to usage in the enterprise?
I will ignore these questions and focus on the lingo in the article. It is different and illustrates what college graduates with a B.A. in modern fiction can craft when assisted by a sprinkling of social science majors and a former journalist or two.
Here are the terms I circled:
product name: Microsoft 365 Copilot Wave 2 Spring release (wow, snappy)
integral collaborator (another bound phrase which means agent)
intelligence abundance (something everyone is talking about)
frontier firm (forward leaning synonym)
‘human-led, agent-operated’ workplaces (yes, humans lead; they are not completely eliminated)
agent store (yes, another online store. You buy agents; you don’t buy people)
browser for AI
brand-compliant images
capacity gap (I have no idea what this represents)
agent boss (Is this a Copilot thing?)
work charts (not images, plans I think)
Copilot control system (Is this the agent boss thing?)
So what does the write up say? In my dinobaby mind, the answer is, “Everything a member of leadership could want: Fewer employees, more productivity from those who remain on the payroll, software middle managers who don’t complain or demand emotional support from their bosses, and a narrowing of the capacity gap (whatever that is).”
The question is, “Can either Google, Microsoft, or OpenAI deliver this type of grand vision?” Answer: Probably the vision can be explained and made magnetic via marketing, PR, and language weaponization, but the current AI technology still has a couple of hurdles to get over without tearing the competitors’ gym shorts:
- Hallucinations and making stuff up
- Copyright issues related to training and slapping the circle C, trademarks, and patents on outputs from these agent bosses and robot workers
- Working without creating a larger attack surface for bad actors armed with AI to exploit (Remember, security, not AI, is supposed to be Job No. 1 at Microsoft. You remember that, right? Right?)
- Killing dolphins, bleaching coral, and choking humans on power plant outputs
- Getting the billions pumped into smart software back in the form of sustainable and growing revenues. (Yes, there is a Santa Claus too.)
Net net: Wow. Your turn Google. Tell us you have won, cured disease, and crushed another game player. Oh, you will have to use another word for “dominance.” Tip: Let OpenAI suggest some synonyms.
Stephen E Arnold, May 7, 2025
Google Versus OpenAI: Whose Fish Is Bigger?
May 6, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
Bing Crosby quipped on one of his long-ago radio shows, “We are talking about fish here,” when asked about being pulled to shore by a salmon he caught. I think about the Bingster when I come across “user” numbers for different smart software systems. “Google Reveals Sky High Gemini Usage Numbers in Antitrust Case” provides some perjury-proof data showing that Google is definitely number two in smart software.
According to the write up:
The [Google] slide listed Gemini’s 350 million monthly users, along with daily traffic of 35 million users.
Okay, we have some numbers.
The write up provides a comparative set of data; to wit:
OpenAI has also seen traffic increase, putting ChatGPT around 600 million monthly active users, according to Google’s analysis. Early this year, reports pegged ChatGPT usage at around 400 million users per month.
Where’s Microsoft in this count? Yeah, who knows? MSFT just pounds home that it is winning in the enterprise. Okay, I understand.
These data, or the lack of them, are interesting for several reasons:
- For Google, the “we’re number two” angle makes clear that its monopoly in online advertising has not transferred to becoming automatically number one in AI
- The data from Google are difficult to verify, but everyone trusts the Google
- The data from OpenAI are difficult to verify, but everyone trusts Sam AI-Man.
Where are we in the AI game?
At the mercy of unverifiable numbers and marketing type assertions.
What about Deepseek, which may be banned by some of the folks in Washington, DC? What about everyone’s favorite litigant, Meta / Facebook?
Net net: AI is everywhere, so what’s the big deal? Let’s get used to marketing because those wonderful large language models still have a bit of a problem with hallucinations, not to mention security issues and copyright hassles. I won’t mention cost because the data make clear that the billions pumped into smart software have not generated a return yet. Someday perhaps?
Stephen E Arnold, May 6, 2025
Deep Fake Recognition: Google Has a Finger In
May 5, 2025
Sorry, no AI used to create this item.
I spotted this Newsweek story: “‘AI Imposter’ Candidate Discovered During Job Interview, Recruiter Warns.” The main idea is that a humanoid struggled to identify a deep fake. The deep fake was applying for a job.
The write up says:
Several weeks ago, Bettina Liporazzi, the recruiting lead at letsmake.com was contacted by a seemingly ordinary candidate who was looking for a job. Their initial message was clearly AI-generated, but Liporazzi told Newsweek that this “didn’t immediately raise any flags” because that’s increasingly commonplace.
Here’s the interesting point:
Each time the candidate joined the call, Liporazzi got a warning from Google to say the person wasn’t signed in and “might not be who they claim to be.”
This interaction seems to have taken place online.
The Newsweek story includes this statement:
As generative-AI becomes increasingly powerful, the line between what’s real and fake is becoming harder to decipher. Ben Colman, co-founder and CEO of Reality Defender, a deepfake detection company, tells Newsweek that AI impersonation in recruiting is “just the tip of the iceberg.”
The recruiter figured out something was amiss. However, at one point in the sequence, Google injected its warning.
Several questions:
- Does Google monitor this recruiter’s online interactions and analyze them?
- How does Google determine which online interactions it should simply monitor and in which it should intervene?
- What does Google do with the information about [a] the recruiter, [b] the job on offer itself, and [c] the deep fake system’s operator?
I wonder if Newsweek missed the more important angle in this allegedly actual factual story; that is, Google surveillance. Perhaps Google is just monitoring email, as when it tells me that a message from a US law enforcement agency is not in my list of contacts. How helpful, Google?
Will Google’s “monitoring” protect others from deep fakes? Those helpful YouTube notices seem to be part of the same protective effort.
Stephen E Arnold, May 5, 2025
AI-Fueled Buggy Whip Executive Cannot Be Replaced by AI: A Case Study
May 2, 2025
I read about a very optimistic executive who owned buggy whip companies in the US. One day a horseless carriage, today known as a Tesla, raced past his office. The person telling me the story remembered the anecdote from her required reading in her first-year MBA strategic thinking class. The owner of the buggy whip company, she said, declared: “Those newfangled machines will not replace the horse.”
The modern version of this old chestnut appears in “Marc Andreessen Says One Job Is Mostly Safe From AI: Venture Capitalist.” I hope Mr. Andreessen is correct. The write up states:
In the future, AI will apparently be able to do everybody’s job—except Marc’s.
Here’s the logic, according to the write up:
Andreessen described his job as a nuanced combination of “intangible” skills, including psychological analysis of the entrepreneurs he works with: “A lot of it is psychological analysis, like, ‘Who are these people?’ ‘How do they react under pressure?’ ‘How do you keep them from falling apart?’ ‘How do you keep them from going crazy?’ ‘How do you keep from going crazy yourself?’ You know, you end up being a psychologist half the time.” “So, it is possible—I don’t want to be definitive—but it’s possible that that is quite literally timeless. And when, you know, when the AI is doing everything else, that may be one of the last remaining fields that people are still doing.”
I found this paragraph from the original story one that will spark some interest; to wit:
Andreessen’s powers of self-delusion are well known. His Techno-Optimist’s Manifesto, published a few years ago, was another great window into a mind addled by too much cash and too little common sense. If you’re one of Silicon Valley’s Masters of the Universe, I guess having weird, self-serving views just comes with the territory.
Several observations:
- In my opinion, some VCs will continue to use AI. Through use and greater familiarity, the technology will gain some traction. At some point, AI will handle jobs once done by wild-eyed people hungry for riches.
- Start-up VCs may rely upon AI for investment decisions, not just for grinding through the business plans of fund seekers. If those “experiments” show promise, whoever owns the smart VC may develop a next-generation VC business. Ergo: Marc can stay, but he won’t do anything.
- Someone may stumble upon an AI VC workflow process that works faster, better, and more efficiently. If that firm emerges, Mr. Andreessen can become the innovator identified with digital horse accelerators.
How does one say “Giddy up” in AI-system-to-AI-system communication lingo? Answer maybe: Dweep, dweep, dupe?
Stephen E Arnold, May 2, 2025
Outsourced AI Works Very Well, Thank You
May 2, 2025
Tech experts predict that AI will automate all jobs and make humanity obsolete. If that’s the case, then why was so-called AI outsourced? Engadget explains in “Tech Founder Charged With Fraud For ‘AI’ That Was Secretly Overseas Contract Workers.”
The tech founder in question is Albert Sangier, whom the US Department of Justice indicted for misleading clients with Nate, his financial technology platform. Sangier founded Nate in 2018, raised $40 million from investors, and claimed the platform could give shoppers a universal checkout application powered by AI. The transactions were actually completed by bots and by human contractors located in Romania and the Philippines.
Sangier’s deception was first noted in 2022:
“This case follows reporting by The Information in 2022 that cast light on Nate’s use of human labor rather than AI. Sources told the publication that during 2021, ‘the share of transactions Nate handled manually rather than automatically ranged between 60 percent and 100 percent.’”
Sangier isn’t the only “tech leader” who duplicitously pretends that human workers are actually an AI algorithm or chatbot. More bad actors will run this scam, and they’ll get more creative at hiding their tracks.
Whitney Grace, May 2, 2025
Anthropic Discovers a Moral Code in Its Smart Software
April 30, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
With the United Arab Emirates starting to use smart software to make its laws, the idea that artificial intelligence has a sense of morality is reassuring. Who would want a person judged guilty by a machine to face incarceration, a fine, or — gulp! — worse?
“Anthropic Just Analyzed 700,000 Claude Conversations — And Found Its AI Has a Moral Code of Its Own” explains:
The [Anthropic] study examined 700,000 anonymized conversations, finding that Claude largely upholds the company’s “helpful, honest, harmless” framework while adapting its values to different contexts — from relationship advice to historical analysis. This represents one of the most ambitious attempts to empirically evaluate whether an AI system’s behavior in the wild matches its intended design.
Two philosophers watch as the smart software explains the meaning of “situational and hallucinatory ethics.” Thanks, OpenAI. I bet you are glad those former employees of yours quit. Imagine. Ethics and morality getting in the way of accelerationism.
Plus the company has “hope,” saying:
“Our hope is that this research encourages other AI labs to conduct similar research into their models’ values,” said Saffron Huang, a member of Anthropic’s Societal Impacts team who worked on the study, in an interview with VentureBeat. “Measuring an AI system’s values is core to alignment research and understanding if a model is actually aligned with its training.”
The study is definitely not part of the firm’s marketing campaign. The write up adds this context:
The research arrives at a critical moment for Anthropic, which recently launched “Claude Max,” a premium $200 monthly subscription tier aimed at competing with OpenAI’s similar offering. The company has also expanded Claude’s capabilities to include Google Workspace integration and autonomous research functions, positioning it as “a true virtual collaborator” for enterprise users, according to recent announcements.
For $2,400 per year, a user of the smart software would not want to do something improper, immoral, unethical, or just plain bad. I know that humans have some difficulty defining these terms related to human behavior in simple terms. It is a step forward that software has the meanings and can apply them. And for $200 a month one wants good judgment.
Does Claude hallucinate? Is the Anthropic-run study objective? Are the data reproducible?
Hey, no, no, no. What do you expect in the dog-eat-dog world of smart software?
Here’s a statement from the write up that pushes aside my trivial questions:
The study found that Claude generally adheres to Anthropic’s prosocial aspirations, emphasizing values like “user enablement,” “epistemic humility,” and “patient wellbeing” across diverse interactions. However, researchers also discovered troubling instances where Claude expressed values contrary to its training.
Yes, pro-social. That’s a concept definitely relevant to certain prompts sent to Anthropic’s system.
Are the moral predilections consistent?
Of course not. The write up says:
Perhaps most fascinating was the discovery that Claude’s expressed values shift contextually, mirroring human behavior. When users sought relationship guidance, Claude emphasized “healthy boundaries” and “mutual respect.” For historical event analysis, “historical accuracy” took precedence.
Yes, inconsistency depending upon the prompt. Perfect.
Why does this occur? This statement reveals the depth and elegance of the Anthropic research into computer systems whose inner workings are tough for their developers to understand:
Anthropic’s values study builds on the company’s broader efforts to demystify large language models through what it calls “mechanistic interpretability” — essentially reverse-engineering AI systems to understand their inner workings. Last month, Anthropic researchers published groundbreaking work that used what they described as a “microscope” to track Claude’s decision-making processes. The technique revealed counterintuitive behaviors, including Claude planning ahead when composing poetry and using unconventional problem-solving approaches for basic math.
Several observations:
- Unlike Google, which is just saying, “We are the leaders,” Anthropic wants to be the good guys, explaining how its smart software is sensitive to squishy human values
- The write up itself is a content marketing gem
- There is scant evidence that the description of the Anthropic “findings” is reliable.
Let’s slap this Anthropic software into an autonomous drone and let it loose. It will be the AI system able to make those subjective decisions. Light it up and launch.
Stephen E Arnold, April 30, 2025
Google Wins AI, According to Google AI
April 29, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
Wow, not even insecure pop stars explain how wonderful they are at every opportunity. But Google is not going to stop explaining that it is number one in smart software. Never mind the lawsuits. Never mind the Deepseek thing. Never mind Sam AI-Man. Never mind angry Googlers who think the company will destroy the world.
Just get the message, “We have won.”
I know this because I read the weird PR interview called “Demis Hassabis Is Preparing for AI’s Endgame,” which is part of the “news” about the Time 100 most wonderful and intelligent and influential and talented and prescient people in the Time world.
Let’s take a quick look at a few of the statements in the marketing story. Because I am a dinobaby, I will wrap up with a few observations designed to make clear the difference between old geezers like me and the youthful new breed of Time leaders.
Here’s the first passage I noted:
He believes AGI [Googler Hassabis] would be a technology that could not only solve existing problems, but also come up with entirely new explanations for the universe. A test for its existence might be whether a system could come up with general relativity with only the information Einstein had access to; or if it could not only solve a longstanding hypothesis in mathematics, but theorize an entirely new one. “I identify myself as a scientist first and foremost,” Hassabis says. “The whole reason I’m doing everything I’ve done in my life is in the pursuit of knowledge and trying to understand the world around us.”
First comment. Yep, I noticed the reference to Einstein. That’s reasonable intellectual territory for a Googler. I want to point out that the Google is in a bit of legal trouble because it did not play fair. But neither did Einstein. Instead of fighting evil in Europe, he lit out for the US of A. I mean a genius of the Einstein ilk is not going to risk one’s life. Just think. Google is a thinking outfit, but I would suggest that its brush with authorities is different from Einstein’s. But a scientist working at an outfit in trouble with authorities, no big deal, right? AI is a way to understand the world around us. Breaking the law? What?
The second snippet is this one:
When DeepMind was acquired by Google in 2014, Hassabis insisted on a contractual firewall: a clause explicitly prohibiting his technology from being used for military applications. It was a red line that reflected his vision of AI as humanity’s scientific savior, not a weapon of war.
Well, that red line was drawn in erasable marker red. It has disappeared. And where is the Nobel prize winner? Still at the Google, the outfit that is in trouble with the law and reasonably good at discarding notions that don’t fit its goal of generating big revenue from ads and assorted other ventures like self-driving taxi cabs. Noble indeed.
Okay, here’s the third comment:
That work [dumping humans for smart software], he says, is not intended to hasten labor disruptions, but instead is about building the necessary scaffolding for the type of AI that he hopes will one day make its own scientific discoveries. Still, as research into these AI “agents” progresses, Hassabis says, expect them to be able to carry out increasingly more complex tasks independently. (An AI agent that can meaningfully automate the job of further AI research, he predicts, is “a few years away.”)
I think that Google will just say, “Yo, dudes, smart software is efficient. Those who lose their jobs can re-skill, like the humanoids we are allowing to find their future elsewhere.”
Several observations:
- I think that the Time people are trying to balance their fear of smart software replacing outfits like Time with the excitement of watching smart software create a new way of making a living. I don’t think the Timers achieved their goal.
- The message that Google thinks, cares, and has lofty goals just doesn’t ring true. Google is in trouble with the law for a reason. It was smart enough to make money, but it was not smart enough to avoid honking off regulators in some jurisdictions. I can’t reconcile illegal behavior with baloney about the good of mankind.
- Google wants to be seen as the big dog of AI. The problem is that saying something is different from the reality of trials, loss of trust among some customer sectors, floundering for a coherent message about smart software, and the baloney that the quantumly supreme Google convinces people to propagate.
Okay, you may love the Time write up. I am amused, and I think some of the lingo will find its way into the Sundar & Prabhakar Comedy Show. Did you hear the one about Google’s AI not being used for weapons?
Stephen E Arnold, April 29, 2025
China, Self-Amusement, and AI
April 29, 2025
China pokes fun at the United States whenever it can. Why? The Middle Kingdom wants to prove its superiority over the US. China does have many technological advances over its Western rival, and now the country has made another great leap forward with AI, says Business Insider: “China’s Baidu Releases Ernie X1, A New AI Reasoning Model.”
Baidu is China’s equivalent of Google, and it has released two new AI models. The first is Ernie X1, described as a reasoning model that delivers performance on par with Deepseek R1 at half the price. Baidu also released a multimodal foundation model called Ernie 4.5 that could potentially outperform GPT-4.5 at only a fraction of the price. In addition, Baidu offers the Ernie Bot, a free chatbot.
Baidu wants to offer the world cheap AI:
“Baidu’s new releases come as Silicon Valley reckons with the cost of AI models, largely spurred by the latest drops from Deepseek, a Chinese startup launched by hedge fund High Flyer.
In December, Deepseek released a large language model called V3, and in January, it unveiled a reasoning model called R1. The models are considered as good or better than equivalent models from OpenAI but priced “anywhere from 20-40x cheaper,” according to analysis from Bernstein Research.”
China is smart to develop inexpensive AI, but did the country have to make fun of Sesame Street? I mean Big Bird?
Whitney Grace, April 29, 2025