China Tough. US Weak: A Variation of the China Smart. US Dumb Campaign
May 6, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
Even members of my own team think I am confusing information about China’s technology with my dinobaby peculiarities. That may be. Nevertheless, I want to document the story “The Ancient Chinese General Whose Calm During Surgery Is Still Told of Today.” I know it is. I just read a modern retelling of the tale in the South China Morning Post. (Hey. Where did that paywall go?)
The basic idea is that a Chinese leader (tough by genetics and mental discipline) had dinner with some colleagues. A physician showed up and told the general, “You have poison in your arm bone.”
The leader allegedly told the physician,
“No big deal. Do the surgery here at the dinner table.”
The leader let the doc chop open his arm, remove the diseased area, and stitch him up. Now here’s the item in the write up I find interesting because it makes clear [a] the leader’s indifference to his colleagues, who might find this surgical procedure an appetite killer, and [b] the inadequate basin for catching the blood that flowed once the incision was made. Keep in mind that the leader did not need any soporific, and he continued to chit chat with his colleagues. I assume the leader’s anecdotes and social skills kept his guests mesmerized.
Here’s the detail from the China Tough. US Weak write up:
“Guan Yu [the tough leader] calmly extended his arm for the doctor to proceed. At the time, he was sitting with fellow generals, eating and drinking together. As the doctor cut into his arm, blood flowed profusely, overflowing the basin meant to catch it. Yet Guan Yu continued to eat meat, drink wine, and chat and laugh as if nothing was happening.”
Yep, blood flowed profusely. Just the extra that sets one meal apart from another. The closest approximation in my experience was arriving at a fast food restaurant after a shooting. Quite a mess and the odor did not make me think of a cheeseburger with ketchup.
I expect that members of my team will complain about this blog post. That’s okay. I am a dinobaby, but I think this variation on the China Smart. US Dumb information flow is interesting. Okay, anyone want to pop over for fried squirrel? We can skin, gut, and fry them at one go. My mouth is watering at the thought. If we are lucky, one of the group will have bagged a deer. Now that’s an opportunity to add some of that hoist, skin, cut, and grill work to the evening meal. Guan Yu, the tough Chinese leader, would definitely get with the kitchen work.
Stephen E Arnold, May 6, 2025
AI Chatbots Now Learning Russian Propaganda
May 6, 2025
Gee, who would have guessed? Forbes reports, “Russian Propaganda Has Now Infected Western AI Chatbots—New Study.” Contributor Tor Constantino cites a recent NewsGuard report as he writes:
“A Moscow-based disinformation network known as ‘Pravda’ — the Russian word for ‘truth’ — has been flooding search results and web crawlers with pro-Kremlin falsehoods, causing AI systems to regurgitate misleading narratives. The Pravda network, which published 3.6 million articles in 2024 alone, is leveraging artificial intelligence to amplify Moscow’s influence at an unprecedented scale. The audit revealed that 10 leading AI chatbots repeated false narratives pushed by Pravda 33% of the time. Shockingly, seven of these chatbots directly cited Pravda sites as legitimate sources. In an email exchange, NewsGuard analyst Isis Blachez wrote that the study does not ‘name names’ of the AI systems most susceptible to the falsehood flow but acknowledged that the threat is widespread.”
Blachez believes a shift is underway from Russian operatives directly targeting readers to manipulation of AI models. Much more efficient. And sneaky. We learn:
“One of the most alarming practices uncovered is what NewsGuard refers to as ‘LLM grooming.’ This tactic is described as the deliberate deception of datasets that AI models — such as ChatGPT, Claude, Gemini, Grok 3, Perplexity and others — train on by flooding them with disinformation. Blachez noted that this propaganda pile-on is designed to bias AI outputs to align with pro-Russian perspectives. Pravda’s approach is methodical, relying on a sprawling network of 150 websites publishing in dozens of languages across 49 countries.”
AI firms can try to block propaganda sites from their models’ curriculum, but the operation is so large and elaborate it may be impossible. And also, how would they know if they had managed to do so? Nevertheless, Blachez encourages them to try. Otherwise, tech firms are destined to become conduits for the Kremlin’s agenda, she warns.
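For illustration only, here is a minimal sketch of what domain-level filtering of a training corpus might look like. The domain names and document record format are invented, and the article’s point still stands: the hard part is keeping up with a sprawling, constantly mutating network, not writing the filter.

```python
from urllib.parse import urlparse

# Hypothetical blocklist; a real one would come from a vetted ratings source.
BLOCKED_DOMAINS = {"example-pravda-mirror.net", "another-front-site.org"}

def is_blocked(url: str) -> bool:
    """True if the document's source domain (or a subdomain of it) is blocklisted."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def filter_corpus(documents):
    """Drop training documents whose source URL resolves to a blocked domain."""
    return [doc for doc in documents if not is_blocked(doc["source_url"])]

# Hypothetical records for a quick check.
corpus = [
    {"source_url": "https://example-pravda-mirror.net/story1", "text": "..."},
    {"source_url": "https://legit-news.example.org/story2", "text": "..."},
]
print(len(filter_corpus(corpus)))  # prints 1
```

Even a sketch like this assumes the curators already know which domains to block, which is exactly the knowledge a 150-site, 49-country network is designed to obscure.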
Of course, the rest of us have a responsibility here as well. We can and should double check information served up by AI. NewsGuard suggests its own Misinformation Fingerprints, a catalog of provably false claims it has found online. Or here is an idea: maybe do not turn to AI for information in the first place. After all, the tools are notoriously unreliable. And that is before Russian operatives get involved.
Cynthia Murrell, May 6, 2025
Anthropic Discovers a Moral Code in Its Smart Software
April 30, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
With the United Arab Emirates starting to use smart software to make its laws, the idea that artificial intelligence has a sense of morality is reassuring. Who would want a person judged guilty by a machine to face incarceration, a fine, or — gulp! — worse?
“Anthropic Just Analyzed 700,000 Claude Conversations — And Found Its AI Has a Moral Code of Its Own” explains:
The [Anthropic] study examined 700,000 anonymized conversations, finding that Claude largely upholds the company’s “helpful, honest, harmless” framework while adapting its values to different contexts — from relationship advice to historical analysis. This represents one of the most ambitious attempts to empirically evaluate whether an AI system’s behavior in the wild matches its intended design.
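As a rough illustration of what “empirically evaluating” expressed values could involve, here is a minimal sketch: tag each response with value labels and tally them per conversation context. The records, contexts, and tags below are invented; this is not Anthropic’s pipeline.

```python
from collections import Counter, defaultdict

# Hypothetical labeled conversations (context of use plus value tags per response).
conversations = [
    {"context": "relationship advice", "values": ["healthy boundaries", "mutual respect"]},
    {"context": "historical analysis", "values": ["historical accuracy"]},
    {"context": "relationship advice", "values": ["mutual respect"]},
]

# Tally which values the model expresses most often in each context.
by_context = defaultdict(Counter)
for convo in conversations:
    by_context[convo["context"]].update(convo["values"])

for context, tally in by_context.items():
    print(context, tally.most_common(2))
```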
Two philosophers watch as the smart software explains the meaning of “situational and hallucinatory ethics.” Thanks, OpenAI. I bet you are glad those former employees of yours quit. Imagine. Ethics and morality getting in the way of accelerationism.
Plus the company has “hope”, saying:
“Our hope is that this research encourages other AI labs to conduct similar research into their models’ values,” said Saffron Huang, a member of Anthropic’s Societal Impacts team who worked on the study, in an interview with VentureBeat. “Measuring an AI system’s values is core to alignment research and understanding if a model is actually aligned with its training.”
The study is definitely not part of the firm’s marketing campaign. The write up includes this telling passage:
The research arrives at a critical moment for Anthropic, which recently launched “Claude Max,” a premium $200 monthly subscription tier aimed at competing with OpenAI’s similar offering. The company has also expanded Claude’s capabilities to include Google Workspace integration and autonomous research functions, positioning it as “a true virtual collaborator” for enterprise users, according to recent announcements.
For $2,400 per year, a user of the smart software would not want to do something improper, immoral, unethical, or just plain bad. I know that humans have some difficulty defining these aspects of human behavior in simple terms. It is a step forward that software has the meanings and can apply them. And for $200 a month one wants good judgment.
Does Claude hallucinate? Is the Anthropic-run study objective? Are the data reproducible?
Hey, no, no, no. What do you expect in the dog-eat-dog world of smart software?
Here’s a statement from the write up that pushes aside my trivial questions:
The study found that Claude generally adheres to Anthropic’s prosocial aspirations, emphasizing values like “user enablement,” “epistemic humility,” and “patient wellbeing” across diverse interactions. However, researchers also discovered troubling instances where Claude expressed values contrary to its training.
Yes, pro-social. That’s a concept definitely relevant to certain prompts sent to Anthropic’s system.
Are the moral predilections consistent?
Of course not. The write up says:
Perhaps most fascinating was the discovery that Claude’s expressed values shift contextually, mirroring human behavior. When users sought relationship guidance, Claude emphasized “healthy boundaries” and “mutual respect.” For historical event analysis, “historical accuracy” took precedence.
Yes, inconsistency depending upon the prompt. Perfect.
Why does this occur? This statement reveals the depth and elegance of the Anthropic research into computer systems whose inner workings are tough for their developers to understand:
Anthropic’s values study builds on the company’s broader efforts to demystify large language models through what it calls “mechanistic interpretability” — essentially reverse-engineering AI systems to understand their inner workings. Last month, Anthropic researchers published groundbreaking work that used what they described as a “microscope” to track Claude’s decision-making processes. The technique revealed counterintuitive behaviors, including Claude planning ahead when composing poetry and using unconventional problem-solving approaches for basic math.
Several observations:
- Unlike Google, which is just saying, “We are the leaders,” Anthropic wants to be the good guys, explaining how its smart software is sensitive to squishy human values
- The write up itself is a content marketing gem
- There is scant evidence that the description of the Anthropic “findings” is reliable.
Let’s slap this Anthropic software into an autonomous drone and let it loose. It will be the AI system able to make those subjective decisions. Light it up and launch.
Stephen E Arnold, April 30, 2025
Google Wins AI, According to Google AI
April 29, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
Wow, not even insecure pop stars explain how wonderful they are at every opportunity. But Google is not going to stop explaining that it is number one in smart software. Never mind the lawsuits. Never mind the Deepseek thing. Never mind Sam AI-Man. Never mind angry Googlers who think the company will destroy the world.
Just get the message, “We have won.”
I know this because I read the weird PR interview called “Demis Hassabis Is Preparing for AI’s Endgame,” which is part of the “news” about the Time 100 most wonderful and intelligent and influential and talented and prescient people in the Time world.
Let’s take a quick look at a few of the statements in the marketing story. Because I am a dinobaby, I will wrap up with a few observations designed to make clear the difference between old geezers like me and the youthful new breed of Time leaders.
Here’s the first passage I noted:
He believes AGI [Googler Hassabis] would be a technology that could not only solve existing problems, but also come up with entirely new explanations for the universe. A test for its existence might be whether a system could come up with general relativity with only the information Einstein had access to; or if it could not only solve a longstanding hypothesis in mathematics, but theorize an entirely new one. “I identify myself as a scientist first and foremost,” Hassabis says. “The whole reason I’m doing everything I’ve done in my life is in the pursuit of knowledge and trying to understand the world around us.”
First comment. Yep, I noticed the reference to Einstein. That’s reasonable intellectual territory for a Googler. I want to point out that the Google is in a bit of legal trouble because it did not play fair. But neither did Einstein. Instead of fighting evil in Europe, he lit out for the US of A. I mean a genius of the Einstein ilk is not going to risk one’s life. Just think. Google is a thinking outfit, but I would suggest that its brush with authorities is different from Einstein’s. But a scientist working at an outfit in trouble with authorities, no big deal, right? AI is a way to understand the world around us. Breaking the law? What?
The second snippet is this one:
When DeepMind was acquired by Google in 2014, Hassabis insisted on a contractual firewall: a clause explicitly prohibiting his technology from being used for military applications. It was a red line that reflected his vision of AI as humanity’s scientific savior, not a weapon of war.
Well, that red line was made of erasable marker red. It has disappeared. And where is the Nobel prize winner? Still at the Google, the outfit that is in trouble with the law and reasonably good at discarding notions that don’t fit its goal of generating big revenue from ads and assorted other ventures like self-driving taxi cabs. Noble indeed.
Okay, here’s the third comment:
That work [dumping humans for smart software], he says, is not intended to hasten labor disruptions, but instead is about building the necessary scaffolding for the type of AI that he hopes will one day make its own scientific discoveries. Still, as research into these AI “agents” progresses, Hassabis says, expect them to be able to carry out increasingly more complex tasks independently. (An AI agent that can meaningfully automate the job of further AI research, he predicts, is “a few years away.”)
I think that Google will just say, “Yo, dudes, smart software is efficient. Those who lose their jobs can re-skill like the humanoids we are allowing to find their future elsewhere.”
Several observations:
- I think that the Time people are trying to balance their fear of smart software replacing outfits like Time with the excitement of watching smart software create a new way of making a living. I don’t think the Timers achieved their goal.
- The message that Google thinks, cares, and has lofty goals just doesn’t ring true. Google is in trouble with the law for a reason. It was smart enough to make money, but it was not smart enough to avoid honking off regulators in some jurisdictions. I can’t reconcile illegal behavior with baloney about the good of mankind.
- Google wants to be seen as the big dog of AI. The problem is that saying something is different from the reality of trials, loss of trust among some customer sectors, floundering for a coherent message about smart software, and the baloney that the quantumly supreme Google convinces people to propagate.
Okay, you may love the Time write up. I am amused, and I think some of the lingo will find its way into the Sundar & Prabhakar Comedy Show. Did you hear the one about Google’s AI not being used for weapons?
Stephen E Arnold, April 29, 2025
Innovation: America Has That Process Nailed
April 27, 2025
No AI. Just a dinobaby who gets revved up with buzzwords and baloney.
Has innovation slowed? In smart software, I read about clever uses of AI and ML (artificial intelligence and machine learning). But in my tests of various systems I find the outputs occasionally useful. Yesterday, I wanted information about a writer who produced an article about a security issue involving the US government. I tried five systems; none located the individual. I finally tracked the person down using manual methods. The smart software was clueless.
An example of American innovation caught my attention this morning (April 27, 2025, at 5:20 am US Eastern time, to be exact). I noted the article “Khloé Kardashian Announces Protein Popcorn.” The write up explains:
For anyone khounting their makhros, reality star and entrepreneur Khloé Kardashian unveiled her new product this week: Khloud Protein Popcorn. The new snack boasts 7 grams of protein per serving—two more grams than an entire Jack Links Beef Stick—aligning with consumers’ recent obsession with protein-packed food and drinks. The popcorn isn’t covered in burnt ends—its protein boost comes from a proprietary blend of seasonings and milk protein powder called “Khloud dust” that’s sprinkled over the air-popped kernels.
My thought is that smart software may have contributed to the name of the product: Khloud Protein Popcorn, but I don’t know. The idea that enhanced popcorn has more protein than “an entire Jack Links Beef Stick” is quite innovative, I think. Samuel Franklin, author of The Cult of Creativity, may have a different view. Creativity, he asserts, did not become a thing until 1875. I think Khloud Protein Popcorn demonstrates that ingenuity, cleverness, imagination, and artistry are definitely alive and thriving in the Kardashian idea laboratory.
I wonder if this type of innovation is going to resolve some of the problems which appear to beset daily life in April 2025. I doubt it unless one needs some fortification delivered via popcorn.
Without being too creative or innovative in my thinking, is AI/ML emulating Khloé Kardashian’s protein popcorn? We have a flawed but useful service: Web search. That functionality has been degrading for many reasons. To make it possible to find information germane to a particular topic, innovators have jumped on one horse and started riding it to the future. But the horse is getting tired. In fact, after a couple of years of riding around the barn, the innovations in large language models seem to be getting tired, slowing down, and in some cases limping along.
The big announcements from Google, Microsoft, and OpenAI focus on the number of users each has. I think Google said it had 1.5 billion users of its smart software. Can Google “prove” it? Probably, but is that number verifiable? Sure, just like the amount of protein in the Khloud dust sprinkled on the aforementioned popcorn. OpenAI’s ChatGPT on April 26, 2025, output a smarmy message about a system issue. The new service Venice was similarly uncooperative, unable in fact to locate information about a particular Telegram topic related to Telegram discontinuing its Bridge service. Poor Perplexity was very wordy and very confident, but its explanation of why it could not locate an item of information was hardly a confidence builder.
Here’s my hypothesis: AI/ML, LLMs, and the rest of the smart software jargon have embraced Ms. Kardashian’s protein popcorn approach to doing something new, fresh, creative, and exciting. Imagine AI/ML solutions having more value than an “entire Jack Links Beef Stick.” Next up, smart protein popcorn.
Innovative indeed.
Stephen E Arnold, April 27, 2025
Google Is Just Like Santa with Free Goodies: Get “High” Grades, of Course
April 18, 2025
No AI, just the dinobaby himself.
Google wants to be [a] viewed as the smartest quantumly supreme outfit in the world and [b] like Santa. The “smart” part is part of the company’s culture. The CLEVER approach worked in Web search. Now the company faces what might charitably be called headwinds. There are those pesky legal hassles in the US and some gaining strength in other countries. Also, the competitive world of smart software continues to bedevil the very company that “invented” the transformer. Google gave away some technology, and now everyone from the update champs in Redmond, Washington, to Sam AI-Man is blowing smoke about Google’s systems and methods.
What a state of affairs!
The fix is to give away access to Google’s most advanced smart software to college students. How Santa-like. “Google Is Gifting a Year of Gemini Advanced to Every College Student in the US” reports:
Google has announced today that it’s giving all US college students free access to Gemini Advanced, and not just for a month or two—the offer is good for a full year of service. With Gemini Advanced, you get access to the more capable Pro models, as well as unlimited use of the Deep Research tool based on it. Subscribers also get a smattering of other AI tools, like the Veo 2 video generator, NotebookLM, and Gemini Live. The offer is for the Google One AI Premium plan, so it includes more than premium AI models, like Gemini features in Google Drive and 2TB of Drive storage.
The approach is not new. LexisNexis was one of the first online services to make online legal research available to law school students. It worked. Lawyers are among the savviest of the work fast, bill more professionals. When did LexisNexis move this forward? I recall speaking to a LexisNexis professional named Don Wilson in 1980, and he was eager to tell me about this “new” approach.
I asked Mr. Wilson (who as I recall was a big wheel at LexisNexis then), “That’s a bit like drug dealers giving the curious a ‘taste’?”
He smiled and said, “Exactly.”
In the last 45 years, lawyers have embraced new technology with a passion. I am not going to go through the litany of search, analysis, summarization, and other tools that heralded the success of smart software for the legal folks. I recall the early days of LegalTech when the most common question was, “How?” My few conversations with the professionals laboring in the jungle of law, rules, and regulations have shifted to “which system” and “how much.”
The marketing professionals at Google have “invented” their own approach to hook college students on smart software. My instinct is that Google does not know much about Don Wilson’s big idea. (As an aside, I remember that one of Mr. Wilson’s technical colleagues sometimes sported a silver jumpsuit which anticipated some of the fashion choices of Googlers by half a century.)
The write up says:
Google’s intention is to give students an entire school year of Gemini Advanced from now through finals next year. At the end of the term, you can bet Google will try to convert students to paying subscribers.
I am not sure I agree with this. If the program gets traction, Sam AI-Man and others will be standing by with special offers, deals, and free samples. The chemical structure of certain substances is similar to today’s many variants of smart software. Hey, whatever works, right? Whatever is free, right?
Several observations:
- Google’s originality is quantumly supreme
- Some people at the Google dress like Mr. Wilson’s technical wizard, jumpsuit and all
- The competition is going to do their own version of this “original” marketing idea; for example, didn’t Bing offer to pay people to use that outstanding Web search-and-retrieval system?
Net net: Hey, want a taste? It won’t hurt anything. Try it. You will be mentally sharper. You will be more informed. You will have more time to watch YouTube. Trust the Google.
Stephen E Arnold, April 18, 2025
Google Gemini 2.5: A Somewhat Interesting Content Marketing Write Up
April 18, 2025
Just a still alive dinobaby. No smart software involved.
How about this headline: “Google’s Gemini 2.5 Pro Is the Smartest Model You’re Not Using – and 4 Reasons It Matters for Enterprise AI”?
OpenAI scroogled the Google again. First, it was the January 2023 starting gun for AI hype. Now it was the release of a Japanese cartoon style for ChatGPT. Who knew that Japanese cartoons could have blasted the Google Gemini 2.5 Pro launch more effectively than a detonation of a failed SpaceX rocket?
The write up pants:
Gemini 2.5 Pro marks a significant leap forward for Google in the foundational model race – not just in benchmarks, but in usability. Based on early experiments, benchmark data, and hands-on developer reactions, it’s a model worth serious attention from enterprise technical decision-makers, particularly those who’ve historically defaulted to OpenAI or Claude for production-grade reasoning.
Yeah, whatever.
Announcements about Google AI are about as satisfying as pizza with glued-on cheese or Apple’s AI fantasy PR about “intelligence.”
But I like this statement:
Bonus: It’s Just Useful
The headline and this “just useful” make it clear none of Google’s previous AI efforts are winning the social media buzz game. Plus, the author points out that billions of Google dollars have not made the smart software speedy. And if you want to have smart software write that history paper about Germany after WW 2, stick with other models which feature “conversational smoothness.”
Quite an advertisement. A headline that says, “No one is using this,” and a body that says it is sluggish and writes in a way that will get a student flagged for cheating.
Stick to ads maybe?
And what about “why it matters for enterprise AI”? Yeah, nice omission.
Stephen E Arnold, April 18, 2025
Trust: Zuck, Meta, and Llama 4
April 17, 2025
Sorry, no AI used to create this item.
CNET published a very nice article that says to me: “Hey, we don’t trust you.” Navigate to “Meta Llama 4 Benchmarking Confusion: How Good Are the New AI Models?” The write up is like a wimpy version of the old PC Perspective podcast with Ryan Shrout. Before the embrace of Intel’s intellectual blanket, the podcast would raise questions about video card benchmarks. Most of the questions addressed: “Is this video card that fast?” In some cases, yes, the video card benchmarks were close to the real world. In other cases, video card manufacturers did what the butcher on Knoxville Avenue did in 1951. Mr. Wilson put his thumb on the scale. My grandmother kept a close eye on friendly Mr. Wilson, who drove a new Buick in a very, very modest neighborhood. He did not smile as broadly when my grandmother and I would enter the store for a chicken.
Would anyone subject an AI professional to this type of test? Of course not. But the idea has a certain charm. Plus, if the person dies, he was fooling. If the person survives, that individual is definitely a witch. This was a winning method for some enlightened leaders at one time.
The CNET story says about the Zuck’s most recent non-virtual reality investment:
Meta’s Llama 4 models Maverick and Scout are out now, but they might not be the best models on the market.
That’s a good way to say, “Liar, liar, pants on fire.”
The article adds:
the model that Meta actually submitted to the LMArena tests is not the model that is available for people to use now. The model submitted for testing is called “llama-4-maverick-03-26-experimental.” In a footnote on a chart on Llama’s website (not the announcement), in tiny font in the final bullet point, Meta clarifies that the model submitted to LMArena was “optimized for conversationality.”
Isn’t this a GenZ way to say, “You put your thumb on the scale, Mr. Wilson”?
Let’s review why one should think about the desire to make something better than it is:
- Meta’s decision is just marketing. Think about the self driving Teslas. Consequences? Not for fibbing.
- The Meta engineers have to deliver good news. Who wants to tell the Zuck that the Llama innovations are like making the VR thing a big winner? Answer: No one who wants to get a bonus and curry favor.
- Meta does not have the ability to distinguish good from bad. The model swap is what Meta is going to do anyway. So why not just use it? No big deal. Is this a moral and ethical dead zone?
What’s interesting is that from my point of view, Meta and the Zuck have a standard operating procedure. I am not sure that aligns with what some people expect. But as long as the revenue flows and meaningful regulation of social media remains a windmill for today’s Don Quixotes, Meta is the best — until another AI leader puts out a quantumly supreme news release.
Stephen E Arnold, April 17, 2025
Google AI: Invention Is the PR Game
April 17, 2025
Google was so excited to tout its AI’s great achievement: In under 48 hours, it solved a medical problem that vexed human researchers for a decade. Great! Just one hitch. As Pivot to AI tells us, "Google Co-Scientist AI Cracks Superbug Problem in Two Days!—Because It Had Been Fed the Team’s Previous Paper with the Answer In It." With that detail, the feat seems much less impressive. In fact, two days seems downright sluggish. Writer David Gerard reports:
"The hype cycle for Google’s fabulous new AI Co-Scientist tool, based on the Gemini LLM, includes a BBC headline about how José Penadés’ team at Imperial College asked the tool about a problem he’d been working on for years — and it solved it in less than 48 hours! [BBC; Google] Penadés works on the evolution of drug-resistant bacteria. Co-Scientist suggested the bacteria might be hijacking fragments of DNA from bacteriophages. The team said that if they’d had this hypothesis at the start, it would have saved years of work. Sounds almost too good to be true! Because it is. It turns out Co-Scientist had been fed a 2023 paper by Penadés’ team that included a version of the hypothesis. The BBC coverage failed to mention this bit. [New Scientist, archive]"
It seems this type of Googley AI over-brag is a pattern. Gerard notes the company claims Co-Scientist identified new drugs for liver fibrosis, but those drugs had already been studied for this use. By humans. He also reminds us of this bit of truth-stretching from 2023:
"Google loudly publicized how DeepMind had synthesized 43 ‘new materials’ — but studies in 2024 showed that none of the materials was actually new, and that only 3 of 58 syntheses were even successful. [APS; ChemrXiv]"
So the next time Google crows about an AI achievement, we have to keep in mind that AI often is a synonym for PR.
Cynthia Murrell, April 17, 2025
China Smart, US Dumb: The Fluid Mechanics Problem Solved
April 16, 2025
There are many puzzles that haven’t been solved, but with advanced technology and new ways of thinking some of them are finally getting answered. Two Chinese mathematicians working in the United States claim to have solved an old puzzle involving fluid mechanics says the South China Morning Post: “Chinese Mathematicians Say They Have Cracked Century-Old Fluid Mechanics Puzzle.”
Fluid mechanics is a field of study used in engineering; it is applied to aerodynamics, dam and bridge design, and hydraulic systems. The Chinese mathematicians are Deng Yu from the University of Chicago and Ma Xiao from the University of Michigan. They were joined by their international collaborator Zaher Hani, also of the University of Michigan. They published a paper to arXiv, a platform that posts research papers before they are peer reviewed. The team said they found a solution to “Hilbert’s sixth problem.”
What exactly did the mathematicians solve?
“At the intersection of physics and mathematics, researchers ask whether it is possible to establish physics as a rigorous branch of mathematics by taking microscopic laws as axioms and proving macroscopic laws as theorems. Axioms are mathematical statements that are assumed to be true, while a theorem is a logical consequence of axioms.
Hilbert’s sixth problem addresses that challenge, according to a post by Ma on Wednesday on Zhihu, a Quora-like Chinese online content platform.”
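For readers who want the flavor of the mathematics, the program behind Hilbert’s sixth problem is often summarized as a chain of rigorous limits. This schematic is a common textbook-style summary, not something taken from the Deng, Ma, and Hani paper itself:

```latex
% Schematic of the classical program (illustrative summary only):
% microscopic axioms --> kinetic theory --> macroscopic fluid equations
\[
\text{Newtonian particle dynamics}
\;\xrightarrow{\ \text{Boltzmann--Grad limit}\ }\;
\underbrace{\partial_t f + v \cdot \nabla_x f = Q(f,f)}_{\text{Boltzmann equation}}
\;\xrightarrow{\ \text{hydrodynamic limit}\ }\;
\text{Euler / Navier--Stokes equations}
\]
```

In plain terms: start from particles obeying Newton’s laws, derive the statistical (kinetic) description, and from that derive the fluid equations engineers actually use.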
David Hilbert proposed this as one of twenty-three problems he presented in 1900 at the International Congress of Mathematicians. China is taking credit for these mathematicians and their work. China wants to point out how smart it is, while it likes to poke fun at the “dumb” United States. Let’s make our own point that these Chinese mathematicians are living and working in the United States.
Whitney Grace, April 16, 2025