China Smart US Dumb: An AI Content Marketing Push?

December 1, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I have been monitoring the China Smart, US Dumb campaign for some time. Most of the methods are below the radar; for example, YouTube videos featuring industrious people who seem to be similar to the owner of the Chinese restaurant not far from my office or posts on social media that remind me of the number of Chinese patents achieved each year. Sometimes influencers tout the wonders of a China-developed electric vehicle. None of these sticks out like a semi mainstream media push.


Thanks, Venice.ai, not exactly the hutong I had in mind but close enough for chicken kung pao in Kentucky.

However, that background “China Smart, US Dumb” messaging may be cranking up. I don’t know for sure, but this NBC News (not the Miss Now news) report caught my attention. Here’s the title:

More of Silicon Valley Is Building on Free Chinese AI

The subtitle is snappier than Girl Fixes Generator, but you judge for yourself:

AI Startups Are Seeing Record Valuations, But Many Are Building on a Foundation of Cheap, Free-to-Download Chinese AI Models.

The write up states:

Surveying the state of America’s artificial intelligence landscape earlier this year, Misha Laskin was concerned. Laskin, a theoretical physicist and machine learning engineer who helped create some of Google’s most powerful AI models, saw a growing embrace among American AI companies of free, customizable and increasingly powerful “open” AI models.

We have a Xoogler who is concerned. What troubles the wizardly Misha Laskin? NBC News intones in a Stone Phillips’ tone:

Over the past year, a growing share of America’s hottest AI startups have turned to open Chinese AI models that increasingly rival, and sometimes replace, expensive U.S. systems as the foundation for American AI products.

Ever cautious, NBC News asserts:

The growing embrace could pose a problem for the U.S. AI industry. Investors have staked tens of billions on OpenAI and Anthropic, wagering that leading American artificial intelligence companies will dominate the world’s AI market. But the increasing use of free Chinese models by American companies raises questions about how exceptional those models actually are — and whether America’s pursuit of closed models might be misguided altogether.

Bingo! The theme is China smart and the US “misguided.” And not just misguided, but “misguided altogether.”

NBC News slams the point home with more force than the generator-repairing Asian female closes the generator’s housing:

in the past year, Chinese companies like Deepseek and Alibaba have made huge technological advancements. Their open-source products now closely approach or even match the performance of leading closed American models in many domains, according to metrics tracked by Artificial Analysis, an independent AI benchmarking company.

I know from personal conversations that most of the people with whom I interact don’t care. Most just accept the belief that the US is chugging along. Not doing great. Not doing terribly. Just moving along. Therefore, I don’t expect you, gentle reader, to think much of this NBC News report.

That’s why the China Smart, US Dumb messaging is effective. But this single example raises the question, “What’s the next major messaging outlet to cover this story?”

Stephen E Arnold, December 1, 2025

AI ASICs: China May Have Plans for AI Software and AI Hardware

December 1, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.

I try to avoid wild and crazy generalizations, but I want to step back from the US-centric AI craziness and ask a question: “Why is the solution to anticipated AI growth more data centers?” Data centers seem like a trivial part of the broader AI challenge to some of the venture firms, BAIT (big AI technology) companies, and some online pundits. A data center is a cheap building filled with racks of computers, some specialized gizmos, a connection to the local power company, and a handful of network engineers. Bingo. You are good to go.

But what happens if the compute is provided by Application-Specific Integrated Circuits, or ASICs? When ASICs became available for cryptocurrency mining, the individual or small-scale miner was no longer attractive. What happened is that large, industrialized crypto mining farms pushed out the individual miners and mom-and-pop data centers.


The Ghana ASIC rollout appears to have overwhelmed the person taking orders. Demand for cheap AI compute is strong. Is that person in the blue suit from Nvidia? Thanks, MidJourney. Good enough, the mark of excellence today.

Amazon, Google, and probably other BAIT outfits want to design their own AI chips. The problem is similar to moving silos of corn to a processing plant with a couple of pickup trucks. Capacity at chip fabrication facilities is constrained. Big chip ideas today may not be possible on the time scale set by the teams designing NFL-arena-size data centers in Rhode Island- or Mississippi-type locations.

“Could a New Generation of Dedicated AI Chips Burst Nvidia’s Bubble and Do for AI GPUs What ASICs Did for Crypto Mining?” reports:

A Chinese startup founded by a former Google engineer claims to have created a new ultra-efficient and relatively low cost AI chip using older manufacturing techniques. Meanwhile, Google itself is now reportedly considering whether to make its own specialized AI chips available to buy. Together, these chips could represent the start of a new processing paradigm which could do for the AI industry what ASICs did for bitcoin mining.

What those ASICs did for crypto mining was shift calculations from individuals to large, centralized data centers. Yep, centralization is definitely better. Big is a positive as well.

The write up adds:

The Chinese startup is Zhonghao Xinying. Its Ghana chip is claimed to offer 1.5 times the performance of Nvidia’s A100 AI GPU while reducing power consumption by 75%. And it does that courtesy of a domestic Chinese chip manufacturing process that the company says is "an order of magnitude lower than that of leading overseas GPU chips." By "an order of magnitude lower," the assumption is that means well behind in technological terms given China’s home-grown chip manufacturing is probably a couple of generations behind the best that TSMC in Taiwan can offer and behind even what the likes of Intel and Samsung can offer, too.
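Taken at face value, the quoted claims imply a striking efficiency gap. A quick back-of-the-envelope check (my arithmetic, not the article’s; it uses only the relative figures quoted above, no absolute wattage):

```python
# Sanity-check the quoted Ghana chip claims on a performance-per-watt basis.
# All numbers are relative to the Nvidia A100 baseline.

a100_perf = 1.0           # baseline performance
a100_power = 1.0          # baseline power draw

ghana_perf = 1.5          # "1.5 times the performance of Nvidia's A100"
ghana_power = 1.0 - 0.75  # "reducing power consumption by 75%"

ratio = (ghana_perf / ghana_power) / (a100_perf / a100_power)
print(ratio)  # 6.0 -- a 6x performance-per-watt edge, if the claims hold
```

Six times the performance per watt would indeed be disruptive at data-center scale, which is why the claims deserve independent verification before anyone reroutes an aircraft carrier.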

The idea is that if these chips become widely available, they won’t be very good. Probably like the first Chinese BYD electric vehicles. But after some iterative engineering, the Chinese chips are likely to improve. If these improvements coincide with the turn on of the massive data centers the BAIT outfits are building, there might be rethinking required by the Silicon Valley wizards.

Several observations follow; they are probably not endorsed by anyone other than myself:

  1. China might subsidize its home grown chips. The Googler is not the only person in the Middle Kingdom trying to find a way around the US approach to smart software. Cheap wins or is disruptive until neutralized in some way.
  2. New data centers based on the Chinese chips might find customers interested in stepping away from dependence on a technology that most AI companies are using for “me too,” imitative AI services. Competition is good, says Silicon Valley, until it impinges on our business. At that point, tough-to-predict actions come into play.
  3. Nvidia and other AI-centric companies might find themselves trapped in AI strategies that are comparable to a large US aircraft carrier. These ships are impressive, but it takes time to slow them down, turn them, and steam in a new direction. If Chinese AI ASICs hit the market and improve rapidly, the captains of the US-flagged Transformer vessels will have their hands full and financial officers clamoring for the leaderships’ attention.

Net net: Ponder this question: What is Ghana gonna do?

Stephen E Arnold, December 1, 2025

Deloitte and AI: Impact, Professionalism, and Integrity. Absolutely But Don’t Forget Billable

December 1, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.

Do you recognize any of these catchphrases?

  1. Making an impact that matters
  2. Professionalism in Practice
  3. Serving Those Who Serve Business
  4. Service with Integrity

Time’s up. Each of these was — note the past tense — associated in my mind with Deloitte (originally Deloitte Haskins & Sells before the firm became a general consulting firm). Today’s Deloitte is a representative blue chip consulting outfit. I am not exactly sure what shade of blue is appropriate. There is true blue, Jack Benny’s blue eyes, and the St. Louis blues. Then there are the blues associated with a “small” misstep with AI. Understatement is useful at blue chip consulting services firms.


Thanks, Venice.ai. Good enough.

I read Fortune Magazine’s story “Deloitte Allegedly Cited AI-Generated Research in a Million-Dollar Report for a Canadian Provincial Government.” The write up states with the alleged hedge:

The Deloitte report contained false citations, pulled from made-up academic papers to draw conclusions for cost-effectiveness analyses, and cited real researchers on papers they hadn’t worked on, the Independent found. It included fictional papers coauthored by researchers who said they had never worked together.

Now I have had some experience with blue chip and light blue chip consulting firms in my half century of professional work. I have watched some interesting methods used to assemble documents for clients. The most memorable was employed by a special consultant dragooned by a former high-ranking US government official who served in both the Nixon and Ford administrations. The “special” dude was smarter than anyone else at my blue chip firm at the time because, he told me, he used his university lecture notes as part of our original research. Okay, that worked and was approved by the former government big wheel who was on a contract with our firm.

I do not recall making up data for any project on which I worked. I thought that my boss did engage in science fiction when he dreamed up our group’s revenue goals for each quarter, but the client did not get these fanciful, often juicy numbers.

The write up presents what Deloitte allegedly said:

“Deloitte Canada firmly stands behind the recommendations put forward in our report,” a Deloitte Canada spokesperson told Fortune in a statement. “We are revising the report to make a small number of citation corrections, which do not impact the report findings. AI was not used to write the report; it was selectively used to support a small number of research citations.”

Several random thoughts:

  1. Deloitte seems to be okay with their professionals’ use of smart software. I wonder if the framing of the problem, the upsides, the downsides of options, and strategic observations were output as a result of AI prompts?
  2. AI does make errors. Does Deloitte have a process in place to verify the information in documents submitted to a client? If the answer is yes, it is not working. If the answer is no, perhaps Deloitte should consider developing such a system?
  3. I am not surprised. Based on the blue chippers I have met in the last couple of years, I was stunned that some of these people were hired by big name firms. I assumed their mom or dad had useful connections at high levels which their child could use to score a win.

Net net: Clients will pay for billable hours even though the “efficiencies” of AI may not show up in the statement. I would wager $1.00 that the upside from the “efficiencies” will boost some partners’ bonuses, but that’s just a wild guess. Perhaps the money will flow to needy families?

Stephen E Arnold, December 1, 2025

Mother Nature Does Not Like AI

December 1, 2025

green-dino_thumbAnother dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.

Nature, the online service and still maybe a printed magazine, published a sour lemonade story. Its title is “Major AI Conference Flooded with Peer Reviews Written Fully by AI.” My reaction was, “Duh! Did you expect originality from AI professionals chasing big bucks?” In my experience, AI innovation appears in the marketing collateral, the cute price trickery for Google Gemini, and the slide decks presented to VCs who don’t want to miss out on the next big thing.


The Nature article states this shocker:

Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.

Once again: Duh!

How about this statement from the write up and its sources?

The conference organizers say they will now use automated tools to assess whether submissions and peer reviews breached policies on using AI in submissions and peer reviews. This is the first time that the conference has faced this issue at scale, says Bharath Hariharan, a computer scientist at Cornell University in Ithaca, New York, and senior program chair for ICLR 2026. “After we go through all this process … that will give us a better notion of trust.”

Yep, trust. That’s a quality I admire.

I want to point out that Nature, a publication interested in sticking to the facts, does a little soft shoe and some fancy dancing in the cited article. For example, there are causal claims about how conferences operate. I did not spot any data, but I am a dinobaby prone to overlook the nuances of modern scientific write ups. Also, the article seems to want a fix now. Yeah, well, that is unlikely. LLMs change, so smart software tuned to find AI-generated content is not exactly as reliable as a 2025 Toyota RAV4.

Also, I am not sure fixes implemented by human reviewers and abstract readers will do the job. When I had the joyful opportunity to review submissions for a big time technical journal, I did a pretty good job on the first one or two papers tossed at me. But, to be honest, by paper three I was not sure I had the foggiest idea what I was doing. I probably would have approved something written by a French bulldog taking mushrooms for inspiration.

If you are in the journal article writing game or giving talks at conferences, think about AI. Whether you use it or not, you may be accused of taking short cuts. That’s important because professional publishers and conference organizers never take short cuts. They take money.

Stephen E Arnold, December 1, 2025

An AI Outfit Promises Something Infinite: Er, Really? Infinite?

November 28, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.

Venture Beat publishes some write ups that strike me like marketing collateral. I am not sure what to make of an article that asserts an “infinite” service. Let’s not skip over the idea of infinity. I think a couple of mathy types drove themselves crazy thinking about infinities of infinities working two ways: bigger numbers and then smaller numbers. Yep, those close to these mathy types checked them into a facility for those sufficiently unusual. Would the article “Anthropic’s Claude Opus 4.5 Is Here: Cheaper AI, Infinite Chats, and Coding Skills That Beat Humans” (sic) have driven them over the edge?


Georg Cantor makes an appearance at the Happy Valley Rest Home. He tells the nurse delivering his scopolamine that he wants to talk about the central infinite-cardinality problem before he is placed in isolation. Thanks, Venice.ai. Close enough, although the setting does not look too much like a 19th century facility. Oh, well.

The first step on what I assure you will be a short journey: let’s let Anthropic’s Claude 4.5 define “infinite,” shall we? Here’s the response:

Infinity represents unboundedness—a concept of endlessness without limit. In mathematics, it describes quantities exceeding any finite value or processes continuing indefinitely. Infinity isn’t a number but an abstract notion of limitlessness, fundamental to calculus, set theory, and cosmology.

Notice that Claude 4.5 did not mention marketing and PR speak. This is important for “Anthropic’s Claude Opus 4.5 Is Here: Cheaper AI, Infinite Chats, and Coding Skills That Beat Humans.” Note the two-word phrase “infinite chats.” Nope, I don’t think so. Not even Amazon’s money pipe can serve “infinite chats.” Come on, Claude. Get real. Amazon cannot even keep its system online if my memory serves me. But infinite? Hmmm.

The write up says that Venture Beat engaged in an exclusive interview with Alex Albert, “Anthropic’s head of developer relations.” (I think this means a marketing job. What do you think?) Here’s a statement that caught my attention:

The new model, Claude Opus 4.5, scored higher on Anthropic’s most challenging internal engineering assessment than any human job candidate in the company’s history, according to materials reviewed by VentureBeat. The result underscores both the rapidly advancing capabilities of AI systems and growing questions about how the technology will reshape white-collar professions. The Amazon-backed company is pricing Claude Opus 4.5 at $5 per million input tokens and $25 per million output tokens — a dramatic reduction from the $15 and $75 rates for its predecessor, Claude Opus 4.1, released earlier this year. The move makes frontier AI capabilities accessible to a broader swath of developers and enterprises while putting pressure on competitors to match both performance and pricing.

Does this strike you as what I call a “better, faster, cheaper” approach? I would add that cheaper is the operative word. Yep, buy the market and raise prices later. Does this surprise me? No. From a company that promises to deliver something that costs money infinitely, cheaper does not surprise me one whit.
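The quoted price cut is easy to sanity-check with simple arithmetic. The per-million-token rates are the ones in the article; the workload size (one million input tokens plus one million output tokens) is my own hypothetical:

```python
# What the quoted Claude Opus price cut means for a hypothetical workload.
# Rates are dollars per million tokens, as quoted in the VentureBeat piece.

def cost(input_millions, output_millions, in_rate, out_rate):
    """Dollar cost for a workload of input/output tokens (in millions)."""
    return input_millions * in_rate + output_millions * out_rate

opus_4_1 = cost(1, 1, 15, 75)  # predecessor: $15 in / $75 out -> $90
opus_4_5 = cost(1, 1, 5, 25)   # new model:   $5 in / $25 out  -> $30

print(opus_4_1, opus_4_5, opus_4_1 / opus_4_5)  # 90 30 3.0
```

For this workload mix, the cut works out to three times cheaper, not the five-times figure the input rate alone suggests, because the output rate dominates when the mix is token-heavy on both sides.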

Here’s another statement I found interesting:

Albert said employee testers consistently reported that the model demonstrates improved judgment and intuition across diverse tasks — a shift he described as the model developing a sense of what matters in real-world contexts. “The model just kind of gets it,” Albert said. “It just has developed this sort of intuition and judgment on a lot of real world things that feels qualitatively like a big jump up from past models.”

To test this statement, I fired up Claude 4.5 again and asked, “Is it possible for a commercial enterprise supported by modest customer sales and venture capital to deliver an “infinite service”; for example, infinite chats for users?”

Here’s what the “just kind of gets it” Claude told me:

No, it’s not sustainably possible. Here’s why:

  1. Real Costs Scale with Usage: Every chat consumes computational resources (servers, energy, bandwidth). Even efficient models incur marginal costs per interaction that accumulate with heavy users.
  2. Finite Capital: Venture funding and modest sales provide limited runway. Without usage caps, a small subset of power users could drain resources disproportionately, making the business model unsustainable.
  3. Economic Reality: “Unlimited” offerings typically include hidden constraints (throttling during peak times, deprioritization, or eventual tier upgrades). True infinity would require infinite capital and infrastructure.
  4. Tragedy of the Commons: Without limits, rational users maximize personal use, collectively overwhelming the system—forcing either service degradation or imposed restrictions.

Practical Outcome: Most companies offer generous but capped services (e.g., “50 free messages/day”) or tiered pricing. Marketing terms like “unlimited” often mean “more than typical users need,” not literally infinite. Would you like examples of how AI companies balance accessibility with sustainability?

I think that Mr. Albert might want to check his assertions with his product that “infinite” is a tough promise. But let’s not stop with the infinite. What about this statement in the write up?

Opus 4.5 outscores all human candidates on company’s toughest engineering test.

For me, the sticking point is the word “all.” Categorical affirmatives are problematic. Black swans aside, “all” is a cousin of infinite. Its use implies a rather sweeping approach to those who take a tough engineering test. What’s the sample? One, 100, 10,000? Yeah. Oh, well.

What’s the proof? Here’s the chart that did not knock me over with shock. I know it is difficult to read. Navigate to the cited article on a boat anchor computer with a big screen, and you can sort of read the numbers. Look for this:


Claude 4.5 is better than also-rans like Google and OpenAI. Well, why not? Anthropic has whipped infinite and tamed all. Dealing with weak wristed systems like Googzilla and ChatGPT is trivial.

Mr. Albert offered a statement which Venture Beat uses to complete this remarkable feat of content marketing, hyperbole, and logical impossibilities:

When asked about the engineering exam results and what they signal about AI’s trajectory, Albert was direct: “I think it’s a really important signal to pay attention to.”

Yep, pay attention. I did.

Stephen E Arnold, November 28, 2025

Gizmodo Suggests Sam AI-Man Destroys the Mind of Youth

November 28, 2025

This essay is the work of a dumb dinobaby. No smart software required.

If I were an ad sales person at Gizmodo, I would not be happy. I am all for a wall between editorial and advertising. I bet you did not know that I learned that basic rule when I worked at Ziff in Manhattan. However, writing articles that accuse a potential advertiser of destroying the minds of youth is unlikely to be forgotten. I am not saying the write up is not accurate, but I know that it is possible to write articles and stories that do not make a potential advertiser go nutso.

Gizmodo published “OpenAI Introduces ‘ChatGPT for Teachers’ to Further Destroy the Minds of Our Youth” to explain a new LexisNexis-type play to get people used to an online product. OpenAI thinks the LexisNexis model, or a close variant, is a good way to get paying customers. Students in law school become familiar with LexisNexis. When and if they get a job, those students will use LexisNexis. The approach made sense when Don Wilson and his fellow travelers introduced the program. OpenAI is jumping on a marketing wagon pulled by a horse that knows how to get from A to B.


Have those laptops, tablets, and mobile phones made retail workers adept at making change? Thanks, Venice.ai. Good enough.

The Gizmodo article says:

ChatGPT for Teachers is designed to help educators prepare materials for their classes, and it will support Family Educational Rights and Privacy Act (FERPA) requirements so that teachers and school staff can securely work with student data within the workspace. The company says the suite of tools for teachers will be available for free through June 2027, which is probably the point at which OpenAI will need to show that it can actually generate revenue and stick its hand out to demand payment from teachers who have become reliant on the suite of tools.

Okay, no big innovation here.

Gizmodo states:

There is already mounting evidence that relying on AI can erode critical thinking skills, which is something you’d like kids to be engaging in, at least during school hours. Other studies have shown that people “offload” the more difficult cognitive work and rely on AI as a shortcut when it’s available, ultimately harming their ability to do that work when they don’t have the tool to lean on. So what could go wrong giving those tools to both students and teachers? Seems like we’re going to find out.

Okay, but that headline looms over the Ivory soap conclusion to the article. In my opinion, I know exactly how this free AI will work. Students will continue to look for the easiest way to complete assigned tasks. If ChatGPT is available, students will find out if it works. Then students will use AI for everything possible so the students have more time for digging into linear algebra. (That’s a joke.) A few students will understand that other students will not do the assignments but will pay someone to do that work for them. That other person will be [a] a paramour, [b] a classmate who is a friend, [c] a classmate who responds to threats, or [d] ChatGPT-type services.

Test scores will continue to fall until a group of teachers create easier tests. Furthermore, like the A/V systems installed in 1962 so students could learn a foreign language, the technology works only if the student concentrates, follows the lesson attentively, writes notes, and goes through the listen-and-repeat mechanisms in the language lab. PCs, tablets, Chromebooks, mobile phones, or AI work the same way. When students do not have the discipline to pay attention and put in the effort required to learn, the technology cannot compensate. It can, however, replace certain jobs so companies and co-workers do not have to compensate for those who lack basic skills, the discipline required to do the work, and the social skills needed to fit into an organization.

The myth that technology can replace traditional educational techniques is more nutso than the sales professionals who have to overcome ideas like “destroy the minds of youth.”

Net net: Sam AI-Man has some challenge ahead with this free ChatGPT. Want evidence of the impact of technology on the minds of legal professionals? Just check out some of the YouTubing lawyers. There you go.

Stephen E Arnold, November 28, 2025

Can the Chrome Drone Deorbit Comet?

November 28, 2025

Perplexity developed Comet, an intuitive AI-powered Internet browser. Analytics Insight has a rundown on Comet in the article “Perplexity CEO Aravind Srinivas Claims Comet AI Browser Could ‘Kill’ Android System.” Perplexity designed Comet for more complex tasks such as booking flights, shopping, and answering and then executing simple prompts. The new browser is now being released for Android.

Until recently, Comet was an exclusive, invite-only desktop browser. It is now available for download, and Perplexity is taking the same invite-first approach for the Android release. Perplexity hopes Comet will overtake Android as the top mobile OS, or so CEO Aravind Srinivas plans.

Another question is whether Comet could overtake Chrome as the favored AI browser:

“The launch of Comet AI browser coincides with the onset of a new conflict between AI browsers. Not long ago, OpenAI introduced ChatGPT Atlas, while Microsoft Edge and Google Chrome are upgrading their platforms with top-of-the-line AI tools. Additionally, Perplexity previously received attention for a $34.5 billion proposal to acquire Google Chrome, a bold move indicating its aspirations.

Comet, like many contemporary browsers, is built on the open-source Chromium framework provided by Google, which is also the backbone for Chrome, Edge, and other major browsers. With Comet’s mobile rollout and Srinivas’s bold claim, Perplexity is obviously betting entirely on an AI-first future, one that will see a convergence of the browser and the operating system.”

Comet is built on Chromium. Chrome is too. Comet is a decent web browser, but it doesn’t have the power of Alphabet behind it. Chrome will dominate the AI-browser race because it has money to launch a swarm of digital drones at this frail craft.

Whitney Grace, November 28, 2025

Coca-Cola and AI: Things May Not Be Going Better

November 27, 2025

Coca-Cola didn’t learn its lesson last year with a less than good AI-generated Christmas commercial. It repeated the mistake in 2025. Although the technology has improved, the ad still bears all the fakeness of early CGI (when examined in hindsight, of course). Coca-Cola, according to Creative Bloq, did want to redeem itself, so the soft drink company controlled every detail in the ad: “Devastating Graphic Shows Just How Bad The Coca-Cola Christmas Ad Really Is.”

Here’s how one expert viewed it:

“In a post on LinkedIn, the AI consultant Dino Burbidge points out the glaring lack of consistency and continuity in the design of the trucks in the new AI Holidays are Coming ad, which was produced by AI studio Secret Level. At least one of the AI-generated vehicles appears to completely defy physics, putting half of the truck’s payload beyond the last wheel.

Dino suggests that the problem with the ad is not AI per se, but the fact that no human appears to have checked what the AI models generated… or that more worryingly they checked but didn’t care, which is extraordinary when the truck is the main character in the ad.”

It has been suggested that Coca-Cola used AI to engage in rage bait instead of building a genuinely decent Christmas ad. There was a behind-the-scenes video of how the ad was made, and even that used an AI voiceover.

I liked the different horse drawn wagons. Very consistent.

Whitney Grace, November 27, 2025

Microsoft, Desperate to Be a Leader in the Agentic OS Push, Decides to Shove, Not Lure, Supporters

November 26, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I had a friend in high school who liked a girl, Mary B. He smiled at her. He complimented her plaid skirt. He gave her a birthday gift during lunch in the school cafeteria. My reaction to this display was, “Yo, Tommy, you are trying too hard.” I said nothing. I watched as Mary B. focused her attention on a football player with a C average but comic-book Superman looks. Tommy became known as a person who tried too hard to reach a goal without realizing no girl wanted to be the focal point of a birthday gift in the school cafeteria with hundreds of students watching. Fail, Tommy.


Thanks, Venice.ai. Good enough, the gold standard today I believe.

I thought about this try-too-hard approach when I read “Windows President Addresses Current State of Windows 11 after AI Backlash.” The source is the on-again, off-again podcasting outfit called Windows Central. Here’s a snippet from the write up which recycles content from X.com. The source of the statement is a person named Pavan Davuluri, who is the Microsoft Windows lead:

The team (and I) take in a ton of feedback. We balance what we see in our product feedback systems with what we hear directly. They don’t always match, but both are important. I’ve read through the comments and see focus on things like reliability, performance, ease of use and more… we care deeply about developers. We know we have work to do on the experience, both on the everyday usability, from inconsistent dialogs to power user experiences. When we meet as a team, we discuss these pain points and others in detail, because we want developers to choose Windows.

Windows Central pointed out that Lead Davuluri demonstrated “leadership” with a bold move. He disabled comments on his X.com post about caring deeply about its customers. I like it when Lead Davuluri takes decisive leadership actions that prevent people from providing inputs. Is that why Microsoft ignored focus groups responding to Wi-Fi hardware that did not work and “ribbon” icons instead of words in Office application interfaces? I think I have possibly identified a trend at Microsoft: The aircraft carrier is steaming forward, and it is too bad about the dolphins, fishing boats, and scuba divers. I mean, who cares about this unseen flotsam and jetsam?

Remarkably Windows Central’s write up includes another hint of negativism about Microsoft Windows:

What hasn’t helped in recent years is “Continuous Innovation,” Microsoft’s update delivery strategy that’s designed to keep the OS fresh with new features and changes on a consistent, monthly basis. On paper, it sounds like a good idea, but in practice, updating Windows monthly with new features often causes more headaches than joy for a lot of people. I think most users would prefer one big update at a predictable, certain time of the year, just like how Apple and Google do it.

Several observations if I may offer them as an aged dinobaby:

  1. Google has said it wants to become the agentic operating system. That means Google wants to kill off Microsoft, its applications, and its dreams.
  2. Microsoft knows that it faces competition from a person whom Satya Nadella knows, understands, and absolutely must defeat because his family would make fun of him if he failed. Yep, a man-to-man dust-up, with annoying users trying to stop the march of technological innovation and revenue. Lead Davuluri has his marching orders; hence, the pablum-tinged non-speak cited in the Windows Central write up.
  3. User needs and government regulation have zero — that’s right, none, nil, zip — chance of altering what these BAIT (big AI tech) outfits will do to win. Buckle up, Tommy. You are going to be rejected again.

Net net: That phrase agentic OS has a ring to it, doesn’t it?

Stephen E Arnold, November 26, 2025

Has Big Tech Taught the EU to Be Flexible?

November 26, 2025

green-dino_thumb_thumb[3]This essay is the work of a dumb dinobaby. No smart software required.

Here’s a question that arose in a lunch meeting today (November 19, 2025): Has Big Tech brought the European Union to heel? What’s your answer?

The “trust” outfit Thomson Reuters published “EU Eases AI, Privacy Rules As Critics Warn of Caving to Big Tech.”

image

European Union regulators demonstrate their willingness to be flexible. These exercises are performed in the privacy of a conference room in Brussels. The class is taught by those big tech leaders who have demonstrated their ability to chart a course and keep it. Thanks, Venice.ai. How about your interface? Yep, good enough I think.

The write up reported:

The EU Commission’s “Digital Omnibus”, which faces debate and votes from European countries, proposed to delay stricter rules on use of AI in “high-risk” areas until late 2027, ease rules around cookies and enable more use of data.

Ah, backpedaling seems to be the new Zen moment for the European Union.

The “trust” outfit explains why, sort of:

Europe is scrabbling to balance tough rules with not losing more ground in the global tech race, where companies in the United States and Asia are streaking ahead in artificial intelligence and chips.

Several factors are causing this rethink. I am not going to walk the well-worn path called “Privacy Lane.” The reason for the softening is not a warm summer day. The EU is concerned about:

  1. Losing traction in the slippery world of smart software.
  2. Failing to cultivate AI startups with more than a snowball’s chance of surviving in the Dante’s inferno of the competitive market.
  3. Keeping AI whiz kids from bailing out of European mathematics, computer science, and physics research centers for work in Sillycon Valley or delightful Z Valley (Zhongguancun, China, in case you did not know).

From my vantage point in rural Kentucky, it certainly appears that the European Union is fearful of missing out on either the boom or the bust associated with smart software.

Several observations are warranted:

  1. BAITers are likely to win. (BAIT means Big AI Tech in my lingo.) Why? Money and FOMO.
  2. Other governments are likely to adapt to the needs of the BAITers. Why? Money and FOMO.
  3. The BAIT outfits will be ruthless and interpret the EU’s new flexibility as weakness.

Net net: Worth watching. What do you think? Money? Fear? A combo?

Stephen E Arnold, November 26, 2025
