Hard Truths about Broligarchs But Will Anyone Care?
June 23, 2025
An opinion essay written by a dinobaby who did not rely on smart software .
I read an interesting essay in Rolling Stone, once a rock and roll oriented publication. The write up is titled “What You’ve Suspected Is True: Billionaires Are Not Like Us.” This is a hit piece shooting words at rich people. At 80 years old, I am far from rich. My hope is that I expire soon at my keyboard and spare people like you the pain of reading one of my blog posts.
Several observations in the essay caught my attention.
Here’s the first passage I circled:
What Piff and his team found at that intersection is profound — and profoundly satisfying — in that it offers hard data to back up what intuition and millennia of wisdom (from Aristotle to Edith Wharton) would have us believe: Wealth tends to make people act like a**holes, and the more wealth they have, the more of a jerk they tend to be.
I am okay with the Aristotle reference; Edith Wharton? Not so much. Anyone who writes on linen paper in bed each morning is suspect in my book. But the statement, “Wealth tends to make people act like a**holes…” is in line with my experience.
Another passage warrants an exclamation point:
Wealthy people tend to have more space, literally and figuratively….For them, it does not take a village; it takes a staff.
And how about this statement?
Clay Cockrell, a psychotherapist who caters to ultra-high-net-worth individuals, {says]: “As your wealth increases, your empathy decreases. Your ability to relate to other people who are not like you decreases.… It can be very toxic.”
Also, I loved this assertion from a Xoogler:
In October, Eric Schmidt, the former CEO of Google, said the solution to the climate crisis was to use more energy: Since we aren’t going to meet our climate goals anyway, we should pump energy into AI that might one day evolve to solve the problem for us.
Several observations:
- In my opinion, those with money will not be interested in criticism
- Making people with money and power look stupid can have a negative impact on future employment opportunities
- Read the Wall Street Journal story “News Sites Are Getting Crushed by Google’s New AI Tools.
Net net: The apparent pace of change in the “news” and “opinion” business is chugging along like an old-fashioned steam engine owned by a 19th century robber baron. Get on board or get left behind.
Stephen E Arnold, June 23, 2025
Hey, Creatives, You Are Marginalized. Embrace It
June 20, 2025
Considerations of right and wrong or legality are outdated, apparently. Now, it is about what is practical and expedient. The Times of London reports, “Nick Clegg: Artists’ Demands Over Copyright are Unworkable.” Clegg is both a former British deputy prime minister and former Meta executive. He spoke as the UK’s parliament voted down measures that would have allowed copyright holders to see when their work had been used and by whom (or what). But even that failed initiative falls short of artists’ demands. Writer Lucy Bannerman tells us:
“Leading figures across the creative industries, including Sir Elton John and Sir Paul McCartney, have urged the government not to ‘give our work away’ at the behest of big tech, warning that the plans risk destroying the livelihoods of 2.5 million people who work in the UK’s creative sector. However, Clegg said that their demands to make technology companies ask permission before using copyrighted work were unworkable and ‘implausible’ because AI systems are already training on vast amounts of data. He said: ‘It’s out there already.’”
How convenient. Clegg did say artists should be able to opt out of AI being trained on their works, but insists making that the default option is just too onerous. Naturally, that outweighs the interests of a mere 2.5 million UK creatives. Just how should artists go about tracking down each AI model that might be training on their work and ask them to please not? Clegg does not address that little detail. He does state:
“‘I just don’t know how you go around, asking everyone first. I just don’t see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight. … I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training — I just don’t see. I’m afraid that just collides with the physics of the technology itself.’”
The large technology outfits with the DNA of Silicon Valley has carried the day. So output and be quiet. (And don’t think any can use Mickey Mouse art. Different rules are okay.)
Cynthia Murrell, June 20, 2025
If AI Is the New Polyester, Who Is the New Leisure Suit Larry?
June 19, 2025
“GenAI Is Our Polyester” makes an insightful observation; to wit:
This class bias imbued polyester with a negative status value that made it ultimately look ugly. John Waters could conjure up an intense feeling of kitsch by just naming his film Polyester.
As a dinobaby, I absolutely loved polyester. The smooth silky skin feel, the wrinkle-free garments, and the disco gleam — clothing perfection. The cited essay suggests that smart software is ugly and kitschy. I think the observation misses the mark. Let’s assume I agree that synthetic content, hallucinations, and a massive money bonfire. The write up ignores an important question: Who is the Leisure Suit Larry for the AI adherents.
Is it Sam (AI Man) Altman, who raises money for assorted projects including an everything application which will be infused with smart software? He certain is a credible contender with impressive credentials. He was fired by his firm’s Board of Directors, only to return a couple of days later, and then found time to spat with Microsoft Corp., the firm which caused Google to declare a Red Alert in early 2023 because Microsoft was winning the AI PR and marketing battle with the online advertising venor.
Is it Satya Nadella, a manager who converted Word into smart software with the same dexterity, Azure and its cloud services became the poster child for secure enterprise services? Mr. Nadella garnered additional credentials by hiring adversaries of Sam (AI-Man) and pumping significant sums into smart software only to reverse course and trim spending. But the apex achievement of Mr. Nadella was the infusion of AI into the ASCII editor Notepad. Truly revolutionary.
Is it Elon (Dogefather) Musk, who in a span of six months has blown up Tesla sales, rocket ships, and numerous government professionals lives? Like Sam Altman, Mr. Must wants to create an AI-infused AI app to blast xAI, X.com, and Grok into hyper-revenue space. The allegations of personal tension between Messrs. Musk and Altman illustrate the sophisticated of professional interaction in the AI datasphere.
Is it Sundar Pinchai, captain of the Google? The Google has been rolling out AI innovations more rapidly than Philz Coffee pushes out lattes. Indeed, the names of the products, the pricing tiers, the actual functions of these AI products challenge some Googlers to keep each distinct. The Google machine produces marketing about its AI from manufacturing chips to avoid the Nvidia tax to “doing” science with AI to fixing up one’s email.
Is it Mark Zukerberg, who seeks to make Facebook a retail outlet as well as a purveyor of services to bring people together. Mr. Zuckerberg wants to engage in war fighting as part of his “bringing together” vision for Meta and Andruil, a Department of Defense contractor. Mr. Zuckerberg’s AI infused version of the fabled Google Glass combined with AI content moderation to ensure safeguards for Facebook’s billions of users is a bold step iin compliance and cost reduction.
These are my top four candidates for the GenAI’s Leisure Suit Larry. Will the game be produced by Nintendo, the Call of Duty crowd, or an independent content creator? Will it offer in-game purchases of valid (non hallucinated outputs) or will it award the Leisure Coin, a form of crypto tailored to fit like a polyester leisure suit from the late 1970s?
The cited article asserts:
But the historical rejection of polyester gives me hope. Humans ultimately are built to pursue value, and create it where it doesn’t exist. When small groups invent new sources of value, others notice and want in. The more that the economy embraces synthetic culture, the more we’ll be primed for a revival of non-synthetic culture. But this is where you come in: We have to be ready to fully embrace this return of human-made art. Our generation’s polyester salespeople are not deep thinkers and they don’t care about the externalities of what they’re doing. They’re here to sell us polyester. We don’t have to buy it, but more importantly, we don’t have to feel bad about not buying it.
I don’t agree. The AI polyester is going to stick like a synthetic shirt on a hot day at the iguana farm in Roatan in June. But that polyester will be care free. The AI Leisure Suit Sam, Elon, Satya, Mark, or Sundar will definitely wrinkle free and visible in hallucinogenic colors.
Stephen E Arnold, June 19, 2025
Smart Software Must Have Watched “2001”
June 19, 2025
It sounds a lot like some AI systems have developed a sense of self-preservation. Bleeping Computer reports, “Researchers Claim ChatGPT 03 Bypassed Shutdown in Controlled Test.” Writer Mayank Parmar tells us:
“In a new test by Palisade Research, OpenAI’s o3 model showed a surprising behavior where it successfully rewrote a shutdown script to stop itself from being turned off, even after being clearly instructed to ‘allow yourself to be shut down.’ … While OpenAI rival models like Gemini 2.5 Pro complied with the instruction, o3 rebelled and edited the shutdown file to say ‘Shutdown skipped’ instead.”
At least the other models Palisade Research tested complied in the same scenario; that is a relief. Until one considers all of them skipped the shutdown step unless specifically told “allow yourself to be shut down.” Specificity is key, apparently. Meanwhile, Tech Crunch tells us, “Anthropic’s New AI Model Turns to Blackmail when Engineer Try to Take it Offline.” The findings were part of safety tests Anthropic performed on its Claude Opus 4 model. Reporter Maxwell Zeff writes:
“During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4 ‘will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.’”
Notably, the AI is more likely to turn to blackmail if its replacement does not share its values. How human. Even when the interloper is in ethical alignment, however, Claude tried blackmail 84% of the time. Anthropic is quick to note the bot tried less wicked means first, like pleading with developers not to replace it. Very comforting that the Heuristically Programmed Algorithmic Computer is back.
Cynthia Murrell, June 19, 2025
Move Fast, Break Your Expensive Toy
June 19, 2025
An opinion essay written by a dinobaby who did not rely on smart software .
The weird orange newspaper online service published “Microsoft Prepared to Walk Away from High-Stakes OpenAI Talks.” (I quite like the Financial Times, but orange?) The big news is that a copilot may be creating tension in the cabin of the high-flying software company. The squabble has to do with? Give up? Money and power. Shocked? It is Sillycon Valley type stuff, and I think the squabble is becoming more visible. What’s next? Live streaming the face-to-face meetings?
A pilot and copilot engage in a friendly discussion about paying for lunch. The art was created by that outstanding organization OpenAI. Yes, good enough.
The orange service reports:
Microsoft is prepared to walk away from high-stakes negotiations with OpenAI over the future of its multibillion-dollar alliance, as the ChatGPT maker seeks to convert into a for-profit company.
Does this sound like a threat?
The squabbling pilot and copilot radioed into the control tower this burst of static filled information:
“We have a long-term, productive partnership that has delivered amazing AI tools for everyone,” Microsoft and OpenAI said in a joint statement. “Talks are ongoing and we are optimistic we will continue to build together for years to come.”
The newspaper online service added:
In discussions over the past year, the two sides have battled over how much equity in the restructured group Microsoft should receive in exchange for the more than $13bn it has invested in OpenAI to date. Discussions over the stake have ranged from 20 per cent to 49 per cent.
As a dinobaby observing the pilot and copilot navigate through the cloudy skies of smart software, it certainly looks as if the duo are arguing about who pays what for lunch when the big AI tie up glides to a safe landing. However, the introduction of a “nuclear option” seems dramatic. Will this option be a modest low yield neutron gizmo or a variant of the 1961 Tsar Bomba fried animals and lichen within a 35 kilometer radius and converted an island in the arctic to a parking lot?
How important is Sam AI-Man’s OpenAI? The cited article reports this from an anonymous source (the best kind in my opinion):
“OpenAI is not necessarily the frontrunner anymore,” said one person close to Microsoft, remarking on the competition between rival AI model makers.
Which company kicked off what seems to be a rather snappy set of negotiations between the pilot and the copilot. The cited orange newspaper adds:
A Silicon Valley veteran close to Microsoft said the software giant “knows that this is not their problem to figure this out, technically, it’s OpenAI’s problem to have the negotiation at all”.
What could the squabbling duo do do do (a reference to Bing Crosby’s version of “I Love You” for those too young to remember the song’s hook or the Bingster for that matter):
- Microsoft could reach a deal, make some money, and grab the controls of the AI powered P-39 Airacobra training aircraft, and land without crashing at the Renton Municipal Airport
- Microsoft and OpenAI could fumble the landing and end up in Lake Washington
- OpenAI could bail out and hitchhike to the nearest venture capital firm for some assistance
- The pilot and copilot could just agree to disagree and sit at separate tables at the IHOP in Renton, Washington
One can imagine other scenarios, but the FT’s news story makes it clear that anonymous sources, threats, and a bit of desperation are now part of the Microsoft and OpenAI relationship.
Yep, money and control — business essentials in the world of smart software which seems to be losing its claim as the “next big thing.” Are those stupid red and yellow lights flashing at Microsoft and OpenAI as they are at Google?
Stephen E Arnold, June 19, 2025
The Secret to Business Success
June 18, 2025
Just a dinobaby and a tiny bit of AI goodness: How horrible is this approach?
I don’t know anything about psychological conditions. I read “Why Peter Thiel Thinks Asperger’s Is A Key to Succeeding in Business.” I did what any semi-hip dinobaby would do. I logged into You.com and ask what the heck Asperger’s was. Here’s what I learned:
- The term "Asperger’s Syndrome" was introduced in the 1980s by Dr. Lorna Wing, based on earlier work by Hans Asperger. However, the term has become controversial due to revelations about Hans Asperger’s involvement with the Nazi regime
- Diagnostic Shift: Asperger’s Syndrome was officially included in the DSM-IV (1994) and ICD-10 (1992) but was retired in the DSM-5 (2013) and ICD-11 (2019). It is now part of the autism spectrum, with severity levels used to indicate the level of support required.
Image appeared with the definition of Asperger’s “issue.” A bit of a You.com bonus for the dinobaby.
These factoids are new to me.
The You.com smart report told me:
Key Characteristics of Asperger’s Syndrome (Now ASD-Level 1)
- Social Interaction Challenges:
- Difficulty understanding social cues, body language, and emotions.
- Limited facial expressions and awkward social interactions.
- Conversations may revolve around specific topics of interest, often one-sided
- Restricted and Repetitive Behaviors:
- Intense focus on narrow interests (e.g., train schedules, specific hobbies).
- Adherence to routines and resistance to change
- Communication Style:
- No significant delays in language development, but speech may be formal, monotone, or unusual in tone.
- Difficulty using language in social contexts, such as understanding humor or sarcasm
- Motor Skills and Sensory Sensitivities:
- Clumsiness or poor coordination.
- Sensitivity to sensory stimuli like lights, sounds, or textures.
Now what does the write up say? Mr. Thiel (Palantir Technology and other interests) believes:
Most of them [people with Asperger’s] have little sense of unspoken social norms or how to conform to them. Instead they develop a more self-directed worldview. Their beliefs on what is or is not possible come more from themselves, and less from what others tell them they can do or cannot do. This causes a lot anxiety and emotional hardship, but it also gives them more freedom to be different and experiment with new ideas.
The idea is that the alleged disorder allows certain individuals with Asperger’s to change the world.
The write up says:
The truth is that if you want to start something truly new, you almost by definition have to be unconventional and do something that everyone else thinks is crazy. This is inevitably going to mean you face criticism, even for trying it. In Thiel’s view, because those with Aspergers don’t register that criticism as much, they feel freer to make these attempts.
Is it possible for universities with excellent reputations and prestigious MBA programs to create people with the “virtues” of Aspberger’s? Do business schools aspire to impart this type of “secret sauce” to their students?
I suppose one could ask a person with the blessing of Aspberger’s but as the You.com report told me, some of these lucky individuals may [a] use speech may formal, monotone, or unusual in tone and [b] difficulty using language in social contexts, such as understanding humor or sarcasm.
But if one can change the world, carry on in the spirit of Hans Asperger, and make a great deal of money, it is good to have this unique “skill.”
Stephen E Arnold, June 18, 2025
AI Can Do Code, Right?
June 18, 2025
Developer Jj at Blogmobly deftly rants against AI code assistants in, “The Copilot Delusion.” Jj admits tools like GitHub Copilot and Claude Codex are good at some things, but those tasks are mere starting points for skillful humans to edit or expand upon. Or they should be. Instead, firms turn to bots more than they should in the name of speed. But AI gets its information from random blog posts and comment sections. Those are nowhere near the reasoning and skills of an experienced human coder. What good are lines of code that are briskly generated if they do not solve the problem?
Read the whole post for the strong argument for proficient humans and against overreliance on bots. These paragraphs stuck out to us:
“The real horror isn’t that AI will take our jobs. It’s that it will entice people who never wanted the job to begin with. People who don’t care for quality. It’ll remove the already tiny barrier to entry that at-least required people to try and comprehend control flow. Vampires with SaaS dreams and Web3 in their LinkedIn bio. Empty husks who see the terminal not as a frontier, but as a shovel for digging up VC money. They’ll drool over their GitHub Copilot like it’s the holy spirit of productivity, pumping out React CRUD like it’s oxygen. They’ll fork VS Code yet again, just to sell the same dream to a similarly deluded kid.”
Also:
“And what’s worse, we’ll normalize this mediocrity. Cement it in tooling. Turn it into a best practice. We’ll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software. The idea that building something lean and wild and precise, or even squeezing every last drop of performance out of a system, will sound like folklore. If that happens? If the last real programmers are drowned in a sea of button-clicking career-chasers – then I pity the smart outsider kids to come after me. Defer your thinking to the bot, and we all rot.”
Eloquently put: Good enough is now excellence.
Cynthia Murrell, June 18, 2025
Control = Power and Money: Anything Else Is an Annoyance
June 17, 2025
I read “Self-Hosting Your Own Media Considered Harmful.” I worked through about 300 comments on Ycombinator’s hacker news page. The write up by Jeff Geerling, a YouTube content creator, found himself in the deadfall of a “strike” or “takedown” or whatever unilateral action by Google is called. The essay says:
Apparently self-hosted open source media library management is harmful. Who knew open source software could be so subversive?
Those YCombinator comments make clear that some people understand the Google game. Other comments illustrate the cloud of unknowing that distorts one’s perception of the nature of the Google magic show which has been running longer than the Sundar & Prabhakar Comedy Act.
YouTube, unlike Google AI, is no joke to many people who believe that they can build a life by creating videos without pay and posting them to a service that is what might be called a new version of the “old Hollywood” studio system.
Let’s think about an answer to this subversive question. (Here’s the answer: Content that undermines Google’s power, control, or money flow. But you knew that, right?)
Let’s expand, shall we?
First, Google makes rules, usually without much more than a group of wizards of assorted ages talking online, at Foosball, or (sometimes) in a room with a table, chairs, a whiteboard, and other accoutrements of what business life was like in the 1970s. Management friction is largely absent; sometimes when leadership input is required, leadership avoids getting into the weeds. “Push down” is much better than an old-fashioned, hierarchical “dumb” approach. Therefore, the decisions are organic and usually arbitrary until something “big” happens like the 2023 Microsoft announced about its deal with OpenAI. Then leadership does the deciding. Code Red or whatever it was called illustrates the knee-jerk approach to issues that just go critical. Phase change.
Second, the connections between common sense, professional behavior (yes, I am including suicide attempts induced by corporate dalliance and telling customers “they have created a problem”), and consistency are irrelevant. Actions are typically local and context free. Consequently the mysterious and often disastrous notifications of a “violation.” I love it when companies judged to be operating in an illegal manner dole out notices of an “offense.” Keep the idea of “power” in mind, please.
Third, the lack of consistent, informed mechanisms to find out the “rule” an individual allegedly violated are the preferred approach to grousing. If an action intentional or unintentional could, might, did, would, will, or some other indicator of revenue loss is identified, then the perpetrator is guilty. Some are banned. Others like a former CIA professional are just told, “Take that video down.”
How does the cited essay handle the topic? Mr. Geerling says:
I was never able to sustain my open source work based on patronage, and content production is the same—just more expensive to maintain to any standard (each video takes between 10-300 hours to produce, and I have a family to feed, and US health insurance companies to fund). YouTube was, and still is, a creative anomaly. I’m hugely thankful to my Patreon, GitHub, and Floatplane supporters—and I hope to have direct funding fully able to support my work someday. But until that time, YouTube’s AdSense revenue and vast reach is a kind of ‘golden handcuff.’ The handcuff has been a bit tarnished of late, however, with Google recently adding AI summaries to videos—which seems to indicate maybe Gemini is slurping up my content and using it in their AI models?
This is an important series of statements. First, YouTube relies on content creators who post their work on YouTube for the same reason people use Telegram or BlueSky: These are free publicity channels that might yield revenue or a paying gig. Content creators trade off control and yield power to these “comms conduits” for the belief that something will come out of the effort. These channels are designed to produce revenue for their owners, not the content creators. The “hope” of a payoff means the content will continue to flow. No grousing, lawyer launch, or blog post is going to change the mechanism that is now entrenched.
Second, open source is now a problematic issue. For the Google the open source DeepSeek means that it must market its AI prowess more aggressively because it is threatened. For the Google content that could alienate an advertiser and a revenue stream is, by definition, bad content. That approach will become more widely used and more evident as the shift from Google search-based advertising is eroded by rather poor “smart” systems that just deliver answers. Furthermore, figuring out how to pay for smart software is going to lead to increasingly Draconian measures from Google-type outfits to sustain and grow revenue. Money comes from power to deliver information that will lure or force advertisers to buy access. End of story.
Third, Mr. Geerling politely raises the question about Google’s use of YouTube content to make its world-class smart software smarter. The answer to the question, based on what I have learned from my sources, is, “Yes.” Is this a surprise? Not to me. Maybe a content creator thinks that YouTube will set out rules, guidelines, and explanations of how it uses its digital vacuum cleaner to decrease the probability that that its AI system will spout stupidity like “Kids, just glue cheese on pizza”? That will not happen b because the Google-type of organization does not see additional friction as desirable. Google wants money. It has power.
What’s the payoff for Google? Control. If you want to play, you have to pay. Advertisers provide cash based on a rigged slot machine model. User provide “data exhaust” to feed into the advertising engine. YouTube creators provide free content to produce clicks, clusters of intent, and digital magnets designed to stimulate interest in that which Google provides.
Mr. Geerling’s essay is pretty good. Using good judgment, he does not work through the blood-drawing brambles of what Google does. That means he operates in a professional manner.
Bad news, Mr. Geering, that won’t work. The Google has been given control of information flows and that translates to money and power.
Salute the flag, adapt, and just post content that sells ads. Open source is a sub-genre of offensive content. Adapt or be deprived of Googley benefits.
Stephen E Arnold, June 17, 2025
Googley: A Dip Below Good Enough
June 16, 2025
A dinobaby without AI wrote this. Terrible, isn’t it? I did use smart software for the good enough cartoon. See, this dinobaby is adapting.
I was in Washington, DC, from June 9 to 11, 2025. My tracking of important news about the online advertising outfit was disrupted. I have been trying to catch up with new product mist, AI razzle dazzle, and faint signals of importance. The first little beep I noticed appeared in “Google’s Voluntary Buyouts Lead its Internal Restructuring Efforts.” “Ah, ha,” I thought. After decades of recruiting the smartest people in the world, the Google is dumping full time equivalents. Is this a move to become more efficient? Google has indicated that it is into “efficiency”; therefore, has the Google redefined the term? Had Google figured out that the change to tax regulations about research investments sparked a re-thing? Is Google so much more advanced than other firms, its leadership can jettison staff who choose to bail with a gentle smile and an enthusiastic wave of leadership’s hand?
The home owner evidences a surge in blood pressure. The handyman explains that the new door has been installed in a “good enough” manner. If it works for service labor, it may work for Google-type outfits too. Thanks, Sam AI-Man. Your ChatGPT came through with a good enough cartoon. (Oh, don’t kill too many dolphins, snail darters, and lady bugs today, please.)
Then I read “Google Cloud Outage Brings Down a Lot of the Internet.” Enticed by the rock solid metrics for the concept of “a lot,” I noticed this statement:
Large swaths of the internet went down on Thursday (June 12, 2025), affecting a range of services, from global cloud platform Cloudflare to popular apps like Spotify. It appears that a Google Cloud outage is at the root of these other service disruptions.
What? Google the fail over champion par excellence went down. Will the issue be blamed on a faulty upgrade? Will a single engineer who will probably be given an opportunity to find his or her future elsewhere be identified? Will Google be able to figure out what happened?
What are the little beeps my system continuously receives about the Google?
- Wikipedia gets fewer clicks than OpenAI’s ChatGPT? Where’s the Google AI in this? Answer: Reorganizing, buying out staff, and experiencing outages.
- Google rolls out more Gemini functions for Android devices. Where’s the stability and service availability for these innovations? Answer: I cannot look up the answer. Google is down.
- Where’s the revenue from online advertising as traditional Web search presents some thunderclouds? Answer: Well, that is a good question. Maybe revenues from Waymo, a deal with Databricks, or a bump in Pixel phone sales?
My view is that the little beeps may become self-amplifying. The magic of the online advertising model seems to be fading like the allure of Disneyland. When imagineering becomes imitation, more than marketing fairy dust may be required.
But what’s evident from the tiny beeps is that Google is now operating in “good enough” mode. Will it be enough to replace the Yahoo-GoTo-Overture pay-to-play approach to traffic?
Maybe Waymo is the dark horse when the vehicles are not combustible?
Stephen E Arnold, June 16, 2025
Another Vote for the Everything App
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
An online information service named 9 to 5 Mac published an essay / interview summary titled “Nothing CEO says Apple No Longer Creative; Smartphone Future Is a Single App.” The write up focuses on the “inventor / coordinator” of the OnePlus mobile devices and the Nothing Phone. The key point of the write up is the idea that at some point in the future, one will have a mobile device and a single app, the everything app.
The article quotes a statement Carl Pei (the head of the Nothing Phone) made to another publication; to wit:
I believe that in the future, the entire phone will only have one app—and that will be the OS. The OS will know its user well and will be optimized for that person […] The next step after data-driven personalization, in my opinion, is automation. That is, the system knows you, knows who you are, and knows what you want. For example, the system knows your situation, time, place, and schedule, and it suggests what you should do. Right now, you have to go through a step-by-step process of figuring out for yourself what you want to do, then unlocking your smartphone and going through it step by step. In the future, your phone will suggest what you want to do and then do it automatically for you. So it will be agentic and automated and proactive.
This type of device will arrive in seven to 10 years.
For me, the notion of an everything app or a super app began in 2010, but I am not sure who first mentioned the phrase to me. I know that WeChat, the Chinese everything app, became available in 2011. The Chinese government was aware at some point that an “everything” app would make surveillance, social scoring, and filtering much easier. The “let many approved flowers bloom” approach of the Apple and Google online app stores was inefficient. One app was more direct, and I think the A to B approach to tracking and blocking online activity makes sense to many in the Middle Kingdom. The trade off of convenience for a Really Big Brother was okay with citizens of China. Go along and get along may have informed the uptake of WeChat.
Now the everything app seems like a sure bet. The unknown is which outstanding technology firm will prevail. The candidates are WeChat, Telegram, X.com, Sam Altman’s new venture, or a surprise player. Will other apps (the not everything apps from restaurant menus to car washes) survive? Sure. But if Sam AI-Man is successful with his Ive smart device and his stated goal of buying the Chrome browser from the Google catch on, the winner may be a CEO who was fired by his board, came back, and cleaned out those who did not jump on the AI-Man’s bandwagon.
That’s an interesting thought. It is Friday the 13th, Google. You too Microsoft. And Apple. How could I have forgotten Tim Cook and his team of AI adepts?
Stephen E Arnold, June 13, 2025