Humans May Be Important. Who Knew?
July 9, 2025
Here is an AI reality check. Futurism reports, “Companies that Replaced Humans with AI Are Realizing their Mistake.” You don’t say. Writer Joe Wilkins tells us:
“As of April, even the best AI agent could only finish 24 percent of the jobs assigned to it. Still, that didn’t stop business executives from swarming to the software like flies to roadside carrion, gutting entire departments worth of human workers to make way for their AI replacements. But as AI agents have yet to even pay for themselves — spilling their employer’s embarrassing secrets all the while — more and more executives are waking up to the sloppy reality of AI hype. A recent survey by the business analysis and consulting firm Gartner, for instance, found that out of 163 business executives, a full half said their plans to ‘significantly reduce their customer service workforce’ would be abandoned by 2027. This is forcing corporate PR spinsters to rewrite speeches about AI ‘transcending automation,’ instead leaning on phrases like ‘hybrid approach’ and ‘transitional challenges’ to describe the fact that they still need humans to run a workplace.”
Few workers would be surprised to learn AI is a disappointment. The write-up points to a report from GoTo and Workplace Intelligence that found 62 percent of employees say AI is significantly overhyped. Meanwhile, 45 percent of IT managers surveyed paint AI rollouts as scattered and hasty. Security concerns and integration challenges were the main barriers, 56 percent of them reported.
Anyone who has watched firm after firm make a U-turn on AI-related layoffs will not be surprised by these findings. For example, after cutting staff by 22% last year, finance startup Klarna announced a recruitment drive in May. Wilkins quotes tech critic Ed Zitron, who wrote in September:
“These ‘agents’ are branded to sound like intelligent lifeforms that can make intelligent decisions, but are really just trumped-up automations that require enterprise customers to invest time programming them.”
Companies wanted a silver bullet. Now they appear to be firing blanks.
Cynthia Murrell, July 9, 2025
Can AI Do What Jesus Enrique Rosas Does?
July 8, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
I learned about a YouTube video via a buried link in a story in my newsfeed. The video is titled “Analysis of Jeffrey Epstein’s Cell Block Video Released by the FBI.” I know little about Mr. Rosas. He is a body language “expert.” I know zero about this field. He gives away a book about body language, and I assume that he gets inquiries and sells services. He appears to have developed what he calls a Knesix Code. He does not disclose his academic background.
But …
His video analysis of the Epstein surveillance camera data makes clear that Sr. Rosas has an eye for detail. Let me cite two examples:
First, he notes that in some of the footage released by the FBI, a partial image of a video editing program’s interface appears. Not only does it appear, but the image appears in several separate sectors of the FBI-released video. Mr. Rosas raises the possibility that the FBI footage (described as unaltered) was modified.
Here is an example of that video editing “tell” or partial image:
Second, Sr. Rosas spots a time gap in the FBI video. Here’s where the “glitch” appears:
How much is missing from the unedited video file? More than a minute.
Observations:
- I fed the interface image into a couple of smart software systems. None was able to identify the specific program’s interface from the partial image.
- Mr. Rosas’ analysis identified two interesting anomalies in the video.
- The allegedly unedited video appears to have been edited.
Net net: AI is not able to do what Sr. Rosas did. I do not want to speculate how “no videos” became this one video. I do not want to speculate why an unedited video contains two editing indications. I don’t want to think much about Jeffrey Epstein, the kiddie trafficking, and the individuals associating with him. I will stick with my observation, “AI does not seem to have the ability to do what Sr. Rosas did.”
Stephen E Arnold, July 8, 2025
Curation and Editorial Policies: Useful and Net Positives
July 8, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
The write up “I Deleted My Second Brain” is by Joan Westenberg. I had to look her up. She is a writer and entrepreneur who sells subscriptions to certain essays. Okay, that’s the who. Now what does the essay address?
It takes a moment to convert “Zettelkasten slip” into a physical notecard, but I remembered learning this from the fellows who were pitching a note card retrieval system via a company called Remac. (No, I have no idea what happened to that firm. But I understand note cards. My high school debate coach in 1958 explained the concept to me.)
Ms. Westenberg says:
For years, I had been building what technologists and lifehackers call a “second brain.” The premise: capture everything, forget nothing. Store your thinking in a networked archive so vast and recursive it can answer questions before you know to ask them. It promises clarity. Control. Mental leverage. But over time, my second brain became a mausoleum. A dusty collection of old selves, old interests, old compulsions, piled on top of each other like geological strata. Instead of accelerating my thinking, it began to replace it. Instead of aiding memory, it froze my curiosity into static categories. And so… Well, I killed the whole thing.
I assume Ms. Westenberg is not engaged in a legal matter. The intentional deletion could lead to some interesting questions. On the other hand, for a person who does public relations and product positioning, deletion may not present a problem.
I liked the reference to Jorge Luis Borges (1899 to 1986), a writer with some interesting views about the nature of reality. As Ms. Westenberg notes:
But Borges understood the cost of total systems. In “The Library of Babel,” he imagines an infinite library containing every possible book. Among its volumes are both perfect truth and perfect gibberish. The inhabitants of the library, cursed to wander it forever, descend into despair, madness, and nihilism. The map swallows the territory. PKM systems promise coherence, but they often deliver a kind of abstracted confusion. The more I wrote into my vault, the less I felt. A quote would spark an insight, I’d clip it, tag it, link it – and move on. But the insight was never lived. It was stored. Like food vacuum-sealed and never eaten, while any nutritional value slips away. Worse, the architecture began to shape my attention. I started reading to extract. Listening to summarize. Thinking in formats I could file. Every experience became fodder. I stopped wondering and started processing.
I think the idea of too much information causing mental torpor is interesting for two reasons: [a] digital information has a mass of sorts and [b] information can disrupt the functioning of an information processing organ; that is, the human brain.
The fix? Just delete everything. Ms. Westenberg calls this “destruction by design.” She is now (presumably) lighter and more interested in taking notes. I think this is the modern equivalent of throwing out junk from the garage. My mother would say, after piling my ball bat, scuffed shoes, and fossils into the garbage can, “There. That feels better.” I would think because I did not want to suffer the wrath of mom, “No, mother, you are destroying objects which are meaningful to me. You are trashing a chunk of my self with each spring cleaning.” Destruction by design may harm other people. In the case of a legal matter, destruction by design can cost the person hitting delete big time.
What’s interesting is that the essay reveals something about Ms. Westenberg; for example, [a] A person who can destroy information can destroy other intangible “stuff” as well. How does that work in an organization? [b] The sudden realization that one has created a problem leads to a knee jerk reaction. What does that say about measured judgment? [c] The psychological boost from hitting the delete key clears the path to begin the collecting again. Is hoarding an addiction? What’s the recidivism rate for an addict who does the rehabilitation journey?
My major takeaway may surprise you. Here it is: Ms. Westenberg learned by trial and error over many years that curation is a key activity in knowledge work. Google began taking steps to winnow non-compliant Web sites from its advertising program. The decision reduces lousy content and advances Google’s agenda to control its digital Gutenberg machines. Like Ms. Westenberg, Google is realizing that indexing and saving “everything” is a bubbling volcano of problems.
Librarians know about curation. Entities like Ms. Westenberg and Google are just now realizing why informed editorial policies are helpful. I suppose it is good news that Ms. Westenberg and Google have come to the same conclusion. Too bad it took years to accept something one could learn at any library in five minutes.
Stephen E Arnold, July 8, 2025
New Business Tactics from Google and Meta: Fear-Fueled Management
July 8, 2025
No smart software. Just a dinobaby and an old laptop.
I like to document new approaches to business rules or business truisms. Examples range from truisms like “targeting is effective” to “two objectives is no objectives.” Today, July 1, 2025, I spotted anecdotal evidence of two new “rules.” Both seem custom tailored to the GenX, GenY, GenZ, and GenAI approach to leadership. Let’s look at each briefly and then consider how effective these are likely to be.
The first example of new management thinking appears in “Google Embraces AI in the Classroom with New Gemini Tools for Educators, Chatbots for Students, and More.” The write up explains that Google has:
introduced more than 30 AI tools for educators, a version of the Gemini app built for education, expanded access to its collaborative video creation app Google Vids, and other tools for managed Chromebooks.
Forget the one objective idea when it comes to products. Just roll out more than two dozen AI services. That will definitely catch the attention of grade, middle school, high school, junior college, and university teachers in the US and elsewhere. I am not a teacher, but I know that when I attend neighborhood get togethers, the teachers at these functions often ask me about smart software. From these interactions, very few understand that smart software comes in different “flavors.” AI is still a mostly unexplored innovation. But Google is chock full of smart people who certainly know how teachers can rush to two dozen new products and services in a jiffy.
The second rule is that organizations are hierarchical. Assuming this is the approach, one person should lead an organization and then one person should lead a unit and one person should lead a department and so on. This is the old Great Chain of Being slapped on an enterprise. My father worked in this type of company, and he liked it. He explained how work flowed from one box on the organization chart to another. With everything working the way my father liked things to work, bulldozers and mortars appeared on the loading docks. Since I grew up with this approach, it made sense to me. I must admit that I still find this type of set up appealing, and I am usually less than thrilled to work in a matrix management, let’s-just-roll-with-it set up.
In “Nikita Bier, The Founder Of Gas And TBH, Who Once Asked Elon Musk To Hire Him As VP Of Product At Twitter, Has Joined X: ‘Never Give Up‘” I learned that Meta is going with the two bosses approach to smart software. The write up reports as real news as opposed to news release news:
On Monday, Bier announced on X that he’s officially taking the reins as head of product. "Ladies and gentlemen, I’ve officially posted my way to the top: I’m joining @X as Head of Product," Bier wrote.
Earlier in June 2025, Mark Zuckerberg pumped money into Scale AI (a data labeling outfit) and hired Alexandr Wang to be the top dog of Meta’s catch-up-in-AI initiative. It appears that Meta is going to give the two-bosses-are-better-than-one approach its stamp of management genius approval. OpenAI appeared to emulate this approach, and it seemed to have spawned a number of competitors and created an environment in which huge sums of money could attract AI wizards to Mr. Zuckerberg’s social castle.
The first new management precept is that an organization can generate revenue by shotgunning more than two dozen new products and services to what Google sees as the education market. The outmoded management approach would focus on one product and service, provide that to a segment of the education market with some money to spend and a problem to solve. Then figure out how to make that product more useful and grow paying customers in that segment. That’s obviously stupid and not GenAI. The modern approach is to blast that bird shot somewhere in the direction of a big fuzzy market and go pick up the dead ducks for dinner.
The second new management precept is to have an important unit, a sense of desperation born from failure, and put two people in charge. I think this can work, but in most of the successful outfits to which I have been exposed, there is one person at the top. He or she may be floating above the fray, but the idea is that someone, in theory, is in charge.
Several observations are warranted:
- The chaos approach to building a business has taken root and begun to flower at Google and Meta. Out with the old and in with the new. I am willing to wait and see what happens because when either success or failure arrives, the stories of VCs jumping from tall buildings or youthful managers buying big yachts will circulate.
- The innovations in management at Google and Meta suggest to me a bit of desperation. Both companies perceive that each is falling behind or in danger of losing. That perception may be accurate because once the AI payoff is not evident, Google and Meta may find themselves paddling up the river, not floating down the river.
- The two innovations viewed as discrete actions are expensive, risky, and illustrative of the failure of management at both firms. Employees, stakeholders, and users have a lot to win or lose.
I heard a talk by someone who predicted that traditional management consulting would be replaced by smart software. In the blue chip firm in which I worked years ago, management decisions like these would be guaranteed to translate to old-fashioned, human-based consulting projects.
In today’s world, decisions by “leadership” are unlikely to be remediated by smart software. Fixing up the messes will require individuals with experience, knowledge, and judgment.
As Julius Caesar allegedly said:
In summo periculo timor misericordiam non recipit.
This means something along the lines of, “In situations of danger, fear feels no pity.” These new management rules suggest that both Google’s and Meta’s “leadership” are indeed fearful and grandstanding in order to overcome those inner doubts. The decisions to go against conventional management methods seem obvious and logical to them. To others, perhaps the “two bosses” and “a blast of AI products and services” moves are just ill advised or not informed?
Stephen E Arnold, July 8, 2025
We Have a Cheater Culture: Quite an Achievement
July 8, 2025
The annual lamentations about AI-enabled cheating have already commenced. Professor Elizabeth Wardle of Miami University would like to reframe that debate. In an opinion piece published at Cincinnati.com, she declares, “Students Aren’t Cheating Because they Have AI, but Because Colleges Are Broken.” Reasons they are broken, she writes, include factors like reduced funding and larger class sizes. Fundamentally, though, the problem lies in universities’ failure to sufficiently evolve.
Some suggest thwarting AI with a return to blue-book essays. Wardle, though, believes that would be a step backward. She notes early U.S. colleges were established before today’s specialized workforce existed. The handwritten assignments that served to train the wealthy, liberal-arts students of yesteryear no longer fit the bill. Instead, students need to understand how things work in the present and how to pivot with change. Yes, including a fluency with AI tools. Graduates must be “broadly literate,” the professor writes. She advises:
“Providing this kind of education requires rethinking higher education altogether. Educators must face our current moment by teaching the students in front of us and designing learning environments that meet the times. Students are not cheating because of AI. When they are cheating, it is because of the many ways that education is no longer working as it should. But students using AI to cheat have perhaps hastened a reckoning that has been a long time coming for higher ed.”
Who is to blame? For one, state legislatures. Many incentivize universities to churn out students with high grades in majors that match certain job titles. State funding, Wardle notes, is often tied to graduates hitting high salaries out of the gate. Her frustration is palpable as she asserts:
“Yes, graduates should be able to get jobs, but the jobs of the future are going to belong to well-rounded critical thinkers who can innovate and solve hard problems. Every column I read by tech CEOs says this very thing, yet state funding policies continue to reward colleges for being technical job factories.”
Professor Wardle is not all talk. In her role as Director of the Howe Center for Writing Excellence, she works with colleagues to update higher-learning instruction. One of their priorities has been how to integrate AI into curricula. She writes:
“The days when school was about regurgitating to prove we memorized something are over. Information is readily available; we don’t need to be able to memorize it. However, we do need to be able to assess it, think critically about it, and apply it. The education of tomorrow is about application and innovation.”
Indeed. But these urgent changes cannot be met as long as funding continues to dwindle. In fact, Wardle argues, we must once again funnel significant tax money into higher education. Believe it or not, that is something we used to do as a society. (She recommends Christopher Newfield’s book “The Great Mistake” to learn how and why free, publicly funded higher ed fell apart.) Yes, we suspect there will not be much US innovation if universities are broken and stay that way. Where will that leave us?
Cynthia Murrell, July 8, 2025
Google Fireworks: No Boom, Just Ka-ching from the EU Regulators
July 7, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
The EU celebrates the 4th of July with a firecracker for the Google. No bang, just ka-ching, which is the sound of the cash register ringing … again. “Exclusive: Google’s AI Overviews Hit by EU Antitrust Complaint from Independent Publishers.” The trusted news source which reminds me that it is trustworthy reports:
Alphabet’s Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters. Google’s AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. It began adding advertisements to AI Overviews last May.
Will the fine alter the trajectory of the Google? Answer: Does a snowball survive a fly by of the sun?
Several observations:
- Google, like Microsoft, absolutely has to make its smart software investments pay off and pay off in a big way
- The competition for AI talent makes fat, confused ducks candidates for becoming foie gras. Mr. Zuckerberg is going to buy the best ducks he can. Sports and Hollywood star compensation only works if the product pays off at the box office.
- Google’s “leadership” operates as if regulations from mere governments are annoyances, not rules to be obeyed.
- The products and services appear to be multiplying like rabbits. Confusion, not clarity, seems to be the consequence of decisions operating without a vision.
Is there an easy, quick way to make Google great again? My view is that the advertising model anchored to matching messages with queries is the problem. Ad revenue is likely to shift from many advertisers to blockbuster campaigns. Up the quotas of the sales team. However, the sales team may no longer be able to sell at a pace that copes with the cash burn for the alleged next big thing, super intelligence.
Reuters, the trusted outfit, says:
Google said numerous claims about traffic from search are often based on highly incomplete and skewed data.
Yep, highly incomplete and skewed data. The problem for Google is that we have a small tank of nasty cichlids. In case you don’t have ChatGPT at hand, a cichlid is a fish that will kill and eat its children. My cichlids have names: Chatty, Pilot girl, Miss Trall, and Dee Seeka. This means that when stressed or confined our cichlids are going to become killers. What happens then?
Stephen E Arnold, July 7, 2025
Scattered Spider: Operating Freely Despite OSINT and Specialized Investigative Tools. Why?
July 7, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I don’t want to create a dust up in the specialized software sector. I noted the July 2, 2025, article “A Group of Young Cybercriminals Poses the Most Imminent Threat of Cyberattacks Right Now.” That story surprised me. First, the Scattered Spider group was documented (more or less) by Trellix, a specialized software and services firm. You can read the article “Scattered Spider: The Modus Operandi” and get a sense of what Trellix reported. The outfit even has a Wikipedia article about their activities.
Last week I was asked a direct question, “Which of the specialized services firms can provide me with specific information about Telegram Groups and Channels, both public and private?” My answer, “None yet.”
Scattered Spider uses Telegram for some messaging functions, and if you want to get a sense of what the outfit does, just fire up your OSINT tools or better yet use one of the very expensive specialized services available to government agencies. The young cybercriminals appear to use the alias @ScatteredSpiderERC.
So what? Let’s go back to the question addressed directly to me about firms that have content about Telegram. If we assume the Wikipedia write up is sort of correct, the Scattered Spider entity popped up in 2022 and its activities caught the attention of Trellix. The time between the Trellix post and the Wired story is about two years.
Why hasn’t a specialized services firm provided actionable data to the US government, the Europol investigators, and the dozens of other law enforcement operations around the world? Isn’t it a responsible act to use that access to Telegram data to take down outfits that endanger casinos and other organizations?
Apparently the answer is, “No.”
My hunch is that these specialized software firms talk about having tools to access Telegram. That talk is a heck of a lot easier than finding a reliable way to access private Groups and Channels and trace a handle back to a real, live human being possibly operating in the EU or the US. I would suggest that France tried to use OSINT and the often nine-figure systems to crack Telegram. Will other law enforcement groups realize that the specialized software vendors’ tools fall short of the mark and think about a France-type response?
France seems to have made a dent in Telegram. I would hypothesize that the failure of OSINT and the specialized software tool vendors contributed to France’s decision to just arrest Pavel Durov. Mr. Durov is now ensnared in France’s judicial bureaucracy. To make the arrest more complex for Mr. Durov, he is a citizen of France and a handful of other countries, including Russia and the United Arab Emirates.
I mention this lack of Telegram cracking capability for three reasons:
- Telegram is in decline and the company is showing some signs of strain
- The changing attitude toward crypto in the US means that Telegram absolutely has to play in that market or face either erosion or decimation of its seven year push to create alternative financial services based on TONcoin and Pavel Durov’s partners’ systems
- Telegram is facing a new generation of messaging competitors. Like Apple, Telegram is late to the AI party.
One would think that at a critical point like this, the Shadow Server account would be a slam dunk for any licensee of specialized software advertising, “Telegram content.”
Where are those vendors’ webinars, email blasts, and trade show demonstrations? Where are the testimonials that Company Nuco’s specialized software really did work? “Here’s what we used in court because the specialized vendor’s software generated this data for us” is what I want to hear. I would suggest that Telegram remains a bit of a challenge to specialized software vendors. Will I identify these “big hat, no cattle” outfits? Nope.
Just thought that a reminder that marketing and saying what government professionals want to hear are easier than just talking.
Stephen E Arnold, July 7, 2025
Technology Firms: Children of Shoemakers Go Barefoot
July 7, 2025
If even the biggest of Big Tech firms are not safe from cyberattacks, who is? Investor news site Benzinga reveals, “Apple, Google and Facebook Among Services Exposed in Massive Leak of More than 16 Billion Login Records.” The trove represents one of the biggest exposures of personal data ever, writer Murtuza J. Merchant tells us. We learn:
“Cybersecurity researchers have uncovered 30 massive data collections this year alone, each containing tens of millions to over 3.5 billion user credentials, Cybernews reported. These previously unreported datasets were briefly accessible through misconfigured cloud storage or Elasticsearch instances, giving the researchers just enough time to detect them, though not enough to trace their origin. The findings paint a troubling picture of how widespread and organized credential leaks have become, with login information originating from malware known as infostealers. These malicious programs siphon usernames, passwords, and session data from infected machines, usually structured as a combination of a URL, username, and password.”
Ah, advanced infostealers. One of the many handy tools AI has made possible. The write-up continues:
“The leaked credentials span a wide range of services from tech giants like Apple, Facebook, and Google, to platforms such as GitHub, Telegram, and various government portals. Some datasets were explicitly labeled to suggest their source, such as ‘Telegram’ or a reference to the Russian Federation. … Researchers say these leaks are not just a case of old data resurfacing.”
Not only that, the data’s format is cybercriminal-friendly. Merchant writes:
“Many of the records appear recent and structured in ways that make them especially useful for cybercriminals looking to run phishing campaigns, hijack accounts, or compromise corporate systems lacking multi-factor authentication.”
But it is the scale of these datasets that has researchers most concerned. The average collection held 500 million records, while the largest had more than 3.5 billion. What are the chances your credentials are among them? The post suggests the usual, most basic security measures: complex and frequently changed passwords and regular malware scans. But surely our readers are already observing these best practices, right?
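The report describes these records as “usually structured as a combination of a URL, username, and password.” To make that concrete, here is a minimal sketch of how a defender might triage such a dump: count exposed credentials per domain so password resets can be prioritized. The colon-separated record layout and all sample data below are my assumptions for illustration, not details from the actual leak.

```python
from collections import Counter
from urllib.parse import urlparse

def parse_record(line: str):
    """Split one assumed 'url:username:password' record into (domain, user, pwd).

    rsplit from the right keeps the colons inside the URL intact.
    """
    url, user, pwd = line.strip().rsplit(":", 2)
    return urlparse(url).netloc or url, user, pwd

def domains_at_risk(lines):
    """Tally leaked credentials per domain, skipping malformed records."""
    counts = Counter()
    for line in lines:
        try:
            domain, _, _ = parse_record(line)
        except ValueError:  # record lacks the expected three fields
            continue
        counts[domain] += 1
    return counts

# Hypothetical sample data, not from any real leak.
sample = [
    "https://mail.example.com/login:alice:hunter2",
    "https://mail.example.com/login:bob:passw0rd",
    "http://shop.example.net:carol:letmein",
]
print(domains_at_risk(sample).most_common())
# → [('mail.example.com', 2), ('shop.example.net', 1)]
```

A real infostealer log would need far messier handling (encodings, mixed delimiters, session tokens), but the grouping step is the same: a domain with many hits is where forced resets and multi-factor enrollment pay off first.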
Cynthia Murrell, July 7, 2025
Worthless College Degrees. Hey, Where Is Mine?
July 4, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
This write up is not about going “beyond search.” Heck, search has just changed adjectives and remains mostly a frustrating and confusing experience for employees. I want to highlight the information (which I assume to be 100 percent dead accurate like other free data on the Internet) about the “17 Most Useless College Degrees Employers Don’t Want Today.” Okay, high school seniors, pay attention. According to the estimable Finance Buzz, do not study these subjects and — heaven forbid — expect to get a job when you graduate from an online school, the local college, or a big-time, big-bucks university. I have grouped the write up’s earthworm list into some categories; to wit:
Do gooder work
- Criminal justice
- Education (Who needs an education when there is YouTube?)
Entertainment
- Fashion design
- Film, video, and photographic arts
- Music
- Performing arts
Information
- Advertising
- Creative writing (like Finance Buzz research articles?)
- Communications
- Computer science
- Languages (Emojis and some English are what is needed I assume)
Real losers
- Anthropology and archaeology (I thought these were different until Finance Buzz cleared up my confusion)
- Exercise science
- Religious studies
Waiting tables and working the midnight check in desk
- Culinary arts (Fry cook until the robots arrive)
- Hospitality (Smile and show people their table)
- Tourism (Do not fall into the volcano)
Assume the write up is providing verifiable facts. (I know, I know, this is the era of alternative facts.) If we flash forward five years, the already stretched resources for law enforcement and education will be in an even smaller pickle barrel. Good for the bad actors and the people who don’t want to learn. Perhaps less beneficial to others in society. I assume that one can make TikTok-type videos and generate a really bigly income until the Googlers change the compensation rules or TikTok is banned from the US. With the world awash in information and open source software available, who needs to learn anything? AI will do this work. Who in the heck gets a job in archaeology when one can learn from UnchartedX and Brothers of the Serpent? Exercise? Play football and get a contract when you are in middle school like talented kids in Brazil. And the cruise or specialty restaurant business? Those contracts are for six months for a reason. Plus cruise lines have started enforcing no-video rules on the staff who were trying to make day-in-my-life videos about the wonderful cruise ship environment. (Weren’t these vessels once called “prison ships”?) My hunch is that whoever assembled this stellar research at Finance Buzz was actually but indirectly writing about smart software and robots. These will decimate many jobs in the identified fields.
What should a person study? Nuclear physics, mathematics (applied and theoretical maybe), chemistry, biogenetics, materials science, modern financial management, law (aren’t there enough lawyers?), medicine, and psychology until the DRG codes are restricted.
Excellent way to get a job. And in what field was my degree? Medieval religious literature. Perfect for life-long employment as a dinobaby essayist.
Stephen E Arnold, July 4, 2025
Apple Fix: Just Buy Something That Mostly Works
July 4, 2025
No smart software involved. Just an addled dinobaby.
A year ago Apple announced AI which means, of course, Apple Intelligence. Well, Apple was “held back”. In 2025, the powerful innovation machine made the iPhone and Macs look a bit like the Windows see-through motif. Okay.
I read “Apple Reportedly Has a Secret Plan to Quickly Gain Ground in the AI Race.” I won’t point out that if information is circulating AND appears in an article, that information is not secret. It is public relations and marketing output. Second, forget the split infinitive. Since few recognize that datum is singular and data is plural or that the word none is singular, I won’t mention it. Obviously few “real” journalists care.
Now to the write up. In my opinion, the big secret revealed and analyzed is …
Sources report that the company is giving serious consideration to bidding for the startup Perplexity AI, which would allow it to transplant a chunk of expertise and ready-made technology into Apple Park and leapfrog many of the obstacles it currently faces. Perplexity runs an AI-powered search engine which can already perform the contextual tricks which Apple advertised ahead of the iPhone 16 launch but hasn’t yet managed to build into Siri.
Analysis of this “secret” is a bit underwhelming. Here’s the paragraph that is supposed to make sense of this non-secret secret:
Historically, Apple has been wary of large acquisitions, whereas rivals, such as Facebook (buying WhatsApp for $22 billion) and Google (acquiring cloud security platform Wiz for $32 billion), have spent big to scoop up companies. It could be a mark of how worried Apple is about the AI situation that it’s considering such a major and out-of-character move. But after a year of headaches and obstacles, it also could pay off in a big way.
Okay, but what about Google acquiring Motorola? What about Microsoft’s clever purchase of Nokia? And there are other examples. Big companies buying other companies can work out or fizzle. Where is Dodgeball now? Orkut?
The actual issue strikes me as Apple’s failure to recognize that smart software — whether it works particularly well or not — was a marketing pony to ride in the technical circus. Microsoft got the message, and it seems that the marketing play triggered Google. But the tie up seems to be under a bit of stress as of June 2025.
Another problem is that buying AI requires that the purchaser manage the operation, ensure continued innovation of an order slightly more demanding than imitating a Windows interface, and get the wizard huskies to remain hooked to the dog sled.
What seems to be taking place is a division of the smart software world into three sectors:
- Companies that “do” large language models; for example, Google, OpenAI, and others
- Companies that “wrap” large language models and generate start ups that are presented as AI but are interfaces
- Companies that “integrate” or “glue on” AI to an existing service, platform, or system.
Apple failed at number 1. It hasn’t invented anything in the AI world. (I think I learned about Siri in a Stanford Research Institute presentation many, many years ago. No, it did not work particularly well even in the demo.)
Apple is not too good at wrapping anything. Safari doesn’t wrap. Safari blazes its own weird trail which is okay for those who love Apple software. For someone like me, I find it annoying.
Apple has demonstrated that it could not “glue on” SIRI.
Okay, Apple has not scored a home run with either approach one, two, or three.
Thus, the analysis, in my opinion, is that Apple, like some other outfits, now realizes smart software — whether it is 100 percent reliable or not — continues to generate buzz. The task for Apple, therefore, is to figure out how to convert whatever it does into buzz. Skip the cost of invention. Sidestep wrapping AI and look for “partners” who do what department stores did in the 1950s: wrap my holiday gifts. And, three, try to make “glue on” work.
Net net: Will Apple undertake an auto-da-fé and see the light?
Stephen E Arnold, July 4, 2025