2025 Is a Triangular Number: Tim Apple May Have No Way Out
May 30, 2025
Just a dinobaby and no AI: How horrible an approach?
Macworld in my mind is associated with happy Macs, not sad Macs. I just read “Tim Cook’s Year Is Doomed and It’s Not Even June Yet.” That’s definitely a sad Mac headline, and it suggests that Tim Apple will morph into a well-compensated human trapped in a little box.
The write up says:
Cook’s bad, awful 2025 is pretty much on the record…
Why, pray tell? How about:
- The failure of Apple’s engineers to deliver smart software
- A donation to a certain political figure’s campaign only to be rewarded with tariffs
- Threats of an Apple “tax”
- Fancy dancing with China and pumping up manufacturing in India only to be told by a person of authority, “That’s not a good idea, Tim Apple.”
I think I have touched on the main downers. The write up concludes with:
For Apple, this may be a case of too much success being a bad thing. It is unlikely that Cook could have avoided Trump’s attention, given its inherent gravimetric field. The question is, now that a moderate show of obsequiousness has proven insufficiently mollifying, what will Cook do next?
Imagine a high flying US technology company not getting its way in the US and a couple of other countries to boot. And what about the European Union?
Several observations are warranted:
- Tim Cook should be paranoid. Lots of people are out to get Apple and he will be collateral damage.
- What happens if the iPhone craters? Will Apple TV blossom or blow?
- How many pro-Apple humans will suffer bouts of depression? My guess? Lots.
Net net: Numerologists will perceive 2025 as a year for Apple to reflect and prepare for new cycles. I just see 2025 as a triangular number with Tim Apple in its perimeter and no way out evident.
Stephen E Arnold, May 30, 2025
AI Can Do Your Knowledge Work But You Will Not Lose Your Job. Never!
May 30, 2025
The dinobaby wrote this without smart software. How stupid is that?
Ravical is going to preserve jobs for knowledge workers. Nevertheless, the company’s AI may complete 80% of the work in these types of organizations. No bean counter on earth would figure out that reducing humanoid workers would cut costs, eliminate the useless vacation scam, and chop the totally unnecessary health care plan. None.
The write up “Belgian AI Startup Says It Can Automate 80% of Work at Expert Firms” reports:
Joris Van Der Gucht, Ravical’s CEO and co-founder, said the “virtual employees” could do 80% of the work in these firms. “Ravical’s agents take on the repetitive, time-consuming tasks that slow experts down,” he told TNW, citing examples such as retrieving data from internal systems, checking the latest regulations, or reading long policies. Despite doing up to 80% of the work in these firms, Van Der Gucht downplayed concerns about the agents supplanting humans.
I believe this statement is 100 percent accurate. AI firms do not use excessive statements to explain their systems and methods. The article provides more concrete evidence that this replacement of humans is spot on:
Enrico Mellis, partner at Lakestar, the lead investor in the round, said he was excited to support the company in bringing its “proven” experience in automation to the booming agentic AI market. “Agentic AI is moving from buzzword to board-level priority,” Mellis said.
Several observations:
- Humans absolutely will be replaced, particularly those who cannot sell
- Bean counters will be among the first to point out that software, as long as it is good enough, will reduce costs
- Executives are judged on financial performance, not the quality of the work, as long as revenues and profits result
Will Ravical become the go-to solution for outfits engaged in knowledge work? No, but it will become a company that other agentic AI firms will watch closely. As long as the AI is good enough, humanoids without the ability to close deals will have plenty of time to ponder opportunities in the world of good enough, hallucinating smart software.
Stephen E Arnold, May 30, 2025
A Grok Crock: That Dog Ate My Homework
May 29, 2025
Just the dinobaby operating without Copilot or its ilk.
I think I have heard Grok (a unit of xAI, I think) explain that outputs have been the result of a dog eating the code or whatever. I want to document these Grok Crocks. Perhaps I will put them in a Grok Pot and produce a list of recipes suitable for middle school and high school students.
The most recent example of “something just happened” appears in “Grok Says It’s ‘Skeptical’ about Holocaust Death Toll, Then Blames Programming Error.” Does this mean that smart software is programming Grok? If so, the explanation should be worded, “Grok hallucinates.” If a human wizard made a programming error, then the statement should be that quality control will become Job One. That worked for Microsoft until Copilot became Job One.
The cited article stated:
Grok said this response was “not intentional denial” and instead blamed it on “a May 14, 2025, programming error.” “An unauthorized change caused Grok to question mainstream narratives, including the Holocaust’s 6 million death toll, sparking controversy,” the chatbot said. Grok said it “now aligns with historical consensus” but continued to insist there was “academic debate on exact figures, which is true but was misinterpreted.” The “unauthorized change” that Grok referred to was presumably the one xAI had already blamed earlier in the week for the chatbot’s repeated insistence on mentioning “white genocide” (a conspiracy theory promoted by X and xAI owner Elon Musk), even when asked about completely unrelated subjects.
I am going to steer clear of the legality of these statements and the political shadows these Grok outputs cast. Instead, let me offer a few observations:
- I use a number of large language models. I have used Grok exactly twice. The outputs had nothing of interest for me. I asked, “Can you cite X.com messages?” The system said, “Nope.” I tried again after Grok 3 became available. Same answer. Hasta la vista, Grok.
- The training data, the fancy math, and the algorithms determine the output. Since current LLMs rely on Google’s big idea, one would expect the outputs to be similar. Outlier outputs like these alleged Grokings are a bit of a surprise. Perhaps someone at Grok could explain exactly why these outputs are happening. I know dogs could eat homework. The event is highly unlikely in my experience, although I had a dog which threw up on the typewriter I used to write a thesis.
- I am a suspicious person. Grok makes me suspicious. I am not sure marketing and smarmy talk can reduce my anxiety about Grok providing outlier content to middle school, high school, college, and “I don’t care” adults. Weaponized information, in my opinion, is just that: a weapon. Dangerous stuff.
Net net: Is the dog eating homework one of the Tesla robots? If so, speak with the developers, please. An alternative would be to use Claude 3.7 or Gemini to double check Grok’s programming.
Stephen E Arnold, May 29, 2025
Telegram and xAI: Deal? What Deal?
May 29, 2025
Just a dinobaby and no AI: How horrible an approach?
What happens when two people with a penchant for spawning babies seem to sort of, mostly, well, generally want a deal? On May 28, 2025, one of the super humans suggested a deal existed between the Telegram company and the xAI outfit. Money and equity would change hands. The two parties were in sync. I woke this morning to an email that said, “No deal signed.”
The Kyiv Independent, a news outfit that pays close attention to Telegram because of the “special operation”, published “Durov Announces Telegram’s Partnership with Musk’s xAI, Who Says No Deal Signed Yet.” The story reports:
Telegram and Elon Musk’s xAI will enter a one-year partnership, integrating the Grok chatbot into the messaging app, Telegram CEO Pavel Durov announced on May 28. Musk, the world’s richest man who also owns Tesla and SpaceX, commented that "no deal has been signed," prompting Durov to clarify that the deal has been agreed in "principle" with "formalities pending." "This summer, Telegram users will gain access to the best AI technology on the market," Durov said.
The write up included an interesting item of information; to wit:
Durov has claimed he is a pariah and has been effectively exiled from Russia, but it was reported last year that he had visited Russia over 60 times since leaving the country, according to Kremlingram, a Ukrainian group that campaigns against the use of Telegram in Ukraine.
Mr. Musk, the mastermind behind a large exploding space vehicle, and Mr. Durov have much to gain from a linkage. Telegram, like Apple, is not known for its smart software. Third-party bots have made AI services available to Telegram’s more enterprising users. xAI, which has made modest progress on its path to becoming the “everything” app, might benefit from getting front and center with the Telegram user base.
Both individuals are somewhat idiosyncratic. Both have interesting technology. Both present themselves as bright, engaging, and often extremely confident professionals.
What’s likely to happen? With two leaders who have much in common, Grok or some other smart software will make its way to the Telegram faithful. When that happens is unknown. The terms of the “deal” (if one exists) are marketing or jockeying as of May 29, 2025. The timeline for action is fuzzy.
What’s obvious is that volatility and questionable information shine the spotlight on both forward-leaning companies. The Telegram information distracts a bit from the failed rocket. Good for Mr. Musk. The Grok deal distracts a bit from the French-styled dog collar around Mr. Durov’s neck. Good for Mr. Durov.
When elephants fight, grope, and deal, the grass may take a beating. When the dust settles, what will these elephants be doing? The grass has been stomped upon, but what about the beasties?
Stephen E Arnold, May 29, 2025
Here Is a Great Idea: Chase Scientists Out of the US
May 29, 2025
It seems taking a chainsaw to funding for scientific research might chase scientists into the welcoming arms of other countries. Who knew? NPR reveals, “Nearly 300 Scientists Apply for French Academic Program Amid Trump Cuts in U.S.” Reporter Alana Wise tells us:
“A French university courting U.S.-based academics said it has already received nearly 300 applications for researchers seeking ‘refugee status’ amid President Trump’s elimination of funding for several scientific programs. Last month, Aix-Marseille University, one of the country’s oldest and largest universities, announced it was accepting applications for its Safe Place For Science program, which it said offers ‘a safe and stimulating environment for scientists wishing to pursue their research in complete freedom.’”
We can see how that might appeal to our best and brightest. Nearly half of the applicants to the program are American. The scientists hail from institutions like Johns Hopkins University, NASA, the University of Pennsylvania, Columbia, Yale and Stanford. But only 20 will be accepted into Aix-Marseille’s program. Not to worry, this intellect poaching may be a trend. We learn:
“Last month, France’s CentraleSupélec announced a $3.2 million grant to help finance American research that had been halted in the states. And Netherlands Minister of Education, Culture and Science Eppo Bruins wrote in a letter to parliament that he requested to set up a fund aimed at bringing top international scientists to the Netherlands.”
That is good news for the reported 75% of scientists looking to leave the US. Who wants or needs scientists?
Cynthia Murrell, May 29, 2025
xAI and Telegram: What Will the Durovs Do? The Clock Is Ticking
May 28, 2025
Just a dinobaby and no AI: How horrible an approach?
One of my colleagues called my attention to the Coindesk online service’s article “Telegram Signs $300M Deal with Elon Musk’s xAI to Integrate Grok into Its Messaging App, TON up 16%.” The subtitle is interesting:
Telegram Will Also Receive 50% of Revenue from xAI Subscriptions Sold via the App
If one views Telegram as a simple messaging app, Telegram itself has not done much to infuse its “mini app” with AI functions. However, Telegram bot developers have. Dozens of bots include AI features. The most popular smart software among bot developers is, based on my team’s research, a toss-up between open source AI and ChatGPT. If our information is correct, Elon Musk now has a conduit to the Telegram user base. Depending on what source you select, Telegram has 900 million to one billion users. How many are humans with an actual mobile phone number? We don’t know, and I am not sure law enforcement knows until the investigators try to match a mobile number with a person, a company, or some mysterious offshore entity with offices in the Seychelles or a similarly flexible nation.
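Mechanically, these third-party AI bots are thin relays: the bot pulls user messages from Telegram’s Bot API and forwards them to an external model. Here is a minimal sketch of that pattern; the model name and the stubbed AI backend are illustrative assumptions, while getUpdates and sendMessage are real, documented Bot API methods:

```python
# Sketch of the relay pattern used by third-party Telegram AI bots.
# The model name and the ai_answer stub are illustrative assumptions;
# a real bot would POST these payloads over HTTPS to the respective APIs.

def build_ai_request(prompt: str, model: str = "example-llm") -> dict:
    """Shape a chat-completion style payload for an external AI backend."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def build_telegram_reply(chat_id: int, text: str) -> dict:
    """Shape the payload for the Bot API's sendMessage method."""
    return {"chat_id": chat_id, "text": text}

def relay(update: dict, ai_answer) -> dict:
    """Take one getUpdates-style update, query the AI, shape the reply."""
    message = update["message"]
    answer = ai_answer(build_ai_request(message["text"]))
    return build_telegram_reply(message["chat"]["id"], answer)

# Stubbed AI backend for demonstration only.
demo = relay(
    {"message": {"chat": {"id": 42}, "text": "What is TON?"}},
    ai_answer=lambda req: f"(model {req['model']} would answer here)",
)
print(demo["chat_id"])  # 42
```

The point of the sketch is that the bot layer is trivial plumbing; swapping ChatGPT for Grok is mostly a matter of changing the backend the relay calls.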
The write up says:
Telegram founder Pavel Durov revealed on X that the two companies agreed to a 1-year partnership that would see Telegram receive $300 million in cash and equity from xAI, in addition to 50% of revenues from xAI subscriptions sold via Telegram.
Let’s pull out the allegedly true factoids:
- The deal is a one-year partnership. In the world of the French judiciary, one year can be a generous amount of time to derail the Telegram operation. Mr. Durov’s criticism of France with regard to the Romanian elections and increasing criticism of the French government may add risk to the xAI deal. With Pavel Durov in France between August 2024 and March 2025, Telegram’s pace of innovation stalled on Stars token fiddling, not AI.
- Mr. Musk’s willingness to sign up a sales channel for Grok may be related to the prevalence of Sam Altman’s AI system in third-party bots for customer support and performing a steadily increasing range of Telegram-centric functions. Because Telegram’s approach to messaging allows bots to move across boundaries between blockchains as well as traditional Web services, Telegram’s bot ecosystem should deliver, Mr. Musk hopes, an alternative AI to bot developers and provide a new source of users to the Grok smart software.
- The “equity” angle is interesting. Equity in what? xAI or some other property? Perhaps — just perhaps — Mr. Musk becomes a stakeholder in Telegram. Mr. Musk wants to convert X.com into an “everything” service, a dream shared with Sam Altman. Mr. Altman is not a particularly enthusiastic supporter of Mr. Musk. Mr. Musk is equally disenchanted with Mr. Altman. The love triangle will be interesting to observe as the days tick toward the end of the one-year tie-up between Telegram and xAI.
Another angle on the deal was offered by the online information service Watcher.Guru. “Elon Musk’s xAI Joins Telegram in $300M Grok Partnership” speculates:
This integration has addressed several critical pain points that crypto users face across multiple essential areas daily. Many people find blockchain technology overwhelming, and the complexity often prevents them from fully engaging with digital assets right now. By leveraging AI assistance directly within Telegram, users can get help with crypto-related questions, market analysis, and blockchain education without leaving their messaging app. The AI integration revolutionizes security by providing tools that identify crypto scams. This becomes valuable given how scams prevail on messaging platforms.
The cited paragraph makes clear that convergence is coming among smart software, social media services with hefty user counts, and cryptocurrency. However, the idea that smart software will prevent fraud causes me to chortle. Crypto is, in my opinion, a fraudulent enterprise. Mashing up the Telegram system with X.com binds a range of alleged criminal activities to a communications system that can be shaped to promote quite specific propaganda. Toss in crypto, and what do you get? Answer: More cyber crime.
Will this union create a happy, sunny user experience free from human trafficking, online gambling, and the sale of contraband? One can only hope, but this tie up has to prove that it delivers a positive, constructive user experience. When Sam Altman releases his everything app, will X.com be positioned to be a worthy competitor? Will Elon Musk purchase Telegram and compete with proven technology, a large user base, and a team of core engineers able to create a slam dunk product and service?
Good questions. Unlike Watcher.Guru’s observation that “AI integration revolutionizes security,” the disposition of the deal between Messrs. Durov and Musk is unknown. (How can AI integration revolutionize security when the services are not yet integrated?) Oh, well, close enough for horseshoes.
Stephen E Arnold, May 28, 2025
Real News Outfit Finds a Study Proving That AI Has No Impact in the Workplace
May 27, 2025
Just the dinobaby operating without Copilot or its ilk.
The “real news” outfit is the wonderful business magazine Fortune, now only $1 a month. Subscribe now!
The title of the write up catching my attention was “Study Looking at AI Chatbots in 7,000 Workplaces Finds ‘No Significant Impact on Earnings or Recorded Hours in Any Occupation.’” Out of the blocks this story caused me to say to myself, “This is another ‘you-can’t-fire-human-writers’ proxy.”
Was I correct? Here are three snips, and I not only urge you to subscribe to Fortune but also to read the original article and form your own opinion. Another option is to feed it into an LLM able to include Web content and ask it to tell you about the story. If you are reading my essay, you know that a dinobaby plucks the examples, no smart software required, although as I creep toward 81, I probably should let a free AI do the thinking for me.
Here’s the first snip I captured:
Their [smart software or large language models] popularity has created and destroyed entire job descriptions and sent company valuations into the stratosphere—then back down to earth. And yet, one of the first studies to look at AI use in conjunction with employment data finds the technology’s effect on time and money to be negligible.
You thought you could destroy humans, you high technology snake oil peddlers (not the contraband Snake Oil popular in Hong Kong at this time). Think old-time carnival barkers.
Here’s the second snip about the sample:
focusing on occupations believed to be susceptible to disruption by AI
Okay, “believed” is the operative word. Who does the believing? A University of Chicago assistant professor of economics (Yay, Adam Smith. Yay, yay, Friedrich Hayek) and a graduate student. Yep, a time-honored method: a graduate student.
Now the third snip which presents the rock solid proof:
On average, users of AI at work had a time savings of 3%, the researchers found. Some saved more time, but didn’t see better pay, with just 3%-7% of productivity gains being passed on to paychecks. In other words, while they found no mass displacement of human workers, neither did they see transformed productivity or hefty raises for AI-wielding super workers.
Okay, not much payoff from time savings. Okay, not much of a financial reward for the users. Okay, nobody got fired. I thought it was hard to terminate workers in some European countries.
After reading the article, I like the penultimate paragraph’s reminder that outfits like Duolingo and Shopify have begun rethinking the use of chatbots. Translation: You cannot get rid of human writers and real journalists.
Net net: A temporary reprieve will not stop the push to shift from expensive humans who want health care and vacations. That’s the news.
Stephen E Arnold, May 27, 2025
Microsoft Investigates Itself and a Customer: Finding? Nothing to See Here
May 26, 2025
No AI, just a dinobaby and his itty bitty computer.
GeekWire, creator of the occasional podcast, published “Microsoft: No Evidence Israeli Military Used Technology to Harm Civilians, Reviews Find.” When an outfit that emits occasional podcasts publishes a story, I know that the information is 100 percent accurate. GeekWire has written about Microsoft and its outstanding software. Like Windows Central, the enthusiasm for what the Softies do is a key feature of the information.
What I learned included:
- Israel’s military uses Microsoft technology
- Israel may have used Microsoft technology to harm non-civilians
- The study was conducted by the detail-oriented and consistently objective company itself. Self-study is known to be reliable, a bit like research papers from Harvard, which are somewhat dicey in the reproducible-results department
- The data available for the self-study was limited; that is, Microsoft relied on an incomplete data set because certain information was presumably classified
- Microsoft “provided limited emergency support to the Israeli government following the October 7, 2023, Hamas attacks.”
Yeah, that sounds rock solid to me.
Why did the creator of Bob and Clippy sit down and study its navel? The write up reported:
Microsoft said it launched the reviews in response to concerns from employees and the public over media reports alleging that its Azure cloud platform and AI technologies were being used by the Israeli military to harm civilians.
The Microsoft investigation concluded:
its recent reviews found no evidence that the Israeli Ministry of Defense has failed to comply with its terms of service or AI Code of Conduct.
That’s a fact. More than rock solid, the fact is like one of those pre-Inca megaliths. That’s really solid.
GeekWire goes out on a limb in my opinion when it includes in the write up a statement from an individual who does not see eye to eye with the Softies’ investigation. Here’s that passage:
A former Microsoft employee who was fired after protesting the company’s ties to the Israeli military said the company’s statement is “filled with both lies and contradictions.”
What’s with the allegation of “lies and contradictions”? Get with the facts. Skip the bogus alternative facts.
I do recall that several years ago I was told by an Israeli intelware company that their service was built on Microsoft technology. Now here’s the key point. I asked if the cloud system worked on Amazon? The response was total confusion. In that English language meeting, I wondered if I had suffered a neural malfunction and posed the question, “Votre système fonctionne-t-il sur le service cloud d’Amazon?” in French, not English.
The idea that this firm’s state-of-the-art intelware would be anything other than Microsoft centric was a total surprise to those in the meeting. To them, the notion that this company’s intelware, like others developed in Israel, would be non-Microsoft was inconceivable.
Obviously these professionals were not aware that intelware systems (some of which failed to detect threats prior to the October 2023 attack) would be modified so that only adversary military personnel would be harmed. That’s what the Microsoft investigation just proved.
Based on my experience, Israel’s military innovations are robust despite that October 2023 misstep. Furthermore, warfighting systems, if they do run on Microsoft software and systems, have the ability to discriminate between combatants and non-combatants. This is an important technical capability and almost on a par with the Bob interface, Clippy, and AI in Notepad.
I don’t know about you, but the Microsoft investigation put my mind at ease.
Stephen E Arnold, May 26, 2025
Microsoft: Did It Really Fork This Fellow?
May 26, 2025
Just the dinobaby operating without Copilot or its ilk.
Forked doesn’t quite communicate the exact level of frustration Philip Laine experienced while working on a Microsoft project. He details the incident in his post, “Getting Forked By Microsoft.” Laine invented a solution for image scalability that has no stateful component and needs minimal operational oversight. He dubbed his project Spegel, made it open source, and was contacted by Microsoft.
Microsoft was pleased with Spegel. Laine worked with Microsoft engineers to implement Spegel into its architecture. Everything went well until Microsoft stopped working with him. He figured they had moved on to other projects. Microsoft did move on, but the engineers developed their own version of Spegel. They did have the grace to thank Laine in a README file. It gets worse:
"While looking into Peerd, my enthusiasm for understanding different approaches in this problem space quickly diminished. I saw function signatures and comments that looked very familiar, as if I had written them myself. Digging deeper I found test cases referencing Spegel and my previous employer, test cases that have been taken directly from my project. References that are still present to this day. The project is a forked version of Spegel, maintained by Microsoft, but under Microsoft’s MIT license.”
Microsoft plagiarized…no…downright stole Spegel’s base code from Laine. Laine, however, had published Spegel under the MIT license. The MIT license means:
“Software released under an MIT license allows for forking and modifications, without any requirement to contribute these changes back. I default to using the MIT license as it is simple and permissive.”
It does require this:
“The license does not allow removing the original license and purport that the code was created by someone else. It looks as if large parts of the project were copied directly from Spegel without any mention of the original source.”
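Laine’s complaint turns on a checkable property: an MIT-licensed fork must retain the upstream copyright and permission notice. A minimal sketch of that check follows; the file contents are hypothetical examples, not the actual Peerd or Spegel license text:

```python
# Hedged sketch: does a fork's LICENSE still credit the upstream author?
# The MIT license permits forking but not removing the original notice.

def retains_notice(license_text: str, upstream_holder: str) -> bool:
    """Return True if the fork's license text still names the upstream holder."""
    return upstream_holder.lower() in license_text.lower()

# Hypothetical fork license: the upstream "Copyright (c) Philip Laine"
# line has been replaced with the forker's own notice.
fork_license = """MIT License

Copyright (c) 2025 BigCo Inc.
"""

print(retains_notice(fork_license, "Philip Laine"))  # False: notice stripped
```

A check this simple is the whole dispute in miniature: forking was permitted; erasing the attribution was not.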
Laine wanted to work with Microsoft and have their engineers contribute to his open source project. He’s dedicated his energy, time, and resources to Spegel and continues to do so without much contribution other than GitHub sponsors and the thanks of its users. Laine is considering changing Spegel’s licensing as it’s the only way to throw a stone at Microsoft.
If true, the pulsing AI machine is a forker.
Whitney Grace, May 26, 2025
Some Outfits Take Pictures… Of Users
May 23, 2025
Conspiracy theorists aka wackadoos preach that the government is listening to everyone with microphones, and it’s only gotten worse with mobile devices. This conspiracy theory has been running circuits since before the invention of the Internet. It used to be that spies or aluminum-can string telephones were the culprits. Truth is actually stranger than fiction, and New Atlas updated an article about whether Facebook is actually listening to us, “Your Phone Isn’t Secretly Listening To You, But The Truth Is More Disturbing.”
Let’s assume that the story is accurate; the information was on the Internet, so for AI and some humans, the write up is chock full of meaty facts. It was revealed in 2024 that Cox Media Group (CMG) developed Active Listening, a system to capture “real time intent data” with mobile devices’ microphones. It then did the necessary technology magic and served personalized ads. Tech companies distanced themselves from CMG, and CMG stopped using the system. It supposedly worked by listening to small snippets of voice data uploaded after digital assistants were activated. This bleeds into the smartphone listening conspiracy, but apparently covert listening is still not a tenable reality.
The mobile cyber security company Wandera tested the listening microphone theory. The researchers placed two smartphones in a room and played pet food ads on an audio loop for thirty minutes a day over three days. Here are the nitty gritty details:
“User permissions for a large number of apps were all enabled, and the same experiment was performed, with the same phones, in a silent test room to act as a control. The experiment had two main goals. First, a number of apps were scanned following the experiment to ascertain whether pet food ads suddenly appeared in any streams. Secondly, and perhaps more importantly, the devices were closely examined to track data consumption, battery use, and background activity.”
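The second goal reduces to a simple comparison: if the microphone were covertly streaming audio for ad targeting, the ad-exposed phone’s background data use should noticeably exceed the silent control’s. A sketch of that logic, with illustrative numbers that are assumptions, not Wandera’s actual measurements:

```python
# Hedged sketch of the experiment's logic: compare background data use
# on an ad-exposed phone against a silent control phone. All numbers
# below are illustrative assumptions, not measurements from the study.

def covert_listening_suspected(exposed_mb, control_mb, threshold_ratio=1.5):
    """Flag suspicion if the exposed phone uploads far more data than the control."""
    exposed_avg = sum(exposed_mb) / len(exposed_mb)
    control_avg = sum(control_mb) / len(control_mb)
    return exposed_avg > control_avg * threshold_ratio

# Hypothetical daily background upload totals (MB) over three days.
exposed = [12.1, 11.8, 12.4]   # phone in the room with looping pet food ads
control = [12.0, 12.2, 11.9]   # identical phone in the silent control room

print(covert_listening_suspected(exposed, control))  # comparable usage: False
```

Comparable data, battery, and background-activity numbers across the two phones are exactly what the researchers reported, which is why the microphone theory did not hold up.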
The results showed that the phones weren’t listening to conversations. The truth was more mundane and more feasible given current technology:
“In early 2017 Jingjing Ren, a PhD student at Northeastern University, and Elleen Pan, an undergraduate student, designed a study to investigate the very issue of whether phones listen in on conversations without users knowing. Pretty quickly it became clear to the researchers that the phones’ microphones were not being covertly activated, but it also became clear there were a number of other disconcerting things going on. ‘There were no audio leaks at all – not a single app activated the microphone,’ said Christo Wilson, a computer scientist working on the project. ‘Then we started seeing things we didn’t expect. Apps were automatically taking screenshots of themselves and sending them to third parties. In one case, the app took video of the screen activity and sent that information to a third party.’”
There are multiple other ways Facebook and other companies are actually tracking and collecting data. Everything done on a smartphone, from banking to playing games, generates data that can be tracked and sent to third parties. The more useful your phone is to you, the more useful it is as a tracking, monitoring, and selling tool for AI algorithms that generate targeted ads and more personalized content. It’s a lot easier to believe in the microphone theory than to understand the vast amount of technology at work to steal…er…gather information. To sum up, innovators are inspirational!
Whitney Grace, May 23, 2025