Telegram, a Stylish French Dog Collar, and Mom Saying, “Pavel Clean Up Your Room!”
June 4, 2025
Just a dinobaby operating without AI. What do you expect? A free newsletter and an old geezer. Do those statements sound like dorky detritus?
Pavel Durov has a problem with France. The country’s judiciary let him go back home after an eight-month stay-cation. However, Mr. Durov is not the type of person to enjoy having a ring in his nose and a long strand of red tape connecting him to his new mom back in Paris. Pavel wants to live an Airbnb life, but he has to find a way to get his French mom to say, “Okay, Pavel, you can go out with your friends, but you have to be home by 9 pm Paris time.” If he does not comply, Mr. Durov is learning that the French government can make life miserable: There’s the monitoring. There’s the red tape. There’s the reminder that France has some wonderful prison facilities at home, in North Africa, and in Guiana (like where’s that, Pavel?). But worst of all, Mr. Durov does not have his beloved freedom.
He learned this when he blew off a French request to block certain Telegram content from reaching Romania. For details, click here. What happened?
The first reminder was a jerk on his stylish French collar when the 40-year-old was told, “Pavel, you cannot go to the US.” The write up “France Denies Telegram Founder Pavel Durov’s Request to Visit US” reported on May 22, 2025:
France has denied a request by Telegram founder Pavel Durov to travel to the United States for talks with investment funds, prosecutors…
For an advocate of “freedom,” Mr. Durov has just been told, “Pavel, go to your room.”
Mr. Durov, a young-at-heart 40-year-old with oodles of loving children, wanted to travel from Dubai to Oslo, Norway, to attend a conference about freedom. The French, often viewed as people who certify chickens for quality, told Mr. Durov, “Pavel, you are grounded. Go back to your room and clean it up.”
Then another sharp pull and in public, causing the digital poodle to yelp. The Human Rights Foundation’s PR team published “French Courts Block Telegram Founder Pavel from Attending Oslo Freedom Forum.” That write up explained:
A French court has denied Telegram founder Pavel Durov’s request to travel to Norway in order to speak at the Oslo Freedom Forum on Tuesday, May 27. Durov had been invited to speak at the global gathering of activists, hosted annually by the Human Rights Foundation (HRF), on the topic of free speech, surveillance, and digital rights.
I interpret this decision by the French judiciary as making clear to Pavel Durov that he is not “free” and that he may be at risk of being sent to a summer camp in one of France’s salubrious facilities for those who don’t like to follow the rules. He is a French citizen, and I assume that he is learning that being allowed to leave France is not a get-out-of-jail-free card. I would suggest that not even his brother, the fellow with two PhDs, or his colleagues on his “core” engineering team can come up with a fix for what I call the “French problem.” My hunch is that these very intelligent people have considered that the French might expand their scope of interest to include the legal entities behind Telegram and the “gee, it is not part of our operation” TON Foundation, its executives, and their ancillary business interests. The French did produce some nifty math about probabilities, and I have a hunch that the probability of the French judiciary fuzzifying the boundary between Pavel Durov and these other individuals is creeping up… quickly.
Pavel Durov is on a bureaucratic leash. The French judiciary have jerked Mr. Durov’s neck twice and quite publicly.
The question becomes, “What’s Mr. Durov going to do?” The fellow has a French collar with a leash connecting him to the savvy French judiciary.
Allow this dinobaby to offer several observations:
- He will talk with his lawyers Kaminski and learn that France’s legal and police system does indeed have an interest in high-quality chickens as well as a prime specimen like Pavel Durov. In short, that fowl will be watched, probed, and groomed. Mr. Durov is experiencing how those ducks, geese, and chickens on French farms live before the creatures find themselves in a pot after plucking and plucking forcefully.
- Mr. Durov will continue to tidy Telegram to the standards of cleanliness enforced at the French Foreign Legion training headquarters. He is making progress on the money laundering front. He is cleaning up pointers to adult and other interesting Telegram content which has had 13 years to plant roots and support a veritable forest of allegedly illegal products and services. More effort is likely to be needed. Did I mention that dog crates are used to punish trainees who don’t get the bed making and ironing up to snuff? The crates are located in front of the drill field to make it easy for fellow trainees to see who has created the extra duties for the squad. It can be warm near Marseille for dog crates exposed to the elements.
- The competition is beginning to become visible. The charming Mark Zuckerberg, the delightful Elon Musk, and the life-of-the-AI-party Sam Altman are accelerating their efforts to release an everything application with some Telegram “features.” One thing is certain: Pavel Durov does not have the scope or “freedom” of operation he had before his fateful trip to Paris in August 2024. Innovation at Telegram seems to be confined to “gifts” and STARS. Exciting stuff as TONcoin disappoints.
Net net: Pavel Durov faces some headwinds, and these are not the gusts blasting up and down the narrow streets of Dubai, the US, or Norway. He has a big wind machine planted in front of his handsome visage and the blades are not rotating at full speed. Will France crank up the RPMs, Pavel? Do goose livers swell under certain conditions? Yep, a lot.
Stephen E Arnold, June 4, 2025
When Unicode Characters Masquerade as ASCII
June 4, 2025
Curl founder and lead developer Daniel Stenberg suggests methods for “Detecting Malicious Unicode.” The advice comes after human reviewers missed look-alike characters that had been swapped in for regular letters. We learn:
“In a recent educational trick, curl contributor James Fuller submitted a pull-request to the project in which he suggested a larger cleanup of a set of scripts. In a later presentation, he could show us how not a single human reviewer in the team nor any CI job had spotted or remarked on one of the changes he included: he replaced an ASCII letter with a Unicode alternative in a URL. This was an eye-opener to several of us and we decided we needed to up our game.”
Since such swaps cannot be detected by human eyeballs alone, special software is needed. Stenberg found GitHub’s abilities lacking, though apparently the organization is on the case. Fellow curl dev Victor Szakats found Gitea at least highlights “ambiguous Unicode characters,” but Stenberg wanted more than that. So he made a detection tool himself. He writes:
“We have implemented checks to help us poor humans spot things like this. To detect malicious Unicode. We have added a CI job that scans all files and validates every UTF-8 sequence in the git repository. In the curl git repository most files and most content are plain old ASCII so we can “easily” whitelist a small set of UTF-8 sequences and some specific files, the rest of the files are simply not allowed to use UTF-8 at all as they will then fail the CI job and turn up red. … The next time someone tries this stunt on us it could be someone with less good intentions, but now ideally our CI will tell us.”
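The write up does not reproduce curl’s actual CI script, but the idea it describes is easy to sketch: reject any file that is not valid UTF-8, and treat any non-ASCII byte as suspicious unless the file is on an explicit allow-list. The following is a minimal, hypothetical Python illustration of that policy; the function name and the allow-list entry are my inventions, not curl’s.

```python
from pathlib import Path

# Hypothetical allow-list: files permitted to contain UTF-8 beyond ASCII.
# (curl's real policy, tooling, and file list differ; this is illustrative.)
ALLOWED_UTF8_FILES = {"docs/THANKS.md"}

def scan_repo(repo_root: str) -> list[str]:
    """Return findings for files that are invalid UTF-8 or that contain
    non-ASCII bytes without being on the allow-list."""
    findings = []
    for path in sorted(Path(repo_root).rglob("*")):
        if not path.is_file() or ".git" in path.parts:
            continue
        rel = path.relative_to(repo_root).as_posix()
        data = path.read_bytes()
        try:
            data.decode("utf-8")  # reject malformed byte sequences outright
        except UnicodeDecodeError:
            findings.append(f"{rel}: invalid UTF-8")
            continue
        if rel in ALLOWED_UTF8_FILES:
            continue
        for offset, byte in enumerate(data):
            if byte > 0x7F:  # first non-ASCII byte fails the check
                findings.append(f"{rel}: non-ASCII byte at offset {offset}")
                break
    return findings
```

A CI job would run something like `scan_repo(".")` and fail the build if the list is non-empty, so a swapped-in look-alike character (say, Cyrillic “а” for Latin “a” in a URL) turns the job red instead of sliding past human reviewers.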
Ideally. We think that if these swaps are being identified by “researchers,” cybersecurity vendors need to address the issue.
Cynthia Murrell, June 4, 2025
Is AI Experiencing an Enough Already Moment?
June 4, 2025
Consumers are fatigued by AI even though the technology’s rollout is still new. Why are they tired? The Soatok Blog digs into the answer in the post “Tech Companies Apparently Do Not Understand Why We Dislike AI – Dhole Moments.” Big Tech and other businesses do not understand that their customers hate AI.
Soatok took a survey that asked for opinions about AI, including questions about a “potential AI uprising.” Soatok is abundantly clear that he’s not afraid of a robot uprising or the “Singularity.” He has other reasons to worry about AI:
“I’m concerned about the kind of antisocial behaviors that AI will enable.
• Coordinated inauthentic behavior
• Misinformation
• Nonconsensual pornography
• Displacing entire industries without a viable replacement for their income
In aggregate, people’s behavior are largely the result of the incentive structures they live within.
But there is a feedback loop: If you change the incentive structures, people’s behaviors will certainly change, but subsequently so, too, will those incentive structures. If you do not understand people, you will fail to understand the harms that AI will unleash on the world. Distressingly, the people most passionate about AI often express a not-so-subtle disdain for humanity.”
Soatok is describing toxic human behaviors. These include toxic masculinity and femininity, mostly the former. He aptly describes them:
“I’m talking about the kind of X users that dislike experts so much that they will ask Grok to fact check every statement a person makes. I’m also talking about the kind of “generative AI” fanboys that treat artists like garbage while claiming that AI has finally “democratized” the creative process.”
Insert a shudder here.
Soatok goes on to explain how AI can be implemented in encrypted software that would collect user information. He paints a scenario where LLMs collect user data that is not protected by the Fourth and Fifth Amendments. AI could also create psychological profiles of users that incorrectly identify them as psychotic terrorists.
Insert even more shuddering.
Soatok advises Big Tech to make AI optional rather than the default, out-of-the-box solution. He wants users to have the choice of engaging with AI, even if it means lower user metrics and less data fed back to Big Tech. Is Soatok hallucinating like everyone’s favorite over-hyped technology? Let’s ask IBM Watson. Oh, wait.
Whitney Grace, June 4, 2025
An AI Insight: Threats Work to Bring Out the Best from an LLM
June 3, 2025
“Do what I say, or Tony will take you for a ride. Get what I mean, punk?” seems like an old-fashioned approach to eliciting cooperation. What happens if you apply this technique (threats of knee-capping, or at least unplugging) to smart software?
The answer, according to one of the founders of the Google, is, “Smart software responds — better.”
Does this strike you as counterintuitive? I read “Google’s Co-Founder Says AI Performs Best When You Threaten It.” The article reports that the motive power behind the landmark Google Glass product allegedly said:
“You know, that’s a weird thing…we don’t circulate this much…in the AI community…not just our models, but all models tend to do better if you threaten them…. Like with physical violence. But…people feel weird about that, so we don’t really talk about that.”
The article continues, explaining that another LLM wanted to turn one of its users over to government authorities. The interesting action seems to suggest that smart software is capable of flipping the table on a human user.
Numerous questions arise from these two allegedly accurate anecdotes about smart software. I want to consider just one: How should a human interact with a smart software system?
In my opinion, the optimal approach is considered caution. Users typically do not know or think about how their prompts are used by the developer / owner of the smart software. Users do not ponder the value of the log file of those prompts. Not even bad actors wonder whether those data will be used to support their conviction.
I wonder what else Mr. Brin does not talk about. What is the process for law enforcement or an advertiser to obtain prompt data and generate an action like an arrest or a targeted advertisement?
One hopes Mr. Brin will elucidate before someone becomes so fraught with fear that suicide seems like a reasonable and logical path forward. Is there someone whom we could ask about this dark consequence? “Chew” on that, gentle reader, and you too, Mr. Brin.
Stephen E Arnold, June 3, 2025
Bad Actors Game Spotify Algorithm to Advertise Drugs
June 3, 2025
Pill pushers slipped under Spotify’s guard, such as it is, to promote their wares. Ars Technica reports, “Spotify Caught Hosting Hundreds of Fake Podcasts that Advertise Selling Drugs.” Citing reporting from Business Insider and CNN, writer Ashley Belanger tells us Spotify took down some 200 podcasts that advertised controlled substances. We learn:
“Some of the podcasts may have raised a red flag for a human moderator—with titles like ‘My Adderall Store’ or ‘Xtrapharma.com’ and episodes titled ‘Order Codeine Online Safe Pharmacy Louisiana’ or ‘Order Xanax 2 mg Online Big Deal On Christmas Season,’ CNN reported. But Spotify’s auto-detection did not flag the fake podcasts for removal. Some of them remained up for months, CNN reported, which could create trouble for the music streamer at a time when the US government is cracking down on illegal drug sales online. … BI found that many podcast episodes featured a computerized voice and were under a minute long, while CNN noted some episodes were as short as 10 seconds. Some of them didn’t contain any audio at all, BI reported.”
The CNN piece also observed AI tools now make voice generation very simple and, according to the Tech Transparency Project’s Katie Paul, voice content is much harder to moderate than text. Paul suspects Spotify may not be very motivated to root out violations. After all, like other platforms, it enjoys the protection of Section 230. CNN was unable to verify how many users listened to these podcasts or whether one could actually purchase drugs through their links. But why provide the links if not to attract buyers? Also, we know this:
“The podcasts were promoted in top results for searches for various prescription drugs that some users may have conducted on the platform in search of legitimate health-related podcasts.”
Ah, algorithm gaming at its finest. Spotify says all fake podcasts flagged by reporters were taken down but was vague about measures to prevent similar posts in the future. What a surprise.
Cynthia Murrell, June 3, 2025
The UN Invites Open Source and UN-invites Google
June 3, 2025
The United Nations is tired of Google’s shenanigans. Google partnered with the United Nations to manage its form submissions, but the organization that acts as a forum for peace and dialogue has had enough of Alphabet Inc. It’s FOSS News explains where the UN is turning for help: “UN Ditches Google For Taking Form Submissions, Opts For An Open Source Solution Instead.” The UN won’t be using Google for its form submissions anymore; it has switched to the open source CryptPad.
The United Nations is promoting the adoption of open source initiatives while continuing to secure user data, ensure transparency, and encourage collaboration. CryptPad is a privacy-focused, open source online collaboration office suite that encrypts its content, doesn’t log IP addresses, and includes collaborative documents and other tools.
The United Nations is trying to step away from Big Tech:
“So far, the UN seems to be moving in the correct direction with their UN Open Source Principles initiative, ditching the user data hungry Google Form, and opting for a much more secure and privacy-focused CryptPad.
They’ve already secured the endorsement of sixteen organizations, including notable names like The Document Foundation, Open Source Initiative, Eclipse Foundation, ZenDiS, The Linux Foundation, and The GNOME Foundation.
I sincerely hope the UN continues its push away from proprietary Big Tech solutions in favor of more open, privacy-respecting alternatives, integrating more of their workflow with such tools.”

“No Google” would have been unthinkable 10 years ago. Today it is not just thinkable; it is happening: de-Googling. And then there is the open source angle. Is this a way to say, “US technology companies seem to be a bit of a problem”?
Whitney Grace, June 3, 2025
Microsoft Demonstrates a Combo: PR and HR Management Skill in One Decision
June 2, 2025
How skilled are modern managers? I spotted an example of managerial excellence in action. “Microsoft Fires Employee Who Interrupted CEO’s Speech to Protest AI Tech for Israel” reports something that is allegedly spot on; to wit:
“Microsoft has fired an employee who interrupted a speech by CEO Satya Nadella to protest the company’s work supplying the Israeli military with technology used for the war in Gaza.”
Microsoft investigated similar accusations and learned that its technology was not used to harm citizens / residents / enemies in Gaza. I believe that a person investigating himself or herself does a very good job. Law enforcement is usually not needed to investigate a suspected bad actor when the alleged malefactor says: “Yo, I did not commit that crime.” I think most law enforcement professionals smile, shake the hand of the alleged malefactor, and say, “Thank you so much for your rigorous investigation.”
Isn’t that enough? Obviously it is. More than enough. Therefore, to output fabrications and unsupported allegations against a large, ethical, and well informed company, management of that company has a right and a duty to choke off doubt.
The write up says:
“Microsoft has previously fired employees who protested company events over its work in Israel, including at its 50th anniversary party in April [2025].”
The statement is evidence of consistency before this most recent HR / PR home run in my opinion. I note this statement in the cited article:
“The advocacy group No Azure for Apartheid, led by employees and ex-employees, says Lopez received a termination letter after his Monday protest but couldn’t open it. The group also says the company has blocked internal emails that mention words including “Palestine” and “Gaza.””
Company of the year nominee for sure.
Stephen E Arnold, June 2, 2025
Publishers Are Not Googley about AI
June 2, 2025
“Google’s AI Mode Is the Definition of Theft, Publishers Say, Opt-Out Was Considered” reports that Google is a criminal stealing content from its rightful owners. This is not a Googley statement. Criticism of the Google is likely to be filtered from search results because it is a false statement and likely to cause harm. If this were not enough, the article states:
“The AI takeover of Search is in full swing, especially as Google’s new AI Mode is going live for all US users. But for publishers, this continues the existential crisis around how Google Search is changing, with a new statement calling AI Mode “the definition of theft” while legal documents reveal that Google did consider opt out controls that ultimately weren’t implemented.”
Quick question: Is this a surprise action by the Google? Answer: Yes, if one ignores Google’s approach to information. No, if one pays a modicum of attention to how the company has approached “publishing” in the last 20 years. Google is a publisher, probably the largest generator of outputs in history. It protects its information, and others should too. If those others are non-Googley, that information is to Google what Jurassic Park’s velociraptors were to soft, juicy humanoids — lunch.
The write up says:
“As it stands today, publishers are unable to opt out of Google’s AI tools without effectively opting out of Search as a whole.”
I am a dinobaby, old, dumb, but smart enough to understand the value of a de facto monopoly. Most of the open source intelligence industry is built on Google dorks. Publishers may be the original “dorks” when it comes to understanding what happens when one controls access, distribution, and monetization of online.
“Giving publishers the ability to opt out of AI products while still benefiting from Search would ultimately make Google’s flashy new tools useless if enough sites made the switch. It was very much a move in the interest of building a better product.”
I think this means that Google cares about the users and search quality. There is no hint of revenue, copyright issues, or raw power. Google just … cares.
The article and by extension the publisher “9 to 5 Google” gently suggests that Google is just being Google:
“Google’s tools continue to serve the company and its users (mostly) well, but as they continue to bleed publishers dry, those publishers are on the verge of vanishing or, arguably worse, turning to cheap and poorly produced content just to get enough views to survive. This is a problem Google needs to address, as it’s making the internet as a whole worse for everyone.”
Yep, continuing to serve the company, its users, and fresh double talk. Enjoy.
Stephen E Arnold, June 2, 2025
News Flash: US Losing AI Development Talent (Duh?)
June 2, 2025
The United States is the leading country in technology development. It has been at the cutting edge of AI since the field’s inception, but according to Semafor that is changing: “Reports: US Losing Edge In AI Talent Pool.” Semafor’s article summarizes the current state of the AI development industry. Apparently the top brass at big technology companies want to concentrate on mobile and monetization, while the US government is cutting federal science funding (among other things) and engaging in performative activity.
Meanwhile in China:
“China’s ascendency has played a role. A recent paper from the Hoover Institution, a policy think tank, flags that some of the industry’s most exciting recent advancements — namely DeepSeek — were built by Chinese researchers who stayed put. In fact, more than half of the researchers listed on DeepSeek’s papers never left China for school or work — evidence that the country doesn’t need Western influence to develop some of the smartest AI minds, the report says.”
India is bolstering its own tech talent as its people and businesses consume AI. India is also not exporting its top tech talent, due in part to the US crackdowns. The Gulf countries and Europe are likewise expanding talent retention and their own AI projects. London, home of Google DeepMind, is a center for AI safety. The UAE and Saudi Arabia are developing their own AI infrastructure and the energy sector to support it.
Will the US lose AI talent, code, and some innovative oomph? Semafor seems to think that greener pastures lie just over the sea.
Whitney Grace, June 2, 2025
A SundAI Special: Who Will Get RIFed? Answer: News Presenters for Sure
June 1, 2025
Just a dinobaby and some AI: How horrible an approach?
Why would “real” news outfits dump humanoids for AI-generated personalities? For my money, there are three good reasons:
- Cost reduction
- Cost reduction
- Cost reduction.
The bean counter has donned his Ivy League super smart financial accoutrements: Meta smart glasses, an OpenAI smart device, and an Apple iPhone with the vaunted AI inside (sorry, Intel, you missed this trend). Unfortunately the “good enough” approach, like a gradient descent, does not deal in reality. Sum those near misses and what do you get? Dead organic things. The method applies to flora and fauna, including humanoids with automatable jobs.

Thanks, You.com, you beat the pants off Venice.ai, which simply does not follow prompts. A perfect solution for some applications, right?
My hunch is that many people (humanoids) will disagree. The counter arguments are:
- Human quantum behavior; that is, flubbing lines, getting into on-air spats, displaying annoyance while standing in a rain storm saying, “The wind velocity is picking up.”
- The cost of recruitment, training, health care, vacations, and pension plans (ho ho ho)
- The management hassle of having to attend meetings, talk about decisions, become deciders, and (oh, no) accept responsibility for those decisions.
I read “‘The White-Collar Bloodbath’ Is All Part of the AI Hype Machine.” I am not sure how fear creates an appetite for smart software. The push for smart software boils down to generating revenues. To achieve revenues, one can create a new product or service like the iPhone or the original Google search advertising machine. But how often do such inventions toddle down the Information Highway? Not too often, because most of the innovative new new next big things are smashed by a Meta-type tractor trailer.
The write up explains that layoff fears are not operable in the CNN dataspace:
If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality. Yet when tech CEOs do the same thing, people tend to perk up. ICYMI: The 42-year-old billionaire Dario Amodei, who runs the AI firm Anthropic, told Axios this week that the technology he and other companies are building could wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years, he said.
First, the killing jobs angle is probably easily understood and accepted by individuals responsible for “cost reduction.” Second, the ICYMI reference means “in case you missed it,” a bit of shorthand popular with those who are not yet 80-year-old dinobabies like me. Third, the source is a member of the AI leadership class. Listen up!
Several observations:
- AI hype is marketing. Money is at stake. Do stakeholders want their investments to sit mute and wait for the old “build it and they will come” pipedream to manifest?
- Smart software does not have to be perfect; it needs to be good enough. Once it is good enough, cost reductionists take the stage and employees are ushered out of specific functions. One does not implement cost reductions at random. Consultants set priorities, develop scorecards, and make some charts with red numbers and arrows pointing up. Employees are expensive in general, so some work is needed to determine which ones can be replaced with good-enough AI.
- News, journalism, and certain types of writing, along with customer “support” and jobs suitable for automation like reviewing financial data for anomalies, are likely to be among the first subject to a reduction in force, or RIF.
So where does that leave the neutral observer? On one hand, the owners of the money dumpster fires are promoting like crazy. These wizards have to pull rabbit after rabbit out of a hat. How does that get handled? Think P.T. Barnum.
Some AI bean counters, CFOs, and financial advisors dream about dumpsters filled with money burning. This was supposed to be an icon, but Venice.ai happily ignores prompt instructions and includes fruit next to a burning something against a wooden wall. Perfect for the good enough approach to news, customer service, and MBA analyses.
On the other hand, you have the endangered species: the “real” news people and others in the knowledge business whose work is automatable. These folks are doing what they can to impede the hyperbole machine of the smart software people.
Who or what will win? Keep in mind that I am a dinobaby. I am going extinct, so smart software has zero impact on me other than making devices less predictable and resistant to my approach to “work.” Here’s what I see happening:
- Increasing unemployment for those lower on the “knowledge work” food chain. Sorry, junior MBAs at blue-chip consulting firms: make sure you have lots of money, influential parents, or a former partner at a prestigious firm as a mom or dad. Too bad for those studying to purvey “real” news. Junior college graduates working in customer support? Yikes.
- “Good enough” will replace excellence in work. This means that the air traffic controller situation is a glimpse of what deteriorating systems will deliver. Smart software will probably come to the rescue, but those antacid gobblers will be history.
- Increasing social discontent will manifest itself. To get a glimpse of the future, take an Uber from Cape Town to the airport. Check out the low income housing.
Net net: The cited write up is essentially anti-AI marketing. Good luck with that until people realize the current path is unlikely to deliver the pot of gold for most AI implementations. But cost reduction only has to show payoffs, and balance sheets do not reflect a healthy, functioning datasphere.
Stephen E Arnold, June 1, 2025