Will Meta Force the UK to Do a Kremlin Telegram Play?
March 18, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
The US big tech outfits may be annoying some countries. Examples include Switzerland and its interactions with Palantir Technologies, Google and its jousting with EU regulators, and Amazon AWS’ chats with customers about “dependence.” One set of interactions caught my attention because it has the potential to trigger a quite dramatic governmental reaction. Remember: This is a dinobaby’s interpretation and extrapolation of a “what if” scenario.
A wealthy Silicon Valley professional drives his RV into a British home owner's garden. The home owner is distressed. The RV driver toots the horn and ignores the outraged home owner. Thanks, Venice.ai. Good enough.
The trigger for my thinking is a write up titled “Exclusive: Meta Vowed to Stop Illegal Financial Ads in Britain. It Failed 1,000 Times in a Week.” Please, read the original Reuters’ story. I will boil it down and then focus on what I call the Kremlin Telegram Play.
The cited story from the trust outfit reports that Meta said one thing, then did another… just 1,000 times in a week. I will quote one sentence from the exclusive report:
… 56% of those ads were from an unspecified number of unauthorized advertisers the FCA had already flagged to Meta, according to the results of the review seen by Reuters and reported here for the first time.
Now let’s think about this alleged action by Meta. The British government made a reasonable request. Meta agreed. Then Meta did what it has been doing for many years: the company followed its “we do what we want” approach, consistent with its core philosophy of moving fast and breaking things.
No surprise here. Sarah Wynn-Williams’ "Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism" documents a number of examples of how the Facebook, Instagram, and WhatsApp owner deals with political, financial, and ethical decisions. The Meta outfit does what it has decided to do just as it allegedly is doing with illegal financial ads.
Some might call Meta’s approach good business. Others, including some other countries, view the behavior as inappropriate, unethical, or illegal. That divide is what makes some American high technology companies deeply problematic. Users love the services; elected officials are troubled. Meta’s failure to block illegal financial advertisements in the UK may raise an interesting question; that is, “Will the UK block Meta’s services just as the Kremlin is blocking access to Telegram’s services?”
If the UK takes this decision, the impact of US corporate behavior could trigger a set of similar actions in other countries. The damage would not be confined to companies exhibiting Telegram-type behavior. The US companies would lose some percentage of their customer base. But the knock-on effects are interesting to consider; for example:
- Data service providers (ISPs and others) would find themselves having to make a decision. Do these firms follow the law of the country in which the data center is located or the law of a country like Russia or Britain? Do these firms roll over or do they defy regulators?
- Suppliers. Companies and consultants working for a blocked company could be subject to fines, loss of government contracts, or the arrest of senior executives. Will these supply chain entities comply, or will they adopt the US approach, say, “Sure. Whatever,” and continue to work for the US big technology firms? That may fly in some countries, but in others, that might be a problem.
- Employees of US companies who live and work in a country which has taken action to block their employer’s services could face arrest, imprisonment, and in some countries, extreme punishment. (I won’t define “extreme,” but you can look it up on one of those big tech smart software services. Note: You may be blocked from viewing the content. Why not give it a whirl?)
- Users would find themselves looking for ways to evade blocks by data centers and other firms in order to access the US services. In Iran, there are rumors on social media that the government is looking for individuals with Elon Musk Starlink systems and people who use virtual private networks. Breaking the law raises some interesting questions about user pushback or kinetic response.
- Lawyers and consultants. Still billing no matter what.
Now let’s look at the question in the title of this essay, “Will Meta force the UK to do a Kremlin Telegram play?”
Accommodation. If the UK just accommodates Meta, the firm may continue to run the plays in its game plan. This signals other companies to ignore British laws, rules, and regulations. If a fine is levied, pay the fine and keep on running what’s in the play book.
Negotiate. Yeah, that works. “Great to meet you and your team. My team is here and ready to work out an understanding.” Look at the agreement getting settled in its little coffin.
Do nothing but talk. If the UK does nothing, big US high technology firms are likely to expand and become more aggressive in their methods for generating revenue from users and advertisers in that country. The “do nothing” approach has been, from my point of view, the path the EU has followed. How much money have US big technology companies paid in fines? Answer: Not much.
Block Meta’s services. If the UK requires UK data centers and related firms to block access to Meta’s services, the UK has adopted the Kremlin approach to managing information. I am not sure how that will fly in England or whether it would fly in Northern Ireland, Scotland, or Wales. It would, however, be interesting to watch the different political entities respond to this Putinesque approach. The Ivory Tower thinkers at Oxbridge would produce some fascinating essays and books about the decision.
Net net: The Reuters’ story, if accurate, is important. The consequences for the UK may be significant. Meta will just adapt because of the Silicon Valley, big technology, tech bro thing.
Stephen E Arnold, March 18, 2026
AI: Is the Next Big Thing for Manufacturing? But Whose?
March 18, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
For the second time in three days, I have heard about the “physical problem of AI.” Each write up defines “physical” implicitly. Both, however, view AI in the context of a mental view like this: Tomorrow will be like yesterday. What does this mean? The implication is that AI software is ready to go big time but for:
- Power. Yes, it takes time to build power generation facilities. Even getting a gas turbine and hooking it up to an existing data center can take months. How long does it take to build a baby nuclear plant? I don’t know, but Bill Gates’ venture may answer that question. Why not just buy the reactors that power nuclear vessels like submarines? Sorry, that’s classified.
- Chips. Yes, that’s a bit of a problem with at least two dimensions. The first is that only a few companies manufacture the systems that pump out semiconductors. Supply chains exist for these vendors, and some of the gizmos required by ASML-type outfits cannot be purchased from Amazon. The second dimension is the pace of change of the chips themselves. Each time an Nvidia-type company rolls out a new AI chip, the data centers purpose built for previous generations of AI chips need tweaking. What happens when many people want to tweak at the same time? Cost spikes and bottlenecks, perhaps?
- Data centers. Telegram, piloted by the clever and likeable Brothers Durov, concluded that using other people’s infrastructure or OPI was a better way to do AI. No capital investment, no verticalization, and no physical facilities on one’s books — that is the way to go. The direction is the opposite of some other big tech outfits. Are the Brothers Durov correct? Some bean counters think the duo may be.
I thought about these factors when I read the marketing collateral published as “real” information in MIT Technology Review. (No, I won’t mention Epstein Epstein Epstein again in the context of the estimable institution.) The article is titled “Why Physical AI Is Becoming Manufacturing’s Next Advantage.” Okay, tomorrow will be like yesterday because new technology revolutionizes something fundamental like manufacturing.
The idea is unrelated to the three points mentioned above about the problem physical infrastructure poses for big tech AI. Hey, folks, where are you going to get helium? What will be the cost of components when disruption hammers companies in those supply chains for the semiconductor and infrastructure companies? Industrial machines have supply chains too, and these may not operate as they did yesterday.

Yesterday may not prepare some for today. Today may not predict tomorrow. Foundational assumptions require identification, analysis, and fact-based predictions. Thanks, Venice.ai. Good enough for handling medical treatments and the manufacturing of experimental modular nuclear reactors for the data centers to be constructed in this open field in Poland.
The write up asserts:
The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world. This is where physical AI—intelligence that can sense, reason, and act in the real world—marks a decisive shift. And it is why Microsoft and NVIDIA are working together to help manufacturers move from experimentation to production at industrial scale.
I like the notion of “industrial scale.” It is a harbinger of big money for those who capture the market. What’s going to create this conquering of the “industrial frontier”? Answer: Microsoft and Nvidia. Microsoft, the MIT Technology Review helpfully points out, is the author of the article. The argument flows in a PR-ish rush that “physical AI” will arrive at scale, enabled by Microsoft and Nvidia. Instead of talking about this revolution, these two firms will move from intelligence to action.
As I have pointed out, humans want to believe that today’s AI is indeed good enough. The idea has fueled adoption of smart software by a number of firms. Even governments have found value in today’s smart software. As some forward leaning people assert, tomorrow will be like yesterday. I would suggest that:
- The coming supply chain disruptions will make such assumptions subject to endless revision and revisionism
- The friction of the humans involved in these predicted inevitable shifts may add both time and emotional heat to the inevitable change
- The big outfits confidently predicting the future from their vantage point may be operating on false assumptions.
I noted this passage in the advertisement for the Microsoft and Nvidia view of the industrial future:
As physical AI systems scale, trust becomes the limiting factor. Manufacturers must ensure that AI systems are secure, observable, and operating within policy, especially when they influence safety-critical or mission-critical processes. Governance cannot be an afterthought; it must be engineered into the platform itself. This is why frontier manufacturers treat trust as a first-class requirement, pairing innovation with visibility, compliance, and accountability. Only then can physical AI move from promising demonstrations to enterprise-wide deployment.
Infrastructure, supply chain issues, and geopolitical instability — not a problem. The problem is trust; it is the limiting factor. Does Microsoft engender trust? I am wary of quasi-monopolies engendering trust in dinobabies like me. I don’t “trust” anyone or anything unless I have verified it in accordance with my old-fashioned principles. QR codes on my mobile for a menu? Nope. Cough up information to use a free service? Nope. Believe that Microsoft can update its software without crashing an essential function like printing? Nope. Believe that Nvidia’s endless stream of driver updates improve my experience? Nope. Your mileage may vary, or you may rationalize by saying to yourself, “That’s the way it is.”
And security? I am not going to recycle my comments, findings, and perceptions of Microsoft-type companies’ security. Security increases these firms’ costs. What these firms require is margins, profits, and bonus pools spilling cash into Carpetland.
Will manufacturing (whatever that means to Microsoft’s PR team) embrace AI? The answer is, “Over time.” Will manufacturing in China use AI? Answer: Many Chinese firms are deeply integrating AI into certain processes. Isn’t that why Apple loves to do business with certain Chinese manufacturers or is it because those Apple executives love driving in Beijing, Shanghai, or Suzhou for meetings?
Here are my customary observations about this PR piece. It is designed to make General Motors-type outfits immediately plan to acquire machines to replace those silly manual gizmos in the model shop and the endangered tool shops in the firm’s plants, and to spark the purchase of new smart machines for the planned Sunday Mines Complex. Machines are less expensive than the legal fees for worker health claims, a troubling issue for bean counters. AI to the rescue.
- Timing. Microsoft’s claims about AI and manufacturing are the PR equivalent of a farmer tilling before planting. The fallacy is that tomorrow will be like yesterday.
- Cost. An issue now and tomorrow. Geopolitical instability does not decrease economic uncertainty, encourage bankers to think about distant time horizons, or persuade risk-averse investors to dive in as they did yesterday.
- Reliability. The outputs of probabilistic word prediction-centric systems will produce errors some percentage of the time. Is good enough going to be good enough? That’s an interesting question because the answer depends on context. How about those smart systems prescribing chemicals for your daughter’s chemo treatment? Are you onboard?
- Stable supply chains. Yesterday, maybe? Today, perhaps if you pay up front and buy what’s available? Tomorrow? Why not predict the winner of the 2026 Kentucky Derby? If you leave soon, you can visit the Hormuz area and maybe return before the race.
Net net: There are now quite significant constraints on the current smart software sector. Some are pragmatic, like power, water, and gizmos. Others are downstream, like industrial machines with AI inside. The flaw is that tomorrow is starting to look less and less like yesterday.
What country will do the AI in manufacturing thing? Polymarket?
Stephen E Arnold, March 18, 2026
Upskilling: Chasing the Impossible for Most People
March 17, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
The idea that one can take a group of 100 white collar workers and upskill them to “do” AI strikes me as a little crazy. For a short time, I taught a class at Duquesne University, did a one year tour in a program set up for youth offenders, and for some reason I still don’t understand served as a director of a special program at Northern Illinois University for special admission students. I learned that upskilling at each of these levels was difficult. The Duquesne experience made clear to me that bright people who had chosen a profession in the Catholic church were not “into” learning some new methods. My work with young people made clear that upskilling a person with traditional instructional methods was a waste of time. Therefore, when I hear about upskilling white collar professionals to learn about AI and then use AI to perform some job functions, I think a dose of reality may be needed.
A good example of this fanciful thinking appears in “The AI Cost-Cutting Fallacy: Why Doing More with Less is Breaking Engineering Teams.” The premise is now a trope. AI will make workers more productive. The Harvard Business Review explains that AI usage causes some workers to experience stress. The estimable HBR management wizards call this condition “brain fry.”

A 45-year-old professional utility rate statistical analyst waits for a local train. He has been terminated because he insists that smart software cannot perform the requisite mathematical analyses required to determine the probable power demand of a new data center coming online in three months. His superior wants to use the optimistic, hallucinated outputs from the firm’s new AI system. He knows he was RIFed because AI does not have the know-how our hero has gained over his 20-year career. Thanks, Venice.ai. Good enough.
“The AI Cost-Cutting” article states:
In late 2024 and throughout 2025, a dangerous narrative took hold in boardrooms across the tech industry. The logic seemed seductive in its simplicity: if AI tools like GitHub Copilot, Cursor, or Windsurf can help a developer write code 20% to 50% faster, then surely a company can reduce its engineering headcount by a similar margin while maintaining the same output. This “spreadsheet logic” has led to a wave of premature optimizations, where leadership teams view AI licenses as a direct substitute for human talent. The expectation is straightforward: buy the tools, cut the bottom 5–20% of the workforce, and watch margins improve. However, this approach fundamentally misunderstands the nature of software engineering. It confuses typing speed with problem-solving.
I agree.
The article then grinds through MBA jargon to make clear that efficiency has a downside: Degradation, not improvement. The conclusion of the write up, however, veers into the upskilling craziness. The article states:
Your domain experts are more valuable than ever. AI can write syntax, but only your people understand business logic. Train them to master Horizon 1 tools to prepare for Horizon 2.
Horizon 1 and Horizon 2 are MBA speak for producing needed software faster and then pushing to get smart software to do the “work.” How does one move “domain experts” along this yellow brick road?
Easy. Upskill.
I want to point out:
- People who don’t “upskill” are essentially watching the train depart from the station. Most will not be on the train. A local to the local unemployment office is definitely a possibility.
- People who won’t “upskill” are waiting for the pink slip to arrive via email or a quick Zoom meeting. Resistance means termination.
- Training programs that don’t output appropriately upskilled individuals will be chasing new contracts or waiting for a local to the local unemployment office.
- The leadership who pitch, manage, and have to report to an upskilled board of directors will be in a precarious position. Failure is bad for one’s business career.
The larger question is, “Why do people believe that upskilling adults who may have their sense of self anchored in a particular bucket of knowledge, systems, and methods is going to work?”
Upskilling won’t work, just as modern education is not cranking out large numbers of high-performing graduates. Isn’t upskilling just a stopping point on a road that requires off loading and on loading the people needed to make the business work in the smart software centric organization?
Stephen E Arnold, March 17, 2026
AI Is Now about Infrastructure
March 16, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I listened to an interview conducted by Dwarkesh Patel with his “roommate” Dylan Patel. My question is, “Can roomies have an objective, unbiased, and fact-based interview?”

Thanks, Midjourney. Good enough with the roomies’ thing.
The premise of “Dylan Patel — Deep Dive on the 3 Big Bottlenecks to Scaling AI Compute. Plus, Why an H100 Is Worth More Today Than 3 Years Ago” is that AI is no longer a software problem. It is a power and infrastructure problem. I agree; however, I am not sure that I can accept some of the unsupported assertions presented in the interview. I suppose roomies don’t have to ask one another tough questions. At the end of the interview, the two have to return to the same home, share the facilities, and avoid the type of friction that created what I think of as the Rob Reiner problem.
Please, listen to the one hour interview or read the transcript of the program. I want to offer one quote that struck me as illustrative of the approach:
If you look at what Anthropic has done over the last few months, with $4 billion or $6 billion in revenue added, we can just draw a straight line and say they’ll add another $6 billion of revenue a month. People would argue that’s bearish, and that they should go faster. What that implies is they’re going to add $60 billion of revenue across the next ten months. At the current gross margins Anthropic had, as last reported by media, that would imply they have roughly $40 billion of compute spend for that inference, for that $60 billion of revenue.
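The quoted extrapolation is easy to sanity-check. A minimal sketch of the arithmetic, using only the dollar figures asserted in the quote (the speaker's assumptions, not audited financials):

```python
# Sanity check of the interview's straight-line revenue extrapolation.
# All figures are in $ billions and come from the quoted passage;
# they are the speaker's assumptions, not reported financial results.
monthly_revenue_added = 6.0   # "$6 billion of revenue a month"
months = 10                   # "across the next ten months"
projected_revenue = monthly_revenue_added * months

compute_spend = 40.0          # "roughly $40 billion of compute spend"
implied_gross_margin = (projected_revenue - compute_spend) / projected_revenue

print(projected_revenue)               # 60.0 ("$60 billion of revenue")
print(round(implied_gross_margin, 2))  # 0.33, the margin the quote implies
```

The numbers are internally consistent: $40 billion of compute against $60 billion of revenue implies a gross margin of roughly one third. Whether a straight line is the right model is, of course, the question the interview never asks.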
Dylan Patel, one roomie, is the “leadership” of the SemiAnalysis consulting firm. His approach to smart software is what I call a kinder, gentler Ed Zitron approach. Both want to be noticed. Both present some interesting generalizations. Both thrive when the social media world notices their observations.
What are the main points of the Dylan Patel showcase?
I jotted down four:
First, Anthropic is going to be a big AI winner. My response is, “Maybe?” But $60 billion in revenue when Anthropic reported $6 billion to the US government? Yeah, maybe?
Second, the US economy is going to grow faster. My response is, “But the war may have a somewhat negative impact?” Geo-politics does not come into play in this interview when the US economy is discussed.
Third, the statements about China are speculative. The phrase “shot in the dark,” like the reference to a “roomie,” undermines the credibility of the China-related statements.
Fourth, the assertion that ASML can make machines that produce the chips needed for AI is interesting. ASML does not make a toaster. Its “machines” are complex systems. These depend on supply chains that produce components that are not on Amazon’s warehouse shelves. With the Iranian war, assertions about ASML delivering machines and seamless production of chips are disconnected from reality.
Net net: Big talking and wild statements are part of what makes the name of Dylan’s SemiAnalysis appropriate. “Semi” thinking and partial analysis. Roommate interviews provide insight into curating enthusiasm, not probing assertions. Is that why these fellows are roomies?
Stephen E Arnold, March 16, 2026
What! Brain Fry? Oh, My, AI!
March 16, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I love takes like “AI Causes Brain Fry at Work, Researchers Warn.”
Researchers writing in the Harvard Business Review described a survey of around 1,500 US-based workers across a variety of industries, revealing a harmful trend for people using AI to increase productivity and improve performance.
Yep, Harvard, the go-to source for AI information.

Thanks, Venice.ai. Good enough.
Here’s a passage from the take on the take I found interesting:
The researchers observed a pattern of “cognitive exhaustion from intensive oversight of AI agents” that resulted in a severe difficulty in focusing on tasks. They used the term ‘brain fry’ to refer to participants who experienced “mental fatigue that results from excessive use or oversight of AI tools beyond one’s cognitive capacity.”
Okay. Too much AI causes brain fry.
The take on the take said:
The percentage of people suffering from brain fry was highest among people working in marketing, with more than a quarter of participants reporting the issue. Other professions that saw high levels of brain fry included human resources (HR), finance and software development.
Several observations:
- The stress reported by marketing may be an indication that thinking is a non-standard activity. When thinking is required, the brain overheats and smoke is emitted. (See illustration.)
- The stress on financial people may come from smart software outputting results making it clear to the Excel jockey that he or she is no longer needed to ride the crazy horse pulling the number generating wagon.
- HR? Are there human HR professionals anymore?
- Software development professionals’ concern comes from the realization that their once bulletproof career has more bullet holes in it than YeYe’s frying pans.
Net net: Brain fry from AI. AI y’AI Ai.
Stephen E Arnold, March 16, 2026
Yep, Technology Publications Face the Grim Stealer
March 13, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I was not familiar with an online publication called Growtika. I am curious about the pronunciation of the neologism. Well, not that curious. The article “The Internet’s Most-Read Tech Publications Have Lost 58% of Their Google Traffic Since 2024” caught my attention. As I have said on previous occasions, I believe everything I read on the Internet. I have a particular fondness for click data. Once I did not believe everything I read online. Once I thought that clickstream data were accurate. I won’t tell you how interesting counting clicks is. Please, use your imagination. There are clicks, Clicks, and CLICKS.

Yep, the family has a bit of a problem. A saber tooth tiger has appeared, and he is going to do what saber tooth tigers do. Thanks, Venice.ai. Good enough.
The write up makes clear that some mysterious force has chopped online traffic off at the knees. As a dinobaby who knows that Google is the primary source of findability and clicks, I surmise that the loss of traffic is not due to the immense popularity of Swisscows and Metager. That leaves me with the thought that Google [a] has decided cannibalism is a good source of revenue, [b] has let that AI Gemini thing wreak havoc on technology publication Web sites, or [c] has leadership that will just do what alleged monopolies do in a seemingly unregulated ecosystem; that is, whatever leadership decides is just ducky.
What does the write up present?
I note this passage:
We tracked the organic search traffic of CNET, Wired, The Verge, TechRadar, and six others from early 2024 to today. Combined, they’ve lost 65 million monthly visits. Some lost over 90%.
That suggests that technology news and information sites have a date with the Grim Stealer of revenues.
The article points out:
At their peaks, ten major tech publications pulled a combined 112 million organic visits per month from Google in the US. By January 2026, that number had fallen to 47 million. All ten sites are down, though not by equal amounts. Some lost 30%. Others lost over 90%.
I would suggest that the traffic is not coming back any more than a saber tooth tiger will be found prowling around your subdivision or local coffee shop. The notion of traffic is a quaint holdover from the days when Web search was the way to find information online. Google replaced the slog through library catalogs with its “free” search service. I read an article written by a reference librarian which told people how to search Google. That article should have included a sidebar about setting up an online chat with a group of Clovis people and their method of finding information. One could talk to the SEO experts, but that might have as much impact as a chat with a shaman if you can find one that is coherent.
With the shift from the search that killed libraries to the new AI method, individual sources of information are no longer relevant. Why? Who cares where the information comes from? As one of my clients told me decades ago, “I don’t care where the information comes from, any information is better than none.” Hey, how about that enlightened MBA attitude?
The cited article says that the Verge dropped from 5.3 million clicks to about 800,000 in January 2026. That works out to keeping the outfit afloat with 15 percent of the clicks it had in February 2024. The Verge wants money. The problem is that converting visitors to subscribers follows the brutal data from the now-almost-dead paper magazine business. One mailed many pleas to subscribe, and if one percent converted, it was party time. Maybe the Verge should try bulk emails to boost its subscriber base and, therefore, its clicks. I would point out that more traffic to the Verge would be a signal to a certain provider of search to suck down and process more intensely the Verge’s content. I think there are some colorful phrases to describe this knock-on effect. Will “sign your own death warrant” work? Nah. It’s a poohbah tech outfit.
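The percentages in the cited article hold up arithmetically. A quick check using the article's own figures (monthly organic visit counts as reported, not independently verified):

```python
# Quick check of the article's traffic figures (monthly organic visits).
peak_total = 112e6      # combined peak for the ten tech publications
current_total = 47e6    # combined figure for January 2026
aggregate_drop = (peak_total - current_total) / peak_total
print(round(aggregate_drop * 100))  # 58, matching the headline's 58%

verge_peak = 5.3e6      # The Verge, February 2024
verge_now = 0.8e6       # The Verge, January 2026
print(round(verge_now / verge_peak * 100))  # 15, about 15% of clicks remain
```

So the headline's 58% and the "15 percent of the clicks" figure both follow directly from the reported visit counts.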
The write up offers three reasons for the traffic hit. These are:
- Google AI shortcuts to reading and thinking
- Reddit lost its fizz
- ChatGPT or similar services instead of traditional search.
These are reasonable, if unsupported, assertions. However, I am a dinobaby, and I like to point out the obvious. Humans do not want to do work unless big money is involved. Reading is difficult and takes time. Framing a functioning search query that works requires mental “work” which takes away from “real” work like sitting in meetings. Reviewing a list of hits from a commercial database is hard and expensive. Making sense of a list of hits from a traditional search system is even harder. Hey, check out those Yandex.ru results. How’s your Russian?
The reason clicks are down is that smart software, regardless of quality, is the easiest way forward. Since Google has the most online traffic in the world, Google is the reason that these technology news sites are cratering. Does Google care? Not at the moment. The firm will care once it realizes that it has been exposed to the “next big thing.” That next big thing will kill it just as Google has punched the doomsday button for technology information online services.
Net net: Change has arrived. Time does not reverse itself no matter what the quantum cats say.
Stephen E Arnold, March 13, 2026
Musky Consistency Is Inconsistent
March 13, 2026
Elon Musk says AI is evil! Okay, that sounds about musk-right. But that’s taking the information out of context. According to eWeek, Musk claimed that a specific AI is evil: “Elon Musk Slams Anthropic AI as ‘Evil’ After $380B Valuation.” Musk has his nose out of joint because Anthropic is now worth $380 billion after a successful $30 billion fundraising round. Musk called Anthropic “evil and misanthropic,” most likely because he’s jealous of the money. Musk threw a temper tantrum and said Anthropic was biased against demographic groups: Whites, Asians, men, and heterosexuals.
It also points to an important change in the AI company race: “AI competition is no longer just about models and benchmarks. It is about ideology, branding, and who gets to define what ‘safe’ or ‘fair’ AI really means.”
There’s also this big thought:
“Musk’s attack highlights a growing divide in how AI leaders talk about safety and fairness. Some companies prioritize guardrails and content moderation as core features. Others argue that overly restrictive systems risk political or cultural skew. The tension between those philosophies is playing out in real time, and increasingly in public.”
AI leaders are throwing hissy fits about who gets to control the future of global technology. Let’s remember which service allowed dirty pictures. Was it musky Grok?
Whitney Grace, March 13, 2026
Is Glean Moving Beyond Search? You Bet and Fast
March 12, 2026
It’s been a hot minute since we’ve discussed enterprise tools and how they will impact AI. Strike that and reverse it, because AI is influencing enterprise tools more than anything invented since the Internet. TechCrunch says that a new company is trying to become the new tool that makes AI work better: “The Enterprise AI Land Grab Is On — Glean Is Building The Layer Beneath The Interface.”
Glean wants to be the powerful intelligence layer beneath enterprise AI. Glean came into existence seven years ago and tried to be a Google-style enterprise tool. Now Glean wants to build context between enterprise AI and generic LLMs.
Here’s what it offers:
“The Glean Assistant is often the entry point for customers — a familiar chat interface powered by a mix of leading proprietary (i.e., ChatGPT, Gemini, Claude) and open source models, grounded in the company’s internal data.”
Glean makes generic LLMs more intuitive and offers specialization for enterprise systems:
“The question is whether that middle layer survives as platform giants push deeper into the stack. Microsoft and Google already control much of the enterprise workflow surface area, and they’re hungry for more. If Copilot or Gemini can access the same internal systems with the same permissions, does a stand-alone intelligence layer still matter?
Jain argues enterprises don’t want to be locked into a single model or productivity suite and would rather opt for a neutral infrastructure layer rather than a vertically integrated assistant.”
Blah blah puff piece. Yadda yadda press release about the latest thing that will make AI even better than sliced bread. We’ve heard it before. Is this anything new, or is search simply not as compelling as more high-flying assertions about findability, or is that findAIbility?
Whitney Grace, March 12, 2026
Telegram: Updates and One Oddity
March 11, 2026
The Telegram write ups contain information gathered by my team about one of the most interesting companies currently operating online.
The most recent information we posted on Telegram Notes includes:
- The most recent segment in our summary of Andrei Grachev’s business adventures. “Part III: Grachev’s Flight to Falcon” picks up his story after he disengaged from AlphaTON Capital. The “falcon” is Grachev, and he did not travel without purpose. He had set up a new company called Falcon Finance and appears to be pursuing a different path to fame and further wealth. My team and I will continue to follow the story, but the flow of information has been disrupted due to the disturbance in the Middle East.
- We published a short item called “Note: AlphaTON Capital: Bringing in Legends.” AlphaTON Capital, ostensibly building an AI compute infrastructure, hired two people with technical skills. From my point of view, the NASDAQ wizard, the CEO, the law firm, and the financial person who replaced the estimable Mr. Grachev lack the background and technical expertise to make the idea of AI compute “rentals” a reality.
- Another short item, titled “Note: March 2026 Telegram Messenger Features,” is a rundown of some of the new features added to Telegram. What’s interesting is that none of these appears to implement any AI functionality from the announced Cocoon AI compute initiative.

This is an Uzbek falcon. It eats mice among other things.
If you want to read the first two parts of the Andrei Grachev story, these are at these locations:
Part I. Andrei Grachev: A Hungry Uzbek Falcon at https://shorturl.at/TbMCo
Part II. Grachev Aloft: Watching for HFT Prey at https://shorturl.at/wnaS8
Please, note the disclaimers for each of these segments. The information comes from our notes, and it is presented in a far more casual manner than the information in my formal books and monographs. One example is the use of the much-loved Uzbek falcon as a metaphor for some of the actions of Mr. Grachev. Colorful? Yes. Does Mr. Grachev have feathers? No.
The oddity is that references to these essays have been scrubbed or removed from certain social media services. I find it interesting that an 82-year-old writing about a 13-year-old online system attracts attention. On one hand, I am flattered that LinkedIn-type outfits have the time and interest to prevent my essays from being available to their users. On the other hand, I am surprised that our compilation of publicly available information disrupts the otherwise serene world of digital information.
Do I care? Well, sort of. When we discover these scrubbings, we will try to document the instances and speculate about the reasons. Mr. Grachev is a luminary in the crypto-verse with particular expertise in high frequency trading. He does try to maintain a low profile compared to the outsized reputational visibility of Pavel Durov (the founder of VKontakte) and Yuri Mitin (Red Shark Ventures and other names such as RSV).
That profile is what caught my attention along with his role at RACIB and his dealings with a couple of interesting fellows in Switzerland. The next major Telegram Notes’ article is about everyone’s favorite business school and innovation center in Moscow. Watch for it in the next few days. Will it be blocked by certain social media outlets? Absolutely.
Stephen E Arnold, March 11, 2026
AI: Helping Humans Be Stupid
March 9, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “Scientists Warn Fake Research Is Spreading Faster Than Real Science.” The write up contained no surprises. Humans love short cuts, convenience, and cute ways to snooker an advantage. The write up presents what I thought was obvious as an important insight. Wow.
The Science Daily reports:
A new study from Northwestern University warns that coordinated scientific fraud is becoming increasingly common. From fabricated data to purchased authorships and paid citations, researchers say organized groups are manipulating the academic publishing system.
I have mentioned in my assorted writings that Dr. Gene Garfield, the fellow who made citations an indicator of importance, knew that the system would be gamed. He was correct. It is trivial to get colleagues, friends, graduate students, and Fiverr.com workers to pump, reference, and backlink to benefit a person, a company or an idea. (I provide an example of a publicly traded company flooding the zone with shaped messages in this article.)

The “Scientists Warn…” article points out:
…fraudulent studies are now appearing at a faster rate than legitimate scientific publications.
What does this mean for smart software? Answer: It will not only hallucinate but also output incorrect information. Do you want your doctor to trust an AI to diagnose what’s wrong with your child? How about an AI to figure out the doses of chemo for your cancer-ridden mom? Do you want to be admitted to graduate school by an AI? Of course you don’t, but you will have little say in the matter.
AI is going to operate just like the helpful bots on the Telegram platform or the add-ins available in the Claude marketplace. Unless one takes special care, those software daemons are just going to do their thing and use fake information. Think about that when you ponder the implications of your retirement savings being invested in a company pumping out shaped information to paint a very rosy investment picture.
Is a single scientist going rogue? Nah. The Science Daily story says:
…the researchers identified coordinated operations involving paper mills, brokers and compromised journals. Paper mills function like production lines for academic manuscripts. They produce large numbers of papers and sell them to researchers who want to increase their publication record quickly. These manuscripts often contain fabricated data, manipulated or stolen images, plagiarized text and sometimes claims that are scientifically impossible.
Can the scientific, technical, and medical professional publishers fix the problem in their peer-reviewed publications? I suppose, but there are several hurdles:
- Money. Professional publishers don’t want to invest in what is a black hole problem
- Authors. Why stop? If a topic is sufficiently narrow, the only person who can identify a fake is the graduate student who made up the data in the first place. Example: the Harvard ethics professor who made up information for an ethics paper.
- Readers. Humans read less and less, and fewer humans appear to read critically. Smart software companies don’t read; they process, synthesize, and spit out information. Readers are not very good at finding fake data, whether it is writ large (the economy is great) or small (information related to the DNA of the Etruscans).
I want to suggest a fix that almost no one on the planet will be interested in pursuing. Ready or not, here’s my recipe:
- Take learning seriously
- Read critically and look for anomalies and discrepancies, then check them
- Do this throughout life
- Demonstrate this approach as part of the furniture of life.
Spoiler: I estimate one percent of the people in the US will follow this recipe. I think the tech bros want sheeple, not people who question.
Stephen E Arnold, March 9, 2026

