Gemini, Listen Up
January 20, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
We were testing Google Gemini last week. We learned several things:
- Gemini does not follow instructions, even when they are entered in the “rules” form
- Gemini refuses to stay on track. The system attempts to hijack the line of questioning by inserting “Would you like…” suggestions
- Gemini pushes YouTube videos into results, ignoring a user’s request for text-only responses.
These quirks would be annoying enough on their own, never mind the incorrect responses and the smarmy tone Gemini adopts when informed that a professional, objective response is requested.
This is Venice.ai’s interpretation of my using Google Gemini and getting giant, stupid links to AI-infected YouTube videos. Hey, thanks, Gemini. Good enough, Venice.ai.
Nope, the big news is that Gemini is about as good as ChatGPT and Perplexity on our test prompts and queries.
Where it fails is a window into Google’s approach. I characterize it as the “services of the entitled, delivered to the less adept” method.
Here’s one example. “YouTube Views Less Effective Than Audio Downloads in Driving Purchases, According to Study” states:
YouTube views are 18-25% less effective than audio downloads at driving purchases and may not be interchangeable despite common industry belief and practice…. It suggests there may be a loss of up to $250k in conversion value for every $1m spent on YouTube podcast impressions.
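For what it is worth, the quoted figures hang together arithmetically: at the top of the 18-25% range, a 25% effectiveness gap on a $1 million spend implies the study’s “up to $250k” loss. A back-of-the-envelope check (my arithmetic, not the study’s actual model):

```python
# Back-of-the-envelope check of the study's quoted figures.
# This is my own simple arithmetic, not the study's methodology.
def implied_conversion_loss(spend: float, effectiveness_gap: float) -> float:
    """Dollars of conversion value lost if a channel is
    `effectiveness_gap` (e.g., 0.25 = 25%) less effective."""
    return spend * effectiveness_gap

# At the top of the quoted 18-25% range, a $1M YouTube spend
# implies the "up to $250k" figure the study cites.
loss = implied_conversion_loss(1_000_000, 0.25)
print(f"${loss:,.0f}")  # prints $250,000
```

At the bottom of the range (18%), the same spend implies a $180,000 loss, so the “up to” hedge in the study is doing real work.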
Do I believe these studies by outfits which are essentially unknown to me? Answer: Absolutely. I believe everything I read on the Internet.
But let’s assume that the data reported are close enough for horse shoes. Google certainly has data that support its decision to shove YouTube videos into Gemini outputs. Furthermore, some Google wizard instructed Gemini to include YouTube videos whenever possible. Plus, those videos consume a significant segment of the output. Finally, when one copies a Gemini answer, the YouTube “links” screw up basic document formatting in Word and LibreOffice. Some documents are unusable after pasting Gemini output containing YouTube links into a Word file.
Several observations:
- Google is definitely into providing links to YouTube videos, which are increasingly generated or manipulated by AI, with the notice that AI produced the video either missing or buried at the tail end of the “More” text
- Gemini is configured to present video in a text response. What about podcast links? Why not a link to the transcript of the video instead of creating an annoyance?
- Gemini, like Microsoft’s Copilot, is trying to make smart software work better than Boolean search, in my opinion. Guess what? It is not as effective for queries requiring a fast, precise answer. Why spend billions and make me wait for a simple look-up?
- Google Advanced Search is still available, but it presents AI slop. At least Yandex.com tries to provide something relevant and Alice thankfully is not pushing her way into a query.
Net net: Google’s management of its technical resources is poor. Both Google and Microsoft are taking steps to increase user frustration. Gemini, to its credit, knows when I am angry. That doesn’t matter. I am a dinobaby, and I like to do work the old-fashioned way. Okay, Google, why not automatically play podcast snippets in the Gemini output? That would be really cool, right, whiz kids?
Stephen E Arnold, January 20, 2026
Telegram Notes: Mama Durova and Her Inner Circle, Part 2
January 20, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
This is the second part of my Telegram Notes write up about Pavel Durov’s mother. “Mama Albina, Pure Bred Animals, and Some Tricks” gathers thoughts, comments, and hypotheses about what I call a “web of companies.” Imagine. As a professor of “misinformation” and social media, she watched the viral take off of VKontakte. Mama Durova pulled her “cozy group” together. In the circle were her former and second husband, her three sons, and one adopted high-performer whom Nikolai helped. As Pasha rapidly coded VKontakte into a Russian Facebook, his libertarian stance on privacy collided with the Kremlin’s demand for control during the Snow Revolution protests. While Pasha postured as the defiant “GOAT” (the greatest of all time Russian entrepreneur and visionary), she did what mothers do. She orchestrated a “core group” of family insiders—half-brothers and trusted associates—who allegedly routed VK’s revenue through a web of shadow companies like Peering LLC and VKT Rus while using her influence to shield Pasha from Kremlin demands. When the Kremlin moved to silence dissent, Pasha refused to bend, transforming from a golden child into a black sheep. By 2014, he was exiled to run Telegram and create another entrepreneurial winner. Mama’s network remained entrenched in Russia. This briefing uncovers the ultimate family bargain: Did Mama sacrifice her son’s freedom to preserve the lucrative empire she helped build? You can find Part II at this Telegram Notes / Bear Blog page. (Lectures about Telegram are available via Zoom or in person. Write kentmaxwell@proton.me for information. Each lecture attendee receives a free copy of “The Telegram Labyrinth.”)
Stephen E Arnold, January 20, 2026
Microsoft: Budgeting Data Centers How Exactly?
January 20, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Here’s how bean counters and MBAs work. One gathers assumptions. Depending on the amount of time one has, the data collection can be done blue chip consulting style; that is, over weeks or months of billable hours. Alternatively, the bean counters and MBAs can sit in a conference room, talk, jot stuff on a white board, and let one person pull the assumptions into an Excel-type spreadsheet. Then fill in the assumptions: some numbers based on hard data (unlikely in many organizations) and some guess-timates. Check the “flow” of the numbers. Once the numbers have the right optics, reconvene in the conference room, talk, write on the white board, and the same lucky person gets to make the fixes. Once the flow is close enough for horse shoes, others can eyeball the numbers. Maybe a third party will be asked to “validate” the analysis? And maybe not?

Softies are working overtime on the AI data center budget. Thanks, Venice.ai. Good enough. Apologies to Copilot. Your output is less useful than Venice’s. Bummer.
We now know that Microsoft’s budgeting for its big beautiful build out of data centers will need to be reworked. Not to worry. If the numbers are off, the company can raise the price of an existing service or just fire however many people it takes to free up some headroom. Isn’t this how Boeing-type companies design and build aircraft? (Snide comments about cutting corners to save money are not permitted. Thank you.)
How do “we” know this? I read in GeekWire (an estimable source I believe) this story: “Microsoft Responds to AI Data Center Revolt, Vowing to Cover Full Power Costs and Reject Local Tax Breaks.” I noted this passage about Microsoft’s costs for AI infrastructure:
The new plan, announced Tuesday morning [January 13, 2026] in Washington, D.C., includes pledges to pay the company’s full power costs, reject local property tax breaks, replenish more water than it uses, train local workers, and invest in AI education and community programs.
Yep, pledges. Just ask any religious institution about the pledge conversion ratio. Pledges are not cold, hard cash. Why are the Softies doing executive level PR about their plans to build environmentally friendly and family friendly data centers? People in flyover areas are not thrilled with increased costs for power and water, noise pollution from cleverly repurposed jet engines, possible radiation emissions from those refurbed nuclear power plants from unused warships, and the general idea of watching good old empty land covered with football-stadium-sized tan structures.
Now back to the budget estimates for Microsoft’s data center investments. Did those bean counters include set asides for the “full power costs,” property taxes, water management, and black hole costs for “AI education and community” programs?
Nope.
That means “pledges” are likely to be left fuzzy, defined on the fly, or just forgotten like Bob, the clever interface for precursor smart software. Sorry, Copilot: Bob and Clippy missed the mark. You may too, for one reason: Apple and Google have teamed up in an even bigger way than before.
That brings me back to the bean counters at Microsoft, a uni shuttle bus full of MBAs, and a couple of railroad passenger cars filled with legal eagles. The money assumptions need a rethink.
Brad Smith, the Microsoft member of “leadership” who blamed security breaches on 1,000 Russian hackers, wants this PR to work. The write up reports that Mr. Smith said:
Smith promised new levels of transparency… The companies that succeed with data centers in the long run will be the companies that have a strong and healthy relationship with local communities. Microsoft’s plan starts by addressing the electricity issue, pledging to work with utilities and regulators to ensure its electricity costs aren’t passed on to residential customers. Smith cited a new “Very Large Customers” rate structure in Wisconsin as a model, where data centers pay the full cost of the power they use, including grid upgrades required to support them.
And that’s not all. The pledge includes this ethically charged corporate commitment. I quote:
- “A 40% improvement in water efficiency by 2030, plus a pledge to replenish more water than it uses in each district where it operates. (Microsoft cited a recent $25 million investment in water and sewer upgrades in Leesburg, Va., as an example.)
- A new partnership with North America’s Building Trades Unions for apprenticeship programs, and expansion of its Datacenter Academy for operations training.
- Full payment of local property taxes, with no requests for municipal tax breaks.
- AI training through schools, libraries, and chambers of commerce, plus new Community Advisory Boards at major data center sites.”
I hear the background music. I think it is the double fugue or the Kyrie in Mozart’s Requiem, but I may be wrong. Yes, I am. That is the soundtrack for the group reworking the numbers for Microsoft’s “beat Google” data center spend.
You can listen to Mozart’s Requiem on YouTube. My hunch is that the pledge is likely to be part of the Microsoft legacy. As a dinobaby, I would suggest that Microsoft’s legacy is blaming users for Microsoft’s security issues and relying on PR when it miscalculates [a] how people react to the excellent company’s moves and [b] its budget estimates for Copilot, the aircraft, the staff, the infrastructure, and the odds and ends.
Net net: Microsoft’s PR better be better than its AI budgeting.
Stephen E Arnold, January 20, 2026
Yes, There Is an AI Protest Group
January 20, 2026
Are you tired of your company implementing AI policies without input from you and your coworkers? You might be an ideal candidate to join Workers Decide, an organization that raises awareness about AI policies and encourages people to demand a seat at the table when those policies are drafted.
Here’s Workers Decide’s mission statement:
“Frustrated by your employer’s generative AI policies? We’re here to help you organize. We’re uniting tech workers who recognize the consequences of employers rolling out AI policies without the input of their workers. Workers are not helpless in the face of these top-down initiatives — we have the tools to demand our seat at the table, and to have our say in how technology is implemented and deployed in our workplaces.”
Workers Decide offers an inquiry toolkit to help people gather information about their work and how that information can be used to implement change. It’s described as:
“As an organizing tool, it gives workers a space to reflect on their conditions and consider the broader forces that shape them, as well as places they would like to see change, and potentials for how that change can come about. By going through this exercise together, we can better understand one another, identify key shared issues, and build solidarity.”
There’s also an AI implementation Bingo card that includes the most common phrases people say about the technology. These include, “It’s a fad, play along and let it blow over,” “AI is just a tool,” and “If you don’t learn AI, you won’t grow your career.”
Workers Decide includes the usual rhetoric about unions and organizing to protect workers’ rights: collective power is how demands are met and long-lasting changes are made. Workers Decide has a bold and necessary mission, but hardly anyone has heard of the group. AI is more powerful, and Workers Decide will be lucky to make even a blip on the Internet Archive.
Whitney Grace, January 20, 2026
Can AI Buzzwording Land a Job?
January 19, 2026
Job hunting has been a problem ever since the application process was automated, and it has worsened with the implementation of AI. AI makes job hunting worse because, if your resume doesn’t include the correct buzzwords, you won’t make it to the next round. To that end, job hunters are packing their resumes with the right words, but prospective employers are doing the same. ZDNet explains the hard knock reality of job hunting in an AI world: “AI Buzzwords Are Making The Job Hunt Harder – For Everyone.”
Packing resumes and job classifieds with AI buzzwords is a trend called “AI language inflation.” It’s a double-edged sword. Employers are loading their job advertisements with buzzwords to seem cutting edge (sometimes highlighting aspects that aren’t essential for the job). Job hunters are then adding more AI buzzwords to get past the algorithms.
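The filtering dynamic is easy to picture. Here is a toy sketch of buzzword-based screening; the buzzword list, threshold, and naive substring matching are my inventions for illustration, and real applicant-tracking systems are proprietary and far more elaborate:

```python
# Toy sketch of buzzword-based resume screening. The buzzword list and
# threshold are invented for illustration; real applicant-tracking
# systems are proprietary and more elaborate.
BUZZWORDS = {"ai", "machine learning", "llm", "generative", "agentic"}

def buzzword_score(resume_text: str) -> int:
    # Count how many buzzwords appear anywhere in the resume text.
    # Crude substring matching: even "maintaining" contains "ai".
    text = resume_text.lower()
    return sum(1 for term in BUZZWORDS if term in text)

def passes_screen(resume_text: str, threshold: int = 2) -> bool:
    # A resume "advances" only if it clears the buzzword threshold,
    # regardless of actual qualifications -- hence the inflation incentive.
    return buzzword_score(resume_text) >= threshold

print(passes_screen("Shipped generative AI features with LLM agents"))  # True
print(passes_screen("Ten years maintaining COBOL payroll systems"))     # False
```

Once job hunters figure out a filter like this exists, stuffing the resume with terms from the set is the rational move, which is exactly the inflation spiral the ZDNet piece describes.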
This means that job hunters must learn the different AI “flavors” and properly curate their resumes. Even the word AI is a buzzword:
“AI is used very loosely and almost like a buzzword. What I’m looking for is the appropriate use of the word AI. How well are people really starting to understand, more deeply, what is artificial intelligence or digital labor? Or how they’re using it, so it’s not just as a buzzword but day-to-day practical examples?”
If you do want to do the jargon dance, here’s a resource for you: a list published by a company with a remarkably opaque name, Service Now (does anyone or anything deliver “service” now?). The helpful listing is “AI Terms Explained.” Here are three examples of what the list will deliver. Plus you can watch a little video about the “comprehensive guide.”
Artificial general intelligence (or AGI). Artificial General Intelligence (AGI) refers to an AI system that possesses a wide range of cognitive abilities, much like humans, enabling them to learn, reason, adapt to new situations, and devise creative solutions across various tasks and domains, rather than being limited to specific tasks as narrow AI systems are. [Note that you have to use “AI” a couple of times when defining “AGI.” That’s intellectually impressive even though Ms. Sperling told me in the 10th grade, don’t define a word by using that word in your definition. Obviously Ms. Sperling in 1959 would need to up her game in 2026.]
Here’s another definition from the service outfit Service Now which is serving you:
Explainability. Explainability refers to techniques that make AI model decisions and predictions interpretable and understandable to humans. [Note: Would someone at a Google or OpenAI-type company provide me with some information about the hallucination function?] [Note: As a human (I think), providing incorrect, made up, or off point information is difficult for me to understand, especially when I have to pay for wrongness.]
The final example from the serving me now Service Now listing of AI buzzwords that will definitely serve you well:
Natural Language Generation (or NLG). A subfield of AI that produces natural written or spoken language. [Note: What is “natural”? What is “language”? The explanation does not serve me well.]
This list contains more than 90 terms.
Several observations:
- Buzzwords won’t deliver the bacon
- Lists generated by AI are not particularly helpful
- Do not define AI, in case anyone asks you, by using the acronym AI or the word artificial.
Net net: I am skeptical about AI, which is a utility function related to search and retrieval. I am also not sure about the “service” thing, especially service “now.” I have to wait everywhere; for instance, for a clear definition of AI.
Stephen E Arnold, January 19, 2026
AI Business Trickery: Not a Good Sign for the Industry
January 19, 2026
In a feat reminiscent of the Great and Powerful Oz, the curtain was pulled back on a UK AI company that turned out to be a great big fake. The ACS Information Age reported the story in “The Company Whose ‘AI’ Was Actually 700 Humans In India.” For eight years, Engineer.ai, which later operated as Builder.ai, allegedly fooled the tech industry. The company was founded by Sachin Dev Duggal, who served as its CEO. Plus, he raised money. He pitched AI, and the checkbooks came out. He acquired funding from Microsoft, Qatar, and SoftBank.
Duggal promised that his AI chatbot, Natasha, would be a no-code tool that could build apps six times faster than conventional development and at seventy percent lower cost. Duggal embraced Silicon Valley baloney job titles. He dubbed himself the “chief wizard,” borrowing from the 1939 motion picture “The Wizard of Oz.” Yep, that film had a tin man, too.
However, Engineer.ai declared bankruptcy after a Bloomberg investigation reported that the company had been working with the Indian social media startup VerSe, with both allegedly engaged in dubious financial practices. When these practices were revealed, Viola Credit, a major backer, demanded immediate repayment of its $50 million loan.
More information popped out in December 2025. The smart software Natasha was actually about 700 Indian app developers. These professional humans wrote customers’ software while mimicking the behavior of bots. Not good. The cited source reports:
“Although the developers used a range of software tools in their work, coding was performed manually, meaning that while Builder.ai did eventually deliver apps to its customers, it was simply another player in an Indian offshoring industry attracting $27 billion (US$17.7 billion) annually. That puts the company in a completely different market segment than the one that propelled AI-hungry investors through four funding rounds before and after the debut of OpenAI’s ChatGPT turned the global tech industry on its head.”
What other AI charades are operating using hyperbolic marketing and motion picture tropes? My hunch: Lots.
Whitney Grace, January 19, 2026
Grok Is Spicy and It Did Not Get the Apple Deal
January 16, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
First, Gary Marcus makes clear that AI is not delivering the goods. Then a fellow named Tom Renner explains that LLMs are just a modern variation of a “confidence trick” that’s been in use for centuries. I then bumbled into a paywalled post from an outfit named Vox. The write up about AI is “Grok’s Nonconsensual Porn Problem is Part of a Long, Gross Legacy.”
Unlike Dr. Marcus and Mr. Renner, Vox focuses on a single AI outfit. Is this fair? Nah, but it does offer some observations that may apply to the entire band of “if we build it, they will come” wizards. Spoiler: I am not coming for AI. I will close with an observation about the desperation that is roiling some of the whiz kids.
First, however, what does Vox have to say about the “I am a genius. I want to spawn more like me. I want to colonize Mars” superman? I urge you to subscribe to Vox. I will highlight a couple of passages about the genius Elon Musk. (I promised I won’t mention the department of government efficiency project. I project. DOGE DOGE DOGE. Yep, I lied, much like some outfits have. Thank goodness I am an 81 year old dinobaby in rural Kentucky. I can avoid AI, but DOGE DOGE DOGE? Not a chance.)
Here’s the first statement I circled on my print out of the expensive Vox article:
Elon Musk claims tech needs a “spicy mode” to dominate. Is he right?
I can answer this question: No. Only those who want to profit from salacious content want a spicy mode. People who deal in spicy content made VHS tapes a thing, much to the chagrin of Sony. People who create spicy mode content helped sell some virtual reality glasses. I sure didn’t buy any. Spicy at age 81 is walking from room to room in my two room log cabin in the Kentucky hollow in which I live.
Here’s the second passage in the write up I check marked:
Musk has remained committed to the idea that Grok would be the sexiest AI model. On X, Musk has defended the choice on business grounds, citing the famous tale of how VHS beat Betamax in the 1980s after the porn industry put its weight behind VHS, with its larger storage capacity. “VHS won in the end,” Musk posted, “in part because they allowed spicy mode.”
Does this mean that Elon analyzed the p_rn industry when he was younger? For business reasons only, I presume. I wonder if he realizes that Grok and perhaps the Tesla businesses may be adversely affected by the spicy stuff. No, I won’t. I won’t. Yes, I will. DOGE DOGE DOGE.
Here’s the final snip:
A more accurate phrasing, however, might be to say that in our misogynistic society, objectifying and humiliating the bodies of unconsenting women is so valuable that the fate of world-altering technologies depends on how good they are at facilitating it. AI was always going to be used for this, one way or the other. But only someone as brutally uncaring and willing to cut corners as Elon Musk would allow it to go this wrong.
Snappy.
But the estimable Elon Musk has another thorn in the driver’s seat of his Tesla. Apple, a company once rumored to be thinking about buying the car company, signed another deal with Google. The gentle and sometimes smarmy owner of Android, online advertising, and surveillance technology is going to provide AI to the wonderful, wonderful Apple.
I think Mr. Musk’s Grok is a harbinger of a spring time blossoming of woe for much of the AI sector. There are data center pushbacks. There are the Chinese models available for now as open source. There are regulators in the European Union who want to hear the ka-ching of cash registers after another fine is paid by an American AI outfit.
I think the spicy angle just helps push Mr. Musk and Grok to the head of the line for AI pushback. I hope not. I wonder if Mr. Musk will resume talks with Pavel Durov about providing Grok as an AI engine for Nikolai Durov’s new approach to smart software. I await spring.
Stephen E Arnold, January 16, 2026
Shall We Recall This Nvidia Prediction?
January 16, 2026
Nvidia. Champion circular investor. Leather jacketed wizard. I dug up this item as a reference point: “ ‘China Is Going To Win The AI Race’ – Nvidia CEO Jensen Huang Makes Bold Proclamation, Says We All Need A Little Less "Cynicism" In Our Lives.”
Nvidia warns the US that China is seconds behind it (figuratively, nanoseconds) in the AI race and that the country shouldn’t ignore China. Huang suggests that not only could China win the technology race, but also that the US should engage with China’s developer base. Doing so will help the US maintain its competitive edge. Huang also warns that ignoring China would have negative long-term consequences for AI adoption.
Huang makes a valid point about China, but his remarks could also be self-serving regarding some recent restrictions from the US.
“Nvidia has faced restrictions in China due to governmental policies, preventing the sale of its latest processors, central to AI tools and applications, which are essential for research, deployment, and scaling of AI workloads.
Huang suggested limiting Chinese access may inadvertently slow the spread of American technology, even as policymakers focus on national security.”
Hardware is vital for AI because a lot of processing power and energy is needed to run AI models. Huang warns (yet again) that if the US keeps excluding China from its technology, Chinese developers will be forced to design their own. That means less reliance on US technology and an AI ecosystem outside the US sphere of influence. Huang said:
“ ‘We want America to win this AI race. No doubt about that,’ Huang said at a recent Nvidia developers’ conference. ‘We want the world to be built on American tech stack. Absolutely the case. But we also need to be in China to win their developers. A policy that causes America to lose half of the world’s AI developers is not beneficial in the long term, it hurts us more,’ he added.”
Huang’s statement is self-serving for Nvidia, and maybe he’s angling for a professorship at Beijing University? But he’s also right. It’s better for Chinese developers to favor the US over their red uncle.
Whitney Grace, January 16, 2026
More Obvious Commentary about the Smart Phone That Makes People Stupid
January 16, 2026
Adults are rabid about protecting kids. Whether it’s chalked up to instinct, love, or simple common sense, no one can argue that it’s necessary to guide and guard younger humans. A big debate these days is when it is appropriate to give kids their first smartphone. According to The New York Times, that should probably be never: “A Smartphone Before Age 12 Could Carry Health Risks, Study Says.”
The journal Pediatrics reported that when kids younger than twelve are given a smartphone, they’re at greater risk for poor sleep, obesity, and depression. These results come from the Adolescent Brain Cognitive Development Study, which surveyed 10,500 kids. This is what the researchers discovered:
“The younger that children under 12 were when they got their first smartphones, the study found, the greater their risk of obesity and poor sleep. The researchers also focused on a subset of children who hadn’t received a phone by age 12 and found that a year later, those who had acquired one had more harmful mental health symptoms and worse sleep than those who hadn’t.”
Kids equipped with smartphones spend less time socializing in person and are less inclined to exercise or prioritize sleep. All these activities are exceedingly important for developing minds. Smartphones are stunting and seriously harming kids’ growth.
Smartphones are a tool like anything else. They’re seriously addictive because of their engagement hooks. Videogames were given the same bad rep when they became popular. At least videogames had the social interaction of arcades back in the day.
Just ban all smartphones for kids. That could work if the lobbyists and political funding policies undergo a little change. If not, duh.
Whitney Grace, January 16, 2026
Apple and Google: Lots of Nots, Nos, and Talk
January 15, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
This is the dinobaby, an 81 year old dinobaby. In my 60 plus year work career I have been around, in, and through what I call “not and no” PR. The basic idea is that one floods the zone with statements about what an organization will not do. Examples range from “our Wi-Fi sniffers will not log home access point data” to “our AI service will not capture personal details” to “our security policies will not hamper usability of our devices.” I could go on, but each of these statements was uttered in meetings, in conference “hallway” conversations, or in public podcasts.
Thanks, Venice.ai. Good enough. See I am prevaricating. This image sucks. The logos are weird. GW looks like a wax figure.
I want to tell you that the nots and nos identified in the flood of write ups about the Apple Google AI tie up are not immutable like Milton’s description of his God; they are essentially pre-emptive PR. Both firms are data collection systems. The nature of the online world is that data are logged, metadata are captured and mindlessly processed for a statistical signal, and content is processed simply because “why not?”
Here’s a representative write up about the Apple Google nots and nos: “Report: Apple to Fine-Tune Gemini Independently, No Google Branding on Siri, More.” So what’s the more that these estimable firms will not do? Here’s an example:
Although the final experience may change from the current implementation, this partly echoes a Bloomberg report from late last year, in which Mark Gurman said: “I don’t expect either company to ever discuss this partnership publicly, and you shouldn’t expect this to mean Siri will be flooded with Google services or Gemini features already found on Android devices. It just means Siri will be powered by a model that can actually provide the AI features that users expect — all with an Apple user interface.”
How about this write up: “Official: Apple Intelligence & Siri To Be Powered By Google Gemini.”
Source details how Apple’s Gemini deal works: new Siri features launching in spring and at WWDC, Apple can finetune Gemini, no Google branding, and more
Let’s think about what a person who thinks the way my team does might conclude. Here is what one could do despite these nots and nos:
- Just log everything and don’t talk about the data
- Develop specialized features that provide new information about use of the AI service
- Monitor the actions of our partners so we can be prepared or just pounce on good ideas captured with our “phone home” code
- Skew the functionality so that our partners become more dependent on our products and services; for example, exclusive features only for their users.
The possibilities are endless. Depending upon the incentives and controls put in place for this tie up, the employees of Apple and Google may do what’s needed to hit their goals. One can do PR about what won’t happen, but the reality of certain big technology companies is that these outfits defy normal ethical boundaries, view themselves as the equivalent of nation states, and have a track record of insisting that bendable mobile devices do not bend and that information of a personal nature is not cross correlated.
Watch the pre-emptive PR moves by Apple and Google. These outfits care about their worlds, not those of the user.
Just keep in mind that I am an old, very old, dinobaby. I have some experience in these matters.
Stephen E Arnold, January 15, 2026