France Arrested Pavel. The UK Signals Signal: What Might Follow?
December 23, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
As a dinobaby, I play no part in the machinations of those in the encrypted messaging arena. On one side are those who argue that encryption helps preserve a human “right” to privacy. On the other are those who say, “Money laundering, kiddie pix, drugs, and terrorism threaten everything.” You will have to pick your side. That decision will dictate how you interpret the allegedly actual factual information in “Creating Apps Like Signal or WhatsApp Could Be Hostile Activity, Claims UK Watchdog.”

An American technology company leader looks at a UK prison and asks the obvious question. The officers escort the tech titan to the new cell. Thanks, Venice.ai. Close enough for a single horse shoe.
“Hostile activity” suggests bad things will happen if a certain behavior persists. These include:
- Fines
- Prohibitions on an online service in a country (this is popular in Iran among other nation states)
- Potential legal hassles (a Heathrow holding cell is probably the Ritz compared to HMP Woodhill)
The write up reports:
Developers of apps that use end-to-end encryption to protect private communications could be considered hostile actors in the UK.
That is the stark warning from Jonathan Hall KC, the government’s Independent Reviewer of State Threats Legislation and Independent Reviewer of Terrorism Legislation.
I interpret this as a helpful summary of a UK government brief titled State Threats Legislation in 2024. The timing of Mr. Hall’s observation may “signal” an overt action. That step may not be on the scale of the French arrest of a Russian with a French passport, but it will definitely create a bit of a stir in the American encrypted messaging sector. Believe it or not, the UK is not thrilled with some organizations’ reluctance to provide information relevant to certain UK legal matters.
In my experience, trotting out the standard “oh, we didn’t get the email” or “we’ll get back to you, thanks” response is unlikely to work with certain UK government entities. Although unfailingly polite, some individuals there learned quite particular skills in specialized training. The approach, like the French action, can cause surprise among the individuals identified as problematic.
With certain international tensions rising, the UK may seize an opportunity to apply both PR and legal pressure to overcome what may be seen as impolite and ill-advised behavior by certain American companies in the end-to-end encrypted messaging business.
The article “Creating Apps Like Signal” points out:
In his independent review of the Counter-Terrorism and Border Security Act and the newly implemented National Security Act, Hall KC highlights the incredibly broad scope of powers granted to authorities.
The article adds:
While the report’s strong wording may come as a shock, it doesn’t exist in a vacuum. Encrypted apps are increasingly in the crosshairs of UK lawmakers, with several pieces of legislation targeting the technology. Most notably, Apple was served with a technical capability notice under the Investigatory Powers Act (IPA) demanding it weaken the encryption protecting iCloud data. That legal standoff led the tech giant to disable its Advanced Data Protection instead of creating a backdoor.
What will the US companies do? I learned from the write up:
With the battle lines drawn, we can expect a challenging year ahead for services like Signal and WhatsApp. Both companies have previously pledged to leave the UK market rather than compromise their users’ privacy and security.
My hunch is that more European countries may look at France’s action and the “signals” emanating from the UK and conclude, “We too can take steps to deal with the American companies.”
Stephen E Arnold, December 23, 2025
Australia: Kangaroos and Putting Kids in a Secure Pouch
December 17, 2025
Australia became the first country in the world to ban social media for kids under sixteen. It did so in a bid to protect the younger set from addictive behaviors, online bullies, and predators. CNN details the ban in “Millions Of Australian Children Just Lost Access To Social Media. What’s Happening And Will It Work?”
The ten platforms banned for kids under sixteen are X, Twitch, Reddit, Kick, TikTok, Snapchat, Threads, Facebook, YouTube, and Instagram.
The ban will be implemented using age-verification technology, but the platforms don’t believe it will make kids safer. The Australian prime minister believes differently:
“Prime Minister Anthony Albanese said it was a ‘proud day’ for Australia. ‘This is the day when Australian families are taking back power from these big tech companies. They are asserting the right of kids to be kids and for parents to have greater peace of mind,’ Albanese told the public broadcaster ABC Wednesday. But he conceded ‘it won’t be simple.’”
The platforms will use age-verification technology such as video selfies, email addresses, or official documents. The video selfies use facial data points to estimate age.
There are workarounds, such as parents creating accounts for their kids or alternative social media platforms. People are saying it’s a game of whack-a-mole that the Australian government won’t win. There aren’t any punishments for parents who do create accounts for their kids.
A follow up from The Nightly says, “Australian Under-16s Social Media Ban: Kids Claim Ban Didn’t Work As They Troll Anthony Albanese On TikTok.” The younger set took to TikTok and did what kids do best: make fun of the situation. They’re trolling the Prime Minister with memes, videos, comments, and anything else to prove the ban isn’t working.
There are kinks still to work out, but maybe the ban will work. Some youngsters have good technical know-how. Workarounds are inevitable. Even baby roos leave the pouch.
Whitney Grace, December 17, 2025
Ka-Ching: The EU Cash Register Tolls for the Google
December 16, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Thomson Reuters, the trust outfit because the company says it is, published another ka-ching story titled “Exclusive: Google Faces Fines Over Google Play if It Doesn’t Make More Concessions, Sources Say.” The story reports:
Alphabet’s Google is set to be hit with a potentially large EU fine early next year if it does not do more to ensure that its app store complies with EU rules aimed at ensuring fair access and competition, people with direct knowledge of the matter said.
An elected EU official introduces the new and permanent member of the parliament. Thanks, Venice.ai. Not exactly what I specified, but saving money on compute cycles is the name of the game today. Good enough.
I can hear the “Sorry. We’re really, really sorry” statement now. I can even anticipate the sequence of events; hence and herewith:
- Google says, “We believe we have complied.”
- The EU says, “Pay up.”
- Google says, “Let’s go to trial.”
- The EU says, “Fine with us.”
- The Google says, “We are innocent and have complied.”
- The EU says, “You are guilty and owe $X million.” (Note: The EU generates more revenue by fining US big tech companies than it does from certain tax streams, I have heard.)
- The Google says, “Let’s negotiate.”
- The EU says, “Fine with us.”
- Google negotiates and says, “We have a deal plus we did nothing wrong.”
- The EU says, “Pay $X million less the $Y million we agree to deduct based on our fruitful negotiations.”
The actual factual article says:
DMA fines can be as much as 10% of a company’s global annual revenue. The Commission has also charged Google with favoring its associated search services in Google Search, and is investigating its use of online content for its artificial intelligence tools and services and its spam policy.
My interpretation of this snippet is that the EU has on deck another case of Google’s alleged law breaking. This is predictable, and the approach does generate revenue from companies with lots of cash.
Stephen E Arnold, December 16, 2025
The Loss of a Blue Check Causes Credibility to Be Lost
December 15, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
At first glance, either the EU is not happy with the Teslas purchased for official use, or Elon Musk is a Silicon Valley luminary who sets some regulators’ teeth on edge. I read “Elon Musk Calls for Abolition of European Union After X Fined $140 Million.” The idea of dissolving the EU is unlikely to make the folks in Brussels and Strasbourg smile with joy. I think the estimable Mr. Putin and some of his surviving advisors may break out in broad grins. But the EU elected officials are unlikely to be doing high fives. (Well, that’s just a guess.)

Thanks, Midjourney. Good enough.
The CNBC story says:
Elon Musk has called for the European Union to be abolished after the bloc fined his social media company X 120 million euros ($140 million) for a “deceptive” blue checkmark and lack of transparency of its advertising repository. The European Commission hit X with the ruling on Friday following a two-year investigation into the company under the Digital Services Act (DSA), which was adopted in 2022 to regulate online platforms. At the time, in a reply on X to a post from the Commission, Musk wrote, “Bulls—.”
Mr. Musk’s alleged reply is probably translated by official human translators as “Mr. Musk wishes to disagree with due respect.” Yep, that will work.
I followed up with a reluctant click on a “premium, you must pay” story from Politico. (I think its videos on YouTube are free, or the videos themselves are advertisements for Politico.) That write up is titled “X Axes European Commission’s Ad Account after €120M EU Fine.” The main idea is that Mr. Musk is responding with actions, not just words. Imagine: the EU will not be permitted to advertise on X.com. My view is that the announcement sent shockwaves through the elected officials and caused consternation in the EU countries.
The Politico essay says:
Nikita Bier, X’s head of product, accused the EU executive of trying to amplify its own social media post about the fine on X by trying “to take advantage of an exploit in our Ad Composer.”
Ah, ha. The EU is click baiting on X.com.
The write up adds:
The White House has accused the rules of discriminating against U.S. companies, and the fine will likely amplify transatlantic trade tensions. U.S. Secretary of Commerce Howard Lutnick has already threatened to keep 50 percent tariffs on European exports of steel and aluminum unless the EU loosens its digital rules.
Fascinating. A government entity finds a US Silicon Valley outfit guilty of violating one of its laws. That entity fines the Silicon Valley company. But the entire fine is little more than an excuse [a] for the company to get clicks on Twitter (now, the outstanding X.com) and [b] for the US government to suggest that tariffs on certain EU exports will not be reduced.
I almost forgot. The root issue is the blue check one can receive or purchase to make a short message more “valid.” Next we jump to a fine, which is certainly standard operating procedure for entities breaking a law in the EU and then to a capitalist company refusing to sell ads and finally to a linkage to tariff rates.
I am a dinobaby, and a very uninformed dinobaby. The two stories, with their blue check, government actions, and chain of consequences, remind me of this proverb (author unknown):
“For want of a nail the shoe was lost;
For want of a shoe the horse was lost;
For want of a horse the rider was lost;
For want of a rider the message was lost;
For want of a message the battle was lost;
For want of a battle the kingdom was lost;
And all for the want of a horseshoe nail.”
I have revised the proverb:
“For want of a blue check the ads were lost;
For want of the ads, the click stream was lost;
For want of a click stream, the lawsuit was lost;
For want of a lawsuit, the fine was lost;
For want of the fine, the US influence was lost;
For want of influence, sanity was lost;
And all for the want of a blue check.”
There you go. A digital check has consequences.
Stephen E Arnold, December 15, 2025
Google Gemini Hits Copilot with a Dang Block: Oomph
December 10, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Smart software is finding its way into interesting places. One of my newsfeeds happily delivered “The War Department Unleashes AI on New GenAI.mil Platform.” Please, check out the original document because it contains some phrasing which is difficult for a dinobaby to understand. Here’s an example:
The War Department today announced the launch of Google Cloud’s Gemini for Government as the first of several frontier AI capabilities to be housed on GenAI.mil, the Department’s new bespoke AI platform.
There are a number of smart systems with government wide contracts. Is the Google Gemini deal just one of the crowd or is it the cloud over the other players? I am not sure what a “frontier” capability is when it comes to AI. The “frontier” of AI seems to be shifting each time a performance benchmark comes out from a GenX consulting firm or when a survey outfit produces a statement that QWEN accounts for 30 percent of AI involving an open source large language model. The idea of a “bespoke AI platform” is fascinating. Is it like a suit tailored on Oxford Street or a vehicle produced by Chip Foose, or is it one of those enterprise software systems with extensive customization? Maybe like an IBM government systems solution?
Thanks, Google. Good enough. I wanted square and you did horizontal, but that’s okay. I understand.
And that’s just the first sentence. You are now officially on your own.
For me, the big news is that the old Department of Defense loved PowerPoint. If you have bumped into any old school Department of Defense professionals, the PowerPoint is the method of communication. Sure, there’s Word and Excel. But the real workhorse is PowerPoint. And now that old nag has Copilot inside.
The way I read this news release is that Google has pulled a classic blocking move or dang. Microsoft has been for decades the stallion in the stall. Now, the old nag has some competition from Googzilla, er, excuse me, Google. Word of this deal was floating around for several months, but the cited news release puts Microsoft in general and Copilot in particular on notice that it is no longer the de facto solution to a smart Department of War’s digital needs. Imagine: a quarter century after screwing up a bid to index the US government servers, Google has emerged as a “winner” among “several frontier AI capabilities” and will reside on “the Department’s new bespoke AI platform.”
This is big news for Google, for Microsoft and its certified partners, and, of course, for the PowerPoint users at the DoW.
The official document says:
The first instance on GenAI.mil, Gemini for Government, empowers intelligent agentic workflows, unleashes experimentation, and ushers in an AI-driven culture change that will dominate the digital battlefield for years to come. Gemini for Government is the embodiment of American AI excellence, placing unmatched analytical and creative power directly into the hands of the world’s most dominant fighting force.
But what about Sage, Seerist, and the dozens of other smart platforms? Obviously these solutions cannot deliver “intelligent agentic workflows” or unleash the “AI driven culture change” needed for the “digital battlefield.” Let’s hope so. Because some of those smart drones from a US firm have failed real world field tests in Ukraine. Perhaps the smart drone folks can level up instead of doing marketing?
I noted this statement:
The Department is providing no-cost training for GenAI.mil to all DoW employees. Training sessions are designed to build confidence in using AI and give personnel the education needed to realize its full potential. Security is paramount, and all tools on GenAI.mil are certified for Controlled Unclassified Information (CUI) and Impact Level 5 (IL5), making them secure for operational use. Gemini for Government provides an edge through natural language conversation, retrieval-augmented generation (RAG), and is web-grounded against Google Search to ensure outputs are reliable and dramatically reduces the risk of AI hallucinations.
But wait, please. I thought Microsoft and Palantir were doing the bootcamps, demonstrating, teaching, and then deploying next generation solutions. Those forward deployed engineers and the Microsoft certified partners have been beavering away for more than a year. Who will be doing the training? Will it be Googlers? I know that YouTube has some useful instructional videos, but those are from third parties. Google’s training is — how shall I phrase it — less notable than some of its other capabilities like publicizing its AI prowess.
The last paragraph of the document does not address the questions I have, but it does have a stentorian ring in my opinion:
GenAI.mil is another building block in America’s AI revolution. The War Department is unleashing a new era of operational dominance, where every warfighter wields frontier AI as a force multiplier. The release of GenAI.mil is an indispensable strategic imperative for our fighting force, further establishing the United States as the global leader in AI.
Several observations:
- Google is now getting its chance to put Microsoft in its place from inside the Department of War. Maybe the Copilot can come along for the ride, but it could be put on leave.
- The challenge of training is interesting. Training is truly a big deal, and I am curious how that will be handled. The DoW has lots of people to teach about the capabilities of Gemini AI.
- Google may face some push back from its employees. The company has been working to stop the Googlers from getting out of the company prescribed lanes. Will this shift to warfighting create some extra work for the “leadership” of that estimable company? I think Google’s management methods will be exercised.
Net net: Google knows about advertising. Does it have similar capabilities in warfighting?
Stephen E Arnold, December 10, 2025
China Smart US Dumb: An AI Content Marketing Push?
December 1, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I have been monitoring the China Smart, US Dumb campaign for some time. Most of the methods are below the radar; for example, YouTube videos featuring industrious people who seem to be similar to the owner of the Chinese restaurant not far from my office or posts on social media that remind me of the number of Chinese patents achieved each year. Sometimes influencers tout the wonders of a China-developed electric vehicle. None of these sticks out like a semi mainstream media push.

Thanks, Venice.ai, not exactly the hutong I had in mind but close enough for chicken kung pao in Kentucky.
However, that background “China Smart, US Dumb” messaging may be cranking up. I don’t know for sure, but this NBC News (not the Miss Now news) report caught my attention.
The report’s subtitle is snappier than “Girl Fixes Generator,” but you judge for yourself:
AI Startups Are Seeing Record Valuations, But Many Are Building on a Foundation of Cheap, Free-to-Download Chinese AI Models.
The write up states:
Surveying the state of America’s artificial intelligence landscape earlier this year, Misha Laskin was concerned. Laskin, a theoretical physicist and machine learning engineer who helped create some of Google’s most powerful AI models, saw a growing embrace among American AI companies of free, customizable and increasingly powerful “open” AI models.
We have a Xoogler who is concerned. What troubles the wizardly Misha Laskin? NBC News intones in a Stone Phillips tone:
Over the past year, a growing share of America’s hottest AI startups have turned to open Chinese AI models that increasingly rival, and sometimes replace, expensive U.S. systems as the foundation for American AI products.
Ever cautious, NBC News asserts:
The growing embrace could pose a problem for the U.S. AI industry. Investors have staked tens of billions on OpenAI and Anthropic, wagering that leading American artificial intelligence companies will dominate the world’s AI market. But the increasing use of free Chinese models by American companies raises questions about how exceptional those models actually are — and whether America’s pursuit of closed models might be misguided altogether.
Bingo! The theme is China smart and the US “misguided.” And not just misguided, but “misguided altogether.”
NBC News slams the point home with more force than the generator-fixing woman closes the generator’s housing:
in the past year, Chinese companies like Deepseek and Alibaba have made huge technological advancements. Their open-source products now closely approach or even match the performance of leading closed American models in many domains, according to metrics tracked by Artificial Analysis, an independent AI benchmarking company.
I know from personal conversations that most of the people with whom I interact don’t care. Most just accept the belief that the US is chugging along. Not doing great. Not doing terribly. Just moving along. Therefore, I don’t expect you, gentle reader, to think much of this NBC News report.
That’s why the China Smart, US Dumb messaging is effective. But this single example raises the question, “What’s the next major messaging outlet to cover this story?”
Stephen E Arnold, December 1, 2025
AI ASICs: China May Have Plans for AI Software and AI Hardware
December 1, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I try to avoid wild and crazy generalizations, but I want to step back from the US-centric AI craziness and ask a question: “Why is the solution to anticipated AI growth more data centers?” Data centers seem like a trivial part of the broader AI challenge to some of the venture firms, BAIT (big AI technology) companies, and some online pundits. A data center is just a cheap building filled with racks of computers, some specialized gizmos, a connection to the local power company, and a handful of network engineers. Bingo. You are good to go.
But what happens if the compute is provided by Application-Specific Integrated Circuits or ASICs? When ASICs became available for cryptocurrency mining, the individual or small-scale miner was no longer competitive. Large, industrialized crypto mining farms pushed out the individual miners and mom-and-pop operations.
The Ghana ASIC rollout appears to have overwhelmed the person taking orders. Demand for cheap AI compute is strong. Is that person in the blue suit from Nvidia? Thanks, MidJourney. Good enough, the mark of excellence today.
Amazon, Google, and probably other BAIT outfits want to design their own AI chips. The problem is similar to moving silos of corn to a processing plant with a couple of pickup trucks. Capacity at chip fabrication facilities is constrained. Big chip ideas today may not be possible on the time scale set by the teams designing NFL arena-size data centers in Rhode Island- or Mississippi-type locations.
The write up I saw states:
A Chinese startup founded by a former Google engineer claims to have created a new ultra-efficient and relatively low cost AI chip using older manufacturing techniques. Meanwhile, Google itself is now reportedly considering whether to make its own specialized AI chips available to buy. Together, these chips could represent the start of a new processing paradigm which could do for the AI industry what ASICs did for bitcoin mining.
What those ASICs did for crypto mining was shift calculations from individuals to large, centralized data centers. Yep, centralization is definitely better. Big is a positive as well.
The write up adds:
The Chinese startup is Zhonghao Xinying. Its Ghana chip is claimed to offer 1.5 times the performance of Nvidia’s A100 AI GPU while reducing power consumption by 75%. And it does that courtesy of a domestic Chinese chip manufacturing process that the company says is "an order of magnitude lower than that of leading overseas GPU chips." By "an order of magnitude lower," the assumption is that means well behind in technological terms given China’s home-grown chip manufacturing is probably a couple of generations behind the best that TSMC in Taiwan can offer and behind even what the likes of Intel and Samsung can offer, too.
The idea is that if these chips become widely available, they won’t be very good. Probably like the first Chinese BYD electric vehicles. But after some iterative engineering, the Chinese chips are likely to improve. If these improvements coincide with the turn on of the massive data centers the BAIT outfits are building, there might be rethinking required by the Silicon Valley wizards.
Several observations will be offered but these are probably not warranted by anyone other than myself:
- China might subsidize its home-grown chips. The Xoogler is not the only person in the Middle Kingdom trying to find a way around the US approach to smart software. Cheap wins or is disruptive until neutralized in some way.
- New data centers based on the Chinese chips might find customers interested in stepping away from dependence on a technology that most AI companies are using for “me too,” imitative AI services. Competition is good, says Silicon Valley, until it impinges on our business. At that point, tough-to-predict actions come into play.
- Nvidia and other AI-centric companies might find themselves trapped in AI strategies that are comparable to a large US aircraft carrier. These ships are impressive, but it takes time to slow them down, turn them, and steam in a new direction. If Chinese AI ASICs hit the market and improve rapidly, the captains of the US-flagged Transformer vessels will have their hands full and financial officers clamoring for the leaderships’ attention.
Net net: Ponder this question: What is Ghana gonna do?
Stephen E Arnold, December 1, 2025
Has Big Tech Taught the EU to Be Flexible?
November 26, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Here’s a question that arose in a lunch meeting today (November 19, 2025): Has Big Tech brought the European Union to heel? What’s your answer?
The “trust” outfit Thomson Reuters published “EU Eases AI, Privacy Rules As Critics Warn of Caving to Big Tech.”

European Union regulators demonstrate their willingness to be flexible. These exercises are performed in the privacy of a conference room in Brussels. The class is taught by those big tech leaders who have demonstrated their ability to chart a course and keep it. Thanks, Venice.ai. How about your interface? Yep, good enough I think.
The write up reported:
The EU Commission’s “Digital Omnibus”, which faces debate and votes from European countries, proposed to delay stricter rules on use of AI in “high-risk” areas until late 2027, ease rules around cookies and enable more use of data.
Ah, backpedaling seems to be the new Zen moment for the European Union.
The “trust” outfit explains why, sort of:
Europe is scrabbling to balance tough rules with not losing more ground in the global tech race, where companies in the United States and Asia are streaking ahead in artificial intelligence and chips.
Several factors are causing this rethink. I am not going to walk the well-worn path called “Privacy Lane.” The reason for the softening is not a warm summer day. The EU is concerned about:
- Losing traction in the slippery world of smart software
- Failing to cultivate AI start ups with more than a snowball’s chance of surviving in Dante’s inferno of the competitive market
- Keeping AI whiz kids from bailing out of European mathematics, computer science, and physics research centers for some work in Sillycon Valley or delightful Z Valley (Zhongguancun, China, in case you did not know).
From my vantage point in rural Kentucky, it certainly appears that the European Union is fearful of missing out on either the boom or the bust associated with smart software.
Several observations are warranted:
- BAITers are likely to win. (BAIT means Big AI Tech in my lingo.) Why? Money and FOMO
- Other governments are likely to adapt to the needs of the BAITers. Why? Money and FOMO
- The BAIT outfits will be ruthless and interpret the EU’s new flexibility as weakness.
Net net: Worth watching. What do you think? Money? Fear? A combo?
Stephen E Arnold, November 26, 2025
Tim Apple, Granny Scarfs, and Snooping
November 24, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I spotted a write up in a source I usually ignore. I don’t know if the write up is 100 percent on the money. Let’s assume for the purpose of my dinobaby persona that it indeed is. The write up is “Apple to Pay $95 Million to Settle Suit Accusing Siri Of Snoopy Eavesdropping.” Like Apple’s incessant pop ups about my not logging into Facetime, iMessage, and iCloud, Siri being in snoop mode is not surprising to me. Tim Apple, it seems, is winding down. The pace of innovation, in my opinion, is tortoise-like. I have nothing against turtle-like creatures, but a granny scarf for an iPhone? That’s innovation, almost as cutting edge as the candy colored orange iPhone. Stunning indeed.

Is Frederick the Great wearing an Apple Granny Scarf? Thanks, Venice.ai. Good enough.
What does the write up say about this $95 million sad smile?
Apple has agreed to pay $95 million to settle a lawsuit accusing the privacy-minded company of deploying its virtual assistant Siri to eavesdrop on people using its iPhone and other trendy devices. The proposed settlement filed Tuesday in an Oakland, California, federal court would resolve a 5-year-old lawsuit revolving around allegations that Apple surreptitiously activated Siri to record conversations through iPhones and other devices equipped with the virtual assistant for more than a decade.
Apple has managed to work the legal process for five years. Good work, legal eagles. Billable hours and legal moves generate income if my understanding is correct. Also, the notion of “surreptitiously” fascinates me. Why bother with the crazy screen nagging? Just activate what you want and remove the users’ option to disable the function. If you want to be surreptitious, the basic concept as I understand it is to operate so others don’t know what you are doing. Good try, but you failed to implement appropriate secretive operational methods. Better luck next time, or just enable what you want and prevent users from turning off the data vacuum cleaner.
The write up notes:
Apple isn’t acknowledging any wrongdoing in the settlement, which still must be approved by U.S. District Judge Jeffrey White. Lawyers in the case have proposed scheduling a Feb. 14 court hearing in Oakland to review the terms.
I interpreted this passage to mean that the Judge has to do something. I assume that lawyers will do something. Whoever brought the litigation will do something. It strikes me that Apple will not be writing a check any time soon, nor will the fine change how Tim Apple has set up that outstanding Apple entity to harvest money, data, and good vibes.
I have several questions:
- Will Apple offer a complimentary Granny Scarf to each of its attorneys working this case?
- Will Apple’s methods of harvesting data be revealed in a white paper written by either [a] Apple, [b] an unhappy Apple employee, or [c] a researcher laboring in the vineyards of Stanford University or San Jose State?
- Will regulatory authorities and the US judicial folks take steps to curtail the “we do what we want” approach to privacy and security?
I have answers for each of these questions. Here we go:
- No. Granny Scarfs are sold out
- No. No one wants to be hassled endlessly by Apple’s legions of legal eagles
- No. As the recent Meta decision about WhatsApp makes clear, green light, tech bros. Move fast, break things. Just do it.
Stephen E Arnold, November 24, 2025
Danes May Ban Social Media for Kids
November 17, 2025
Australia’s ban on social media for kids under 16 goes into effect December 10. Now another country is pursuing a similar approach. Euro News reports, “Denmark Wants to Ban Access to Social Media for Children Under 15.” We learn:
“The move, led by the Ministry of Digitalisation, would set the age limit for access to social media but give some parents – after a specific assessment – the right to give consent to let their children access social media from age 13. Such a measure would be among the most sweeping steps yet by a European Union government to address concerns about the use of social media among teens and younger children, which has drawn concerns in many parts of an increasingly online world. … The Danish digitalisation ministry statement said the age minimum of 15 would be introduced for ‘certain’ social media, though it did not specify which ones.”
If the Danes follow Australia’s example, those platforms could include TikTok, Facebook, Snapchat, Reddit, Kick, X, Instagram, and YouTube. The write-up describes the motivation behind the push:
“A coalition of lawmakers from the political right, left and centre ‘are making it clear that children should not be left alone in a digital world where harmful content and commercial interests are too much a part of shaping their everyday lives and childhoods,’ the ministry said. ‘Children and young people have their sleep disrupted, lose their peace and concentration, and experience increasing pressure from digital relationships where adults are not always present,’ it said. ‘This is a development that no parent, teacher, or educator can stop alone’.”
That may be true. And it is certainly true that social media poses certain dangers to children and teens. But how would the ban be enforced? The statement does not say. Teens, after all, famously find ways to get around security measures. If only there had been a way for platforms to know about these risks sooner.
Cynthia Murrell, November 17, 2025