Google: Is It Becoming Microapple?
September 19, 2025
Google’s approach to Android, the freedom to pay Apple to make Google search the default for Safari, and the registering of developers — These are Tim Apple moves. Google has another trendlet too.
Google has 1.8 billion users around the world and according to the Mens Journal Google has a new problem: “Google Issues Major Warning to All 1.8 Billion Users.” There’s a new digital security threat and it involves AI. That’s not a surprise, because artificial intelligence has been a growing concern for cyber security experts for years. Since the technology is becoming more advanced, bad actors are using it for devious actions. The newest round of black hat tricks are called “indirect prompt injections.”
Indirect prompt injections are a threat for individual users, businesses, and governments. Google warned users about this new threat and how it works:
“‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions,’ the blog post continued.
The Google blog post warned that this puts individuals and entities at risk.
‘As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,’ the blog post continued.”
Bad actors have tasked Google’s Gemini (Shock! Gasp!) to infiltrate emails and ask users for their passwords and login information. That’s not the scary part. Most spammy emails have a link for users to click on to collect data, instead this new hack uses Gemini to prompt users for the information. Downloading fear.
Google is already working on counter measures for Gemini. Good luck! Microsoft has had this problem for years! Google and Microsoft are now twins! Is this the era of Google as Microapple?
Whitney Grace, September 19, 2025
AI and Security? What? Huh?
September 18, 2025
As technology advances so do bad actors and their devious actions. Bad actors are so up to date with the latest technology that it takes white hat hackers and cyber security engineers awhile to catch up to them. AI has made bad actors smarter and EWeek explains that there is we are facing a banking security crisis: “Altman Warns Of AI-Powered Fraud Crisis in Banking, Urges Stronger Security Measures.”
OpenAI CEO Sam Altman warned that AI vocal technology is a danger to society. He told the Federal Reserve Vice Chair for Supervision Michelle Bowman that US banks are lagging behind Ai vocal security, because many financial institutions still rely on voiceprint technology to verify customers’ identities.
Altman warned that AI vocal technology can easily replicate humans and deepfake videos are even scarier when they become indistinguishable from reality. Bowman mentioned potential partnering with tech companies to create solutions.
Despite sounding the warning bells, Altman didn’t offer much help:
“Despite OpenAI’s prominence in the AI industry, Altman clarified that the company is not creating tools for impersonation. Still, he stressed that the broader AI community must take responsibility for developing new verification systems, such as “proof of human” solutions.
Altman is supporting tools like The Orb, developed by Tools for Humanity. The device aims to provide “proof of personhood” in a digital world flooded with fakes. His concerns go beyond financial fraud, extending to the potential for AI superintelligence to be misused in areas such as cyberwarfare or biological threats.”
Proof of personhood? It’s like the blue check on verified X/Twitter accounts. Altman might be helping make the future but he’s definitely also part of the problem.
Whitney Grace, September 18, 2025
Google: Klaxons, Red Lights, and Beeps
September 12, 2025
Here we go again with another warning from Google about scams in the form of Gemini. The Mirror reports that, “Google Issues ‘Red Alert’ To Gmail Users Over New AI Scam That Steals Passwords.” Bad actors are stealing passwords using Google’s own chatbot. Hackers are sending emails using Gemini. These emails contain a hidden message to reveal passwords.
Here’s how people are falling for the scam: there’s no link to click in the email. A box pops up alerting you to a risk. That’s all! It’s incredibly simple and scary. Remember that Google will never ask you for your username and password. It’s still the easiest tip to remember when it comes to these scams.
Google issued a statement:
“The tech giant explained the subtlety of the threat: ‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.’ As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.’”
Google also said some calming platitudes but the record replay is getting tiresome.
Whitney Grace, September 12, 2025
AI a Security Risk? No Way or Is It No WAI?
September 11, 2025
Am I the only one who realizes that AI is a security problem? Okay, I’m not but organizations certainly aren’t taking AI security breaches says Venture Beat in the article, “Shadow AI Adds $670K To Breach Costs While 97% Of Enterprises Skip Basic Access Controls, IBM Reports.” IBM collected information with the Ponemon Institute (does anyone else read that as Pokémon Institute?) about data breaches related to AI. IBM and the Ponemon Institute held 3470 interviews with 600 organizations that had data breaches.
Shadow AI is the unauthorized use of AI tools and applications. IBM shared how shadow AI affects organizations in the Cost of a Data Breach Report. Unauthorized usage of AI tools cost organizations $4.63 million and that is 16% more than the $4.44 million global average. YIKES! Another frightening statistic is that 97% of the organizations lacked proper AI access controls. Only 13% had AI-security related breaches compared to 8% who were unaware if AI comprised their systems
Bad actors are using supply chains as their primary attack and AI allows them to automate tasks to blend in with regular traffic. If you want to stay awake at night here are some more numbers:
“A majority of breached organizations (63%) either don’t have an AI governance policy or are still developing one. Even when they have a policy, less than half have an approval process for AI deployments, and 62% lack proper access controls on AI systems.”
An expert said this about the issue:
This pattern of delayed response to known vulnerabilities extends beyond AI governance to fundamental security practices. Chris Goettl, VP Product Management for Endpoint Security at Ivanti, emphasizes the shift in perspective: ‘What we currently call ‘patch management’ should more aptly be named exposure management—or how long is your organization willing to be exposed to a specific vulnerability?’”
Organizations that are aware of AI breaches and have security plans in place save more money.
It pays to be prepared and cheaper too!
Whitney Grace, September 11, 2025
Derailing Smart Software with Invisible Prompts
September 3, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.
The write up states:
Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.
The write up includes examples like these:
… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….
Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.
Stephen E Arnold, September 3, 2025
AI Words Are the Surface: The Deeper Thought Embedding Is the Problem with AI
September 3, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Humans are biased. Content generated by humans reflects these mental patterns. Smart software is probabilistic. So what?
Select the content to train smart software. The more broadly the content base, the greater range of biases will be baked into the Fancy Dan software. Then toss in the human developers who make decisions about thresholds, weights, and rounding. Mix in the wrapper code that does the guardrails which are created by humans with some of those biases, attitudes, and idiosyncratic mental equipment.
Then provide a system to students and people eager to get more done with less effort and what do you get? A partial and important glimpse of the consequences of about 2.5 years of AI as the next big thing are presented in “On-Screen and Now IRL: FSU Researchers Find Evidence of ChatGPT Buzzwords Turning Up in Everyday Speech.”
The write up reports:
“The changes we are seeing in spoken language are pretty remarkable, especially when compared to historical trends,” Juzek said. “What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.”
Conjecture. That’s a weasel word. Once words are embedded they dragged a hard sided carry on with them.
The write up adds:
“Our research highlights many important ethical questions,” Galpin said. “With the ability of LLMs to influence human language comes larger questions about how model biases and misalignment, or differences in behavior in LLMs, may begin to influence human behaviors.”
As more research data become available, I project that several factoids will become points of discussion:
- What happens when AI outputs are weaponized for political, personal, or financial gain?
- How will people consuming AI outputs recognize that their vocabulary and the attendant “value baggage” is along for the life journey?
- What type of mental remapping can be accomplished with shaped AI output?
For now, students are happy to let AI think for them. In the future, will that warm, fuzzy feeling persist. If ignorance is bliss, I say, “Hello, happy.”
Stephen E Arnold, September 3, 2025
NATO Cyber Defense Document: Worth a Look
September 1, 2025
It’s so hard to find decent resources on cyber security these days without them trying to sell you on the latest, greatest service own product. One
Great resource is the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE), a multinational and interdisciplinary cyber defense hub. The organization’s mission is to support interdisciplinary expertise in cyber defense research, training and exercises covering the focus areas of technology, strategy, operations, and law.
While CCDCOE primarily serves NATO and member countries, its impactful research is useful for teaching all nations about the importance of cyber security. The organization began in May 2008 and since 2018 it has been responsible for teaching and training all NATO countries about cyber security. One of the organization’s biggest accomplishments is the Tallinn Manual:
“One of the most well-known and internationally recognised research accomplishments for CCDCOE has been the Tallinn Manual process, launched in 2009. The process has involved CCDCOE experts, internationally renowned legal scholars from various nations, legal advisors of nearly 50 states and other partners. Authored by nineteen international law experts, the “Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations” published in 2017 expanded the first edition published in 2013 with a legal analysis of more common cyber incidents that states encounter on a day-to-day basis and that fall below the thresholds of the use of force or armed conflict. The Tallinn Manual 2.0 is the most comprehensive analysis on how existing international law applies to cyberspace.”
CCDCOE is a very influential organization. Cybersecurity and defense is more important now than ever because of the dangers of artificial intelligence. CCDCOE is a fantastic organization to start learning the fundamentals of cybersecurity.
Whitney Grace , September 1, 2025
A Interesting Free Software: FreeVPN
August 28, 2025
No AI. Just a dinobaby working the old-fashioned way.
I often hear about the wonders of open source software. Even an esteemed technologist like Pavel Durov offers free and open source software. He wants to make certain aspects of Telegram transparent. “Transparent” is a popular word in some circles. China releases Qwen and it is free. The commercial variants are particularly stimulating. Download free and open source software. If you run into a problem, just fix it yourself. Alternatively you can pay for “commercial for fee” support. Choice! That’s the right stuff.
I read “Chrome VPN Extension with 100K Installs Screenshots All Sites Users Visit.” Note: By the time you read this, the Googlers may have blocked this extension or the people who rolled out this digital Trojan horse may have modified the extension’s behavior to something slightly less egregious.
Now back to the Trojan horse with a saddle blanket displaying the word “spyware.” I quote:
FreeVPN.One, a Chrome extension with over 100,000 installs and a verified badge on the Chrome Web Store, is exposed by researchers for taking screenshots of users’ screens and exfiltrating them to remote servers. A Koi Security investigation of the VPN tool reveals that it has been capturing full-page screenshots from users’ browsers, logging sensitive visual data like personal messages, financial dashboards, and private photos, and uploading it to aitd[.]one, a domain registered by the extension’s developer.
The explanation makes clear that one downloads and installs or activates a Chrome extension. Then the software sends data to the actor deploying the malware.
The developer says:
The extension’s developer claimed to Koi Security that the background screenshot functionality is part of a “security scan” intended to detect threats.
Whom does one believe? The threat detection outfit or the developer.
Can you recall a similar service? Hint: Capitalize the “r” in “Recall.”
Can the same stealth (clumsy stealth in some cases) exist in other free software? Does a jet air craft stay aloft when its engines fail?
Stephen E Arnold, August 28, 2025
A Better Telegram: Max (imum) Surveillance
August 27, 2025
No AI. Just a dinobaby working the old-fashioned way.
The super duper everything apps include many interesting functions. But one can spice up a messaging app with a bit of old-fashioned ingenuity. The newest “player” in the “secret” messaging game is not some knock off Silicon Valley service. The MAX app has arrived.
Reuters reported in “Russia Orders Sate-Backed MAX Messenger Ap, a WhatsApp Rival, Pre-Installed on Phones and Tablets.” (Did you notice the headline did not include Telegram?) The trusted news source says:
A Russian state-backed messenger application called MAX, a rival to WhatsApp that critics say could be used to track users, must be pre-installed on all mobile phones and tablets from next month, the Russian government said on Thursday. The decision to promote MAX comes as Moscow is seeking greater control over the internet space as it is locked in a standoff with the West over Ukraine, which it casts as part of an attempt to shape a new world order.
I like the inclusion of a reference to “a new world order.”
The trusted news source adds:
State media says accusations from Kremlin critics that MAX is a spying app are false and that it has fewer permissions to access user data than rivals WhatsApp and Telegram.
Yep, Telegram. Several questions:
- Are any of the companies supporting MAX providing services to Telegram?
- Were any of the technologists working on MAX associated with VKontakte or Telegram?
- Will other countries find the MAX mandated installation an interesting idea?
- How does MAX intersect with data captured from Russia-based telecom outfits and online service providers?
I can’t answer these questions, but I would think that a trusted news service would.
Stephen E Arnold, August 27, 2025
What Cyber Security Professionals “Fear”
August 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
My colleague Robert David Steele (now deceased) loved to attend Black Hat. He regaled me with the changing demographics of the conference, the reaction to his often excitement-inducing presentations, and the interesting potential “resources” he identified. I was content to stay in my underground office in rural Kentucky and avoid the hacking and posturing.
I still keep up (sort of but not too enthusiastically) with Black Hat events by reading articles like “Black Hat 2025: What Keeps Cyber Experts Up at Night?” The write up explains that:
“Machines move faster than humans.”
Okay, that makes sense. The write up then points out:
“Tools like generative AI are fueling faster, more convincing phishing and social engineering campaigns.”
I concluded that cyber security professionals fear fast computers and smart software. When these two things are combined, the write up states:
The speed of AI innovation is stretching security management to its limits.
My conclusion is that the wide availability of smart software is the big “fear.”
I interpret the information in the write up from a slightly different angle. Let me explain.
First, cyber security companies have to make money to stay in business. I could name one Russian outfit that gets state support, but I don’t want to create waves. Let’s go with money is the driver of cyber security. In order to make money, the firms have to come up with fancy ways of explaining DNS analysis, some fancy math, or yet another spin on the Maltego graph software. I understand.
Second, cyber security companies are by definition reactive. So far the integration of smart software into the policeware and intelware systems I track adds some workflow enhancements; for example, grouping information and in some cases generating a brief paragraph, thus saving time. Proactive perimeter defense systems and cyber methods designed to spot insider attacks are in what I call “sort of helpful” mode. These systems can easily overwhelm the person monitoring the data signals. Firms respond by popping up a level with another layer of abstraction. Those using the systems are busy, of course, and it is not clear if more work gets done or if time is bled off to do busy-work. Cyber security firms, therefore, are usually not in proactive mode except for marketing.
Third, cyber security firms are consolidating. I think about outfits like Pala Alto or the private equity roll ups. The result is that bureaucratic friction is added to the technology development these firms must do. Just figuring out how to snag data from the latest and greatest Dark Web secret forum and actually getting access to a Private Channel on Telegram disseminating content that is illegal in many jurisdictions takes time. With smart software, bad actors can experiment. The self-appointed gatekeepers do little to filter these malware activities because some bad actors are customers of the gatekeepers. (No, I won’t name firms. I don’t want to talk to lawyers or inflamed cyber security firms’ leadership.) My point is that consolidation creates bureaucratic work. That activity puts the foot on the fast moving cyber firm’s brakes. Reaction time slows.
What does this mean?
I think the number one fear for cyber security professionals may be the awareness that bad actors with zero bureaucratic, technical, or financial limits can use AI to make old wine new again. Recently a major international law enforcement organization announced the shutdown of particular stealer software. Unfortunately that stealer is currently being disseminated via Web search systems with live links to the Telegram-centric vendor pumping the malware into thousands of unsuspecting Telegram users each month.
What happens when that “old school” stealer is given some new capabilities by one of the smart software tools? The answer is, “Cyber security firms may have to hype their capabilities to an even greater degree than they now do. Behind the scenes, the stage is now set for developer burn out and churn.
The fear, then, is a nagging sense that bad guys may be getting a tool kit to punch holes in what looks like a slam dunk business. I am probably wrong because I am a dinobaby. I don’t go to many conferences. I don’t go to sales meetings. I don’t meet with private equity people. I just look at how AI makes asymmetric cyber warfare into a tough game. One should not take a squirt gun to a shoot out with a bad actor working without bureaucratic and financial restraints armed with an AI system.
Stephen E Arnold, August 21, 2025

