Google: Klaxons, Red Lights, and Beeps
September 12, 2025
Here we go again with another warning from Google about scams in the form of Gemini. The Mirror reports that, “Google Issues ‘Red Alert’ To Gmail Users Over New AI Scam That Steals Passwords.” Bad actors are stealing passwords using Google’s own chatbot. Hackers are sending emails using Gemini. These emails contain a hidden message to reveal passwords.
Here’s how people are falling for the scam: there’s no link to click in the email. A box pops up alerting you to a risk. That’s all! It’s incredibly simple and scary. Remember that Google will never ask you for your username and password. It’s still the easiest tip to remember when it comes to these scams.
Google issued a statement:
“The tech giant explained the subtlety of the threat: ‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.’ As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.’”
Google also said some calming platitudes but the record replay is getting tiresome.
Whitney Grace, September 12, 2025
AI a Security Risk? No Way or Is It No WAI?
September 11, 2025
Am I the only one who realizes that AI is a security problem? Okay, I’m not but organizations certainly aren’t taking AI security breaches says Venture Beat in the article, “Shadow AI Adds $670K To Breach Costs While 97% Of Enterprises Skip Basic Access Controls, IBM Reports.” IBM collected information with the Ponemon Institute (does anyone else read that as Pokémon Institute?) about data breaches related to AI. IBM and the Ponemon Institute held 3470 interviews with 600 organizations that had data breaches.
Shadow AI is the unauthorized use of AI tools and applications. IBM shared how shadow AI affects organizations in the Cost of a Data Breach Report. Unauthorized usage of AI tools cost organizations $4.63 million and that is 16% more than the $4.44 million global average. YIKES! Another frightening statistic is that 97% of the organizations lacked proper AI access controls. Only 13% had AI-security related breaches compared to 8% who were unaware if AI comprised their systems
Bad actors are using supply chains as their primary attack and AI allows them to automate tasks to blend in with regular traffic. If you want to stay awake at night here are some more numbers:
“A majority of breached organizations (63%) either don’t have an AI governance policy or are still developing one. Even when they have a policy, less than half have an approval process for AI deployments, and 62% lack proper access controls on AI systems.”
An expert said this about the issue:
This pattern of delayed response to known vulnerabilities extends beyond AI governance to fundamental security practices. Chris Goettl, VP Product Management for Endpoint Security at Ivanti, emphasizes the shift in perspective: ‘What we currently call ‘patch management’ should more aptly be named exposure management—or how long is your organization willing to be exposed to a specific vulnerability?’”
Organizations that are aware of AI breaches and have security plans in place save more money.
It pays to be prepared and cheaper too!
Whitney Grace, September 11, 2025
Derailing Smart Software with Invisible Prompts
September 3, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.
The write up states:
Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.
The write up includes examples like these:
… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….
Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.
Stephen E Arnold, September 3, 2025
AI Words Are the Surface: The Deeper Thought Embedding Is the Problem with AI
September 3, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Humans are biased. Content generated by humans reflects these mental patterns. Smart software is probabilistic. So what?
Select the content to train smart software. The more broadly the content base, the greater range of biases will be baked into the Fancy Dan software. Then toss in the human developers who make decisions about thresholds, weights, and rounding. Mix in the wrapper code that does the guardrails which are created by humans with some of those biases, attitudes, and idiosyncratic mental equipment.
Then provide a system to students and people eager to get more done with less effort and what do you get? A partial and important glimpse of the consequences of about 2.5 years of AI as the next big thing are presented in “On-Screen and Now IRL: FSU Researchers Find Evidence of ChatGPT Buzzwords Turning Up in Everyday Speech.”
The write up reports:
“The changes we are seeing in spoken language are pretty remarkable, especially when compared to historical trends,” Juzek said. “What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.”
Conjecture. That’s a weasel word. Once words are embedded they dragged a hard sided carry on with them.
The write up adds:
“Our research highlights many important ethical questions,” Galpin said. “With the ability of LLMs to influence human language comes larger questions about how model biases and misalignment, or differences in behavior in LLMs, may begin to influence human behaviors.”
As more research data become available, I project that several factoids will become points of discussion:
- What happens when AI outputs are weaponized for political, personal, or financial gain?
- How will people consuming AI outputs recognize that their vocabulary and the attendant “value baggage” is along for the life journey?
- What type of mental remapping can be accomplished with shaped AI output?
For now, students are happy to let AI think for them. In the future, will that warm, fuzzy feeling persist. If ignorance is bliss, I say, “Hello, happy.”
Stephen E Arnold, September 3, 2025
NATO Cyber Defense Document: Worth a Look
September 1, 2025
It’s so hard to find decent resources on cyber security these days without them trying to sell you on the latest, greatest service own product. One
Great resource is the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE), a multinational and interdisciplinary cyber defense hub. The organization’s mission is to support interdisciplinary expertise in cyber defense research, training and exercises covering the focus areas of technology, strategy, operations, and law.
While CCDCOE primarily serves NATO and member countries, its impactful research is useful for teaching all nations about the importance of cyber security. The organization began in May 2008 and since 2018 it has been responsible for teaching and training all NATO countries about cyber security. One of the organization’s biggest accomplishments is the Tallinn Manual:
“One of the most well-known and internationally recognised research accomplishments for CCDCOE has been the Tallinn Manual process, launched in 2009. The process has involved CCDCOE experts, internationally renowned legal scholars from various nations, legal advisors of nearly 50 states and other partners. Authored by nineteen international law experts, the “Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations” published in 2017 expanded the first edition published in 2013 with a legal analysis of more common cyber incidents that states encounter on a day-to-day basis and that fall below the thresholds of the use of force or armed conflict. The Tallinn Manual 2.0 is the most comprehensive analysis on how existing international law applies to cyberspace.”
CCDCOE is a very influential organization. Cybersecurity and defense is more important now than ever because of the dangers of artificial intelligence. CCDCOE is a fantastic organization to start learning the fundamentals of cybersecurity.
Whitney Grace , September 1, 2025
A Interesting Free Software: FreeVPN
August 28, 2025
No AI. Just a dinobaby working the old-fashioned way.
I often hear about the wonders of open source software. Even an esteemed technologist like Pavel Durov offers free and open source software. He wants to make certain aspects of Telegram transparent. “Transparent” is a popular word in some circles. China releases Qwen and it is free. The commercial variants are particularly stimulating. Download free and open source software. If you run into a problem, just fix it yourself. Alternatively you can pay for “commercial for fee” support. Choice! That’s the right stuff.
I read “Chrome VPN Extension with 100K Installs Screenshots All Sites Users Visit.” Note: By the time you read this, the Googlers may have blocked this extension or the people who rolled out this digital Trojan horse may have modified the extension’s behavior to something slightly less egregious.
Now back to the Trojan horse with a saddle blanket displaying the word “spyware.” I quote:
FreeVPN.One, a Chrome extension with over 100,000 installs and a verified badge on the Chrome Web Store, is exposed by researchers for taking screenshots of users’ screens and exfiltrating them to remote servers. A Koi Security investigation of the VPN tool reveals that it has been capturing full-page screenshots from users’ browsers, logging sensitive visual data like personal messages, financial dashboards, and private photos, and uploading it to aitd[.]one, a domain registered by the extension’s developer.
The explanation makes clear that one downloads and installs or activates a Chrome extension. Then the software sends data to the actor deploying the malware.
The developer says:
The extension’s developer claimed to Koi Security that the background screenshot functionality is part of a “security scan” intended to detect threats.
Whom does one believe? The threat detection outfit or the developer.
Can you recall a similar service? Hint: Capitalize the “r” in “Recall.”
Can the same stealth (clumsy stealth in some cases) exist in other free software? Does a jet air craft stay aloft when its engines fail?
Stephen E Arnold, August 28, 2025
A Better Telegram: Max (imum) Surveillance
August 27, 2025
No AI. Just a dinobaby working the old-fashioned way.
The super duper everything apps include many interesting functions. But one can spice up a messaging app with a bit of old-fashioned ingenuity. The newest “player” in the “secret” messaging game is not some knock off Silicon Valley service. The MAX app has arrived.
Reuters reported in “Russia Orders Sate-Backed MAX Messenger Ap, a WhatsApp Rival, Pre-Installed on Phones and Tablets.” (Did you notice the headline did not include Telegram?) The trusted news source says:
A Russian state-backed messenger application called MAX, a rival to WhatsApp that critics say could be used to track users, must be pre-installed on all mobile phones and tablets from next month, the Russian government said on Thursday. The decision to promote MAX comes as Moscow is seeking greater control over the internet space as it is locked in a standoff with the West over Ukraine, which it casts as part of an attempt to shape a new world order.
I like the inclusion of a reference to “a new world order.”
The trusted news source adds:
State media says accusations from Kremlin critics that MAX is a spying app are false and that it has fewer permissions to access user data than rivals WhatsApp and Telegram.
Yep, Telegram. Several questions:
- Are any of the companies supporting MAX providing services to Telegram?
- Were any of the technologists working on MAX associated with VKontakte or Telegram?
- Will other countries find the MAX mandated installation an interesting idea?
- How does MAX intersect with data captured from Russia-based telecom outfits and online service providers?
I can’t answer these questions, but I would think that a trusted news service would.
Stephen E Arnold, August 27, 2025
What Cyber Security Professionals “Fear”
August 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
My colleague Robert David Steele (now deceased) loved to attend Black Hat. He regaled me with the changing demographics of the conference, the reaction to his often excitement-inducing presentations, and the interesting potential “resources” he identified. I was content to stay in my underground office in rural Kentucky and avoid the hacking and posturing.
I still keep up (sort of but not too enthusiastically) with Black Hat events by reading articles like “Black Hat 2025: What Keeps Cyber Experts Up at Night?” The write up explains that:
“Machines move faster than humans.”
Okay, that makes sense. The write up then points out:
“Tools like generative AI are fueling faster, more convincing phishing and social engineering campaigns.”
I concluded that cyber security professionals fear fast computers and smart software. When these two things are combined, the write up states:
The speed of AI innovation is stretching security management to its limits.
My conclusion is that the wide availability of smart software is the big “fear.”
I interpret the information in the write up from a slightly different angle. Let me explain.
First, cyber security companies have to make money to stay in business. I could name one Russian outfit that gets state support, but I don’t want to create waves. Let’s go with money is the driver of cyber security. In order to make money, the firms have to come up with fancy ways of explaining DNS analysis, some fancy math, or yet another spin on the Maltego graph software. I understand.
Second, cyber security companies are by definition reactive. So far the integration of smart software into the policeware and intelware systems I track adds some workflow enhancements; for example, grouping information and in some cases generating a brief paragraph, thus saving time. Proactive perimeter defense systems and cyber methods designed to spot insider attacks are in what I call “sort of helpful” mode. These systems can easily overwhelm the person monitoring the data signals. Firms respond by popping up a level with another layer of abstraction. Those using the systems are busy, of course, and it is not clear if more work gets done or if time is bled off to do busy-work. Cyber security firms, therefore, are usually not in proactive mode except for marketing.
Third, cyber security firms are consolidating. I think about outfits like Pala Alto or the private equity roll ups. The result is that bureaucratic friction is added to the technology development these firms must do. Just figuring out how to snag data from the latest and greatest Dark Web secret forum and actually getting access to a Private Channel on Telegram disseminating content that is illegal in many jurisdictions takes time. With smart software, bad actors can experiment. The self-appointed gatekeepers do little to filter these malware activities because some bad actors are customers of the gatekeepers. (No, I won’t name firms. I don’t want to talk to lawyers or inflamed cyber security firms’ leadership.) My point is that consolidation creates bureaucratic work. That activity puts the foot on the fast moving cyber firm’s brakes. Reaction time slows.
What does this mean?
I think the number one fear for cyber security professionals may be the awareness that bad actors with zero bureaucratic, technical, or financial limits can use AI to make old wine new again. Recently a major international law enforcement organization announced the shutdown of particular stealer software. Unfortunately that stealer is currently being disseminated via Web search systems with live links to the Telegram-centric vendor pumping the malware into thousands of unsuspecting Telegram users each month.
What happens when that “old school” stealer is given some new capabilities by one of the smart software tools? The answer is, “Cyber security firms may have to hype their capabilities to an even greater degree than they now do. Behind the scenes, the stage is now set for developer burn out and churn.
The fear, then, is a nagging sense that bad guys may be getting a tool kit to punch holes in what looks like a slam dunk business. I am probably wrong because I am a dinobaby. I don’t go to many conferences. I don’t go to sales meetings. I don’t meet with private equity people. I just look at how AI makes asymmetric cyber warfare into a tough game. One should not take a squirt gun to a shoot out with a bad actor working without bureaucratic and financial restraints armed with an AI system.
Stephen E Arnold, August 21, 2025
Cyber Security: Evidence That Performance Is Different from Marketing
August 20, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
In 2022, Google bought a cyber security outfit named Mandiant. The firm had been around since 2004, but when Google floated more than $5 billion for the company, it was time to sell.
If you don’t recall, Google operates a large cloud business and is trying diligently to sell to Microsoft customers in the commercial and government sector. A cyber security outfit would allow Google to argue that it would offer better security for its customers and their users.
Mandiant’s business was threat intelligence. The idea is that Mandiant would monitor forums, the Web, and any other online information about malware and other criminal cyber operations. As an added bonus, Mandiant would blend automated security functions with its technology. Wham, bam! Slam dunk, right?
I read “Google Confirms Major Security Breach After Hackers Linked To ShinyHunters Steal Sensitive Corporate Data, Including Business Contact Information, In Coordinated Cyberattack.” First, a disclaimer. I have no idea if this WCCF Tech story is 100 percent accurate. It could be one of those Microsoft 1,000 Russian programmers are attacking us” plays. On the other hand, it will be fun to assume that some of the information in the cited article is accurate.
With that as background, I noted this passage:
The tech giant has recently confirmed a data breach linked to the ShinyHunters ransomware group, which targeted Google’s corporate Salesforce database systems containing business contact information.
Okay. Google’s security did not work. A cloud customer’s data were compromised. The assertion that Google’s security is better than or equal to Microsoft’s is tough for me to swallow.
Here’s another passage:
As per Google’s Threat Intelligence Group (GTIG), the hackers used a voice phishing technique that involved calling employees while pretending to be members of the internal IT team, in order to have them install an altered version of Salesforce’s Data Loader. By using this technique, the attackers were able to access the database before their intrusion was detected.
A human fooled another human. The automated systems were flummoxed. The breach allegedly took place.
Several observations are warranted:
- This is security until a breach occurs. I am not sure that customers expect this type of “footnote” to their cyber security licensing mumbo jumbo. The idea is that Google should deliver a secure service.
- Mandiant, like other threat intelligence services, allows the customer to assume that the systems and methods generally work. That’s true until they don’t.
- Bad actors have an advantage. Armed with smart software and tools that can emulate my dead grandfather, the humans remain a chink in the otherwise much-hyped armor of an outfit like Google.
What this example, even if only partly accurate, makes it clear than cyber security marketing performs better than the systems some of the firms sell. Consider that the victim was Google. That company has touted its technical superiority for decades. Then Google buys extra security. The combo delivers what? Evidence that believing the cyber security marketing may do little to reduce the vulnerability of an organization. What’s notable is that the missteps were Google’s. Microsoft may enshrine this breach case and mount it on the walls of every cyber security employees’ cubicles.
I can imagine hearing a computer-generated voice emulating Bill Gates’, saying, “It wasn’t us this time.”
Stephen E Arnold, August 20, 2025
News Flash from the Past: Bad Actors Use New Technology and Adapt Quickly
August 18, 2025
No AI. Just a dinobaby working the old-fashioned way.
NBC News is on top of cyber security trends. I think someone spotted Axios report that bad actors were using smart software to outfox cyber security professionals. I am not sure this is news, but what do I know?
“Criminals, Good Guys and Foreign Spies: Hackers Everywhere Are Using AI Now” reports this “hot off the press” information. I quote:
The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.
My goodness. Who knew that stealers have been zipping around for many years? Even more startling old information is:
LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents. The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster.
Stunning. A free chunk of smart software, unemployed or intra-gig programmers, and juicy targets pushed out with a fairy land of vulnerabilities. Isn’t it insightful that bad actors would apply these tools to clueless employees, inherently vulnerable operating systems, and companies too busy outputting marketing collateral to do routine security updates.
The cat-and-mouse game works this way. Bad actors with access to useful scripting languages, programming expertise, and smart software want to generate revenue or wreck havoc. One individual or perhaps a couple of people in a coffee shop hit upon a better way to access a corporate network or obtain personally identifiable information from a hapless online user.
Then, after the problem has been noticed and reported, cyber security professionals will take a closer look. If these outfits have smart software running, a human will look more closely at logs and say, “I think I saw something.”
Okay, mice are in and swarming. Now the cats jump into action. The cats will find [a] a way to block the exploit, [b] rush to push the fix to paying customers, and [c] share the information in a blog post or a conference.
What happens? The bad actors notice their mice aren’t working or they are being killed instantly. The bad actors go back to work. In most cases, the bad actors are not unencumbered by bureaucracy or tough thought problems about whether something is legal or illegal. The bad actors launch more attacks. If one works, its gravy.
Now the cats jump back into the fray.
In the current cyber crime world, cyber security firms, investigators, and lawyers are in reactive mode. The bad actors play offense.
One quick example: Telegram has been enabling a range of questionable online activities since 2013. In 2024 after a decade of inaction, France said, “Enough.” Authorities in France arrested Pavel Durov. The problem from my point of view is that it took 12 years to man up to the icon Pavel Durov.
What happens when a better Telegram comes along built with AI as part of its plumbing?
The answer is, “You can buy licenses to many cyber security systems. Will they work?”
There are some large, capable mice out there in cyber space.
Stephen E Arnold, August 18, 2025