AI Security: Big Plus or Big Minus?

October 9, 2025

Agentic AI presents a new security crisis. But one firm stands ready to help you survive the threat. Cybersecurity firm Palo Alto Networks describes “Agentic AI and the Looming Board-Level Security Crisis.” Writer and CSO Haider Pasha sounds the alarm:

“In the past year, my team and I have spoken to over 3,000 of Europe’s top business leaders, and these conversations have led me to a stark conclusion: Three out of four current agentic AI projects are on track to experience significant security challenges. The hype, and resulting FOMO, around AI and agentic AI has led many organisations to run before they’ve learned to walk in this emerging space. It’s no surprise how Gartner expects agentic AI cancellations to rise through 2027 or that an MIT report shows most enterprise GenAI pilots already failing. The situation is even worse from a cybersecurity perspective, with only 6% of organizations leveraging an advanced security framework for AI, according to Stanford.

But the root issue isn’t bad code, it’s bad governance. Unless boards instill a security mindset from the outset and urgently step in to enforce governance while setting clear outcomes and embedding guardrails in agentic AI rollouts, failure is inevitable.”

The post suggests several ways to implement this security mindset from the start. For example, companies should create a council that oversees AI agents across the organization. They should also center initiatives on business goals and risks, not shiny new tech for its own sake. Finally, enforce least-privilege access policies as if the AI agent were a young intern. See the write-up for more details on these measures.

If one is overwhelmed by the thought of implementing these best practices, never fear. Palo Alto Networks just happens to have the platform to help. So go ahead and fear the future, just license the fix now.

Cynthia Murrell, October 9, 2025

AI May Be Like a Disneyland for Threat Actors

October 7, 2025

AI is supposed to revolutionize the world, but bad actors are the ones who are benefitting the most tight now.  AI is the ideal happy place for bad actors, because there’s an easy hack using autonomous browser based agents that use them as a tool for their nefarious deeds.  This alert cokes from Hacker Noon’s story: “Studies Show AI Agents And Browsers Are A Hacker’s Perfect Playground.”

Many companies are running on at least one AI enterprise agent, using it as a tool to fetch external data, etc.  Security, however, is still viewed as an add-on for the developers in this industry.  Zenity Labs, a leading Agentic AI security and governance company, discovered that 3000 publicly accessible MS Copilot agents.  

The Copilot agents failed because they relied on soft boundaries:

“…i.e., fragile, surface-level protections (i.e., instructions to the AI about what it should and shouldn’t do, with no technical controls). Agents were instructed in their prompts to “only help legitimate customers,” yet such rules were easy to bypass. Prompt shields designed to filter malicious inputs proved ineffective, while system messages outlining “acceptable behavior” did little to stop crafted attacks. Critically, there was no technical validation of the input sources feeding the agents, leaving them open to manipulation. With no sandboxing layer separating the agent from live production data, attackers can exploit these weaknesses to access sensitive systems directly.”

White hat hackers also found other AI exploits that were demonstrated at Black Hat USA 2025. Here’s a key factoid: “The more autonomous the AI agent, the higher the security risk.”

Many AI agents are vulnerable to security exploits and it’s a scary thought information is freely available to bad actors.  Hacker Noon suggests putting agents through stress tests to find weak points then adding the necessary security levels.  But Oracle (the marketer of secure enterprise search) and Google (owner of the cyber security big dog Mandiant) have both turned on their klaxons for big league vulnerabilities. Is AI helping? It depends whom one asks.

Whitney Grace, October 7, 2025

Get Cash for Spyware

September 26, 2025

Are you a white hat hacker? Do you have the genius to comprehend code and write your own? Are you a bad actor looking to hang up your black hat and clean up your life? Crowdfense might be the place for you. Here’s the link.

Crowdfense is an organization that “…is the world-leading research hub and acquisition platform for high-quality zero-day exploits and advanced vulnerability research. We acquire the most advanced zero-day research across desktop, mobile, appliances, web and embedded platforms.”

Despite the archaic web design (probably to weed out) uninterested parties, Crowdfense is a respected for spyware. They’re currently advertising for for their Exploit Acquisition Program:

“Since 2017, Crowdfense has operated the world’s most private vulnerability acquisition program, initially backed by a USD 10 million fund and powered by our proprietary Vulnerability Research Hub (VRH) platform. Today, the program has expanded to USD 30 million, with a broader scope that now includes enterprise software, mobile components, and messaging technologies. We offer rewards ranging from USD 10,000 to USD 7 million for full exploit chains or previously unreported capabilities. Partial chains and individual components are assessed individually and priced accordingly. As part of our commitment to the research community, we also offered free high-level technical training to hundreds of vulnerability researchers worldwide.”

If you want to do some good with your bad l33t skills, search for an exploit, invent some spyware, and reap the benefits. You can retire to an island and live off grid. Isn’t that the dream?

Whitney Grace, September 26, 2025

Graphite: Okay, to License Now

September 24, 2025

The US government uses specialized software to gather information related to persons of interest. The brand of popular since NSO Group marketed itself into a pickle is from the Israeli-founded spyware company Paragon Solutions. The US government isn’t a stranger to Paragon Solutions, in fact, El Pais shares in the article, “Graphite, the Israeli Spyware Acquired By ICE” that it renewed its contract with the specialized software company.

The deal was originally signed during Biden’s administration during September 24, but it went against the then president’s executive order that prohibited US agencies from using spyware tools that “posed ‘significant counterintelligence and security risks’ or had been misused by foreign governments to suppress dissent.

During the negotiations, AE Industrial Partners purchased Paragon and merged it with REDLattice, an intelligence contractor located in Virginia. Paragon is now a domestic partner with deep connections to former military and intelligence personnel. The suspension on ICE’s Homeland Security Investigations was quietly lifted on August 29 according to public contracting announcements.

The Us government will use Paragon’s Graphite spyware:

“Graphite is one of the most powerful commercial spy tools available. Once installed, it can take complete control of the target’s phone and extract text messages, emails, and photos; infiltrate encrypted apps like Signal and WhatsApp; access cloud backups; and covertly activate microphones to turn smartphones into listening devices.

The source suggests that although companies like Paragon insist their tools are intended to combat terrorism and organized crime, past use suggests otherwise. Earlier this year, Graphite allegedly has been linked to info gathering in Italy targeting at least some journalists, a few migrant rights activists, and a couple of associates of the definitely worth watching Pope Francis. Paragon stepped away from the home of pizza following alleged “public outrage.”

The US government’s use of specialized software seems to be a major concern among Democrats and Republicans alike. What government agencies are licensing and using Graphite. Beyond Search has absolutely no idea.

Whitney Grace, September 24, 2025

Google: Is It Becoming Microapple?

September 19, 2025

Google’s approach to Android, the freedom to pay Apple to make Google search the default for Safari, and the registering of developers — These are Tim Apple moves. Google has another trendlet too.

Google has 1.8 billion users around the world and according to the Mens Journal Google has a new problem: “Google Issues Major Warning to All 1.8 Billion Users.” There’s a new digital security threat and it involves AI. That’s not a surprise, because artificial intelligence has been a growing concern for cyber security experts for years. Since the technology is becoming more advanced, bad actors are using it for devious actions. The newest round of black hat tricks are called “indirect prompt injections.”

Indirect prompt injections are a threat for individual users, businesses, and governments. Google warned users about this new threat and how it works:

“‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions,’ the blog post continued.

The Google blog post warned that this puts individuals and entities at risk.

‘As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,’ the blog post continued.”

Bad actors have tasked Google’s Gemini (Shock! Gasp!) to infiltrate emails and ask users for their passwords and login information. That’s not the scary part. Most spammy emails have a link for users to click on to collect data, instead this new hack uses Gemini to prompt users for the information. Downloading fear.

Google is already working on counter measures for Gemini. Good luck! Microsoft has had this problem for years! Google and Microsoft are now twins! Is this the era of Google as Microapple?

Whitney Grace, September 19, 2025

AI and Security? What? Huh?

September 18, 2025

As technology advances so do bad actors and their devious actions. Bad actors are so up to date with the latest technology that it takes white hat hackers and cyber security engineers awhile to catch up to them. AI has made bad actors smarter and EWeek explains that there is we are facing a banking security crisis: “Altman Warns Of AI-Powered Fraud Crisis in Banking, Urges Stronger Security Measures.”

OpenAI CEO Sam Altman warned that AI vocal technology is a danger to society. He told the Federal Reserve Vice Chair for Supervision Michelle Bowman that US banks are lagging behind Ai vocal security, because many financial institutions still rely on voiceprint technology to verify customers’ identities.

Altman warned that AI vocal technology can easily replicate humans and deepfake videos are even scarier when they become indistinguishable from reality. Bowman mentioned potential partnering with tech companies to create solutions.

Despite sounding the warning bells, Altman didn’t offer much help:

“Despite OpenAI’s prominence in the AI industry, Altman clarified that the company is not creating tools for impersonation. Still, he stressed that the broader AI community must take responsibility for developing new verification systems, such as “proof of human” solutions.

Altman is supporting tools like The Orb, developed by Tools for Humanity. The device aims to provide “proof of personhood” in a digital world flooded with fakes. His concerns go beyond financial fraud, extending to the potential for AI superintelligence to be misused in areas such as cyberwarfare or biological threats.”

Proof of personhood? It’s like the blue check on verified X/Twitter accounts. Altman might be helping make the future but he’s definitely also part of the problem.

Whitney Grace, September 18, 2025

Google: Klaxons, Red Lights, and Beeps

September 12, 2025

Here we go again with another warning from Google about scams in the form of Gemini. The Mirror reports that, “Google Issues ‘Red Alert’ To Gmail Users Over New AI Scam That Steals Passwords.” Bad actors are stealing passwords using Google’s own chatbot. Hackers are sending emails using Gemini. These emails contain a hidden message to reveal passwords.

Here’s how people are falling for the scam: there’s no link to click in the email. A box pops up alerting you to a risk. That’s all! It’s incredibly simple and scary. Remember that Google will never ask you for your username and password. It’s still the easiest tip to remember when it comes to these scams.

Google issued a statement:

“The tech giant explained the subtlety of the threat: ‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.’ As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.’”

Google also said some calming platitudes but the record replay is getting tiresome.

Whitney Grace, September  12, 2025

AI a Security Risk? No Way or Is It No WAI?

September 11, 2025

Am I the only one who realizes that AI is a security problem? Okay, I’m not but organizations certainly aren’t taking AI security breaches says Venture Beat in the article, “Shadow AI Adds $670K To Breach Costs While 97% Of Enterprises Skip Basic Access Controls, IBM Reports.” IBM collected information with the Ponemon Institute (does anyone else read that as Pokémon Institute?) about data breaches related to AI. IBM and the Ponemon Institute held 3470 interviews with 600 organizations that had data breaches.

Shadow AI is the unauthorized use of AI tools and applications. IBM shared how shadow AI affects organizations in the Cost of a Data Breach Report. Unauthorized usage of AI tools cost organizations $4.63 million and that is 16% more than the $4.44 million global average. YIKES! Another frightening statistic is that 97% of the organizations lacked proper AI access controls. Only 13% had AI-security related breaches compared to 8% who were unaware if AI comprised their systems

Bad actors are using supply chains as their primary attack and AI allows them to automate tasks to blend in with regular traffic. If you want to stay awake at night here are some more numbers:

“A majority of breached organizations (63%) either don’t have an AI governance policy or are still developing one. Even when they have a policy, less than half have an approval process for AI deployments, and 62% lack proper access controls on AI systems.”

An expert said this about the issue:

This pattern of delayed response to known vulnerabilities extends beyond AI governance to fundamental security practices. Chris Goettl, VP Product Management for Endpoint Security at Ivanti, emphasizes the shift in perspective: ‘What we currently call ‘patch management’ should more aptly be named exposure management—or how long is your organization willing to be exposed to a specific vulnerability?’”

Organizations that are aware of AI breaches and have security plans in place save more money.

It pays to be prepared and cheaper too!

Whitney Grace, September 11, 2025

Derailing Smart Software with Invisible Prompts

September 3, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.

The write up states:

Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.

The write up includes examples like these:

… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….

Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.

Stephen E Arnold, September 3, 2025

AI Words Are the Surface: The Deeper Thought Embedding Is the Problem with AI

September 3, 2025

Dino 5 18 25This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker. 

Humans are biased. Content generated by humans reflects these mental patterns. Smart software is probabilistic. So what?

Select the content to train smart software. The more broadly the content base, the greater range of biases will be baked into the Fancy Dan software. Then toss in the human developers who make decisions about thresholds, weights, and rounding. Mix in the wrapper code that does the guardrails which are created by humans with some of those biases, attitudes, and idiosyncratic mental equipment.

Then provide a system to students and people eager to get more done with less effort and what do you get? A partial and important glimpse of the consequences of about 2.5 years of AI as the next big thing are presented in “On-Screen and Now IRL: FSU Researchers Find Evidence of ChatGPT Buzzwords Turning Up in Everyday Speech.”

The write up reports:

“The changes we are seeing in spoken language are pretty remarkable, especially when compared to historical trends,” Juzek said. “What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.”

Conjecture. That’s a weasel word. Once words are embedded they dragged a hard sided carry on with them.

The write up adds:

“Our research highlights many important ethical questions,” Galpin said. “With the ability of LLMs to influence human language comes larger questions about how model biases and misalignment, or differences in behavior in LLMs, may begin to influence human behaviors.”

As more research data become available, I project that several factoids will become points of discussion:

  1. What happens when AI outputs are weaponized for political, personal, or financial gain?
  2. How will people consuming AI outputs recognize that their vocabulary and the attendant “value baggage” is along for the life journey?
  3. What type of mental remapping can be accomplished with shaped AI output?

For now, students are happy to let AI think for them. In the future, will that warm, fuzzy feeling persist. If ignorance is bliss, I say, “Hello, happy.”

Stephen E  Arnold, September 3, 2025

Next Page »

  • Archives

  • Recent Posts

  • Meta