AI Security: Big Plus or Big Minus?

October 9, 2025

Agentic AI presents a new security crisis. But one firm stands ready to help you survive the threat. Cybersecurity firm Palo Alto Networks describes “Agentic AI and the Looming Board-Level Security Crisis.” Writer and CSO Haider Pasha sounds the alarm:

“In the past year, my team and I have spoken to over 3,000 of Europe’s top business leaders, and these conversations have led me to a stark conclusion: Three out of four current agentic AI projects are on track to experience significant security challenges. The hype, and resulting FOMO, around AI and agentic AI has led many organisations to run before they’ve learned to walk in this emerging space. It’s no surprise how Gartner expects agentic AI cancellations to rise through 2027 or that an MIT report shows most enterprise GenAI pilots already failing. The situation is even worse from a cybersecurity perspective, with only 6% of organizations leveraging an advanced security framework for AI, according to Stanford.

But the root issue isn’t bad code, it’s bad governance. Unless boards instill a security mindset from the outset and urgently step in to enforce governance while setting clear outcomes and embedding guardrails in agentic AI rollouts, failure is inevitable.”

The post suggests several ways to implement this security mindset from the start. For example, companies should create a council that oversees AI agents across the organization. They should also center initiatives on business goals and risks rather than shiny new tech for its own sake. Finally, they should enforce least-privilege access policies, granting each AI agent no more access than a young intern would get; a rough sketch of that idea follows. See the write-up for more details on these measures.
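The least-privilege point is the most concrete of the three, so here is a minimal illustration of what "intern-level" permissions could look like in practice. The agent names, tool names, and helper function are hypothetical and not drawn from the Palo Alto Networks post; the pattern being shown is simply deny-by-default with a short, explicit allow-list per agent.

```python
# Hypothetical sketch only: agent IDs and tool names are illustrative,
# not from any vendor's API. The point is the pattern: deny by default,
# and grant each agent only the narrow set of tools it actually needs.

ALLOWED_TOOLS = {
    "reporting-agent": {"read_sales_dashboard", "draft_summary_email"},
    "hr-helper-agent": {"read_policy_docs"},
}

def authorize(agent_id: str, requested_tool: str) -> bool:
    """Allow only tools explicitly granted to this agent; everything else is denied."""
    return requested_tool in ALLOWED_TOOLS.get(agent_id, set())

if __name__ == "__main__":
    # The reporting agent may read the sales dashboard...
    print(authorize("reporting-agent", "read_sales_dashboard"))  # True
    # ...but it was never granted the ability to issue refunds.
    print(authorize("reporting-agent", "issue_refund"))          # False
```

The design choice here mirrors the "young intern" framing: nothing is inherited, every privilege is granted explicitly, and an unlisted request simply fails.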

If one is overwhelmed by the thought of implementing these best practices, never fear. Palo Alto Networks just happens to have the platform to help. So go ahead and fear the future; just license the fix now.

Cynthia Murrell, October 9, 2025
