AI a Security Risk? No Way or Is It No WAI?

September 11, 2025

Am I the only one who realizes that AI is a security problem? Okay, I’m not but organizations certainly aren’t taking AI security breaches says Venture Beat in the article, “Shadow AI Adds $670K To Breach Costs While 97% Of Enterprises Skip Basic Access Controls, IBM Reports.” IBM collected information with the Ponemon Institute (does anyone else read that as Pokémon Institute?) about data breaches related to AI. IBM and the Ponemon Institute held 3470 interviews with 600 organizations that had data breaches.

Shadow AI is the unauthorized use of AI tools and applications. IBM shared how shadow AI affects organizations in the Cost of a Data Breach Report. Unauthorized usage of AI tools cost organizations $4.63 million and that is 16% more than the $4.44 million global average. YIKES! Another frightening statistic is that 97% of the organizations lacked proper AI access controls. Only 13% had AI-security related breaches compared to 8% who were unaware if AI comprised their systems

Bad actors are using supply chains as their primary attack and AI allows them to automate tasks to blend in with regular traffic. If you want to stay awake at night here are some more numbers:

“A majority of breached organizations (63%) either don’t have an AI governance policy or are still developing one. Even when they have a policy, less than half have an approval process for AI deployments, and 62% lack proper access controls on AI systems.”

An expert said this about the issue:

This pattern of delayed response to known vulnerabilities extends beyond AI governance to fundamental security practices. Chris Goettl, VP Product Management for Endpoint Security at Ivanti, emphasizes the shift in perspective: ‘What we currently call ‘patch management’ should more aptly be named exposure management—or how long is your organization willing to be exposed to a specific vulnerability?’”

Organizations that are aware of AI breaches and have security plans in place save more money.

It pays to be prepared and cheaper too!

Whitney Grace, September 11, 2025

Microsoft: The Secure Discount King

September 10, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

Let’s assume that this story in The Register is dead accurate. Let’s forget that Google slapped the $0.47 smart software price tag on its Gemini smart software. Now let’s look at the interesting information in “Microsoft Rewarded for Security Failures with Another US Government Contract.” Snappy title. But check out the sub-title for the article: “Free Copilot for Any Agency Who Actually Wants It.”

I did not know that a US government agency was human signaled by the “who.” But let’s push forward.

The article states:

The General Services Administration (GSA) announced its new deal with Microsoft on Tuesday, describing it as a “strategic partnership” that could save the federal government as much as $3.1 billion over the next year. The GSA didn’t mention specific discount terms, but it said that services, including Microsoft 365, Azure cloud services, Dynamics 365, Entra ID Governance, and Microsoft Sentinel, will be cheaper than ever for feds.  That, and Microsoft’s next-gen Clippy, also known as Copilot, is free to access for any agency with a G5 contract as part of the new deal, too. That free price undercuts Google’s previously cheapest-in-show deal to inject Gemini into government agencies for just $0.47 for a year.

Will anyone formulate the hypothesis that Microsoft and Google are providing deep discounts to get government deals and the every-popular scope changes, engineering services, and specialized consulting fees?

I would not.

I quite like comparing Microsoft’s increasingly difficult to explain OpenAI, acqui-hire, and home-grown smart software as Clippy. I think that the more apt comparison is the outstanding Microsoft Bob solution to interface complexity.

The article explains that Oracle landed contracts with a discount, then Google, and now Microsoft. What about the smaller firms? Yeah, there are standard procurement guidelines for those outfits. Follow the rules and stop suggesting that giant companies are discounting there way into the US government.

What happens if these solutions hallucinate, do not deliver what an Inspector General, an Independent Verification & Validation team, or the General Accounting Office expects? Here’s the answer:

With the exception of AWS, all the other OneGov deals that have been announced so far have a very short shelf life, with most expirations at the end of 2026. Critics of the OneGov program have raised concerns that OneGov deals have set government agencies up for a new era of vendor lock-in not seen since the early cloud days, where one-year discounts leave agencies dependent on services that could suddenly become considerably more expensive by the end of next year.

The write up quotes one smaller outfit’s senior manager’s concern about low prices. But the deals are done, and the work on the 2026-2027 statements of work has begun, folks. Small outfits often lack the luxury of staff dedicated to extending a service provider’s engagement into a year or two renewal target.

The write up concludes by bringing up ancient history like those pop archaeologists on YouTube who explain that ancient technology created urns with handles. The write up says:

It was mere days ago that we reported on the Pentagon’s decision to formally bar Microsoft from using China-based engineers to support sensitive cloud services deployed by the Defense Department, a practice Defense Secretary Pete Hegseth called “mind-blowing” in a statement last week.  Then there was last year’s episodes that allowed Chinese and Russian cyber spies to break into Exchange accounts used by high-level federal officials and steal a whole bunch of emails and other information. That incident, and plenty more before it, led former senior White House cyber policy director AJ Grotto to conclude that Microsoft was an honest-to-goodness national security threat. None of that has mattered much, as the feds seem content to continue paying Microsoft for its services, despite wagging their finger at Redmond for “avoidable errors.”

Ancient history or aliens? I don’t know. But Microsoft does deals, and it is tough to resist “free”.

Stephen E Arnold, September 10, 2025

First, Let Us Kill Relevance for Once and For All. Second, Just Use Google

September 9, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

In the long distant past, Danny Sullivan was a search engine optimization-oriented journalist. I think we was involved with an outfit called Search Engine Land. He gave talks and had an animated dinosaur as his cursor. I recall liking the dinosaur. On August 29, 2025, Search Engine Land published a story unthinkable years ago when Google was the one and only game in town.

The article “ChatGPT, AI Tools Gain Traction as Google Search Slips: Survey” says:

“AI tool use is accelerating in everyday search, with ChatGPT use nearly tripling while Google’s share slips, survey of US users finds.”

But Google just sold the US government at $0.47 per head the Gemini system. How can these procurement people have gone off track? The write up says:

Google’s role in everyday information seeking is shrinking, while AI tools – particularly ChatGPT – are quickly gaining ground. That’s according to a new Higher Visibility survey of 1,500 U.S. users.

And here’s another statement that caught my eye:

Search behavior is fractured, which means SEOs cannot rely on Google Search alone (though, to be clear, SEO for Google remains as critical as ever). Therefore, SEO/GEO strategies now must account for visibility across multiple AI platforms.

I wonder if relevant search results will return? Of course not, one must optimize content for the new world of multiple AI platforms.

A couple of questions:

  1. If AI is getting uptake, won’t that uptake help out Google too?
  2. Who are the “users” in the survey sample? Is the sample valid? Are the data reliable?
  3. Is the need for SEO an accurate statement? SEO helped destroy relevance in search results. Aren’t these folks satisfied with their achievement to date?

I think I know the answers to these questions. But I am content to just believe everything Search Engine Land says. I mean marketing SEO and eliminating relevance when seeking answers online is undergoing change. Change means many things. Some of these issues are beyond the ken of the big thinkers at Search Engine Land in my opinion. But that’s irrelevant and definitely not SEO.

Stephen E Arnold, September 10, 2025

Google and Its Reality Dictating Machine: What Is a Fact?

September 9, 2025

I’m not surprised by this. I don’t understand why anyone would be surprised by this story from Neoscope: “Doctors Horrified After Google’s Healthcare AI Makes Up A Body Part That Does Not Exist In Humans.” Healthcare professional are worried about their industry’s over the widespread use of AI tools. These tools are error prone and chock full of bugs. In other words, these bots are creating up facts and lies and making them seem convincing.

It’s called hallucinating.

A recent example of an AI error involves Google’s Med-Gemini and it took an entire year before anyone discovered it. The false information was published in a May 2024 research paper from Google that ironically discussed the promises of AI Med-Gemini analyzing brain scans. The AI “identified” the “old left basilar ganglia infarct” in the scans, but that doesn’t exist in the human body. Google never fixed its research paper.

Hallucinations are dangerous in humans but they’re much worse in AI because they won’t be confined to a single source.

“It’s not just Med-Gemini. Google’s more advanced healthcare model, dubbed MedGemma, also led to varying answers depending on the way questions were phrased, leading to errors some of the time. ‘Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine,’ Judy Gichoya, Emory University associate professor of radiology and informatics, told The Verge.

Other experts say we’re rushing into adapting AI in clinical settings — from AI therapists, radiologists, and nurses to patient interaction transcription services — warranting a far more careful approach.”

A wise fictional character once said, “Take risks! Make mistakes! Get messy! In other words, say “I don’t know!” Could this quick kill people? Duh.

Whitney Grace, September 9, 2025

Dr. Bob Clippy Will See You Now

September 8, 2025

I cannot wait for AI to replace my trusted human physician whom I’ve been seeing for years. “Microsoft Claims its AI Tool Can Diagnose Complex Medical Cases Four Times More Accurately than Doctors,” Fortune reports. The company made this incredible claim in a recent blog post. How did it determine this statistic? By taking the usual resources away from human doctors it pitted against its AI. Senior Reporter Alexa Mikhail tells us:

“The team at Microsoft noted the limitations of this research. For one, the physicians in the study had between five and 20 years of experience, but were unable to use textbooks, coworkers, or—ironically—generative AI for their answers. It could have limited their performance, as these resources may typically be available during a complex medical situation.”

You don’t say? Additionally, the study did not include everyday cases. You know, the sort doctors do not need to consult books or coworkers to diagnose. Seems legit. Microsoft says it sees the tool as a complement to doctors, not a replacement for them. That sounds familiar.

Mikahil notes AI already permeates healthcare: Most of us have looked up symptoms with AI-assisted Web searches. ChatGPT is actively being used as a psychotherapist (sometimes for better, often for worse). Many healthcare executives are eager to take this much, much further. So are about half of US patients and 63% of clinicians, according to the 2025 Philips Future Health Index (FHI), who expect AI to improve health outcomes. We hope they are correct, because there may be no turning back now.

Cynthia Murrell, September 8, 2025

AI Can Be Your Food Coach… Well, Perhaps Not

September 5, 2025

Is this better or worse than putting glue on pizza? TechSpot reveals yet another severe consequence of trusting AI: “Man Develops Rare 19th-Century Psychiatric Disorder After Following ChatGPT’s Diet Advice.” Writer Rob Thubron tells us:

“The case involved a 60-year-old man who, after reading reports on the negative impact excessive amounts of sodium chloride (common table salt) can have on the body, decided to remove it from his diet. There were plenty of articles on reducing salt intake, but he wanted it removed completely. So, he asked ChatGPT for advice, which he followed. After being on his new diet for three months, the man admitted himself to hospital over claims that his neighbor was poisoning him. His symptoms included new-onset facial acne and cherry angiomas, fatigue, insomnia, excessive thirst, poor coordination, and a rash. He also expressed increasing paranoia and auditory and visual hallucinations, which, after he attempted to escape, ‘resulted in an involuntary psychiatric hold for grave disability.’”

Yikes! It was later learned ChatGPT suggested he replace table salt with sodium bromide. That resulted, unsurprisingly, in this severe case of bromism. That malady has not been common since the 1930s. Maybe ChatGPT confused the user with a spa/hot tub or an oil and gas drill. Or perhaps its medical knowledge is just a bit out of date. Either way, this sad incident illustrates what a mistake it is to rely on generative AI for important answers. This patient was not the only one here with hallucinations.

Cynthia Murrell, September 5, 2025

Fabulous Fakes Pollute Publishing: That AI Stuff Is Fatuous

September 4, 2025

New York Times best selling author David Baldacci testified before the US Congress about regulating AI. Medical professionals are worried about false information infiltrating medical knowledge like the scandal involving Med-Gemini and an imaginary body part. It’s getting worse says ZME Science: “A Massive Fraud Ring Is Publishing Thousands of Fake Studies and the Problem is Exploding. ‘These Networks Are Essentially Criminal Organizations.’”

Bad actors in scientific publishing used to be a small group, but now it’s a big posse:

“What we are seeing is large networks of editors and authors cooperating to publish fraudulent research at scale. They are exploiting cracks in the system to launder reputations, secure funding, and climb academic ranks. This isn’t just about the occasional plagiarized paragraph or data fudged to fool reviewers. This is about a vast and resilient system that, in some cases, mimics organized crime. And it’s infiltrating the very core of science.”

Luís Amaral discovered in a study he conducted that analyzed five million papers across 70,000 scientific journals that there is a fraudulent paper mill for publishing. You’ve heard of paper mill colleges where students can buy so-called degrees. This is similar except the products are authorship slots and journal placements from artificial research and compromised editors.

Outstanding, AI champions!

This is a way for bad actors to pad their resumes and gain undeserved creditability.

Fake science has always been a problem but it’s outpacing fact-based science. It’s cheaper to produce fake science than legitimate truth. The article then waxes poetic about the need for respectability, the dangerous consequences of false science, and how the current tools aren’t enough. It’s devastating but the expected cultural shift needed to be more respectful of truth and hard facts is not equipped to deal with the new world. Thanks, AI.

Whitney Grace, September 4, 2025

Derailing Smart Software with Invisible Prompts

September 3, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.

The write up states:

Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.

The write up includes examples like these:

… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….

Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.

Stephen E Arnold, September 3, 2025

Bending Reality or Creating a Question of Ownership and Responsibility for Errors

September 3, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

The Google has may busy digital beavers working in the superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: Ownership of substantially altered content and responsibility for errors introduced into digital content.

YouTube secretly used AI to Edit People’s Videos. The Results Could Bend Reality” reports:

In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.

The BBC ignores a couple of issues that struck me as significant if — please, note the “if” — the assertion about YouTube altering content belonging to another entity. I will address these after some more BBC goodness.

I noted this statement:

the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.

Okay, the Google digital beavers are beavering away.

I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:

“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”

What about those issues I thought about after reading the BBC’s write up:

  1. If Google’s changes (improvements, enhancements, AI additions, whatever), will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out if Google’s right or the individual content creator is the copyright holder?
  2. When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
  3. What if a content creator hits a home run and Google’s AI “learns” then outputs content via its assorted AI processes? Will Google be able to deplatform the original creator and just use it as a way to make money without paying the home-run hitting YouTube creator?

Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe one reason is that BBC doesn’t think these types of thoughts. The Google, based on my experience, is indeed thinking these types of “what if” talks in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.

Stephen E Arnold, September 3, 2025

Deadbots. Many Use Cases, Including Advertising

September 2, 2025

Dino 5 18 25_thumbNo AI. Just a dinobaby working the old-fashioned way.

I like the idea of deadbots, a concept explained by the ever-authoritative NPR in “AI Deadbots Are Persuasive — and Researchers Say, They’re Primed for Monetization.” The write up reports in what I imagine as a resonant, somewhat breathy voice:

AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.

Here’s a passage I thought was interesting:

Researchers are now warning that commercial use is the next frontier for deadbots. “Of course it will be monetized,” said Lindenwood University AI researcher James Hutson. Hutson co-authored several studies about deadbots, including one exploring the ethics of using AI to reanimate the dead. Hutson’s work, along with other recent studies such as one from Cambridge University, which explores the likelihood of companies using deadbots to advertise products to users, point to the potential harms of such uses. “The problem is if it is perceived as exploitative, right?” Hutson said.

Not surprisingly, some sticks in the mud see a downside to deadbots:

Quinn [a wizard a Authetic Interactions Inc.] said companies are going to try to make as much money out of AI avatars of both the dead and the living as possible, and he acknowledges there could be some bad actors. “Companies are already testing things out internally for these use cases,” Quinn said, with reference to such uses cases as endorsements featuring living celebrities created with generative AI that people can interactive with. “We just haven’t seen a lot of the implementations yet.”

I wonder if any philosophical types will consider how an interaction with a dead person’s avatar can be an “authetic interaction.”

I started thinking of deadbots I would enjoy coming to life on my digital devices; for example:

  • My first boss at a blue chip consulting firm who encouraged rumors that his previous wives accidently met with boating accidents
  • My high school English teacher who took me to the assistant principal’s office for writing a poem about the spirit of nature who looked to me like a Playboy bunny
  • The union steward who told me that I was working too fast and making other workers look like they were not working hard
  • The airline professional who told me our flight would be delayed when a passenger died during push back from the gate. (The fellow was sitting next to me. Airport food did it I think.)
  • The owner of an enterprise search company who insisted, “Our enterprise information retrieval puts all your company’s information at an employee’s fingertips.”

You may have other ideas for deadbots. How would you monetize a deadbot, Google- and Meta-type companies? Will Hollywood do deadbot motion pictures? (I know the answer to that question.)

Stephen E Arnold, September 2, 2025

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta