Desperate Much? Buying Cyber Security Software Regularly

September 16, 2025

Bad actors have access to AI, and it is enabling them to increase both speed and volume at an alarming rate. Are cybersecurity teams able to cope? Maybe—if they can implement the latest software quickly enough. VentureBeat reports, “Software Commands 40% of Cybersecurity Budgets as Gen AI Attacks Execute in Milliseconds.” Citing IBM’s recent Cost of a Data Breach Report, writer Louis Columbus reports 40% of cybersecurity spending now goes to software. Compare that to just 15.8% spent on hardware, 15% on outsourcing, and 29% on personnel. Even so, AI-assisted hacks now attack in milliseconds while the Mean Time to Identify (MTTI) is 181 days. That is quite the disparity. Columbus observes:

“Three converging threats are flipping cybersecurity on its head: what once protected organizations is now working against them. Generative AI (gen AI) is enabling attackers to craft 10,000 personalized phishing emails per minute using scraped LinkedIn profiles and corporate communications. NIST’s 2030 quantum deadline threatens retroactive decryption of $425 billion in currently protected data. Deepfake fraud that surged 3,000% in 2024 now bypasses biometric authentication in 97% of attempts, forcing security leaders to reimagine defensive architectures fundamentally.”

Understandable. But all this scrambling for solutions may now be part of the problem. Some teams, we are told, manage 75 or more security tools. No wonder they capture so much of the budget. Simplification, however, is proving elusive. We learn:

“Security Service Edge (SSE) platforms that promised streamlined convergence now add to the complexity they intended to solve. Meanwhile, standalone risk-rating products flood security operations centers with alerts that lack actionable context, leading analysts to spend 67% of their time on false positives, according to IDC’s Security Operations Study. The operational math doesn’t work. Analysts require 90 seconds to evaluate each alert, but they receive 11,000 alerts daily. Each additional security tool deployed reduces visibility by 12% and increases attacker dwell time by 23 days, as reported in Mandiant’s 2024 M-Trends Report. Complexity itself has become the enterprise’s greatest cybersecurity vulnerability.”
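For those who like arithmetic, a quick back-of-the-envelope sketch shows why the quoted numbers cannot work. The alert volume and per-alert evaluation time come from the quotation above; the eight-hour shift length is my own assumption for illustration:

```python
# Back-of-the-envelope check on the alert-triage math quoted above.
ALERTS_PER_DAY = 11_000      # daily alert volume (from the quotation)
SECONDS_PER_ALERT = 90       # evaluation time per alert (from the quotation)
SHIFT_HOURS = 8              # assumed analyst shift length (my assumption)

triage_hours = ALERTS_PER_DAY * SECONDS_PER_ALERT / 3600
analysts_needed = triage_hours / SHIFT_HOURS

print(f"Triage workload: {triage_hours:.0f} hours per day")
print(f"Analysts needed for triage alone: {analysts_needed:.0f}")
# About 275 hours of triage per day, or roughly 34 analysts doing nothing else.
```

Few organizations staff a security operations center at that level just to clear the queue.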

See the writeup for more on efforts to improve cybersecurity’s speed and accuracy and the factors that thwart them. Do we have a crisis yet? Of course not. Marketing tells us cyber security just works. Sort of.

Cynthia Murrell, September 16, 2025

Shame, Stress, and Longer Hours: AI’s Gifts to the Corporate Worker

September 15, 2025

Office workers from the executive suites to entry-level positions have a new reason to feel bad about themselves. Fortune reports, “‘AI Shame’ Is Running Rampant in the Corporate Sector—and C-Suite Leaders Are Most Worried About Getting Caught, Survey Says.” Writer Nick Lichtenberg cites a survey of over 1,000 workers by SAP subsidiary WalkMe. We learn almost half (48.8%) of the respondents said they hide their use of AI at work to avoid judgment. The number was higher at 53.4% for those at the top—even though they use AI most often. But what about the generation that has entered the workforce amid AI hype? We learn:

“Gen Z approaches AI with both enthusiasm and anxiousness. A striking 62.6% have completed work using AI but pretended it was all their own effort—the highest rate among any generation. More than half (55.4%) have feigned understanding of AI in meetings. … But only 6.8% report receiving extensive, time-consuming AI training, and 13.5% received none at all. This is the lowest of any age group.”

In fact, the study found, only 3.7% of entry-level workers received substantial AI training, compared to 17.1% of C-suite executives. The write-up continues:

“Despite this, an overwhelming 89.2% [of Gen Z workers] use AI at work—and just as many (89.2%) use tools that weren’t provided or sanctioned by their employer. Only 7.5% reported receiving extensive training with AI tools.”

So younger employees use AI more but receive less training. And, apparently, are receiving little guidance on how and whether to use these tools in their work. What could go wrong?

From executives to fresh hires and those in between, the survey suggests everyone is feeling the impact of AI in the workplace. Lichtenberg writes:

“AI is changing work, and the survey suggests not always for the better. Most employees (80%) say AI has improved their productivity, but 59% confess to spending more time wrestling with AI tools than if they’d just done the work themselves. Gen Z again leads the struggle, with 65.3% saying AI slows them down (the highest amount of any group), and 68% feeling pressure to produce more work because of it.”

In addition, more than half the respondents said AI training initiatives amounted to a second, stressful job. But doesn’t all that hard work pay off? Um, no. At least, not according to this report from MIT that found 95% of AI pilot programs at large companies fail. So why are we doing this again? Ask the investor class.

Cynthia Murrell, September 15, 2025

How Much Is That AI in the Window? A Lot

September 15, 2025

AI technology is expensive. Big Tech companies are aware of the rising costs, but the average organization is unaware of how much AI will make their budgets skyrocket. The Kilo Code blog shares insights into AI’s soaring costs in “Future AI Bills Of $100K/YR Per Dev.”

Kilo recently broke the 1 trillion tokens a month barrier on OpenRouter for the first time. Other open source AI coding tools experienced serious growth too. Claude and Cursor “throttled” their users and encouraged them to use open source tools. The throttling happened because the developers did not anticipate how much application inference costs would rise. Why did this happen?

“Application inference costs increased for two reasons: the frontier model costs per token stayed constant and the token consumption per application grew a lot. We’ll first dive into the reasons for the constant token price for frontier models and end with explaining the token consumption per application. The price per token for the frontier model stayed constant because of the increasing size of models and more test-time scaling. Test time scaling, also called long thinking, is the third way to scale AI…While the pre- and post-training scaling influenced only the training costs of models. But this test-time scaling increases the cost of inference. Thinking models like OpenAI’s o1 series allocate massive computational effort during inference itself. These models can require over 100x compute for challenging queries compared to traditional single-pass inference.”
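To see how the post’s title figure of $100K per developer per year could arise, here is a minimal cost sketch. The token price and daily consumption below are my illustrative assumptions, not numbers from the Kilo Code post:

```python
# Rough estimate of annual inference spend per developer for an AI coding tool.
# All numeric inputs are illustrative assumptions, not figures from the post.
PRICE_PER_MILLION_TOKENS = 10.0       # assumed blended $ per 1M frontier-model tokens
TOKENS_PER_DEV_PER_DAY = 40_000_000   # assumed daily token use for a heavy agentic workflow
WORKING_DAYS_PER_YEAR = 250

daily_cost = TOKENS_PER_DEV_PER_DAY / 1_000_000 * PRICE_PER_MILLION_TOKENS
annual_cost = daily_cost * WORKING_DAYS_PER_YEAR

print(f"Daily cost per developer:  ${daily_cost:,.0f}")
print(f"Annual cost per developer: ${annual_cost:,.0f}")
# 40M tokens/day at $10 per 1M tokens is $400 a day, or $100,000 a year.
```

Hold the price per token constant, as the quotation argues will happen, and the bill scales linearly with however many tokens the agentic tooling burns.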

If organizations don’t want to be hit with expensive AI costs, they should consider using open source models. Open source models were designed to assist users instead of throttling them on the back end. That doesn’t even account for people expenses such as salaries and training.

Costs and customers’ willingness to pay escalating and unpredictable fees for AI may be a problem the AI wizards cannot explain away. Those free and heavily discounted deals may deflate some AI balloons.

Whitney Grace, September 15, 2025

Swinging for the Data Centers: You May Strike Out, Casey

September 2, 2025

Home to a sparse population of humans, the Cowboy State is about to generate an immense amount of electricity. Tech Radar Pro reports, “A Massive Wyoming Data Center Will Soon Use 5x More Power than the State’s Human Occupants—But No One Knows Who Is Using It.” Really? We think we can guess. The Cheyenne facility is to be powered by a bespoke combination of natural gas and renewables. Writer Efosa Udinmwen explains:

“The proposed facility, a collaboration between energy company Tallgrass and data center developer Crusoe, is expected to start at 1.8 gigawatts and could scale to an immense 10 gigawatts. For context, this is over five times more electricity than what all households in Wyoming currently use.”
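A quick sanity check on that “five times” claim, using assumed figures for Wyoming’s household count and average household electricity use (neither number appears in the article):

```python
# Sanity check: initial data center load vs. Wyoming residential electricity use.
DATA_CENTER_GW = 1.8                  # initial facility load (from the article)
WYOMING_HOUSEHOLDS = 240_000          # assumed household count
KWH_PER_HOUSEHOLD_PER_YEAR = 10_500   # assumed average annual household consumption
HOURS_PER_YEAR = 8_760

avg_residential_gw = WYOMING_HOUSEHOLDS * KWH_PER_HOUSEHOLD_PER_YEAR / HOURS_PER_YEAR / 1_000_000
print(f"Average residential load: {avg_residential_gw:.2f} GW")
print(f"Data center vs. households: {DATA_CENTER_GW / avg_residential_gw:.1f}x")
# Roughly 0.29 GW of average residential load, so 1.8 GW is about six times that,
# and a 10 GW build-out would be more than thirty times.
```

Under those assumptions the arithmetic holds up, and the full 10 gigawatt build-out would dwarf residential demand entirely.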

Who could need so much juice? Could it be OpenAI? So far, Crusoe neither confirms nor denies that suspicion. The write-up, however, notes Crusoe worked with OpenAI to build the world’s “largest data center” in Texas as part of the OpenAI-led “Stargate” initiative. (Yes, named for the portals in the 1994 movie and subsequent TV show. So clever.) Udinmwen observes:

“At the core of such AI-focused data centers lies the demand for extremely high-performance hardware. Industry experts expect it to house the fastest CPUs available, possibly in dense, rack-mounted workstation configurations optimized for deep learning and model training. These systems are power-hungry by design, with each server node capable of handling massive workloads that demand sustained cooling and uninterrupted energy. Wyoming state officials have embraced the project as a boost to local industries, particularly natural gas; however, some experts warn of broader implications. Even with a self-sufficient power model, a data center of this scale alters regional power dynamics. There are concerns that residents of Wyoming and its environs could face higher utility costs, particularly if local supply chains or pricing models are indirectly affected. Also, Wyoming’s identity as a major energy exporter could be tested if more such facilities emerge.”

The financial blind spot is explained in Futurism’s article “There’s a Stunning Financial Problem With AI Data Centers.” The main idea is that today’s investment will require future spending for upgrades, power, water, and communications. The result is that most of these “home run” swings will produce lousy batting averages, and some swingers may end up hawking hot dogs at the ball park adjacent to the humming, hot structures.

Cynthia Murrell, September 2, 2025

Google Uses a Blue Light Special for the US Government (Sorry K-Meta You Lose)

August 27, 2025

No AI. Just a dinobaby working the old-fashioned way.

I read an interesting news item in Artificial Intelligence News, a publication unknown to me. Like most of the AI information I read online I believe every single word. AI radiates accuracy, trust, and factual information. Let’s treat this “real” news story as actual factual. To process the information, you will want to reflect on the sales tactics behind Filene’s Basement, K-Mart’s blue light specials, and the ShamWow guy.

“The US Federal Government Secures a Massive Google Gemini AI Deal at $0.47 per Agency” reports:

Google Gemini will soon power federal operations across the United States government following a sweeping new agreement between the General Services Administration (GSA) and Google that delivers comprehensive AI capabilities at unprecedented pricing.

I regret I don’t have a Microsoft government sales professional or a Palantir forward deployed engineer to call and get their view of this deal. Oh, well, that’s what happens when one gets old. (Remember. For a LinkedIn audience NEVER reveal your age. Okay, too bad LinkedIn, I am 81.)

It so happens I was involved in the Year 2000 in some meetings at which Google pitched its search-and-retrieval system for US government-wide search. For a number of reasons, the Google did not win that procurement bake off. It took a formal protest and some more meetings to explain the concept of conforming to a Statement of Work and the bid analysis process used by the US government 25 years ago. Google took it on the snout.

Not this time.

By golly, Google figured out how to deal with RFPs, SOWs, the Q&A process, and the pricing dance. The write up says:

The “Gemini for Government” offering, announced by GSA, represents one of the most significant government AI procurement deals to date. Under the OneGov agreement extending through 2026, federal agencies will gain access to Google’s full artificial intelligence stack for just US$0.47 per agency—a pricing structure that industry observers note is remarkably aggressive for enterprise-level AI services.

What does the US government receive? According to the write up:

Google CEO Sundar Pichai characterized the partnership as building on existing relationships: “Building on our Workspace offer for federal employees, ‘Gemini for Government’ gives federal agencies access to our full stack approach to AI innovation, including tools like NotebookLM and Veo powered by our latest models and our secure cloud infrastructure.”

Yo, Microsoft. Yo, Palantir. Are you paying attention? This explanation suggests that a clever government professional can do what your firms do. But — get this — at a price that may be “unsustainable.” (Of course, I know that em dashes signal smart software. Believe me. I use em dashes all by myself. No AI needed.)

I also noted this statement in the write up:

The $0.47 per agency pricing model raises immediate concerns about market distortion and the sustainability of such aggressive government contracting. Industry analysts question whether this represents genuine cost efficiency or a loss-leader strategy designed to lock agencies into Google’s ecosystem before prices inevitably rise after 2026. Moreover, the deal’s sweeping scope—encompassing everything from basic productivity tools to custom AI agent development—may create dangerous vendor concentration risks. Should technical issues, security breaches, or contract disputes arise, the federal government could find itself heavily dependent on a single commercial provider for critical operational capabilities. The announcement notably lacks specific metrics for measuring success, implementation timelines, or safeguards against vendor lock-in—details that will ultimately determine whether this represents genuine modernization or expensive experimentation with taxpayer resources.

Several observations are warranted:

  1. Google has figured out that making AI too cheap to resist appeals to certain government procurement professionals. A deal is a deal, of course. Scope changes, engineering services, and government budget schedules may add some jerked chicken spice to the bargain meal.
  2. The existing government-wide incumbent types are probably going to be holding some meetings to discuss what “this deal” means to existing and new projects involving smart software.
  3. The budget issues about AI investments are significant. Adding more expense for what can be a very demanding client is likely to have a direct impact on advertisers who fund the Google fun bus. How much will that YouTube subscription go up? Would Google raise rates to fund this competitive strike at Microsoft and Palantir? Of course not, you silly goose.

I wish I were at liberty to share some of the Google-related outputs from the Year 2000 procurement. But, alas, I cannot. Let me close by saying, “Google has figured out some basics of dealing with the US government.” Hey, it only took a quarter century, not bad for an ageing Googzilla.

Stephen E Arnold, August 27, 2025

Think It. The “It” Becomes Real. Think Again?

August 27, 2025

No AI. Just a dinobaby working the old-fashioned way.

Fortune Magazine — once the gem for a now spinning-in-his-grave publisher — posted “MIT Report: 95% of Generative AI Pilots at Companies Are Failing.” I take a skeptical view of MIT. Why? The esteemed university found Jeffrey Epstein a swell person.

The thrust of the story is that people stick smart software into an organization, allow it time to steep, cook up a use case, and find the result unpalatable. Research is useful. When it evokes a “Duh!”, I don’t get too excited.

But there was a phrase in the write up which caught my attention: Learning gap. AI or smart software is a “belief.” The idea of the next big thing creates an opportunity to move money. Flow, churn, motion — these are positive values in some business circles.

AI fits the bill. The technology demonstrates interesting capabilities. Use cases exist. Companies like Microsoft have put money into the idea. Moving money is proof that “something” is happening. And today that something is smart software. AI is the “it” for the next big thing.

Learning gap, however, is the issue. The hurdle is not Sam Altman’s fears about the end of humanity or his casual observation that trillions of dollars are needed to make AI progress. We have a learning gap.

But the driving vision for Internet era innovation is to do something big, change the world, reinvent society. I think this idea goes back to the sales-oriented philosophy of visualizing a goal and aligning one’s actions to achieve that goal. A fellow or persona named Napoleon Hill pulled together some ideas and crafted “Think and Grow Rich.” Today one just promotes the “next big thing,” gets some cash moving, and an innovation like smart software will revolutionize, remake, or redo the world.

The “it” seems to be stuck in the learning gap. Here’s the proof, and I quote:

But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained. The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

Consider this question: What if smart software mostly works but makes humans uncomfortable in ways difficult for the user to articulate? What if humans lack the mental equipment to conceptualize what a smart system does? What if the smart software cannot answer certain user questions?

I find information about costs, failed use cases, hallucinations, and benefits plentiful. I don’t see much information about the “learning gap.” What causes a learning gap? Spell check makes sense. A click that produces a complete report on a complex topic is different. But in what way? What is the impact on the user?

I think the “learning gap” is a key phrase. I think there is money to be made in addressing it. I am not confident that visualizing a better AI is going to solve the problem which is similar to a bonfire of cash. The learning gap might be tough to fill with burning dollar bills.

Stephen E Arnold, August 27, 2025

Deal Breakers in Medical AI

August 26, 2025

No AI. Just a dinobaby working the old-fashioned way.

My newsfeed thing spit out a link to “Why Radiology AI Didn’t Work and What Comes Next.” I have zero interest in radiology. I don’t get too excited about smart software. So what did I do? Answer: I read the article. I was delighted to uncover a couple of points that, in my opinion, warrant capturing in my digital notebook.

The set up is that a wizard worked at a start up trying to get AI to make sense of the consistently fuzzy, murky, and baffling images cranked out by radiology gizmos. Tip: Follow the instructions and don’t wear certain items of jewelry. The start up fizzled. AI was part of the problem, but the Jaws-type shark lurking in the murky image explains this type of AI implosion.

Let’s run through the points that struck me.

First, let’s look at this passage:

Unlike coding or mathematics, medicine rarely deals in absolutes. Clinical documentation, especially in radiology, is filled with hedge language — phrases like “cannot rule out,” “may represent,” or “follow-up recommended for correlation.” These aren’t careless ambiguities; they’re defensive signals, shaped by decades of legal precedent and diagnostic uncertainty.

Okay, lawyers play a significant role in establishing thought processes and normalizing ideas that appear to be purpose-built to vaporize the smart system like one of those nifty tattoo-removing gadgets. I would have pegged insurance companies first, then lawyers, but the write up directed my attention to the legal eagles’ role: Hedge language. Do I have disease X? The doctor responds, “Maybe, maybe not. Let’s wait 30 days and run more tests.” Fuzzy lingo, fuzzy images, perfect.

Second, the write up asks two questions:

  • How do we improve model coverage at the tail without incurring prohibitive annotation costs?
  • Can we combine automated systems with human-in-the-loop supervision to address the rare but dangerous edge cases?

The answers seem to be: You cannot afford to have humans do indexing and annotation. That’s why certain legal online services charge a lot for annotations. As for the second question: no, you cannot pull off automation with humans for events rarely covered in the training data. Why? Cost, and finding enough humans who will do this work in a consistent way in a timely manner.

Here’s the third snippet:

Without direct billing mechanisms or CPT reimbursement codes, it was difficult to monetize the outcomes these tools enabled. Selling software alone meant capturing only a fraction of the value AI actually created. Ultimately, we were offering tools, not outcomes. And hospitals, rightly, were unwilling to pay for potential unless it came bundled with performance.

Finally, insurance procedures. Hospitals aren’t buying AI; they are buying ways to deliver “service” and “bill.” AI at this time does not sell what hospitals want to buy: A way to keep high rates and slash costs wherever possible.

It is unlikely, but perhaps some savvy AI outfit will create a system that can crack the issues the article identifies. Until then, no money, no AI.

Stephen E Arnold, August 26, 2025

Smart Software Fix: Cash, Lots and Lots of Cash

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way. But I asked ChatGPT one question. Believe it or not.

If you have some spare money, Sam Altman aka Sam AI-Man wants to talk with you. It is well past two years since OpenAI forced the 20-year-old Google to go back to the high school lab. Now OpenAI is dealing with the reviews of ChatGPT 5. The big news in my opinion is that quite a few people are annoyed with the new smart software from the money burning Bessemer furnace at 3180 18th Street in San Francisco. (I have heard that a satellite equipped with an infrared camera gets a snazzy image of the heat generated from the immolation of cash. There are also tiny red dots scattered around the SF financial district. Those, I believe, are the burning brain cells of the folks who have forked over dough to participate in Sam AI-Man’s next big thing.)

“As People Ridicule GPT-5, Sam Altman Says OpenAI Will Need ‘Trillions’ in Infrastructure” addresses the need for cash. The write up says:

Whether AI is a bubble or not, Altman still wants to spend a certifiably insane amount of money building out his company’s AI infrastructure. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman told reporters.

Trillions is a general figure that most people cannot connect to everyday life. Years ago, when I was an indentured servant at a consulting firm, I worked on a project that sought to figure out which types of decisions consumed the most time for the Boards of Directors of Fortune 1000 companies. The results surprised me then and still do.

Boards of directors spent the most time discussing relatively modest-scale projects; for example, expanding a parking lot or developing a list of companies for potential joint ventures. Really big deals like spending large sums to acquire a company were often handled in swift, decisive votes.

Why?

Boards of directors, like most average people, cannot relate to massive numbers. It is easier to think in terms of a couple hundred thousand dollars to lease a parking lot than to borrow billions and buy a giant, allegedly synergistic company.

When Mr. Altman uses the word “trillions,” I think he is unable to conceptualize the amount of money represented in his casual “you should expect OpenAI to spend trillions…”

Several observations:

  1. AI is useful in certain use cases. Will AI return the type of payoff that Google’s charge ‘em every which way from Sunday for advertising model does?
  2. AI appears to produce incorrect outputs. I liked the application for oncology docs who reported losing diagnostic skills when relying on AI assistants.
  3. AI creates negative mental health effects. One old person, younger than I, believed a chat bot cared for him. On the way to meet his digital friend, he flopped over dead. Anticipative anxiety or a use case for AI sparking nutso behavior?

What’s a trillion look like? Answer: 1,000,000,000,000.

How many railroad boxcars would it take to move $1 trillion from a collection point like Denver, Colorado, to downtown San Francisco? Answer from ChatGPT: you would need 10,000 standard railroad boxcars. This calculation is based on the weight and volume of the bills, as well as the carrying capacity of a typical 50-foot boxcar. The train would stretch over 113.6 miles—about the distance from New York City to Philadelphia!

Let’s talk about expanding the parking lot.

Stephen E Arnold, August 19, 2025

Party Time for Telegram?

August 14, 2025

No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

Let’s assume that the information in “The SEC Quietly Surrendered in Its Biggest Crypto Battle” is accurate. Now look at this decision from the point of view of Pavel Durov. The Telegram Messenger service has about 1.35 billion users. Allegedly there are 50 million or so in the US. Mr. Durov was one of the early losers in the crypto wars in the United States. He has hired a couple of people to assist him in his effort to do the crypto version of “Coming to America.” Manny Stoltz and Max Crown are probably going to make their presence felt.

The cited article states:

This is a huge deal. It creates a crucial distinction that other crypto projects can now use in their own legal battles, potentially shielding them from the SEC’s claim of blanket authority over the market. By choosing to settle rather than risk having this ruling upheld by a higher court, the SEC has shown the limits of its “regulation by enforcement” playbook: its strategy of creating rules through individual lawsuits instead of issuing clear guidelines for the industry.

What will Telegram’s clever Mr. Durov do with his 13-year-old platform, hundreds of features, crypto plumbing, and hundreds of developers eager to generate “money”? It is possible it won’t be Pavel making trips to America. He may be under the watchful eye of the French judiciary.

But Manny, Max, and the developers?

Stephen E Arnold, August 14, 2025

Taylorism, 996, and Motivating Employees

August 6, 2025

No AI. Just a dinobaby being a dinobaby.

No more Foosball. No more Segways in the hallways (thank heaven!). No more ping pong (Wait. Scratch that. You must have ping pong.)

Fortune Magazine reported that Silicon Valley type outfits want to be more like the workplace managed using Frederick Winslow Taylor’s management methods. (Did you know that Mr. Taylor provided the oomph for many blue chip management consulting firms? If you did not, you may be one of the people suggesting that AI will kill off the blue chip outfits. Those puppies will survive.)

“Some Silicon Valley AI Startups Are Asking Employees to Adopt China’s Outlawed 996 Work Model” reports:

Some Silicon Valley startups are embracing China’s outlawed “996” work culture, expecting employees to work 12-hour days, six days a week, in pursuit of hyper-productivity and global AI dominance.

The reason, according to the write up, is:

The rise of the controversial work culture appears to have been born out of the current efficiency squeeze in Silicon Valley. Rounds of mass layoffs and the rise of AI have put pressure and turned up the heat on tech employees who managed to keep their jobs.

My response to this assertion is that it is a convenient explanation. My view is that one can trot out the China smart, US dumb arguments, point to the holes of burning AI cash, and cite the political idiosyncrasies of California and the US government.

These are factors, to be sure, but the deeper reason is that Silicon Valley is starting to accept the reality that old-fashioned business methods are semi useful; for example, the idea that employees should converge on a work location to do what is still called “work.”

What’s the cause of this change? Since hooking electrodes to a worker in a persistent employee monitoring environment is a step too far for now, going back to the precepts of Freddy is a reasonable compromise.

But those electric shocks would work quite well, don’t you agree? (Sure, China’s work environment sparked a few suicides, but the efficiency is not significantly affected.)

Stephen E Arnold, August 6, 2025
