Desperate Much? Buying Cyber Security Software Regularly
September 16, 2025
Bad actors have access to AI, and it is enabling them to increase both speed and volume at an alarming rate. Are cybersecurity teams able to cope? Maybe—if they can implement the latest software quickly enough. VentureBeat reports, “Software Commands 40% of Cybersecurity Budgets ad Gen AI Attacks Execute in Milliseconds.” Citing IBM’s recent Cost of a Data Breach Report, writer Louis Columbus reports 40% of cybersecurity spending now goes to software. Compare that to just 15.8% spent on hardware, 15% on outsourcing, and 29% on personnel. Even so, AI-assisted hacks now attack in milliseconds while the Mean Time to Identify (MTTI) is 181 days. That is quite the disparity. Columbus observes:
“Three converging threats are flipping cybersecurity on its head: what once protected organizations is now working against them. Generative AI (gen AI) is enabling attackers to craft 10,000 personalized phishing emails per minute using scraped LinkedIn profiles and corporate communications. NIST’s 2030 quantum deadline threatens retroactive decryption of $425 billion in currently protected data. Deepfake fraud that surged 3,000% in 2024 now bypasses biometric authentication in 97% of attempts, forcing security leaders to reimagine defensive architectures fundamentally.”
Understandable. But all this scrambling for solutions may now be part of the problem. Some teams, we are told, manage 75 or more security tools. No wonder they capture so much of the budget. Simplification, however, is proving elusive. We learn:
“Security Service Edge (SSE) platforms that promised streamlined convergence now add to the complexity they intended to solve. Meanwhile, standalone risk-rating products flood security operations centers with alerts that lack actionable context, leading analysts to spend 67% of their time on false positives, according to IDC’s Security Operations Study. The operational math doesn’t work. Analysts require 90 seconds to evaluate each alert, but they receive 11,000 alerts daily. Each additional security tool deployed reduces visibility by 12% and increases attacker dwell time by 23 days, as reported in Mandiant’s 2024 M-Trends Report. Complexity itself has become the enterprise’s greatest cybersecurity vulnerability.”
See the writeup for more on efforts to improve cybersecurity’s speed and accuracy and the factors that thwart them. Do we have a crisis yet? Of course not. Marketing tells us cyber security just works. Sort of.
Cynthia Murrell, September 16, 2025
Google Is Going to Race Penske in Court!
September 15, 2025
Written by an unteachable dinobaby. Live with it.
How has smart software affected the Google? On the surface, we have the Code Red klaxons. Google presents big time financial results so the sirens drowned out by the cheers for big bucks. We have Google dodging problems with the Android and Chrome snares, so the sounds are like little chicks peeping in the eventide.
—-
FYI: The Penske Outfits
- Penske Corporation itself focuses on transportation, truck leasing, automotive retail, logistics, and motorsports.
- Penske Media Corporation (PMC), a separate entity led by Jay Penske, owns major media brands like Rolling Stone and Billboard.
—-
What’s actually going on is different, if the information in “Rolling Stone Publisher Sues Google Over AI Overview Summaries.” [Editor’s note: I live the over over lingo, don’t you?] The write up states:
Google has insisted that its AI-generated search result overviews and summaries have not actually hurt traffic for publishers. The publishers disagree, and at least one is willing to go to court to prove the harm they claim Google has caused. Penske Media Corporation, the parent company of Rolling Stone and The Hollywood Reporter, sued Google on Friday over allegations that the search giant has used its work without permission to generate summaries and ultimately reduced traffic to its publications.
Site traffic metrics are an interesting discipline. What exactly are the log files counting? Automated pings, clicks, views, downloads, etc.? Google is the big gun in traffic, and it has legions of SEO people who are more like cheerleaders for making sites Googley, doing the things that Google wants, and pitching Google advertising to get sort of reliable traffic to a Web site.
The SEO crowd is busy inventing new types of SEO. Now one wants one’s weaponized content to turn up as a link, snippet, or footnote in an AI output. Heck, some outfits are pitching to put ads on the AI output page because money is the name of the game. Pay enough and the snippet or summary of the answer to the user’s prompt may contain a pitch for that item of clothing or electronic gadget one really wants to acquire. Psychographic ad matching is marvelous.
The write up points out that an outfit I thought was into auto racing and truck rentals but is now a triple threat in publishing has a different take on the traffic referral game. The write up says:
Penske claims that in recent years, Google has basically given publishers no choice but to give up access to its content. The lawsuit claims that Google now only indexes a website, making it available to appear in search, if the publisher agrees to give Google permission to use that content for other purposes, like its AI summaries. If you think you lose traffic by not getting clickthroughs on Google, just imagine how bad it would be to not appear at all.
Google takes a different position, probably baffled why a race car outfit is grousing. The write up reports:
A spokesperson for Google, unsurprisingly, said that the company doesn’t agree with the claims. “With AI Overviews, people find Search more helpful and use it more, creating new opportunities for content to be discovered. We will defend against these meritless claims.” Google Spokesperson Jose Castaneda told Reuters.
Gizmodo, the source for the cited article about the truck rental outfit, has done some original research into traffic. I quote from the cited article:
Just for kicks, if you ask Google Gemini if Google’s AI Overviews are resulting in less traffic for publishers, it says, “Yes, Google’s AI Overview in search results appears to be resulting in less traffic for many websites and publishers. While Google has stated that AI Overviews create new opportunities for content discovery, several studies and anecdotal reports from publishers suggest a negative impact on traffic.”
I have some views on this situation, and I herewith present them to you:
- Google is calm on the outside but in crazy mode internally. The Googlers are trying to figure out how to keep revenues growing as referral traffic and the online advertising are undergoing some modest change. Is the glacier calving? Yep, but it is modest because a glacier is big and the calf is small.
- The SEO intermediaries at the Google are communicating like Chatty Cathies to the SEO innovators. The result will be a series of shotgun marriages among the lucrative ménage à trois of Google’s ad machine, search engine optimization professional, and advertising services firms in order to lure advertisers to a special private island.
- The bean counters at Google are looking at their MBA course materials, exam notes for CPAs, and reading books about forensic accounting in order to make the money furnaces at Google hot using less cash as fuel. This, gentle reader, is a very, very difficult task. At another time, a government agency might be curious about the financial engineering methods, but at this time, attention is directed elsewhere I presume.
Net net: This is a troublesome point. Google has lots of lawyers and probably more cash to spend on fighting the race car outfit and its news publications. Did you know that the race outfit owned the definitive publication about heavy metal as well at Billboard magazine?
Stephen E Arnold, September 15, 2025
Google: The EC Wants Cash, Lots of Cash
September 15, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
The European Commission is not on the same page as the judge involved in the Google case. Googzilla is having a bit of a vacay because Android and Chrome are still able to attend the big data party. But the EC? Not invited and definitely not welcome. “Commission Fines Google €2.95 Billion over Abusive Practices in Online Advertising Technology” states:
The European Commission has fined Google €2.95 billion for breaching EU antitrust rules by distorting competition in the advertising technology industry (‘adtech’). It did so by favouring its own online display advertising technology services to the detriment of competing providers of advertising technology services, advertisers and online publishers. The Commission has ordered Google (i) to bring these self-preferencing practices to an end; and (ii) to implement measures to cease its inherent conflicts of interest along the adtech supply chain. Google has now 60 days to inform the Commission about how it intends to do so.
The news release includes a third grade type diagram, presumably to make sure that American legal eagles who may not read at the same grade level as their European counterparts can figure out the scheme. Here it is:
For me, the main point is Google is in the middle. This racks up what crypto cronies call “gas fees” or service charges. Whatever happens Google gets some money and no other diners are allowed in the Mountain View giant’s dining hall.
There are explanations and other diagrams in the cited article. The main point is clear: The EC is not cowed by the Googlers nor their rationalizations and explanations about how much good the firm does.
Stephen E Arnold, September 15, 2025
Shame, Stress, and Longer Hours: AI’s Gifts to the Corporate Worker
September 15, 2025
Office workers from the executive suites to entry-level positions have a new reason to feel bad about themselves. Fortune reports, “ ‘AI Shame’ Is Running Rampant in the Corporate Sector—and C-Suite Leaders Are Most Worried About Getting Caught, Survey Says.” Writer Nick Lichtenberg cites a survey of over 1,000 workers by SAP subsidiary WalkMe. We learn almost half (48.8%) of the respondents said they hide their use of AI at work to avoid judgement. The number was higher at 53.4% for those at the top—even though they use AI most often. But what about the generation that has entered the job force amid AI hype? We learn:
“Gen Z approaches AI with both enthusiasm and anxiousness. A striking 62.6% have completed work using AI but pretended it was all their own effort—the highest rate among any generation. More than half (55.4%) have feigned understanding of AI in meetings. … But only 6.8% report receiving extensive, time-consuming AI training, and 13.5% received none at all. This is the lowest of any age group.”
In fact, the study found, only 3.7% of entry-level workers received substantial AI training, compared to 17.1% of C-suite executives. The write-up continues:
“Despite this, an overwhelming 89.2% [of Gen Z workers] use AI at work—and just as many (89.2%) use tools that weren’t provided or sanctioned by their employer. Only 7.5% reported receiving extensive training with AI tools.”
So younger employees use AI more but receive less training. And, apparently, are receiving little guidance on how and whether to use these tools in their work. What could go wrong?
From executives to fresh hires and those in between, the survey suggests everyone is feeling the impact of AI in the workplace. Lichtenberg writes:
“AI is changing work, and the survey suggests not always for the better. Most employees (80%) say AI has improved their productivity, but 59% confess to spending more time wrestling with AI tools than if they’d just done the work themselves. Gen Z again leads the struggle, with 65.3% saying AI slows them down (the highest amount of any group), and 68% feeling pressure to produce more work because of it.”
In addition, more than half the respondents said AI training initiatives amounted to a second, stressful job. But doesn’t all that hard work pay off? Um, no. At least, not according to this report from MIT that found 95% of AI pilot programs at large companies fail. So why are we doing this again? Ask the investor class.
Cynthia Murrell, September 15, 2025
How Much Is That AI in the Window? A Lot
September 15, 2025
AI technology is expensive. Big Tech companies are aware of the rising costs, but the average organization is unaware of how much AI will make their budgets skyrocket. The Kilo Code blog shares insights into AI’s soaring costs in, “Future AI Bills Of $100K/YR Per Dev.”
Kilo recently broke the 1 trillion tokens a month barrier on OpenRouter for the first time. Other open source AI coding tools experienced serious growth too. Claude and Cursor “throttled” their users and encouraged them to use open source tools. These AI algorithms needed to be throttled because their developers didn’t anticipate that application inference costs would rise. Why did this happen?
“Application inference costs increased for two reasons: the frontier model costs per token stayed constant and the token consumption per application grew a lot. We’ll first dive into the reasons for the constant token price for frontier models and end with explaining the token consumption per application. The price per token for the frontier model stayed constant because of the increasing size of models and more test-time scaling. Test time scaling, also called long thinking, is the third way to scale AI…While the pre- and post-training scaling influenced only the training costs of models. But this test-time scaling increases the cost of inference. Thinking models like OpenAI’s o1 series allocate massive computational effort during inference itself. These models can require over 100x compute for challenging queries compared to traditional single-pass inference.”
If organizations don’t want to be hit with expensive AI costs they should consider using open source models. Open source models ere designed to assist users instead of throttling them on the back send. That doesn’t even account for people expenses such as salaries and training.
Costs and customers’ willingness to pay escalating and unpredictable fees for AI may be a problem the the AI wizards cannot explain away. Those free and heavily discounted deals may deflate some AI balloons.
Whitney Grace, September 15, 2025
Here Is a Happy Thought: The Web Is Dead
September 12, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I read “One of the Last, Best Hopes for Saving the Open Web and a Free Press Is Dead.” If the headline is insufficiently ominous, how about the subtitle:
The Google ruling is a disaster. Let the AI slop flow and the writers, journalists and creators get squeezed.
The write up says:
Google, of course, was one of the worst actors. It controlled (and still controls) an astonishing 90% of the search engine market, and did so not by consistently offering the best product—most longtime users recognize the utility of Google Search has been in a prolonged state of decline—but by inking enormous payola deals with Apple and Android phone manufacturers to ensure Google is the default search engine on their products.
The subject is the US government court’s ruling that Google must share. Google’s other activities are just ducky. The write up presents this comment:
The only reason that OpenAI could even attempt to do anything that might remotely be considered competing with Google is that OpenAI managed to raise world-historic amounts of venture capital. OpenAI has raised $60 billion, a staggering figure, but also a sum that still very much might not be enough to compete in an absurdly capital intensive business against a decadal search monopoly. After all, Google drops $60 billion just to ensure its search engine is the default choice on a single web browser for three years. [Note: The SAT word “decadal” sort of means over 10 years. The Google has been doing search control for more than 20 years, but “more than 20 years is not sufficiently erudite I guess.]
The point is that competition in the number scale means that elephants are fighting. Guess what loses? The grass, the ants, and earthworms.
The write up concludes:
The 2024 ruling that Google was an illegal monopoly was a glimmer of hope at a time when platforms were concentrating ever more power, Silicon Valley oligarchy was on the rise, and it was clear the big tech cartels that effectively control the public internet were more than fine with overrunning it with AI slop. That ruling suggested there was some institutional will to fight against the corporate consolidation that has come to dominate the modern web, and modern life. It proved to be an illusion.
Several observations are warranted:
- Money talks; common sense walks
- AI is having dinner at the White House; the legal eagles involved in this high-profile matter got the message
- I was not surprised; the author, surprised and somewhat annoyed that the Internet is dead.
The US mechanisms remind me of how my father described government institutions in Campinas, Brazil, in the 1950s: Carry contos and distribute them freely. [Note: A conto was 1,000 cruzeiros at the time. Today the word applies to 1,000 reais.]
Stephen E Arnold, September 12, 2025
Google: Klaxons, Red Lights, and Beeps
September 12, 2025
Here we go again with another warning from Google about scams in the form of Gemini. The Mirror reports that, “Google Issues ‘Red Alert’ To Gmail Users Over New AI Scam That Steals Passwords.” Bad actors are stealing passwords using Google’s own chatbot. Hackers are sending emails using Gemini. These emails contain a hidden message to reveal passwords.
Here’s how people are falling for the scam: there’s no link to click in the email. A box pops up alerting you to a risk. That’s all! It’s incredibly simple and scary. Remember that Google will never ask you for your username and password. It’s still the easiest tip to remember when it comes to these scams.
Google issued a statement:
“The tech giant explained the subtlety of the threat: ‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.’ As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.’”
Google also said some calming platitudes but the record replay is getting tiresome.
Whitney Grace, September 12, 2025
China Smart, US Dumb: The Baidu AI Service
September 12, 2025
It seems smart software is good for something. CNBC reports, “AI Avatars in China Just Proved They Are Ace Influencers: It Only Took a Duo 7 Hours to Rake in More than $7 Million.” Chinese tech firm Baidu collaborated with two human influencers on the project. Reporter Evelyn Cheng tells us:
“Luo Yonghao, one of China’s earliest and most popular live streamers, and his co-host Xiao Mu both used digital versions of themselves to interact with viewers in real time for well over six hours on Sunday on Baidu’s e-commerce livestreaming platform ‘Youxuan’, the Chinese tech company said. The session raked in 55 million yuan ($7.65 million). In comparison, Luo’s first livestream attempt on Youxuan last month, which lasted just over four hours, saw fewer orders for consumer electronics, food and other key products, Baidu said.”
The experiment highlights Baidu’s avatar technology, which can save marketing departments a lot of money. We learn:
“Luo’s and his co-host’s avatars were built using Baidu’s generative AI model, which learned from five years’ worth of videos to mimic their jokes and style, Wu Jialu, head of research at Luo’s other company, Be Friends Holding, told CNBC on Wednesday. … AI avatars can sharply reduce costs since companies don’t need to hire a large production team or a studio to livestream. The digital avatars can also stream nonstop without needing breaks. … [Wu] said that Baidu now offers the best digital human product currently available, compared to the early days of livestreaming e-commerce five or six years ago.”
Yes, the “early” days of five or six years ago, when the pandemic forced companies and workers to explore their online options. Both landed on livestreaming to generate sales and commissions. Now, it seems, companies can cut the human talent out of the equation. How efficient.
Cynthia Murrell, September 12, 2025
American Illiteracy: Who Is Responsible?
September 11, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I read an essay I found quite strange. “She Couldn’t Read Her Own Diploma: Why Public Schools Pass Students but Fail Society” is from what seems to be a financial information service. This particular essay is written by Tyler Durden and carries the statement, “Authored by Hannah Frankman Hood via the American Institute for Economic Research (AIER).” Okay, two authors. Who wrote what?
The main idea seems to be that a student who graduated from Hartford, Connecticut (a city founded by one of my ancestors) graduate with honors but is unable to read. How did she pull of the “honors” label? Answer: She used “speech to text apps to help her read and write essays.”
Now the high school graduate seems to be in the category of “functional illiteracy.” The write up says:
To many, it may be inconceivable that teachers would continue to teach in a way they know doesn’t work, bowing to political pressure over the needs of students. But to those familiar with the incentive structures of public education, it’s no surprise. Teachers unions and public district officials fiercely oppose accountability and merit-based evaluation for both students and teachers. Teachers’ unions consistently fight against alternatives that would give students in struggling districts more educational options. In attempts to improve ‘equity,’ some districts have ordered teachers to stop giving grades, taking attendance, or even offering instruction altogether.
This may be a shock to some experts, but one of my recollections of my youth was my mother reading to me. I did not know that some people did not have a mother and father, both high school graduates, who read books, magazines, and newspapers. For me, it was books.
I was born in 1944, and I recall heading to kindergarten and knowing the alphabet, how to print my name (no, it was not “loser”), and being able to read words like Topps (a type of bubble gum with pictures of baseball players in the package), Coca Cola, and the “MD” on my family doctor’s sign. (I had no idea how to read “McMorrow,” but I could identify the letters.
The “learning to read” skill seemed to take place because my mother and sometimes my father would read to me. My mother and I would walk to the library about a mile from our small rented house on East Wilcox Avenue. She would check out book for herself and for me. We would walk home and I would “read” one of my books. When I couldn’t figure out a word, I asked her. This process continued until we moved to Washington, DC when I was in the third grade. When we moved to Campinas, Brazil, my father bought a set of World Books and told me to read them. My mother helped me when I encountered words or information I did not understand. Campinas was a small town in the 1950s. I had my Calvert Correspondence course at the set of blue World Book Encyclopedias.
When we returned to the US, I entered the seventh grade. I am not sure I had much formal instruction in reading, phonics, word recognition, or the “normal” razzle dazzle of education. I just started classes and did okay. As I recall, I was in the advanced class, and the others in that group would stay together throughout high school, also in central Illinois.
My view is probably controversial, but I will share it in this essay by two people who seem to be worried about teachers not teaching students how to read. Here goes:
- Young children are curious. When exposed to books and a parent who reads and explains meanings, the child learns. The young child’s mind is remarkable in its baked in ability to associate, discern patterns, learn language, and figure out that Coca Cola is a drink parents don’t often provide.
- A stable family which puts and emphasis on reading even though the parents are not college educated makes reading part of the furniture of life. Mobile phones and smart software cannot replicate the interaction between a parent and child involved in reading, printing letters, and figuring out that MD means weird Dr. McMorrow.
- Once reading becomes a routine function, normal curiosity fuels knowledge acquisition. This may not be true for some people, but in my experience it works. Parents read; child reads.
When the family unit does not place emphasis on reading for whatever reason, the child fails to develop some important mental capabilities. Once that loss takes place, it is very difficult to replace it with each passing year.
Teachers alone cannot do this job. School provides a setting for a certain type of learning. If one cannot read, one cannot learn what schools afford. Years ago, I had responsibility for setting up and managing a program at a major university to help disadvantaged students develop skills necessary to succeed in college. I had experts in reading, writing, and other subjects. We developed our own course materials; for example, we pioneered the use of major magazines and lessons built around topics of interest to large numbers of Americans. Our successes came from instructors who found a way to replicate the close interaction and support of a parent-child reading experience. The failures came from students who did not feel comfortable with that type of one to one interaction. Most came from broken families, and the result of not having a stable, knowledge-oriented family slammed on the learning and reading brakes.
Based on my experience with high school and college age students, I never was and never will be a person who believes that a device or a teacher with a device can replicate the parent – child interaction that normalizes learning and instills value via reading. That means that computers, mobile phones, digital tablets, and smart software won’t and cannot do the job that parents have to do when the child is very young.
When the child enters school, a teacher provides a framework and delivers information tailored to the physical and hopefully mental age of the student. Expecting the teacher to remediate a parenting failure in the child’s first five to six years of life is just plain crazy. I don’t need economic research to explain the obvious.
This financial write up strikes me as odd. The literacy problem is not new. I was involved in trying to create a solution in the late 1960s. Now decades later, financial writers are expressing concern. Speedy, right? My personal view is that a large number of people who cannot read, understand, and think critically will make an orderly social construct very difficult to achieve.
I am now 80 years old. How can an online publication produce an essay with two different authors and confuse me with yip yap about teaching methods. Why not disagree about the efficacy of Grok versus Gemini? Just be happy with illiterates who can talk to Copilot to generate Excel spreadsheets about the hockey stick payoffs from smart software.
I don’t know much. I do know that I am a dinobaby, and I know my ancestor who was part of the group who founded Hartford, Connecticut, would not understand how his vision of the new land jibes with what the write up documents.
Stephen E Arnold, September 11, 2025
AI Algorithms Are Not Pirates, Just Misunderstood
September 11, 2025
Let’s be clear: AI algorithms are computer programs designed to imitate human brains. They’re not sentient . They are taught using huge amounts of data sets that contain pirated information. By proxy this makes AI developers thieves. David Carson on Medium wrote, “Theft Is Not Fair Use” arguing that AI is not abiding by one of the biggest laws that powers YouTube. (One of the big AI outfits just wrote a big check for unauthorized content suck downs. Not guilty, of course.)
Publishers, record labels, entertainment companies, and countless artists are putting AI developers on notice by filing lawsuits against AI developers. Thomson Reuters was victorious against an AI-based legal platform, Ross Intelligence, for harvesting its data. It’s a drop in the water bucket however, because Trump’s Artificial Intelligence Action Plan sought input from Big Tech. Open AI and Google asked to be exempt from copyright in their big data sets. A group of authors are suing Meta and a copyright law professor gaggle filed an amicus brief on their behalf. The professors poke holes in Meta’s fair use claim.
Big Tech is powerful and they’ve done this for years:
"Tech companies have a history of taking advantage of legacy news organizations that are desperate for revenue and are making deals with short-term cash infusions but little long-term benefit. I fear AI companies will act as vampires, draining news organizations of their valuable content to train their new AI models and then ride off into the sunset with their multi-billion dollar valuations while the news organizations continue to teeter on the brink of bankruptcy. It wouldn’t be the first time tech companies out-maneuvered (online advertising) or lied to news organizations.”
Unfortunately creative types are probably screwed. What’s funny is that Carson is a John S. Knight Journalism Fellow at Stanford. It’s the same school in which the president manipulated content to advance his career. How many of these deep suckers are graduates of this esteemed institution? Who teaches copyright basics? Maybe an AI system?
Whitney Grace, September 11, 2025