A Baloney Blizzard: What Is Missing? Oh, Nothing, Just Security
August 19, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I do not know what a CVP is. I do know a baloney blizzard when I see one. How about these terms: Ambient, pervasive, and multi-modal. I interpret ambient as meaning temperature or music like the tunes honked in Manhattan elevators. Pervasive I view as surveillance; that is, one cannot escape the monitoring. What a clever idea. Who doesn’t want Microsoft Windows to be inescapable? And multi-modal sparks in me thoughts of a cave painting and a shaman. I like the idea of Windows intermediating for me.
Where did I get these three oddball words? I read “Microsoft’s Windows Lead Says the Next Version of Windows Will Be More Ambient, Pervasive, and Multi-Modal As AI Redefines the Desktop Interface.” The source of this write up is an organization that absolutely loves Microsoft products and services.
Here’s a passage I noted:
Davuluri confirms that in the wake of AI, Windows is going to change significantly. The OS is going to become more ambient and multi-modal, capable of understanding the content on your screen at all times to enable context-aware capabilities that previously weren’t possible. Davuluri continues, “you’ll be able to speak to your computer while you’re writing, inking, or interacting with another person. You should be able to have a computer semantically understand your intent to interact with it.”
Very sci-fi. However, I don’t want to speak to my computer. I work in silence. My office is set up so I don’t have people interrupting, chattering, or asking me to go get donuts. My view is, “Send me an email or a text. Don’t bother me.” Is that why in many high-tech companies people wear earbuds? It is. They don’t want to talk, interact, or discuss Netflix. These people want to “work” or what they think is “work.”
Does Microsoft care? Of course not. Here’s a reasonably clear statement of what Microsoft is going to try to force upon me:
It’s clear that whatever is coming next for Windows, it’s going to promote voice as a first class input method on the platform. In addition to mouse and keyboard, you will be able to ambiently talk to Windows using natural language while you work, and have the OS understand your intent based on what’s currently on your screen.
Several observations:
- AI is not reliable
- Microsoft is running a surveillance operation in my opinion
- This is the outfit which created Bob and Clippy
But the real message in this PR marketing content essay: Security is not mentioned. Does a secure operation want people talking about their work?
Stephen E Arnold, August 19, 2025
News Flash from the Past: Bad Actors Use New Technology and Adapt Quickly
August 18, 2025
No AI. Just a dinobaby working the old-fashioned way.
NBC News is on top of cyber security trends. I think someone spotted an Axios report that bad actors were using smart software to outfox cyber security professionals. I am not sure this is news, but what do I know?
“Criminals, Good Guys and Foreign Spies: Hackers Everywhere Are Using AI Now” reports this “hot off the press” information. I quote:
The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.
My goodness. Who knew that stealers have been zipping around for many years? Even more startling old information is:
LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents. The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster.
Stunning. A free chunk of smart software, unemployed or intra-gig programmers, and juicy targets pushed out with a fairy land of vulnerabilities. Isn’t it insightful that bad actors would apply these tools to clueless employees, inherently vulnerable operating systems, and companies too busy outputting marketing collateral to do routine security updates?
The cat-and-mouse game works this way. Bad actors with access to useful scripting languages, programming expertise, and smart software want to generate revenue or wreak havoc. One individual or perhaps a couple of people in a coffee shop hit upon a better way to access a corporate network or obtain personally identifiable information from a hapless online user.
Then, after the problem has been noticed and reported, cyber security professionals will take a closer look. If these outfits have smart software running, a human will look more closely at logs and say, “I think I saw something.”
Okay, mice are in and swarming. Now the cats jump into action. The cats will find [a] a way to block the exploit, [b] rush to push the fix to paying customers, and [c] share the information in a blog post or a conference.
What happens? The bad actors notice their mice aren’t working or they are being killed instantly. The bad actors go back to work. In most cases, the bad actors are not encumbered by bureaucracy or tough thought problems about whether something is legal or illegal. The bad actors launch more attacks. If one works, it’s gravy.
Now the cats jump back into the fray.
In the current cyber crime world, cyber security firms, investigators, and lawyers are in reactive mode. The bad actors play offense.
One quick example: Telegram has been enabling a range of questionable online activities since 2013. In 2024, after a decade of inaction, France said, “Enough.” Authorities in France arrested Pavel Durov. The problem from my point of view is that it took more than a decade to stand up to the icon Pavel Durov.
What happens when a better Telegram comes along built with AI as part of its plumbing?
The answer is, “You can buy licenses to many cyber security systems. Will they work?”
There are some large, capable mice out there in cyber space.
Stephen E Arnold, August 18, 2025
If You Want to Work at Meta, You Must Say Yes, Boss, Yes Boss, Yes Boss
August 18, 2025
No AI. Just a dinobaby working the old-fashioned way.
These giant technology companies are not very good in some situations. One example which comes to mind is the Apple car. What was the estimate? About $10 billion blown. Meta pulled a similar trick with its variant of the Google Glass. Winners.
I read “Meta Faces Backlash over AI Policy That Lets Bots Have Sensual Conversations with Children.” My reaction was, “You are kidding, right?” Nope. Not a joke. Put aside common sense, a parental instinct for appropriateness, and the mounting evidence that interacting with smart software can be a problem. What are these lame complaints?
The write up says:
According to Meta’s 200-page internal policy seen by Reuters, titled “GenAI: Content Risk Standards”, the controversial rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist.
Okay, let’s stop the buggy right here, pilgrim.
A “chief ethicist”! A chief ethicist who thought that this was okay:
An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”.
What is an ethicist? First, it is a knowledge job, one I assume requiring knowledge of ethical thinking embodied in different big thinkers. Second, it is a profession which relies on context because what was right for Belgium in the Congo may not be okay today. Third, the job is likely one that encourages flexible definitions of ethics. It may be tough to get another high-paying gig if one points out that the concept of sensual conversations with children is unethical.
The write up points out that an investigation is needed. Why? The chief ethicist should say, “Sorry. No way.”
Chief ethicist? A chief “yes, boss” person.
Stephen E Arnold, August 18, 2025
Google: Simplicity Is Not a Core Competency
August 18, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Telegram Messenger is a reasonably easy-to-use messaging application. People believe that it is bulletproof, but I want to ask, “Are you sure?” Then there is WhatsApp, now part of Darth Zuck’s empire. However, both of these outfits appear to be viewed as obtuse and problematic by Kremlin officials. The fix? Just ban these services. Banning online services is a popular way for a government to “control” information flow.
I read a Russian language article about an option some Russians may want to consider. The write up’s title is “How to Replace Calls on WhatsApp and Telegram. Review of the Google Meet Application for Android and iOS.”
I worked through the write up and noted this statement:
Due to the need to send invitation links Meet is not very convenient for regular calls— and most importantly it belongs to the American company Google, whose products, by definition, are under threat of blocking. Moreover, several months ago, Russian President Vladimir Putin himself called for «stifling» Western services operating in Russia, and instructed the Government to prepare a list of measures to limit them by September 1, 2025.
The bulk of the write up is a how to. In order to explain the process of placing a voice call via the Google system, PCNews presented:
- Nine screenshots
- Seven arrows
- One rectangular box in red to call attention to something. (I couldn’t figure out what, however.)
- Seven separate steps
How does one “do” a voice call in Telegram Messenger. Here are the steps:
- I open the Telegram app and select the contact with whom I want to speak
- I tap on my contact’s name
- I look for the phone call icon and tap it
- I choose “Voice Call” from the options to start an audio call. If I want to make a video call instead, I select “Video Call”
One would think that when a big company wants to do a knock off of a service, someone would check out what Telegram does. (The write up targets a Russian audience due to the censorship in the country.) Then the savvy wizard would figure out how to make the process better and faster and easier. Instead the clever Googlers add steps. That’s the way of the Sundar & Prabhakar Comedy Show.
Stephen E Arnold, August 18, 2025
The Early Bird Often Sings Alone
August 17, 2025
Mathematicians, computer developers, science-fiction writers, and others smarter than the average human have known for decades that computers would outpace human intelligence. Computers have arguably been capable of this since the first machine printed its initial binary 01. AI algorithms are the next evolution of computers, and they can do research, explore science, and extrapolate formulas beyond the last known recorded digit of pi.
Future House explains how its Robin AI system is designed to automate scientific discovery: “Demonstrating End-To-End Scientific Discovery With Robin: A Multi-Agent System.” Future House developed AI agents that automated different segments of the discovery process, but Robin is the first unified system that does everything. Robin’s inventors automated the scientific process and used the new system, built from those earlier AI agents, to make a discovery.
The team reports:

“We applied Robin to identify ripasudil, a Rho-kinase (ROCK) inhibitor clinically used to treat glaucoma, as a novel therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide.”
Robin did follow the scientific process. It made an initial hypothesis, mechanized the investigation instead of doing things the old-fashioned way, and then made a discovery. Everything was done by Robin the AI system:
“All hypotheses, experiment choices, data analyses, and main text figures in the manuscript describing this work were generated by Robin autonomously. Human researchers executed the physical experiments, but the intellectual framework was entirely AI-driven.”
Robin’s creators are happy with their progress:
“By automating hypothesis generation, experimental planning, and data analysis in an integrated system, Robin represents a powerful new paradigm for AI-driven scientific discovery. Although we first applied Robin to therapeutics, our agents are general-purpose and can be used for a wide variety of discoveries across diverse fields—from materials science to climate technology.”
Mathematicians are chugging away at AI development, including number theorists. Listen to Curt Jaimungal’s podcast episode, “The AI Math That Left Number Theorists Speechless,” and within the first five minutes you’ll have an understanding of how smart AI has become. Here’s the summary: it’s beyond human comprehension.
Whitney Grace, August 17, 2025
Remember the Metaverse
August 17, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
The “Metaverse” was Mark Zuckerberg’s swing and a miss in the virtual world video game. Meta is rebooting the failed world, says Ars Technica in “Meta’s ‘AI Superintelligence’ Effort Sounds Just Like Its Failed ‘Metaverse.’” Zuckerberg released a memo in which he hyped the new Meta Superintelligence Labs. He described it as “the beginning of a new era for humanity.” It sounds like Zuckerberg is describing his Metaverse from a 2021 keynote address.
The Metaverse exists, but not many people use it outside of Meta employees, who actively avoid using certain features. It’s possible that the public hasn’t given Zuckerberg enough time to develop the virtual world. But when augmented reality uses a pair of ugly Coke-bottle prototype glasses that cost $10,000, the average person isn’t going to log in. To quote the article:
“Today, those kinds of voices of internal skepticism seem in short supply as Meta sets itself up to push AI in the same way it once backed the metaverse. Don’t be surprised, though, if today’s promise that we’re at ‘the beginning of a new era for humanity’ ages about as well as Meta’s former promises about a metaverse where ‘you’re gonna be able to do almost anything you can imagine.’”
Zuckerberg is blah blah-ing and yada yada-ing about the future of AI and how it will change society. Either society won’t adapt, can’t afford the changes, or the technology is too advanced to replicate on a large scale. But there is Apple with its outstanding goggle-headset thing.
One trick ponies do one trick. Yep. Big glasses.
Whitney Grace, August 17, 2025
The HR Gap: First in Line, First Fooled
August 15, 2025
No AI. Just a dinobaby being a dinobaby.
Not long ago I spoke with a person who is a big time recruiter. I asked, “Have you encountered any fake applicants?” The response, “No, I don’t think so.”
That’s the problem. Whatever is happening in HR continuing education, the word about deep fake spoof employees is not getting through. I am not sure there is meaningful “continuing education” for personnel professionals.
I mention this cloud of unknowing in one case example because I read “Cloud Breaches and Identity Hacks Explode in CrowdStrike’s Latest Threat Report.” The write up reports:
The report … highlights the increasingly strategic use of generative AI by adversaries. The North Korea-linked hacking group Famous Chollima emerged as the most generative AI-proficient actor, conducting more than 320 insider threat operations in the past year. Operatives from the group reportedly used AI tools to craft compelling resumes, generate real-time deepfakes for video interviews and automate technical work across multiple jobs.
My first job was at Nuclear Utilities Services (an outfit that became a unit of Halliburton soon after I was hired. Dick Cheney, Halliburton, remember?). One of the engineers came up to me after I gave a talk about machine indexing at what was called “Allerton House,” a conference center at the University of Illinois, decades ago. The fellow liked my talk and asked me if my method could index technical content in English. I said, “Yes.” He said, “I will follow up next week.”
True to his word, the fellow called me and said, “I am changing planes at O’Hare on Thursday. Can you meet me at the airport to talk about a project?” I was teaching part time at Northern Illinois University and doing some administrative work for a little money. Simultaneously I was working on my PhD at the University of Illinois. I said, “Sure.” DeKalb, Illinois, was about an hour west of O’Hare. I drove to the airport, met the person whom I remember was James K. Rice, an expert in nuclear waste water, and talked about what I was doing to support my family, keep up with my studies, and do what 20-year-olds do. That is to say, just try to survive.
I explained the indexing, the language analysis I did for the publisher of Psychology Today and Intellectual Digest magazines, and the newsletter I was publishing for high school and junior college teachers struggling to educate ill-prepared students. As a graduate student with a family, I explained that I had information and wanted to make it available to teachers facing a tough problem. I remember his comment, “You do this for almost nothing.” He had that right.
End of meeting. I forgot about nuclear and went back to my regular routine.
A month later I got a call from a person named Nancy who said, “Are you available to come to Washington, DC, to meet some people?” I figured out that this was a follow up to the meeting I had at O’Hare Airport. I went. Long story short: I dumped my PhD and went to work for what is generally unknown; that is, Halliburton is involved in things nuclear.
Why is this story from the 1970s relevant? The interview process did not involve any digital anything. I showed up. Two people I did not know pretended to care about my research work. I had no knowledge about nuclear other than when I went to grade school in Washington, DC, we had to go into the hall and cover our heads in case a nuclear bomb was dropped on the White House.
From the article “In Recruitment, an AI-on-AI War Is Rewriting the Hiring Playbook,” I learned:
“AI hasn’t broken hiring,” says Marija Marcenko, Head of Global Talent Acquisition at SaaS platform Semrush. “But it’s changed how we engage with candidates.”
The process followed for my first job did not involve anything but one-on-one interactions. There was not much chance of spoofing. I sat there, explained how I indexed sermons in Latin for a fellow named William Gillis, calculated reading complexity for the publisher, and how I gathered information about novel teaching methods. None of those activities had any relevance I could see to nuclear anything.
When I visited the company’s main DC office, it was in the technology corridor running from the Beltway to Germantown, Maryland. I remember new buildings and farm land. I met people who were like those in my PhD program except these individuals thought about radiation, nuclear effects modeling, and similar subjects.
One math PhD, who became my best friend, said, “You actually studied poetry in Latin?” I said, “Yep.” He said, “I never read a poem in my life and never will.” I recited a few lines of a Percy Bysshe Shelley poem. I think his written evaluation of his “interview” with me got me the job.
No computers. No fake anything. Just smart people listening, evaluating, and assessing.
Now systems can fool humans. In the hiring game, what makes a company is a collection of people, cultural knowledge, and a desire to work with individuals who can contribute to achieving the organization’s goals.
The Crowdstrike article includes this paragraph:
Scattered Spider, which made headlines in 2024 when one of its key members was arrested in Spain, returned in 2025 with voice phishing and help desk social engineering that bypasses multifactor authentication protections to gain initial access.
Can hiring practices keep pace with the deceptions in use today? Tricks to get hired. Fakery to steal an organization’s secrets.
Nope. Few organizations have the time, money, or business processes to hire using such inefficient means as personal interactions, site visits, and written evaluations of a candidate.
Oh, in case you are wondering, I did not go back to finish my PhD. Now I know a little bit about nuclear stuff, however, and slightly more about smart software.
Stephen E Arnold, August 15, 2025
Google! Manipulating Search Results? No Kidding
August 15, 2025
The Federal Trade Commission has just determined something the EU has been saying (and litigating) for years. The International Business Times tells us, “Google Manipulated Search Results to Bolster Own Products, FTC Report Finds.” Writer Luke Villapaz reports:
“For Internet searches over the past few years, if you typed ‘Google’ into Google, you probably got the exact result you wanted, but if you were searching for products or services offered by Google’s competitors, chances are those offerings were found further down the page, beneath those offered by Google. That’s what the U.S. Federal Trade Commission disclosed on Thursday, in an extensive 160-page report, which was obtained by the Wall Street Journal as part of a Freedom of Information Act request. FTC staffers found evidence that Google’s algorithm was demoting the search results of competing services while placing its own higher on the search results page, according to excerpts from the report. Among the websites affected: shopping comparison, restaurant review and travel.”
Villapaz notes Yelp has made similar allegations, estimating Google’s manipulation of search results may have captured some 20% of its potential users. So, after catching the big tech firm red-handed, what will the FTC do about it? Nothing, apparently. We learn:
“Despite the findings, the FTC staffers tasked with investigating Google did not recommend that the commission issue a formal complaint against the company. However, Google agreed to some changes to its search result practices when the commission ended its investigation in 2013.”
Well OK then. We suppose that will have to suffice.
Cynthia Murrell, August 15, 2025
Party Time for Telegram?
August 14, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Let’s assume that the information in “The SEC Quietly Surrendered in Its Biggest Crypto Battle” is accurate. Now look at this decision from the point of view of Pavel Durov. The Messenger service has about 1.35 billion users. Allegedly there are 50 million or so in the US. Mr. Durov was one of the early losers in the crypto wars in the United States. He has hired a couple of people to assist him in his effort to do the crypto version of “Coming to America.” Manny Stoltz and Max Crown are probably going to make their presence felt.
The cited article states:
This is a huge deal. It creates a crucial distinction that other crypto projects can now use in their own legal battles, potentially shielding them from the SEC’s claim of blanket authority over the market. By choosing to settle rather than risk having this ruling upheld by a higher court, the SEC has shown the limits of its “regulation by enforcement” playbook: its strategy of creating rules through individual lawsuits instead of issuing clear guidelines for the industry.
What will Telegram’s clever Mr. Durov do with its 13-year-old platform, hundreds of features, crypto plumbing, and hundreds of developers eager to generate “money”? It is possible it won’t be Pavel making trips to America. He may be under the watchful eye of the French judiciary.
But Manny, Max, and the developers?
Stephen E Arnold, August 14, 2025
Airships and AI: A Similar Technology Challenge
August 14, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Vaclav Smil writes books about the environment and technology. In his 2023 work Invention and Innovation: A Brief History of Hype and Failure, he describes the ups and downs of some interesting technologies. I thought of this book when I read “A Best Case Scenario for AI?” The author is a wealthy person who has some involvement in the relaxing crypto currency world. The item appeared on X.com.
I noted a passage in the long X.com post; to wit:
… the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to accrete to a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news — that the Doomers did not expect.
Reasonable. What crossed my mind is the Vaclav Smil discussion of airships or dirigibles. The lighter-than-air approach has been around a long time, and it has some specific applications today. Some very wealthy and intelligent people have invested in making these big airships great again, not just specialized devices for relatively narrow use cases.
So what? The airship history spans the 18th, 19th, 20th, and 21st century. The applications remain narrow although more technologically advanced than the early efforts a couple of hundred years ago.
What if smart software is a dirigible type of innovation? The use cases may remain narrow. Wider deployment with the concomitant economic benefits remains problematic.
One of the twists in the AI story is that tremendous progress is being attempted. The innovations as they are rolled out are incremental improvements. Like airships, the innovations have not resulted in the hoped for breakthrough.
There are numerous predictions about the downsides of smart software. But what if AI is little more than a modern version of the dirigible? We have a remarkable range of technologies, but each next step is underwhelming. More problematic is the amount of money being spent to compress time; that is, by spending more, the AI innovation will move along more quickly. Perhaps that is not the case. Finally, the airship is anchored in the image of a ball of fire, an exclamation point for airship safety. Will there be a comparable moment for AI?
Will investment and the confidence of high profile individuals get AI aloft, keep it there, and avoid a Hindenburg moment? Much has been invested to drive AI forward and make it “the next big thing.” The goal is to generate money, substantial sums.
The X.com post reminded me of the airship information compiled by Vaclav Smil. I can’t shake the image. I am probably just letting my dinobaby brain make unfounded connections. But, what if…? We could ask Google and its self-shaming smart software. Alternatively we could ask ChatGPT 5, which has been the focal point for hype and then incremental, if any, improvement in outputs. We could ask Apple, Amazon, or Telegram. But what if…?
I think an apt figure of speech might be “pushing a string.”
Stephen E Arnold, August 14, 2025