American Illiteracy: Who Is Responsible?
September 11, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I read an essay I found quite strange. “She Couldn’t Read Her Own Diploma: Why Public Schools Pass Students but Fail Society” is from what seems to be a financial information service. This particular essay is written by Tyler Durden and carries the statement, “Authored by Hannah Frankman Hood via the American Institute for Economic Research (AIER).” Okay, two authors. Who wrote what?
The main idea seems to be that a student in Hartford, Connecticut (a city founded by one of my ancestors) graduated with honors but is unable to read. How did she pull off the “honors” label? Answer: She used “speech to text apps to help her read and write essays.”
Now the high school graduate seems to be in the category of “functional illiteracy.” The write up says:
To many, it may be inconceivable that teachers would continue to teach in a way they know doesn’t work, bowing to political pressure over the needs of students. But to those familiar with the incentive structures of public education, it’s no surprise. Teachers unions and public district officials fiercely oppose accountability and merit-based evaluation for both students and teachers. Teachers’ unions consistently fight against alternatives that would give students in struggling districts more educational options. In attempts to improve ‘equity,’ some districts have ordered teachers to stop giving grades, taking attendance, or even offering instruction altogether.
This may be a shock to some experts, but one of my recollections of my youth was my mother reading to me. I did not know that some people did not have a mother and father, both high school graduates, who read books, magazines, and newspapers. For me, it was books.
I was born in 1944, and I recall heading to kindergarten and knowing the alphabet, how to print my name (no, it was not “loser”), and being able to read words like Topps (a type of bubble gum with pictures of baseball players in the package), Coca Cola, and the “MD” on my family doctor’s sign. (I had no idea how to read “McMorrow,” but I could identify the letters.)
The “learning to read” skill seemed to take place because my mother and sometimes my father would read to me. My mother and I would walk to the library about a mile from our small rented house on East Wilcox Avenue. She would check out books for herself and for me. We would walk home and I would “read” one of my books. When I couldn’t figure out a word, I asked her. This process continued until we moved to Washington, DC when I was in the third grade. When we moved to Campinas, Brazil, my father bought a set of World Books and told me to read them. My mother helped me when I encountered words or information I did not understand. Campinas was a small town in the 1950s. I had my Calvert Correspondence course and the set of blue World Book Encyclopedias.
When we returned to the US, I entered the seventh grade. I am not sure I had much formal instruction in reading, phonics, word recognition, or the “normal” razzle dazzle of education. I just started classes and did okay. As I recall, I was in the advanced class, and the others in that group would stay together throughout high school, also in central Illinois.
My view is probably controversial, but I will share it in response to this essay by two people who seem to be worried about teachers not teaching students how to read. Here goes:
- Young children are curious. When exposed to books and a parent who reads and explains meanings, the child learns. The young child’s mind is remarkable in its baked in ability to associate, discern patterns, learn language, and figure out that Coca Cola is a drink parents don’t often provide.
- A stable family which puts an emphasis on reading, even though the parents are not college educated, makes reading part of the furniture of life. Mobile phones and smart software cannot replicate the interaction between a parent and child involved in reading, printing letters, and figuring out that MD means weird Dr. McMorrow.
- Once reading becomes a routine function, normal curiosity fuels knowledge acquisition. This may not be true for some people, but in my experience it works. Parents read; child reads.
When the family unit does not place emphasis on reading for whatever reason, the child fails to develop some important mental capabilities. Once that loss takes place, it is very difficult to replace it with each passing year.
Teachers alone cannot do this job. School provides a setting for a certain type of learning. If one cannot read, one cannot learn what schools afford. Years ago, I had responsibility for setting up and managing a program at a major university to help disadvantaged students develop skills necessary to succeed in college. I had experts in reading, writing, and other subjects. We developed our own course materials; for example, we pioneered the use of major magazines and lessons built around topics of interest to large numbers of Americans. Our successes came from instructors who found a way to replicate the close interaction and support of a parent-child reading experience. The failures came from students who did not feel comfortable with that type of one to one interaction. Most came from broken families, and the result of not having a stable, knowledge-oriented family slammed on the learning and reading brakes.
Based on my experience with high school and college age students, I never was and never will be a person who believes that a device or a teacher with a device can replicate the parent-child interaction that normalizes learning and instills value via reading. That means that computers, mobile phones, digital tablets, and smart software won’t and cannot do the job that parents have to do when the child is very young.
When the child enters school, a teacher provides a framework and delivers information tailored to the physical and hopefully mental age of the student. Expecting the teacher to remediate a parenting failure in the child’s first five to six years of life is just plain crazy. I don’t need economic research to explain the obvious.
This financial write up strikes me as odd. The literacy problem is not new. I was involved in trying to create a solution in the late 1960s. Now decades later, financial writers are expressing concern. Speedy, right? My personal view is that a large number of people who cannot read, understand, and think critically will make an orderly social construct very difficult to achieve.
I am now 80 years old. How can an online publication produce an essay with two different authors and confuse me with yip yap about teaching methods? Why not disagree about the efficacy of Grok versus Gemini? Just be happy with illiterates who can talk to Copilot to generate Excel spreadsheets about the hockey stick payoffs from smart software.
I don’t know much. I do know that I am a dinobaby, and I know my ancestor who was part of the group who founded Hartford, Connecticut, would not understand how his vision of the new land jibes with what the write up documents.
Stephen E Arnold, September 11, 2025
AI Algorithms Are Not Pirates, Just Misunderstood
September 11, 2025
Let’s be clear: AI algorithms are computer programs designed to imitate human brains. They’re not sentient. They are trained on huge data sets that contain pirated information. By proxy this makes AI developers thieves. David Carson on Medium wrote “Theft Is Not Fair Use,” arguing that AI is not abiding by one of the biggest laws that powers YouTube. (One of the big AI outfits just wrote a big check for unauthorized content suck downs. Not guilty, of course.)
Publishers, record labels, entertainment companies, and countless artists are putting AI developers on notice by filing lawsuits against them. Thomson Reuters was victorious against an AI-based legal platform, Ross Intelligence, for harvesting its data. It’s a drop in the bucket, however, because Trump’s Artificial Intelligence Action Plan sought input from Big Tech. OpenAI and Google asked to be exempt from copyright for their big training data sets. A group of authors is suing Meta, and a gaggle of copyright law professors filed an amicus brief on their behalf. The professors poke holes in Meta’s fair use claim.
Big Tech is powerful, and it has done this for years:
"Tech companies have a history of taking advantage of legacy news organizations that are desperate for revenue and are making deals with short-term cash infusions but little long-term benefit. I fear AI companies will act as vampires, draining news organizations of their valuable content to train their new AI models and then ride off into the sunset with their multi-billion dollar valuations while the news organizations continue to teeter on the brink of bankruptcy. It wouldn’t be the first time tech companies out-maneuvered (online advertising) or lied to news organizations.”
Unfortunately creative types are probably screwed. What’s funny is that Carson is a John S. Knight Journalism Fellow at Stanford. It’s the same school whose president manipulated content to advance his career. How many of these deep suckers are graduates of this esteemed institution? Who teaches copyright basics? Maybe an AI system?
Whitney Grace, September 11, 2025
AI a Security Risk? No Way or Is It No WAI?
September 11, 2025
Am I the only one who realizes that AI is a security problem? Okay, I’m not, but organizations certainly aren’t taking AI security breaches seriously, says VentureBeat in the article, “Shadow AI Adds $670K To Breach Costs While 97% Of Enterprises Skip Basic Access Controls, IBM Reports.” IBM collected information with the Ponemon Institute (does anyone else read that as Pokémon Institute?) about data breaches related to AI. IBM and the Ponemon Institute conducted 3,470 interviews across 600 organizations that had data breaches.
Shadow AI is the unauthorized use of AI tools and applications. IBM shared how shadow AI affects organizations in its Cost of a Data Breach Report. Unauthorized usage of AI tools cost organizations $4.63 million, and that is 16% more than the $4.44 million global average. YIKES! Another frightening statistic is that 97% of the organizations lacked proper AI access controls. Only 13% reported AI-security related breaches, compared to 8% who were unaware whether AI had compromised their systems.
Bad actors are using supply chains as their primary attack vector, and AI allows them to automate tasks to blend in with regular traffic. If you want to stay awake at night, here are some more numbers:
“A majority of breached organizations (63%) either don’t have an AI governance policy or are still developing one. Even when they have a policy, less than half have an approval process for AI deployments, and 62% lack proper access controls on AI systems.”
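What does an “access control on an AI system” look like in practice? Here is a minimal sketch, our own illustration rather than anything from the IBM report, of a role check that a gateway might run before forwarding a request to an internal AI tool. The role names, tool names, and audit log are assumptions invented for the example.

```python
# Minimal illustration (not from the IBM report): a deny-by-default role check
# that runs before a user's request is forwarded to an internal AI tool.
# Role names, tool names, and the audit log are invented for this sketch.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_gateway")

# Which roles may call which AI tools; anything not listed is denied.
ALLOWED_ROLES = {
    "summarizer": {"analyst", "support"},
    "code-assistant": {"engineering"},
}

@dataclass
class Request:
    user: str
    role: str
    tool: str

def authorize(req: Request) -> bool:
    """Return True only if the user's role is approved for the requested tool."""
    allowed = req.role in ALLOWED_ROLES.get(req.tool, set())
    audit_log.info("user=%s role=%s tool=%s allowed=%s",
                   req.user, req.role, req.tool, allowed)
    return allowed

print(authorize(Request("alice", "analyst", "summarizer")))      # True: approved role
print(authorize(Request("bob", "marketing", "code-assistant")))  # False: denied and logged
```

The point is the pairing: a deny-by-default list plus a log entry for every attempt, which is roughly what the 62% in the quote above are missing.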
An expert said this about the issue:
“This pattern of delayed response to known vulnerabilities extends beyond AI governance to fundamental security practices. Chris Goettl, VP Product Management for Endpoint Security at Ivanti, emphasizes the shift in perspective: ‘What we currently call ‘patch management’ should more aptly be named exposure management—or how long is your organization willing to be exposed to a specific vulnerability?’”
Organizations that are aware of AI breaches and have security plans in place save more money.
It pays to be prepared, and it is cheaper too!
Whitney Grace, September 11, 2025
Microsoft: The Secure Discount King
September 10, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
Let’s assume that this story in The Register is dead accurate. Let’s forget that Google slapped the $0.47 price tag on its Gemini smart software. Now let’s look at the interesting information in “Microsoft Rewarded for Security Failures with Another US Government Contract.” Snappy title. But check out the sub-title for the article: “Free Copilot for Any Agency Who Actually Wants It.”
I did not know that a US government agency was human, as signaled by the “who.” But let’s push forward.
The article states:
The General Services Administration (GSA) announced its new deal with Microsoft on Tuesday, describing it as a “strategic partnership” that could save the federal government as much as $3.1 billion over the next year. The GSA didn’t mention specific discount terms, but it said that services, including Microsoft 365, Azure cloud services, Dynamics 365, Entra ID Governance, and Microsoft Sentinel, will be cheaper than ever for feds. That, and Microsoft’s next-gen Clippy, also known as Copilot, is free to access for any agency with a G5 contract as part of the new deal, too. That free price undercuts Google’s previously cheapest-in-show deal to inject Gemini into government agencies for just $0.47 for a year.
Will anyone formulate the hypothesis that Microsoft and Google are providing deep discounts to get government deals and the ever-popular scope changes, engineering services, and specialized consulting fees?
I would not.
I quite like comparing Microsoft’s increasingly difficult-to-explain OpenAI, acqui-hire, and home-grown smart software to Clippy. I think that the more apt comparison is the outstanding Microsoft Bob solution to interface complexity.
The article explains that Oracle landed contracts with a discount, then Google, and now Microsoft. What about the smaller firms? Yeah, there are standard procurement guidelines for those outfits. Follow the rules and stop suggesting that giant companies are discounting their way into the US government.
What happens if these solutions hallucinate, do not deliver what an Inspector General, an Independent Verification & Validation team, or the General Accounting Office expects? Here’s the answer:
With the exception of AWS, all the other OneGov deals that have been announced so far have a very short shelf life, with most expirations at the end of 2026. Critics of the OneGov program have raised concerns that OneGov deals have set government agencies up for a new era of vendor lock-in not seen since the early cloud days, where one-year discounts leave agencies dependent on services that could suddenly become considerably more expensive by the end of next year.
The write up quotes one smaller outfit’s senior manager’s concern about low prices. But the deals are done, and the work on the 2026-2027 statements of work has begun, folks. Small outfits often lack the luxury of staff dedicated to extending a service provider’s engagement into a year or two renewal target.
The write up concludes by bringing up ancient history like those pop archaeologists on YouTube who explain that ancient technology created urns with handles. The write up says:
It was mere days ago that we reported on the Pentagon’s decision to formally bar Microsoft from using China-based engineers to support sensitive cloud services deployed by the Defense Department, a practice Defense Secretary Pete Hegseth called “mind-blowing” in a statement last week. Then there was last year’s episodes that allowed Chinese and Russian cyber spies to break into Exchange accounts used by high-level federal officials and steal a whole bunch of emails and other information. That incident, and plenty more before it, led former senior White House cyber policy director AJ Grotto to conclude that Microsoft was an honest-to-goodness national security threat. None of that has mattered much, as the feds seem content to continue paying Microsoft for its services, despite wagging their finger at Redmond for “avoidable errors.”
Ancient history or aliens? I don’t know. But Microsoft does deals, and it is tough to resist “free”.
Stephen E Arnold, September 10, 2025
Google Does Its Thing: Courts Vary in their Views of the Outfit
September 10, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I am not sure I understand how the US legal system, or any other legal system, works. A legal procedure headed by a somewhat critical judge has allowed the Google to keep on doing what it is doing: selling ads, collecting personal data, and building walled gardens even if they encroach on a kiddie playground.
However, at the same time, the Google was found to be a bit too frisky in its elephantine approach to business.
The first example is that Google was found liable for collecting user data even when users disabled the data collection. The details of this gross misunderstanding of how the superior thinkers at Google interpreted assorted guidelines and user settings appear in “Jury Slams Google Over App Data Collection to Tune of $425 Million.” Now to me that sounds like a lot of money. To the Google, it is a cash flow issue which can be addressed by negotiation, slow administrative response, and consulting firm speak. The write up says:
Google attorney Benedict Hur of Cooley LLP told jurors Google “certainly thought” it had permission to access the data. He added that Google lets users know it will continue to collect certain types of data, even if they toggle off web activity.
Quite an argument.
The other write up with some news about Google behavior is “France Fines Google, Shein Record Sums over Cookie Law Violations.” I found this passage in the write up interesting:
France’s data protection watchdog CNIL on Wednesday fined Google €325 million ($380 million) and fast-fashion retailer Shein €150 million ($175 million) for violating cookie rules. The record penalties target two platforms with tens of millions of French users, marking among the heaviest sanctions the regulator has imposed.
Several observations are warranted:
- Google is manifesting behavior similar to the China-linked outfit Shein. Who is learning from whom?
- Some courts find Google problematic; other courts think that Google is just doing okay Googley things.
- A showdown may occur from outside the United States if a nation state just gets fed up with Google doing exactly whatever it wants.
I wonder if anyone at Google is thinking about hassling the French judiciary in the remainder of 2025 and into 2026. If so, it may be instructive to recall how the French judiciary addressed a 13-year-old case of digital Toxic Epidermal Necrolysis. Pavel Durov was arrested, interrogated for four days, and must report to French authorities every couple of weeks. His legal matter is moving through a judicial system noted for its methodical and red-tape choked processes.
Fancy a nice dinner in Paris, Google?
Stephen E Arnold, September 10, 2025
Cloud Storage: Working Really Well Most of the Time
September 10, 2025
If true, cloud services are outstanding. Do Microsoft’s cloud and Azure behave like this?
We at Beyond Search love the cloud. You love the cloud. Everyone loves the cloud. Except when the cloud deletes your entire life’s work. That’s what happened to one unfortunate soul, according to a Seuros blog post shared via Windows Central: “AWS Data Crisis: Engineer Restores 10 Years of Work Thanks To A Compassionate Insider.”
The victim is known as Abdelkader Boudih (aka Seuros), and he saved a lot of developer tools on the AWS cloud so his desktop wouldn’t be crowded. Here is a description of the situation:
“When AWS deleted my account, they didn’t just hurt me. They hurt every developer who uses my gems. Every student who could have learned from those tutorials. Every future contribution that won’t happen because my workflow is destroyed.”
Darn.
Boudih stated he had backups of his backups and followed all proper procedures, but he didn’t expect AWS to be a problem. The scenario began with AWS asking Boudih for verification, but he didn’t see the request until it was past expiration. He then had to send in a bill and a copy of his ID. AWS said the files were unreadable. His account then went bye-bye.
There’s a ninety-day grace period before AWS deletes all data. He spoke with customer support and never received straight answers. He did receive emails asking him to rate AWS’s service and give it five stars. Brilliant!
Anyone else recognize the frustration?
Here’s the conspiracy theory:
“This is no doubt in response to Boudih’s claims that an AWS insider had reached out shortly after the Seuros blog post began circulating publicly.
The insider suggested that AWS MENA (the second acronym stands for Middle East and North Africa) was "running some kind of proof of concept on ‘dormant’ and ‘low-activity’ accounts." It wasn’t just Boudih’s account that was affected.
It gets technical from this point on, but it basically boils down to the assumption that an AWS developer typed the wrong command and ended up deleting accounts that were still very much in use, like Boudih’s.
There’s no real proof that any of this happened, but Boudih points to the slow progress and ineffective feedback from support as explanations for a potential cover-up.”
The lesson to be learned here is to never rely solely on third-party storage vendors. Doesn’t anyone use external hard drives anymore? Of course not, the cloud is just there. What, me worry?
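For the external hard drive crowd, here is a minimal sketch, assuming a stock boto3 setup with AWS credentials already configured, of pulling a copy of an S3 bucket down to a local drive. The bucket name and destination path are placeholders, not anything from Boudih’s account.

```python
# Minimal sketch: copy every object in an S3 bucket to a local folder.
# Assumes boto3 is installed and AWS credentials are already configured.
# The bucket name and destination path below are placeholders.
import os
import boto3

BUCKET = "my-example-bucket"
DESTINATION = "/mnt/external-drive/s3-backup"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):          # skip folder placeholder objects
            continue
        local_path = os.path.join(DESTINATION, key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(BUCKET, key, local_path)
        print(f"copied {key}")
```

A real backup routine would also verify checksums and keep more than one copy, but the idea stands: the copy on your own disk is the one a cloud vendor cannot delete.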
Whitney Grace, September 10, 2025
Google Monopoly: A Circle Still Unbroken
September 9, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I am no lawyer, and I am not sure how the legal journey will unfold for the Google. I assume Google is still a monopoly. Google, however, is not happy with the recent court decision that appears to be a light tap on Googzilla’s snout. The snow is not falling and no errant piece of space junk has collided with the Mountain View campus.
I did notice a post on the Google blog with a cute URL. The words “outreach-initiatives,” “public policy,” and “DOJ search decision” speak volumes to me.
The post carries this Google title, well, a Googley command:
Read our statement on today’s decision in the case involving Google Search
Okay, snap to it. The write up instructs:
Competition is intense and people can easily choose the services they want. That’s why we disagree so strongly with the Court’s initial decision in August 2024 on liability.
Okay, no em dashes, so Gemini did not write the sentence, although it may contain some words rarely associated with Googley things. These are words like “easily choose.” Hey, I thought Google was a monopoly. The purpose of the construct is to take steps to narrow choice. The Chicago stockyards use fences, guides, and designated killing areas. But the cows don’t have a choice. The path is followed and the hammer drops. Thonk.
The write up adds:
Now the Court has imposed limits on how we distribute Google services, and will require us to share Search data with rivals. We have concerns about how these requirements will impact our users and their privacy, and we’re reviewing the decision closely.
The logic is pure blue chip consultant with a headache. I like the use of the word “imposed.” Does Google impose on its users, for instance, irrelevant search results, filtered YouTube videos, or the roll up of user-generated information in Google services? Of course not; a Google user can easily choose which videos to view on YouTube. A person looking for information can easily choose to access Web content on another Web search system. Just use Bing, Ecosia, or Phind. I like “easily.”
What strikes me is the command language and the huffiness about the decision.
Wow, I love Google. Is it a monopoly? Definitely not Android or Chrome. Ads? I don’t know. Probably not.
Stephen E Arnold, September 9, 2025
First, Let Us Kill Relevance for Once and For All. Second, Just Use Google
September 9, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
In the long distant past, Danny Sullivan was a search engine optimization-oriented journalist. I think he was involved with an outfit called Search Engine Land. He gave talks and had an animated dinosaur as his cursor. I recall liking the dinosaur. On August 29, 2025, Search Engine Land published a story unthinkable years ago when Google was the one and only game in town.
The article “ChatGPT, AI Tools Gain Traction as Google Search Slips: Survey” says:
“AI tool use is accelerating in everyday search, with ChatGPT use nearly tripling while Google’s share slips, survey of US users finds.”
But Google just sold the US government the Gemini system at $0.47 per head. How can these procurement people have gone off track? The write up says:
Google’s role in everyday information seeking is shrinking, while AI tools – particularly ChatGPT – are quickly gaining ground. That’s according to a new Higher Visibility survey of 1,500 U.S. users.
And here’s another statement that caught my eye:
Search behavior is fractured, which means SEOs cannot rely on Google Search alone (though, to be clear, SEO for Google remains as critical as ever). Therefore, SEO/GEO strategies now must account for visibility across multiple AI platforms.
I wonder if relevant search results will return? Of course not, one must optimize content for the new world of multiple AI platforms.
A couple of questions:
- If AI is getting uptake, won’t that uptake help out Google too?
- Who are the “users” in the survey sample? Is the sample valid? Are the data reliable?
- Is the need for SEO an accurate statement? SEO helped destroy relevance in search results. Aren’t these folks satisfied with their achievement to date?
I think I know the answers to these questions. But I am content to just believe everything Search Engine Land says. I mean, the business of marketing SEO and eliminating relevance when seeking answers online is undergoing change. Change means many things. Some of these issues are beyond the ken of the big thinkers at Search Engine Land in my opinion. But that’s irrelevant and definitely not SEO.
Stephen E Arnold, September 9, 2025
Google and Its Reality Dictating Machine: What Is a Fact?
September 9, 2025
I’m not surprised by this. I don’t understand why anyone would be surprised by this story from Neoscope: “Doctors Horrified After Google’s Healthcare AI Makes Up A Body Part That Does Not Exist In Humans.” Healthcare professionals are worried about the widespread use of AI tools in their industry. These tools are error prone and chock full of bugs. In other words, these bots are making up facts and lies and making them seem convincing.
It’s called hallucinating.
A recent example of an AI error involves Google’s Med-Gemini, and it took an entire year before anyone discovered it. The false information was published in a May 2024 research paper from Google that ironically discussed the promise of Med-Gemini analyzing brain scans. The AI “identified” an “old left basilar ganglia infarct” in the scans, but the “basilar ganglia” does not exist in the human body; the model apparently conflated the basal ganglia with the basilar artery. Google never fixed its research paper.
Hallucinations are dangerous in humans but they’re much worse in AI because they won’t be confined to a single source.
“It’s not just Med-Gemini. Google’s more advanced healthcare model, dubbed MedGemma, also led to varying answers depending on the way questions were phrased, leading to errors some of the time. ‘Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine,’ Judy Gichoya, Emory University associate professor of radiology and informatics, told The Verge.
Other experts say we’re rushing into adapting AI in clinical settings — from AI therapists, radiologists, and nurses to patient interaction transcription services — warranting a far more careful approach.”
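The complaint that these models “tend to make up things” and never say “I don’t know” suggests one crude guardrail: check generated terms against a controlled vocabulary and flag anything that is not on the list. Here is a minimal sketch; the tiny term list and the extraction step it assumes are our own invention, not anything Google or Emory uses.

```python
# Minimal sketch of a vocabulary guardrail for generated clinical text.
# The term list is a tiny, made-up stand-in for a real controlled vocabulary
# (a curated anatomy/finding lexicon); it is illustration only.
KNOWN_TERMS = {"basal ganglia", "basilar artery", "infarct", "thalamus", "cerebellum"}

def review_terms(candidate_terms, vocabulary):
    """Return candidate terms absent from the controlled vocabulary.
    Anything returned should trigger a 'please verify' path rather than
    being passed along to a clinician as fact."""
    return [term for term in candidate_terms if term.lower() not in vocabulary]

# Hypothetical terms pulled from the model's report (extraction step not shown).
candidates = ["basilar ganglia", "infarct"]
print(review_terms(candidates, KNOWN_TERMS))  # -> ['basilar ganglia']
```

Flagging “basilar ganglia” does not make the model smarter; it just routes the phrase to a human reviewer instead of into a patient chart.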
A wise fictional character once said, “Take risks! Make mistakes! Get messy!” In other words, say “I don’t know!” Could this quirk kill people? Duh.
Whitney Grace, September 9, 2025
Innovation Is Like Gerbil Breeding: It Is Tough to Produce a Panda
September 8, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
The problem of innovation is a tough one. I remember getting a job from a top dog at the consulting firm silly enough to employ me. The task was to chase down the Forbes Magazine list of companies ranked by how much they spend on innovation. I recall that the goal was to create an “estimate” or what would be a “model” today of what a company of X size should be spending on “innovation.”
Do that today for an outfit like OpenAI or one of the other US efforts to deliver big money via the next big thing and the result is easy to express; namely, every available penny is spent trying to create something new. Yep, spend the cash innovating. Think it, and the “it” becomes real. Build “it,” and the “it” draws users with cash.
A recent and somewhat long essay plopped in my “Read file.” The article is titled “We’ve Lost the Plot with Smartphones.” (The write up requires signing up and/or paying for access.)
The main idea of the essay is that smartphones, once heralded as revolutionary devices for communication and convenience, have evolved into tools that undermine our attention and well-being. I agree. However, innovation may not fix the problem. In my view, the fix may be an interesting effort, but as long as there are gizmos, the status quo will return.
The essay suggests that the innovation arc of devices like a toaster or the mobile phone solves problems or adds obvious convenience for a user otherwise unfamiliar with the device. As Steve Jobs suggested, users have to see and use a device. Words alone don’t do the job. Pushing deck chairs around a technology yacht does not add much to the value of the device. This is the “me too” approach to innovation or what is often called “featuritis.”
Several observations:
- Innovations often arise without warning, no matter what process is used
- The US is supporting “old” businesses, and other countries are pushing applied AI, which may be a better bet
- Big money innovation usually surfs on months, years, or decades of previous work. Once that previous work is exhausted, the brutal odds of innovation success kick in. A few winners will emerge from many losers.
One of the oddities is the difficulty of identifying a significant or substantive innovation. That seems to be as difficult as setting up a system to generate innovation. In short, technology innovation reminds me of gerbils. Start with a few and quickly have lots of gerbils. The problem is that you have gerbils and what you want is something different.
Good luck.
Stephen E Arnold, September 8, 2025

