Train Models on Hostility Oriented Slop and You Get Happiness? Nope, Nastiness on Steroids
November 10, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Informed discourse, factual reports, and the type of rhetoric loved by Miss Spurling in 1959 can have a positive effect. I wasn’t sure at the time if I wanted to say “Whoa, Nelly” to my particular style of writing and speaking. She did her best to convert me from a somewhat weird 13-year-old into a more civilized creature. She failed, I fear.

A young student is stunned by the criticism of his approach to a report. An “F”. Failure. The solution is not to listen. Some AI vendors take the same approach. Thanks, Venice.ai, good enough.
When I read “Study: AI Models Trained on Clickbait Slop Result In AI Brain Rot, Hostility,” I thought about Miss Spurling and her endless supply of red pencils. Algorithms, it seems, have some of the characteristics of an immature young person. Feed that young person some unusual content, and you get some wild and crazy outputs.
The write up reports:
To see how these [large language] models would “behave” after subsisting on a diet of clickbait sewage, the researchers cobbled together a sample of one million X posts and then trained four different LLMs on varying mixtures of control data (long form, good faith, real articles and content) and junk data (lazy, engagement chasing, superficial clickbait) to see how it would affect performance. Their conclusion isn’t too surprising; the more junk data that is fed into an AI model, the lower quality its outputs become and the more “hostile” and erratic the model is …
But here’s the interesting point:
They also found that after being fed a bunch of ex-Twitter slop, the models didn’t just get “dumber”, they were (shocking, I know) far more likely to take on many of the nastier “personality traits” that now dominate the right wing troll platform …
The write up makes a point about the wizards creating smart software; to wit:
The problem with AI generally is a decidedly human one: the terrible, unethical, and greedy people currently in charge of it’s implementation (again, see media, insurance, countless others) — folks who have cultivated some unrealistic delusions about AI competency and efficiency (see this recent Stanford study on how rushed AI adoption in the workforce often makes people less efficient).
I am not sure that the highly educated experts at Google-type AI companies would agree. I did not agree with Miss Spurling. On many points, she was correct. Adolescent thinking produces some unusual humans as well as interesting smart software. I particularly like some of the newer use cases; for instance, driving some people wacky or appealing to the underbelly of human behavior.
Net net: Scale up, shut up, and give up.
Stephen E Arnold, November 10, 2025
Mobile Hooking People: Digital Drugs
November 10, 2025
Most of us know that spending too much time on our phones is a bad idea, especially for young minds. We also know the companies on the other end profit from keeping us glued to the screen. The Conversation examines the ways “Smartphones Manipulate our Emotions and Trigger our Reflexes – No Wonder We’re Addicted.” Yes, try taking a 12-year-old’s mobile phone away and let us know how that goes.
Of course, social media, AI chatbots, games, and other platforms have their own ways of capturing our attention. This article, however, focuses on ways the phones themselves manipulate users. Author Stephen Monteiro writes:
“As I argue in my newly published book, Needy Media: How Tech Gets Personal, our phones — and more recently, our watches — have become animated beings in our lives. These devices can build bonds with us by recognizing our presence and reacting to our bodies. Packed with a growing range of technical features that target our sensory and psychological soft spots, smartphones create comforting ties that keep us picking them up. The emotional cues designed into these objects and interfaces imply that they need our attention, while in actuality, the devices are soaking up our data.”
The write-up explores how phones’ responsive features, like facial recognition, geolocation, touchscreen interactions, vibrations and sounds, and motion and audio sensing, combine to build a potent emotional attachment. Meanwhile, devices have drastically increased how much information they collect and when. They constantly record data on everything we do on our phones and even in our environments. One chilling example: With those sensors, software can build a fairly accurate record of our sleep patterns. Combine that with health and wellness apps, and app-makers get a surprisingly comprehensive picture. Have you seen any eerily insightful ads for fitness, medical, or mindfulness products lately? Soon, they may even be able to gauge our emotions through analysis of our facial expressions. Just what we need.
Given a cell phone is pretty much required to navigate life these days, what are we to do? Monteiro suggests:
“We can access device settings and activate only those features we truly require, adjusting them now and again as our habits and lifestyles change. Turning on geolocation only when we need navigation support, for example, increases privacy and helps break the belief that a phone and a user are an inseparable pair. Limiting sound and haptic alerts can gain us some independence, while opting for a passcode over facial recognition locks reminds us the device is a machine and not a friend. This may also make it harder for others to access the device.”
If these measures do not suffice, one can go retro with a “dumb” phone. Apparently, that is a trend among Gen Z. Perhaps there is hope for humanity yet.
Cynthia Murrell, November 10, 2025
News Flash: Smart Software Can Be Truly Stupid about News
November 10, 2025
Chatbots Are Wrong About News
Do you receive your news via your favorite chatbot? It doesn’t matter which one is your favorite, because most of the time you’re being served misinformation. While the Trump and Biden administrations went crazy about the spread of fake news, they weren’t entirely wrong. ZDNet reports: “Get Your News From AI? Watch Out – It’s Wrong Almost Half The Time.”
The European Broadcasting Union and the BBC discovered that popular chatbots are incorrectly reporting the news. The BBC and the EBU had journalists study media in eighteen countries and fourteen languages from Perplexity, Copilot, Gemini, and ChatGPT. Here are the results:
“The researchers found that close to half (45%) of all of the responses generated by the four AI systems “had at least one significant issue,” according to the BBC, while many (20%) “contained major accuracy issues,” such as hallucination — i.e., fabricating information and presenting it as fact — or providing outdated information. Google’s Gemini had the worst performance of all, with 76% of its responses containing significant issues, especially regarding sourcing.”
The implications are obvious to the smart thinker: distorted information. Thankfully, Reuters found that only 7% of adults received all of their news from AI sources; the figure was higher, 15%, for those under age twenty-five. More than three-quarters of adults never turn to chatbots for their news.
Why is anyone surprised by this? More importantly, why aren’t the big news outlets, including those on the far left and right, sharing this information? I thought these companies were worried about going out of business because of chatbots. Why aren’t they reporting on this story?
Whitney Grace, November 7, 2025
Cyber Security: Do the Children of Shoemakers Have Yeezys or Sandals?
November 7, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
When I attended conferences, I liked to stop at the exhibitor booths and listen to the sales pitches. I remember one event held in a truly shabby hotel in Tyson’s Corner. The vendor whose name escapes me explained that his firm’s technology could monitor employee actions, flag suspicious behaviors, and virtually eliminate insider threats. I stopped at the booth the next day and asked, “How can your monitoring technology identify individuals who might flip the color of their hat from white to black?” The answer was, “Patterns.” I found the response interesting because virtually every cyber security firm with whom I have interacted over the years talks about patterns.

Thanks, OpenAI. Good enough.
The problem is that these mostly brute-force methods, such as flagging that employee A tried to access a Dark Web site known for selling malware, work only if the bad actor is clueless. But what happens if the bad actors were actually wearing white hats, riding white stallions, and saying, “Hi ho, Silver, away”?
Here’s the answer: “Prosecutors allege incident response pros used ALPHV/BlackCat to commit string of ransomware attacks.” The write up explains that “cybersecurity turncoats attacked at least five US companies while working for” cyber security firms. Here’s an interesting passage from the write up:
Ryan Clifford Goldberg, Kevin Tyler Martin and an unnamed co–conspirator — all U.S. nationals — began using ALPHV, also known as BlackCat, ransomware to attack companies in May 2023, according to indictments and other court documents in the U.S. District Court for the Southern District of Florida. At the time of the attacks, Goldberg was a manager of incident response at Sygnia, while Martin, a ransomware negotiator at DigitalMint, allegedly collaborated with Goldberg and another co-conspirator, who also worked at DigitalMint and allegedly obtained an affiliate account on ALPHV. The trio are accused of carrying out the conspiracy from May 2023 through April 2025, according to an affidavit.
How long did the malware attacks persist? Just from May 2023 until April 2025.
Obviously, the purpose of the bad behavior was money. But the key point is that, according to the article, “he was recruited by the unnamed co-conspirator.”
And that, gentle reader, is how bad actors operate. Money pressure, some social engineering probably at a cyber security conference, and a pooling of expertise. I am not sure that insider threat software can identify this type of behavior. The evidence is that multiple cyber security firms employed these alleged bad actors and the scam was afoot for more than 20 months. And what about the people who hired these individuals? That screening seems to be somewhat spotty, doesn’t it?
Several observations:
- Cyber security firms themselves are not able to operate in a secure manner
- Trust in Fancy Dan software may be misplaced. Managers and co-workers need to be alert and have a way to communicate suspicions in an appropriate way
- The vendors of insider threat detection software may want to provide some hard proof that their systems operate when hats change from white to black.
Everyone talks about the boom in smart software. But cyber security is undergoing a similar economic gold rush. This example, if it is indeed accurate, indicates that companies may develop, license, and use cyber security software. Does it work? I suggest you ask the “leadership” of the firms involved in this legal matter.
Stephen E Arnold, November 7, 2025
How Frisky Will AI Become? Users Like Frisky… a Lot
November 7, 2025
OpenAI promised to create technology that would benefit humanity, much like Google and other Big Tech companies. We know how that has gone. Much to the worry of its team, OpenAI released a TikTok-like app powered by AI. What could go wrong? Well, we’re still waiting to see the fallout, but TechCrunch shares the possibilities in the story “OpenAI Staff Grapples With The Company’s Social Media Push.”
OpenAI is headed into social media because that is where the money is. The push for social media comes from OpenAI’s bigwigs. The new TikTok-like app is called Sora 2, and it has an AI-based feed. Past and present employees are concerned about how Sora 2 will benefit humanity. They worry that Sora 2 will serve consumers more AI slop, the equivalent of digital brain junk food, instead of benefitting humanity. Even OpenAI’s CEO Sam Altman is astounded by the amount of money allotted to AI social media projects:
“We do mostly need the capital for build [sic] AI that can do science, and for sure we are focused on AGI with almost all of our research effort,” said Altman. “It is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need.” “When we launched chatgpt there was a lot of ‘who needs this and where is AGI,’” Altman continued. “[R]eality is nuanced when it comes to optimal trajectories for a company.”
Here’s another quote about the negative effects of AI:
“One of the big mistakes of the social media era was [that] the feed algorithms had a bunch of unintended, negative consequences on society as a whole, and maybe even individual users. Although they were doing the thing that a user wanted — or someone thought users wanted — in the moment, which is [to] get them to, like, keep spending time on the site.”
Let’s start taking bets on how long it will take bad actors to transform Sora 2 into a quite frisky service.
Whitney Grace, November 7, 2025
Govini? Another Palantir Technologies?
November 7, 2025
Good news. Another Palantir. Just what we need. CNBC reports, “Govini, a Defense Tech Startup Taking on Palantir, Hits $100 Million in Annual Recurring Revenue.” Writer Samantha Subin tells us:
“Govini, a defense tech software startup taking on the likes of Palantir, has blown past $100 million in annual recurring revenue, the company announced Friday. ‘We’re growing faster than 100% in a three-year CAGR, and I expect that next year we’ll continue to do the same,’ CEO Tara Murphy Dougherty told CNBC’s Morgan Brennan in an interview. With how ‘big this market is, we can keep growing for a long, long time, and that’s really exciting.’ CAGR stands for compound annual growth rate, a measurement of the rate of return. The Arlington, Virginia-based company also announced a $150 million growth investment from Bain Capital. It plans to use the money to expand its team and product offering to satisfy growing security demands.”
A former business-development leader at Palantir, Dougherty says her current firm is aiming for a “vertical slice” of the defense tech field. We learn:
“The 14-year-old Govini has already secured a string of big wins in recent years, including an over $900-million U.S. government contract and deals with the Department of War. Govini is known for its flagship AI software Ark, which it says can help modernize the military’s defense tech supply chain by better managing product lifecycles as military needs grow more sophisticated.”
The CEO asserts that China’s dominance in rare earths and processed minerals and its faster shipbuilding capacity are reasons to worry. Sounds familiar. However, she believes an efficient and effective procurement system like Ark can provide an advantage for the US. Perhaps. But does it come with sides of secrecy, surveillance, and influence a la Palantir? Stay tuned.
Cynthia Murrell, November 7, 2025
Myanmar Direct Action: Online Cyber Crime Meets Kinetics
November 7, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Stragglers from Myanmar Scam Center Raided by Army Cross into Thailand As Buildings Are Blown Up.” In August 2024, France detained Pavel Durov at a Paris airport. The direct action worked. Telegram has been wobbling. Myanmar, perhaps learning from the French decision to arrest Mr. Durov, shut down an online fraud operation. The Associated Press reported on October 28, 2025: “The KK Park site, identified by Thai officials and independent experts as housing a major cybercrime operation, was raided by Myanmar’s army in mid-October as part of operations starting in early September to suppress cross-border online scams and illegal gambling.”
News reports and white papers from the United Nations make clear that sites like KK Park are more like industrial estates. Dormitories, office space, and eating facilities are provided. “Workers” or captives remain within the defined area. The Golden Triangle region strikes me as a Wild West for a range of cyber crimes, including pig butchering and industrial-scale phishing.
The geographic names and the details of the different groups in an area with competing political groups can be confusing. However, what is clear is that Myanmar’s military assaulted the militia groups protecting the facilities. Reports of explosions and people fleeing the area have become public. The cited news report says that Myanmar has been a location known to be tolerant or indifferent to certain activities within its borders.
Will Myanmar take action against other facilities believed to be involved in cyber crime? KK Park is just one industrial campus from which threat actors conduct their activities. Is Myanmar’s response a signal that law enforcement is fed up with certain criminal activity and moving with directed prejudice at certain operations? Will other countries follow the French and Myanmar method?
The big question is, “What caused Myanmar to focus on KK Park?” Will Cambodia, Lao PDR, and Thailand follow the French view that enough is enough and advance to physical engagement?
Stephen E Arnold, November 7, 2025
Copilot in Excel: Brenda Has Another Problem
November 6, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
Simon Willison posted an interesting snippet from a person whom I don’t know. The handle is @belligerentbarbies, a member of TikTok. You can find the post “Brenda” on Simon Willison’s Weblog. The main idea in the write up is that a person in accounting or finance assembles an Excel worksheet. In many large outfits, the worksheets are templates or set up to allow the enthusiastic MBA to plug in a few numbers. Once the numbers are “in,” the bright overachiever hits Shift F9 to recalculate the single worksheet. If it looks okay, the MBA mashes F9 and updates the linked spreadsheets. Bingo! A financial services firm has produced the numbers needed to slap into a public or private document. But, and here’s the best part…

Thanks, Venice.ai. Good enough.
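The recalculation workflow described above (refresh one sheet, then let the change ripple through every linked sheet) can be sketched as a toy dependency walk. The sheet names and formulas below are invented for illustration; real Excel tracks dependencies at the cell level, not per sheet:

```python
# Toy model of linked-worksheet recalculation (illustrative only; the
# sheet names, formulas, and link structure here are made up).

# Each "sheet" is a function of the values of the sheets it links to.
formulas = {
    "revenue": lambda v: 100.0,                     # raw input
    "costs": lambda v: 60.0,                        # raw input
    "margin": lambda v: v["revenue"] - v["costs"],  # links two sheets
    "summary": lambda v: v["margin"] / v["revenue"],
}

# Which sheets each sheet pulls values from (the "links").
links = {
    "revenue": [],
    "costs": [],
    "margin": ["revenue", "costs"],
    "summary": ["margin", "revenue"],
}

def recalc_all(values):
    """Like a full recalculation: recompute every sheet in
    dependency order so no sheet reads a stale linked value."""
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        for dep in links[name]:
            visit(dep)          # dependencies first
        done.add(name)
        order.append(name)

    for name in formulas:
        visit(name)
    for name in order:
        values[name] = formulas[name](values)
    return values

values = recalc_all({})
print(values["margin"], values["summary"])  # 40.0 0.4
```

The point of the sketch: if one input sheet changes and the downstream sheets are not recomputed in the right order, the final “summary” is built from stale numbers, which is exactly the kind of silent error a human reviewer like Brenda exists to catch.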
Before the document leaves the office, a senior professional who has not used Excel checks the spreadsheet. Experience dictates to look at certain important cells of data. If those pass the smell test, then the private document is moved to the next stage of its life. It goes into production so that the high net worth individual, the clued in business reporter, the big customers, and people in the CEO’s bridge group get the document.
Because those “reports” can move a stock up or down or provide useful information about a deal that is not yet in a numbers context, most outfits protect Excel spreadsheets. Heck, even the fill-in-the-blank templates are big time secrets. Each of the investment firms for which I worked over the years follows the same process. Each uses its own custom-tailored, carefully structured set of formulas to produce the quite significant reports, opinions, and marketing documents.
Brenda knows Excel. Most Big Dogs know some Excel, but as these corporate animals fight their way to Carpetland, those Excel skills atrophy. Now Simon Willison’s post enters and references Copilot. The post is insightful because it highlights a process gap. Specifically, if Copilot is involved in an Excel spreadsheet, Copilot might, just might in this hypothetical, make a change. The Big Dog in Carpetland does not catch the change. The Big Dog just sniffs a few spots in the forest or jungle of numbers.
Before Copilot, Brenda or a similar professional was involved. Copilot may make it possible to ignore Brenda and push the report out. If the financial whales make money, life is good. But what happens if the Copilot-tweaked worksheet is hallucinating? I am not talking a few disco biscuits but mind-warping errors whipped up because AI is essentially operating at “good enough” levels of excellence.
Bad things transpire. As interesting as this problem is to contemplate, there’s another angle that the Simon Willison post did not address. What if Copilot is phoning home? The idea is that user interaction with a cloud-based service is designed to process data and add those data to its training process. The AI wizards have some jargon for this “learn as you go” approach.
The issue is, however, what happens if that proprietary spreadsheet or the “numbers” about a particular company find their way into a competitor’s smart output? What if Financial firm A does not know this “process” has compromised the confidentiality of a worksheet? What if Financial firm B spots the information and uses it to its advantage?
Where’s Brenda in this process? Who? She’s been RIFed. What about the Big Dog in Carpetland? That professional is clueless until someone spots the leak and the information ruins what was a calm day with no fires to fight. Now a burning Piper Cub is in the office. Not good, is it?
I know that Microsoft Copilot will be or is positioned as super secure. I know that hypotheticals are just that: Made up thought donuts.
But I think the potential for some knowledge leaking may exist. After all, Copilot, although marvelous, is not Brenda. Clueless leaders in Carpetland are not interested in fairy tales; they are interested in making money, reducing headcount, and enjoying days without a fierce fire ruining a perfectly good Louis XIV desk.
Net net: Copilot, how are you and Brenda communicating? What’s that? Brenda is not answering her company provided mobile. Wow. Bummer.
Stephen E Arnold, November 6, 2025
Fear in Four Flavors or What Is in the Closet?
November 6, 2025
This essay is the work of a dumb dinobaby. No smart software required.
AI fear. Are you afraid to resist the push to make smart software a part of your life? I think of AI as a utility, a bit like old-fashioned enterprise search just on very expensive steroids. One never knows how that drug use will turn out. Will the athlete win trophies or drop from heart failure in the middle of an event?
The write up “Meet the People Who Dare to Say No to Artificial Intelligence” is a rehash of some AI tropes. What makes the write up stand up and salute is a single line in the article. (This is a link from Microsoft. If the link is dead, let one of its caring customer support chatbots know, not me.) Here it is:
Michael, a 36-year-old software engineer in Chicago who spoke on the condition that he be identified only by his first name out of fear of professional repercussions…
I find this interesting. A professional will not reveal his name for fear of “professional repercussions.” I think the subject is algorithms, not politics. I think the subject is neural networks, not racial violence. I think the subject is online, not the behavior of a religious figure.

Two roommates are afraid of a blue light. Very normal. Thanks, Venice.ai. Good enough.
Let’s think about the “fear” of talking about smart software.
I asked AI why a 36-year-old would experience fear. Here’s the short answer from the remarkably friendly, eager AI system:
- Biological responses to perceived threats,
- Psychological factors like imagination and past trauma,
- Personality traits,
- Social and cultural influences.
It seems to me that external and internal factors enter into fear. In the case of talking about smart software, what could be operating? Let me hypothesize for a moment.
First, the person may see smart software as posing a threat. Okay, that’s an individual perception. Everyone can have an opinion. But the fear angle strikes me as a displacement activity in the brain. Instead of thinking about the upside of smart software, the person afraid to talk about a collection of zeros and ones only sees doom and gloom. Okay, I sort of understand.
Second, the person may have some psychological problems. But software is not the same as a seven-year-old afraid there is a demon in the closet. We are back, it seems, to the mysteries of the mind.
Third, the person is fearful of zeros and ones because the person is afraid of many things. Software is just another fear trigger, the way a person uncomfortable around little spiders dreads a great big one like the tarantulas I had to kill with a piece of wood when my father wanted to drive his automobile into our garage in Campinas, Brazil. Tarantulas, it turned out, liked the garage because it was cool and out of the sun. I guess the garage was similar to a Philz Coffee for an AI engineer in Silicon Valley.
Fourth, social and cultural influences cause a person to experience fear. I think of my neighbor approached by a group of young people demanding money and her credit card. Her social group consists of 75-year-old females who play bridge. The youngsters were a group of teenagers hanging out in a parking lot in an upscale outdoor mall. Now my neighbor does not want to go to the outdoor mall alone. Nothing happened, but those social and cultural influences kicked in.
Anyway, fear is real.
Nevertheless, I think smart software fear boils down to more basic issues. One, smart software will cause a person to lose his or her job. The job market is not good; therefore, fear of not paying bills, social disgrace, etc. kick in. Okay, but it seems that learning about smart software might take the edge off.
Two, smart software may suck today, but it is improving rapidly. This is the seven year old afraid of the closet behavior. Tough love says, “Open the closet. Tell me what you see.” In most cases, there is no person in the closet. I did hear about a situation involving a third party hiding in the closet. The kid’s opening the door revealed the stranger. Stuff happens.
Three, if a person was raised in an environment in which fear was a companion, that behavior may carry forward. Boo.
Net net: What is in Mr. AI’s closet?
Stephen E Arnold, November 6, 2025
If You Want to Be Performant, Do AI or Try to Do AI
November 6, 2025
For firms that have invested heavily in AI only to be met with disappointment, three tech executives offer some quality spin. Fortune reports, “Experts Say the High Failure Rate in AI Adoption Isn’t a Bug, but a Feature.” The leaders expressed this interesting perspective at Fortune’s recent Most Powerful Women Summit. Dave Smith writes:
“The panel discussion, titled ‘Working It Out: How AI Is Transforming the Office,’ tackled head-on a widely circulated MIT study suggesting that approximately 95% of enterprise AI pilots fail to pay off. The statistic has fueled doubts about whether AI can deliver on its promises, but the three panelists—Amy Coleman, executive vice president and chief people officer at Microsoft; Karin Klein, founding partner at Bloomberg Beta; and Jessica Wu, cofounder and CEO of Sola—pushed back forcefully on the narrative that failure signals fundamental problems with the technology. ‘We’re in the early innings,’ Klein said. ‘Of course, there’s going to be a ton of experiments that don’t work. But, like, has anybody ever started to ride a bike on the first try? No. We get up, we dust ourselves off, we keep experimenting, and somehow we figure it out. And it’s the same thing with AI.’”
Interesting analogy. Ideally, kiddos learn to ride on a cul-de-sac with supervision, not set loose on the highway. Shouldn’t organizations do their AI experimenting before making huge investments? Or before, say, basing high-stakes decisions in medicine, law enforcement, social work, or mortgage approvals on AI tech? Ethical experimentation calls for parameters, after all. Have those been trampled in the race to adopt AI?
Cynthia Murrell, November 6, 2025

