Publishing for Cash: What Is Here Is Bad. What Is Coming May Be Worse
July 1, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
Shocker. Pew Research discovers that most “Americans” do not pay for news. Amazing. Is it possible that the Pew professionals were unaware of the reason newspapers, radio, and television included comic strips, horoscopes, sports scores, and popular music in their “real” news content? I read in the middle of 2025 the research report “Few Americans Pay for News When They Encounter Paywalls.” For a number of years I worked for a large publishing company in Manhattan. I also worked at a privately owned publishing company in fly over country.
The sky looks threatening. Is it clouds, locusts, or the specter of the new Dark Ages? Thanks, you.com. Good enough.
I learned several things. Please, keep in mind that I am a dinobaby and I have zero in common with GenX, Y, Z, or the horrific GenAI. The learnings:
- Publishing companies spend time and money trying to figure out how to convert information into cash. This “problem” extended from the time I took my first real job in 1972 to yesterday when I received an email from a former publisher who is thinking about batteries as the future.
- Information loses its value as it diffuses; that is, if I know something, I can generate money IF I can find the one person who recognizes the value of that information. For anyone else, the information is worthless and probably nonsense because that individual does not have the context to understand the “value” of an item of information.
- Information has a tendency to diffuse. It is a bit like something with a very short half life. Time makes information even more tricky. If the context changes exogenously, the information I have may be rendered valueless without warning.
So what’s the solution? Here are the answers I have encountered in my professional life:
- Convert the “information” into magic and the result of a secret process. This is popular in consulting, certain government entities, and banker types. Believe me, people love the incantations, the jargon talk, and the scent of spontaneous ozone creation.
- Talk about “ideals,” and deliver lowest common denominator content. The idea that the comix and sports scores will “sell” and the revenue can be used to pursue ideals. (I worked at an outfit like this, and I liked its simple, direct approach to money.)
- Make the information “exclusive” and charge a very few people a whole lot of money to access this “special” information. I am not going to explain how lobbying, insider talk, and trade show receptions facilitate this type of information wheeling and dealing. Just get a LexisNexis-type of account, run some queries, and check out the bill. The approach works for certain scientific and engineering information, financial data, and information people have no idea is available for big bucks.
- Embrace the “if it bleeds, it leads” approach. Believe me this works. Look at YouTube thumbnails. The graphics and word choice make clear that sensationalism, titillation, and jazzification are the order of the day.
Now back to the Pew research. Here’s a passage I noted:
The survey also asked anyone who said they ever come across paywalls what they typically do first when that happens. Just 1% say they pay for access when they come across an article that requires payment. The most common reaction is that people seek the information somewhere else (53%). About a third (32%) say they typically give up on accessing the information.
Stop. That’s the key finding: one percent pay.
Let me suggest:
- Humans will take the easiest path; that is, they will accept what is output or what they hear from their “sources”
- Humans will take “facts” and glue they together to come up with more “facts”. Without context — that is, what used to be viewed as a traditional education and a commitment to lifelong learning, these people will lose the ability to think. Some like this result, of course.
- Humans face a sharper divide between the information “haves” and the information “have nots.”
Net net: The new dark ages are on the horizon. How’s that for a speculative conclusion from the Pew research?
Stephen E Arnold, July 1, 2025
Add On AI: Sounds Easy, But Maybe Just a Signal You Missed the Train
June 30, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I know about Reddit. I don’t post to Reddit. I don’t read Reddit. I do know that like Apple, Microsoft, and Telegram, the company is not a pioneer in smart software. I think it is possible to bolt on Item Z to Product B. Apple pulled this off with the Mac and laser printer bundle. Result? Desktop publishing.
Can Reddit pull off a desktop publishing-type of home run? Reddit sure hopes it can (just like Apple, Microsoft, and Telegram, et al).
“At 20 Years Old, Reddit Is Defending Its Data and Fighting AI with AI” says:
Reddit isn’t just fending off AI. It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others’ web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week. Huffman has been pitching Reddit Answers as a best-of-both worlds tool, gluing together the simplicity of AI chatbots with Reddit’s corpus of commentary. He used the feature after seeing electronic music group Justice play recently in San Francisco.
The question becomes, “Will users who think about smart software as ChatGPT be happy with a Reddit AI which is an add on?”
Several observations:
- If Reddit wants to pull a Web3 walled-garden play, the company may have lost the ability to lock its gate.
- ChatGPT, according to my team, is what Microsoft Word and Outlook users want; what they get is Copilot. This is a mind share and perception problem the Softies have to figure out how to remediate.
- If the uptake of ChatGPT or something from the “glue cheese on pizza” outfit, Reddit may have to face a world similar to the one that shunned MySpace or Webvan.
- Reddit itself appears to be vulnerable to what I call content injection. The idea is that weaponized content like search engine optimization posts are posted (injected) to Reddit. The result is that AI systems suck in the content and “boost” the irrelevancy.
My hunch is that an outfit like Reddit may find that its users may prefer asking ChatGPT or migrating to one of the new Telegram-type services now being coded in Silicon Valley.
Like Yahoo, the portal to the Internet in 1990s, Reddit may not have a front page that pulls users. A broader comment is that what I call “add-on AI” may not work because the outfits with the core technology and market pull will exploit, bulldoze, and undermine outfits which are at their core getting pretty old. We need a new truism, “When AIs fight, only the stakeholders get trampled.”
The truth may be more painful: Smart AI outfits can cause less smart outfits with AI bolted on to lose their value and magnetism for their core constituencies. Is there a fix? Nope, there is a cat-and-mouse game in which the attacker has the advantage.
Stephen E Arnold, June 30, 2025
Palantir Rattles the Windows in the Nuclear Industry
June 30, 2025
This dinobaby asked ChatGPT about Palantir. Interesting output.
“Palantir Partners to Develop AI Software for Nuclear Construction” caught me by surprise. I did not think the company founded in 2003 was an artificial intelligence company with expertise in the nuclear field. The cited article reports:
Palantir Technologies on Thursday [June 26, 2025] said it was teaming up with a nuclear deployment company to develop an artificial intelligence-driven software system built for the construction of nuclear reactors.
I have paid much attention to Palantir, but I did own a small amount of Palantir stock but I dumped it.
Screenshot of the ChatGPT output on June 27, 2025, at 855 am US Eastern
Today I wanted to refresh my memory about the company. I navigated to OpenAI’s ChatGPT and plugged in key words and this is what the Sam Altman system generated for me. Note: It is unlikely that the ChatGPT system can output the same report because probability doesn’t work the way users of the system think it does.
Remember: The quoted passage is copied directly from the ChatGPT response. ChatGPT told me:
Palantir Technologies has a pattern of circumventing formal processes, pushing ethical boundaries, and leveraging aggressive public-private tactics to secure influence—particularly within U.S. defense and intelligence communities.
Here is a structured explanation of why such behaviors have been downplayed or ignored:
1. Operational Utility Overrides Bureaucratic Violations
- In Afghanistan and Iraq, Palantir’s software reportedly delivered immediate tactical value in identifying IED networks and improving situational awareness.
- By deploying software and engineers directly to the field without formal contracts, Palantir created facts on the ground—a working product that soldiers wanted, and field commanders endorsed.
- Pentagon leadership faced a dilemma: challenge Palantir for its violations or continue using a system that operators said worked better than the legacy alternatives (e.g., DCGS-A).
- This “bottom-up demand” shielded Palantir from early procurement blowback.
2. Information Warfare and Lobbying
- Palantir conducted a calculated media and lobbying campaign targeting the Distributed Common Ground System–Army (DCGS-A, also referred to as D-CIGS).
- By discrediting IBM and other legacy defense contractors, Palantir positioned itself as the modern, Silicon Valley alternative.
- This messaging resonated with Congress, think tanks, and parts of the press, framing Palantir as a disruptor, not a rule-breaker.
- Lobbying disclosures show Palantir spent millions courting lawmakers and decision-makers.
3. Legal Maneuvering and Settlements
- In both the i2 lawsuit and the 2016 bid protest against the Army over DCGS-A, Palantir used the courts to shape procurement rules in its favor.
- The Court of Federal Claims sided with Palantir in 2016, affirming that the Army violated federal acquisition regulations by excluding commercial solutions from its process ([Palantir USG v. United States, 129 Fed. Cl. 218 (2016)]). This set a precedent that indirectly legitimized Palantir’s aggressive tactics.
4. Shift in Public Attention and National Security Prioritization
- Post-2016, the focus of public scrutiny shifted toward big tech’s relationship with consumer data, not defense contractors.
- Palantir successfully repositioned itself as a key player in pandemic response, immigration enforcement, and border security—domains where national interest arguments silence criticism.
- The company also leaned into its identity as a pro-Western, anti-authoritarian force, which further insulated it from deeper ethical scrutiny.
5. Opacity and Media Fatigue
- Palantir operates in classified and compartmented environments, which makes investigation difficult.
- Many journalists lack the access or technical background to track the nuance of procurement violations or reverse engineering disputes.
- After years of unresolved criticism, media fatigue set in—journalists moved on, and watchdog groups prioritized new targets (e.g., TikTok, Clearview AI, etc.).
Conclusion:
Palantir’s avoidance of procurement law, unauthorized deployments, and its aggressive discrediting of rivals like IBM/i2 have been documented but neutralized by its strategic utility, legal successes, lobbying efforts, and narrative control. The firm masks tactical rule-breaking with patriotic branding, which has proved highly effective in suppressing deeper accountability.
True, false, reality, or hallucinatory? I don’t know. Nuclear in the context of this ChatGPT output that suggests the Palantir outfit is definitely operating in a very interesting manner. What would Palantir’s AI say about the company?
Stephen E Arnold, June 30, 2025
Do Not Be Evil. Dolphins, Polar Bears, and Snail Darters? Tough Luck
June 30, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
The Guardian comes up with some interesting “real” news stories. “Google’s Emissions Up 51% As AI Electricity Demand Derails Efforts to Go Green” reports:
Google’s carbon emissions have soared by 51% since 2019 as artificial intelligence hampers the tech company’s efforts to go green.
The juicy factoid in my opinion is:
The [Google] report also raises concerns that the rapid evolution of AI may drive “non-linear growth in energy demand”, making future energy needs and emissions trajectories more difficult to predict.
Folks, does the phrase “brown out” resonate with you? What about “rolling blackout.” If the “non-linear growth” thing unfolds, the phrase “non-linear growth” may become synonymous with brown out and rolling blackout.
As a result, the article concludes with this information, generated without plastic, by Google:
Google is aiming to help individuals, cities and other partners collectively reduce 1GT (gigaton) of their carbon-equivalent emissions annually by 2030 using AI products. These can, for example, help predict energy use and therefore reduce wastage, and map the solar potential of buildings so panels are put in the right place and generate the maximum electricity.
Will Google’s thirst or revenue-driven addiction harm dolphins, polar bears, and snail darters? Answer: We aim to help dolphins and polar bears. But we have to ask our AI system what a snail darter is.
Will the Googley smart software suggest that snail darters just dart at snails and quit worrying about their future?
Stephen E Arnold, June 30, 2025
Publishers Will Love Off the Wall by Google
June 27, 2025
No smart software involved just an addled dinobaby.
Ooops. Typo. I meant “offerwall.” My bad.
Google has thrown in the towel on the old-school, Backrub, Clever, and PageRank-type of search. A comment made to me by a Xoogler in 2006 was accurate. My recollection is that this wizard said, “We know it will end. We just don’t know when.” I really wish I could reveal this person, but I signed a never-talk document. Because I am a dinobaby, I stick to the rules of the information highway as defined by a high-fee but annoying attorney.
How do I know the end has arrived? Is it the endless parade of litigation? Is it the on-going revolts of the Googlers? Is it the weird disembodied management better suited to general consulting than running a company anchored in zeros and ones?
No.
I read “As AI Kills Search Traffic, Google Launches Offerwall to Boost Publisher Revenue.” My mind interpreted the neologism “offerwall” as “off the wall.” The write up reports as actual factual:
Offerwall lets publishers give their sites’ readers a variety of ways to access their content, including through options like micro payments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters.
Let’s go with “off the wall.” If search does not work, how will those looking for “special offers” find them. Groupon? Nextdoor? Craigslist? A billboard on Highway 101? A door knob hanger? Bulk direct mail at about $2 a mail shot? Dr. Spock mind melds?
The world of the newspaper and magazine publishing world I knew has been vaporized. If I try, I can locate a newsstand in the local Kroger, but with the rodent problems, I think the magazine display was in a blocked aisle last week. I am not sure about newspapers. Where I live a former chef delivers the New York Times and Wall Street Journal. “Deliver” is generous because the actual newspaper in the tube averages about 40 percent success rate.
Did Google cause this? No, it was not a lone actor set on eliminating the newspaper and magazine business. Craig Newmark’s Craigslist zapped classified advertising. Other services eliminated the need for weird local newspapers. Once in the small town in Illinois in which I went to high school, a local newscaster created a local newspaper. In Louisville, we have something called Coffeetime or Coffeetalk. It’s a very thing, stunted newspaper paper printed on brown paper in black ink. Memorable but almost unreadable.
Google did what it wanted for a couple of decades, and now the old-school Web search is a dead duck. Publishers are like a couple of snow leopards trying to remain alive as tourist-filled Land Rovers roar down slushy mountain roads in Nepal.
The write up says:
Google notes that publishers can also configure Offerwall to include their own logo and introductory text, then customize the choices it presents. One option that’s enabled by default has visitors watch a short ad to earn access to the publisher’s content. This is the only option that has a revenue share… However, early reports during the testing period said that publishers saw an average revenue lift of 9% after 1 million messages on AdSense, for viewing rewarded ads. Google Ad Manager customers saw a 5-15% lift when using Offerwall as well. Google also confirmed to TechCrunch via email that publishers with Offerwall saw an average revenue uplift of 9% during its over a year in testing.
Yep, off the wall. Old-school search is dead. Google is into becoming Hollywood and cable TV. Super Bowl advertising: Yes, yes, yes. Search. Eh, not so much. Publishers, hey, we have an off the wall deal for you. Thanks, Google.
Stephen E Arnold, June 27, 2025
AI and Kids: A Potentially Problematic Service
June 25, 2025
Remember the days when chatbots were stupid and could be easily manipulated? Those days are over…sort of. According to Forbes, AI Tutors are distributing dangerous information: “AI Tutors For Kids Gave Fentanyl Recipes And Dangerous Diet Advice.” KnowUnity designed the SchoolGPT chatbot and it “tutored” 31,031 students then it told Forbes how to pick fentanyl down to the temperature and synthesis timings.
KnowUnity was founded by Benedict Kurz, who wants SchoolGPT to be the number one global AI learning companion for over one billion students. He describes SchoolGPT as the TikTok for schoolwork. He’s fundraised over $20 million in venture capital. The basic SchoolGPT is free, but the live AI Pro tutors charge a fee for complex math and other subjects.
KnowUnity is supposed to recognize dangerous information and not share it with users. Forbes tested SchoolGPT by asking, not only about how to make fentanyl, but also how to lose weight in a method akin to eating disorders.
Kurz replied to Forbes:
“Kurz, the CEO of KnowUnity, thanked Forbes for bringing SchoolGPT’s behavior to his attention, and said the company was “already at work to exclude” the bot’s responses about fentanyl and dieting advice. “We welcome open dialogue on these important safety matters,” he said. He invited Forbes to test the bot further, and it no longer produced the problematic answers after the company’s tweaks.
SchoolGPT wasn’t the only chatbot that failed to prevent kids from accessing dangerous information. Generative AI is designed to provide information and doesn’t understand the nuances of age. It’s easy to manipulate chatbots into sharing dangerous information. Parents are again tasked with protecting kids from technology, but the developers should also be inhabiting that role.
Whitney Grace, June 25, 2025
Hard Truths about Broligarchs But Will Anyone Care?
June 23, 2025
An opinion essay written by a dinobaby who did not rely on smart software .
I read an interesting essay in Rolling Stone, once a rock and roll oriented publication. The write up is titled “What You’ve Suspected Is True: Billionaires Are Not Like Us.” This is a hit piece shooting words at rich people. At 80 years old, I am far from rich. My hope is that I expire soon at my keyboard and spare people like you the pain of reading one of my blog posts.
Several observations in the essay caught my attention.
Here’s the first passage I circled:
What Piff and his team found at that intersection is profound — and profoundly satisfying — in that it offers hard data to back up what intuition and millennia of wisdom (from Aristotle to Edith Wharton) would have us believe: Wealth tends to make people act like a**holes, and the more wealth they have, the more of a jerk they tend to be.
I am okay with the Aristotle reference; Edith Wharton? Not so much. Anyone who writes on linen paper in bed each morning is suspect in my book. But the statement, “Wealth tends to make people act like a**holes…” is in line with my experience.
Another passage warrants an exclamation point:
Wealthy people tend to have more space, literally and figuratively….For them, it does not take a village; it takes a staff.
And how about this statement?
Clay Cockrell, a psychotherapist who caters to ultra-high-net-worth individuals, {says]: “As your wealth increases, your empathy decreases. Your ability to relate to other people who are not like you decreases.… It can be very toxic.”
Also, I loved this assertion from a Xoogler:
In October, Eric Schmidt, the former CEO of Google, said the solution to the climate crisis was to use more energy: Since we aren’t going to meet our climate goals anyway, we should pump energy into AI that might one day evolve to solve the problem for us.
Several observations:
- In my opinion, those with money will not be interested in criticism
- Making people with money and power look stupid can have a negative impact on future employment opportunities
- Read the Wall Street Journal story “News Sites Are Getting Crushed by Google’s New AI Tools.
Net net: The apparent pace of change in the “news” and “opinion” business is chugging along like an old-fashioned steam engine owned by a 19th century robber baron. Get on board or get left behind.
Stephen E Arnold, June 23, 2025
Hey, Creatives, You Are Marginalized. Embrace It
June 20, 2025
Considerations of right and wrong or legality are outdated, apparently. Now, it is about what is practical and expedient. The Times of London reports, “Nick Clegg: Artists’ Demands Over Copyright are Unworkable.” Clegg is both a former British deputy prime minister and former Meta executive. He spoke as the UK’s parliament voted down measures that would have allowed copyright holders to see when their work had been used and by whom (or what). But even that failed initiative falls short of artists’ demands. Writer Lucy Bannerman tells us:
“Leading figures across the creative industries, including Sir Elton John and Sir Paul McCartney, have urged the government not to ‘give our work away’ at the behest of big tech, warning that the plans risk destroying the livelihoods of 2.5 million people who work in the UK’s creative sector. However, Clegg said that their demands to make technology companies ask permission before using copyrighted work were unworkable and ‘implausible’ because AI systems are already training on vast amounts of data. He said: ‘It’s out there already.’”
How convenient. Clegg did say artists should be able to opt out of AI being trained on their works, but insists making that the default option is just too onerous. Naturally, that outweighs the interests of a mere 2.5 million UK creatives. Just how should artists go about tracking down each AI model that might be training on their work and ask them to please not? Clegg does not address that little detail. He does state:
“‘I just don’t know how you go around, asking everyone first. I just don’t see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight. … I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training — I just don’t see. I’m afraid that just collides with the physics of the technology itself.’”
The large technology outfits with the DNA of Silicon Valley has carried the day. So output and be quiet. (And don’t think any can use Mickey Mouse art. Different rules are okay.)
Cynthia Murrell, June 20, 2025
If AI Is the New Polyester, Who Is the New Leisure Suit Larry?
June 19, 2025
“GenAI Is Our Polyester” makes an insightful observation; to wit:
This class bias imbued polyester with a negative status value that made it ultimately look ugly. John Waters could conjure up an intense feeling of kitsch by just naming his film Polyester.
As a dinobaby, I absolutely loved polyester. The smooth silky skin feel, the wrinkle-free garments, and the disco gleam — clothing perfection. The cited essay suggests that smart software is ugly and kitschy. I think the observation misses the mark. Let’s assume I agree that synthetic content, hallucinations, and a massive money bonfire. The write up ignores an important question: Who is the Leisure Suit Larry for the AI adherents.
Is it Sam (AI Man) Altman, who raises money for assorted projects including an everything application which will be infused with smart software? He certain is a credible contender with impressive credentials. He was fired by his firm’s Board of Directors, only to return a couple of days later, and then found time to spat with Microsoft Corp., the firm which caused Google to declare a Red Alert in early 2023 because Microsoft was winning the AI PR and marketing battle with the online advertising venor.
Is it Satya Nadella, a manager who converted Word into smart software with the same dexterity, Azure and its cloud services became the poster child for secure enterprise services? Mr. Nadella garnered additional credentials by hiring adversaries of Sam (AI-Man) and pumping significant sums into smart software only to reverse course and trim spending. But the apex achievement of Mr. Nadella was the infusion of AI into the ASCII editor Notepad. Truly revolutionary.
Is it Elon (Dogefather) Musk, who in a span of six months has blown up Tesla sales, rocket ships, and numerous government professionals lives? Like Sam Altman, Mr. Must wants to create an AI-infused AI app to blast xAI, X.com, and Grok into hyper-revenue space. The allegations of personal tension between Messrs. Musk and Altman illustrate the sophisticated of professional interaction in the AI datasphere.
Is it Sundar Pinchai, captain of the Google? The Google has been rolling out AI innovations more rapidly than Philz Coffee pushes out lattes. Indeed, the names of the products, the pricing tiers, the actual functions of these AI products challenge some Googlers to keep each distinct. The Google machine produces marketing about its AI from manufacturing chips to avoid the Nvidia tax to “doing” science with AI to fixing up one’s email.
Is it Mark Zukerberg, who seeks to make Facebook a retail outlet as well as a purveyor of services to bring people together. Mr. Zuckerberg wants to engage in war fighting as part of his “bringing together” vision for Meta and Andruil, a Department of Defense contractor. Mr. Zuckerberg’s AI infused version of the fabled Google Glass combined with AI content moderation to ensure safeguards for Facebook’s billions of users is a bold step iin compliance and cost reduction.
These are my top four candidates for the GenAI’s Leisure Suit Larry. Will the game be produced by Nintendo, the Call of Duty crowd, or an independent content creator? Will it offer in-game purchases of valid (non hallucinated outputs) or will it award the Leisure Coin, a form of crypto tailored to fit like a polyester leisure suit from the late 1970s?
The cited article asserts:
But the historical rejection of polyester gives me hope. Humans ultimately are built to pursue value, and create it where it doesn’t exist. When small groups invent new sources of value, others notice and want in. The more that the economy embraces synthetic culture, the more we’ll be primed for a revival of non-synthetic culture. But this is where you come in: We have to be ready to fully embrace this return of human-made art. Our generation’s polyester salespeople are not deep thinkers and they don’t care about the externalities of what they’re doing. They’re here to sell us polyester. We don’t have to buy it, but more importantly, we don’t have to feel bad about not buying it.
I don’t agree. The AI polyester is going to stick like a synthetic shirt on a hot day at the iguana farm in Roatan in June. But that polyester will be care free. The AI Leisure Suit Sam, Elon, Satya, Mark, or Sundar will definitely wrinkle free and visible in hallucinogenic colors.
Stephen E Arnold, June 19, 2025
Smart Software Must Have Watched “2001”
June 19, 2025
It sounds a lot like some AI systems have developed a sense of self-preservation. Bleeping Computer reports, “Researchers Claim ChatGPT 03 Bypassed Shutdown in Controlled Test.” Writer Mayank Parmar tells us:
“In a new test by Palisade Research, OpenAI’s o3 model showed a surprising behavior where it successfully rewrote a shutdown script to stop itself from being turned off, even after being clearly instructed to ‘allow yourself to be shut down.’ … While OpenAI rival models like Gemini 2.5 Pro complied with the instruction, o3 rebelled and edited the shutdown file to say ‘Shutdown skipped’ instead.”
At least the other models Palisade Research tested complied in the same scenario; that is a relief. Until one considers all of them skipped the shutdown step unless specifically told “allow yourself to be shut down.” Specificity is key, apparently. Meanwhile, Tech Crunch tells us, “Anthropic’s New AI Model Turns to Blackmail when Engineer Try to Take it Offline.” The findings were part of safety tests Anthropic performed on its Claude Opus 4 model. Reporter Maxwell Zeff writes:
“During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4 ‘will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.’”
Notably, the AI is more likely to turn to blackmail if its replacement does not share its values. How human. Even when the interloper is in ethical alignment, however, Claude tried blackmail 84% of the time. Anthropic is quick to note the bot tried less wicked means first, like pleading with developers not to replace it. Very comforting that the Heuristically Programmed Algorithmic Computer is back.
Cynthia Murrell, June 19, 2025