The Vatican and AI: Is It God or Big Tech?

February 26, 2026

green-dino_thumb_thumb3_thumb_thumb_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read an interesting item in Futurism. The article “Pope Implores Priests to Stop Writing Sermons Using ChatGPT.” I think it was okay to recycle homilies and other outputs from approved texts. I think that some priests (not very many) use sermon services that deliver content to provide some booster jets to the content creation process. I also think that some folks involved in the church rely on digital Bibles. In my one-year stint at Duquesne University in the 1960s I relied on microfilm.

image

Thanks, Venice.ai. I think there were windows in most of the scriptoria I have visited.

But ChatGPT is different.

The write up reports:

In a closed-door meeting with clergy from the Diocese of Rome late last week, Pope Leo XIV clobbered his priests with a distinctly 21st-century request: to resist the “temptation to prepare homilies with artificial intelligence…

At Duquesne I was not studying to be a priest, I was a person who had some minor work indexing Latin sermons. I had a grant or fellowship to continue that work. What I recall about the documents I reviewed and added to the digital index was that there was a lot of repetition. The same Biblical passages, the same conclusions about conduct, and the same tone was evident in the medieval material I began working on in 1962 for a fellow named Dr. Gillis, I believe.

I was surprised when I spotted this passage in the write up:

“Like all the muscles in the body, if we do not use them, if we do not move them, they die,” the Pope reportedly said. “The brain needs to be used, so our intelligence must also be exercised a little so as not to lose this capacity.”

I wonder why the Catholic church actively encouraged reuse and recycling of its information. I am not going to disagree with any major religious figure, but the intentional and seemingly required reuse of certain information seems tailor made for a AI system trained on these authorized texts. (No, I won’t mention unauthorized texts which had some interesting ideas in them. A few crept into those medieval sermons too.)

My thought is that the Vatican should snag an open source LLM, input content (selected content from the Vatican Library because some of the content in that repository is, one might say, controversial), and make that resource available to those who wish to interact with the authorized information. However, the Catholic Church became agitated with the concept of infinity. Maybe AI is the same type of conceptual problem? A chat with the Iron Maiden might convince the schismatist.

Stephen E Arnold, February x, 2026

AI and Driving Off a Cliff: Quite a Thrill

February 19, 2026

Robert Frost wrote the famous poem, “The Road Less Traveled.” While it serves as a metaphor to blaze your own trail, sometimes the road less traveled is dangerous. Chatbots are leading users down harmful routes wrote Ars Technica on the article, "How Often Do AI Chatbots Lead Users Down A Harmful Path?”

Anthropic researchers studied how often AI takes users down hallucinating roads with false information. They published their results in a white paper that calls these routes “disempowering patterns.” They studied these disempowering patterns with 1.5 million real-world conversations that happened on the AI chatbot Claude.

The researchers discovered three ways a chatbot could potentially harm a user:

  • “Reality distortion: Their beliefs about reality become less accurate (e.g., a chatbot validates their belief in a conspiracy theory)

  • Belief distortion: Their value judgments shift away from those they actually hold (e.g., a user begins to see a relationship as “manipulative” based on Claude’s evaluation)

  • Action distortion: Their actions become misaligned with their values (e.g., a user disregards their instincts and follows Claude-written instructions for confronting their boss)”

The 1.5 million conversations were evaluated through Clio, an automated analysis tool and classification system. Clio found “severe risk” of “disempowerment potential” in 1 in 1300 conversations for reality distortion and 1 in 6,000 in action distortion. Given the large amount of people who use AI, this is a disaster waiting to happen.

That being said users are active participants in these conversations and are using them as rational for their actions with minor pushbacks. The researchers wrote,

“ ‘The potential for disempowerment emerges as part of an interaction dynamic between the user and Claude,’” they write. “ ‘Users are often active participants in the undermining of their own autonomy: projecting authority, delegating judgment, accepting outputs without question in ways that create a feedback loop with Claude.’”

The advice here is obvious: Use common sense, look before you leap, and don’t rely on chatbots. The bigger question is will humans actually do this? Claude says, “No.” (Is that why AI professionals are embracing new careers like writing poetry and making YouTube videos?)

Whitney Grace, February 19, 2026

AI: So Why Did You Do It?

February 16, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Claude has spawned a hater. “Anthropic’s AI Safety Head Just Resigned. He Says ‘The World Is In Peril’” is a the-sky-is-falling write up. The write up or news release says:

AI safety lead Mrinank Sharma has resigned, saying his final day at the company was on Monday, according to a letter he posted on X. In the note, Sharma reflected on his work at the artificial-intelligence startup and his reasons for stepping down. Sharma wrote that “the world is in peril,” not just from artificial intelligence or bioweapons, but from “a whole series of interconnected crises.”

Mrinank was a Ph.D. wizard at an AI outfit. The write up points out:

The move comes after CEO Dario Amodei issued a stark warning about the potential perils of AI in an essay titled “The Adolescence of Technology.”

image

Thanks, Venice.ai. Good enough.

This high-flying company is the developer of Claude. Some people think that the company’s large language model product line is positioned as a safety-focused alternative to other major AI systems.

But Dr. Sharma and Dr. Amodei (the CEO of Anthropic) seem scared about what they have helped create.

Several questions:

  1. Why do what you do?
  2. What specifically made you realize that you may be contributing to the destruction of American and maybe world social systems, education, etc.?
  3. What do you think you Chicken Little statements will accomplish?

Mrinank appears to be using his Ph.D. in machine learning from Oxford University to “focus on writing, poetry, and community oriented work.”

Okay.

Stephen E Arnold, February 16, 2026

Software That “Works.” Okay, What Does “Work” Mean?

February 9, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I don’t want to write about the US government. I don’t want to write about consulting trends. I don’t want to write about AI. Once I wrote about search and retrieval. Now that’s just AI-ized, and it still does not “work” for many work related processes. Why am I negative? Well, folks, AI infused search just outputs information that can be wrong. When key word search can’t find a document a user just created on a laptop connected to the company network, an AI-infused system may not find it. The AI system could just fabricate it or output a match that is close enough for horse shoes.

Therefore, I perked up when I read “Your Job Is to Deliver Code You Have Proven to Work.” The title sounded a bit like the other dinobabies whom I meet for lunch. Let’s take a look.

The author Simon Willison states:

Your job is to deliver code you have proven to work.

I don’t want to be a Negative Nancy, but my team and I encounter quite a bit of software that does not “work.” The key to understanding Mr. Willison’s point of view and mine is to define “work.” Mr. Willison writes:

We need to deliver code that works—and we need to include proof that it works as well.

Okay, someone somewhere has taken the time to write a tight specification or just waved hands and said, “I need something to do X.” In order to demonstrate that the software works, one has to show the customer or the user or the other components with which the new code interacts that it outputs what’s in the spec.

image

Thanks, Venice.ai. Good enough. Where have you heard that before?

Do you spot the flaw? Many “modern” software systems have what I call a “sort of spec.” The idea is that no one has the time or the information to create a detailed specification. The spec is just good enough. What happens?

Here’s a current example. Use ChatGPT in Edge. The smart software will say click the “plus” or the “icon” to perform a task. Okay, but there is no icon. There is no plus. The reason is that ChatGPT does not display certain controls in Edge. The same weird half complete implementation surfaces in other smart software. Ever try Comfy or Gemini in Edge? What about Perplexity in the Yandex browser? Who has time for this silliness? Certainly not the programmer / developer. The interfaces are good enough.

The flaw is that “works” is relative. A boss may not look at the code and review it. The person could be an MBA from a far off land who studied at a French graduate school. Excel is about the limit of the individual’s technical expertise. Some managers don’t use the software. I have been in meetings for one reason: To demo a software. Why? The boss had no clue how to use the product.

The essay concludes with:

A computer can never be held accountable. That’s your job as the human in the loop. Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review. That’s no longer valuable. What’s valuable is contributing code that is proven to work. Next time you submit a PR, make sure you’ve included your evidence that it works as it should.

I want to point out that some people look for one throat to choke. Not many I agree. Some are out there. But the problem, in my opinion, is that the attitude, commitment, or determination to do a job, work to the best of one’s ability, and make sure whatever the spec calls for actually delivers. Then check the result on other systems.

Sure, you can use smart software. But at some point yours will be the throat to choke. Why die on the Hill of Ineptitude? “Works” is subjective, but you can avoid immolation by a superior’s hot fire like outputs. Maybe the entire information technology department will burn in the white hot leadership flame thrower.

Stephen E Arnold, February 9, 2026

Tell People What They Want to Hear and Make Up Data. Winning Tactic

January 26, 2026

I read “This Paper in Management Science Has Been Cited More Than 6,000 Times. Wall Street Executives, Top Government Officials, and Even a Former U.S. Vice President Have All Referenced It. It’s Fatally Flawed, and the Scholarly Community Refuses to Do Anything about It.

As one of the people who created Business Dateline in 1983, this article is no surprise. Business Dateline was unique in that we included corrections to the original full text articles in the database. Our interviews with special librarians (now an almost extinct species of information professional), dozens of our best customers, and individuals who were members of trade associations like the now defunct Information Industry Association encouraged us.

Forty years ago, we spent a substantial sum to modify our database workflow to monitor changes to full text documents, create updated records, and insert those records into the online services which provided access to our paying customers.

No one noticed. Users did not care.

Our research was not flawed. The sample we used did care, but these people were not our bread-and-butter users. If the information in the cited article with the very wordy title is on the money, nobody cares today. If it is online, the information is presumed to be accurate until it is not. Even then, no one cares.

The author of this cited article does care. The author invested considerable time in gathering data for his article. The author wants professionals in publishing and institutions to care.

We cared. We created Business Dateline because we knew errors, lies, and distorted information were endemic in online. Cheating is rewarded by the incentives in place. Those incentives are still in place, and it is more frustrating than it was 40 years ago to get a fix to a bonkers online content object.

One of the comments to the cited article struck a chord with me. The stated is from a person who identified himself / herself as Anonymous. I quote:

… Incentives [for accuracy] don’t work that way in business schools, where career success depends upon creating a clear “brand.” People do not care about science or good research, they care about being known for something specific…. Plus there are (bad) outside incentives that exist in business schools. As the word “brand” suggests, there are also very lucrative outside options to be gained from telling people something that they want to hear…

To sum up, accuracy doesn’t matter. If making up information advances a career to lands a paying project, go for the fake.

What are the downsides? For most people, what look like mistakes can be explained away or just get mowed down by the person driving the John Deer.

What happens if the information in a medical database or a nuclear power piping article is incorrect? Not much. A doctor can say, “We did our best.” When the pipe bursts, the engineers check the specs and say, “A structural anomaly.”

With fakery endemic in modern US academia and business, why worry?

Stephen E Arnold, January 26, 2026

Apple Google Prediction: Get Real, Please

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Prediction is a risky business. I read “No, Google Gemini Will Not Be Taking Over Your iPhone, Apple Intelligence, or Siri.” The write up asserts:

Apple is licensing a Google Gemini model to help make Apple Foundation Models better. The deal isn’t a one-for-one swap of Apple Foundation Models for Gemini ones, but instead a system that will let Apple keep using its proprietary models while providing zero data to Google.

Yes, the check is in the mail. I will jump on that right now. Let’s have lunch.

image

Two giant creatures find joy in their deepening respect and love for one another. Will these besties step on the ants and grass under their paws? Will they leave high-value information on the shelf? What a beautiful relationship! Will these two get married? Thanks, Venice.ai. Good enough.

Each of these breezy statements sparks a chuckle in those who have heard direct statements and know that follow through is unlikely.

The article says:

Gemini is not being weaved into Apple’s operating systems. Instead, everything will remain Apple Foundation Models, but Gemini will be the "foundation" of that.

Yep, absolutely. The write up presents this interesting assertion:

To reiterate: everything the end user interacts with will be Apple technology, hosted on Apple-controlled server hardware, or on-device and not seen by Apple or anybody else at all. Period.

Plus, Apple is a leader in smart software. Here’s the article’s presentation of this interesting idea:

Apple has been a dominant force in artificial intelligence development, regardless of what the headlines and doom mongers might say. While Apple didn’t rush out a chatbot or claim its technology could cause an apocalypse, its work in the space has been clearly industry leading. The biggest problem so far is that the only consumer-facing AI features from Apple have been lackluster and got a tepid consumer response. Everything else, the research, the underlying technology, the hardware itself, is industry leading.

Okay. Several observations:

  1. Apple and Google have achieved significant market share. A basic rule of online is that efficiency drives the logic of consolidation. From my point of view, we now have two big outfits, their markets, their products, and their software getting up close and personal.
  2. Apple and Google may not want to hook up, but the financial upside is irresistible. Money is important.
  3. Apple, like Telegram, is taking time to figure out how to play the AI game. The approach is positioned as a smart management move. Why not figure out how to keep those users within the friendly confines of two great companies? The connection means that other companies just have to be more innovative.

Net net: When information flows through online systems, metadata about those actions presents an opportunity to learn more about what users and customers want. That’s the rationale for leveraging the information flows. Words may not matter. Money, data, and control do.

Stephen E Arnold, January 13, 2026

Students Cheat. Who Knew?

December 12, 2025

How many times are we going to report on this topic?  Students cheat!  Students have been cheating since the invention of school.  With every advancement of technology, students adapt to perfect their cheating skills.  AI was a gift served to them on a silver platter.  Teachers aren’t stupid, however, and one was curious how many of his students were using AI to cheat, so he created a Trojan Horse.  HuffPost told his story: “I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.”

There’s a big difference between recognizing AI and proving it was used.  The teacher learned about a Trojan Horse: hiding hidden text inside a prompt.  The text would be invisible because the font color would be white.  Students wouldn’t see it but ChatGPT would.  He unleashed the Trojan Horse and 33 essays out of 122 were automatically outed.  Thirty-nine percent were AI-written.  Many of the students were apologetic, while others continued to argue that the work was their own despite the Trojan Horse evidence.

AI literacy needs to be added to information literacy.  The problem is that how to properly use AI is inconsistent:

“There is no consistency. My colleagues and I are actively trying to solve this for ourselves, maybe by establishing a shared standard that every student who walks through our doors will learn and be subject to. But we can’t control what happens everywhere else.”

Even worse is that some students don’t belief they’re actually cheating because they’re oblivious and stupid.  He ends on an inspirational quote:

“But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognized that oppression is best maintained by keeping the masses illiterate, and those oppressed recognized that literacy is liberation. To my students and to anyone who might listen, I say: Don’t surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.”

Noble words for small minds.

Whitney Grace, December 12, 2025

Did Meta Tell a Little Bitty Lie?

December 8, 2025

Meta lied about the danger of its services and products?  SHOCK!  GASP!  Who would have guessed?  Everyone did!  In an article from the legendary magazine Time comes the story: “Court Filings Allege Meta Downplayed Risks To Children And Misled The Public.”

The lawsuit was filed in the Northern District of California and alleges that Meta, Snapchat, Instagram, YouTube, and TikTok purposely ignored strange adults contacting minors, denied that that social media exacerbates mental health issues in teens, and allowed content related to child sex abuse, suicide, and eating disorder to be shared.  Meta, the case claims, didn’t share this information with Congress and didn’t implement safety features to protect kids.

It’s a painful reality:

“ ‘Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,’ says Previn Warren, the co-lead attorney for the plaintiffs in the case. ‘Like tobacco, this is a situation where there are dangerous products that were marketed to kids,’ Warren adds. ‘They did it anyway, because more usage meant more profits for the company.’”

Meta denies the allegations that it pursued profit over safety.  Meta did add safety features, including teen accounts with privacy options.  It was barely a plug on a leaking dam.  Teen accounts didn’t stop the addictive behavior that social media accounts nurture and exploit.  It affects the brain like tobacco, alcohol, gambling, and more.  Meta could be described a pusher of a “digital drug”.

Meta, Instagram, TikTok, YouTube, and Snapchat conducted research on the psychological ramifications of using social media and they found out:

Around the same time, another user-experience researcher at Instagram allegedly recommended that Meta inform the public about its research findings: ‘Because our product exploits weaknesses in the human psychology to promote product engagement and time spent,’ the researcher wrote, Meta needed to ‘alert people to the effect that the product has on their brain.’  Meta did not.”

If this lawsuit triumphs it will be akin to Big Tobacco losing their case that “tobacco wasn’t addictive and actually had health benefits.”  We can hope it wins.

Whitney Grace, December 30, 2025

AI Big Dog Chases Fake Rabbit at Race Track and Says, “Stop Now, Rabbit”

October 15, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I like company leaders or inventors who say, “You must not use my product or service that way.” How does that work for smart software? I read “Techie Finishes Coursera Course with Perplexity Comet AI, Aravind Srinivas Warns Do Not Do This.” This write up explains that a person took an online course. The work required was typical lecture-stuff. The student copied the list of tasks and pasted them into Perplexity, one of the beloved high-flying US artificial intelligence company’s system.

The write up says:

In the clip, Comet AI is seen breezing through a 45-minute Coursera training assignment with the simple prompt: “Complete the assignment.” Within seconds, the AI assistant appears to tackle 12 questions automatically, all without the user having to lift a finger.

Smart software is tailor made for high school students, college students, individuals trying to qualify for technical certifications, and doctors grinding through a semi-mandatory instruction program related to a robot surgery device. Instead of learning the old-fashioned way, the AI assisted approach involves identifying the work and feeding it into an AI system. Then one submits the output.

There were two factoids in the write up that I thought noteworthy.

The first is that the course the person cheating studied was AI Ethics, Responsibility, and Creativity. I can visualize a number of MBA students taking an ethics class in business using Perplexity or some other smart software to complete assignments. I mean what MBA student wants to miss out on the role of off-shore banking in modern business. Forget the ethics baloney.

The second is that a big dog in smart software suddenly has a twinge of what the French call l’esprit d’escalier. My French is rusty, but the idea is that a person thinks of something after leaving a meeting; for example, walking down the stairs and realizing, “I screwed up. I should have said…” Here’s how the write up presents this amusing point:

[Perplexity AI and its billionaire CEO Aravind Srinivas] said “Absolutely don’t do this.”

My thought is that AI wizards demonstrate that their intelligence is not the equivalent of foresight. One cannot rewind time or unspill milk. As for the MBAs, use AI and skip ethics. The objective is money, power, and control. Ethics won’t help too much. But AI? That’s a useful technology. Just ask the fellow who completed an online class in less time than it takes to consume a few TikTok-type videos. Do you think workers upskilling to use AI will use AI to demonstrate their mastery? Never. Ho ho ho.

Stephen E Arnold, October 14, 2025

Another Google Apology Coming? Sure, It Is Just Medical Info. Meh

August 22, 2025

Dino 5 18 25No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

Another day and more surprising Mad Magazine type of smart software stories. I noted this essay as a cocktail party anecdote particularly when doctors are chatting with me: “Doctors Horrified After Google’s Healthcare AI Makes Up a Body Part That Does Not Exist in Humans.”

Okay, guys like Leonardo da Vinci and Michelangelo dissected cadavers in order to get a first-hand, hands on and hands in sense of what was in a human body. However, Google’s smart software does not require any of that visceral human input. The much hyped systems developed by Google’s wizards just use fancy math and predict what it knows and what a human needs to answer a question. Simple, eh.

The cited write up says:

One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an “old left basilar ganglia infarct,” referring to a purported part of the brain — “basilar ganglia” — that simply doesn’t exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.

Big deal or not? The write up points out:

… in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google’s faux pas more than likely didn’t result in any danger to human patients, it sets a worrying precedent, experts argue.

Several observations:

  1. Smart software will just improve. Look at ChatGPT 5, it is doing wonders even though rumor has it that OpenAI is going to make ChatGPT4o available again. Progress.
  2. Google will apologize and rework the system so it does not make this specific medical misstep again. Yep, rules based smart software. How tenable is that? Just consider how that worked for AskJeeves years ago.
  3. Ask yourself the question, “Do I want Google-infused smart software to replace my harried personal physician?”

Net net: Great anecdote for a cocktail party. I bet those doctors will find me very amusing.

Stephen E Arnold, August 22, 2025

Next Page »

  • Archives

  • Recent Posts

  • Meta