Do You Trust Bots? Should You Trust Bots?

January 21, 2026

These are practical and philosophical questions. I know that I do not trust the outputs from AI chatbots. I have three quite specific reasons:

  1. The systems output content that often contains errors, biases, or hallucinations or just crazy word salad.
  2. The prompts themselves feed back into some AI systems. The likelihood of the prompts introducing more errors into the AI system is high. This means that outputs will degrade over time.
  3. The prompts provide a “digital fingerprint” of a user who may have provided personal information to access the chatbot. These data can and almost certainly will be used for marketing and business intelligence purposes.

Let’s assume that chatbots are fun algorithmic software.  Some people might find it entertaining to have a conversation with chatbots or to ask them to make a funny image. Chatbots’ developers, however, don’t trust the synthetic ghosts in the machines. BGR reports that, “Many AI Experts Don’t Trust AI Chatbots – Here’s Why.”

OpenAI’s ChatGPT is the world’s first AI chatbot that has above average intelligence and actually returned useful results. Other companies made their own chatbots, but ChatGPT remains at the top of the heap with 2.5 billion requests each day. The majority of these requests seek information, tips for writing, and want practical guidance.

The general public is becoming more reliant on ChatGPT, but its creators and other AI developers don’t trust chatbots. The Guardian spoke to AI developers and they expressed how technology companies prioritize fast turnaround for AI raters, don’t give them enough training nor the resources to make the best results.

It’s bad:

“One worker also revealed how some colleagues tasked with rating sensitive medical content were in possession of only basic knowledge about the topic. Criticism is not limited to the rating side. One Google AI rater revealed to The Guardian how he became skeptical of the broader technology, and even advises friends and family to avoid chatbots after seeing just how bad the data used to train models really is.”

I am no expert and I don’t trust these digital marvels. But the so called experts don’t trust the chatbots for the same reason I don’t. Yet the chatbot marketing hyperbole does not stop. I think more baloney is output and some of it is spicy. For example, Microsoft has become the butt of its own AI bludgeoning. The world associates the Microsoft Copilot with the moniker “Microslop.” Outfits like Google and Meta are facing civic group protests because of the expensive and somewhat desperate need to just make AI bigger and better. The only problem is that this push may leave these firms vulnerable to methods that are less costly and more innovative. How does one repurpose a building the size of a couple of soccer fields?

You feel free to trust bots. I don’t. Come to think of it. I don’t trust the companies pushing chatbots as the best thing since fire, the wheel, and Howard Hughes’ Spruce Goose. Yep, it still exists. It sits there. A monument to business confidence that was misdirected. Spruce Goose Data Center. I like the sound of that.

Whitney Grace, January 21, 2026

Telegram Glitch Makes Some Russians Unhappy

January 9, 2026

According to the Russian PC News, Telegram experienced an unexpected hiccup: “A Glitch In Telegram Has Been Recorded In Russia: Users Are Complaining About Access Problems.” The problem occurred on the day after Christmas also celebrated as Boxing Day. The Down Detector service, a website that monitors the status of popular services, reported the outage.

Here’s what exactly happened:

“As of 13:25 Moscow time, 387 people reported problems, and the total number of requests over the past 24 hours reached 846. The largest number of complaints came from Moscow, the Oryol region and St. Petersburg — each of these regions accounted for 4% of complaints. The failure also affected users in the Belgorod and Samara regions, where about 2% of complaints were received. Most often, users reported problems with the Telegram mobile application — 38% of requests indicated it. Another 33% of complaints concerned the unavailability of the web version of the service, 20% — incorrect operation of notifications.”

The percentages don’t lie. Something happened with Telegram around large Russians cities. Why did it happen? Was the Kremlin testing something?  Did the Kremlin want Telegram out of service so no one could report on nefarious activities? Maybe the Kremlin was testing ways to disrupt Telegram? Or maybe it was just a hiccup in service.

Telegram is a point of interest and has been for more than a decade.

Whitney Grace, January 9, 2025

Students Cheat. Who Knew?

December 12, 2025

How many times are we going to report on this topic?  Students cheat!  Students have been cheating since the invention of school.  With every advancement of technology, students adapt to perfect their cheating skills.  AI was a gift served to them on a silver platter.  Teachers aren’t stupid, however, and one was curious how many of his students were using AI to cheat, so he created a Trojan Horse.  HuffPost told his story: “I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.”

There’s a big difference between recognizing AI and proving it was used.  The teacher learned about a Trojan Horse: hiding hidden text inside a prompt.  The text would be invisible because the font color would be white.  Students wouldn’t see it but ChatGPT would.  He unleashed the Trojan Horse and 33 essays out of 122 were automatically outed.  Thirty-nine percent were AI-written.  Many of the students were apologetic, while others continued to argue that the work was their own despite the Trojan Horse evidence.

AI literacy needs to be added to information literacy.  The problem is that how to properly use AI is inconsistent:

“There is no consistency. My colleagues and I are actively trying to solve this for ourselves, maybe by establishing a shared standard that every student who walks through our doors will learn and be subject to. But we can’t control what happens everywhere else.”

Even worse is that some students don’t belief they’re actually cheating because they’re oblivious and stupid.  He ends on an inspirational quote:

“But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognized that oppression is best maintained by keeping the masses illiterate, and those oppressed recognized that literacy is liberation. To my students and to anyone who might listen, I say: Don’t surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.”

Noble words for small minds.

Whitney Grace, December 12, 2025

Apple AI Innovates: It Cuts an AI Deal with the Google

November 13, 2025

Circular deals are not new. Google was on Apple’s Board of Directors. Google cut a deal with Apple to make Google Search the one true way to get ads on the Apple iPhone. Now Apple — after failing in its own smart software efforts — has driven up Highway 101 with a freshly baked humble Apple pie.

Apple, one might conclude, failed at AI, and it is Google to the rescue. We learn from Wccf Tech, “Apple Throws In the Towel, Asks Google to Design a Custom Gemini LLM for Siri.” But what about those AI innovations Apple announced a year or so ago? Oh, that was marketing. Writer Rohail Saleem reveals the reality:

“The legendary Apple tipster, Bloomberg’s Mark Gurman, reported in his latest ‘Power On’ newsletter that the Cupertino giant seems to have thrown in the proverbial towel when it comes to creating an in-house AI model to power the revamped Siri’s upcoming features, all couched under the Apple Intelligence banner. Instead, Apple is now reportedly paying Google to create a custom Gemini-based AI model for its Private Cloud Compute framework. Where relatively simple AI tasks can be performed by using computational resources of the device itself, while the more complex tasks are offloaded to Apple’s private cloud servers.”

Saleem reminds us Apple was having trouble getting Siri to play nice across apps back in August. Hiring the competition is one way to address shortfalls. Of course, Apple is sure to market the updated Siri as its own work. The virtual assistant runs on that company’s servers, after all, and uses its iconic UI. Those are the important parts, right?

Apple is expected to debut the new AI features and other major updates to iOS 27 at its Worldwide Developers Conference in June of 2026. See the write-up for a list of iOS 26 AI features and expected Siri tweaks. With a company whose innovations boil down to more invasive demands for iCloud, Facetime, and other Apple services what does one expect? Another orange iPhone? Some fancy dancing about the firm’s involvement with Chinese manufacturers? How about a couple of elephants deciding to make their way into the groves of Google and whisper into one another’s very large but really appealing ears?

That works.

Cynthia Murrell, November 13, 2025

Big Surprise: More Cyber Investigators Are Needed

November 12, 2025

Europol recently held its 9th Global Conference on Criminal Finances and Cryptoassets. The upshot? “Global Law Enforcement Plays Catch-Up with Crypto Criminals as Gaps Remain,” reports Cybernews. The conference participants pinpointed three specific ways the good guys are at a disadvantage:

“Three key gaps were identified in legislation, implementation, and law enforcement’s capacity to fight crime. Therefore, participants agreed that new common standards need to be developed, cooperation deepened, and investments in law enforcement capacity increased.”

How insightful. Contributor Linus Kmieliauskas continues:

“‘The borderless nature of blockchains means criminal proceeds can cross the globe in seconds, while formal cooperation between authorities can still take days or weeks,’ the European Union Agency for Law Enforcement Cooperation emphasized, adding that many agencies still lack the skills and resources to pursue leads or recover assets. Meanwhile, Ned Conway, Executive Secretary of the Wolfsberg Group, an association of 12 global banks focused on the management of financial crime risks, said that better communication between banks and crypto companies will help disrupt illicit finance as well. … [Back in August,] blockchain sleuth ZachXBT highlighted multiple challenges that made fighting crime more complicated. For example, the majority of law enforcement agencies might not be sufficiently competent to trace basic thefts and seize frozen funds from centralized platforms. Meanwhile, smaller thefts may never be assigned to police officers due to a lack of resources.”

Lovely. Should governments invest in more resources and training for law enforcement? Nah– let‘s just rely on private initiatives instead. The “T3 Financial Crime Unit,” for example, is backed by Tether, TRON, and TRM Labs. Are these entities moved by the purest of motives?

Cynthia Murrell, November 12, 2025

Change the System Like It Is 1964? Great Idea

June 9, 2025

I was watching the Chelsea Benfica match and cruising posts in my newsfeed. What did I spot? A great goal? Nope, an essay titled “Engineered Addictions: How Silicon Valley Is Putting a Price Tag on Your Attention and Relationships.” My reaction? I said to myself, “I am 80. I don’t do Facebook. A bot posts links to my essays to LinkedIn. My little write ups are automatically posted to the estimable WordPress. I am generally happy; I am not bullied; and I am not glued to my mobile phone or phones. I think I have three of four in my office.

The write up expresses concern with the “system” spitting out products and services that many users cannot kick; they are addicted. The essay explains how Silicon Valley entrepreneurs, MBAs, and lawyers operate. Here’s a boiled down very of the recipe for digital heroin:

  • Start with pure intentions
  • Chase growth and money
  • Optimize engagement (the digital equivalent of “Hey, kid, wanna try something heavy?”)
  • Manipulate the algorithm to keep the kiddos hooked or the GenX, GenY, GenZ, GenAI, and addicted
  • Forget the pure intentions and make as much money as possible and pay for a wedding in Venice.

The author highlights some ideas for changing this five-step program to winning. Here is a summary of the suggestions:

  • Different ways to fund a Silicon Valley-type start up
  • Regulate algorithms
  • Don’t allow Google-type charge people coming and going systems
  • Don’t fixate on clicks.

Each of these invites a snappy rejoinder from this dinobaby. If you don’t fund the Silicon Valley way, the output won’t be a Silicon Valley product or service that makes money. If you can get regulators to understand algorithms, go for it. If not, you may as well talk to your dog. Charging coming and going is the way to win. One doesn’t leave money on the table in Silicon Valley or anywhere for that matter. If clicks exist, MBAs will count them and do whatever is necessary to get more clicks, sell advertising, and have a wedding in Venice.

I want to quote one passage from the essay. It illustrates the author’s passion and the impracticality of changing what is the dominant business style of the 21st century in Silicon Valley aka Plastic Fantastic:

But we took a catastrophic wrong turn when we optimized for engagement over connection, for time-on-platform over user wellbeing, for extraction over authentic relationship. Now we’re fighting a battle against the architecture of distraction, against companies that profit from fractured attention and frayed mental health. The fight isn’t against the people using the tools, or even the people building them. It’s against the systems that make addiction profitable and authentic connection impossible. We built these platforms. We can build better ones. But only if we’re willing to abandon the economic models that made the current ones inevitable. Until we change those incentives, every attempt to fix social media will become part of the problem it’s trying to solve.

Several comments appear in the dot points below:

  • The “we” happens to be more than five billion users of online systems and services. The fractured attention and the frayed mental health are what users want. Remember: Addiction.
  • Fighting “the system” is a throwback. The protests in the 1960s did not work. The protests (if they manifest themselves in 2025) won’t work. Big piles of money buy power. The effort to move that money and power is beyond the resources of those who want to change the system. Want proof? Russia influenced Telegram to block five Ukrainian channels. Pavel Durov is for free speech until he isn’t. That’s a system demonstrating that it may not have much power, but Mr. Putin has power.
  • Abandon economic models. That sounds like music to the ears of die hard believers in non-democratic systems. How’s that working out?

I can make one prediction with certainty: Chaos seems to be the main goal of action today. How is that working out? The answer is, in my opinion, why five billion people are glued to their mobile phones and laptops. Reality is unreal. The generated world is better. Chill out. Relax. Google can explain why more data centers for hallucinating AI really helps thwart global warning. As I said, reality is unreal.

Stephen E Arnold, July 8, 2025

An AI Insight: Threats Work to Bring Out the Best from an LLM

June 3, 2025

“Do what I say, or Tony will take you for a ride. Get what I mean, punk?” seems like an old-fashioned approach to elicit cooperation. What happens if you apply this technique, knee-capping, or unplugging smart software?

The answer, according to one of the founders of the Google, is, “Smart software responds — better.”

Does this strike you as counter intuitive? I read “Google’s Co-Founder Says AI Performs Best When You Threaten It.” The article reports that the motive power behind the landmark Google Glass product allegedly said:

“You know, that’s a weird thing…we don’t circulate this much…in the AI community…not just our models, but all models tend to do better if you threaten them…. Like with physical violence. But…people feel weird about that, so we don’t really talk about that.” 

The article continues, explaining that another LLM wanted to turn one of its users into government authorities. The interesting action seems to suggest that smart software is capable of flipping the table on a human user.

Numerous questions arise from these two allegedly accurate anecdotes about smart software. I want to consider just one: How should a human interact with a smart software system?

In my opinion, the optimal approach is with considered caution. Users typically do not know or think about how their prompts are used by the developer / owner of the smart software. Users do not ponder the value of log file of those prompts. Not even bad actors wonder if those data will be used to support their conviction.

I wonder what else Mr. Brin does not talk about. What is the process for law enforcement or an advertiser to obtain prompt data and generate an action like an arrest or a targeted advertisement?

One hopes Mr. Brin will elucidate before someone becomes so wrought with fear that suicide seems like a reasonable and logical path forward. Is there someone whom we could ask about this dark consequence? “Chew” on that, gentle reader, and you too Mr. Brin.

Stephen E Arnold, June 3, 2025

The Future: Humans in Lawn Chairs. Robots Do the Sports Thing

May 8, 2025

Can a fast robot outrun a fast human? Not yet, apparently. MSN’s Interesting Engineering reports, “Humanoid ‘Tiangong Ultra’ Dons Winning Boot in World’s First Human Vs Robot Marathon.” In what appears to be the first event of its kind, a recent 13-mile marathon pitted robots and humans against each other in Beijing. Writer Christopher McFadden reports:

“Around 21 humanoid robots officially competed alongside human marathoners in a 13-mile (21 km) endurance race in Beijing on Saturday, April 19th. According to reports, this is the first time such an event has been held. Competitor robots varied in size, with some as short as 3 feet 9 inches (1.19 m) and others as tall as 5 feet 9 inches (1.8 m). Wheeled robots were officially banned from the race, necessitating that any entrants be able to walk or run similarly to humans.”

The winner was one of the tallest at 5 feet 9 inches and weighed 114 pounds. It took Tiangong Ultra two hours and forty minutes to complete the course. Despite its impressive performance, it lagged considerably behind the first-place human who finished at one hour and two minutes. The robots’ lane of the course was designed to test the machines’ capabilities, mixing inclines and both left and right turns with flat stretches.

See the article for a short video of the race. Most of it features the winner, but there is a brief shot of one smaller, cuter robot. The article continues:

“According to the robot’s creator, Tang Jian, who is also the chief technology officer behind the Beijing Innovation Centre of Human Robotics, the robot’s long legs and onboard software both aided it in its impressive feat. … Jian added that the robot’s battery needed to be changed only three times during the race. As for other robot entrants, many didn’t perform as well. In particular, one robot fell at the starting line and lay on the ground for a few minutes before getting up and joining the race. Yet another crashed into a railing, causing its human operator to fall over.”

Oops. Sadly, those incidents do not appear in the video. The future is clear: Wizards will sit in lawn chairs and watch their robots play sports. I wonder if  my robot will go to the gym and exercise for me?

Cynthia Murrell, May 8, 2025

Does Apple Thinks Google Is Inept?

December 25, 2024

At a pre-holiday get together, I heard Wilson say, “Don’t ever think you’re completely useless. You can always be used as a bad example.”

I read the trust outfit’s write up “Apple Seeks to Defend Google’s Billion Dollar Payments in Search Case.” I found the story cutting two ways.

Apple, a big outfit, believes that it can explain in a compelling way why Google should be paying Apple to make Google search the default search engine on Apple devices. Do you remember the Walt Disney film  The Hunchback of Notre Dame? I love an argument with a twisted back story. Apple seems to be saying to Google: “Stupidity is far more dangerous than evil. Evil takes a break from time to time. Stupidity does not.”

The Thomson Reuters article offers:

Apple has asked to participate in Google’s upcoming U.S. antitrust trial over online search, saying it cannot rely on Google to defend revenue-sharing agreements that send the iPhone maker billions of dollars each year for making Google the default search engine on its Safari browser.

Apple wants that $20 billion a year and certainly seems to be sending a signal that Google will screw up the deal with a Googley argument. At the same holiday party, Wilson’s significant other observed, ““My people skills are just fine. It’s my tolerance to idiots that needs work.” I wonder if that person was talking about Apple?

Apple may be fearful that Google will lurch into Code Yellow, tell the jury that gluing cheese on pizza is logical, and explain that it is not a monopoly. Apple does not want to be in the court cafeteria and hear, “I heard Google ask the waiter, “How do you prepare chicken?” The waiter replied, “Nothing special. The cook just says, “You are going to die.”

The Thomson Reuters’ article offers this:

Apple wants to call witnesses to testify at an April trial. Prosecutors will seek to show Google must take several measures, including selling its Chrome web browser and potentially its Android operating system, to restore competition in online search. “Google can no longer adequately represent Apple’s interests: Google must now defend against a broad effort to break up its business units,” Apple said.

I had a professor from Oklahoma who told our class:

“If Stupidity got us into this mess, then why can’t it get us out?”

Apple and Google arguing in court. Google has a lousy track record in court. Apple is confident it can convince a court that taking Google’s money is okay.

Albert Eistein allegedly observed:

The difference between stupidity and genius is that genius has its limits.

Yep, Apple and Google, quite a pair.

Stephen E Arnold, December 25, 2024

Smart Software: It May Never Forget

November 13, 2024

A recent paper challenges the big dogs of AI, asking, “Does Your LLM Truly Unlearn? An Embarrassingly Simple Approach to Recover Unlearned Knowledge.” The study was performed by a team of researchers from Penn State, Harvard, and Amazon and published on research platform arXiv. True or false, it is a nifty poke in the eye for the likes of OpenAI, Google, Meta, and Microsoft, who may have overlooked  the obvious. The abstract explains:

“Large language models (LLMs) have shown remarkable proficiency in generating text, benefiting from extensive training on vast textual corpora. However, LLMs may also acquire unwanted behaviors from the diverse and sensitive nature of their training data, which can include copyrighted and private content. Machine unlearning has been introduced as a viable solution to remove the influence of such problematic content without the need for costly and time-consuming retraining. This process aims to erase specific knowledge from LLMs while preserving as much model utility as possible.”

But AI firms may be fooling themselves about this method. We learn:

“Despite the effectiveness of current unlearning methods, little attention has been given to whether existing unlearning methods for LLMs truly achieve forgetting or merely hide the knowledge, which current unlearning benchmarks fail to detect. This paper reveals that applying quantization to models that have undergone unlearning can restore the ‘forgotten’ information.”

Oops. The team found as much as 83% of data thought forgotten was still there, lurking in the shadows. The paper offers a explanation for the problem and suggestions to mitigate it. The abstract concludes:

“Altogether, our study underscores a major failure in existing unlearning methods for LLMs, strongly advocating for more comprehensive and robust strategies to ensure authentic unlearning without compromising model utility.”

See the paper for all the technical details. Will the big tech firms take the researchers’ advice and improve their products? Or will they continue letting their investors and marketing departments lead them by the nose?

Cynthia Murrell, November 13, 2024

Next Page »

  • Archives

  • Recent Posts

  • Meta