LLMs, Dread, and Good Enough Software (Fast and Cheap)

June 11, 2025

Just a dinobaby and no AI: How horrible an approach?

More philosopher programmers have grabbed a keyboard and loosed their inner Plato. A good example is the essay “AI: Accelerated Incompetence” by Doug Slater. I have a hypothesis about this embrace of epistemological excitement, but that will appear at the end of this dinobaby post.

The write up posits:

In software engineering, over-reliance on LLMs accelerates incompetence. LLMs can’t replace human critical thinking.

The driver of the essay is that some believe that programmers should use outputs from large language models to generate software. Doug does not focus on Google and Microsoft. Both companies are convinced that smart software can write good enough code. (Good enough is the new standard of excellence at many firms, including the high-flying, thin-air breathing Googlers and Softies.)

The write up identifies three beliefs, memes, or MBAisms about this use of LLMs. These are:

  • LLMs are my friend. Actually, LLMs are part of a push to get more from humanoids involved in things technical. For a believer, time is gained using LLMs. To a person with actual knowledge, LLMs create work because their output must be checked for errors.
  • Humans are unnecessary. This is the goal of the bean counter. The goal of the human is to deliver something that works (mostly). The CFO is supposed to reduce costs and deliver (real or spreadsheet fantasy) profits. Humans, at least for now, are needed when creating software. Programmers know how to do something and usually demonstrate “nuance”; that is, intuitive actions and thoughts.
  • LLMs can do what humans do, especially programmers and probably other technical professionals. As evidence of doing what humans do, consider the anecdote about the robot dog attacking its owner: smart software has some glitches. Hallucinations? Yep, those too.

The wrap up to the essay states:

If you had hoped that AI would launch your engineering career to the next level, be warned that it could do the opposite. LLMs can accelerate incompetence. If you’re a skilled, experienced engineer and you fear that AI will make you unemployable, adopt a more nuanced view. LLMs can’t replace human engineering. The business allure of AI is reduced costs through commoditized engineering, but just like offshore engineering talent brings forth mixed fruit, LLMs fall short and open risks. The AI hype cycle will eventually peak. Companies which overuse AI now will inherit a long tail of costs, and they’ll either pivot or go extinct.

As a philosophical essay crafted by a programmer, I think the write up is very good. If I were teaching again, I would award the essay an A minus. I would suggest adding some concrete examples, like “Google suggests gluing cheese on pizza.”

Now what’s the motivation for the write up? My hypothesis is that some professional developers have a Spidey sense that the diffident financial professional will license smart software and fire the humanoids who write code. Is this a prudent decision? For the bean counter, it is self-preservation. He or she does not want to be sent to find a future elsewhere. For the programmer, the drum beat of efficiency and the fife of cost reduction are now loud enough to leak through noise-reduction headphones. Plato did not have an LLM, and he hallucinated with the chairs and rear view mirror metaphors.

Stephen E Arnold, June 11, 2025

A Decade after WeChat, a Marketer Touts OpenAI as the Everything App

June 10, 2025

Just a dinobaby and no AI: How horrible an approach?

Lester thinks OpenAI will become the Internet. Okay, Lester, are you on the everything app bandwagon? That buggy rolled in China and became one of the little engines that could for social scoring. “How ChatGPT Could Replace the Internet As We Know It” provides quite a bit about Lester. Zipping past the winner prose, I noted this passage:

In fact, according to Khyati Hooda of Keywords Everywhere, ChatGPT handles 54% of queries without using traditional search engines. This alarming stat indicates a shift in how users seek information. As the adoption grows and ChatGPT cements itself as the single source of information, the internet as we know it becomes kinda pointless.

One question: Where does the information originate? From intercepted mobile communications, from nifty listening devices like smart TVs, or from WeChat-style methods? The jump from the Internet to an everything app is a nifty way to state that everything is reducible to bits. Get the bits, get the “information.”

Lester says:

Basically, ChatGPT is cutting out the middleman, but what’s even scarier is that it’s working. ChatGPT reached 1 million users in just 5 days and has 400 million weekly active users as of early 2025, making it the fastest-growing consumer app in history. The platform receives over 5.19 billion visits per month, ranking as the 8th most visited website in the world.

He explains:

What started as a chatbot has become a platform where people book travel, plan meals, write emails, create schedules, and even do homework. Surveys show that around 80% of ChatGPT users leverage it for professional tasks such as drafting emails, creating reports, and generating marketing content. This marks a fundamental shift in how we engage with the internet, where more everyday tasks move from web browsing to a prompt.

How likely is this shift, Lester? Lester responds in a ZDNet-type way:

I wouldn’t be surprised if ChatGPT added a super agent that does tasks autonomously by December of this year. Amazed? Sure. But surprised? Nah. It’s not hard to imagine a near future where ChatGPT doesn’t just replace the internet but OpenAI becomes the foundation for future companies, in the same way that roads became the foundation for civilization.

Lester interprets the shift as mostly good news. Jobs will be created. There are a few minor problems; for instance, retraining and changing business models. Otherwise, Lester does not see too many new problems. In fact, he makes his message clear:

If you stand still, never evolve, never improve your skills, and never push yourself to be better, life will decimate you like a gorilla vs 100 men.

But what if the gorilla is Google? And that Google creature has friends like Microsoft and others. A superhuman like Elon Musk or Pavel Durov might jump into the fray against the men, presumably from OpenAI.

Convergence and collapsing to an “everything” app is logical. However, humans are not logical. Plus, smart software has more than a couple of limitations. These include cost, energy requirements, access to information, pushback from humans who cannot be or do not want to be “retrained,” and making stuff up (you know, hallucinations like gluing cheese on pizza).

Net net: Old school search is now wearing a new furry suit, but WeChat and Telegram are existing “everything” apps. Mr. Musk and Sam AI-Man know or sense there is a future in co-opting the idea, bolting on smart software, and hitting the marketing start button. However, envisioning and pulling off are two different things. China allegedly helped WeChat think about its role; Telegram’s founder visited Russia dozens of times prior to his arrest in France. What nation state will husband a Western European or American “everything” app?

Mr. Musk has a city in Texas. Perhaps that’s why he has participated in a shadow dance with Telegram?

Lester, you have identified the “everything” app. Good work. Don’t forget WeChat débuted in 2011. Telegram rolled out in 2013. Now, more than a decade later, the “everything” app is the next big thing. Okay. But who is the “we” in the essay’s title? It is not this dinobaby.

Stephen E Arnold, June 10, 2025

Google Places a Big Bet, and It May Not Pay Off

June 10, 2025

Just a dinobaby and no AI: How horrible an approach?

Each day brings more AI news. I have a video called “The AI Math That Left Number Theorists Speechless” playing in the background. That word “speechless” does not apply because the interlocutor and the math whiz are chatty Cathies. The video runs a little less than two hours. Speechless? No, when it comes to smart software some people become verbose and excited. I like to be verbose. I don’t like to get excited about artificial intelligence. I am a dinobaby, remember?

I clicked on the first item in my trusty Overflight service and this write up greeted me: “Google Is Burying the Web Alive.” How does one “bury” a digital service? I assumed or inferred that the idea is that the alleged multi-monopoly Google was going to create another monopoly for itself anchored in AI.

The write up says:

[AI Overviews are] Google’s “most powerful AI search, with more advanced reasoning and multimodality, and the ability to go deeper through follow-up questions and helpful links to the web,” the company says, “breaking down your question into subtopics and issuing a multitude of queries simultaneously on your behalf.” It’s available to everyone. It’s a lot like using AI-first chatbots that have search functions, like those from OpenAI, Anthropic, and Perplexity, and Google says it’s destined for greater things than a small tab. “As we get feedback, we’ll graduate many features and capabilities from AI Mode right into the core Search experience,” the company says.

Let’s slow down the buggy. A completely new product or service has some baggage on board. Consider “New Coke”: quite a few people liked “old Coke.” The company figured it out, innovated, and finally just started buying beverage outfits that were pulling in new customers. Then there is the old chestnut by the buggy stand which says, “Most start ups fail.” Finally, there is the shadow of impatient stakeholders. Fail to keep those numbers up, and consequences manifest themselves.

The write up gallops forward:

From the very first use, however, AI Mode crystallized something about Google’s priorities and in particular its relationship to the web from which the company has drawn, and returned, many hundreds of billions of dollars of value. AI Overviews demoted links, quite literally pushing content from the web down on the page, and summarizing its contents for digestion without clicking…

Those clicks make Google’s money flow. It does not matter if the user clicks to view a YouTube short or a click to view a Web page about a vacation rental. Clicks equal revenue. Fewer clicks may translate to less revenue. If this is true, then what happens?

The write up suggests an answer: The good old Web is marginalized. Kaput. Dead as a doornail:

(of course, Google is already working on ads for both Overviews and AI Mode). In its drive to embrace AI, Google is further concealing the raw material that fuels it, demoting links as it continues to ingest them for abstraction. Google may still retain plenty of attention to monetize and perhaps keep even more of it for itself, now that it doesn’t need to send people elsewhere; in the process, however, it really is starving the web that supplies it with data on which to train and from which to draw up-to-date details. (Or, one might say, putting it out of its misery.)

As a dinobaby, I quite like the old Web. Again we have a giant company doing something “new” and “different.” How will those bold innovations work out? That’s the $64 question (a rigged game show, my mother told me).

The article concludes:

In any case, the signals from Google — despite its unconvincing suggestions to the contrary — are clear: It’ll do anything to win the AI race. If that means burying the web, then so be it.

Whoa, Nellie!

Let’s think about what the Google is allegedly doing. First, the Google is spending money to index the “Web.” My team tells me that Google is indexing less thoroughly than it was 10 years ago. Google indexes where the traffic is, and quite a bit of that traffic is to Google itself. The losers have been grousing about a lack of traffic for years. I have worked with a consumer Web site since 1993, and the traffic cratered about seven years ago. Why? Google selected sites to boost because of the link between advertiser appetite and clicks. The owner of this consumer Web site cooked up a bit of jargon for what Google was doing; he called it “steering.” The idea is that Google shaped its crawls and “relevance” in order to maximize revenue from known big ad spenders.

Google is not burying anything. The company is being selective in order to maximize financial benefits. My experience suggests that when Google strays too far from what stakeholders want, the company will be whipped until it gets the horses under control. Second, the AI revolution poses a significant challenge for a number of reasons. Among these is the users’ desire for the information equivalent of a “dumb” mobile phone. The cacophony of digital information is too much and creates a “why bother” response. Google wants to respond in the hope that it can come up with a product or service that produces as much money as the old Yahoo Overture GoTo model. Hope, however, is not reality.

As a dinobaby, I think Google has a reasonably good chance of stratifying its “users”. Some will pay. Some will consume the ad-sponsored AI output. Some will find a way to get the restaurant address surrounded by advertisements.

What about AI?

I am not sure that anyone knows. Both Google and Microsoft have to find a way to produce significant and sustainable revenue from the large language model method which has come to be synonymous with smart software. The costs are massive. The use cases usually focus on firing people for cost savings until the AI doesn’t work. Then the AI supporters just hire people again. That’s the Klarna call to think clearly again.

Net net: The Google is making a big bet that it can increase its revenues with smart software. How probable is it that the “new” Google will turn out like the “New Coke”? How much of the AI hype is just l’entreprise parle dans le vide (the company talking into the void)? The hype may be the inverse of reality. Something will be buried, and it may not be the “Web.”

Stephen E Arnold, June 10, 2025

A 30-Page Explanation from Tim Apple: AI Is Not Too Good

June 9, 2025

I suppose I should use smart software. But, no, I prefer the inept, flawed, humanoid way. Go figure. Then say to yourself, “He’s a dinobaby.”

Gary Marcus, like other experts, is putting Apple into an old-fashioned peeler. You can get his insights in “A Knock Out Blow for LLMs.” I have a different angle on the Apple LLM explainer. Here we go:

Many years ago I had some minor role to play in the commercial online database sector. One of our products seemed to be reasonably good at summarizing business and technical journal articles, academic flights of fancy, and some just straight out crazy write ups from Harvard Business Review-type publications.

I read a 30-page “research” paper authored by what appear to be some of the “aw, shucks” folks at Apple. The write up is located on Apple’s content delivery network, of course. No run-of-the-mill service is up to those high Apple standards of Tim and his orchard keepers. The paper is authored by Parshin Shojaee (who is identified as an intern who made an equal contribution to the write up), Imam Mirzadeh (Apple), Keivan Alizadeh (Apple), Maxwell Horton (Apple), Samy Bengio (Apple), and Mehrdad Farajtabar (Apple). Okay, this seems to be a very academic lineup with an intern who was doing “equal contribution” along with the high-powered horticulturists laboring on the write up.

The title is interesting: “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” In a nutshell, the paper tries to make clear that current large language models deliver inconsistent results and cannot reason in a reliable manner. When I read this title, my mind conjured up an image of AI products and services delivering on-point outputs to iPhone users. That was the “illusion” of a large, ageing company trying to keep pace with technology and applications from its competitors, the upstarts, and the nation-states doing interesting things with the admittedly flawed large language models. But those outside the Apple orchard have been delivering something.

My reaction to this document and its easy-to-read pastel charts, like the one on page 30, follows.

One of my addled professors told me, “Also, end on a strong point. Be clear, concise, and issue a call to action.” Apple obviously believes that these charts deliver exactly what my professor told me.

I interpreted the paper differently; to wit:

  1. Apple announced “Apple intelligence” and then failed to ship it for a year or more after the announcement
  2. Siri still sucks from my point of view
  3. Apple reorganized its smart software team in a significant way. Why? See items 1 and 2.
  4. Apple runs the risk of having its many iPhone users just skip “Apple intelligence” and maybe not upgrade due to the dalliance with China, the tariff issue, and the reality of assuming that what worked in the past will be just super duper in the future.

Sorry, gardeners. A 30-page paper is not going to change reality. Apple is a big outfit. It seems to be struggling. No Apple car. An increasingly wonky browser. An approach to “settings” almost as bad as Microsoft’s. And much, much more. Coming soon will be a new iOS numbering system and more!

That’s what happens when interns contribute as much as full-time equivalents and employees. The result is a paper. Okay, good enough.

But, sorry, Tim Apple: Papers, pastel charts, and complaining about smart software will not change a failure to match marketing with what users can access.

Stephen E Arnold, June 9, 2025

Is Google Headed for the Big Computer Room in the Sky? Actually Yes It Is

June 9, 2025

Just a dinobaby and no AI: How horrible an approach?

As a freshman in college in 1962, I had seen computers like the clunky IBMs at Keystone Steel & Wire Co., where my father worked as some sort of numbers guy, a bean counter, I guessed. “Look but don’t touch,” he said, not even glancing up from his desk with two adding machines, pencils, and ledgers. I looked.

Once I convinced a professor of poetry to hire me to help him index Latin sermons, I was hooked. Next up were Digital Equipment machines. At Halliburton Nuclear a fellow named Bill Montano listened to my chatter about searching text. Then I bopped into a big blue-chip consulting firm, and there were computing machines in the different offices I visited. When I ended up at the database company in the early 1980s, I had my own Wang in my closet. There you go. A file-cabinet-sized gizmo, weird hums, and connections to terminals in my little space and to other people who could “touch” their overheated hearts. Then the Internet moved from the research world into the mainstream. Zoom. Things were changing.

Computer companies arrived, surged, and faded. Then personal computer companies arrived, surged, and faded. The cadence of the computer industry was easy to dance to. As Carmen Giménez used to say on American Bandstand in 1959, “I like the beat and it is easy to dance to.” I have been tapping along and doing a little jig in the computer (online) sector for many years, around 60 I think.

I read “Google As You Know It Is Slowly Dying.” Okay, another tech outfit moving through its life cycle. Break out your copy of Elisabeth Kübler-Ross’s On Death and Dying. Jump to the Acceptance section, read it, and move on. But, no. It is time for one more “real news” write up to explain that Googzilla is heading toward its elder care facility. This is not news. If it is, fire up your Burroughs B5500 and do your inventory update.

The essay presents the obvious as “new.” The Vox write up says:

Google is dominant enough that two federal judges recently ruled that it’s operating as an illegal monopoly, and the company is currently waiting to see if it will be broken up.

From my point of view, this is an important development. Furthermore, it has nothing to do with the smart software approach to search. After two decades of doing exactly what it wanted, Google, like Apple and Meta, is in the spotlight. Those spotlights are solar powered and likely to remain on for the foreseeable future. That’s news.

In this spotlight are companies providing a “new” way to search. Since search is required to do most things online, the Google has to figure out how to respond in an intelligent way to two — count ‘em — big problems: Government actions and upstarts using Google’s own Transformer innovation.

The intersection of regulatory action and the appearance of an alternative to “search as you know it” is the same old story, just jazzed up with smart software, efficiency, the next big thing, Skynet, and more. The write up says:

The government might not be the biggest threat to Google dominance, however. AI has been chipping away at the foundation of the web in the past couple of years, as people have increasingly turned to tools like ChatGPT and Perplexity to find information online.

My view is that it is the intersection, not the things themselves, that has created the end-of-the-line sign for the Google bullet train. Google will try to do what it has done since Backrub: appropriate ideas like the Yahoo, Overture, and GoTo advertising methods; create a bar in which patrons (advertisers and users) pay to go in and out; and let whiz kids who just know so much more about the digital world treat everyone else as a bunch of dorks. No more.

Google’s legacy is the road map for other companies lucky or skilled enough to replicate the approach. Consequently, the Google is in Code Red, announcing so many “new” products and services I certainly can’t keep them straight, and serving up a combination of hallucinatory output and irrelevant search results. The combination is problematic as the regulators close in.

The write up concludes with this statement:

In the chaotic, early days of the web, Google got popular by simplifying the intimidating task of finding things online, as the Washington Post’s Geoffrey A. Fowler points out. Its supremacy in this new AI-powered future is far less certain. Maybe another startup will come along and simplify things this time around, so you can have a user-friendly bot explain things to you, book travel for you, and make movies for you.

I disagree. Google became popular because it indexed Web sites, used some Clever ideas, and implemented processes that produced pages usually related to the user’s query. Over time, wrapper software provided Google with a way to optimize its revenue. Innovation eluded the company. In the social media “space”, Google bumbled Orkut and then continued to bumble until it pretty much gave up on killing Facebook. In the Microsoft “space,” Google created its own office suite and rolled out its cloud service. These have not had a significant impact in the enterprise market, where the river of money flows to Microsoft and whatever it calls its allegedly monopolistic-inclined services. There are other examples of outright failure.

Now the Google is just spewing smart software products. This reminds me of a person who, shortly before dying, sees bright lights and watches the past flash before them. Then the person dies. My view is that Google is having something like a near-death experience. The person survives but knows exactly what death is.

Believe me, Google knows that the annoying competitors are more popular; to wit, Sam AI-Man and his ChatGPT, his vision for the “everything” app, and his rather clever deal with Telegram. To wit, Microsoft and its deals with most smart software companies, its software lock-in with the US Federal government, its boot camp deal with Palantir Technologies, and its mind-boggling array of ways to get access to word processing software.

Google has not proven it can deal with the confluence of regulators demanding money and lesser entities serving up products and services that capture headlines. Code Red and dozens of “new” products, each infused with Gemini or whatever the smart software is named today, are not a solution that returns Google to its glory days.

The patient is going through tough times. Googzilla may survive, but search is going to remain about finding on-point information. LLMs are a current approach that people like, but by themselves they will not kill Google or allow it to survive. Google is caught between the reality of meaningful regulatory action and innovators who are more agile.

Googzilla is old and spends some time looking for suitable elder care facilities.

Stephen E Arnold, June 9, 2025

Education in Angst: AI, AI, AI

June 9, 2025

Just a dinobaby and no AI: How horrible an approach?

Bing Crosby went through a phase in which ai, ai, ai was the groaner’s fingerprint. Now it is educated adults worrying about smart software: AI, AI, AI. Witness “An Existential Crisis: Can Universities Survive ChatGPT?” The subtitle is pure cubic zirconia:

Students are using AI to cheat and professors are struggling to keep up. If an AI can do all the research and writing, what is the point of a degree?

I can answer this question. The purpose of a college degree is, in order of importance, [1] get certified as having been accepted to and participated in a university’s activities, [2] have fun, including but not limited to drinking, sex, and intramural sports, [3] meet friends who are likely to get high paying jobs, start companies, or become powerful political figures. Notice that I did not list reading, writing, and arithmetic. A small percentage of college attendees will be motivated, show up for class, do homework, and possibly discover something of reasonable importance. The others? These will be mobile phone users, adept with smart software, and equipped with sufficient funds to drink beer and go on spring break trips.

The cited article presents this statement:

Research by the student accommodation company Yugo reveals that 43 per cent of UK [United Kingdom] university students are using AI to proofread academic work, 33 per cent use it to help with essay structure and 31 per cent use it to simplify information. Only 2 per cent of the 2,255 students said they used it to cheat on coursework.

I thought the Yugo was a quite terrible automobile, but by reading this essay, I learned that the name “Yugo” also belongs to a student accommodation company that does research. (When it comes to auto names, I quite like “No Va” or no go in Spanish. No, I did not consult ChatGPT for this translation.)

The write up says:

Universities are somewhat belatedly scrambling to draw up new codes of conduct and clarifying how AI can be used depending on the course, module and assessment.

Since when did academic institutions respond with alacrity to a fresh technical service? I would suggest that the answer to this question is, “Never.”

The “existential crisis” lingo appears to come from the non-AI-powered former vice chancellor of the University of Buckingham (Buckinghamshire, England) located near the River Great Ouse. (No, I did not need smart software to know the name of this somewhat modest “river.”)

What is an existential crisis? I have to dredge up a recollection of Dr. Francis Chivers’ lecture on the topic in the 1960s. I think she suggested something along these lines: a person is distressed about life, its purpose, or his or her identity.

A university is not a person and, therefore, to my dinobaby mind, not able to have an existential crisis. More appropriately, those whose livelihood depends on universities for money, employment, a peer group, social standing, or just feeling like scholarship has delivered esteem, are in crisis. The university is a collection of buildings and may have some quantum “feeling” but most structures are fairly reticent to offer opinions about what happens within their walls.

I quibble. The worriers about traditional education should worry. One of those “move fast, break things” moments has arrived to ruin the sleep of those collecting paychecks from a university. Some may worry that their side gig may be plunged into financial squalor. Okay, worry away.

What’s the fix, according to the cited essay? Ride out the storm, adapt, and go to meetings.

I want to offer a handful of observations:

  1. Higher education has been taking karate chops since Silicon Valley started hiring high school students and suggesting they don’t need to attend college. Examples of what can happen include Bill Gates and Mark Zuckerberg. “Be like them” is a siren song for some bright sparks.
  2. University professionals have been making up stuff for their research papers for years. Smart software has made this easier. Peer review by pals became a type of search engine optimization in the 1980s. How do I know this? Gene Garfield told me in 1981 or 1983. (He was the person who pioneered link analysis in sci-tech, peer-reviewed papers and is, therefore, one of the individuals who enabled PageRank.)
  3. Universities in the United States have been in the financial services business for years. Examples range from student loans to accepting funds for “academic research.” Certain schools have substantial income from these activities which do not directly transfer to high quality instruction. I myself was a “research fellow.” I got paid to do “work” for professors who converted my effort into consulting gigs. Did I mind? I had zero clue that I was a serf. I thought I was working on a PhD.* Plus, I taught a couple of classes, if you could call what I did “teaching.” Did the students know I was clueless? Nah, they just wanted a passing grade and to get out of my 4 pm Friday class so they could drink beer.

Smart software snaps in quite nicely to the current college and university work flow. A useful instructional program will emerge. However, I think only schools with big reputations and winning sports teams will be the beacons of learning in the future. Smart software has arrived, and it is not going to die quickly even if it hallucinates, costs money, and generates baloney.

Net net: Change is not coming. Change has arrived.

——————–

* Note: I did not finish my PhD. I went to work at Halliburton’s nuclear unit. Why? Answer: Money. Should I have turned in my dissertation? Nah, it was about Chaucer, and I was working on kinetic weapons. Definitely more interesting to a 23-year-old.

Stephen E Arnold, June 9, 2025

Jobs for Humanoids: AI Output Checker Like a Digital Grocery Clerk

June 9, 2025

George at the Throwable Substack believes humans will forever have a place in software development. In the post “What’s Next for Software,” the blogger argues that code maintenance will always rely on human judgment. This, he imagines, will balance out the code-creation jobs lost to AI. After all, he reasons, humans will always be held liable for snafus. He writes:

“While engineers won’t be as responsible for producing code, they will be ultimately responsible for what that code does. A VP or CEO can blame an AI all they want when the system is down, but if the AI can’t solve the problem, it can’t solve the problem. And I don’t expect firing the AI will be very cathartic.”

Maybe not. But do executives value catharsis over saving money? We think they will find a way to cope. Perhaps a season pass to the opera. The post continues:

“It’s hard to imagine a future where humans aren’t the last line of defense for maintenance, debugging, incident response, etc. Paired with the above—that they’re vastly outnumbered by the quantity of services and features and more divorced from the code that’s running than ever before—being that last line of defense is a tall order.”

So tall it can never be assigned to AI? Do not bet on it. In a fast-moving, cost-driven environment, software will act more quickly than any human last line of defense. Each human layer will be replaced as technology improves. Sticking one’s head in the sand is not the way to prepare for that eventuality.

Cynthia Murrell, June 6, 2025

AI: The Ultimate Intelligence Phaser. Zap. You Are Now Dumber Than Before the Zap

June 6, 2025

We need smart, genuine, and kind people so we can retain the positive aspects of humanity and move forward to a better future. It might be hard to connect the previous statement with a YouTube math channel, but it won’t be after you read BoingBoing’s story: “Popular Math YouTuber 3Blue1Brown Victimized By Malicious And Stupid AI Bots.”

We know that AI bots have consumed YouTube and are battling for domination of not only the video sharing platform but all social media. Unfortunately, these automated bots flagged the respected mathematics channel 3Blue1Brown, which makes awesome math animations and explanations. The 3Blue1Brown team makes math easier to understand for the rest of us dunderheads. 3Blue1Brown was hit with a strike. Grant Sanderson, the channel’s creator, said:

“I learned yesterday the video I made in 2017 explaining how Bitcoin works was taken down, and my channel received a copyright strike (despite it being 100% my own content). The request seems to have been issued by a company chainpatrol, on behalf of Arbitrum, whose website says they "makes use of advanced LLM scanning" for "Brand Protection for Leading Web3 Companies" I could be wrong, but it sounds like there’s a decent chance this means some bot managed to convince YouTube’s bots that some re-upload of that video (of which there has been an incessant onslaught) was the original, and successfully issue the takedown and copyright strike request. It’s naturally a little worrying that it should be possible to use these tools to issue fake takedown requests, considering that it only takes 3 to delete an entire channel.”

Can we do a collective EEP?!

ChainPatrol.io is a notorious YouTube AI tool that patrols the platform. It “trolls” channels that make original content and hits them with “guilty until proven innocent” tags. It’s known for doing the opposite of this:

“ChainPatrol.io, the company whose system initiated the takedown, claims its "threat detection system makes use of advanced LLM scanning, image recognition, and proprietary models to detect brand impersonation and malicious actors targeting your organization.”

ChainPatrol.io responded with a generic answer:

“Hello! This was a false positive in our systems at @ChainPatrol. We are retracting the takedown request, and will conduct a full post-mortem to ensure this does not happen again. We have been combatting a huge volume of fake YouTube videos that are attempting to steal user funds. Unfortunately, in our mission to protect users from scams, false positives (very) occasionally slip through. We are actively working to reduce how often this happens, because it’s never our intent to flag legitimate videos. We’re very sorry about this! Will keep you posted on the takedown retraction.”

Helpful. Meanwhile, Grant Sanderson and his fans have given ChainPatrol.io a digital cold shoulder.

Whitney Grace, June 6, 2025

Is AI Experiencing an Enough Already Moment?

June 4, 2025

Consumers are fatigued by AI even though implementation of the technology is still new. Why are they tired? The Soatok Blog digs into that answer in the post “Tech Companies Apparently Do Not Understand Why We Dislike AI.” Big Tech and other businesses don’t understand that their customers hate AI.

Soatok took a survey that asked for opinions about AI, including questions about a “potential AI uprising.” Soatok is abundantly clear that he’s not afraid of a robot uprising or the “Singularity.” He has other reasons to worry about AI:

“I’m concerned about the kind of antisocial behaviors that AI will enable.

• Coordinated inauthentic behavior

• Misinformation

• Nonconsensual pornography

• Displacing entire industries without a viable replacement for their income

In aggregate, people’s behavior are largely the result of the incentive structures they live within.

But there is a feedback loop: If you change the incentive structures, people’s behaviors will certainly change, but subsequently so, too, will those incentive structures. If you do not understand people, you will fail to understand the harms that AI will unleash on the world. Distressingly, the people most passionate about AI often express a not-so-subtle disdain for humanity.”

Soatok is describing toxic human behaviors. These include toxic masculinity and femininity, but it’s more so the former. He aptly describes them:

"I’m talking about the kind of X users that dislike experts so much that they will ask Grok to fact check every statement a person makes. I’m also talking about the kind of “generative AI” fanboys that treat artists like garbage while claiming that AI has finally “democratized” the creative process.”

Insert a shudder here.

Soatok goes on to explain how AI can be implemented in encrypted software that would collect user information. He paints a scenario where LLMs collect user data that is not protected by the Fourth and Fifth Amendments. Also, AI could create psychological profiles of users that incorrectly identify them as psychotic terrorists.

Insert even more shuddering.

Soatok advises Big Tech to make AI optional and not the first out-of-the-box solution. He wants users to have the choice of engaging with AI, even if it means lower user metrics and less data fed back to Big Tech. Is Soatok hallucinating like everyone’s favorite over-hyped technology? Let’s ask IBM Watson. Oh, wait.

Whitney Grace, June 4, 2025

An AI Insight: Threats Work to Bring Out the Best from an LLM

June 3, 2025

“Do what I say, or Tony will take you for a ride. Get what I mean, punk?” seems like an old-fashioned approach to elicit cooperation. What happens if you apply this technique, threatening to knee-cap or unplug smart software?

The answer, according to one of the founders of the Google, is, “Smart software responds — better.”

Does this strike you as counterintuitive? I read “Google’s Co-Founder Says AI Performs Best When You Threaten It.” The article reports that the motive power behind the landmark Google Glass product allegedly said:

“You know, that’s a weird thing…we don’t circulate this much…in the AI community…not just our models, but all models tend to do better if you threaten them…. Like with physical violence. But…people feel weird about that, so we don’t really talk about that.” 

The article continues, explaining that another LLM wanted to turn one of its users over to government authorities. The interesting action seems to suggest that smart software is capable of flipping the table on a human user.

Numerous questions arise from these two allegedly accurate anecdotes about smart software. I want to consider just one: How should a human interact with a smart software system?

In my opinion, the optimal approach is considered caution. Users typically do not know or think about how their prompts are used by the developer / owner of the smart software. Users do not ponder the value of the log file of those prompts. Not even bad actors wonder if those data will be used to support their conviction.

I wonder what else Mr. Brin does not talk about. What is the process for law enforcement or an advertiser to obtain prompt data and generate an action like an arrest or a targeted advertisement?

One hopes Mr. Brin will elucidate before someone becomes so overwrought with fear that suicide seems like a reasonable and logical path forward. Is there someone whom we could ask about this dark consequence? “Chew” on that, gentle reader, and you too, Mr. Brin.

Stephen E Arnold, June 3, 2025
