Oh, Oh, a Technological Insight: Unstable, Degrading, Non-Reversible.
April 9, 2025
Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”
“Building a House of Cards” has a subtitle which echoes other statements of “Oh, oh, this is not good”:
Beneath the glossy promises of artificial intelligence lies a ticking time bomb — and it’s not the one you’re expecting
Yep, another person who seems younger than I am has realized that flows of digital information erode not just social structures but other functions as well.
The author, who publishes in Mr. Plan B, states:
The real crisis isn’t Skynet-style robot overlords. It’s the quiet, systematic automation of human bias at scale.
The observation is excellent. The bias comes from engineers and coders who set thresholds, orchestrate algorithmic beavers, and use available data. That human bias is woven into the systems people use, believe, and depend upon.
The essay asserts:
We’re not coding intelligence — we’re fossilizing prejudice.
That, in my opinion, is a good line.
The author, however, runs into a bit of a problem. The idea of a developers’ manifesto is interesting but flawed. Most devs, as some term this group, like creating stuff and solving problems. That’s the kick. Most of the devs with whom I have worked laugh when I tell them I majored in medieval religious poetry. One, a friend of mine, said, “I paid someone to write my freshman essay, and I never took any classes other than math and science.”
I like that: Ignorance and a good laugh at how I spent my college years. The one saving grace is that I got paid to help a professor index Latin sermons using the university’s one computer to output the word lists and microfilm locators. Hey, in 1962, this was voodoo.
Those who craft the systems are not compensated to think about whether Latin sermons were original or just passed around when a visiting monk exchanged some fair copies for a snort of monastery wine and a bit of roast pig. Let me tell you that most of those sermons were tediously similar and raised such thorny problems as the originality of the “author.”
The essay concludes with a factoid:
25 years in tech taught me one thing: Every “revolutionary” technology eventually faces its reckoning. AI’s is coming.
I am not sure that those engaged in the noble art and craft of engineering “smart” software accept, relate, or care about the validity of the author’s statement.
The good news is that the essay’s author now understands that flows of digital information do not construct. The bits zipping around erode just like the glass beads or corn cob abrasive in a body shop’s media blaster aimed at a rusted automobile frame.
The body shop “restores” the rusted part until it is as good as new. Even better, some mechanics say.
As long as it is “good enough,” the customer is happy. But those in the know realize that the frame will someday be unable to support the stress placed upon it.
See. Philosophy from a mechanical process. But the meaning speaks to a car nut. One may have to give up or start over.
Stephen E Arnold, April 9, 2025
Programmers? Just the Top Code Wizards Needed. Sorry.
April 8, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
Microsoft has some interesting ideas about smart software and writing “code.” To sum it up, consider another profession.
“Microsoft CTO Predicts AI Will Generate 95% of Code by 2030” reports:
Developers’ roles will shift toward orchestrating AI-driven workflows and solving complex problems.
I think this means that instead of figuring out how to make something happen, one will perform the higher level mental work. The “script” comes out of the smart software.
The write up says:
“It doesn’t mean that the AI is doing the software engineering job … authorship is still going to be human,” Scott explained. “It creates another layer of abstraction [as] we go from being an input master (programming languages) to a prompt master (AI orchestrator).” He doesn’t believe AI will replace developers, but it will fundamentally change their workflows. Instead of painstakingly writing every line of code, engineers will increasingly rely on AI tools to generate code based on prompts and instructions. In this new paradigm, developers will focus on guiding AI systems rather than programming computers manually. By articulating their needs through prompts, engineers will allow AI to handle much of the repetitive work, freeing them to concentrate on higher-level tasks like design and problem-solving.
The idea is good. Does it imply that smart software has reached the end of its current trajectory and will not be able to:
- Recognize a problem
- Formulate appropriate questions
- Obtain a solution via research, experimentation, or Eureka! moments?
The observation by the Microsoft CTO does not seem to consider this question about a trolley line that can follow its tracks.
The article heads off in another direction; specifically, what happens to the costs?
IBM CEO Arvind Krishna is quoted as saying:
“If you can produce 30 percent more code with the same number of people, are you going to get more code written or less?” Krishna rhetorically posed, suggesting that increased efficiency would stimulate innovation and market growth rather than job losses.
Where does this leave “coders”?
Several observations:
- Those in the top one percent of skills are in good shape. The other 99 percent may want to consider different paths to a bright, fulfilling future.
- Money, not quality, is going to become more important.
- Inexperienced “coders” may find themselves looking for ways to get skills at the same time unneeded “coders” are trying to reskill.
It is no surprise that CNET reported, “The public is particularly concerned about job losses. AI experts are more optimistic.”
Net net: Smart software, good or bad, is going to reshape work in a big chunk of the workforce. Are schools preparing students for this shift? Are there government programs in place to assist older workers? As a dinobaby, it seems the answer is not far to seek.
Stephen E Arnold, April 8, 2025
HP and Dead Printers: Hey, Okay, We Will Not Pay
April 8, 2025
HP found an effective way to ensure those who buy its printers also buy its pricy ink: Firmware updates that bricked the printers if a competitor’s cartridge was installed. Not all customers appreciated the ingenuity. Ars Technica reports, "HP Avoids Monetary Damages Over Bricked Printers in Class-Action Settlement." Reporter Scharon Harding writes:
"In December 2020, Mobile Emergency Housing Corp. and a company called Performance Automotive & Tire Center filed a class-action complaint against HP [PDF], alleging that the company ‘wrongfully compels users of its printers to buy and use only HP ink and toner supplies by transmitting firmware updates without authorization to HP printers over the Internet that lock out its competitors’ ink and toner supply cartridges.’ The complaint centered on a firmware update issued in November 2020; it sought a court ruling that HP’s actions broke the law, an injunction against the firmware updates, and monetary and punitive damages. ‘HP’s firmware "updates" act as malware—adding, deleting or altering code, diminishing the capabilities of HP printers, and rendering the competitors’ supply cartridges incompatible with HP printers,’ the 2020 complaint reads."
Yikes. The name HP gave this practice is almost Orwellian. We learn:
"HP calls using updates to prevent printers from using third-party ink and toner Dynamic Security. The term aims to brand the device bricking as a security measure. In recent years, HP has continued pushing this claim, despite security experts that Ars has spoken with agreeing that there’s virtually zero reason for printer users to worry about getting hacked through ink."
No kidding. After nearly four years of litigation, the parties reached a settlement. HP does not admit any wrongdoing and will not pay monetary relief to affected customers. It must, however, let users decline similar updates; well, those who own a few particular models, anyway. It will also put disclaimers about Dynamic Security on product pages. Because adding a couple lines to the fine print will surely do the trick.
Harding notes that, though this settlement does not include monetary restitution, other decisions have. Those few million dollars do not seem to have influenced HP to abolish the practice, however.
Cynthia Murrell, April 8, 2025
Passwords: Reuse Pumps Up Crime
April 8, 2025
Cloudflare reports that password reuse is one of the biggest mistakes users make that compromises their personal information online. Cloudflare monitored traffic through its services between September and November 2024 and discovered that 41% of all logins to Cloudflare-protected Web sites used compromised passwords. Cloudflare discussed this vulnerability in the blog post “Password Reuse Is Rampant: Nearly Half Of Observed User Logins Are Compromised.”
As part of its services, Cloudflare monitors whether passwords have been leaked in any known data breaches and then warns users of the potential threat. Cloudflare analyzed traffic from Internet properties on the company’s free plan that includes the leaked-credentials feature.
When Cloudflare conducted this research, the biggest challenge was distinguishing between real humans and bad actors. They focused on successful login attempts, because this indicates real humans were involved. The data revealed that 41% of human authentication attempts involved leaked credentials. Despite warning PSAs about reusing old passwords, users haven’t changed their ways.
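Cloudflare does not spell out its implementation, but leaked-credential checks of this kind are commonly built on a k-anonymity range query, as offered by the public Have I Been Pwned “Pwned Passwords” API: only the first five characters of the password’s SHA-1 hash leave the machine, and the match is finished locally. A minimal sketch in Python (the api.pwnedpasswords.com endpoint is real; the helper names are mine, and this is illustrative, not Cloudflare’s code):

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def parse_range_response(body: str, suffix: str) -> int:
    """Return the breach count for `suffix` in a range-API response, or 0 if absent.

    Each response line has the form SUFFIX:COUNT.
    """
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def times_pwned(password: str) -> int:
    """Query the Have I Been Pwned range API; only the 5-char hash prefix is sent."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return parse_range_response(resp.read().decode("utf-8"), suffix)
```

The design point is privacy: the server sees a hash prefix shared by hundreds of passwords, leaked and unleaked alike, so it cannot tell which password was checked.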
Bot attacks are also on the rise. These bots are programmed with stolen passwords and credentials and are told to test them on targeted Web sites.
Here’s what Cloudflare found:
“Data from the Cloudflare network exposes this trend, showing that bot-driven attacks remain alarmingly high over time. Popular platforms like WordPress, Joomla, and Drupal are frequent targets, due to their widespread use and exploitable vulnerabilities, as we will explore in the upcoming section.
Once bots successfully breach one account, attackers reuse the same credentials across other services to amplify their reach. They even sometimes try to evade detection by using sophisticated evasion tactics, such as spreading login attempts across different source IP addresses or mimicking human behavior, attempting to blend into legitimate traffic. The result is a constant, automated threat vector that challenges traditional security measures and exploits the weakest link: password reuse.”
Cloudflare advises people to have multi-factor authentication on accounts, explore using passkeys, and for God’s sake please change your password. I have heard that Telegram’s technology enables some capable bots. Does Telegram rely on Cloudflare for some services? Huh.
Whitney Grace, April 8, 2025
Amazon Takes the First Step Toward Moby Dickdom
April 7, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
This Engadget article does not predict the future. “Amazon Will Use AI to Generate Recaps for Book Series on the Kindle” reports:
Amazon’s new feature could make it easier to get into the latest release in a series, especially if it’s been some time since you’ve read the previous books. The new Recaps feature is part of the latest software update for the Kindle, and the company compares it to “Previously on…” segments you can watch for TV shows. Amazon announced Recaps in a blog post, where it said that you can get access to it once you receive the software update over the air or after you download and install it from Amazon’s website. Amazon didn’t talk about the technology behind the feature in its post, but a spokesperson has confirmed to TechCrunch that the recaps will be AI generated.
You may know a person who majored in American or English literature. Here’s a question you could pose:
Do those novels by a successful author follow a pattern; that is, repeatable elements and a formula?
My hunch is that authors who have written a series of books have a recipe. The idea is, “If it makes money, do it again.” In the event that you could ask Nora Roberts or commune with Billy Shakespeare, did their publishers ask, “Could you produce another one of those for us? We have a new advance policy.” When my Internet 2000: The Path to the Total Network made money in 1994, I used the approach, tone, and research method for my subsequent monographs. Why? People paid to read or flip through the collected information presented my way. I admit that I combined luck, what I learned at a blue chip consulting firm, and inputs from people who had written successful non-fiction “reports.” My new monograph — The Telegram Labyrinth — follows this blueprint. Just ask my son, and he will say, “My dad has a template and fills in the blanks.”
If a dinobaby can do it, what about flawed smart software?
Chase down a person who teaches creative writing, preferably in a pastoral setting. Ask that person, “Do successful authors of series follow a pattern?”
Here’s what I think is likely to happen at Amazon. Remember. I have zero knowledge about the inner workings of the Bezos bulldozer. I inhale its fumes like many other people. Also, Engadget doesn’t get near this idea. This is a dinobaby opinion.
Amazon will train its smart software to write summaries. Then someone at Amazon will ask the smart software to generate a 5,000 word short story in the style of Nora Roberts or some other money spinner. If the story is okay, then the Amazonian with a desire to shift gears says, “Can you take this short story and expand it to a 200,000 word novel, using the patterns, motifs, and rhetorical techniques of the series of novels by Nora, Mark, or whoever.”
Guess what?
Amazon now has an “original” novel which can be marketed as an Amazon test, a special to honor whomever, or an experiment. If Prime members or the curious click a lot, that Amazon employee has a new business to propose to the big bulldozer driver.
How likely is this scenario? My instinct is that there is a 99 percent probability that an individual at Amazon or the firm from which Amazon is licensing its smart software has or will do this.
How likely is it that Amazon will sell these books to the specific audience known to consume the confections of Nora and Mark or whoever? I think the likelihood is close to 80 percent. The barriers are:
- Bad optics among publishers, many of which are not pals of fume spouting bulldozers in the few remaining bookstores
- Legal issues because both publishers and authors will grouse and take legal action. The method mostly worked when Google was scanning everything from timetables of 19th century trains in England to books just unwrapped for the romance novel crowd
- Management disorganization. Yep, Amazon is suffering the organization dysfunction syndrome just like other technology marvels
- The outputs lack the human touch. The project gets put on ice until OpenAI, Anthropic, or whatever comes along and does a better job and probably for fewer computing resources which means more profit.
What’s important is that this first step is now public and underway.
Engadget says, “Use it at your own risk.” Whose risk may I ask?
Stephen E Arnold, April 7, 2025
AI May Fizzle and the New York Times Is Thrilled
April 7, 2025
Yep, a dinobaby blog post. No smart software required.
I read “The Tech Fantasy That Powers A.I. Is Running on Fumes.” Is this a gleeful headline or not? Not even 10 days after the Italian “all AI” newspaper found itself the butt of merciless humor, the NYT is going for the jugular.
The write up opines:
- “Midtech” — tech but not really
- “Silly” — Showing little thought or judgment
- “Academics” — Ivory tower dwellers, not real journalists and thinkers
Here’s a quote from a person who obviously does not like self-checkouts:
The economists Daron Acemoglu and Pascual Restrepo call these kinds of technological fizzles “so-so” technologies. They change some jobs. They’re kind of nifty for a while. Eventually they become background noise or are flat-out annoying, say, when you’re bagging two weeks’ worth of your own groceries.
And now the finale:
But A.I. is a parasite. It attaches itself to a robust learning ecosystem and speeds up some parts of the decision process. The parasite and the host can peacefully coexist as long as the parasite does not starve its host. The political problem with A.I.’s hype is that its most compelling use case is starving the host — fewer teachers, fewer degrees, fewer workers, fewer healthy information environments.
My thought is that the “real” journalists at the NYT hope that AI fails. Most routine stories can be handled by smart software. Sure, there are errors. But looking at a couple of versions of the same event is close enough for horseshoes.
The writing is on the wall of the bean counters’ offices: Reduce costs. Translation: Some “real” journalists can try to get a job as a big-time consultant. Oh, strike that. Outfits that sell brains are replacing flakey MBAs with smart software. Well, there is PR and marketing. Oh, oh, strike that too. Telegram’s little engines of user-controlled smart software can automate ads. Will other ad outfits follow Telegram’s lead? Absolutely.
Yikes. It won’t be long before some “real” journalists will have an opportunity to write their version of:
- Du côté de chez Swann
- À l’ombre des jeunes filles en fleurs
- Le Côté de Guermantes
- Sodome et Gomorrhe
- La Prisonnière
- Albertine disparue (also published as La Fugitive)
- Le Temps retrouvé
Which one will evoke the smell of the newsroom?
Stephen E Arnold, April 7, 2025
A TikTok Use Case: Another How To
April 7, 2025
Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.
Social media services strike me as problematic. As a dinobaby, I marvel at the number of people who view services through a porthole in their personal submarine. Write ups amazed at the negative applications of social media remind me that there are reasons meaningful regulation of TikTok-type services has not been formulated. Are these negative use cases news? For me, nope.
I read “How TikTok Is Emerging As an Essential Tool for Migrant Smugglers.” The write up explains how a “harmless” service can be used for criminal activities. The article says:
At a time when legal pathways to the U.S. have been slashed and criminal groups are raking in money from migrant smuggling, social media apps like TikTok have become an essential tool for smugglers and migrants alike. The videos—taken to cartoonish extremes—offer a rare look inside a long elusive industry and the narratives used by trafficking networks to fuel migration north.
Yep, TikTok is a marketing tool for people smugglers. Wow! Really?
Is this a surprise? My hunch is that the write up reveals more about the publication and the researchers than it does about human smugglers.
Is this factoid unheard of?
A 2023 study by the United Nations reported that 64% of the migrants they interviewed had access to a smart phone and the internet during their migration to the U.S.
A free service used by millions of people provides a communications fabric. Marketing is the go-to function of organizations, licit and illicit.
Several observations:
- Social media — operating in the US or in countries with different agendas — is a tool. Tools can be used for many purposes. Why wouldn’t bad actors exploit TikTok or any other social media service?
- The intentional use of a social media service for illegal purposes is widespread. LinkedIn includes fake personas; Telegram offers pirated video content; and Facebook — sure, even Facebook — allows individuals to advertise property for sale which may not come with a legitimate sales receipt from the person who found a product on a doorstep in an affluent neighborhood. Social media invites improper activity.
- Regulation in many countries has not kept pace with the diffusion of social media. In 2025, worrying about misuse of these services is not even news.
The big question is, “Have we reached a point of no return with social media?” I have been involved in computers and digital information for more than a half century. The datasphere is the world in which we live.
Will the datasphere evolve? Yes, the intentional use of social media is shifting toward negative applications. For me that means that for every new service, I do not perceive a social benefit. I see opportunities for accelerating improper use of data flows.
What strikes me about the write up is that documenting a single issue is interesting, but it misses what flows of information in TikTok-like services are and how they operate. Who are the winners? Who are the losers? And, who will own TikTok and the information space for its users?
Stephen E Arnold, April 7, 2025
Free! Does Google Do Anything for Free?
April 7, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
What an inducement! Such a deal!
How excited was I to read this headline:
Gemini 2.5 Pro Is Google’s Most Powerful AI Model and It’s Already Free
The write up explains:
Google points to several benchmark tests that show the prowess of Gemini 2.5 Pro. At the time of writing it tops the LMArena leaderboard, where users give ratings on responses from dozens of AI chatbots. It also scores 18.8 percent on the Humanity’s Last Exam test—which measures human knowledge and reasoning—narrowly edging out rival models from OpenAI and Anthropic.
As a dinobaby, I understand this reveal is quantumly supreme. Google is not only the best. The “free” approach puts everyone on notice that Google is not interested in money. Google is interested in…. Well, frankly, I am not sure.
Thanks, You.com. Good enough. I have to pay to get this type of smart art.
Possible answers include: [a] publicity to deal with the PR tsunami the OpenAI Ghibli capability splashed across my newsfeeds, [b] a response to the Chinese open source alternatives from eCommerce outfits and mysterious venture capital firms, [c] Google’s tacit admission that its best card is the joker that allows free access to the game, [d] an unimaginative response to a competitive environment less and less Google centric each day.
Pick one.
The write up reports:
The frenetic pace of AI development shows no signs of slowing down anytime soon, and we can expect more Gemini 2.5 models to appear in the near future. “As always, we welcome feedback so we can continue to improve Gemini’s impressive new abilities at a rapid pace, all with the goal of making our AI more helpful,” says Koray Kavukcuoglu, from Google’s DeepMind AI lab.
The question is, “Have the low-hanging AI goodies been harvested?”
I find that models are becoming less distinctive. One of my team handed me two sheets of paper. On one was a paragraph from our locally installed Deepseek. On the other was an answer from You.com’s “smart” option.
My response was, “So?” I could not tell which model produced what because the person whom I pay had removed the idiosyncratic formatting of the Deepseek output and the equally distinctive outputting from You.com’s Smart option.
My team member asked, “Which do you prefer?”
I said, “Get Whitney to create one write up and input our approach to the topic.”
Both were okay; neither was good enough to use as handed to me.
Good enough. The AI systems reached “good enough” last year. Since then, not much change except increasing similarity.
Free is about right. What’s next? Paying people to use Google, the way Microsoft once paid people to use Bing?
Now to answer the headline question, “Does Google do anything for free?” My answer: Only when the walls are closing in.
Stephen E Arnold, April 7, 2025
Errors? AI Makes Accuracy Irrelevant
April 4, 2025
This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything.
We have poked around some AI services. A few are very close to being dark patterns that want to become like Al Capone or, more accurately, AI Capone. Am I thinking of 1min.ai? Others just try to sound so friendly when outputting wackiness. Am I thinking about the Softies or ChatGPT? I don’t know.
I did read “AI Search Has A Citation Problem.” The main point is that AI struggles with accuracy. One can gild the lily and argue that it makes work faster. I won’t argue that quick incorrect output may speed some tasks. However, the write up points out:
Premium chatbots provided more confidently incorrect answers than their free counterparts.
I think this means that paying money does not deliver accuracy, judgment, or useful information. I would agree.
A farmer wonders how the steam engine ended up in his corn field. How did smart software get involved in deciding that distorted information was a useful output for students and workers? Thanks, You.com. The train was supposed to be on its side, but by getting the image different from my prompt, you have done the job. Close enough for horse shoes, right?
The write up also points out:
Generative search tools fabricated links and cited syndicated and copied versions of articles.
I agree.
Here’s a useful finding if one accepts the data in the write up as close enough for horseshoes:
Overall, the chatbots often failed to retrieve the correct articles. Collectively, they provided incorrect answers to more than 60 percent of queries. Across different platforms, the level of inaccuracy varied, with Perplexity answering 37 percent of the queries incorrectly, while Grok 3 had a much higher error rate, answering 94 percent of the queries incorrectly.
The alleged error rate of Grok is in line with my experience. I try to understand, but when space ships explode, people set Cybertrucks on fire, and the cratering of Tesla stock causes my widowed neighbor to cry — I see a pattern of missing the mark. Your mileage or wattage may vary, of course.
The write up points out:
Platforms often failed to link back to the original source
For the underlying data and more academic explanations, please, consult the original article.
I want to shift gears and make some observations about the issues presented by the data in the article and my team’s experience with smart software. Here we go, gentle reader:
- People want convenience or what I call corner cutting. AI systems eliminate the old fashioned effort required to verify information. Grab and go information, like fast food, may not be good for the decision making life.
- The information floating around about a Russian content mill pumping out thousands of weaponized news stories a day may be half wrong. Nevertheless, it makes clear that promiscuous and non-thinking AI systems can ingest weaponized content and spit it out without a warning label or even recognizing baloney when one expects a slab of Wagyu beef.
- Integrating self-driving AI into autonomous systems is probably not yet a super great idea. The propaganda about Chinese wizards doing this party trick is interesting, just a tad risky when a kinetic is involved.
Where are we? Answering this question is a depressing activity. Companies like Microsoft are forging ahead with smart software helping people do things in Excel. Google is allowing its cheese-obsessed AI to write email responses. Outfits like BoingBoing are embracing questionable services like a speedy AI Popeil pocket fisherman as part of its money making effort. And how about those smart Anduril devices? Do they actually work? I don’t want to let one buzz me.
The AI crazy train is now going faster than the tracks permit. How does one stop a speeding autonomous train? I am going to stand back because that puppy is going to fall off the tracks and friction will do the job. Whoo. Whoo.
Stephen E Arnold, April 4, 2025
Bye-Bye Newsletters, Hello AI Marketing Emails
April 4, 2025
Adam Ryan takes aim at newsletters in the Work Week article, “Perpetual: The Major Shift of Media.” Ryan starts the article saying we’re already in a changing media landscape and, if you’re not preparing, you will be left behind. He then dives into more detail, explaining that the latest trend setter is the email newsletter. From his work in advertising, Ryan has seen newsletters rise from the bottom of the food chain to million-dollar marketing tools.
He explains that newsletters becoming important marketing tools wasn’t an accident and that it happened through a democratization process. By democratization Ryan means that newsletters became easier to make through the use of simplification software. He uses the example of Shopify streamlining e-commerce and Beehiiv doing the same for newsletters. Another example is Windows making PCs easier to use with its intuitive UI.
Continuing with the Shopify example, Ryan says that mass adoption of the e-commerce tool has flooded the market place. Top brands that used to dominate the market were now overshadowed by competition. In short, everyone and the kitchen sink was selling goods and services.
Ryan says that the newsletter trend is about to shift and people (operators) who solely focus on this trend will fall out of favor. He quotes Warren Buffet: “Be fearful when others are greedy, and be greedy when others are fearful.” Ryan continues that people are changing how they consume information and they want less of it, not more. Enter the AI tool:
“Here’s what that means:
• Email open rates will drop as people consume summaries instead of full emails.
• Ad clicks will collapse as fewer people see newsletter ads.
• The entire value of an “owned audience” declines if AI decides what gets surfaced.”
It’s not the end of the line for newsletters if you become indispensable: create content that can’t be summarized, build relationships beyond email, and don’t be a commodity:
“This shift is coming. AI will change how people engage with email. That means the era of high-growth newsletters is ending. The ones who survive will be the ones who own their audience relationships, create habit-driven content, and build businesses beyond the inbox.”
This is true about every major change, not just newsletters.
Whitney Grace, April 4, 2025