Pinboard: A Useful Resource

September 8, 2025

I’m going to be completely honest. When I visited Pinboard I didn’t have any idea what the website was. I poked around, visited some links that look me to various social media and similar websites, until I found the about page:

"Founded in 2009, Pinboard is a fast, independently run, no-nonsense bookmarking site for people who value privacy and speed.

There are no ads and no trackers of any kind. Users pay a modest yearly fee.

Pinboard lets you bookmark from any browser, connect up Twitter accounts (and favorites), and sync with popular services like Instapaper or Pocket.

For a few more bucks a year, Pinboard offers an archiving service which saves a copy of everything you bookmark, gives you full-text search, and automatically checks your account for dead links.”

I was intrigued. Services like this are all glitz and spangles these days, but Pinboard has old school simplicity with chaotic neutral hacker vibes. Say what?

By that I mean, it’s a neat service without the high price tag. These reviews say it all:

The Guardian said, “Pinboard is a very effective service… Sometimes, you don’t need glitz; you need plumbing.”

Followed by The Economist, One dude in his underpants somewhere who has five windows open to terminal servers.”

The operator of the site takes steps to neutralized SEO spammers and Telegram posting bots. This is a very good service. There is what I call a “slow SEO spammer.” The entity behind this steady stream of baby oriented cloth is an annoyance and a bit amusing.

Whitney Grace, September 8, 2025

Dr. Bob Clippy Will See You Now

September 8, 2025

I cannot wait for AI to replace my trusted human physician whom I’ve been seeing for years. “Microsoft Claims its AI Tool Can Diagnose Complex Medical Cases Four Times More Accurately than Doctors,” Fortune reports. The company made this incredible claim in a recent blog post. How did it determine this statistic? By taking the usual resources away from human doctors it pitted against its AI. Senior Reporter Alexa Mikhail tells us:

“The team at Microsoft noted the limitations of this research. For one, the physicians in the study had between five and 20 years of experience, but were unable to use textbooks, coworkers, or—ironically—generative AI for their answers. It could have limited their performance, as these resources may typically be available during a complex medical situation.”

You don’t say? Additionally, the study did not include everyday cases. You know, the sort doctors do not need to consult books or coworkers to diagnose. Seems legit. Microsoft says it sees the tool as a complement to doctors, not a replacement for them. That sounds familiar.

Mikahil notes AI already permeates healthcare: Most of us have looked up symptoms with AI-assisted Web searches. ChatGPT is actively being used as a psychotherapist (sometimes for better, often for worse). Many healthcare executives are eager to take this much, much further. So are about half of US patients and 63% of clinicians, according to the 2025 Philips Future Health Index (FHI), who expect AI to improve health outcomes. We hope they are correct, because there may be no turning back now.

Cynthia Murrell, September 8, 2025

Common Sense Returns for Coinbase Global

September 5, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

Just a quick dino tail slap for Coinbase. I read “Coinbase Reverses Remote Policy over North Korean Hacker Threats.” The write up says:

Coinbase has reversed its remote-first policy due to North Korean hackers exploiting fake remote job applications for infiltration. The company now mandates in-person orientations and U.S. citizenship for sensitive roles. This shift highlights the crypto industry’s need to balance flexible work with robust cybersecurity.

I strongly disagree with the cyber security angle. I think it is a return (hopefully) to common sense, not the mindless pursuit of cheap technical work and lousy management methods. Sure, cyber security is at risk when an organization hires people to do work from a far off land. The easy access to voice and image synthesis tools means that some outfits are hiring people who aren’t the people the really busy, super professional human resources person thinks was hired.

The write up points out:

North Korean hackers have stolen an estimated $1.6 billion from cryptocurrency platforms in 2025 alone, as detailed in a recent analysis by Ainvest. Their methods have evolved from direct cyberattacks to more insidious social engineering, including fake job applications enhanced by deepfakes and AI-generated profiles. Coinbase’s CEO, Brian Armstrong, highlighted these concerns during an appearance on the Cheeky Pint podcast, as covered by The Verge, emphasizing how remote-first policies inadvertently create vulnerabilities.

Close but the North Korean angle is akin to Microsoft saying, “1,000 Russian hackers did this.” Baloney. My view is that the organized hacking operations blend smoothly with the North Korean government’s desire for free cash and the large Chinese criminal organizations operating money laundering operations from that garden spot, the Golden Triangle.

Stealing crypto is one thing. Coordinating attacks on organizations to exfiltrate high value information is a second thing. A third thing is to perform actions that meet the needs and business methods of large-scale money laundering, phishing, and financial scamming operations.

Looking at these events from the point of view of a single company, it is easy to see that cost reduction and low cost technical expertise motivated some managers, maybe those at Coinbase. But now that more information is penetrating the MBA fog that envelopes many organizations, common sense may become more popular. Management gurus and blue chip consulting firms are not proponents of common sense in my experience. Coinbase may have seen the light.

Stephen E Arnold, September 5, 2025

AI Can Be Your Food Coach… Well, Perhaps Not

September 5, 2025

Is this better or worse than putting glue on pizza? TechSpot reveals yet another severe consequence of trusting AI: “Man Develops Rare 19th-Century Psychiatric Disorder After Following ChatGPT’s Diet Advice.” Writer Rob Thubron tells us:

“The case involved a 60-year-old man who, after reading reports on the negative impact excessive amounts of sodium chloride (common table salt) can have on the body, decided to remove it from his diet. There were plenty of articles on reducing salt intake, but he wanted it removed completely. So, he asked ChatGPT for advice, which he followed. After being on his new diet for three months, the man admitted himself to hospital over claims that his neighbor was poisoning him. His symptoms included new-onset facial acne and cherry angiomas, fatigue, insomnia, excessive thirst, poor coordination, and a rash. He also expressed increasing paranoia and auditory and visual hallucinations, which, after he attempted to escape, ‘resulted in an involuntary psychiatric hold for grave disability.’”

Yikes! It was later learned ChatGPT suggested he replace table salt with sodium bromide. That resulted, unsurprisingly, in this severe case of bromism. That malady has not been common since the 1930s. Maybe ChatGPT confused the user with a spa/hot tub or an oil and gas drill. Or perhaps its medical knowledge is just a bit out of date. Either way, this sad incident illustrates what a mistake it is to rely on generative AI for important answers. This patient was not the only one here with hallucinations.

Cynthia Murrell, September 5, 2025

Supermarket Snitches: Old-Time Methods Are Back

September 5, 2025

So much for AI and fancy cyber-security systems. One UK grocery chain has found a more efficient way to deal with petty theft—pay people to rat out others. BBC reports, “Iceland Offers £1 Reward for Reporting Shoplifters.” (Not to be confused with the country, this Iceland is a British supermarket chain.) Business reporter Charlotte Edwards tells us shoplifting is a growing problem for grocery stores and pharmacies. She writes:

“Victims minister Alex Davies-Jones told BBC Radio 4’s Today programme on Monday that shoplifting had ‘got out of hand’ in the UK. … According to the Office for National Statistics, police recorded 530,643 shoplifting offences in the year to March 2025. That is a 20% increase from 444,022 in the previous year, and the highest figure since current recording practices began in 2002-03.”

Amazing what economic uncertainty will do. In response, the government plans to put thousands more police officers on neighborhood patrols by next spring. Perhaps encouraging shoppers to keep their eyes peeled will help. We learn:

“Supermarket chain Iceland will financially reward customers who report incidents of shoplifting, as part of efforts to tackle rising levels of retail theft. The firm’s executive chairman, Richard Walker, said that shoppers who alert staff to a theft in progress will receive a £1 credit on their Iceland Bonus Card. The company estimates that shoplifting costs its business around £20m each year. Mr Walker said this figure not only impacts the company’s bottom line but also limits its ability to reduce prices and reinvest in staff wages. Iceland told the BBC that the shoplifters do not necessarily need to be apprehended for customers to receive the £1 reward but will need to be reported and verified.”

How, exactly, they will be verified is left unexplained. Perhaps that is the role for advanced security systems. Totally worth it. Walker emphasizes customers should not try to apprehend shoplifters, just report them. Surely no one will get that twisted. But with one pound sterling equal to $1.35 USD, we wonder: is that enough incentive to pull the phone out of one’s pocket?

Technology is less effective than snitching.

Cynthia Murrell, September 5, 2025

Grousing Employees Can Be Fun. Credible? You Decide

September 4, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

I read “Former Employee Accuses Meta of Inflating Ad Metrics and Sidestepping Rules.” Now former employees saying things that cast aspersions on a former employer are best processed with care. I did that, and I want to share the snippets snagging my attention. I try not to think about Meta. I am finishing my monograph about Telegram, and I have to stick to my lane. But I found this write up a hoot.

The first passage I circled says:

Questions are mounting about the reliability of Meta’s advertising metrics and data practices after new claims surfaced at a London employment tribunal this week. A former Meta product manager alleged that the social media giant inflated key metrics and sidestepped strict privacy controls set by Apple, raising concerns among advertisers and regulators about transparency in the industry.

Imagine. Meta coming up at a tribunal. Does that remind anyone of the Cambridge Analytica excitement? Do you recall the rumors that fiddling with Facebook pushed Brexit over the finish line? Whatever happened to those oh-so-clever CA people?

I found this tribunal claim interesting:

… Meta bypassed Apple’s App Tracking Transparency (ATT) rules, which require user consent before tracking their activity across iPhone apps. After Apple introduced ATT in 2021, most users opted out of tracking, leading to a significant reduction in Meta’s ability to gather information for targeted advertising. Company investors were told this would trim revenues by about $10 billion in 2022.

I thought Apple had their system buttoned up. Who knew?

Did Meta have a response? Absolutely. The write up reports:

“We are actively defending these proceedings …” a Meta spokesperson told The Financial Times. “Allegations related to the integrity of our advertising practices are without merit and we have full confidence in our performance review processes.”

True or false? Well….

Stephen E Arnold, September 4, 2025

Spotify Does Messaging: Is That Good or Bad?

September 4, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

My team and I have difficulty keeping up with the messaging apps that seem to be like mating gerbils. I noted that Spotify, the semi-controversial music app, is going to add messaging. “Spotify Adds In-App Messaging Feature to Let Users Share Music and Podcasts Directly” says:

According to the company, the update is designed “to give users what they want and make those moments of connection more seamless and streamlined in the Spotify app.” Users will be able to message people they have interacted with on Spotify before, such as through Jams, Blends and Collaborative Playlists, or those who share a Family or Duo plan.

The messaging app is no Telegram. The interesting question for me is, “Will Spotify emulate Telegram’s features as Meta’s WhatsApp has?”

Telegram, despite its somewhat negative press, has found a way to monetize user clicks, supplement subscription revenue with crypto service charges, and alleged special arrangement now being adjudicated by the French judiciary.

New messaging platforms get a look from bad actors. How will Spotify police the content? Avid music people often find ways to circumvent different rules and regulations to follow their passion.

Will Spotify cooperate with regulators or will it emulate some of the Dark Web messaging outfits or Telegram, a firm with a template for making money appear when necessary?

Stephen E Arnold, September 4, 2025

Fabulous Fakes Pollute Publishing: That AI Stuff Is Fatuous

September 4, 2025

New York Times best selling author David Baldacci testified before the US Congress about regulating AI. Medical professionals are worried about false information infiltrating medical knowledge like the scandal involving Med-Gemini and an imaginary body part. It’s getting worse says ZME Science: “A Massive Fraud Ring Is Publishing Thousands of Fake Studies and the Problem is Exploding. ‘These Networks Are Essentially Criminal Organizations.’”

Bad actors in scientific publishing used to be a small group, but now it’s a big posse:

“What we are seeing is large networks of editors and authors cooperating to publish fraudulent research at scale. They are exploiting cracks in the system to launder reputations, secure funding, and climb academic ranks. This isn’t just about the occasional plagiarized paragraph or data fudged to fool reviewers. This is about a vast and resilient system that, in some cases, mimics organized crime. And it’s infiltrating the very core of science.”

Luís Amaral discovered in a study he conducted that analyzed five million papers across 70,000 scientific journals that there is a fraudulent paper mill for publishing. You’ve heard of paper mill colleges where students can buy so-called degrees. This is similar except the products are authorship slots and journal placements from artificial research and compromised editors.

Outstanding, AI champions!

This is a way for bad actors to pad their resumes and gain undeserved creditability.

Fake science has always been a problem but it’s outpacing fact-based science. It’s cheaper to produce fake science than legitimate truth. The article then waxes poetic about the need for respectability, the dangerous consequences of false science, and how the current tools aren’t enough. It’s devastating but the expected cultural shift needed to be more respectful of truth and hard facts is not equipped to deal with the new world. Thanks, AI.

Whitney Grace, September 4, 2025

Derailing Smart Software with Invisible Prompts

September 3, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.

The write up states:

Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.

The write up includes examples like these:

… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….

Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.

Stephen E Arnold, September 3, 2025

AI Words Are the Surface: The Deeper Thought Embedding Is the Problem with AI

September 3, 2025

Dino 5 18 25This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker. 

Humans are biased. Content generated by humans reflects these mental patterns. Smart software is probabilistic. So what?

Select the content to train smart software. The more broadly the content base, the greater range of biases will be baked into the Fancy Dan software. Then toss in the human developers who make decisions about thresholds, weights, and rounding. Mix in the wrapper code that does the guardrails which are created by humans with some of those biases, attitudes, and idiosyncratic mental equipment.

Then provide a system to students and people eager to get more done with less effort and what do you get? A partial and important glimpse of the consequences of about 2.5 years of AI as the next big thing are presented in “On-Screen and Now IRL: FSU Researchers Find Evidence of ChatGPT Buzzwords Turning Up in Everyday Speech.”

The write up reports:

“The changes we are seeing in spoken language are pretty remarkable, especially when compared to historical trends,” Juzek said. “What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.”

Conjecture. That’s a weasel word. Once words are embedded they dragged a hard sided carry on with them.

The write up adds:

“Our research highlights many important ethical questions,” Galpin said. “With the ability of LLMs to influence human language comes larger questions about how model biases and misalignment, or differences in behavior in LLMs, may begin to influence human behaviors.”

As more research data become available, I project that several factoids will become points of discussion:

  1. What happens when AI outputs are weaponized for political, personal, or financial gain?
  2. How will people consuming AI outputs recognize that their vocabulary and the attendant “value baggage” is along for the life journey?
  3. What type of mental remapping can be accomplished with shaped AI output?

For now, students are happy to let AI think for them. In the future, will that warm, fuzzy feeling persist. If ignorance is bliss, I say, “Hello, happy.”

Stephen E  Arnold, September 3, 2025

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta