Common Sense Returns for Coinbase Global

September 5, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

Just a quick dino tail slap for Coinbase. I read “Coinbase Reverses Remote Policy over North Korean Hacker Threats.” The write up says:

Coinbase has reversed its remote-first policy due to North Korean hackers exploiting fake remote job applications for infiltration. The company now mandates in-person orientations and U.S. citizenship for sensitive roles. This shift highlights the crypto industry’s need to balance flexible work with robust cybersecurity.

I strongly disagree with the cyber security angle. I think it is a return (hopefully) to common sense, not the mindless pursuit of cheap technical work and lousy management methods. Sure, cyber security is at risk when an organization hires people to do work from a far off land. The easy access to voice and image synthesis tools means that some outfits are hiring people who aren’t the people the really busy, super professional human resources person thinks was hired.

The write up points out:

North Korean hackers have stolen an estimated $1.6 billion from cryptocurrency platforms in 2025 alone, as detailed in a recent analysis by Ainvest. Their methods have evolved from direct cyberattacks to more insidious social engineering, including fake job applications enhanced by deepfakes and AI-generated profiles. Coinbase’s CEO, Brian Armstrong, highlighted these concerns during an appearance on the Cheeky Pint podcast, as covered by The Verge, emphasizing how remote-first policies inadvertently create vulnerabilities.

Close but the North Korean angle is akin to Microsoft saying, “1,000 Russian hackers did this.” Baloney. My view is that the organized hacking operations blend smoothly with the North Korean government’s desire for free cash and the large Chinese criminal organizations operating money laundering operations from that garden spot, the Golden Triangle.

Stealing crypto is one thing. Coordinating attacks on organizations to exfiltrate high value information is a second thing. A third thing is to perform actions that meet the needs and business methods of large-scale money laundering, phishing, and financial scamming operations.

Looking at these events from the point of view of a single company, it is easy to see that cost reduction and low cost technical expertise motivated some managers, maybe those at Coinbase. But now that more information is penetrating the MBA fog that envelopes many organizations, common sense may become more popular. Management gurus and blue chip consulting firms are not proponents of common sense in my experience. Coinbase may have seen the light.

Stephen E Arnold, September 5, 2025

AI Can Be Your Food Coach… Well, Perhaps Not

September 5, 2025

Is this better or worse than putting glue on pizza? TechSpot reveals yet another severe consequence of trusting AI: “Man Develops Rare 19th-Century Psychiatric Disorder After Following ChatGPT’s Diet Advice.” Writer Rob Thubron tells us:

“The case involved a 60-year-old man who, after reading reports on the negative impact excessive amounts of sodium chloride (common table salt) can have on the body, decided to remove it from his diet. There were plenty of articles on reducing salt intake, but he wanted it removed completely. So, he asked ChatGPT for advice, which he followed. After being on his new diet for three months, the man admitted himself to hospital over claims that his neighbor was poisoning him. His symptoms included new-onset facial acne and cherry angiomas, fatigue, insomnia, excessive thirst, poor coordination, and a rash. He also expressed increasing paranoia and auditory and visual hallucinations, which, after he attempted to escape, ‘resulted in an involuntary psychiatric hold for grave disability.’”

Yikes! It was later learned ChatGPT suggested he replace table salt with sodium bromide. That resulted, unsurprisingly, in this severe case of bromism. That malady has not been common since the 1930s. Maybe ChatGPT confused the user with a spa/hot tub or an oil and gas drill. Or perhaps its medical knowledge is just a bit out of date. Either way, this sad incident illustrates what a mistake it is to rely on generative AI for important answers. This patient was not the only one here with hallucinations.

Cynthia Murrell, September 5, 2025

Supermarket Snitches: Old-Time Methods Are Back

September 5, 2025

So much for AI and fancy cyber-security systems. One UK grocery chain has found a more efficient way to deal with petty theft—pay people to rat out others. BBC reports, “Iceland Offers £1 Reward for Reporting Shoplifters.” (Not to be confused with the country, this Iceland is a British supermarket chain.) Business reporter Charlotte Edwards tells us shoplifting is a growing problem for grocery stores and pharmacies. She writes:

“Victims minister Alex Davies-Jones told BBC Radio 4’s Today programme on Monday that shoplifting had ‘got out of hand’ in the UK. … According to the Office for National Statistics, police recorded 530,643 shoplifting offences in the year to March 2025. That is a 20% increase from 444,022 in the previous year, and the highest figure since current recording practices began in 2002-03.”

Amazing what economic uncertainty will do. In response, the government plans to put thousands more police officers on neighborhood patrols by next spring. Perhaps encouraging shoppers to keep their eyes peeled will help. We learn:

“Supermarket chain Iceland will financially reward customers who report incidents of shoplifting, as part of efforts to tackle rising levels of retail theft. The firm’s executive chairman, Richard Walker, said that shoppers who alert staff to a theft in progress will receive a £1 credit on their Iceland Bonus Card. The company estimates that shoplifting costs its business around £20m each year. Mr Walker said this figure not only impacts the company’s bottom line but also limits its ability to reduce prices and reinvest in staff wages. Iceland told the BBC that the shoplifters do not necessarily need to be apprehended for customers to receive the £1 reward but will need to be reported and verified.”

How, exactly, they will be verified is left unexplained. Perhaps that is the role for advanced security systems. Totally worth it. Walker emphasizes customers should not try to apprehend shoplifters, just report them. Surely no one will get that twisted. But with one pound sterling equal to $1.35 USD, we wonder: is that enough incentive to pull the phone out of one’s pocket?

Technology is less effective than snitching.

Cynthia Murrell, September 5, 2025

Grousing Employees Can Be Fun. Credible? You Decide

September 4, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

I read “Former Employee Accuses Meta of Inflating Ad Metrics and Sidestepping Rules.” Now former employees saying things that cast aspersions on a former employer are best processed with care. I did that, and I want to share the snippets snagging my attention. I try not to think about Meta. I am finishing my monograph about Telegram, and I have to stick to my lane. But I found this write up a hoot.

The first passage I circled says:

Questions are mounting about the reliability of Meta’s advertising metrics and data practices after new claims surfaced at a London employment tribunal this week. A former Meta product manager alleged that the social media giant inflated key metrics and sidestepped strict privacy controls set by Apple, raising concerns among advertisers and regulators about transparency in the industry.

Imagine. Meta coming up at a tribunal. Does that remind anyone of the Cambridge Analytica excitement? Do you recall the rumors that fiddling with Facebook pushed Brexit over the finish line? Whatever happened to those oh-so-clever CA people?

I found this tribunal claim interesting:

… Meta bypassed Apple’s App Tracking Transparency (ATT) rules, which require user consent before tracking their activity across iPhone apps. After Apple introduced ATT in 2021, most users opted out of tracking, leading to a significant reduction in Meta’s ability to gather information for targeted advertising. Company investors were told this would trim revenues by about $10 billion in 2022.

I thought Apple had their system buttoned up. Who knew?

Did Meta have a response? Absolutely. The write up reports:

“We are actively defending these proceedings …” a Meta spokesperson told The Financial Times. “Allegations related to the integrity of our advertising practices are without merit and we have full confidence in our performance review processes.”

True or false? Well….

Stephen E Arnold, September 4, 2025

Spotify Does Messaging: Is That Good or Bad?

September 4, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

My team and I have difficulty keeping up with the messaging apps that seem to be like mating gerbils. I noted that Spotify, the semi-controversial music app, is going to add messaging. “Spotify Adds In-App Messaging Feature to Let Users Share Music and Podcasts Directly” says:

According to the company, the update is designed “to give users what they want and make those moments of connection more seamless and streamlined in the Spotify app.” Users will be able to message people they have interacted with on Spotify before, such as through Jams, Blends and Collaborative Playlists, or those who share a Family or Duo plan.

The messaging app is no Telegram. The interesting question for me is, “Will Spotify emulate Telegram’s features as Meta’s WhatsApp has?”

Telegram, despite its somewhat negative press, has found a way to monetize user clicks, supplement subscription revenue with crypto service charges, and alleged special arrangement now being adjudicated by the French judiciary.

New messaging platforms get a look from bad actors. How will Spotify police the content? Avid music people often find ways to circumvent different rules and regulations to follow their passion.

Will Spotify cooperate with regulators or will it emulate some of the Dark Web messaging outfits or Telegram, a firm with a template for making money appear when necessary?

Stephen E Arnold, September 4, 2025

Fabulous Fakes Pollute Publishing: That AI Stuff Is Fatuous

September 4, 2025

New York Times best selling author David Baldacci testified before the US Congress about regulating AI. Medical professionals are worried about false information infiltrating medical knowledge like the scandal involving Med-Gemini and an imaginary body part. It’s getting worse says ZME Science: “A Massive Fraud Ring Is Publishing Thousands of Fake Studies and the Problem is Exploding. ‘These Networks Are Essentially Criminal Organizations.’”

Bad actors in scientific publishing used to be a small group, but now it’s a big posse:

“What we are seeing is large networks of editors and authors cooperating to publish fraudulent research at scale. They are exploiting cracks in the system to launder reputations, secure funding, and climb academic ranks. This isn’t just about the occasional plagiarized paragraph or data fudged to fool reviewers. This is about a vast and resilient system that, in some cases, mimics organized crime. And it’s infiltrating the very core of science.”

Luís Amaral discovered in a study he conducted that analyzed five million papers across 70,000 scientific journals that there is a fraudulent paper mill for publishing. You’ve heard of paper mill colleges where students can buy so-called degrees. This is similar except the products are authorship slots and journal placements from artificial research and compromised editors.

Outstanding, AI champions!

This is a way for bad actors to pad their resumes and gain undeserved creditability.

Fake science has always been a problem but it’s outpacing fact-based science. It’s cheaper to produce fake science than legitimate truth. The article then waxes poetic about the need for respectability, the dangerous consequences of false science, and how the current tools aren’t enough. It’s devastating but the expected cultural shift needed to be more respectful of truth and hard facts is not equipped to deal with the new world. Thanks, AI.

Whitney Grace, September 4, 2025

Derailing Smart Software with Invisible Prompts

September 3, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.

The write up states:

Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.

The write up includes examples like these:

… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….

Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.

Stephen E Arnold, September 3, 2025

AI Words Are the Surface: The Deeper Thought Embedding Is the Problem with AI

September 3, 2025

Dino 5 18 25This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker. 

Humans are biased. Content generated by humans reflects these mental patterns. Smart software is probabilistic. So what?

Select the content to train smart software. The more broadly the content base, the greater range of biases will be baked into the Fancy Dan software. Then toss in the human developers who make decisions about thresholds, weights, and rounding. Mix in the wrapper code that does the guardrails which are created by humans with some of those biases, attitudes, and idiosyncratic mental equipment.

Then provide a system to students and people eager to get more done with less effort and what do you get? A partial and important glimpse of the consequences of about 2.5 years of AI as the next big thing are presented in “On-Screen and Now IRL: FSU Researchers Find Evidence of ChatGPT Buzzwords Turning Up in Everyday Speech.”

The write up reports:

“The changes we are seeing in spoken language are pretty remarkable, especially when compared to historical trends,” Juzek said. “What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link.”

Conjecture. That’s a weasel word. Once words are embedded they dragged a hard sided carry on with them.

The write up adds:

“Our research highlights many important ethical questions,” Galpin said. “With the ability of LLMs to influence human language comes larger questions about how model biases and misalignment, or differences in behavior in LLMs, may begin to influence human behaviors.”

As more research data become available, I project that several factoids will become points of discussion:

  1. What happens when AI outputs are weaponized for political, personal, or financial gain?
  2. How will people consuming AI outputs recognize that their vocabulary and the attendant “value baggage” is along for the life journey?
  3. What type of mental remapping can be accomplished with shaped AI output?

For now, students are happy to let AI think for them. In the future, will that warm, fuzzy feeling persist. If ignorance is bliss, I say, “Hello, happy.”

Stephen E  Arnold, September 3, 2025

Bending Reality or Creating a Question of Ownership and Responsibility for Errors

September 3, 2025

Dino 5 18 25No AI. Just a dinobaby working the old-fashioned way.

The Google has may busy digital beavers working in the superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: Ownership of substantially altered content and responsibility for errors introduced into digital content.

YouTube secretly used AI to Edit People’s Videos. The Results Could Bend Reality” reports:

In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.

The BBC ignores a couple of issues that struck me as significant if — please, note the “if” — the assertion about YouTube altering content belonging to another entity. I will address these after some more BBC goodness.

I noted this statement:

the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.

Okay, the Google digital beavers are beavering away.

I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:

“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”

What about those issues I thought about after reading the BBC’s write up:

  1. If Google’s changes (improvements, enhancements, AI additions, whatever), will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out if Google’s right or the individual content creator is the copyright holder?
  2. When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
  3. What if a content creator hits a home run and Google’s AI “learns” then outputs content via its assorted AI processes? Will Google be able to deplatform the original creator and just use it as a way to make money without paying the home-run hitting YouTube creator?

Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe one reason is that BBC doesn’t think these types of thoughts. The Google, based on my experience, is indeed thinking these types of “what if” talks in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.

Stephen E Arnold, September 3, 2025

Deadbots. Many Use Cases, Including Advertising

September 2, 2025

Dino 5 18 25_thumbNo AI. Just a dinobaby working the old-fashioned way.

I like the idea of deadbots, a concept explained by the ever-authoritative NPR in “AI Deadbots Are Persuasive — and Researchers Say, They’re Primed for Monetization.” The write up reports in what I imagine as a resonant, somewhat breathy voice:

AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.

Here’s a passage I thought was interesting:

Researchers are now warning that commercial use is the next frontier for deadbots. “Of course it will be monetized,” said Lindenwood University AI researcher James Hutson. Hutson co-authored several studies about deadbots, including one exploring the ethics of using AI to reanimate the dead. Hutson’s work, along with other recent studies such as one from Cambridge University, which explores the likelihood of companies using deadbots to advertise products to users, point to the potential harms of such uses. “The problem is if it is perceived as exploitative, right?” Hutson said.

Not surprisingly, some sticks in the mud see a downside to deadbots:

Quinn [a wizard a Authetic Interactions Inc.] said companies are going to try to make as much money out of AI avatars of both the dead and the living as possible, and he acknowledges there could be some bad actors. “Companies are already testing things out internally for these use cases,” Quinn said, with reference to such uses cases as endorsements featuring living celebrities created with generative AI that people can interactive with. “We just haven’t seen a lot of the implementations yet.”

I wonder if any philosophical types will consider how an interaction with a dead person’s avatar can be an “authetic interaction.”

I started thinking of deadbots I would enjoy coming to life on my digital devices; for example:

  • My first boss at a blue chip consulting firm who encouraged rumors that his previous wives accidently met with boating accidents
  • My high school English teacher who took me to the assistant principal’s office for writing a poem about the spirit of nature who looked to me like a Playboy bunny
  • The union steward who told me that I was working too fast and making other workers look like they were not working hard
  • The airline professional who told me our flight would be delayed when a passenger died during push back from the gate. (The fellow was sitting next to me. Airport food did it I think.)
  • The owner of an enterprise search company who insisted, “Our enterprise information retrieval puts all your company’s information at an employee’s fingertips.”

You may have other ideas for deadbots. How would you monetize a deadbot, Google- and Meta-type companies? Will Hollywood do deadbot motion pictures? (I know the answer to that question.)

Stephen E Arnold, September 2, 2025

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta