Apple Management in China: Apple Intelligence in Action

March 31, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read an article that does not resonate with me. This dinobaby is no Apple fan, nor am I thrilled with Microslop or the Linux folks. That MVS/TSO is okay though.

The article in question is “Apple Intelligence Rolling Out Now in China per User Reports [U: Pulled].” Okay. I think this means that Apple’s intelligence leadership made the very late and fluxion-infused smart software available in China. I think the weird [U: Pulled] means that someone sent an email. Then someone else sent a text message. The chain ended with the intelligence leadership blocking the service… from China.


Thanks, Venice.ai. Good enough. I was delighted that my prompt did not violate your independently elevating guard rails. If you tracked my prompts over time, you would see that I stick within some very narrow illustrative lanes. But that’s work, and the goal is to use AI to do work so humans can enjoy their decider perks.

That seems okay to me. Big US company. Non-US country upon which Apple’s vaunted “manufacturing capability” pivots. Very late and quite opaque smart software pops up and then disappears. Poof. Magic.

Does this raise any questions about organizing the animals in the circus train?

As Warner Wolf used to say when he was a TV star, “Let’s go to the videotape.”

Apple Intelligence’s China launch was a mistake and it has since been pulled. Apple is apparently still awaiting regulatory approval despite the features having been ready for months.

What?

The cited story says, “Apple has yet to make an official announcement about the expansion of Apple Intelligence. So it’s always possible this rollout was accidental or a test.”

What?

I am curious about the way decisions are made and unmade at Apple. I am curious about why the communications chains within Apple worked or did not work. I am curious about who alerted someone that the much, much delayed smart software stumble-bumbled from vaporous service to something much worse: management miasma.

As a dinobaby, I wonder if Tim Apple asks himself, “Why didn’t I just say, ‘Hey, this AI stuff is a half-baked tuna casserole. We pass.’”

Yep, too late, Mr. Apple. Look at that through the interface that obscures information.

Stephen E Arnold, March 31, 2026

Why the World Loves US Big Tech Outfits: The Meta Example

March 31, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

In the annals of big tech, one more lawsuit is likely to be irrelevant. Even with LexisNexis-type systems, the decision will probably be as difficult to locate as the 1868 Erie War involving that charming fellow Corny Vanderbilt. Be that as it may, I noted the BBC article “Meta Told to Pay $375m for Misleading Users over Child Safety.” The write up reports as actual factual:

A court in New Mexico has ordered Meta to pay $375m (£279m) for misleading users over the safety of its platforms for children. A jury found that Meta, which owns Facebook, Instagram and WhatsApp, was liable for the way in which its platforms endangered children and exposed them to sexually explicit material and contact with sexual predators.

How nitroglycerin-like is an allegation about kiddies and explicit material? Based on what I hear at the law enforcement conferences which invite me to speak, this is a topic that catches investigator attention. A couple of years ago I presented an analysis of a business person’s online service. This individual was a member of the Chamber of Commerce. He operated a data service for businesses, and he operated what I call a “ghost service” for individuals with what I call less than salubrious interests. Not only did a couple of investigators want to speak with me after my talk, I had a follow-up conversation with a Federal investigator in Detroit and a phone call from an investigator who wanted me to send my presentation to a government-connected organization focused on the types of content apparently referenced in this litigation. Usually someone says, “Good presentation.” Once in a while, I will be asked to join a group of attendees in the bar to talk informally (of course).


Thanks, MidJourney. Believe it or not, Venice.ai refused to generate the image because I requested adult content. Make sense? Sure, because good enough is indeed excellence.

I am not interested in the alleged fine. I know there will be appeals. Big companies have legal resources and often deal with setbacks by playing the long game or just ignoring the legal outcome. That’s what the losing company’s legal eagles beat their wings and squawk for. Yeah, that and billing. I don’t want to forget that minor detail.

I want to step back and ask this question, “What is the impact of the charges leveled at a US big tech firm in other countries?” Based on my personal experience of living in another country and working in a handful of these nations, I would offer these observations:

  1. Although heinous, this particular case provides a case study of a big tech outfit in the US doing exactly what it incentivizes employees to do. I want to be clear: The job descriptions and the incentive plans for workers allow certain steps to be taken. Thus, the behavior is emergent. Take away the lingo of the job description and the metrics for a bonus or promotion, and the worker behavior changes. With those direct incentives in place, certain behaviors are almost guaranteed to produce the type of issues identified in this litigation.
  2. The managerial and leadership set up makes it possible for a senior manager to say, “Whoa, I did not know this situation was taking place.” That is probably true. The job description, the incentives, and the compounds in the Petri dish blossom with behaviors others may find egregious. This means the leadership is telling a “truth” so the firm’s lawyers can do what lawyers do. (See my comment about billing.)
  3. Observers outside the US wonder how a company can allow certain actions to occur. Over time, if the US big tech companies demonstrate similar product and service manifestations, fear, frustration, or distrust becomes linked with the concept of a US big tech firm. Just as a whiff of a spouse’s perfume can evoke memories or emotions, these trials perform the same function with regard to US big technology. Toss in AI capabilities and big tech becomes BAIT (big AI tech). Such associations are not helpful for American firms’ image.

The BBC article adds:

Meta is also involved in a separate trial in Los Angeles, in which a young woman claims that she became addicted to platforms like Instagram and YouTube, owned by Google, as a child because of how they are intentionally designed. There are thousands of similar lawsuits winding their way through the US courts.

That word “thousands” is shocking. The implications of the business actions of US companies could have knock-on effects that the companies themselves will not recognize. At some point, fines and talk could be judged ineffective. That’s why it is helpful to look at how countries like Russia are making a tactical decision to kill Telegram. Could that happen in other countries? That’s a question worth considering in my opinion.

Stephen E Arnold, March 31, 2026

Old and Fired? Suck It Up, Buttercup

March 26, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

A 54-year-old fellow claims age discrimination. The senior director of monetization analytics wrote an article that makes clear he believes the estimable Meta dumped old people. Full disclosure: I am 82, and I think a person who is a bit more than a quarter century younger than I am is not what I would call old. I completed college at a one-horser in the Midwest and managed to fool enough people that I deserved a graduate degree. I had been working in “real” jobs with a secretary and staff (believe it or not) before this Franchet fellow was conceived.


Thanks, Venice.ai. Sort of bad, but good old Google Gemini fixed up your output. Is that why having just one AI system is a really lame idea? I think it is.

The same whimpers were emitted when IBM (another outstanding company) identified employees who bumped up health care liabilities, wanted vacations with pay, and expected retirement accounts. Why keep these dodderers around when cheap and good-enough professionals were available in the idyllic city of Bangalore, India? Some of these people who were allowed to find their future elsewhere posted in social media about their job loss. How did that work out? It didn’t. The terminations were not rescinded. That hot desk in New Jersey went to a contract worker somewhere, forcing the manager to obtain a world clock to schedule a video conference instead of an F2F or face-to-face chat.

“Meta Unfairly Targeted Older Workers During Layoffs Last Year, Lawsuit Claims” explains:

“Employees 40 and older were 1.5 times as likely to be included in the layoffs than employees under 40, and employees 50 and older were 2.5 times as likely to be terminated than employees under 40,” the lawsuit reads, allegedly citing data provided by the company to laid-off workers.
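Those multipliers are relative risks: the layoff rate for the older cohort divided by the rate for the younger cohort. Here is a minimal sketch of that arithmetic in Python, using hypothetical headcounts because the cited article does not publish the underlying table:

    # Relative risk of layoff by age cohort. The headcounts are
    # hypothetical, for illustration only; the lawsuit's actual
    # figures are not given in the cited article.
    laid_off_40_plus, total_40_plus = 150, 1000     # 15% laid off
    laid_off_under_40, total_under_40 = 100, 1000   # 10% laid off

    rate_40_plus = laid_off_40_plus / total_40_plus
    rate_under_40 = laid_off_under_40 / total_under_40

    print(f"Relative risk for 40+: {rate_40_plus / rate_under_40:.1f}x")  # 1.5x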

Am I, an authentic dinobaby, surprised? You have to be kidding me with that stupid question. Let me explain why Silicon Valley-type outfits and the BAIT outfits (big AI tech firms) do not want people who appear, to their leadership, to be old timers. I will give three reasons and make them really simple and clear:

  1. Cost
  2. Cost
  3. Cost

Now there may be other issues; for example, a dinobaby like myself listens, questions, and then, when warranted, pushes back. How many zippy computer scientists under the age of 23 want that? Answer: Zero. How many MBAs want to have their cherished boilerplate game plans challenged? Answer: Zero. How many Peter Principle promotees want to be reminded they are making a bad decision? Answer: Zero.

I find the idea that Meta is culling old cattle believable and part of the playbook. Many of these outfits’ senior managers struggle with imposter syndrome. These individuals sense that something is amiss. Therefore, a wide range of coping mechanisms come into play. Examples range from forming a squishy bond with another humanoid to buying a vehicle with a big engine, from ignoring physical exercise to becoming a gym rat (albeit at a gym with chrome machines and odor-free plastic on the weight bench). I would include the odd cruise-ship-scale yacht and trophy wife or companion. Yes, these icons of American business have to deal with those inner anxieties. (I will not mention drugs, Epstein Epstein Epstein, and causing a discarded companion to attempt suicide. No, I definitely will not.)

The terminated Franchet is the source of this passage in the cited article:

Six months before his termination, in August 2024, Franchet received an “At or Above Expectations” performance rating. Just a few months later, Meta introduced a new “lowest performer” category. The lawsuit claims the review process used ahead of the layoffs was less rigorous than usual. During that process, Franchet received a “Met Most Expectations” performance rating and was classified as one of the company’s lowest performers.

So the personnel procedure did not work. How many systems and policies regarding people work at Meta? I don’t know the answer, but there is the occasional suicide attributed to the firm’s “bringing everyone together” system. I have heard that law enforcement in some cities checks Facebook Marketplace for that area if there is a notable robbery. One officer told me a couple of years ago, “Who needs a fence? There’s Facebook Marketplace.” I thought this was an interesting observation.

Net net: Old people belong in the warehouses for the soon-to-be unliving. Get used to it. Worrying will take years off your life. Be a happy dinobaby and don’t litigate. That reduces one’s chances for a consulting gig. Former employees who take a big company to court may get a “lowest performer” hashtag.

Stephen E Arnold, March 26, 2026

Smart Software: Caution Advised

March 26, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

A Washington State University (Spokane, Washington) professor reported some information about ChatGPT’s accuracy. Science Daily summarized the professor’s research in “Study Finds ChatGPT Gets Science Wrong More Often Than You Think.” The “you” troubled me because I know that one must have an answer or sufficient knowledge about a topic before creating a prompt when factual information is required. Therefore, the “you” might be considered overly broad.


Sam, who works in the field of artificial intelligence, is outraged that his friends will not count his horseshoe on the roof as a point. His three friends find his arguments better than a Jimmy Kimmel quip. Thanks, Venice.ai. Good enough, but why is Sam fat and the other three more svelte?

What did the good professor set out to learn? The WSU luminary wanted to test the ChatGPT large language model. The approach was to create scientific questions and then prompt the model. ChatGPT had to output whether the hypotheses were “true” or “false.” I am using quotes because I have learned, as I marched toward my present status of dinobaby, that truth and falsity are mercurial.

Let’s jump to the findings. Please read the full article in Science Daily.

I noted this statement:

In total, the team evaluated more than 700 hypotheses and asked the same question 10 times for each one to measure consistency. When the experiment was first conducted in 2024, ChatGPT answered correctly 76.5% of the time. In a follow-up test in 2025, accuracy rose slightly to 80%. However, once the researchers adjusted for random guessing, the results looked far less impressive. The AI performed only about 60% better than chance, a level closer to a low D than to strong reliability.

But the killer comment, in my opinion, was this one:

The system had the most difficulty identifying false statements, correctly labeling them only 16.4% of the time. It also showed notable inconsistency. Even when given the exact same prompt 10 times, ChatGPT produced consistent answers only about 73% of the time.

Knowing what’s wrong strikes me as an important mental or knowledge-value operation. The score range for figuring out what might be fake ran between 16 percent and 73 percent. But what about the other 84 percent or 27 percent? How does that work out for decisions that involve medical treatments, stress analyses for alternative nuclear reactors, or smart weapons? I know the answer, and most people who interact with me don’t like how I respond to this question. But here it is: Today’s smart software is essentially close enough for horseshoes. Stated another way, one might want to get a chimpanzee to throw darts at a target with “answers” written on Post-it Notes.
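For readers who want the arithmetic behind “adjusted for random guessing”: on a balanced true/false task, blind guessing scores about 50 percent, so the standard correction rescales raw accuracy against that floor. A minimal sketch in Python, my reconstruction rather than the researchers’ published method:

    def chance_adjusted(accuracy: float, chance: float = 0.5) -> float:
        # Rescale raw accuracy so 0.0 means pure guessing and 1.0 means
        # perfect performance. Assumes balanced true/false items; this
        # is my reconstruction, not the WSU team's code.
        return (accuracy - chance) / (1.0 - chance)

    print(chance_adjusted(0.765))  # 2024 run: 0.53 above chance
    print(chance_adjusted(0.80))   # 2025 run: 0.60, the "60% better than chance" figure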

The Science Daily article pointed out:

The findings … highlight the importance of using caution when relying on AI for important decisions, especially those that require nuanced or complex reasoning. While generative AI can produce smooth, convincing language, it does not yet demonstrate the same level of conceptual understanding. According to [Professor] Cicek, these results suggest that artificial general intelligence capable of truly “thinking” may still be further away than many expect. “Current AI tools don’t understand the world the way we do — they don’t have a ‘brain,’” Cicek said. “They just memorize, and they can give you some insight, but they don’t understand what they’re talking about.”

Several observations:

  1. Studies like the one from the Washington State University professor suggest that smart software makes mistakes…frequently. The so-called improvements in news releases and marketing collateral are not in line with smart software’s actual fact functions.
  2. The need for a next big thing has created a situation which disseminates a fictional description of what developers believe their probabilistic word prediction systems can deliver. Belief is good. Failure to recognize and articulate limitations is bad. We are in a bad information space in my opinion.
  3. The money pumped into smart software is notable. Furthermore, the relatively small number of organizations investing tens of billions of dollars want to “own” the market. The idea would get a student in an MBA program a high mark and maybe a grant. In real life, the mismatch between what is marketed and what the systems can do in a fact-centric setting is wide.

Net net: One hopes that existing smart software can be juiced up with additional methods. Until then, keep that accuracy range in mind: 16 to 73 percent. Getting to 100 percent matters to some. On the other hand, most users of smart software don’t know what’s fact or fiction in the smart software output. Furthermore, good enough is the new norm for excellence for many people and organizations.

Stephen E Arnold, March 26, 2026

Microsoft Saddles Up Like Don Quixot-AI

March 25, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Two stories caught my attention about America’s answer to l’Académie française. You know that’s Microsoft, the outfit trying to eliminate the word “microslop” from global speech. Yeah, good luck with that.


Two Sloppies look at the results of their smart software efforts. Thanks, Venice.ai. Good enough and you didn’t tell me my prompts violated your sense of decency. I know I produce really controversial prompts. But I feel safe knowing you are watching.

The first write up about Microslop is “Microsoft Realizes It’s Epically Screwed Up Windows 11 as Users Rage at Copilot AI Crammed Everywhere.” Obviously Futurism has decided to toe the linguistic line. The headline suggests that one of those New Coke and Bud Light moments has arrived. A change to a beloved and absolutely wonderful product has sparked disgruntlement.

The write up reports:

Microsoft seems to have finally noticed that its house is on fire, particularly following the heavy-handed embrace of AI garnering it the widely used pejorative of “Microslop.” Unsubstantiated rumors over Windows 12 embracing AI even more triggered a massive uproar earlier this month, once again highlighting widespread disillusionment.

A Microslop leadership team member allegedly said:

…we are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets and Notepad.

An ASCII editor without smart software. Are you kidding me? Apparently not. Futurism believes that Microslop has realized after billions of dollars and years of hoo-hah that the company’s AI craziness is the digital equivalent of the Jaguar rebranding. (Hey, will the new Jags have a Windows AI agent on board? Just a thought for Microsoft leadership.)

So this is an apparent retrenchment, mea culpa, and crawfish bundled into one PR-type comment.

But there’s more in a second article. In corporate America, someone has to take the fall for a big money failure, and it is definitely not the Big Dog of Softie leadership. No, siree.

The article “Satya Nadella Paid $650M to Recruit His AI Chief. 2 Years Later, He’s Quietly Sidelining Him — And the Numbers Behind the Move Are Brutal” says:

Microsoft [shouldn’t this have been Microslop?] CEO Satya Nadella announced a sweeping reorganization of the company’s AI leadership on March 17, unifying its consumer and enterprise Copilot teams under a single executive and quietly sidelining Mustafa Suleyman — the former DeepMind co-founder he paid $650 million to bring aboard just two years ago.

I don’t want to beat a dead strategy. However, several observations appear to be warranted:

First, how can large companies think up, plan, deploy, and then flounder in the midst of obvious customer outcry? I wish I had an answer. The fact that these missteps occur is interesting because it demonstrates that [a] awareness of what will fly and what won’t is short-circuited and [b] significant time and money go down the drain before leadership takes corrective action. Remarkable.

Second, smart software in 2022 was a clever marketing stunt to put Google on its back paw. That worked. The movement from marketing to revenue did not happen. Smart software is, from my point of view, a utility like search and retrieval. It is a Don Quixote technology. The marketing of the attack on a windmill is okay. Trying to make marketing match up with reality is difficult and sometimes impossible. Case in point: AI in Notepad. What were the Sloppies thinking? Notepad!

Third, Microslop in my opinion is the first of the Big AI Tech (BAIT) outfits to do the good old switcheroo. Others will follow when they learn [a] that smart software creates problems of sufficient complexity that humans can’t solve them and [b] that the mistakes spark kinetic reactions. These will be coming because Microslop powers a number of nations’ computer systems. With AI baked in, the potential energy is going to be released and not in a controlled and planned way. There will be booms.

Finally, a reorganization makes sense for quarterly investor calls and news releases. In reality, the reorg just underscores how poorly conceived and implemented the Microslop grand plan was. How do I know? The word “microslop” came into being for a reason.

Net net: I bet those cheap Apple Neo gizmos will sell because Microslop failed to anticipate the knock-on reaction from their Copilot-in-Notepad thing. Notepad! Carpetland leadership deserves a bonus for these bold decisions. Sancho, saddle up.

Stephen E Arnold, March 25, 2026

AI and Hitting a Math Wall

March 25, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

The average AI chatbot user realizes that the technology has its limits. An intelligent user (who double-checks their facts) knows that the bots are prone to hallucinations and takes everything they dish out with a binary grain of salt. Gizmodo explains the limits of AI bots and how the technology is about to hit a computational brick wall: “AI Agents Are Poised to Hit A Mathematical Wall, Study Finds.”

AI bots are built on LLMs with the belief that the models will grow infinitely, gain more knowledge, and become more human in their autonomy. The father-and-son research team of Vishal Sikka and Varin Sikka wrote a paper (hopefully without AI’s help) about the limits of AI. Apparently LLMs can’t do agentic and computational tasks beyond a certain complexity. In other words, AI may face computational limits. Thus, mathy innovation is going to be needed.

The paper explains that AI systems are programmed to complete tasks only within the parameters of the LLM. LLMs have limited processing capabilities and must operate within their bands of knowledge. When tasks go beyond those parameters, more complex models are needed. The LLMs can’t extrapolate the required information, so they either fail at the tasks or return incorrect information.
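The Sikkas’ argument is a formal one about what LLM-based agents can compute at all. A looser but related intuition is that even small per-step error rates compound across a long agentic chain. A toy sketch of that compounding in Python, a generic reliability illustration rather than the paper’s method:

    # Toy model: if each step of an agentic task succeeds independently
    # with probability p, an n-step task finishes unaided with
    # probability p ** n. Generic illustration, not the Sikka paper's
    # formal complexity result.
    def chain_success(p: float, n: int) -> float:
        return p ** n

    for n in (10, 50, 100):
        print(f"{n:>3} steps at 99% per step: {chain_success(0.99, n):.0%}")
    # 10 steps -> 90%, 50 steps -> 61%, 100 steps -> 37%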

AI, therefore, needs to be helped out by humans who come up with new methods and techniques:

"The basic premise of the research really pours some cold water on the idea that agentic AI, models that are able to be given multi-step tasks that are completed completely autonomously without human supervision, will be the vehicle for achieving artificial general intelligence. That’s not to say that the technology doesn’t have a function or won’t improve, but it does place a much lower ceiling on what is possible than what AI companies would like to acknowledge when giving a “sky is the limit” pitch.”

Other experts have reported similar results, and the average user can tell you the same thing. Can AI replace humans? No, but the MBAs and bean counters have calculated that smart software is cheaper and faster. Plus, AI does not need health care, retirement contributions, or vacations.

Whitney Grace, March 25, 2026

Palantir Technologies: Nicked by Sharp Marketing and Metaphors

March 24, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I learned about an article by reading a March 13, 2026 report titled “It Beggars Belief: MoD Sources Warn Palantir’s Role at Heart of Government Is Threat to UK’s Security.” The write up says:

Palantir, the US AI surveillance and security firm with hundreds of millions of pounds in UK government contracts, poses “a national security threat to the UK”, according to two anonymous high-level sources working with the Ministry of Defence.

My problem is that the sources are anonymous. The UK has struggled with certain types of software. One example comes to mind: the British Post Office. Another is the National Health Service’s arm-wrestling with software. Plus, I am not familiar with the online publication The Nerve.


Thanks, Venice.ai. Good enough.

One of the anonymous sources in The Nerve’s write up allegedly said:

“Allowing a single entity, foreign or domestic, to have such far-reaching, pervasive access is inherently dangerous. How our national cybersecurity center has allowed this beggars belief.”

Jim Killock, executive director of the Open Rights Group, allegedly told The Nerve:

If the US has detailed insights across everything that the MoD does, then in the event of us being recalcitrant about helping the US bomb some country, they can remind us – subtly or unsubtly – what they might do in retaliation. The Ministry of Defence or the prime minister must have some inkling of the risks, but now we find ourselves hitched to an erratic, dangerous, megalomaniac power in denial of its own limits. If Palantir knows everything, it just gives them huge extra leverage.

What’s interesting is that a personage using the alias sschueller provided a pointer to a February x, 2026, article in the Swiss online publication Republik. Its article “How Tenaciously Palantir Courted Switzerland” provided some additional color about Palantir Technologies.

Here are some quotes from the Republik write up. Are they accurate? I have no idea. I find them interesting, however.

“Palantir is here to disrupt. (…) and, when it’s necessary, to scare our enemies and occasionally kill them.”

and

“The rise of the West has not been made possible by the superiority of its ideas, values, or religion, but rather by its superiority in the use of organized violence.”

and

CTO Shyam Sankar said that Palantir products help “optimize the kill chain.”

I find Palantir somewhat amusing. The company named itself after a seeing stone, a fictional creation in J.R.R. Tolkien’s fantasy novel The Lord of the Rings. The palantíri are not likely to save whales and snail darters.

Several observations seem to be warranted:

  1. Palantir’s PR is either doing its job or failing in its effort to present the firm in a positive manner
  2. Specialized software companies may find their marketing methods turn off certain commercial and government customers
  3. The company seems to engender fear, not just concern. (Is that a reason why most specialized software companies walk softly and market without becoming poster kids for questionable practices, like NSO Group?)

Net net: My view is that some US technology companies are feeding negative perceptions about American business, technologies, and trustworthiness. But I am a dinobaby in rural Kentucky. What do I know about American firms selling to non-US entities? Nothing. Absolutely nothing. Why worry?

Stephen E Arnold, March 24, 2026

Amazon: Employee Terminations Bump into PR Spin

March 23, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I pointed out a couple of times that Amazon has lost control of its messaging. My point pivoted on the service outages allegedly caused by a human, then caused by an AI, and then caused by some mysterious combination of humans and smart software. Somewhere in the messaging is a statement that may be true.


Thanks, Venice.ai, good enough. Would one of your corporate users replace a human with your system? Okay, I understand. You want to talk to your PR department and a company attorney. Got it.

One of the interesting aspects of the online bookstore is that it now looks more and more like the love child of eBay and Google or of Etsy and Microsoft. It is not a basic online bookstore any longer. Amazon seems to be struggling to cope with its business and online systems. In theory the company can run grocery stores and third-party resellers who sort of tell the truth about their products, but the reality is often different.

I keep in my office the female undergarment sent to me when I ordered a Ryzen 5950 CPU several years ago. I love to tell the story about that mix-up. I once had a photo of one of my engineers trying to stuff the bright red undergarment into the CPU socket on a motherboard. That was a hoot.

Today I read “AWS CEO Explains 3 Reasons AI Can’t Replace Junior Devs.” When I clicked on the write up, I knew that Amazon had in the last 12 months allowed about 30,000 of its employees to find their future elsewhere. I knew about the two-pizza teams favored by the technical professionals. I knew about the wild pricing algorithms that deliver surprises, not predictable invoices that make bean counters smile. Maybe, like some AI systems, I was hallucinating. I am an 82-year-old dinobaby. Anything is, therefore, possible.

What are the three reasons that AI cannot absolutely never ever replace junior developers? Here they are, paraphrased for brevity:

  1. Cheap, young technical people are into AI.
  2. Don’t fire old, experienced dinobabies.
  3. Firing cheap programmers ruptures the employment “pipeline.” Young coders can grow up to be old coders, assuming that AI does not replace everyone in the pipeline.

Do I believe this assurance? Well, sort of. Here’s what I think this statement implies. First, hiring a bunch of cheap, junior developers allows “leadership” to pick the one or two to keep. The rest of the litter is donated to the animal shelter type outfits. The basic idea is that the best and brightest have to prove their worth to “leadership.”

Second, I think that pushing out the dinobabies as IBM-type firms have done for decades reduces the monetary impact of health care, retirement “matching,” and vacations which get longer when senior employees hang in. Bean counters look at numbers; they don’t look at the people underneath the numbers unless they have to offer a brief glance and reflexive smile.

Third, humans are needed because AI gets things wrong. If there are no humans, can one trust AI to find and remediate the error? A human, with an old-fashioned approach to figuring out puzzles, can often locate an issue and fix it or tell a smart AI what the error is and prompt the system to make a quite specific fix. But someday, according to the money-spending, future-inventing big AI tech outfits, their systems will do everything. One human can orchestrate smart software to run a very large company. One can’t get fired if one doesn’t get hired. Therefore, problem solved.

What’s Amazon’s game plan? I think this article featuring the AWS CEO is an example of Amazon trying to get control of its narrative. Will it work? Sure, as long as the 30,000 RIFed Amazonians don’t talk, blog, or make TikTok-type videos. If the system stays online, there is no problem. The Bezos bulldozer can get back to serious business like knocking down corn stalks to develop land for a new data center. Tip: Harden the structures so missiles and drones can’t easily knock them offline.

Stephen E Arnold, March 23, 2026

Incentivizing Cheating with Smart Software

March 20, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

One of my children attended a nifty university which operated on an honor system. This child grew up with an Anderson Jacobson terminal with bunny rabbit ears in the kitchen. Gee, didn’t every family in the 1970s have this online device? When he joined a fraternity, he was enshrined on fake currency issued by “brothers” to commemorate his value to his fellow members. Here is an image of that artifact:


One of the last surviving commemorative faux bank notes issued for my son’s online expertise.

The question one might ask is, “Why?”

The answer is that when my son went to the nifty university, I equipped him with the tools of my profession: online wizardry. He had a computer, a high-speed modem for that time, and online accounts for Dialog Information Services, European Space Agency DataStar, Delphi, and Systems Development Corporation. (You remember that outfit and Dr. Carlos Cuadra, don’t you?) The system — clunky by today’s standards — was magic. A fraternity brother would ask Erik, “Can you get me information for the 10-page paper I have to turn in Monday?” Erik replied, “Yep. What’s the topic?” His fraternity brother said something like, “I have to explain the role of the price system and knowledge in market coordination.” Erik fired up his computer, logged into Dialog, entered File 15, input the search statement, and printed off the citations and 150-word abstracts. Time elapsed was five minutes. His fraternity brother was grateful. Erik’s magical ability diffused. He was a life saver. Hence, the commemorative green faux bank note.

Was this cheating? For those unfamiliar with online information in the early 1980s, it was not cheating. Online access was magic, just like AI today. At the university, I am fairly confident, some professors and administrators would have been horrified and would have scheduled faculty meetings to discuss this technological assault on academia, even though the university research librarian had access to these same systems. My point is that cheating is relative.

From my point of view, my son was using a system for which I had been fortunate enough to help create some of the digital information he accessed. For my son, online was no big deal, and it was not much different from watching a weird but amusing script display a cookie monster moving across his screen. Based on what I have heard from the fraternity brothers I have been fortunate to meet, several expressed their gratitude to me for setting up my son to help these bright sparks graduate with a knowledge of online access. I want to add that none of those whom I met is a loser or felon as far as I know.

Why am I recounting a decades-old anecdote? Answer: I read “We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI.” The main idea of the essay is that AI has had an unintended consequence. Students using AI will become stupid, and they will be taught to write in a stupid way so no one thinks the author is using AI. Yeah, believe it or not.

The write up cites an academic who is not too keen on smart software. The essay says:

… the answer is to stop treating AI as a policing problem and start treating it as an educational one. Teach students how to write. Teach them how to think critically about AI tools. Teach them when those tools are helpful, when they’re harmful, and when they’re a crutch. And for the love of all that is good, stop deploying detection tools that punish good writers and push everyone toward a bland, algorithmic mean. We are, quite literally, limiting our students’ writing to satisfy a machine that can’t tell the difference.

My reaction? Cheating or a tool? Stupid or successful tool users? If I had a child today, his or her access device would have multiple AI tools installed by me. The trick, of course, is to show, discuss, and guide.

Robots should be so lucky. They learn by violating copyright and by invisible data sucking. Humans do the interaction thing.

Stephen E Arnold, March 20, 2026

From IBM Watson Health to Today: Same Footpath, Same Forest

March 20, 2026

Are you familiar with the Dunning-Kruger Effect? It occurs when people with limited knowledge are overly confident in their competence. AI could be creating an entire healthcare industry centered on the effect, says Healthcare.Digital in “Do We Have A Dunning Kruger Effect Problem In Healthcare AI?” This sums up how AI is transforming healthcare:

“Instead, empirical evidence suggests that AI acts as an epistemic distortion field, generating a universal uplift in confidence that frequently outruns actual improvements in performance. This analysis explores the depth of this “Dunning-Kruger problem” in healthcare AI, examining how the illusion of competence, the reversal of traditional expertise gradients, and the opacity of “black box” systems threaten to undermine the foundations of patient safety and professional accountability.”

Without all the flowery language, that quote says AI is threatening the healthcare fundamentals of holding providers accountable and keeping patients safe. Why is this happening? AI is dumbing down medical science. Instead of being required to stay competent, doctors are getting lazy and allowing AI to do all the heavy lifting.

An excellent case study is IBM Watson interpreting oncology results. Watson failed miserably because it was trained on fake cases, and the results were:

“The recommendations provided by Watson were essentially mirrors of the subjective treatment preferences of a single institution, making them geographically inappropriate and often unsafe in other clinical contexts.”

As I said, same footpath, same dense forest.

Whitney Grace, March 20, 2026
