AI: Continuous Degradation
December 9, 2025
Many folks are unhappy with the flood of AI “tools” popping up unbidden. For example, writer Raghav Sethi at Make Use Of laments, “I’m Drowning in AI Features I Never Asked For and I Absolutely Hate It.” At first, Sethi was excited about the technology. Now, though, he just wishes the AI creep would stop. He writes:
“Somewhere along the way, tech companies forgot what made their products great in the first place. Every update now seems to revolve around AI, even if it means breaking what already worked. The focus isn’t on refining the experience anymore; it’s about finding new places to wedge in an AI assistant, a chatbot, or some vaguely ‘smart’ feature that adds little value to the people actually using it.”
Gemini is the author’s first example: He found it slower and less useful than the old Google Assistant, to which he returned. Never impressed by Apple’s Siri, he found Apple Intelligence made it even less useful. As for Microsoft, he is annoyed it wedges Copilot into Windows, every 365 app, and even the lock screen. Rather than a helpful tool, it is a constant distraction. Smaller firms also embrace the unfortunate trend. The maker of Sethi’s favorite browser, Arc, released its AI-based successor, Dia. He asserts it “lost everything that made the original special.” He summarizes:
“At this point, AI isn’t even about improving products anymore. It’s a marketing checkbox companies use to convince shareholders they’re staying ahead in this artificial race. Whether it’s a feature nobody asked for or a chatbot no one uses, it’s all about being able to say ‘we have AI too.’ That constant push for relevance is exactly what’s ruining the products that used to feel polished and well-thought-out.”
And it does not stop with products, the post notes. It is also ruining “social” media. Sethi is more inclined to believe the dead Internet theory than he used to be. From Instagram to Reddit to X, platforms are filled with AI-generated, SEO-optimized drivel designed to make someone somewhere an easy buck. What used to connect us to other humans is now a colossal waste of time. Even Google Search, formerly a reliable way to find good information, now leads results with a confident AI summary that is often wrong.
The write-up goes on to remind us that LLMs are built on the stolen work of human creators and that the technology is sopping up our data to build comprehensive profiles on us all. Both excellent points. (For anyone wishing to use AI without it reporting back to its corporate overlords, he points to this article on how to run an LLM on one’s own computer. The endeavor does require some beefy hardware, however.)
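For the curious, the local option is less exotic than it sounds. Below is a minimal sketch using the Ollama Python client; the tool choice, model tag, and prompt are my illustration, not the linked article’s recipe:

```python
# Minimal local-LLM sketch. Assumes Ollama is installed and running
# and a model has been pulled (e.g., `ollama pull llama3`). The tool
# and model choices are illustrative, not the article's.
import ollama  # pip install ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize the dead Internet theory."}],
)
print(response["message"]["content"])  # nothing leaves your machine
```

The beefy-hardware caveat is real: model weights alone can run to tens of gigabytes, and generation without a decent GPU is slow.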
Sethi concludes with the wish companies would reconsider their rush to inject AI everywhere and focus on what actually makes their products work well for the user. One can hope.
Cynthia Murrell, December 9, 2025
Apple Misses the AI Boat Again
December 4, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Apple and Telegram have a characteristic in common: Neither recognized the AI boomlet that began in 2020 or so. Apple was thinking about Granny scarves that could hold an iPhone and working out ways to cope with its dependence on Chinese manufacturing. Telegram was struggling with the US legal system and trying to create a programming language that a mere human could use to code a distributed application.
Apple’s ship has sailed, and it may dock at Google’s Gemini private island or it could decide to purchase an isolated chunk of real estate and build its de-perplexing AI system at that location.

Thanks, MidJourney. Good enough.
I thought about missing a boat or a train. The reason? I read “Apple AI Chief John Giannandrea Retiring After Siri Delays.” I simply don’t know who has been responsible for Apple AI. Siri did not work when I looked at it on my wife’s iPhone many years ago. Apparently it doesn’t work today. Could that be a factor in the leadership changes at the Tim Apple outfit?
The write up states:
Giannandrea will serve as an advisor between now and 2026, with former Microsoft AI researcher Amar Subramanya set to take over as vice president of AI. Subramanya will report to Apple engineering chief Craig Federighi, and will lead Apple Foundation Models, ML research, and AI Safety and Evaluation. Subramanya was previously corporate vice president of AI at Microsoft, and before that, he spent 16 years at Google.
Apple will probably have a person who knows some people to call at Softie and Google headquarters. However, when will the next AI boat arrive? Apple excelled at announcing AI, but no boat arrived. Telegram has an excuse; its founder Pavel Durov has been embroiled in legal hassles and arm wrestling with the reality that developing complex applications for the Telegram platform is too difficult. One would have thought that Apple could have figured out a way to improve Siri, but it apparently was lost in a reality distortion field. Telegram didn’t improve its tools because Pavel Durov was in jail in Paris, then confined to the country, and had to report to the French judiciary like a truant schoolboy. Apple just failed.
The write up says:
Giannandrea’s departure comes after Apple’s major iOS 18 Siri failure. Apple introduced a smarter, “Apple Intelligence” version of Siri at WWDC 2024, and advertised the functionality when marketing the iPhone 16. In early 2025, Apple announced that it would not be able to release the promised version of Siri as planned, and updates were delayed until spring 2026. An exodus of Apple’s AI team followed as Apple scrambled to improve Siri and deliver on features like personal context, onscreen awareness, and improved app integration. Apple is now rumored to be partnering with Google for a more advanced version of Siri and other Apple Intelligence features that are set to come out next year.
My hunch is that grafting AI into the bizarro world of the iPhone and other Apple computing devices may be a challenge. Telegram’s solution is to not do hardware. Apple is now an outfit distinguishing itself by missing the boat. When does the next one arrive?
Stephen E Arnold, December 4, 2025
Agentic Software: Close Enough for Horseshoes
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I read a document that I would describe as tortured. The lingo was trendy. The charts and graphs sported trendy colors. The data gathering seemed to be a mix of “interviews” and other people’s research. Plus the write up was a bit scattered. I prefer the rigidity of old-fashioned organization. Nevertheless, I did spot one chunk of information that I found interesting.
The title of the research report (sort of an MBA-style or blue-chip consulting firm type of document) is “State of Agentic AI: Founder’s Edition.” I think it was issued in March 2025, but with backdating popular, who knows. I had the research report in my files, and yesterday (November 3, 2025) I was gathering some background information for a talk I am giving on November 6, 2025. The document walked through data about the use of software to replace people. Actually, the smart software agents generally do several things, according to the agent vendors’ marketing collateral. The cited document restated these items this way:
- Agents are set up to reach specific goals
- Agents are used to reason, which means they “break down their main goal … into smaller manageable tasks and think about the next best steps.”
- Agents operate without any humans in India or Pakistan working invisibly behind the scenes
- Agents can consult a “memory” of previous tasks, “experiences,” work, etc.
Agents, when properly set up and trained, can perform about as well as a human. I came away from the tan and pink charts with a ballpark figure of 75 to 80 percent reliability. Close enough for horseshoes? Yep.
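Stripped of the marketing collateral, those four properties boil down to a short loop. Here is a toy sketch, purely illustrative and tied to no vendor’s framework; llm() is a stand-in for whatever model call one wires up:

```python
# Toy agent loop: a goal, decomposition into subtasks, no hidden
# humans, and a "memory" of prior steps. Illustrative only; llm()
# is a placeholder, not a real API.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire in a model call here")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []  # the agent's record of prior "experiences"
    steps = llm(f"Break this goal into steps, one per line: {goal}").splitlines()
    for step in steps[:max_steps]:
        result = llm(f"Goal: {goal}\nDone so far: {memory}\nNext: {step}")
        memory.append(f"{step} -> {result}")  # consulted on the next pass
    return memory
```

Note the compounding problem: if each llm() call is right 95 percent of the time, ten chained calls land near 0.95^10, or roughly 60 percent. That arithmetic is one plausible reason the ballpark sits at 75 to 80 percent rather than higher.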
There is a rundown of pricing options. Pricing seems to be a challenge for the vendors, with API usage charges and traditional software licensing used by a third of the agentic vendors.
Now here’s the most important segment from the document:
We asked founders in our survey: “What are the biggest issues you have encountered when deploying AI Agents for your customers? Please rank them in order of magnitude (e.g. Rank 1 assigned to the biggest issue)” The results of the Top 3 issues were illuminating: we’ve frequently heard that integrating with legacy tech stacks and dealing with data quality issues are painful. These issues haven’t gone away; they’ve merely been eclipsed by other major problems. Namely:
- Difficulties in integrating AI agents into existing customer/company workflows, and the human-agent interface (60% of respondents)
- Employee resistance and non-technical factors (50% of respondents)
- Data privacy and security (50% of respondents).
Here’s the chart tallying the results:

Several ideas crossed my mind as I worked through this research data:
- Getting the human-software interface right is a problem. I know from my work at places like the University of Michigan, the Modern Language Association, and Thomson-Reuters that people have idiosyncratic ways to do their jobs. Two people with similar jobs add the equivalent of extra dashboard lights and yard gnomes to the process. Agentic software at this time is not particularly skilled in the dashboard LED and concrete gnome facets of a work process. Maybe someday, but right now, that’s a common deal breaker. Employees say, “I want my concrete unicorn, thank you.”
- Humans say they are into mobile phones, smart in-car entertainment systems, and customer service systems that do not deliver any customer service whatsoever. As somebody from Harvard said in a lecture: “Change is hard.” Yeah, and it may not get any easier if the humanoid thinks he or she will be allowed to spend the near future pushing burritos at the El Nopal Restaurant.
- Agentic software vendors assume that licensees will allow their creations to suck up corporate data, keep company secrets, and avoid disappointing customers by presenting proprietary information to a competitor. Security in “regular” enterprise software is a bit of a challenge. Security in a new type of agentic software is likely to be the equivalent of a ride on a roller coaster that has tossed several middle school kids to their death and cut off the foot of a popular female. She survived, but now has a non-smart, non-human replacement.
Net net: Agentic software will be deployed. Most of its work will be good enough. Why will this be tolerated in personnel, customer service, loan approvals, and similar jobs? The answer is reduced headcounts. Humans cost money to manage. Humans want health care. Humans want raises. Software which is good enough seems to cost less. Therefore, welcome to the agentic future.
Stephen E Arnold, November 11, 2025
Google Is Really Cute: Push Your Content into the Jaws of Googzilla
November 4, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Google has a new, helpful, clever, and cute service just for everyone with a business Web site. “Google Labs’ Free New Experiment Creates AI-Generated Ads for Your Small Business” lays out the basics of Pomelli. (I think this word means knobs or handles.)

A Googley business process designed to extract money and data from certain customers. Thanks, Venice.ai. Good enough.
The cited article states:
Pomelli uses AI to create campaigns that are unique to your business; all you need to do is upload your business website to begin. Google says Pomelli uses your business URL to create a “Business DNA” that analyzes your website images to identify brand identity. The Business DNA profile includes tone of voice, color palettes, fonts, and pictures. Pomelli can also generate logos, taglines, and brand values.
Just imagine Google processing your Web site, its content, images, links, and entities like email addresses, phone numbers, etc. Then imagine it using its smart software to create an advertising campaign, ads, and suggestions for the amount of money you should / will / must spend via Google’s own advertising system. What a cute idea!
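What might that processing look like? Here is a crude sketch of the entity-harvesting step; this is my guess at the general shape, not Google’s actual Pomelli pipeline:

```python
# Crude "Business DNA" harvest: fetch a page, pull out entities and
# brand hints. My illustration of the idea, not Google's code.
import re
from urllib.request import urlopen

def harvest(url: str) -> dict[str, list[str]]:
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    return {
        "emails": re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html),
        "phones": re.findall(r"\+?\d[\d\s().-]{7,}\d", html),
        "colors": sorted(set(re.findall(r"#[0-9a-fA-F]{6}\b", html))),  # palette hints
        "fonts": re.findall(r"font-family:\s*([^;}]+)", html),
    }
```

Any intern can write that in an afternoon. The leverage is not the harvesting; it is who keeps the harvested profile and what gets piped into the ad machine.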
The write up points out:
Google says this feature eliminates the laborious process of brainstorming unique ad campaigns. If users have their own campaign ideas, they can enter them into Pomelli as a prompt. Finally, Pomelli will generate marketing assets for social media, websites, and advertisements. These assets can be edited, allowing users to change images, headers, fonts, color palettes, descriptions, and create a call to action.
How will those tireless search engine optimization consultants and Google certified ad reselling outfits react to this new and still “experimental” service? I am confident that [a] some will rationalize the wonderfulness of this service and sell advisory services about the automated replacement for marketing and creative agencies; [b] some will not understand that it is time to think about a substantive side gig because Google is automating basic business functions and plugging into the customer’s wallet with no pesky intermediary to shave off some bucks; and [c] others will watch as their own sales efforts become less and less productive and then go out of business because adaptation is hard.
Is Google’s idea original? No. Adobe has something called AI Found, according to the write up. Google is not into innovation. Need I remind you that Google advertising has some roots in the Yahoo garden in bins marked GoTo.com and Overture.com? Also, Yahoo banked some Google money from a settlement over certain intellectual property rights that Yahoo believed Google used as a source of business process inspiration.
As Google moves into automating these hooks, it accrues several significant benefits that stick out in its push to help users:
- Crawling costs may be reduced. The users will push content to Google. This may or may not be a significant factor, but the user who updates provides Google with timely information.
- The uploaded or pushed content can be piped into the Google AI system and used to inform the advertising and marketing confection Pomelli. Training data and ad prospects in one go.
- The automation of a core business function allows Google to penetrate more deeply into a business. What if that business uses Microsoft products? It strikes me that the Googlers will say, “Hey, switch to Google and you get advertising bonus bucks that can be used to reduce your overall costs.”
- The advertising process is a knob that Google can use to pull the user and his cash directly into the Google business process automation scheme.
As I said, cute and also clever. We love you, Google. Keep on being Googley. Pull those users’ knobs, okay.
Stephen E Arnold, November 4, 2025
Social Credit Already Exists In The West…Just with Different Spins
November 4, 2025
China is a dystopian nightmare with its social credit system. Westerners believe they can breathe a sigh of relief because that doesn’t happen in their home countries. Oh, how wrong they are. Social credit systems are here, they’re just run by a capitalist system. The Nexus author Natalie Pang explores the idea in, “Your Phone Already Has Social Credit. We Just Lie About It.”
What exactly is social credit? It’s your digital reputation, a profile of your behavior captured by everything: Amazon, credit score, Airbnb, Uber, etc. There isn’t any difference between the social credit systems in the West and China, except for one thing: transparency. China is 100% transparent that it rates people, while the West hides it behind many facades. China’s social credit system has been disbanded except for a few outliers. In the West, the impact it has on lives is alarming:
“Your credit score doesn’t just determine loan eligibility; it affects where you can live, which jobs you can get, and how much you pay for car insurance. But traditional credit scoring is expanding rapidly. Some specialized lenders scan social media profiles as part of alternative credit assessments, particularly for borrowers with limited credit histories. Payment apps and financial services increasingly track spending patterns and transaction behaviors to build comprehensive risk profiles. The European Central Bank has asked some institutions to monitor social media chatter for early warnings of bank runs, though this is more about systemic risk than individual account decisions. Background check companies routinely analyze social media presence for character assessment. LinkedIn algorithmically manages your professional visibility based on engagement patterns, posting frequency, and network connections, rankings that recruiters increasingly rely on to filter candidates. Even dating has become a scoring system: apps use engagement rates and response patterns to determine who rises to the top of the queue and who gets buried.”
Another difference between China and the West is that these apps don’t talk to or affect each other. Amazon doesn’t impact your ride shares, and your dating app doesn’t impact your credit score. These data points can be described as proprietary data, or sharing them as a violation of a user’s privacy, so these companies don’t share them. Another way of putting it: these companies don’t want to harm their bottom line.
Social credit systems are already affecting the West, but only in the realms of capitalism and social media. The bigger question to ask is: what happens if companies decide to share data for a profit? Then we’re screwed.
Whitney Grace, November 4, 2025
Is It Unfair to Blame AI for Layoffs? Sure
October 30, 2025
When AI exploded onto the scene, we were promised the tech would help workers, not replace them. Then that story began to shift, with companies revealing they do plan to slash expenses by substituting software for humans. But some are skeptical of this narrative, and for good reason. Techspot asks, “Is AI Really Behind Layoffs, or Just a Convenient Excuse for Companies?” Reporter Rob Thubron writes:
“Several large organizations, including Accenture, Salesforce, Klarna, Microsoft, and Duolingo, have said they are reducing staff numbers as AI helps streamline operations, reduce costs, and increase efficiency. But Fabian Stephany, Assistant Professor of AI & Work at the Oxford Internet Institute, told CNBC that companies are ‘scapegoating’ the technology.”
Stephany notes many companies are still trying to expel the extra humans they hired during the pandemic. Apparently, return-to-office mandates have not driven out as many workers as hoped. The write-up continues:
“Blaming AI for layoffs also has its advantages. Multibillion- and trillion-dollar companies can not only push the narrative that the changes must be made in order to stay competitive, but doing so also makes them appear more cutting-edge, tech-savvy, and efficient in the eyes of potential investors. Interestingly, a study by the Yale Budget Lab a few weeks ago showed there is little evidence that AI has displaced workers more severely than earlier innovations such as computers or the internet. Meanwhile, Goldman Sachs Research has estimated that AI could ultimately displace 6 to 7 percent of the US workforce, though it concluded the effect would likely be temporary.”
The write-up includes a graph Anthropic made in 2023 that compares gaps between actual and expected AI usage by occupation. A few fields overshot the expectation, most notably computer and mathematical jobs. Most, though, fell short. So are workers really losing their jobs to AI? Or is that just a high-tech scapegoat?
Cynthia Murrell, October 30, 2025
A Big Waste of Time: Talking about Time to Young People
October 29, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I will be 81 in a matter of days. In 1963, I had a professor at the third rate institution I attended who required that the class read Sir Francis Bacon’s essay “Of Time.” Snappy stuff. I was 18 years old, and there was one thing I did not think about. I don’t recall worrying about time. I structured my life around what classes I had to attend, what assignments I had to do, when I worked at the root beer stand, and when I had to show up at some family function like a holiday. Time was anchored in immediacy. There was no past except the day before. There was no future except checking tasks off my mental checklist or the notecards for which I became famous. Yes, I still write down things to do on notecards.

An older person provides some advice to a young person about using time and taking risks. The young person listens and responds in an appropriate way for 2025 college graduates. Thanks, Venice.ai. Good enough.
That sporty guy Francis wrote:
“Men fear time, but time fears the pyramids.”
I know that this thought did not resonate for me in 1963, and to be frank, I am not sure it resonates with me. The pyramids exist but data about when they were constructed strikes me as fuzzy. I thought about this mismatch between youth, time, and the lack of knowledge about pyramid construction or similar matters when I read “Don’t Waste Your 20s Not Taking Big Risks: You Have It So Easy, and So Little Time.”
The time talk doesn’t work for young people. Time is measured in weird and idiosyncratic ways. The “amount” of time is experiential, contextual, and personal. The write up says:
You don’t appreciate how little time you have to easily go after it and how much harder it’s going to be later.
I am sorry. This does not compute.
The write up continues:
Each year you delay is costing you 10% of the easiest period in your life to take a big risk. So if you are in college or you’re in your 20s and you think that you might want to start a business, completely change your career, move to a new city, do something radical like that, you should do that as soon as humanly possible. Ignore the scared voice in your head. The downside is basically non-existent.
I view this statement as generally bad advice. An informed decision is important. The key word is “informed.” The meaning of “informed” depends on the individual. We are dealing with moving targets. An “informed” decision to a drug addict means one thing. Time to this individual is defined by narcotic need. An “informed” decision for a person who wants to do well in college means doing the work, trying to be organized, and obtaining information to achieve desired outcomes.
“Ignore” is important when one deals with life. “Ignore” is not important in the context of time. I am not sure what time is. I have zero interest in trying to defend Sir Francis’ pyramid time nor do I pay attention to the floundering physicists who argue about what time is.
For a young person today, life is like the world of any young person at any point in history. Telling that young person to not waste time is pointless. In fact, it is a waste of time.
The cited essay wants young people to do stuff, probably backpack in some remote country or start an AI company. The environment today is that the experiential, contextual, and personal cues for “time” come from inputs unique to this point in history. Nevertheless, young people make what they can of their life in the digital fish bowl.
Several observations:
- Decisions occur even if the person involved does not go through the weird notecard drill I did and do. The reality is “stuff happens” and then young people adapt in a way defined by their experiential, contextual, and personal space
- Young people hear “time” and define it as a young person does. That means most have no clue what time means in a philosophical or technical context. Give them an essay to read. Have them write 500 words. Forget it. That worked for me, and it probably would work for many young people if they could actually read Bacon’s essay without AI support.
- At any point in a human’s life, time is not viewed as part of a big picture. Those words about “using time wisely” tell me more about the person speaking them than they provide valid inputs for another individual. Thanks, but I don’t think about time unless it is anchored in some way.
Net net: As the general environment in the US and the technical business sector seems less warm and fuzzy, making informed decisions works better than watching roses die. Risk must be assessed. If it is not, interesting things happen to people. But time as a big idea or a resource to be used in a way that fits into some grand life plan is something oddly positioned in a TikTok-type of amped-up Hollywood movie world. Making the best decision based on the information one has is a more useful way to mark off life intervals, in my opinion. If your inputs come from Twitter, well, that may work for you. For me, not a chance.
Stephen E Arnold, October 29, 2025
Smart Software: The DNA and Its DORK Sequence
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:
A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.
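The shape of such an experiment is easy to sketch. The harness below is my illustration of a polite-versus-rude comparison, not the Penn State team’s actual protocol:

```python
# Tone A/B harness: same questions, different manners. Illustrative
# only; ask() stands in for any chatbot call and is not a real API.
TONES = {
    "very polite": "Would you kindly answer the following? {q}",
    "neutral": "{q}",
    "very rude": "Answer this and don't waste my time: {q}",
}

def accuracy(ask, qa_pairs, template) -> float:
    hits = sum(ask(template.format(q=q)).strip() == a for q, a in qa_pairs)
    return hits / len(qa_pairs)

# for tone, template in TONES.items():
#     print(tone, accuracy(model_call, benchmark_pairs, template))
```

Run that over a benchmark and, per the study, the bottom row wins.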
My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.
Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.
Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” to either firm. A wizard will jump right on the input.
The write up adds:
Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.
Based on my experience with Silicon Valley type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence take precedence over other characteristics; for example, an ethical compass aligned with social norms.
Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.
The write up concludes with this gem:
The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”
Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.
Stephen E Arnold, October 22, 2025
AI Service Industry: Titan or Titanic?
October 6, 2025
Venture capitalists believe they have a new recipe for success: Buy up managed-services providers and replace most of the staff with AI agents. So far, it seems to be working. (For the VCs, of course, not the human workers.) However, asserts TechCrunch, “The AI Services Transformation May Be Harder than VCs Think.” Reporter Connie Loizos throws cold water on investors’ hopes:
“But early warning signs suggest this whole services-industry metamorphosis may be more complicated than VCs anticipate. A recent study by researchers at Stanford Social Media Lab and BetterUp Labs that surveyed 1,150 full-time employees across industries found that 40% of those employees are having to shoulder more work because of what the researchers call ‘workslop’ — AI-generated work that appears polished but lacks substance, creating more work (and headaches) for colleagues. The trend is taking a toll on the organizations. Employees involved in the survey say they’re spending an average of nearly two hours dealing with each instance of workslop, including to first decipher it, then decide whether or not to send it back, and oftentimes just to fix it themselves. Based on those participants’ estimates of time spent, along with their self-reported salaries, the authors of the survey estimate that workslop carries an invisible tax of $186 per month per person. ‘For an organization of 10,000 workers, given the estimated prevalence of workslop . . . this yields over $9 million per year in lost productivity,’ they write in a new Harvard Business Review article.”
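The arithmetic behind that $9 million figure reconstructs readily enough, assuming the 40 percent figure is the prevalence rate (my reading, not a number the authors spell out in the quote):

```python
# One plausible reconstruction of the "over $9 million" estimate.
# The 40% prevalence assumption is mine; the study's exact inputs
# may differ slightly.
workers = 10_000
prevalence = 0.40    # share of employees stuck handling workslop
monthly_tax = 186    # dollars per affected person per month

annual = workers * prevalence * monthly_tax * 12
print(f"${annual:,.0f} per year")  # $8,928,000, close to the quoted figure
```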
Surprise: compounding baloney produces more baloney. If companies implement the plan as designed, “workslop” will expand even as the humans who might catch it are sacked. But if firms keep on enough people to fix AI mistakes, they will not realize the promised profits. In that case, what is the point of the whole endeavor? Rather than upending an entire industry for no reason, maybe we should just leave service jobs to the humans that need them.
Cynthia Murrell, October 6, 2025
Being Good: Irrelevant at This Time
September 29, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
I read an essay titled “Being Good Isn’t Enough.” The author seems sincere. He provides insight about how to combine knowledge to create greater knowledge value. These are not my terms. The jargon appears in “The Knowledge Value Revolution or a History of the Future” by Taichi Sakaiya. The book was published in Japan in 1985. I gave some talks shortly after the book was available, and I met a number of interested individuals after one of my lectures at the Osaka Institute of Technology. I recommend the book because it expands on the concepts touched upon in the cited essay.
“Being Good Isn’t Enough” states:
The biggest gains come from combining disciplines. There are four that show up everywhere: technical skill, product thinking, project execution, and people skills. And the more senior you get, the more you’re expected to contribute to each.
Sakaiya includes this Japanese proverb:
As an infant, he was a prodigy. As a student, he was brilliant. But after 20 years, he was just another young man.
“Being Good Isn’t Enough” walks through the idea of identifying “your weakest discipline” and then adds:
work on that.
Sound advice. However, in today’s business environment in the US, I do not think this suggestion is particularly helpful; to wit:
Find a mentor, be a mentor. Lead a project, propose one. Do the work, present it. Create spaces for others to do the same. Do whatever it takes to get better…. But all of this requires maybe the most important thing of all: agency. It’s more powerful than smarts or credentials or luck. And the best part is you can literally just choose to be high-agency. High-agency people make things happen. Low-agency people wait. And if you want to progress, you can’t wait.
I think the advice is unlikely to “work”; it calibrates the present world of work as if it were 1970. Today the path forward depends on:
- Political connections
- Friends who can make introductions
- Former colleagues who can provide a soft recommendation in order to avoid HR issues
- Influence either inherited from a parent or other family member or fame
- Credentials in the form of a degree or a letter of acceptance from an institution perceived by the lender or possible employer as credible.
A skill or blended skills are less relevant at this time.
The obvious problem is that a person looking for a job has to be more than a bundle of knowledge value. For most people, Sakaiya’s and “Being Good’s” assertions are unlikely to lead to what most people want from work: Fulfillment, reward, and stability.
Stephen E Arnold, September 29, 2025