Amazon and its Imperative to Dump Human Workers
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Everyone loves Amazon. The local merchants thank Amazon for allowing them to find their future elsewhere. The people and companies dependent on Amazon Web Services rejoiced when the AWS system failed and created an opportunity to do some troubleshooting and vendor shopping. The customer (me) who received a pair of ladies underwear instead of an AMD Ryzen 5750X. I enjoyed being the butt of jokes about my red, see through microprocessor. Was I happy!
Mice discuss Amazon’s elimination of expensive humanoids. Thanks, Venice.ai. Good enough.
However, I read “Amazon Plans to Replace More Than Half a Million Jobs With Robots.” My reaction was that some employees and people in the Amazon job pipeline were not thrilled to learn that Amazon allegedly will dump humans and embrace robots. What a great idea. No health care! No paid leave! No grousing about work rules! No medical costs! No desks! Just silent, efficient, depreciable machines. Of course there will be smart software. What could go wrong? Whoops. Wrong question after taking out an estimated one third of the Internet for a day. How about this question, “Will the stakeholders be happy?” There you go.
The write up cranked out by the Gray Lady reports from confidential documents and other sources says:
Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects the company can avoid hiring more than 160,000 people in the United States it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers. Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.
Why is Amazon dumping humans? The NYT turns to that institution that found Jeffrey Epstein a font of inspiration. I read this statement in the cited article:
“Nobody else has the same incentive as Amazon to find the way to automate,” said Daron Acemoglu, a professor at the Massachusetts Institute of Technology who studies automation and won the Nobel Prize in economic science last year. “Once they work out how to do this profitably, it will spread to others, too.” If the plans pan out, “one of the biggest employers in the United States will become a net job destroyer, not a net job creator,” Mr. Acemoglu said.
Ah, save money. Keep more money for stakeholders. Who knew? Who could have foreseen this motivation?
What jobs will Amazon provide to humans? Obviously leadership will keep leadership jobs. In my decades of professional work experience, I have never met a CEO who really believes anyone else can do his or her job. Well, the NYT has an answer about what humans will do at Amazon; to wit:
Amazon has said it has a million robots at work around the globe, and it believes the humans who take care of them will be the jobs of the future. Both hourly workers and managers will need to know more about engineering and robotics as Amazon’s facilities operate more like advanced factories.
I wish to close this essay with several observations:
- Much of the information in the write up come from company documents. I am not comfortable with the use of this type of information. It strikes me as a short cut, a bit like Google or self-made expert saying, “See what I did!”
- Many words were used to get one message across: Robots and by extension smart software will put people out of work. Basic income time, right? Why not say that?
- The reason wants to dump people is easy to summarize: Humans are expensive. Cut humans, costs drop (in theory). But are there social costs? Sure, but why dwell on those.
Net net: Sigh. Did anyone reviewing this story note the Amazon online collapse? Perhaps there is a relationship between cost cutting at Amazon and the company’s stability?
Stephen E Arnold, October x, 2025
Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, clever is morphing into societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, the accuracy of my observation is verified. What began as disintermediation in the niche of special librarians has become the driving force for the interesting world now visible to most people.

Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.
I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?
The write up says:
OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”
This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”
Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.
What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:
On Tuesday [October 13, 2025] , OpenAI announced assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.
What am I confused about the arrow of time? Sam AI-Man did one thing on the 13th of October and then explained that his firm is not the moral police on the 14th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.
For example, what if AI does not generate enough data to pay off the really patient, super understanding, and truly king people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow up letter? What if….? No, I won’t go there.
Several observations:
- Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
- Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Teslas A/C motor was confined to a small sector of industry. But today, the innovations can ripple globally in seconds. It should be no surprise that technology and ideology are for now intertwined.
- Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.
The waffling is little more than fire fighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.
Stephen E Arnold, October 23, 2025
A Newsletter Firm Appears to Struggle for AI Options
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Adapting to AI’s Evolving Landscape: A Survival Guide for Businesses.” The premise of the article will be music to the ears of venture funders and go-go Silicon Valley-type AI companies. The write up says:
AI-driven search is upending traditional information pathways and putting the heat on businesses and organizations facing a web traffic free-fall. Survival instincts have companies scrambling to shift their web strategies — perhaps ending the days of the open internet as we know it. After decades of pursuing web-optimization strategies that encouraged high-volume content generation, many businesses are now feeling that their content-marketing strategies might be backfiring.
I am not exactly sure about this statement. But let’s press forward.
I noted this passage:
Without the incentive of web clicks and ad revenue to drive content creation, the foundation of the web as a free and open entity is called into question.
Okay, smart software is exploiting the people who put up SEO-tailored content to get sales leads and hopefully make money. From my point of view, technology can be disruptive. The impacts, however, can be positive or negative.
What’s the fix if there is one? The write up offers these thought starters:
- Embrace micro transactions. [I suppose this is good if one has high volume. It may not be so good if shipping and warehouse costs cannot be effectively managed. Vendors of high ticket items may find a micro-transaction for a $500,000 per year enterprise software license tough to complete via Venmo.]
- Implement a walled garden. [That works if one controls the market. Google wants to “register” Android developers. I think Google may have an easier time with the walled-garden tactic than a local bakery specializing in treats for canines.]
- Accepts the monopolies. [You have a choice?]
My reaction to the write up is that it does little to provide substantive guidance as smart software continues to expand like digital kudzu. What is important is that the article appears in the consumer oriented publication from Kiplinger of newsletter fame. Unfortunately the article makes clear that Kiplinger is struggling to find a solution to AI. My hunch is that Kiplinger is looking for possible solutions. The firm may want to dig a little deeper for options.
Stephen E Arnold, October 17, 2025
Apple: Waking Up Is Hard to Do
October 16, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read a letter. I think this letter or at least parts of it were written by a human. These days it can be tough to know. The letter appeared in “Wiley Hodges’s Open Letter to Tim Cook Regarding ICEBlock.” Mr. Hodge, according to the cited article, retired from Apple, the computer and services company in 2022.
The letter expresses some concern that Apple removed an app from the Apple online store. Here’s a snippet from the “letter”:
Apple and you are better than this. You represent the best of what America can be, and I pray that you will find it in your heart to continue to demonstrate that you are true to the values you have so long and so admirably espoused.
It does seem to me that Apple is a flexible outfit. The purpose of the letter is unknown to me. On the surface, it is a single former employee’s expression of unhappiness at how “leadership” leads and deciders “decide.” However, below the surface it a signal that some people thought a for profit, pragmatic, and somewhat frisky Fancy Dancing organization was like Snow White, the Easter bunny, or the Lone Ranger.

Thanks, Venice.ai. Good enough.
Sorry. That’s not how big companies work or many little companies for that matter. Most organizations do what they can to balance a PR image with what the company actually does. Examples range from arguing via sleek and definitely expensive lawyers that what they do does not violate laws. Also, companies work out deals. Some of these involve doing things to fit in to the culture of a particular company. I have watched money change hands when registering a vehicle in the government office in Sao Paulo. These things happen because they are practical. Apple, for example, has an interesting relationship with a certain large country in Asia. I wonder if there is a bit of the old soft shoe going on in that region of the world.
These are, however, not the main point of this blog post. There cited article contains this statement:
Hodges, earlier in his letter, makes reference to Apple’s 2016 standoff with the FBI over a locked iPhone belonging to the mass shooter in San Bernardino, California. The FBI and Justice Department pressured Apple to create a version of iOS that would allow them to backdoor the iPhone’s passcode lock. Apple adamantly refused.
Okay, the time delta is nine years. What has changed? Obviously social media, the economic situation, the relationship among entities, and a number of lawsuits. These are the touchpoints of our milieu. One has to surf on the waves of change and the ripples and waves of datasphere.
But I want to highlight several points about my reaction to the this blog post containing the Hodge’s letter:
- Some people are realizing that their hoped-for vision of Apple, a publicly traded company, is not the here-and-now Apple. The fairy land of a company that cares is pretty much like any other big technology outfit. Shocker.
- Apple is not much different today than it was nine years ago. Plucking an example which positioned the Cupertino kids as standing up for an ideal does not line up with the reality. Technology existed then to gain access to digital devices. Believing the a company’s PR reflected reality illustrates how crazy some perceptions are. Saying is not doing.
- Apple remains to me one of the most invasive of the technology giants. The constant logging in, the weirdness of forcing people to have data in the iCloud when those people do not know the data are there or want it there for that matter, the oddball notifications that tell a user that an “new device” is connected when the iPad has been used for years, and a few other quirks like hiding files are examples of the reality of the company.
News flash: Apple is like the other Silicon Valley-type big technology companies. These firms have a game plan of do it and apologize. Push forward. I find it amusing that adults are experiencing the same grief as a sixth grader with a crush on the really cute person in home room. Yep, waking up is hard to do. Stop hitting the snooze alarm and join the real world.
Net net: The essay is a hoot. Here is an adult realizing that there is no Santa with apparently tireless animals and dwarfs at the North Pole. The cited article contains what appears to be another expression of annoyance, anger, and sorrow that Apple is not what the humans thought it was. Apple is Apple, and the only change agent able to modify the company is money and/or fear, a good combo in my experience.
Stephen E Arnold, October 16, 2025
Deepseek: Why Trust Any Smart Software?
October 16, 2025
This essay is the work of a dumb dinobaby. No smart software required.
We have completed our work on my new book “The Telegram Labyrinth.” In the course of researching and writing about Pavel Durov’s online messaging system, we learned one thing: Software is not what it seems to the user. Most Telegram users believe that Telegram is end to end encrypted. It is, but only if the user goes through some hoops. The vast majority of users don’t go through hoops. Those millions upon millions of users know much about the third-party bots chugging away in Groups and Channels (public and private). Even fewer users realize that a service charge is applied to each monetary transaction in the Telegram system. That money flows to the GOAT (greatest of all time) technical wizard, Pavel Durov and some close associates. Who knew?
I read “The Demonization of Deepseek: How NIST Turned Open Science into a Security Scare.” The write up focuses on a study or analysis conducted by what used to be the National Bureau of Standards. (I loved those traffic jams on Quince Orchard Road in Gaithersburg, Maryland.) The software put under the NIST (National Institute of Science & Technology) is the China-linked Deepseek smart software.
The cited article discusses the NIST study. Let’s see what it says about the China-linked artificial intelligence system. Presumably Deepseek did more with less; that is, the idea was to demonstrate that Chinese innovation could make US methods of large language models. The result would be better, faster, and cheaper. Cheap has a tendency to win in some product and service categories. Also, “good enough” is a winner in today’s market. (How about the reliability of some of those 2025 automobiles and trucks?)
The write up says:
NIST’s recent report on Deepseek is not a neutral technical evaluation. It is a political hit piece disguised as science. There is no evidence of backdoors, spyware, or data exfiltration. What is really happening is the U.S. government using fear and misinformation to sabotage open science, open research, and open source. They are attacking gifts to humanity with politics and lies to protect corporate power and preserve control. Deepseek’s work is a genuine contribution to human knowledge, and it is being discredited for reasons that have nothing to do with security.
Okay, that’s clear.
Let’s look at how the cited write up positions Deepseek:
Deepseek built competitive AI models. Not perfect, but impressive given their budget. They spent far less than OpenAI or Anthropic and still achieved near-frontier performance. Then they open-sourced everything under Apache 2.0.
The point of the write up is that analysis has been politicized. This is an interesting allegation. I am not confident that any “objective” analysis is indeed without spin. Remember those reports about smoking cigarettes and the work of the Tobacco Institute. (I am a dinobaby, but I remember.)
The write up does identify three concerns a user of Deepseek should have. Let me quote from the cited article:
- Using Deepseek’s API: If you send sensitive data to Deepseek’s hosted service, that data goes through Chinese infrastructure. This is a real data sovereignty issue, the same as using any foreign cloud provider.
- Jailbreak susceptibility: If you’re building production applications, you need to test ANY model for vulnerabilities and implement application-level safeguards. Don’t rely solely on model guardrails. Also – use an inference time guard model (such as LlamaGuard or Qwen3Guard) to classify and filter both prompts and responses.
- Bias and censorship: All models reflect their training data. Be aware of this regardless of which model you use.
Let me offer several observations:
- Most people are unaware of what can be accomplished from software use. Assumptions about what it does and does not do are dangerous. We have tested Deepseek running locally. It is okay. This means it can do some things well like translate a passage in English into German. It has no clue about timely issues because most LLMs are not updated in near real time. Some are, but others are not. Who needs timely information when cheating on a high school essay? Answer: no one.
- The write up focuses on Deepseek, but its implications are much more broad. I think that the mindless write ups from consulting firms and online magazines is a very big problem. Critical thinking is just not the common. It is a problem in the US but other countries have this blind spot as well.
- The idea that political perceptions alter what should be an objective analysis is troubling to me. I have written a number of reports for government agencies; for example, a report about Japan’s obsession with a database industry for the Office of Technology Assessment. Yep, I am a dinobaby remember. I may have been right or wrong in my report, but I was not influenced by any political concept or actor. I could have been because I did a stint in the office of Admiral / Congressman Craig Hosmer. My OTA work was not part of the “game” for me.
Net net: Trust is important. I think it is being eroded. I also believe that there are few people who present information without fear or favor. Now here’s the key part of my perception: One cannot trust smart software or any of the programmer assembled, hidden threshold, and masked training methods that go into these confections. More critical thinking is needed. A deceptive business practice if well crafted cannot be perceived. Remember Telegram Messenger is 13 years young and users of the system don’t have much awareness of bots, mini apps, and dapps. What don’t people know about smart software?
Stephen E Arnold, October 16, 2025
Who Is Afraid of the Big Bad AI Wolf? Mr. Beast Perhaps?
October 14, 2025
This essay is the work of a dumb dinobaby. No smart software required.
The story “MrBeast Warns of ‘Scary Times’ as AI Threatens YouTube Creators” is apparently about You Tube creators. Mr. Beast, a notable YouTube personality, is the source of the information. Is the article about YouTube creators? Yep, but it is also about Mr. Beast.

The write up says:
MrBeast may not personally face the threat of being replaced by AI as his brand thrives on large-scale, real-world stunts that rely on authenticity and human emotion. But his concern runs deeper than self-preservation. It’s about the millions of smaller creators who depend on platforms like YouTube to make a living. As one of the most influential figures on the internet, his words carry weight. The 27-year-old recently topped Forbes’ 2025 list of highest-earning creators, earning roughly $85 million and building a following of over 630 million across platforms.
Okay, Mr. Beast’s fame depended on YouTube. He is still in the YouTube fold. However, he has other business enterprises. He recognizes that smart software could create problems for creators.
I think smart software is another software tool. It is becoming a utility like a PDF editor.
The problem with Mr. Beast’s analysis is that it appears to be focused on other creators. I am not so sure. I think the comments presented in the write up reveal more about Mr. Beast than they do about the “other” creators. One example is:
“When AI videos are just as good as normal videos, I wonder what that will do to YouTube and how it will impact the millions of creators currently making content for a living… scary times,” MrBeast — whose real name is Jimmy Donaldson — wrote on X.
I am no expert on human psychology, but I see the use of the word “impact” and “scary” as a glimpse of what Mr. Beast is thinking. His production costs allegedly rival those of traditional commercial video outfits. The ideas and tropes have become increasingly strained and bizarre. YouTube acts in a unilateral way and outputs smarm to the creators desperate to know why the flow of their money has been reduced if not cut off. Those disappearing van life videos are just one example of how video magnets can melt down and be crushed under the wheels of the Google bus.
My thought is that Google will use AI to create alterative Mr. Beast-type videos with AI. Then squeeze the Mr. Beast type creators and let the traffic flow to Mother Google. No royalties required, so Google wins. Mr. Beast-type creators can find their future and money elsewhere. Simple.
Stephen E Arnold, October 14, 2025
The Ka-Ching Game: The EU Rings the Big Tech Cash Register Tactic
October 14, 2025
This essay is the work of a dumb dinobaby. No smart software required.
The unusually tinted Financial Times published another “they will pay up and change, really” write up. The article is “Meta and Apple Close to Settling EU Cases.” [Note: You have to pay to read the FT’s orange write up.] The main idea is that these U S big technology outfits are cutting deals. The objective is to show that these two firms are interested in making friends with European Commission professionals. The combination of nice talk and multi-million euro payments should do the trick. That’s the hope.

Thanks, Venice.ai. Good enough.
The cute penalty method the EU crafted involved daily financial penalties for assorted alleged business practices. The penalties had an escalator feature. If the U S big tech outfits did not comply or pretend to comply, then the EU could send an invoice for up to five percent of the firm’s gross revenues. Could the E U collect? Well, that’s another issue. If Apple leaves the E U, the elected officials would have to use an Android mobile. If Meta departed, the elected officials would have to listen to their children’s complaints about their ruined social life. I think some grandmothers would be honked if the flow of grandchildren pictures were interrupted. (Who needs this? Take the money, Christina.)
Several observations:
- The EU will take money; the EU will cook up additional rules to make the Wild West outfits come to town but mostly behave
- The U S big tech companies will write a check, issue smarmy statements, and do exactly what they want to do. Decades of regulatory inefficacy creates certain opportunities. Some U S outfits spot those and figure out how to benefit from lack of action or ineptitude
- The efforts to curtail the U S big tech companies have historically been a rinse and repeat exercise. That won’t change.
The problem for the EU with regard to the U S is different from the other challenges it faces. In my opinion, the E U like other countries is:
- Unprepared for the new services in development by U S firms. I address these in a series of lectures I am doing for some government types in Colorado. Attendance at the talks is restricted, so I can’t provide any details about these five new services hurtling toward the online markets in the U S and elsewhere
- Unable to break its cycle of clever laws, U S company behavior, and accept money. More is needed. A good example of how one country addressed a problem online took place in France. That was a positive, decisive action and will interrupt the flow of cash from fines. Perhaps more E U countries should consider this French approach?
- The Big Tech outfits are not constrained by geographic borders. In case you have not caught up with some of the ideas of Silicon Valley, may I suggest you read the enervating and somewhat weird writings of a fellow named René Gerard?
Net net: Yep, a deal. No big surprise. Will it work? Nope.
Stephen E Arnold, October 15, 2025
AI and America: Not a Winner It Seems
October 13, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Los Alamos National Laboratory perceives itself as one of the world’s leading science and research facilities. Jason Pruet is the Director of Los Alamos’s National Security AI Office and he was interviewed in “Q&A With Jason Pruet.” Pruet’s job is to prepare the laboratory for AI integration. He used to view AI as another tool for advancement, but Pruet now believes AI would disrupt the fundamental landscape of science, security, and more.
In the interview, Pruet states that the US government invested more in AI than any time in the past. He compared this investment to the World War II paradigm of science for the public good. Pruet explained that before the war, the US government wasn’t involved with science. After the war, Los Alamos shifted the dynamic and shaped modern America’s dedication to science, engineering, etc.
One of the biggest advances in AI technology is transformer architecture that allows huge progress to scale AI models, especially for mixing different information types. Pruet said that China is treating AI like a general purpose technology (i.e electricity) and they’ve launched a National AI strategy. The recent advances in AI are changing power structures. It’s turning into a new international arms race but that might not be the best metaphor:
“[Pruet:] All that said, I’m increasingly uncomfortable viewing this through the lens of a traditional arms race. Many thoughtful and respected people have emphasized that AI poses enormous risks for humanity. There are credible reports that China’s leadership has come to the same view, and that internally, they are trying to better balance the potential risks rather than recklessly seek advantage. It may be that the only path for managing these risks involves new kinds of international collaborations and agreements.”
Then Pruet had this to say about the state of the US’s AI development:
“Like we’re behind. The ability to use machines for general-purpose reasoning represents a seminal advance with enormous consequences. This will accelerate progress in science and technology and expand the frontiers of knowledge. It could also pose disruptions to national security paradigms, educational systems, energy, and other foundational aspects of our society. As with other powerful general-purpose technologies, making this transition will depend on creating the right ecosystem. To do that, we will need new kinds of partnerships with industry and universities.”
The sentiment seems to be focused on going faster and farther than any other country in the AI game. With the circular deals OpenAI has been crafting, AI seems to be more about financial innovation than technical innovation.
Whitney Grace, October 13, 2025
Weaponization of LLMs Is a Thing. Will Users Care? Nope
October 10, 2025
This essay is the work of a dumb dinobaby. No smart software required.
A European country’s intelligence agency learned about my research into automatic indexing. We did a series of lectures to a group of officers. Our research method, the results, and some examples preceded a hands on activity. Everyone was polite. I delivered versions of the lecture to some public audiences. At one event, I did a live demo with a couple of people in the audience. Each followed a procedure, and I showed the speed with which the method turned up in the Google index. These presentations took place in the early 2000s. I assumed that the behavior we discovered would be disseminated and then it would diffuse. It was obvious that:
- Weaponized content would be “noted” by daemons looking for new and changed information
- The systems were sensitive to what I called “pulses” of data. We showed how widely used algorithms react to sequences of content
- The systems would alter what they would output based on these “augmented content objects.”
In short, online systems could be manipulated or weaponized with specific actions. Most of these actions could be orchestrated and tuned to have maximum impact. One example in my talks was taking a particular word string and making it turn up in queries where one would not expect that behavior. Our research showed that a few as four weaponized content objects orchestrated in a specific time interval would do the trick. Yep, four. How many weaponized write ups can my local installation of LLMs produce in 15 minutes? Answer: Hundreds. How long does it take to push those content objects into information streams used for “training.” Seconds.
Fish live in an environment. Do fish know about the outside world? Thanks, Midjourney. Not a ringer but close enough in horseshoes.
I was surprised when I read “A Small Number of Samples Can Poison LLMs of Any Size.” You can read the paper and work through the prose. The basic idea is that selecting or shaping training data or new inputs to recalibrate training data can alter what the target system does. I quite like the phrase “weaponize information.” Not only does the method work, it can be automated.
What’s this mean?
The intentional selection of information or the use of a sample of information from a domain can generate biases in what the smart software knows, thinks, decides, and outputs. Dr. Timnit Gebru and her parrot colleagues were nibbling around the Google cafeteria. Their research caused the Google to put up a barrier to this line of thinking. My hunch is that she and her fellow travelers found that content that is representative will reflect the biases of the authors. This means that careful selection of content for training or updating training sets can be steered. That’s what the Anthropic write up make clear.
Several observations are warranted:
- Whoever selects training data or the information used to update and recalibrate training data can control what is displayed, recommended, or included in outputs like recommendations
- Users of online systems and smart software are like fish in a fish bowl. The LLM and smart software crowd are the people who fill the bowl and feed the fish. Fish have a tough time understanding what’s outside their bowl. I don’t like the word “bubble” because these pop. An information fish bowl is tough to escape and break.
- As smart software companies converge into essentially an oligopoly using the types of systems I described in the early 2000s with some added sizzle from the Transformer thinking, a new type of information industrial complex is being assembled on a very large scale. There’s a reason why Sam AI-Man can maintain his enthusiasm for ChatGPT. He sees the potential of seemingly innocuous functions like apps within ChatGPT.
There are some interesting knock on effects from this intentional or inadvertent weaponization of online systems. One is that the escalating violent incidents are an output of these online systems. Inject some René Girard-type content into training data sets. Watch what those systems output. “Real” journalists are explaining how they use smart software for background research. Student uses online systems without checking to see if the outputs line up with what other experts say. What about investment firms allowing smart software to make certain financial decisions.
Weaponize what the fish live in and consume. The fish are controlled and shaped by weaponized information. How long has this quirk of online been known? A couple of decades, maybe more. Why hasn’t “anything” been done to address this problem? Fish just ask, “What problem?”
Stephen E Arnold, October x, 2025
I spotted
AI Has a Secret: Humans Do the Work
October 10, 2025
A key component of artificial intelligence output is not artificial at all. The Guardian reveals “How Thousands of ‘Overworked, Underpaid’ Humans Train Google’s AI to Seem Smart.” From accuracy to content moderation, Google Gemini and other AI models rely on a host of humans employed by third-party contractors. Humans whose jobs get harder and harder as they are pressured to churn through the work faster and faster. Gee, what could go wrong?
Reporter Varsha Bansal relates:
“Each new model release comes with the promise of higher accuracy, which means that for each version, these AI raters are working hard to check if the model responses are safe for the user. Thousands of humans lend their intelligence to teach chatbots the right responses across domains as varied as medicine, architecture and astrophysics, correcting mistakes and steering away from harmful outputs.”
Very important work—which is why companies treat these folks as valued assets. Just kidding. We learn:
“Despite their significant contributions to these AI models, which would perhaps hallucinate if not for these quality control editors, these workers feel hidden. ‘AI isn’t magic; it’s a pyramid scheme of human labor,’ said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. ‘These raters are the middle rung: invisible, essential and expendable.’”
And, increasingly, rushed. The write-up continues:
“[One rater’s] timer of 30 minutes for each task shrank to 15 – which meant reading, fact-checking and rating approximately 500 words per response, sometimes more. The tightening constraints made her question the quality of her work and, by extension, the reliability of the AI. In May 2023, a contract worker for Appen submitted a letter to the US Congress that the pace imposed on him and others would make Google Bard, Gemini’s predecessor, a ‘faulty’ and ‘dangerous’ product.”
And that is how we get AI advice like using glue on pizza or adding rocks to one’s diet. After those actual suggestions went out, Google focused on quality over quantity. Briefly. But, according to workers, it was not long before they were again told to emphasize speed over accuracy. For example, last December, Google announced raters could no longer skip prompts on topics they knew little about. Think workers with no medical expertise reviewing health advice. Not great. Furthermore, guardrails around harmful content were perforated with new loopholes. Bansal quotes Rachael Sawyer, a rater employed by Gemini contractor GlobalLogic:
“It used to be that the model could not say racial slurs whatsoever. In February, that changed, and now, as long as the user uses a racial slur, the model can repeat it, but it can’t generate it. It can replicate harassing speech, sexism, stereotypes, things like that. It can replicate pornographic material as long as the user has input it; it can’t generate that material itself.”
Lovely. It is policies like this that leave many workers very uncomfortable with the software they are helping to produce. In fact, most say they avoid using LLMs and actively discourage friends and family from doing so.
On top of the disillusionment, pressure to perform full tilt, and low pay, raters also face job insecurity. We learn GlobalLogic has been rolling out layoffs since the beginning of the year. The article concludes with this quote from Sawyer:
‘I just want people to know that AI is being sold as this tech magic – that’s why there’s a little sparkle symbol next to an AI response,’ said Sawyer. ‘But it’s not. It’s built on the backs of overworked, underpaid human beings.’
We wish we could say we are surprised.
Cynthia Murrell, October 10, 2025