Agentic Software: Close Enough for Horseshoes

November 11, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I read a document that I would describe as tortured. The lingo was trendy. The charts and graphs sported trendy colors. The data gathering seemed to be a mix of “interviews” and other people’s research. Plus the write up was a bit scattered. I prefer the rigidity of old-fashioned organization. Nevertheless, I did spot one chunk of information that I found interesting.

The title of the research report (sort of an MBA- or blue chip consulting firm-type of document) is “State of Agentic AI: Founder’s Edition.” I think it was issued in March 2025, but with backdating popular, who knows. I had the research report in my files, and yesterday (November 3, 2025) I was gathering some background information for a talk I am giving on November 6, 2025. The document walked through data about the use of software to replace people. Actually, the smart software agents generally do several things according to the agent vendors’ marketing collateral. The cited document restated these items this way:

  1. Agents are set up to reach specific goals
  2. Agents are used to reason, which means they “break down their main goal … into smaller manageable tasks and think about the next best steps.”
  3. Agents operate without humans in India or Pakistan working invisibly behind the scenes
  4. Agents can consult a “memory” of previous tasks, “experiences,” work, etc. (A minimal sketch of this loop appears below.)
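To make the four properties concrete, here is a minimal agent-loop sketch in Python. This is my illustration, not the report’s, and every name in it (llm_complete, run_tool) is a hypothetical placeholder rather than any vendor’s API.

```python
# Minimal agent loop showing the four properties above: a goal, reasoning
# that decomposes the goal into tasks, autonomous execution, and a memory.
# llm_complete and run_tool are hypothetical placeholders, not a real API.

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to any large language model."""
    raise NotImplementedError("Wire up a model of your choice here.")

def run_tool(step: str) -> str:
    """Stand-in for executing one sub-task (search, API call, etc.)."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []                    # property 4: record of prior work
    plan = llm_complete(
        f"Break this goal into small, manageable tasks:\n{goal}"
    ).splitlines()                            # property 2: decomposition
    for step in plan[:max_steps]:             # property 3: no hidden humans
        result = run_tool(step)
        memory.append(f"{step} -> {result}")  # consultable on later steps
    return memory                             # property 1: work toward the goal
```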

Agents, when properly set up and trained, can perform about as well as a human. I came away from the tan and pink charts with a ballpark figure of 75 to 80 percent reliability. Close enough for horseshoes? Yep.
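What 75 to 80 percent reliability buys depends on how many steps an agent chains together. A back-of-the-envelope calculation (mine, not the report’s; it assumes each step succeeds independently):

```python
# If each step of an agentic workflow succeeds with probability p and steps
# are independent, an n-step chain succeeds with probability p ** n.
p = 0.8
for n in (1, 3, 5, 10):
    print(n, round(p ** n, 3))   # 1: 0.8, 3: 0.512, 5: 0.328, 10: 0.107
```

One horseshoe toss at 80 percent is one thing; ten in a row works out to roughly a one-in-ten proposition.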

There is a rundown of pricing options. Pricing seems to be a challenge for the vendors, with API usage charges and traditional software licensing used by a third of the agentic vendors.

Now here’s the most important segment from the document:

We asked founders in our survey: “What are the biggest issues you have encountered when deploying AI Agents for your customers? Please rank them in order of magnitude (e.g. Rank 1 assigned to the biggest issue)” The results of the Top 3 issues were illuminating: we’ve frequently heard that integrating with legacy tech stacks and dealing with data quality issues are painful. These issues haven’t gone away; they’ve merely been eclipsed by other major problems. Namely:

  • Difficulties in integrating AI agents into existing customer/company workflows, and the human-agent interface (60% of respondents)
  • Employee resistance and non-technical factors (50% of respondents)
  • Data privacy and security (50% of respondents).

Here’s the chart tallying the results:

[Image: chart tallying the survey results]

Several ideas crossed my mind as I worked through this research data:

  1. Getting the human-software interface right is a problem. I know from my work at places like the University of Michigan, the Modern Language Association, and Thomson-Reuters that people have idiosyncratic ways to do their jobs. Two people with similar jobs add the equivalent of extra dashboard lights and yard gnomes to the process. Agentic software at this time is not particularly skilled in the dashboard LED and concrete gnome facets of a work process. Maybe someday, but right now, that’s a common deal breaker. Employees say, “I want my concrete unicorn, thank you.”
  2. Humans say they are into mobile phones, smart in-car entertainment systems, and customer service systems that do not deliver any customer service whatsoever. As somebody from Harvard said in a lecture: “Change is hard.” Yeah, and it may not get any easier if the humanoid thinks he or she will be allowed to find their future pushing burritos at the El Nopal Restaurant in the near future.
  3. Agentic software vendors assume that licensees will allow their creations to suck up corporate data, keep company secrets, and avoid disappointing customers by presenting proprietary information to a competitor. Security in “regular” enterprise software is a bit of a challenge. Security in a new type of agentic software is likely to be the equivalent of a ride on a roller coaster that has tossed several middle school kids to their death and cut off the foot of a popular female. She survived, but now has a non-smart, non-human replacement.

Net net: Agentic software will be deployed. Most of its work will be good enough. Why will this be tolerated in personnel, customer service, loan approvals, and similar jobs? The answer is reduced headcounts. Humans cost money to manage. Humans want health care. Humans want raises. Software which is good enough seems to cost less. Therefore, welcome to the agentic future.

Stephen E Arnold, November 11, 2025

AI Dreams Are Plugged into Big Rock Candy Mountain

November 5, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

One of the niches in the power generation industry is demand forecasting. When I worked at Halliburton Nuclear, I sat in meetings. One feature of these meetings was diagrams. I don’t have any of these diagrams because there were confidentiality rules. I followed those. This is what some of the diagrams resembled:

[Image: demand forecasting diagram]

Source: https://mavink.com/

When I took a job at Booz, Allen, the firm had its own demand experts. The diagrams favored by one of the utility rate and demand experts looked like this. Note: Booz, Allen had rules, so the diagram comes from the cited source:

[Image: utility rate and demand curve]

Source: https://vtchk.ru/photo/demand-curve/16

These curves speak volumes to the people who fund, engineer, and construct power generation facilities. The main idea for these semi-abstract curves is that balancing demand and supply is important. The price of electricity depends on figuring out the probable relationship of demand for power, the available supply, and the supply that will come online at estimated times in the future. The prices people and organizations pay for electricity depend on these types of diagrams, the reams of data analysts crunch, and what a group of people sitting in a green conference room at a plastic table agree the curves mean.
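To make the balancing idea concrete, here is a toy calculation (my illustration, not Halliburton’s or Booz, Allen’s models; the coefficients are invented) of where linear supply and demand curves cross, which is the point those diagrams are drawn to find:

```python
# Toy linear model: demand falls as price rises, supply rises with price.
# Equilibrium is the price at which quantity demanded equals quantity supplied.
# All coefficients are made up for illustration only.

demand_intercept, demand_slope = 100.0, -2.0   # Qd = 100 - 2P
supply_intercept, supply_slope = 10.0, 1.0     # Qs = 10 + P

# Set Qd = Qs and solve:  100 - 2P = 10 + P  ->  P = 30
price = (demand_intercept - supply_intercept) / (supply_slope - demand_slope)
quantity = supply_intercept + supply_slope * price
print(price, quantity)   # 30.0 40.0
```

Shift the demand curve up with a fleet of AI data centers and the intersection moves: higher prices for everyone drawing on the same grid.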

A recent report from Turner & Townsend (a UK consulting outfit) identifies some trends in the power generation sector with some emphasis on the data centers required for smart software. You can work through the report on the Turner & Townsend Web site by clicking this link. The main idea is that the huge AI-centric data centers needed for the Googley transformer-centric approach to smart software demand more power than is available.

The response to this in the big AI companies is, “We can put servers in orbit” and “We can build small nuclear reactors and park them near the data centers” and “We can buy turbines and use gas or other carbon fuels to power our data centers.” These are comments made by individuals who choose not to look at the wonky type of curves I show.

It takes time to build a conventional power generation facility. The legal process in the US has traditionally been difficult and expensive. A million dollars won’t even pay for a little environmental impact study. Lawyers can cost more than a rail car loaded with specialized materials required for nuclear reactors. The costs for the PR required to place a baby nuke in Memphis next to a big data center may be more expensive than buying some Google ads and hiring a local marketing firm. Some people may not feel comfortable with a new, unproven baby nuke in their neighborhood. Coal- and oil-fired plants invite certain types of people to mount noisy and newsworthy protests. Putting a data center in orbit poses some additional paperwork challenges and a little bit of extra engineering work.

So what does the big, detailed report show? Here’s my diagram of the power, demand, and price future with those giant data centers in the US. You can work out the impact on non-US installations:

[Image: Arnold’s power demand, supply, and price curves]

This diagram was whipped up by Stephen E Arnold.

The message in these curves reflects one of the “challenges” identified in the Turner & Townsend report: Cost.

What does this mean to those areas of the US where Big AI Boys plan to build large data centers? Answer: Their revenue streams need to be robust and their funding sources have open wallets.

What does this mean for the cost of electricity to consumers and run-of-the-mill organizations? Answer: Higher costs, brown outs, and fancy new meters that can adjust prices and current on the fly. Crank up the data center, and the Super Bowl broadcast may not reach some homes.

What does this mean for ubiquitous, 24×7 AI availability in software, home appliances, and mobile devices? Answer: Higher costs, brown outs, and degraded services.

How will the incredibly self-aware, other-centric, ethical senior managers at AI companies respond? Answer: No problem. Think thorium reactors and data centers in space.

Also, the cost of building new power generation facilities is not a problem for some Big Dogs. The time required for licensing, engineering, and construction? No problem. Just go fast, break things.

And overcoming resistance to turbines next to a school or a small thorium reactor in a subdivision? Hey, no problem. People will adapt or they can move to another city.

What about the engineering and the innovation? Answer: Not to worry. We have the smartest people in the world.

What about common sense and self awareness? Response: Yo, what do those terms mean? Are they synonyms for disco biscuits?

The next big thing lives on Big Rock Candy Mountain.

Stephen E Arnold, November 5, 2025

A Nice Way of Saying AI Will Crash and Burn

November 5, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read a write up last week. Today is October 27, 2025, and this dinobaby has a tough time keeping AI comments, analyses, and proclamations straight. The old-fashioned idea of putting a date on each article or post is just not GenAI. As a dinobaby, I find this streamlining, like Google dumping apostrophes from its mobile keyboard, ill advised. I would say “stupid,” but one cannot doubt the wisdom of the quantumly supreme AI PR and advertising machine, can one? One cannot tell some folks that AI is a utility. It may be important, but the cost may be AI’s Achilles’ heel or the sword on which AI impales itself.

[Image]

A young wizard built a wonder model aircraft. But it caught on fire and is burning. This is sort of sad. Thanks, ChatGPT. Good enough.

These and other thoughts flitted through my mind when I read “Surviving the AI Capex Boom.” The write up includes an abstract, and you can work through the 14-page document to get the inside scoop. I will assume you have read the document. Here are some observations:

  1. The “boom” is visible to anyone who scans technical headlines. The hype for AI is exceeded only by the money pumped into the next big thing. The problem is that the payoff from AI is underwhelming when compared to the amount of cash pumped into a sector relying on a single technical innovation or breakthrough: the “transformer” method. Fragile is not the word for the situation.
  2. The idea that there are cheap and expensive AI stocks is interesting. I am not convinced, however, that cheap and expensive are substantively different. Google has multiple lines of revenue. If AI fails, it has advertising and some other cute businesses. Amazon has trouble with just about everything at the moment. Meta is — how shall I phrase it — struggling with organizational issues that illustrate managerial issues. So there is Google and everyone else.
  3. OpenAI is a case study in an entirely new class of business activities. From announcing that erotica is just the thing for ChatGPT to sort of trying to invent the next iPhone, Sam AI-Man is a heck of a fund-raising machine. Only his hyperbole works as well. His love of circular deals means that he survives, or he does some serious damage to a number of fellow travelers. I say, “No thanks, Sam.”
  4. The social impact of flawed AI is beginning to take shape. The consequences will be unpleasant in many ways. One example: mental health knock-ons. But, hey, this is a tiny percentage, a rounding error.

Net net: I am not convinced that investing in AI at this time is the wise move for an 81-year-old dinobaby. Sorry, Kai Wu. You will survive the AI boom. You are, from my viewpoint, a banker. Bankers usually win. But others may not enjoy the benefits you do.

Stephen E Arnold, November 5, 2025

Transformers May Face a Choice: The Junk Pile or Pizza Hut

November 4, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I read a marketing collateral-type of write up in Venture Beat. The puffy delight carries this title “The Beginning of the End of the Transformer Era? Neuro-Symbolic AI Startup AUI Announces New Funding at $750M Valuation.” The transformer is a Googley thing. Obviously with many users of Google’s Googley AI, Google perceives itself as the Big Dog in smart software. Sorry, Sam AI-Man, Google really, really believes it is the leader; otherwise, why would Apple turn to Google for help with its AI challenges? Ah, you don’t know? Too bad, Sam, I feel for you.

[Image]

Thanks, MidJourney. Good enough.

This write up makes clear that someone has $750 million reasons to fund a different approach to smart software. Contrarian brilliance or dumb move? I don’t know. The write up says:

AUI is the company behind Apollo-1, a new foundation model built for task-oriented dialog, which it describes as the "economic half" of conversational AI — distinct from the open-ended dialog handled by LLMs like ChatGPT and Gemini. The firm argues that existing LLMs lack the determinism, policy enforcement, and operational certainty required by enterprises, especially in regulated sectors.

But there’s more:

Apollo-1’s core innovation is its neuro-symbolic architecture, which separates linguistic fluency from task reasoning. Instead of using the most common technology underpinning most LLMs and conversational AI systems today — the vaunted transformer architecture described in the seminal 2017 Google paper "Attention Is All You Need" — AUI’s system integrates two layers:

  • Neural modules, powered by LLMs, handle perception: encoding user inputs and generating natural language responses.

  • A symbolic reasoning engine, developed over several years, interprets structured task elements such as intents, entities, and parameters. This symbolic state engine determines the appropriate next actions using deterministic logic.

This hybrid architecture allows Apollo-1 to maintain state continuity, enforce organizational policies, and reliably trigger tool or API calls — capabilities that transformer-only agents lack.
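The quoted description maps onto a simple pattern. Here is a minimal sketch of the neuro-symbolic split as I read it (my own illustration; Apollo-1’s internals are not public, and every name below is hypothetical):

```python
# Sketch of one neuro-symbolic dialog turn: a neural layer (an LLM) handles
# language in and out; a deterministic symbolic engine owns state, policy,
# and the choice of next action. Placeholder code, not Apollo-1's.

from dataclasses import dataclass, field

@dataclass
class TaskState:
    intent: str | None = None
    entities: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

def neural_parse(utterance: str) -> tuple[str, dict]:
    """LLM-backed perception: map text to an intent plus entities."""
    raise NotImplementedError("Wire an actual model call here.")

def symbolic_next_action(state: TaskState) -> str:
    """Deterministic logic: the same state always yields the same action."""
    if state.intent == "refund" and "order_id" not in state.entities:
        return "ask_for_order_id"      # policy enforced, nothing sampled
    if state.intent == "refund":
        return "call_refund_api"
    return "clarify_request"

def dialog_turn(state: TaskState, utterance: str) -> str:
    state.intent, entities = neural_parse(utterance)   # neural layer
    state.entities.update(entities)
    action = symbolic_next_action(state)               # symbolic layer
    state.history.append((utterance, action))
    return action
```

The selling point in the write up is the second layer’s determinism: given the same task state, the engine always picks the same next step, which is what “policy enforcement” and “operational certainty” cash out to.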

What’s important is that interest in an alternative to the Googley approach is growing. The idea is that maybe — just maybe — Google’s transformer is burning cash and not getting much smarter with each billion-dollar campfire. Consequently, individuals with a different approach warrant a closer look.

The marketing-oriented write up ends this way:

While LLMs have advanced general-purpose dialog and creativity, they remain probabilistic — a barrier to enterprise deployment in finance, healthcare, and customer service. Apollo-1 targets this gap by offering a system where policy adherence and deterministic task completion are first-class design goals.

Researchers around the world are working overtime to find a way to deliver smart software without the Mad Magazine economics of power, CPUs, and litigation associated with the Googley approach. When a practical breakthrough takes place, outfits mired in Googley methods may be working at a job their mothers did not envision for their progeny.

Stephen E Arnold, November 4, 2025

Google Is Really Cute: Push Your Content into the Jaws of Googzilla

November 4, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Google has a new, helpful, clever, and cute service just for everyone with a business Web site. “Google Labs’ Free New Experiment Creates AI-Generated Ads for Your Small Business” lays out the basics of Pomelli. (I think this word means knobs or handles.)

[Image]

A Googley business process designed to extract money and data from certain customers. Thanks, Venice.ai. Good enough.

The cited article states:

Pomelli uses AI to create campaigns that are unique to your business; all you need to do is upload your business website to begin. Google says Pomelli uses your business URL to create a “Business DNA” that analyzes your website images to identify brand identity. The Business DNA profile includes tone of voice, color palettes, fonts, and pictures. Pomelli can also generate logos, taglines, and brand values.

Just imagine Google processing your Web site, its content, images, links, and entities like email addresses, phone numbers, etc. Then using its smart software to create an advertising campaign, ads, and suggestions for the amount of money you should / will / must spend via Google’s own advertising system. What a cute idea!
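As a thought experiment, the ingestion step the quote describes could look something like this. This is guesswork on my part; Google has not published Pomelli’s internals, and every function here is invented for illustration:

```python
# Hypothetical sketch of a "Business DNA" pass over a site. Illustrative
# guesswork only, not Google's code or API.

import re
from urllib.request import urlopen

def fetch_page(url: str) -> str:
    with urlopen(url) as resp:                 # pull the raw HTML
        return resp.read().decode("utf-8", errors="replace")

def extract_business_dna(url: str) -> dict:
    html = fetch_page(url)
    return {
        # brand colors: hex codes found in inline styles or CSS
        "palette": sorted(set(re.findall(r"#[0-9a-fA-F]{6}", html))),
        # fonts named in font-family declarations
        "fonts": sorted(set(re.findall(r"font-family:\s*([^;'\"]+)", html))),
        # the entities the post warns about: emails and phone-like strings
        "emails": sorted(set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", html))),
        "phones": sorted(set(re.findall(r"\+?\d[\d\s().-]{7,}\d", html))),
    }
```

Whatever the real pipeline looks like, the point stands: the uploader hands over exactly the material (colors, fonts, contacts, copy) that also makes first-rate ad-targeting and training data.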

The write up points out:

Google says this feature eliminates the laborious process of brainstorming unique ad campaigns. If users have their own campaign ideas, they can enter them into Pomelli as a prompt. Finally, Pomelli will generate marketing assets for social media, websites, and advertisements. These assets can be edited, allowing users to change images, headers, fonts, color palettes, descriptions, and create a call to action.

How will those tireless search engine optimization consultants and Google certified ad reselling outfits react to this new and still “experimental” service? I am confident that [a] some will rationalize the wonderfulness of this service and sell advisory services about the automated replacement for marketing and creative agencies; [b] some will not understand that it is time to think about a substantive side gig because Google is automating basic business functions and plugging into the customer’s wallet with no pesky intermediary to shave off some bucks; and [c] others will watch as their own sales efforts become less and less productive and then go out of business because adaptation is hard.

Is Google’s idea original? No. Adobe has something called AI Found, according to the write up. Google is not into innovation. Need I remind you that Google advertising has some roots in the Yahoo garden, in bins marked GoTo.com and Overture.com? Also, there is a bank account with some Google money from a settlement about certain intellectual property rights that Yahoo believed Google used as a source of business process inspiration.

As Google moves into automating hooks, it accrues several significant benefits that stand out in Google’s push to help its users:

  1. Crawling costs may be reduced. The users will push content to Google. This may or may not be a significant factor, but the user who updates content provides Google with timely information.
  2. The uploaded or pushed content can be piped into the Google AI system and used to inform the advertising and marketing confection Pomelli. Training data and ad prospects in one go.
  3. The automation of a core business function allows Google to penetrate more deeply into a business. What if that business uses Microsoft products? It strikes me that the Googlers will say, “Hey, switch to Google and you get advertising bonus bucks that can be used to reduce your overall costs.”
  4. The advertising process is a knob that Google can use to pull the user and his cash directly into the Google business process automation scheme.

As I said, cute and also clever. We love you, Google. Keep on being Googley. Pull those users’ knobs, okay.

Stephen E Arnold, November 4, 2025

Hollywood Has to Learn to Love AI. You Too, Mr. Beast

October 31, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Russia’s leadership is good at talking, stalling, and doing what it wants. Is OpenAI copying this tactic? “OpenAI Cracks Down on Sora 2 Deepfakes after Pressure from Bryan Cranston, SAG-AFTRA” reports:

OpenAI announced on Monday [October 20, 2025] in a joint statement that it will be working with Bryan Cranston, SAG-AFTRA, and other actor unions to protect against deepfakes on its artificial intelligence video creation app Sora.

Talking, stalling or “negotiating,” and then doing what it wants may be within the scope of this sentence.

The write up adds via a quote from OpenAI leadership:

“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” Altman said in a statement. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”

This sounds good. I am not sure it will impress teens as much as Mr. Altman’s posture on erotic chats, but the statement sounds good. If I knew Russian, it would be interesting to translate the statement. Then one could compare the statement with some of those emitted by the Kremlin.

[Image]

Producing a big budget commercial film or a Mr. Beast-type video will look very different in 18 to 24 months. Thanks, Venice.ai. Good enough.

Several observations:

  1. Mr. Altman has to generate cash or the appearance of cash. At some point investors will become pushy. Pushy investors can be problematic.
  2. OpenAI’s approach to model behavior does not give me confidence that the company can figure out how to engineer guard rails and then enforce them. Young men and women fiddling with OpenAI can be quite ingenious.
  3. The BBC ran a news program with the news reader as a deep fake. What does this suggest about a Hollywood producer facing financial pressure working out a deal with an AI entrepreneur facing even greater financial pressure? I think it means that humanoids are expendable, first a little bit and then for the entire digital production. Gamification will be too delicious.

Net net: I think I know how this interaction will play out. Sam Altman, the big name stars, and the AI outfits know. The lawyers know. Who doesn’t know? Frankly everyone knows how digital disintermediation works. Just ask a recent college grad with a degree in art history.

Stephen E Arnold, October 31, 2025

AI Is So Hard! We Are Working Hard! Do Not Hit Me in the Head, Mom, Please

October 28, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read a pretty crazy “we are wonderful and hard working” story in the Murdoch Wall Street Journal. The story is “AI Workers Are Putting In 100-Hour Workweeks to Win the New Tech Arms Race.” (This is a paywalled article, so you will have to pay or pray that the permalink has not disappeared. In either event, don’t complain to me. Tell the WSJ’s helpful customer support people or just subscribe at about $800 per year. Mr. Murdoch knows value.)

[Image]

Thanks, Venice.ai. Good enough.

The story makes clear that Silicon Valley AI outfits slot themselves somewhere between the Chinese approach of 9-9-6 and the Japanese goal of karoshi. The shorthand 9-9-6 means that a Chinese professional labors 12 hours a day, from 9 am to 9 pm, six days a week. No wonder some of those gadget factories have suicide nets installed on housing unit floors three and higher. And the Japanese karoshi concept is working oneself to death. At the blue chip consulting company where I labored, it was routine to see heaps of pizza boxes and some employees exiting the elevator crying from exhaustion as I was arriving for another fun day at an egomaniacal American institution.

Get this premise of a pivotal moment in the really important life of a super important suite of technologies that no one can define:

Executives and researchers at Microsoft, Anthropic, Alphabet’s Google, Meta Platforms, Apple and OpenAI have said they see their work as critical to a seminal moment in history as they duel with rivals and seek new ways to bring AI to the masses.

These fellows are inventing the fire and the wheel at the same time. Wow. That is so hard. The researchers are working even harder.

The write up includes this humble brag about those hard working AI professionals:

“Everyone is working all the time, it’s extremely intense, and there doesn’t seem to be any kind of natural stopping point,” Madhavi Sewak, a distinguished researcher at Google’s DeepMind, said in a recent interview.

And after me-too mobile apps, cloud connectors, and ho-hum devices, the Wall Street Journal story makes it clear these AI people are doing something important and they are working really hard. The proof is ordering food on Saturdays:

Corporate credit-card transaction data from the expense-management startup Ramp shows a surge in Saturday orders from San Francisco-area restaurants for delivery and takeout from noon to midnight. The uptick far exceeds previous years in San Francisco and other U.S. cities, according to Ramp.

Okay, I think you get the gist of the WSJ story. Let me offer several observations:

  1. Anyone who wants to work in the important field of AI will have to work hard
  2. You will be involved in making the digital equivalent of fire and the wheel. You have no life because your work is important and hard.
  3. AI is hard.

My view is that smart software is a bundle of technologies that have narrowed to text-centric activities via Google’s “transformer” system and possibly improper use of content obtained without permission from different sources. The people are working hard for three reasons. First, dumping more content into the large language model approach is not improving accuracy. Second, the pressure on the companies is a result of the burning of cash by the train car load and zero hockey stick profit from the investments. Some numbers person explained that an investment bank would get back its millions in investment by squeezing Microsoft. Yeah, and my French bulldog will sprout wings and fly. Third, the moves by OpenAI into erotic services and a Telegram-like approach to building an everything app signal that making money is hard.

What if making sustainable growth and profits from AI is even harder? What will life be like if an AI company with many very smart and hard working professionals goes out of business? Which will be harder: getting another job in AI at one of those juicy compensation packages or working through the issues related to loss of self-esteem, mental and physical exhaustion, and a mom who says, “Just shake it off”?

The WSJ doesn’t address why the pressure is piled on. I will. The companies have to produce money. Yep, cash back for investors and their puppets. Have you ever met with a wealthy garbage collection company owner who wants a return on his multi-million-dollar investment in the digital fire or the digital wheel? Those meetings can be hard.

Here’s a question to end this essay: What if AI cannot be made better using 45 years of technology? What’s the next breakthrough to be? Figuring that out and doing it is closer to the Manhattan Project than taking a burning stick from a lightning strike and cooking a squirrel.

Stephen E Arnold, October 28, 2025

Microsoft, by Golly, Has an Ethical Compass: It Points to Security? No. Clippy? No. Subscriptions? Yes!

October 27, 2025

This essay is the work of a dumb dinobaby. No smart software required.

The elephants are in training for a big fight. Yo, grass, watch out.

“Microsoft AI Chief Says Company Won’t Build Chatbots for Erotica” reports:

Microsoft AI CEO Mustafa Suleyman said the software giant won’t build artificial intelligence services that provide “simulated erotica,” distancing itself from longtime partner OpenAI. “That’s just not a service we’re going to provide,” Suleyman said on Thursday [October 23, 2025] at the Paley International Council Summit in Menlo Park, California. “Other companies will build that.”

My immediate question: “Will Microsoft build tools and provide services allowing others to create erotica or conduct illegal activities; for example, delivery of phishing emails from the Microsoft Cloud to Outlook users?” A quick no seems to be implicit in this report about what Microsoft itself will do. A more pragmatic yes means that Microsoft will have no easy, quick, and cheap way to restrain what a percentage of its users will either do directly or via some type of obfuscation.

[Image]

Microsoft seems to step away from converting the digital Bob into an adult star or Clippy engaging with a user in a “suggestive” interaction.

The write up adds:

On Thursday, Suleyman said the creation of seemingly conscious AI is already happening, primarily with erotica-focused services. He referenced Altman’s comments as well as Elon Musk’s Grok, which in July launched its own companion features, including a female anime character. “You can already see it with some of these avatars and people leaning into the kind of sexbot erotica direction,” Suleyman said. “This is very dangerous, and I think we should be making conscious decisions to avoid those kinds of things.”

I heard that 25 percent of Internet traffic is related to erotica. That seems low based on my estimates which are now a decade old. Sex not only sells; it seems to be one of the killer applications for digital services whether the user is obfuscated, registered, or using mom’s computer.

My hunch is that the AI enhanced services will trip over their own [a] internal resources, [b] the costs of preventing abuse, sexual or criminal, and [c] the leadership waffling.

There is big money in salacious content. Talking about what will and won’t happen in a rapidly evolving area of technology is little more than marketing spin. The proof will be what happens as AI becomes more unavoidable in Microsoft software and services. Those clever teenagers with Windows running on a cheap computer can do some very interesting things. Many of these will be actions that older wizards do not anticipate or simply push to the margins of their very full 9-9-6 day.

Stephen E Arnold, October 27, 2025

Losing Money? No Problem, Says OpenAI.

October 24, 2025

Losing billions? Not to worry.

I wouldn’t want to work on OpenAI’s financial team with these numbers, according to Tech In Asia’s article, “OpenAI’s H1 2025: $4.3b In Income, $13.5b In Loss.” You don’t have to be proficient in math to see that OpenAI is in the red after losing over thirteen billion dollars and only bringing in a little over four billion.

The biggest costs were from the research and development department, which operated at a loss of $6.7 billion. The company spent $2 billion on sales and advertising, plus $2.5 billion in stock-based compensation; both figures were roughly double the prior year’s expenses in those departments. Operating costs were another hit at $7.8 billion, and the company burned $2.5 billion in cash.
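Two of the headline figures pin down a third. A back-of-the-envelope sum (the article’s cost line items overlap, so they do not total neatly on their own):

```python
# H1 2025 figures from the article, in US$ billions.
revenue = 4.3
net_loss = 13.5
# If the loss is simply spending minus revenue, total spending was:
implied_spend = revenue + net_loss
print(implied_spend)   # 17.8
```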

Here’s the current state of things:

“OpenAI paid Microsoft 20% of its revenue under an existing agreement.

At the end of June, the company held roughly US$17.5 billion in cash and securities, boosted by US$10 billion in new funding, and as of the end of July, was seeking an additional US$30 billion from investors.

A tender offer underway values OpenAI’s for-profit arm at about US$500 billion.”

The company isn’t doing well in the numbers, but its technology is certainly in high demand and will put the company back in the black… eventually. We believe that if one thinks it, the “it” will manifest, become true, and make the world very bright.

Whitney Grace, October 24, 2025

Amazon and its Imperative to Dump Human Workers

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Everyone loves Amazon. The local merchants thank Amazon for allowing them to find their future elsewhere. The people and companies dependent on Amazon Web Services rejoiced when the AWS system failed and created an opportunity to do some troubleshooting and vendor shopping. The customer (me) received a pair of ladies’ underwear instead of an AMD Ryzen 5750X. I enjoyed being the butt of jokes about my red, see-through microprocessor. Was I happy!

[Image]

Mice discuss Amazon’s elimination of expensive humanoids. Thanks, Venice.ai. Good enough.

However, I read “Amazon Plans to Replace More Than Half a Million Jobs With Robots.” My reaction was that some employees and people in the Amazon job pipeline were not thrilled to learn that Amazon allegedly will dump humans and embrace robots. What a great idea. No health care! No paid leave! No grousing about work rules! No medical costs! No desks! Just silent, efficient, depreciable machines. Of course there will be smart software. What could go wrong? Whoops. Wrong question after taking out an estimated one third of the Internet for a day. How about this question, “Will the stakeholders be happy?” There you go.

The write up cranked out by the Gray Lady, reporting from confidential documents and other sources, says:

Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects the company can avoid hiring more than 160,000 people in the United States it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers. Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.

Why is Amazon dumping humans? The NYT turns to that institution that found Jeffrey Epstein a font of inspiration. I read this statement in the cited article:

“Nobody else has the same incentive as Amazon to find the way to automate,” said Daron Acemoglu, a professor at the Massachusetts Institute of Technology who studies automation and won the Nobel Prize in economic science last year. “Once they work out how to do this profitably, it will spread to others, too.” If the plans pan out, “one of the biggest employers in the United States will become a net job destroyer, not a net job creator,” Mr. Acemoglu said.

Ah, save money. Keep more money for stakeholders. Who knew? Who could have foreseen this motivation?

What jobs will Amazon provide to humans? Obviously leadership will keep leadership jobs. In my decades of professional work experience, I have never met a CEO who really believes anyone else can do his or her job. Well, the NYT has an answer about what humans will do at Amazon; to wit:

Amazon has said it has a million robots at work around the globe, and it believes the humans who take care of them will be the jobs of the future. Both hourly workers and managers will need to know more about engineering and robotics as Amazon’s facilities operate more like advanced factories.

I wish to close this essay with several observations:

  1. Much of the information in the write up comes from company documents. I am not comfortable with the use of this type of information. It strikes me as a short cut, a bit like Google or a self-made expert saying, “See what I did!”
  2. Many words were used to get one message across: Robots and, by extension, smart software will put people out of work. Basic income time, right? Why not say that?
  3. The reason Amazon wants to dump people is easy to summarize: Humans are expensive. Cut humans, and costs drop (in theory). But are there social costs? Sure, but why dwell on those?

Net net: Sigh. Did anyone reviewing this story note the Amazon online collapse? Perhaps there is a relationship between cost cutting at Amazon and the company’s stability?

Stephen E Arnold, October 22, 2025
