AI Will Kill, and People Will Grow Accustomed to That … Smile

October 30, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I spotted a story in SFGate, which I think was or is part of a dead tree newspaper. What struck me was the photograph (allegedly not a deep fake) of two people looking more than just happy. I sensed a bit of self-satisfaction and confidence. Regardless, both people grace the article “Society Will Accept a Death Caused by a Robotaxi, Waymo Co-CEO Says.” Death, as far back as I can recall as an 81-year-old dinobaby, has never made me happy, but I just accepted it as the way life works. Part of me says that my vibrating waves will continue. I think Blaise Pascal suggested that one should believe in God because what’s the downside. Go, Blaise, a guy who did not get to experience an accident involving a self-driving smart vehicle.


A traffic jam in a major metro area. The cause? A self-driving smart vehicle struck a school bus. But everyone is accustomed to this type of trivial problem. Thanks, MidJourney. Good enough, like some high-tech outfits’ smart software.

But Waymo is a Google confection dating from 2010 if my memory is on the money. Google is a reasonably big company. It brokers, sells, and creates a market for its online advertising business. The cash spun from that revolving door is used to fund great ideas and moon shots. Messrs. Brin, Page, and assorted wizards had some time to kill as they sat in their automobiles creeping up and down Highway 101. The notion of a self-driving car that would allow a very intelligent, multi-tasking driver to do something more productive than become a semi-sentient meat blob sparked an idea: We can rig a car to creep along Highway 101. Cool. That insight spawned what is now known as Waymo.

An estimable Google Waymo expert found himself involved in litigation related to Google’s intellectual property. I had ignored Waymo until Anthony Levandowski founded a company, sold it to Uber, and then ended up in a legal matter that lasted from 2017 to 2019. Publicity, I have heard, whether positive or negative, is good. I knew about Waymo: A Google project, intellectual property, and litigation. Way to go, Waymo.

For me, Waymo appears in some social media posts (allegedly actual factual) when Waymo vehicles get trapped in a dead end in Cow Town. Sometimes the Waymos don’t get out of the way of traffic barriers and just sit purring and beeping. I have heard that some residents of San Francisco have [a] kicked Waymos, [b] sprayed graffiti on them, and/or [c] placed traffic cones on certain roads to befuddle the smart Google software-powered vehicles. From a distance, these scenes look a bit like something from a Mad Max motion picture.

My personal view is that I would never stand in front of a rolling Waymo. I know that [a] Google search results are not particularly useful, [b] Google’s AI outputs crazy information like adding glue to keep cheese on pizza, and [c] Waymos have been involved in traffic incidents, which is reason enough for me to keep my distance.

The cited article says that the Googler said in response to a question about a Waymo hypothetical killing of a person:

“I think that society will,” Mawakana answered, slowly, before positioning the question as an industry wide issue. “I think the challenge for us is making sure that society has a high enough bar on safety that companies are held to.” She said that companies should be transparent about their records by publishing data about how many crashes they’re involved in, and she pointed to the “hub” of safety information on Waymo’s website. Self-driving cars will dramatically reduce crashes, Mawakana said, but not by 100%: “We have to be in this open and honest dialogue about the fact that we know it’s not perfection.” [Emphasis added by Beyond Search]

My reactions to this allegedly true and accurate statement from a Googler are:

  1. I am not confident that Google can be “transparent.” Google, according to one US court, is a monopoly. Google has been fined by the European Union for saying one thing and doing another. The only reason I know about these court decisions is that legal processes released the information. Google did not provide it as part of its commitment to transparency.
  2. Waymos create problems because the Google smart software cannot handle the demands of driving in the real world. The software is good enough, but not good enough to figure out dead ends, actions by human drivers, and potentially dangerous situations. I am aware of fender benders and collisions with fixed objects that have surfaced in Waymo’s 15-year history.
  3. Self-driving cars, specifically Waymos, will injure or kill people. But Waymo cars are safe. So some level of killing humans is okay with Google, regulators, and society in general. What about the family of the person who is killed by good enough Google software? The answer: The lawyers will blame something other than Google. Then they will fight in court because Google has oodles of cash from its estimable online advertising business.

The cited article quotes the Waymo Googler as saying:

“If you are not being transparent, then it is my view that you are not doing what is necessary in order to actually earn the right to make the roads safer,” Mawakana said. [Emphasis added by Beyond Search]

Of course, I believe everything Google says. Why not believe that Waymos will make deaths caused by self-driving vehicles acceptable? Why not believe Google is transparent? Why not believe that Google will make roads safer? Why not?

But I like the idea that people will accept an AI vehicle killing people. Stuff happens, right?

Stephen E Arnold, October 30, 2025

Old Social Media Outfits May Be Vulnerable: Wrong Product, Wrong Time

October 30, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “Social Media Became Television. Gen Z Changed the Channel.” I liked the title. I liked the way data were used to support the assertion about young people. I don’t think the conclusion is accurate.

Let’s look at what the write up asserts.

First, I noted this statement:

Turns out, infinite video from people you don’t know has a name we already invented in 1950: Television.

I think this means that digital services have become the “vast wasteland” that Newton Minow identified when he described the television environment in 1961. I was 17 years old and a freshman in college. My parents acquired a TV set in 1956 when I was 12 years old. I vaguely remember that it sucked. My father watched the news. My mother did not pay any attention as far as I can recall. Not surprisingly, I was not TV oriented, and I am not today.

The write up says:

For twenty years, tech companies optimized every platform toward the same end state. Student directories became feeds. Messaging apps became feeds. AI art tools became feeds. Podcasts moved to video. Newsletters added video. Everything flowed toward the same product: endless short videos recommended by machines.

I agree. But that is a consequence of shifting to digital media. Fast, easy, crispy information becomes “important.”

The write up says via a quote from an entity known as John Burn-Murdoch:

It has gone largely unnoticed that time spent on social media peaked in 2022 and has since gone into steady decline

That’s okay. I don’t know if the statement is true or false. This chart is presented to support the assertion:

[Chart: time spent on social media by age group, showing a 2022 peak and subsequent decline]

The problem is that the downturn in the 16-to-24 line looks like a dip, but a dip from a high level of consumption. And what about the 11-to-15-year-olds, the group I call GenAI? Not on the radar.

This quote supports the assertion that content consumption has shifted from friends to anonymous sources:

Today, only a fraction of time spent on Meta’s services—7% on Instagram, 17% on Facebook—involves consuming content from online “friends” (“friend sharing”). A majority of time spent on both apps is watching videos, increasingly short-form videos that are “unconnected”—i.e., not from a friend or followed account—and recommended by AI-powered algorithms Meta developed as a direct competitive response to TikTok’s rise, which stalled Meta’s growth.

Okay, Meta has growth problems. I would add that Telegram has growth problems. The antics of the Googlers make clear that the firm has growth problems. I would argue that Microsoft has growth problems. Each of these outfits has run out of prospects. Lower birth rates, cost, and the fear-centric environment may have something to do with online behaviors.

My view is that social media and short videos are not going away. New services are going to emerge. Meta-era outfits are just experiencing what happened to the US steel industry when newer technology became available in lower-cost countries. The US auto industry is in a vulnerable position because of China’s manufacturing, labor cost, and regulatory environment.

The flow of digital information is not stopping. Those who lose the ability to think will find ways to pretend to be learning, having fun, and contributing to society. My concern is that what these young people think and actually do is likely to be more surprising than the magnetism of decade-old platforms crafted for users who have moved on.

The buzzy services will be anchored in AI and probably feature mental health, personalized “chats,” and synthetic relationships. Yep, a version of a text chat or radio.

Stephen E Arnold, October 30, 2025

Is It Unfair to Blame AI for Layoffs? Sure

October 30, 2025

When AI exploded onto the scene, we were promised the tech would help workers, not replace them. Then that story began to shift, with companies revealing they do plan to slash expenses by substituting software for humans. But some are skeptical of this narrative, and for good reason. TechSpot asks, “Is AI Really Behind Layoffs, or Just a Convenient Excuse for Companies?” Reporter Rob Thubron writes:

“Several large organizations, including Accenture, Salesforce, Klarna, Microsoft, and Duolingo, have said they are reducing staff numbers as AI helps streamline operations, reduce costs, and increase efficiency. But Fabian Stephany, Assistant Professor of AI & Work at the Oxford Internet Institute, told CNBC that companies are ‘scapegoating’ the technology.”

Stephany notes many companies are still trying to expel the extra humans they hired during the pandemic. Apparently, return-to-office mandates have not driven out as many workers as hoped. The write-up continues:

“Blaming AI for layoffs also has its advantages. Multibillion- and trillion-dollar companies can not only push the narrative that the changes must be made in order to stay competitive, but doing so also makes them appear more cutting-edge, tech-savvy, and efficient in the eyes of potential investors. Interestingly, a study by the Yale Budget Lab a few weeks ago showed there is little evidence that AI has displaced workers more severely than earlier innovations such as computers or the internet. Meanwhile, Goldman Sachs Research has estimated that AI could ultimately displace 6 to 7 percent of the US workforce, though it concluded the effect would likely be temporary.”

The write-up includes a graph Anthropic made in 2023 that compares gaps between actual and expected AI usage by occupation. A few fields overshot the expectation, most notably computer and mathematical jobs. Most, though, fell short. So are workers really losing their jobs to AI? Or is that just a high-tech scapegoat?

Cynthia Murrell, October 30, 2025

Creative Types: Sweating AI Bullets

October 30, 2025

Artists and authors are in a tizzy (and rightly so) because AI is stealing their content. AI algorithms potentially will also put them out of jobs, but the latest data from Nieman Lab explains that people are using chatbots for information seeking rather than content consumption: “People Are Using ChatGPT Twice As Much As They Were Last Year. They’re Still Just As Skeptical Of AI In News.”

Usage of AI chatbots has doubled compared to the previous year. They are being used for tasks formerly reserved for search engines and news outlets. There is still ambivalence about the information they provide.

Here are stats about information consumption trends:

“For publishers worried about declining referral traffic, our findings paint a worrying picture, in line with other recent findings in industry and academic research. Among those who say they have seen AI answers for their searches, only a third say they “always or often” click through to the source links, while 28% say they “rarely or never” do. This suggests a significant portion of user journeys may now end on the search results page.

“Contrary to some vocal criticisms of these summaries, a good chunk of population do seem to find them trustworthy. In the U.S., 49% of those who have seen them express trust in them, although it is worth pointing out that this trust is often conditional.”

When it comes to trust habits, people believe AI for low-stakes, “first pass” information or when the answer only needs to be “good enough,” because AI is trained on large amounts of data. When the stakes are higher, people do further research. There is a “comfort gap” between AI news and human oversight. Very few people implicitly trust AI. People still prefer humans curating and writing the news over a machine. They also don’t mind AI being used for assisting tasks such as editing or translation, but a human touch is still needed on the final product.

Humans are still needed, as is old-fashioned information gathering. The process remains the same; only the tools have changed.

Whitney Grace, October 30, 2025

The Good Old Days of Mainframes? Is Vibe the Answer?

October 29, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I like mainframe stories. I read a very good one titled “That Time I Trashed The Company Mainframe, And The Lesson I Learned.” The incident took place decades ago. The main idea is that a young programmer wrote an innocuous program, stuffed it into a mainframe, and generated instant chaos. The lesson for the young programmer was to check and double-check one’s code. Easy to say.

There were several gems in the write up. I want to highlight these.


The future is in the hands of smart software. Thanks, Venice AI. Good enough.

First, there is a reference to the programming required for the F-16. Keep in mind that these aircraft, which entered service in the early 1980s, are still operational today. Yep, mainframe code. What does that tell you about fixing up software for some F-16s? Some special knowledge is going to be required, and that knowledge is not routinely presented in university computer science courses. My mainframe wizard is darned old and not too peppy. Just whip out your iPhone and bang out some Rust. You can get the F-16 up to speed in no time.

Second, a number of product names appear in the essay. These include:

  • Fortran, yep just like JavaScript
  • Zilog 8000, a definite fave in electrical engineering courses today
  • Job Control Language, easy peasy.

What’s interesting is that many of the major systems built with these tools are still in daily use today.

Third, the write up captures the approach that made those who worked in data centers so darned popular. Emily Post’s mom approved of this behavior:

In 1982, we had no email (executives did, but no one else); therefore, we all had a phone as our primary communication device. When I picked up the phone, all I heard was a lot of swear words and yelling. The IBM mainframe operator was screaming at me for submitting a job that caused his operator console to overflow with errors. He was acting as if I had trashed the entire mainframe and made his life a living hell.

Would some of today’s young data snowflakes melt under this professional exchange? Gee, of course not. Just head to a Googley relaxation pod and chill. You hope.

I wish to quote from the wrap-up of the cited article:

That is the lesson I learned here: reading source code is essential, and I could actually understand a codebase I had never seen before. Confidence-building things like this really helped me move forward in becoming a more professional programmer.

Just keep in mind that smart software is going to do this type of job in the future. There will be absolutely no problems. I am confident that experienced humans will fail their automated hiring tests administered by a tailored large language model. A perfect world with perfect software is arriving.

Stephen E Arnold, October 29, 2025

A Big Waste of Time: Talking about Time to Young People

October 29, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I will be 81 in a matter of days. In 1963, I had a professor at the third-rate institution I attended who required that the class read Sir Francis Bacon’s essay “Of Time.” Snappy stuff. I was 18 years old, and there was one thing I did not think about: I don’t recall worrying about time. I structured my life around what classes I had to attend, what assignments I had to do, when I worked at the root beer stand, and when I had to show up at some family function like a holiday. Time was anchored in immediacy. There was no past except the day before. There was no future except checking tasks off my mental checklist or the notecards for which I became famous. Yes, I still write down things to do on notecards.


An older person provides some advice to a young person about using time and taking risks. The young person listens and responds in an appropriate way for 2025 college graduates. Thanks, Venice.ai. Good enough.

That sporty guy Francis wrote:

“Men fear time, but time fears the pyramids.”

I know that this thought did not resonate for me in 1963, and to be frank, I am not sure it resonates with me now. The pyramids exist, but the data about when they were constructed strike me as fuzzy. I thought about this mismatch between youth, time, and the lack of knowledge about pyramid construction or similar matters when I read “Don’t Waste Your 20s Not Taking Big Risks: You Have It So Easy, and So Little Time.”

The time talk doesn’t work for young people. Time is measured in weird and idiosyncratic ways. The “amount” of time is experiential, contextual, and personal. The write up says:

You don’t appreciate how little time you have to easily go after it and how much harder it’s going to be later.

I am sorry. This does not compute.

The write up continues:

Each year you delay is costing you 10% of the easiest period in your life to take a big risk. So if you are in college or you’re in your 20s and you think that you might want to start a business, completely change your career, move to a new city, do something radical like that, you should do that as soon as humanly possible. Ignore the scared voice in your head. The downside is basically non-existent.

I view this statement as generally bad advice. An informed decision is important. The key word is “informed.” The meaning of “informed” depends on the individual. We are dealing with moving targets. An “informed” decision to a drug addict means one thing. Time to this individual is defined by narcotic need. An “informed” decision for a person who wants to do well in college means doing the work, trying to be organized, and obtaining information to achieve desired outcomes.

“Ignore” is important when one deals with life. “Ignore” is not important in the context of time. I am not sure what time is. I have zero interest in trying to defend Sir Francis’ pyramid time nor do I pay attention to the floundering physicists who argue about what time is.

For a young person today, life is like the world of any young person at any point in history. Telling that young person to not waste time is pointless. In fact, it is a waste of time.

The cited essay wants young people to do stuff, probably backpack in some remote country or start an AI company. The environment today is that the experiential, contextual, and personal cues for “time” come from inputs unique to this point in history. Nevertheless, young people make what they can of their life in the digital fish bowl.

Several observations:

  1. Decisions occur even if the person involved does not go through the weird notecard drill I did and still do. The reality is “stuff happens,” and then young people adapt in a way defined by their experiential, contextual, and personal space.
  2. Young people hear “time” and define it as young people do. That means most have no clue what time means in a philosophical or technical context. Give them an essay to read. Have them write 500 words. Forget it. That worked for me, and it probably works for many young people if they can actually read Bacon’s essay without AI support.
  3. At any point in a human’s life, time is not viewed as part of a big picture. Those words about “using time wisely” tell me more about the person speaking them than they provide valid inputs for another individual. Thanks, but I don’t think about time unless it is anchored in some way.

Net net: As the general environment in the US and the technical business sector seems less warm and fuzzy, making informed decisions works better than watching roses die. Risk must be assessed. If it is not, interesting things happen to people. But time as a big idea, or as a resource to be used in a way that fits into some grand life plan, is something oddly positioned in a TikTok-type, amped-up Hollywood movie world. Making the best decision based on the information one has is a more useful way to mark off life intervals in my opinion. If your inputs come from Twitter, well, that may work for you. For me, not a chance.

Stephen E Arnold, October 29, 2025

Think It and It May Not Happen. Right, OpenAI?

October 29, 2025

The collaboration that was meant to revolutionize how humans interact with technology has hit some snags. Coming up with another iPhone-level idea is tough, it seems. Ars Technica reports, “OpenAI, Jony Ive Struggle with Technical Details on Secretive New AI Gadget.” While he was at Apple, Ive designed some of that company’s most iconic products. When OpenAI bought his startup for $6.5 billion in May, Altman and Ive promised a radical new AI assistant that would eclipse Amazon’s Alexa and Google Assistant: a palm-sized, screenless device that would incorporate real-world context and adapt to each user’s needs.

In order to achieve this grand vision, OpenAI hired at least a dozen Apple device experts on top of the 20-some former Apple employees at Ive’s startup. We are told it also poached some workers from Meta’s Quest headset and smart glasses projects. However, that pool of considerable talent has not ensured smooth sailing. We learn:

“Despite having hardware developed by Ive and his team—whose alluring designs of the iMac, iPod, and iPhone helped turn Apple into one of the most valuable companies in the world—obstacles remain in the device’s software and the infrastructure needed to power it. These include deciding on the assistant’s ‘personality,’ privacy issues, and budgeting for the computing power needed to run OpenAI’s models on a mass consumer device.”

Ah yes, computing power. The reason data centers are springing up like thirsty weeds across the land. While Amazon and Google have plenty of compute to power their assistants, we learn, OpenAI has some catching up to do. As for those privacy issues, the write-up does not elaborate. We would be curious to know those details.

Then there is the issue of the virtual aide’s personality. The write-up tells us:

“Two people familiar with the project said that settling on the device’s ‘voice’ and its mannerisms were a challenge. One issue is ensuring the device only chimes in when useful, preventing it from talking too much or not knowing when to finish the conversation—an ongoing issue with ChatGPT. ‘The concept is that you should have a friend who’s a computer who isn’t your weird AI girlfriend… like [Apple’s digital voice assistant] Siri but better,’ said one person who was briefed on the plans. OpenAI was looking for ‘ways for it to be accessible but not intrusive.’ ‘Model personality is a hard thing to balance,’ said another person close to the project. ‘It can’t be too sycophantic, not too direct, helpful, but doesn’t keep talking in a feedback loop.’”

Yes, one would not want to annoy the end user with cyclic conversations. Or a “weird AI girlfriend.” (By the way, have we given up hope on default male or gender-neutral AI voices? Just wondering.) The article notes a couple of devices that sound similar to Altman and Ive’s vision have not fared well. Humane, a firm funded in part by Altman personally, has ditched its AI pin. Meanwhile, the Friend AI necklace has been widely reviled. Will the Apple vets (eventually) succeed where others have failed? But in OpenAI Land the “Sky” is the limit. Hey, just buy stuff. That sometimes is easier.

Cynthia Murrell, October 29, 2025

Okay Business Strategy Experts: What Now for AI Innovation?

October 29, 2025

As AI forces its way into our lives, it requires us to shift our thinking in several areas. On his Substack, Charlie Graham examines how AI may render a key software strategy obsolete. He declares, “‘Be Different’ Doesn’t Work for Building Products Anymore.” Personally, we believe coming up with something lots of people want or something rich people absolutely must have is the key to success. But it is also wise to develop something that distinguishes oneself from the competition. Or, at least, it was. Now that approach may be wasted effort. Graham writes:

“In the past, the best practice to win in a competitive market was to differentiate yourself – ‘be different,’ as Steve Jobs would say. But product differentiation is no longer effective in this new world.

  • Differentiate on an amazing UX? You used to rely on your awesome UX team for a sustainable advantage. Now, dozens of competitors can screenshot (or soon video) your flow and give it to an AI to reproduce quickly.
  • Differentiate by excelling at one feature? You might get a temporary lead, but it’s now pretty trivial for competitors to get close to your functionality.
  • Differentiate on business model? If it starts working, dozens of your recently started competitors will vibe-code a switch over.
  • Differentiate on ‘proprietary data’? This isn’t the key differentiator it was expected to be, as we are finding data can be simulated or companies can find similar-enough data to get 80% of the way there.

Instead we live in a red ocean where features are copied in days or weeks and everyone is fighting with similar products for the same scraps. So what does work?”

The post proposes several answers to that question. For example, those with large, proprietary distribution networks still have an advantage. Also, obscure, complex niches come with fewer competitors. So does taking on difficult or expensive product integrations. On the darker side, one could guard against customer loss by compounding data lock-in, making migration away as painful as possible. Then there is networking, a consistent necessity; social media and online marketplaces now fill that need. See the post for details on each of these points. What other truisms will AI force us to reconsider?

Cynthia Murrell, October 29, 2025

Google and Anthropic: Sharing a Sleeping Bag. Will They Get Married?

October 28, 2025

This essay is the work of a dumb dinobaby. No smart software required.

“Anthropic to Use 1 Million Google TPUs” contains a couple of interesting, allegedly true factoids. The hook for the story is that Google has worked out a deal for Anthropic to use a few of Google’s smart processors. According to the Analytics India article:

The expansion is valued at ‘tens of billions of dollars,’ with an expected capacity of over a gigawatt coming online in 2026.

The numbers are the first thing that caught my attention. One million chips. Tens of billions. A gigawatt of power. I worked at Halliburton Nuclear years ago. If I remember what one of the Couchmans (either Don or Mel) told me, a gigawatt could power about one million homes simultaneously. Think in terms of San Jose, which has about a million residents, I think.


Thanks, MidJourney. Good enough.

Second, I noted this statement:

The company [Anthropic] reported serving over 300,000 business customers, and the number of large accounts—those generating more than $100,000 in annual revenue—has increased nearly sevenfold in the past year.

The numbers are smaller. 300,000 business customers. What’s a business customer? Not defined. Dun & Bradstreet and other company tracking services split businesses up by revenue, business sector, and other slices. Okay, 300,000. The estimable US Small Business Administration has kicked out a number of 36 million businesses in the US. (Is this number correct? What? You are doubting the US government data? Incredible.) The point is that Anthropic has about 0.83 percent of this SBA number of businesses. Now let’s assume that Anthropic gets four times its 300,000 business users in the next two years. At that scale, 1.2 million business users would require roughly four gigawatts, enough power for about four million homes. No big deal, right? The only hitch in the git-along is that Anthropic-type growth could move more quickly than the folks who have to build, expand, or invent new energy sources. You see the problem. Big numbers don’t match the reality of power availability.
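Here is a minimal sketch, in Python, of the back-of-envelope arithmetic behind those figures. The one-gigawatt-per-million-homes rule of thumb, the 300,000 customer count, and the SBA’s 36 million businesses come from the essay; the assumption that power scales linearly with customers is mine, for illustration only.

    # Back-of-envelope arithmetic using the essay's figures.
    # All inputs are the essay's assumptions, not audited data.

    HOMES_PER_GIGAWATT = 1_000_000   # rule of thumb: 1 GW powers ~1 million homes
    anthropic_customers = 300_000    # business customers Anthropic reports
    us_businesses = 36_000_000       # SBA's count of US businesses
    current_gigawatts = 1            # capacity reportedly coming online in 2026

    # Anthropic's slice of the US business population
    share = anthropic_customers / us_businesses
    print(f"Share of US businesses: {share:.2%}")   # ~0.83%

    # Quadruple the customer base and assume power scales linearly
    growth_factor = 4
    future_gigawatts = current_gigawatts * growth_factor
    future_homes = future_gigawatts * HOMES_PER_GIGAWATT
    print(f"Projected demand: {future_gigawatts} GW, enough for ~{future_homes:,} homes")

The linear-scaling step is the weak link: real power demand tracks workload, not headcount. But even as a sketch, the mismatch between the growth curve and the pace of building new generation capacity is visible.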

But Anthropic and Google are in one of those circular deals. Google invests in Anthropic; Anthropic buys a few processors. Analytics India says:

Google has been involved in various funding efforts for Anthropic, and a report from The New York Times earlier this year stated that it owns 14% of Anthropic, citing legal findings.

Several observations:

  1. This AI sector is into really big numbers. Most people cannot think about really big numbers. Most people think about a $300 property tax bill or paying for groceries at the price leader. (Did you think Whole Foods or Kroger?)
  2. The diffusion of AI to a tiny percentage of US businesses creates a fairly hefty need for power, chips, and assorted infrastructure. That’s good for those in that business.
  3. The power generation shortfall is a bit of a pothole, a deep pothole.

So what? Anthropic alone will require power equivalent to keeping the lights on in four LAs or one Istanbul.

Do you see a problem? I don’t because I believe that magical Google and Anthropic can solve any problem.

A rough calculation is that a human brain runs on about 20 watts; 20 watts divided by one gigawatt is 0.000002%. That’s efficient. But I have confidence in Google and Anthropic. No problem is too big or complex for these bright, energetic professionals.

Stephen E Arnold, October 28, 2025

AI Is So Hard! We Are Working Hard! Do Not Hit Me in the Head, Mom, Please

October 28, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read a pretty crazy “we are wonderful and hard working” story in the Murdoch Wall Street Journal. The story is “AI Workers Are Putting In 100-Hour Workweeks to Win the New Tech Arms Race.” (This is a paywalled article, so you will have to pay or pray that the permalink has not disappeared. In either event, don’t complain to me. Tell the WSJ’s helpful customer support people or just subscribe at about $800 per year. Mr. Murdoch knows value.)


Thanks, Venice.ai. Good enough.

The story makes clear that Silicon Valley AI outfits slot themselves somewhere between the Chinese approach of 9-9-6 and the Japanese goal of karoshi. The shorthand 9-9-6 means that a Chinese professional labors 12 hours a day, from 9 am to 9 pm, six days a week. No wonder some of those gadget factories have suicide nets installed on housing unit floors three and higher. And the Japanese karoshi concept is working oneself to death. At the blue-chip consulting company where I labored, it was routine to see heaps of pizza boxes and some employees exiting the elevator crying from exhaustion as I was arriving for another fun day at an egomaniacal American institution.

Get this premise of a pivotal moment in the really important life of a super important suite of technologies that no one can define:

Executives and researchers at Microsoft, Anthropic, Alphabet’s Google, Meta Platforms, Apple and OpenAI have said they see their work as critical to a seminal moment in history as they duel with rivals and seek new ways to bring AI to the masses.

These fellows are inventing the fire and the wheel at the same time. Wow. That is so hard. The researchers are working even harder.

The write up includes this humble brag about those hard working AI professionals:

“Everyone is working all the time, it’s extremely intense, and there doesn’t seem to be any kind of natural stopping point,” Madhavi Sewak, a distinguished researcher at Google’s DeepMind, said in a recent interview.

And after me-too mobile apps, cloud connectors, and ho-hum devices, the Wall Street Journal story makes it clear these AI people are doing something important and they are working really hard. The proof is ordering food on Saturdays:

Corporate credit-card transaction data from the expense-management startup Ramp shows a surge in Saturday orders from San Francisco-area restaurants for delivery and takeout from noon to midnight. The uptick far exceeds previous years in San Francisco and other U.S. cities, according to Ramp.

Okay, I think you get the gist of the WSJ story. Let me offer several observations:

  1. Anyone who wants to work in the important field of AI will have to work hard
  2. You will be involved in making the digital equivalent of fire and the wheel. You have no life because your work is important and hard.
  3. AI is hard.

My view is that smart software is a bundle of technologies that has narrowed to text-centric activities via Google’s “transformer” system and possibly improper use of content obtained without permission from different sources. The people are working hard for three reasons. First, dumping more content into the large language model approach is not improving accuracy. Second, the pressure on the companies is a result of burning cash by the train car load with zero hockey-stick profit from the investments. Some numbers person explained that an investment bank would get back its millions in investment by squeezing Microsoft. Yeah, and my French bulldog will sprout wings and fly. Third, the moves by OpenAI into erotic services and a Telegram-like approach to building an everything app signal that making money is hard.

What if making sustainable growth and profits from AI is even harder? What will life be like if an AI company with many very smart and hard working professionals goes out of business? Which will be harder: getting another job in AI at one of those juicy compensation packages or working through the issues related to loss of self-esteem, mental and physical exhaustion, and a mom who says, “Just shake it off”?

The WSJ doesn’t address why the pressure is piled on. I will. The companies have to produce money. Yep, cash back for investors and their puppets. Have you ever met with a wealthy garbage collection company owner who wants his multimillion-dollar investment in the digital fire or the digital wheel to pay off? Those meetings can be hard.

Here’s a question to end this essay: What if AI cannot be made better using 45 years of technology? What’s the next breakthrough to be? Figuring that out and doing it is closer to the Manhattan Project than taking a burning stick from a lightning strike and cooking a squirrel.

Stephen E Arnold, October 28, 2025
