Want Clicks? Use Sex. It Works

October 15, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Imagine

I read a number of gloomy news articles today. The AI balloon will destroy the economy. Chicago is no longer that wonderful town, but was it ever. Telegram says it will put AI into its enchanting Messenger service. Plus, I read a New York Times’ story titled “Elon Musk Gambles on Sexy A.I. Companions.” That brilliant world leading technologist knows how to get clicks: Sex. What an idea. No one has thought of that before! (Oh, the story lurks behind a paywall. Another brilliant idea for 2025.)

image

Thanks Venice.ai. Good enough.

The write up says:

Mr. Musk, already known for pushing boundaries, has broken with mainstream norms and demonstrated the lengths to which he will go to gain ground in the A.I. field, where xAI has lagged behind more established competitors. Other A.I. companies, such as Meta or OpenAI, have shied away from creating chatbots that can engage in sexual conversations because of the reputational and regulatory risks.

Elon Musk has not. The idea of allow users of a social media, smart software game that unwraps more explicit challenges is a good one. It is not as white hot as a burning Tesla Cybertruck with its 12-volt powered automatic doors, but the idea is steamy.

The write up says:

The billionaire has urged his followers on X to try conversing with the sexy chatbots, sharing a video clip on X of an animated Ani dancing in underwear.

That sounds exciting. For a dinobaby like me, I prefer people fully clothed and behaving according to the conventions I learned in college when i took the required course “College Social Customs.” I admit that I was one of the few people on campus who took these “customs” to heart, The makings of a dinobaby were apparently rooted in my make up. Others in the class went to a bar to get drunk and flout as many of the guidelines as possible. Mr. Musk seems to share a kindred spirit with those in my 1962 freshman in college behavior course.

The write up says:

Mr. Musk has said the A.I. companions will help people strengthen their real-world connections and address one of his chief anxieties: population decline that he warns could lead to civilizational collapse.

My hunch is that the idea is for the right kind of people to have babies. Mr. Musk and Pavel Durov (founder of Telegram) have sired lots of kiddies. These kiddies are probably closer to what Mr. Musk wants to pop out of his sexual incubator.

The write up says:

Mr. Musk’s chatbots lack some sexual content limitations imposed by other chatbot creators that do allow some illicit conversations, users said. Nomi AI, for example, blocks some extreme material, limiting conversations to something more akin to what would be allowed on the dating app Tinder.

Yep, I get the point. Sex sells. Want sex? Use Grok and publicize the result on X.com.

How popular will this Grok feature be among the more young-at-heart users of Grok? Answer: Popular. Will other tech bro type outfits emulate Mr. Musk’s innovative marketing method? Answer: Mr. Musk is a follower. Just check out some of the services offered by certain online adult services.

What a wonderful online service. Perfect for 2025 and inclusion in a College Social Customs class for idea-starved students. No tavern required. Just a mobile device. Ah, innovation.

Stephen E Arnold, October 15, 2025

Who Is Afraid of the Big Bad AI Wolf? Mr. Beast Perhaps?

October 14, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The story “MrBeast Warns of ‘Scary Times’ as AI Threatens YouTube Creators” is apparently about You Tube creators. Mr. Beast, a notable YouTube personality, is the source of the information. Is the article about YouTube creators? Yep, but it is also about Mr. Beast.

image

The write up says:

MrBeast may not personally face the threat of being replaced by AI as his brand thrives on large-scale, real-world stunts that rely on authenticity and human emotion. But his concern runs deeper than self-preservation. It’s about the millions of smaller creators who depend on platforms like YouTube to make a living. As one of the most influential figures on the internet, his words carry weight. The 27-year-old recently topped Forbes’ 2025 list of highest-earning creators, earning roughly $85 million and building a following of over 630 million across platforms.

Okay, Mr. Beast’s fame depended on YouTube. He is still in the YouTube fold. However, he has other business enterprises. He recognizes that smart software could create problems for creators.

I think smart software is another software tool. It is becoming a utility like a PDF editor.

The problem with Mr. Beast’s analysis is that it appears to be focused on other creators. I am not so sure. I think the comments presented in the write up reveal more about Mr. Beast than they do about the “other” creators. One example is:

“When AI videos are just as good as normal videos, I wonder what that will do to YouTube and how it will impact the millions of creators currently making content for a living… scary times,” MrBeast — whose real name is Jimmy Donaldson — wrote on X.

I am no expert on human psychology, but I see the use of the word “impact” and “scary” as a glimpse of what Mr. Beast is thinking. His production costs allegedly rival those of traditional commercial video outfits. The ideas and tropes have become increasingly strained and bizarre. YouTube acts in a unilateral way and outputs smarm to the creators desperate to know why the flow of their money has been reduced if not cut off. Those disappearing van life videos are just one example of how video magnets can melt down and be crushed under the wheels of the Google bus.

My thought is that Google will use AI to create alterative Mr. Beast-type videos with AI. Then squeeze the Mr. Beast type creators and let the traffic flow to Mother  Google. No royalties required, so Google wins. Mr. Beast-type creators can find their future and money elsewhere. Simple.

Stephen E Arnold, October 14, 2025

Blue Chip Consultants: Spin, Sizzle, and Fizzle with AI

October 14, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Can one quantify the payoffs from AI? Not easily. So what’s the solution? How about a “free” as in “marketing collateral” report from the blue-chip consulting firm McKinsey & Co. (You know that outfit because it figured out how to put Eastern Kentucky, Indiana, and West Virginia on the map.)

I like company reports like “Upgrading Software Business Models to Thrive in the AI Era.” These combine the weird spirit of Ezra Pound with used car sales professionals and blend in a bit of “we know more” rhetoric. Based on my experience, this is a winning combination for many professionals. This document speaks to those in the business of selling software. Today software does not come in boxes or as part of the deal when one buys a giant mainframe. Nope, software is out there. In the cloud. Companies use cloud solutions because — as consultants explained years ago — an organization can fire most technical staff and shift to pay-as-you go services. That big room that held the mainframe can become a sublease. That’s efficiency.

This particular report is the work of four — count them — four people who can help your business. Just bring money and the right attitude. McKinsey is selective. That’s how it decided to enter the pharmaceutical consulting business. Here’s a statement the happy and cooperative group of like-minded consultants presented:

while global enterprise spending on AI applications has increased eightfold over the last year to close to $5 billion, it still only represents less than 1 percent of total software application spending.

Converting this consultant speak to my style of English, the four blue chippers are trying to say that AI is not living up to the hype. Why? A software company today is having a tough time proving that AI delivers. The lack of fungible proof in the form of profits means that something is not going according to plan. Remember: The plan is to increase the revenue from software infused with AI.

Options include the exciting taxi meter approach. This means that the customers of enterprise software doesn’t know how much something costs upfront. Invoices deliver the cost. Surprise is not popular among some bean counters. Amazon’s AWS is in the surprise business. So is Microsoft Azure. However, surprise is not a good approach for some customers.

Licensees of enterprise software with that AI goodness mixed in could balk at paying fees for computational processes outside the control of the software licensee. This is the excitement a first year calculus student experiences when the values of variables are mysterious or unknown. Once one wrestles the variables to the ground, then one learns that the curve never reaches the x axis. It’s infinite, sport.

Pricing AI is a killer. The China-linked folks at Deepseek and its fellow travelers are into the easy, fast, and cheap approach to smart software. One can argue whether the intellectual property is original. One cannot argue that cheap is a compelling feature of some AI solutions. Cue the song: Where or When with the lines:

It seems we stood and talked like this before
We looked at each other in the same way then
But I can’t remember where or QWEN…

The problem is that enterprise software with AI is tough to price. The enterprise software company’s engineering and development costs go up. Their actual operating costs rise. The enterprise software company has to provide fungible proof that the bundle delivers value to warrant a higher price. That’s hard. AI is everywhere, and quite a few services are free, cheap or, or do it yourself code.

McKinsey itself does not have an answer to the problem the report from four blue chip consultants has identified. The report itself is start evidence that explaining AI pricing, operational, and use case data is a work in progress. My view is that:

  1. AI hype painted a picture of wonderful, easily identifiable benefits. That picture is a bit like a AI generated video. It is momentarily engaging but not real.
  2. AI state of the art today is output with errors. Hey, that sounds special when one is relying on AI for a medical diagnosis for your child or grandchild or managing your retirement account.,
  3. AI is a utility function. Software utilities get bundled into software that does something for which the user or licensee is willing to pay. At this time, AI is a work in progress, a novelty, and a cloud of unknowing. At some point, the fog will clear, but it won’t happen as quickly as the AI furnaces burn cash.
  4. What to sell, to whom, and pricing are problems created by AI. Asking smart software what to do is probably not going to produce a useful answer when the enterprise market is in turmoil, wallowing in uncertainty, and increasingly resistant to “surprise” pricing models.

Net net: McKinsey itself has not figured out AI. The idea is that clients will hire blue chip consultants to figure out AI. Therefore, the more studies and analyses blue chip consultants conduct, the closer these outfits will come to an answer. That’s good for the consulting business. The enterprise software companies may hire the blue chip consultants to answer the money and value questions. The bad news is that the fate of AI in enterprise software developers is in the hands of the licensees. Based on the McKinsey report, these folks are going slow. The mismatch among these players may produce friction. That will be exciting.

Stephen E Arnold, October 14, 2025

AI and America: Not a Winner It Seems

October 13, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Los Alamos National Laboratory perceives itself as one of the world’s leading science and research facilities. Jason Pruet is the Director of Los Alamos’s National Security AI Office and he was interviewed in “Q&A With Jason Pruet.” Pruet’s job is to prepare the laboratory for AI integration. He used to view AI as another tool for advancement, but Pruet now believes AI would disrupt the fundamental landscape of science, security, and more.

In the interview, Pruet states that the US government invested more in AI than any time in the past. He compared this investment to the World War II paradigm of science for the public good. Pruet explained that before the war, the US government wasn’t involved with science. After the war, Los Alamos shifted the dynamic and shaped modern America’s dedication to science, engineering, etc.

One of the biggest advances in AI technology is transformer architecture that allows huge progress to scale AI models, especially for mixing different information types. Pruet said that China is treating AI like a general purpose technology (i.e electricity) and they’ve launched a National AI strategy. The recent advances in AI are changing power structures. It’s turning into a new international arms race but that might not be the best metaphor:

“[Pruet:] All that said, I’m increasingly uncomfortable viewing this through the lens of a traditional arms race. Many thoughtful and respected people have emphasized that AI poses enormous risks for humanity. There are credible reports that China’s leadership has come to the same view, and that internally, they are trying to better balance the potential risks rather than recklessly seek advantage. It may be that the only path for managing these risks involves new kinds of international collaborations and agreements.”

Then Pruet had this to say about the state of the US’s AI development:

“Like we’re behind. The ability to use machines for general-purpose reasoning represents a seminal advance with enormous consequences. This will accelerate progress in science and technology and expand the frontiers of knowledge. It could also pose disruptions to national security paradigms, educational systems, energy, and other foundational aspects of our society. As with other powerful general-purpose technologies, making this transition will depend on creating the right ecosystem. To do that, we will need new kinds of partnerships with industry and universities.”

The sentiment seems to be focused on going faster and farther than any other country in the AI game. With the circular deals OpenAI has been crafting, AI seems to be more about financial innovation than technical innovation.

Whitney Grace, October 13, 2025

Parenting 100: A Remedial Guide to Raising Children

October 13, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I am not sure what’s up this week (October 6 to 10, 2025). I am seeing more articles about the impact of mobile devices, social media, doom scrolling, and related cheerful subjects in my newsfeed. A representative article is “Lazy Parents Are Giving Their Toddlers ChatGPT on Voice Mode to Keep Them Entertained for Hours.”

Let’s take a look at a couple of passages that I thought were interesting:

with the rise of human-like AI chatbots, a generation of “iPad babies” could seem almost quaint: some parents are now encouraging their kids to talk with AI models, sometimes for hours on end…

I get it. Parents are busy today. If they are lucky enough to have jobs, automatic meeting services keep them hopping. Then there is the administrivia of life. Children just add to the burden. Why not stick the kiddie in a playpen with an iPad. Tim Apple will be happy.

What’s the harm? How about this factoid (maybe an assertion from smart software?) from the write up:

AI chatbots have been implicated in the suicides of several teenagers, while a wave of reports detail how even grown adults have become so entranced by their interactions with sycophantic AI interlocutors that they develop severe delusions and suffer breaks with reality — sometimes with deadly consequences.

Okay, bummer. The write up includes a hint of risk for parents about these chat-sitters; to wit:

Andrew McStay, a professor of technology and society at Bangor University, isn’t against letting children use AI — with the right safeguards and supervision. But he was unequivocal about the major risks involved, and pointed to how AI instills a false impression of empathy.

Several observations seem warranted:

  1. Which is better? Mom and dad interacting with the kiddo. Maybe grandma could be a good stand in? Or, letting the kid tune in and drop out?
  2. Imagine sending a  chat surfer to school. Human interaction is not going to be as smooth and stress free as having someone take the kiddo’s animal crackers and milk or pouting until kiddo can log on again.
  3. Visualize the future: Is this chat surfer going to be a great employee and colleague? Answer: No.

I find it amazing that decades after these tools became available that people do not understand the damage flowing bits do to thinking, self esteem, and social conventions. Empathy? Sure, just like those luminaries at Silicon Valley type AI companies. Warm, caring, trustworthy.

Stephen E Arnold, October 13, 2025

Weaponization of LLMs Is a Thing. Will Users Care? Nope

October 10, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

A European country’s intelligence agency learned about my research into automatic indexing. We did a series of lectures to a group of officers. Our research method, the results, and some examples preceded a hands on activity. Everyone was polite. I delivered versions of the lecture to some public audiences. At one event, I did a live demo with a couple of people in the audience. Each followed a procedure, and I showed the speed with which the method turned up in the Google index. These presentations took place in the early 2000s. I assumed that the behavior we discovered would be disseminated and then it would diffuse. It was obvious that:

  1. Weaponized content would be “noted” by daemons looking for new and changed information
  2. The systems were sensitive to what I called “pulses” of data. We showed how widely used algorithms react to sequences of content
  3. The systems would alter what they would output based on these “augmented content objects.”

In short, online systems could be manipulated or weaponized with specific actions. Most of these actions could be orchestrated and tuned to have maximum impact. One example in my talks was taking a particular word string and making it turn up in queries where one would not expect that behavior. Our research showed that a few as four weaponized content objects orchestrated in a specific time interval would do the trick. Yep, four. How many weaponized write ups can my local installation of LLMs produce in 15 minutes? Answer: Hundreds. How long does it take to push those content objects into information streams used for “training.” Seconds.

10 10 fish in fish bowl

Fish live in an environment. Do fish know about the outside world? Thanks, Midjourney. Not a ringer but close enough in horseshoes.

I was surprised when I read “A Small Number of Samples Can Poison LLMs of Any Size.” You can read the paper and work through the prose. The basic idea is that selecting or shaping training data or new inputs to recalibrate training data can alter what the target system does. I quite like the phrase “weaponize information.” Not only does the method work, it can be automated.

What’s this mean?

The intentional selection of information or the use of a sample of information from a domain can generate biases in what the smart software knows, thinks, decides, and outputs. Dr. Timnit Gebru and her parrot colleagues were nibbling around the Google cafeteria. Their research caused the Google to put up a barrier to this line of thinking. My hunch is that she and her fellow travelers found that content that is representative will reflect the biases of the authors. This means that careful selection of content for training or updating training sets can be steered. That’s what the Anthropic write up make clear.

Several observations are warranted:

  1. Whoever selects training data or the information used to update and recalibrate training data can control what is displayed, recommended, or included in outputs like recommendations
  2. Users of online systems and smart software are like fish in a fish bowl. The LLM and smart software crowd are the people who fill the bowl and feed the fish. Fish have a tough time understanding what’s outside their bowl. I don’t like the word “bubble” because these pop. An information fish bowl is tough to escape and break.
  3. As smart software companies converge into essentially an oligopoly using the types of systems I described in the early 2000s with some added sizzle from the Transformer thinking, a new type of information industrial complex is being assembled on a very large scale. There’s a reason why Sam AI-Man can maintain his enthusiasm for ChatGPT. He sees the potential of seemingly innocuous functions like apps within ChatGPT.

There are some interesting knock on effects from this intentional or inadvertent weaponization of online systems. One is that the escalating violent incidents are an output of these online systems. Inject some René Girard-type content into training data sets. Watch what those systems output. “Real” journalists are explaining how they use smart software for background research. Student uses online systems without checking to see if the outputs line up with what other experts say. What about investment firms allowing smart software to make certain financial decisions.

Weaponize what the fish live in and consume. The fish are controlled and shaped by weaponized information. How long has this quirk of online been known? A couple of decades, maybe more. Why hasn’t “anything” been done to address this problem? Fish just ask, “What problem?”

Stephen E Arnold, October x, 2025

I spotted

ChatGPT Finds Humans Useful

October 10, 2025

OpenAI is chasing consumers during primetime football games, we learn from 9to5Mac’s piece, “Pressure Mounts on Siri as ChatGPT Ads Start Airing on Primetime TV.” The first of these ads premiered during NFL Primetime. We are told the campaign focuses on ways people are using ChatGPT in their everyday lives, like creating recipes or fitness plans. So wholesome! (We assume they are leaving out the many downsides of overreliance on the tech.) Does this mean firm’s second Super Bowl ad will be more down to earth than its first one?

Writer Ben Lovejoy asserts this campaign highlights how embarrassingly far Apple’s Siri is behind ChatGPT. iPhone users have the option to get an answer from ChatGPT when Siri fails them. But, as Lovejoy notes, the permission prompt serves as a spotlight on Siri’s inadequacies.

The ad campaign comes with an interesting caveat. We learn:

“With growing concern in the creative sector around the use of AI, the company has gone out of its way to ensure that no artificial intelligence was used for the actual creative work. Creative Review reports: Crucially, the campaign was created largely through human endeavour, with the team at OpenAI noting that: ‘Human craft was central to the campaign’s creation. Every frame was shot on film, shaped by directors, photographers, producers and many more masters of craft.’ That ‘largely’ rider reflects that ChatGPT was used for some background work, with ‘streamlining shot lists and organising schedules’ given as examples.”

Will this acknowledgement that real life is better than AI fakery backfire on the premier AI company? And no Sora?

Cynthia Murrell, October 10, 2025

AI Has a Secret: Humans Do the Work

October 10, 2025

A key component of artificial intelligence output is not artificial at all. The Guardian reveals “How Thousands of ‘Overworked, Underpaid’ Humans Train Google’s AI to Seem Smart.”  From accuracy to content moderation, Google Gemini and other AI models rely on a host of humans employed by third-party contractors. Humans whose jobs get harder and harder as they are pressured to churn through the work faster and faster. Gee, what could go wrong?

Reporter Varsha Bansal relates:

“Each new model release comes with the promise of higher accuracy, which means that for each version, these AI raters are working hard to check if the model responses are safe for the user. Thousands of humans lend their intelligence to teach chatbots the right responses across domains as varied as medicine, architecture and astrophysics, correcting mistakes and steering away from harmful outputs.”

Very important work—which is why companies treat these folks as valued assets. Just kidding. We learn:

“Despite their significant contributions to these AI models, which would perhaps hallucinate if not for these quality control editors, these workers feel hidden. ‘AI isn’t magic; it’s a pyramid scheme of human labor,’ said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. ‘These raters are the middle rung: invisible, essential and expendable.’”

And, increasingly, rushed. The write-up continues:

“[One rater’s] timer of 30 minutes for each task shrank to 15 – which meant reading, fact-checking and rating approximately 500 words per response, sometimes more. The tightening constraints made her question the quality of her work and, by extension, the reliability of the AI. In May 2023, a contract worker for Appen submitted a letter to the US Congress that the pace imposed on him and others would make Google Bard, Gemini’s predecessor, a ‘faulty’ and ‘dangerous’ product.”

And that is how we get AI advice like using glue on pizza or adding rocks to one’s diet. After those actual suggestions went out, Google focused on quality over quantity. Briefly. But, according to workers, it was not long before they were again told to emphasize speed over accuracy. For example, last December, Google announced raters could no longer skip prompts on topics they knew little about. Think workers with no medical expertise reviewing health advice. Not great. Furthermore, guardrails around harmful content were perforated with new loopholes. Bansal quotes Rachael Sawyer, a rater employed by Gemini contractor GlobalLogic:

“It used to be that the model could not say racial slurs whatsoever. In February, that changed, and now, as long as the user uses a racial slur, the model can repeat it, but it can’t generate it. It can replicate harassing speech, sexism, stereotypes, things like that. It can replicate pornographic material as long as the user has input it; it can’t generate that material itself.”

Lovely. It is policies like this that leave many workers very uncomfortable with the software they are helping to produce. In fact, most say they avoid using LLMs and actively discourage friends and family from doing so.

On top of the disillusionment, pressure to perform full tilt, and low pay, raters also face job insecurity. We learn GlobalLogic has been rolling out layoffs since the beginning of the year. The article concludes with this quote from Sawyer:

‘I just want people to know that AI is being sold as this tech magic – that’s why there’s a little sparkle symbol next to an AI response,’ said Sawyer. ‘But it’s not. It’s built on the backs of overworked, underpaid human beings.’

We wish we could say we are surprised.

Cynthia Murrell, October 10, 2025

AI Embraces the Ethos of Enterprise Search

October 9, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

In my files, I have examples of the marketing collateral generated by enterprise search vendors. I have some clippings from trade publications and other odds and ends dumped into my enterprise search folder. One of these reports is “Fastgründer John Markus Lervik dømt til fengsel.” The article is no longer online, but you can read my 2014 summary at this Beyond Search link. The write up documents an enterprise search vendor who used some alleged accounting methods to put a shine on the company. In 2008, Microsoft purchased Fast Search & Transfer putting an end to this interesting company.

image

A young CPA MBA BA (with honors) is jockeying a spreadsheet. His father worked for an enterprise search vendor based in the UK. His son is using his father’s template but cannot get the numbers to show positive cash flows across six quarters. Thanks, Venice.ai. Good enough.

Why am I mentioning Fast Search & Transfer? The information in Fortune Magazine’s “‘There’s So Much Pressure to Be the Company That Went from Zero to $100 Million in X Days’: Inside the Sketchy World of ARR and Inflated AI Startup Accounting” jogged my memory about Fast Search and a couple of other interesting companies in the enterprise search sector.

Enterprise search was the alleged technology to put an organization’s information at the fingertips of employees. Enterprise search would unify silos of information. Enterprise search would unlock the value of an organization’s “hidden” or “dark” data. Enterprise search would put those hours wasted looking for information to better use. (IDC was the cheerleader for the efficiency payoff from enterprise search.)

Does this sound familiar? It should every vendor applying AI to an organization’s information challenges is either recycling old chestnuts from the Golden Age of Enterprise Search or wandering in the data orchard discovering these glittering generalities amidst nuggets of high value jargon.

The Fortune article states:

There’s now a massive amount of pressure on AI-focused founders, at earlier stages than ever before: If you’re not generating revenue immediately, what are you even doing? Founders—in an effort to keep up with the Joneses—are counting all sorts of things as “long-term revenue” that are, to be blunt, nothing your Accounting 101 professor would recognize as legitimate. Exacerbating the pressure is the fact that more VCs than ever are trying to funnel capital into possible winners, at a time where there’s no certainty about what evaluating success or traction even looks like.

Would AI start ups fudge numbers? Of course not. Someone at the start up or investment firm took a class in business ethics. (The pizza in those study groups was good. Great if it could be charged to another group member’s Visa without her knowledge. Ho ho ho.)

The write up purses the idea that ARR or annual recurring revenue is a metric that may not reflect the health of an AI business. No kidding? When an outfit has zero revenue resulting from dumping investor case into a burning dumpster fire, it is difficult for me to understand how people see a payoff from AI. The “payoff” comes from moving money around, not from getting cash from people or organizations on a consistent basis. Subscription-like business models are great until churn becomes a factor.

The real point of the write up for me is that financial tricks, not customers paying for the product or service, are the name of the game. One big enterprise search outfit used “circular” deals to boost revenue. I did some small work for this outfit, so I cannot identify it. The same method is now part of the AI revolution involving Nvidia, OpenAI, and a number of other outfits. Whose money is moving? Who gets it? What’s the payoff? These are questions not addressed in depth in the information to which I have access?

I think financial intermediaries are the folks taking home the money. Some vendors may get paid like masters of black art accounting. But investor payoff? I am not so sure. For me the good old days of enterprise search are back again, just with bigger numbers and more impactful financial consequences.

As an aside, the Fortune article uses the word “shit” twice. Freudian slip or a change in editorial standards at Fortune? That word was applied by one of my team when asked to describe the companies I profiled in the Enterprise Search Report I wrote many years ago. “Are you talking about my book or enterprise search?” I asked. My team member replied, “The enterprise search thing.”

Stephen E Arnold, October 2025

With or Without AI: Winners Win and Losers Lose

October 8, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Some outfits are just losers. That’s the message I got after reading “AI Magnifies Your Teams’ Strengths – and Weaknesses, Google Report Finds.” Keep in mind that this report — the DORA Report or DevOps Research & Assessment — is Googley. The write up makes clear that Google is not hallucinating. The outstanding company:

surveyed 5,000 software development professionals across industries and followed up with more than 100 hours of interviews. It may be one of the most comprehensive studies of AI’s changing role in software development, especially at the enterprise level.

image

Winners with AI win bigger. Losers with AI continue to lose. Is that sad team mascot one of Sam Altman’s AI cheerleaders. I think it is. Thanks, MidJourney. Good enough.

Obviously the study is “one of the most comprehensive”; of course, it is Google’s study!

The big finding seems to be:

… AI has moved from hype to mainstream in the enterprise software development world. Second, real advantage isn’t about the tools (or even the AI you use). It’s about building solid organizational systems. Without those systems, AI has little advantage. And third, AI is a mirror. It reflects and magnifies how well (or poorly) you already operate.

I interpret the findings of the DORA Report in an easy-to-remember way: Losers still lose even if their teams use AI. I think of this as a dominant football team. The team has the money to induce or direct events. As a result, the team has the best players. The team has the best coaches (leadership). The team has the best infrastructure. In short, when one is the best, AI makes the best better.

On the other hand, a losing team composed of losers will use AI and still lose.

I noted that the report about DORA did not include:

  1. Method of sample selection
  2. Questions asked
  3. Methodology for generating the numerous statistics in the write up.

What happens if one conducts a study to validate the idea that winners win and losers keep on losing? I think it sends a clear signal that a monopoly-type of outfit has a bit of an inferiority or fear-centric tactical view. Even the quantumly supreme need a marketing pick me up now and then.

Stephen E Arnold, October 8, 2025

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta