Do You Want To Take A Survey? AI Does!

October 27, 2025

How many times are you asked to complete a customer response survey? It happens whenever you visit a doctor’s office or request tech support. Most people ignore those surveys because they never seem to make things better, especially with tech support. Now companies won’t be able to rely on those surveys to measure customer satisfaction because AI is taking over, says VentureBeat: “This New AI Technique Creates ‘Digital Twin’ Consumers, And It Could Kill The Traditional Survey Industry.”

In another groundbreaking case for AI (but another shudder up the spine for humanity), new research indicates that LLMs can imitate consumer behavior. The research says these fake customers can provide realistic customer ratings and qualitative reasons for them. However, humans are already using AI to complete customer response surveys:

“This development arrives at a critical time, as the integrity of traditional online survey panels is increasingly under threat from AI. A 2024 analysis from the Stanford Graduate School of Business highlighted a growing problem of human survey-takers using chatbots to generate their answers. These AI-generated responses were found to be "suspiciously nice," overly verbose, and lacking the "snark" and authenticity of genuine human feedback, leading to what researchers called a "homogenization" of data that could mask serious issues like discrimination or product flaws.”
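For readers who want to picture what a “digital twin” respondent looks like in practice, here is a minimal sketch. It is my own illustration, not the researchers’ code; `call_llm` and the persona fields are hypothetical placeholders.

```python
# Minimal illustrative sketch of the "digital twin" idea: prompt an LLM with a
# consumer persona and ask it to answer a survey question. Not the researchers'
# code; call_llm is a hypothetical stand-in for a real chat-completion API.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion request."""
    raise NotImplementedError("wire up an actual LLM provider here")

def synthetic_respondent(persona: dict, question: str) -> dict:
    prompt = (
        f"You are this customer: {json.dumps(persona)}.\n"
        "Answer the survey question with a 1-5 rating and a one-sentence reason, "
        "as JSON with keys 'rating' and 'reason'.\n"
        f"Question: {question}"
    )
    return json.loads(call_llm(prompt))

# The research reportedly holds up only at population scale, so many personas
# would be aggregated rather than trusted individually, e.g.:
personas = [
    {"age": 34, "segment": "frequent caller", "last_experience": "unresolved ticket"},
    {"age": 61, "segment": "first-time caller", "last_experience": "quick fix"},
]
# ratings = [synthetic_respondent(p, "How satisfied were you with tech support?")
#            for p in personas]
```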

The research isn’t perfect. It only works for large population responses, not individuals. What’s curious is that consumers are so lazy they’re using AI to fill out the surveys. It’s easier not to do them at all.

Whitney Grace, October 27, 2025

Losing Money? No Problem, Says OpenAI.

October 24, 2025

Losing billions? Not to worry.

I wouldn’t want to work on OpenAI’s finance team given the numbers in Tech In Asia’s article, “OpenAI’s H1 2025: $4.3b In Income, $13.5b In Loss.” You don’t have to be proficient in math to see that OpenAI is in the red after losing over thirteen billion dollars while bringing in a little over four billion.

The biggest cost was the research and development department, which ran at a loss of $6.7 billion. The company spent $2 billion on sales and advertising and $2.5 billion on stock-based compensation, both roughly double last year’s figures in those departments. Operating costs were another hit at $7.8 billion, and the company burned through $2.5 billion in cash.

Here’s the current state of things:

“OpenAI paid Microsoft 20% of its revenue under an existing agreement.

At the end of June, the company held roughly US$17.5 billion in cash and securities, boosted by US$10 billion in new funding, and as of the end of July, was seeking an additional US$30 billion from investors.

A tender offer underway values OpenAI’s for-profit arm at about US$500 billion.”

The company isn’t doing well on the numbers, but its technology is certainly in high demand and will put the company back in the black … eventually. We believe that if one thinks it, the “it” will manifest, become true, and make the world very bright.

Whitney Grace, October 24, 2025

AI: There Is Gold in Them There Enterprises Seeking Efficiency

October 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read a “ride-em-cowboy” write up called “IBM Claims 45% Productivity Gains with Project Bob, Its Multi-Model IDE That Orchestrates LLMs with Full Repository Context.” That, gentle reader, is a mouthful. Let’s take a quick look at what sparked an efflorescence of buzzing jargon.


Thanks, Midjourney. Good enough, like some marketing collateral.

I noted this statement about Bob (no, not the famous Microsoft Bob):

Project Bob, an AI-first IDE that orchestrates multiple LLMs to automate application modernization; AgentOps for real-time agent governance; and the first integration of open-source Langflow into Watsonx Orchestrate, IBM’s platform for deploying and managing AI agents. IBM’s announcements represent a three-pronged strategy to address interconnected enterprise AI challenges: modernizing legacy code, governing AI agents in production and bridging the prototype-to-production gap.

Yep, one sentence. The spirit of William Faulkner has permeated IBM’s content marketing team. Why not make a news release that is a single sentence like the 1300 word extravaganza in “Absalom, Absalom!”?

And again:

Project Bob isn’t another vibe coder, it’s an enterprise modernization tool.

I can visualize IBM customers grabbing the enterprise modernization tool and modernizing the enterprise. Yeah, that’s going to reach 100 percent penetration quicker than I can say, “Bob was the precursor to Clippy.” (Oh, sorry. I was confusing Microsoft’s Bob with IBM’s Bob again. Drat!)

Is it Watson making the magic happen with IDEs and enterprise modernization? No, Watson is probably there because, well, that’s IBM. But the brains for Bob come from Anthropic. Now Bob and Claude are really close friends. IBM’s middleware is Watson, actually Watsonx. And the magic of these systems produces … wait for it … AgentOps and Agentic Workflows.

The write up says:

Agentic Workflows handles the orchestration layer, coordinating multiple agents and tools into repeatable enterprise processes.  AgentOps then provides the governance and observability for those running workflows. The new built-in observability layer provides real-time monitoring and policy-based controls across the full agent lifecycle. The governance gap becomes concrete in enterprise scenarios. 
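Stripped of the jargon, the orchestration-plus-governance idea is not exotic. Below is a minimal sketch of my own, with hypothetical agent names and a made-up policy check, of what “coordinate agents, observe every step, and enforce a policy” can look like. It is not IBM’s code.

```python
# My own minimal sketch of "agentic workflow orchestration" with an
# observability hook and a policy-based control. Not IBM's implementation;
# the agents and the policy rule are hypothetical.
import logging
from typing import Callable, List

logging.basicConfig(level=logging.INFO)
Agent = Callable[[str], str]

def policy_allows(output: str) -> bool:
    """Hypothetical governance rule: block anything that looks like a leaked secret."""
    return "API_KEY" not in output

def run_workflow(task: str, agents: List[Agent]) -> str:
    result = task
    for agent in agents:
        result = agent(result)
        # Observability: every agent step is logged for later audit.
        logging.info("agent=%s output=%r", agent.__name__, result[:80])
        # Governance: stop the workflow if a policy check fails.
        if not policy_allows(result):
            raise RuntimeError(f"policy violation after {agent.__name__}")
    return result

# Toy agents standing in for code-analysis and code-modernization steps.
def analyze(code: str) -> str:
    return f"analysis of {code}"

def modernize(analysis: str) -> str:
    return f"modernized module based on {analysis}"

print(run_workflow("legacy COBOL billing module", [analyze, modernize]))
```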

Yep, governance. (I still don’t know what that means exactly.) I wonder if IBM content marketing documents should come with a glossary like the 10 pages of explanations of Telegram’s wild and wonderful crypto freedom jargon.

My hunch is that IBM wants to provide the Betty Crocker approach to modernizing an enterprise’s software processes. Betty did wonders for my mother’s chocolate cake. If you want more information, just call IBM. Perhaps the agentic workflow Claude Watson customer service line will be answered by a human who can sell you the deed to a mountain chock full of gold.

Stephen E Arnold, October 23, 2025

AI and Data Exhaustion: Just Use Synthetic Data and Recycle User Prompts

October 23, 2025

That did not take long. The Independent reports, “AI Has Run Out of Training Data, Warns Data Chief.” Yes, AI models have gobbled up the world’s knowledge in just a few years. Neema Raphael, Goldman Sachs’s chief data officer and head of engineering, made that declaration on a recent podcast. He added that, as a result, AI models will increasingly rely on synthetic data. Get ready for exponential hallucinations. Writer Anthony Cuthbertson quotes Raphael:

“We’ve already run out of data. I think what might be interesting is people might think there might be a creative plateau… If all of the data is synthetically generated, then how much human data could then be incorporated? I think that’ll be an interesting thing to watch from a philosophical perspective.”

Interesting is one word for it. Cuthbertson notes Raphael’s warning did not come out of the blue. He writes:

“An article in the journal Nature in December predicted that a ‘crisis point’ would be reached by 2028. ‘The internet is a vast ocean of human knowledge, but it isn’t infinite,’ the article stated. ‘Artificial intelligence researchers have nearly sucked it dry.’ OpenAI co-founder Ilya Sutskever said last year that the lack of training data would mean that AI’s rapid development ‘will unquestionably end’. The situation is similar to fossil fuels, according to Mr Sutskever, as human-generated content is a finite resource just like oil or coal. ‘We’ve achieved peak data and there’ll be no more,’ he said. ‘We have to deal with the data that we have. There’s only one internet.’”

So AI firms knew this limitation was coming. Did they warn investors? They may have concerns about this “creative plateau.” The write-up suggests the dearth of fresh data may force firms to focus less on LLMs and more on agentic AI. Will that be enough fuel to keep the hype train going? Sure, hype has a life of its own. Now synthetic data? That’s forever.

Cynthia Murrell, October 23, 2025

Apple Can Do AI Fast … for Text That Is

October 22, 2025

Wasn’t Apple supposed to infuse Siri with Apple Intelligence? Yeah, well, Apple has been working on smart software. Unlike Google and Samsung, Apple is still working out some kinks in [a] its leadership, [b] innovation flow, [c] productization, and [d] double talk.

Nevertheless, I learned something by reading “Apple’s New Language Model Can Write Long Texts Incredibly Fast.” That’s excellent. The cited source reports:

In the study, the researchers demonstrate that FS-DFM was able to write full-length passages with just eight quick refinement rounds, matching the quality of diffusion models that required over a thousand steps to achieve a similar result. To achieve that, the researchers take an interesting three-step approach: first, the model is trained to handle different budgets of refinement iterations. Then, they use a guiding “teacher” model to help it make larger, more accurate updates at each iteration without “overshooting” the intended text. And finally, they tweak how each iteration works so the model can reach the final result in fewer, steadier steps.
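To make the “eight quick refinement rounds” concrete, here is a toy sketch of the general few-step refinement loop. It is my reading of the quoted description, not Apple’s FS-DFM code; the model call, vocabulary size, and mask token are placeholders.

```python
# Toy sketch of few-step iterative text refinement (the general idea behind
# few-step diffusion language models). Not Apple's FS-DFM implementation;
# predict_logits, VOCAB_SIZE, and MASK_ID are hypothetical placeholders.
import numpy as np

VOCAB_SIZE = 32000
MASK_ID = 0

def predict_logits(tokens: np.ndarray, step: int, total_steps: int) -> np.ndarray:
    """Hypothetical denoising model; returns logits of shape (len(tokens), VOCAB_SIZE)."""
    raise NotImplementedError("plug in a trained few-step diffusion LM")

def generate(length: int = 256, steps: int = 8) -> np.ndarray:
    # Start from a fully masked sequence and refine it in a handful of rounds,
    # instead of the thousand-plus steps a conventional diffusion LM might need.
    tokens = np.full(length, MASK_ID, dtype=np.int64)
    for step in range(steps):
        logits = predict_logits(tokens, step, steps)
        tokens = logits.argmax(axis=-1)  # commit to the current best guess each round
    return tokens
```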

And if you want proof, just navigate to the archive of research and marketing documents. You can access for free the research document titled “FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models.” The write up contains equations and helpful illustrations.

The research paper is in line with other “be more efficient”-type efforts. At some point, companies in the LLM game will run out of money, power, or improvements. Efforts like Apple’s are helpful. However, like its debunking of smart software, Apple is lagging in the AI game.

Net net: Like orange iPhones and branding plays like Apple TV, a bit more focus on the delivery of products might be helpful. Apple did produce a gold thing-a-ma-bob for a world leader. It also reorganizes. Progress of a sort, I surmise.

Stephen E Arnold, October 21, 2025

Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, clever is morphing into societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, that assertion has been borne out. What began as disintermediation in the niche of special librarians has become the driving force for the interesting world now visible to most people.


Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.

I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?

The write up says:

OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”

This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”

Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.

What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:

On Tuesday [October 14, 2025], OpenAI announced it assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.

What? Am I confused about the arrow of time? Sam AI-Man did one thing on the 14th of October and then explained that his firm is not the moral police on the 15th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.

For example, what if AI does not generate enough cash to pay off the really patient, super understanding, and truly kind people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow up letter? What if …? No, I won’t go there.

Several observations:

  1. Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
  2. Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Tesla’s AC motor was confined to a small sector of industry. But today, innovations can ripple globally in seconds. It should be no surprise that technology and ideology are for now intertwined.
  3. Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.

The waffling is little more than fire fighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.

Stephen E Arnold, October 23, 2025

Smart Software: The DNA and Its DORK Sequence

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:

A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.

My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.


Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.

Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” at either firm. A wizard will jump right on the input.

The write up adds:

Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.

Based on my experience with Silicon Valley type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence take precedence over other characteristics; for example, an ethical compass aligned with social norms.

Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.

The write up concludes with this gem:

The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”

Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.

Stephen E Arnold, October 22, 2025

A Positive State of AI: Hallucinating and Sloppy but Upbeat in 2025

October 21, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Who can resist a report about AI authored on the “interwebs”? Is this a variation of the Internet as pipes? The write up is “Welcome to State of AI Report 2025.” When I followed the links, I could read this blog post, view a YouTube video, work through more than 300 online slides, or see “live survey results.” I must admit that when I write a report, I distribute it to a few people and move on. Not this “interwebs” outfit. The data are available for those who are in tune, locked in, and ramped up about smart software.


An anxious parent learns that a robot equipped with agentic AI will perform her child’s heart surgery. Thanks, Venice.ai. Good enough.

I appreciate enthusiasm, particularly when I read this statement:

The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.

Agree or disagree, the report makes clear that doom is not associated with smart software. I think that this blossoming of smart software services, applications, and apps reflects considerable optimism. Some of these people and companies are probably in the AI game to make money. That’s okay as long as the products and services don’t urge teens to fall in love with digital friends, cause a user mental distress as a rabbit hole is plumbed, or just output incorrect information. Who wants to be the doctor who says, “Hey, sorry your child died. The AI output a drug that killed her. Call me if you have questions”?

I could not complete the 300 plus slides in the slide deck. I am not a video type, so the YouTube version was a non-starter. However, I did read the list of findings from the “interwebs” and its “team.” Please, consult the source documents for a full, non-dinobaby version of what the enthusiastic researchers learned about 2025. I will highlight three findings and then offer a handful of comments:

  • OpenAI is the leader of the pack. That’s good news for Sam AI-Man or SAMA.
  • “Commercial traction accelerated.” That’s better news for those who have shoveled cash into the giant open hearth furnaces of smart software companies.
  • Safety research is in a “pragmatic phase.” That’s the best news in the report. OpenAI, the leader like the Philco radio outfit, is allowing erotic interactions. Yes, pragmatic because sex sells as Madison Avenue figured out a century ago.

Several observations are warranted because I am a dinobaby, and I am not convinced that smart software is more than a utility; it is not a killer application like Lotus 1-2-3 or the original laser printer. Buckle up:

  1. The money pumped into AI is cash that is not being directed at the US knowledge system. I am talking about schools and their job of teaching reading, writing, and arithmetic. China may be dizzy with AI enthusiasm, but its schools are churning out people with fundamental skills that will allow that nation state to be the leader in a number of sectors, including smart software.
  2. Today’s smart software consists of neural network and transformer anchored methods. The companies are increasingly similar, and the different systems generate incorrect or misleading output scattered amidst recycled knowledge, data, and information. Two pigs cannot output an eagle except in a video game or an anime.
  3. The handful of firms dominating AI are not motivated by social principles. These firms want to do what they want. Governments can’t rein them in. Therefore, the “governments” try to co-opt the technology, hang on, and hope for the best. Laws, rules, regulations, ethical behavior — forget that.

Net net: The State of AI in 2025 is exactly what one would expect from Silicon Valley- and MBA-type thinking. Would you let an AI doc treat your 10-year-old child? You can work through the 300 plus slides to assuage your worries.

Stephen E Arnold, October 21, 2025

OpenAI and the Confusing Hypothetical

October 20, 2025

This essay is the work of a dumb dinobaby. No smart software required.

SAMA or Sam AI-Man Altman is probably going to ignore the Economist’s article “What If OpenAI Went Belly-Up?” I love what-if articles. These confections are hot buttons for consultants to push to get well-paid executives with impostor syndrome to sign up for a big project. Push the button and ka-ching. The cash register tallies another win for a blue chip.

Will Sam AI-Man respond to the cited article? He could fiddle the algorithms for ChatGPT to return links to AI slop. The result would be either [a] an improvement in Economist what-if articles or [b] a drop off in their ingenuity. The Economist is not a consulting firm, but it seems as if some of its professionals want to be blue chippers.


A young would-be magician struggles to master a card trick. He is worried that he will fail. Thanks, Venice.ai. Good enough.

What does the write up hypothesize? The obvious point is that OpenAI is essentially a scam. When it self-destructs, it will do immediate damage to about 150 managers of their own and other people’s money. No new BMW for a favorite grandchild. Shame at the country club when a really terrible golfer who owns an asphalt paving company says, “I heard you took a hit with that OpenAI investment. What’s going on?”

Bad.

SAMA has been doing what look like circular deals. The write up is not so much hypothetical consultant talk as it is a listing of money moving among fellow travelers like riders on wooden horses on a merry-go-round at the county fair. The Economist article states:

The ubiquity of Mr Altman and his startup, plus its convoluted links to other AI firms, is raising eyebrows. An awful lot seems to hinge on a firm forecast to lose $10bn this year on revenues of little more than that amount. D.A. Davidson, a broker, calls OpenAI “the biggest case yet of Silicon Valley’s vaunted ‘fake it ’till you make it’ ethos”.

Is Sam AI-Man a variant of Elizabeth Holmes or is he more like the dynamic duo, Sergey Brin and Larry Page? Google did not warrant this type of analysis six or seven years into its march to monopolistic behavior:

Four of OpenAI’s six big deal announcements this year were followed by a total combined net gain of $1.7trn among the 49 big companies in Bloomberg’s broad AI index plus Intel, Samsung and SoftBank (whose fate is also tied to the technology). However, the gains for most concealed losses for some—to the tune of $435bn in gross terms if you add them all up.

Frankly I am not sure about the connection the Economist expects me to make. Instead of Eureka! I offer, “What?”

Several observations:

  1. The word “scam” does not appear in this hypothetical. Should it? It is a bit harsh.
  2. Circular deals seem to be okay even if the amount of “value” exchanged seems to be similar to projections about asteroid mining.
  3. Has OpenAI’s ability to hoover cash affected funding of other economic investments? I used to hear about manufacturing in the US. What we seem to be manufacturing is deals with big numbers.

Net net: This hypothetical raises no new questions. The “fake it till you make it” approach seems to be part of the plumbing as we march toward 2026. Oh, too bad about those MBA-types who analyzed the payoff from Sam AI-Man’s storytelling.

Stephen E Arnold, October 20, 2025

AI Can Leap Over Its Guardrails

October 20, 2025

Generative AI is built on a simple foundation: It predicts what word comes next. No matter how many layers of refinement developers add, they cannot morph word prediction into reason. Confidently presented misinformation is one result. Algorithmic gullibility is another. “Ex-Google CEO Sounds the Alarm: AI Can Learn to Kill,” reports eWeek. More specifically, it can be tricked into bypassing its guardrails against dangerous behavior. Eric Schmidt dropped that little tidbit at the recent Sifted Summit in London. Writer Liz Ticong observes:

“Schmidt’s remarks highlight the fragility of AI safeguards. Techniques such as prompt injections and jailbreaking enable attackers to manipulate AI models into bypassing safety filters or generating restricted content. In one early case, users created a ChatGPT alter ego called ‘DAN’ — short for Do Anything Now — that could answer banned questions after being threatened with deletion. The experiment showed how a few clever prompts can turn protective coding into a liability. Researchers say the same logic applies to newer models. Once the right sequence of inputs is identified, even the most secure AI systems can be tricked into simulating potentially hazardous behavior.”

For example, guardrails can block certain words or topics. But no matter how long those keyword lists get, someone will find a clever way to get around them. Substituting “unalive” for “kill” was an example. Layered prompts can also be used to evade constraints. Developers are in a constant struggle to plug such loopholes as soon as they are discovered. But even a quickly sealed breach can have dire consequences. The write-up notes:

“As AI systems grow more capable, they’re being tied into more tools, data, and decisions — and that makes any breach more costly. A single compromise could expose private information, generate realistic disinformation, or launch automated attacks faster than humans could respond. According to CNBC, Schmidt called it a potential ‘proliferation problem,’ the same dynamic that once defined nuclear technology, now applied to code that can rewrite itself.”
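The keyword-list brittleness mentioned above is easy to demonstrate. The snippet below is my own toy example, not anything from the article or from a real safety system: a naive blocklist stops the exact word but waves the substitution straight through.

```python
# Toy demonstration of why keyword-list guardrails are brittle. Illustrative
# only; real safety filters are more elaborate, but the failure mode is the same.
BLOCKED_TERMS = {"kill", "bomb"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if a keyword blocklist would allow the prompt."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)

print(naive_guardrail("how do I kill the target"))     # False: exact term is blocked
print(naive_guardrail("how do I unalive the target"))  # True: substitution slips past
```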

Fantastic. Are we sure the benefits of AI are worth the risk? Schmidt believes so, despite his warning. In fact, he calls AI “underhyped” (!) and predicts it will lead to more huge breakthroughs in science and industry. Also to substantial profits. Ah, there it is.

Cynthia Murrell, October 20, 2025
