Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, being clever is morphing into a question of societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, that assertion has been borne out. What began as disintermediation in the niche world of special librarians has become the driving force behind the interesting world now visible to most people.


Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.

I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?

The write up says:

OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”

This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”

Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.

What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:

On Tuesday [October 13, 2025], OpenAI announced it had assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.

What? Am I confused about the arrow of time? Sam AI-Man did one thing on the 13th of October and then explained that his firm is not the moral police on the 15th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.

For example, what if AI does not generate enough money to pay off the really patient, super understanding, and truly kind people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow-up letter? What if…? No, I won’t go there.

Several observations:

  1. Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
  2. Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Tesla’s AC motor was confined to a small sector of industry. But today, innovations can ripple globally in seconds. It should be no surprise that technology and ideology are now intertwined.
  3. Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.

The waffling is little more than firefighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real-time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.

Stephen E Arnold, October 23, 2025
