AI and the Cult of Personality
January 29, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
AI or smart software may be in a cluster of bubbles or a cluster of something. One thing is certain: we now have a cult of personality emerging around software that makes humans expendable, or a bit like the lucky workers who dragged megaliths from quarry to nice places near the Nile River.
Let me highlight a few of the people who have emerged as individuals who saw the future and decided it would be AI. These are people who know when a technology is the next big thing. Maybe the biggest next big thing to come along since fire or the wheel. Here are a few names:
Brett Adcock
Sam Altman
Marc Benioff
Chris Cox
Jeff Dean
Timnit Gebru
Ian Goodfellow
Demis Hassabis
Dan Hendrycks
Yann LeCun
Fei-Fei Li
Satya Nadella
Andrew Ng
Elham Tabassi
Steve Yun
I want to add another name to this list, which can be expanded far beyond the names I pulled off the top of my head. This is none other than Ed Zitron.

Thanks, Venice.ai. Good enough.
I know this because I read “Ed Zitron on Big tech, Backlash, Boom and Bust: AI Has Taught Us That People Are Excited to Replace Human Beings.” The write up says:
Zitron’s blunt, brash skepticism has made him something of a cult figure. His tech newsletter, Where’s Your Ed At, now has more than 80,000 subscribers; his weekly podcast, Better Offline, is well within the Top 20 on the tech charts; he’s a regular dissenting voice in the media; and his subreddit has become a safe space for AI sceptics, including those within the tech industry itself – one user describes him as “a lighthouse in a storm of insane hyper capitalist bullsh#t”.
I think it is classy to use a colloquial term for animal excrement in a major newspaper. I wonder, however, whether this write up is more about what the writer perceives as wrong with big AI than about admiration for a PR and marketing person.
The write up says:
Explaining Zitron’s thesis about why generative AI is doomed to fail is not simple: last year he wrote a 19,000-word essay, laying it out. But you could break it down into two, interrelated parts. One is the actual efficacy of the technology; the other is the financial architecture of the AI boom. In Zitron’s view, the foundations are shaky in both cases.
The impending failure of AI is based upon the fact that it is lousy technology; that is, it outputs incorrect information and hallucinates. Plus, the financial structure of the training, legal cases, pings, pipes, and plumbing is money thrown into a dumpster fire.
The article humanizes Mr. Zitron, pointing out:
Growing up in Hammersmith, west London, his parents were loving and supportive, Zitron says. His father was a management consultant; his mother raised him and his three elder siblings. But “secondary school was very bad for me, and that’s about as much as I’ll go into.” He has dyspraxia – a coordination disability – and he was diagnosed with ADHD in his 20s. “I think I failed every language and every science, and I didn’t do brilliant at maths,” he says. “But I’ve always been an #sshole over the details.”
Yes, another colloquialism. Anal issues perhaps?
The write up ends on a note that reminds me of good old Don Quixote:
He just wants to tell it like it is. “It’d be much easier to just write mythology and fan fiction about what AI could do. What I want to do is understand the truth.”
Several observations:
- I am not sure if the write up is about Mr. Zitron or the Guardian’s sense that high technology has burned Fleet Street and replaced it with businesses that offer AI services
- A film about Mr. Zitron does seem to be one important point in the write up. Will it be a TikTok-type of film or a direct-to-YouTube documentary with embedded advertising?
- AI is now the punching bag for those who are not into big tech, no matter what they say to their editors. Social media gang bangs are out of style. Get those AI people.
Net net: Amusing. I wonder if Mr. Beast will tackle the video opportunity.
Stephen E Arnold, January 29, 2026
AI Stress Cracks: Immaturity Bubbles Visible from Afar
January 28, 2026
Those images of Kilauea’s lava spouting, dribbling, and sputtering are reminders of the molten core within Mother Earth. Has Mom influenced the inner heat of great big technology leaders? Probably not, but after looking at videos of the most recent lava events in Hawaii, I read “Billionaires Elon Musk and Sam Altman Explode in Ugly Online Fight over Whose Tech Killed More People.”
The write up says:
OpenAI CEO Sam Altman fired back at Elon Musk on Tuesday [January 20, 2026] after Musk posted on X warning people not to use ChatGPT, linking it to nine suicide deaths. Altman called out Musk’s claim as misleading and flipped the criticism back, pointing to Tesla’s Autopilot, which has been linked to more than 50 deaths.
A Vietnam era body count. Robert McNamara, as SecDef, liked metrics. Body counts were just one way to measure efficiency and effectiveness. Like an employee incentive plan, the body counts reflected remarkable achievements on the battlefield. Were there bodies to count? As I recall, it depended on a number of factors. I won’t go there. The body count numbers were important. The bodies often not so much.
Now we have two titans of big tech engaging in body counting.
Consider this passage from the cited write up:
The feud comes as Musk sues OpenAI, claiming the company abandoned its nonprofit mission. Musk is reportedly seeking up to $134 billion in damages. The timing of the spat comes amid heightened scrutiny of AI safety globally.
The issues are [a] litigation, [b] big money, and [c] AI safety. One could probably pick any or all three as motivations for this round of body counting.
The write up does not chase the idea about the reason that I considered. These two titans of big tech and the “next big thing” used to be pals and partners. The OpenAI construct was a product of the interaction of these pals and partners. Then the two big tech titans were not pals and partners.
Here we are: [a] litigation, [b] big money, and [c] AI (safety is an add-on to AI). I think we have a few other drivers for this “ugly online fight.” I don’t think the body count is much more than a PR trope. I have yet to be convinced that big tech titans think about people at all; people are not germane to their mission: amassing power and money.
My view is that we are witnessing Mother Nature’s influence. These estimable titans are volcanoes in big tech. They are, in my opinion as a dinobaby, spouting, dribbling, and sputtering. Kilauea is what might be called a mature volcano. Can one say that these titans are mature? I am not so sure.
Could this bodycount thing be a version of a grade school playground spat with the stakes being a little bit higher? Your mileage may vary.
Stephen E Arnold, January 28, 2026
Yext: Selling Search with Subtlety
January 27, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Every company with AI is in the search and retrieval business. I want to be direct. I think AI is useful, but it is a utility. Integrated with thought into applications, smart software can smooth some of the potholes in a work process. But what happens when a company with search-and-retrieval technology embraces AI? Do customers beat a path to the firm’s office door? Do podcasters discuss the benefits of the approach? Do I see a revolution?
I thought about the marketing challenge facing Yext, a company whose shares were trading at about $20 in 2021 and today (January 26, 2026) listing at about $8 per share. On the surface, it would seem that AI has not boosted the market’s perception of the value of the company. Two or three years ago, I spoke with a VP at the company. In my “Search” folder I added a text file with the URL of the company and an observation about the firm’s use of the terms “search” and “SEO.” I commented, “Check out the company when something big hits.”
I find myself looking at a write up from a German online publication called Ad Hoc News. The article I read has a juicy title and a beefy subtitle; to wit:
The Truth about Yext Inc: Is This AI Search Stock a Hidden Gem or Dead App Walking? Everyone’s Suddenly Talking about Yext Inc and Its AI Search Platform. But Is Yext Stock a Must Cop or a Value Trap You Must Dodge?
I turned to my Overflight system and noticed announcements from and about the company like these:
- The CEO Michael Walrath wanted to take the company private in the autumn of 2025
- The company acquired two outfits: Hearsay Systems and Places Scout. (I am unfamiliar with these firms.)
- The firm launched Yext Social. I think this is a marketing and social media management service. (I don’t know anything about social media management.)
- Yext rolled out a white paper about the market.
My thought was that these initiatives represented diversification or amplification of the firm’s search solution. A couple of them could be interesting to learn more about. The winner in this list of Overflight items was the desire of Mr. Walrath to take the firm private. Why? Who will fund the play? What will the company do as a private enterprise that it cannot do with access to the US NASDAQ market?

Which direction is this company executive taking the firm? AI, SEO, enterprise search, product shopping, customer service, or some combination of these options? Thanks, MidJourney. Good enough.
When I read through the write up “The Truth about Yext”, I was surprised. The German publication presented me with an English language write up. Plus, the word choice, tone, and structure of the article were quite different from the usual articles about search with smart software. Google writes as if it is a Greek deity with an inferiority complex. Microsoft writes to disguise how much people dislike Copilot using a mad dad tone. Elasticsearch writes in the manner of a GitHub page for those in the know.
But Yext? Here are three examples of the rhetoric in the article:
- Not exactly viral-core… but the AI angle is pulling it back into the chat.
- The AI Angle: Riding the Wave vs Getting Washed
- not a sleepy bond proxy
The German publication appears to have followed one of these rhetorical paths when writing about Yext: [a] use an American AI system to rewrite the German text in a hip, jazzy way, [b] hire a writer who studied in Berkeley, Calif., and absorbed the pseudo-hip style of those chilling at the Roast & Toast Café, or [c] hire a gig worker to write about Yext who is trying very hard to hit a home run.
Does the write up provide substantive information about Yext? Answer: From my point of view, the answer is, “No.” Years ago I did profiles of enterprise search vendors for the Enterprise Search Report. My approach can be seen in the profiles on my Xenky Web site. Although these documents are rough drafts and not the final versions for the Enterprise Search Report, you can get a sense of what I expect when reading about search and retrieval.
Does the write up present a clear picture of the firm’s secret sauce? Answer: Again I would answer, “No.” After reading the article and tapping the information at my fingertips about Yext, I would say that the write up is a play to make Yext into a meme stock. Place a bet and either win big or lose. That’s okay, but when writing about search, solid information is needed.
Do I understand how smart software (AI) integrates into the firm’s search and retrieval systems? My answer: “No.” I am not sure if the “search” is post-processed using smart software, or if the queries are converted in some way to help deliver an on-point answer. I don’t know if the smart software has been integrated into the standard workflow of acquiring, parsing, indexing, and outputting results that hopefully align with the user’s query. Changing underlying search plumbing is difficult. Gemini recycles and wraps Google’s search and ad injection methods with those quantumly supreme, best-est of the universe assertions. I have no idea what Yext purports to do.
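For readers who want a picture of what “integration” could mean, here is a minimal sketch of the two common bolt-on points: an LLM rewriting the query in front of a conventional index, and an LLM post-processing the hits behind it. This is a generic illustration of the pattern, not Yext’s actual architecture; every name below is a hypothetical stand-in.

# A generic sketch of where an LLM can bolt onto a conventional search
# pipeline: query rewriting in front, result post-processing behind.
# Hypothetical names throughout; this is not Yext's (or anyone's) API.

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to any large language model."""
    return f"[LLM output for: {prompt[:40]}...]"

def index_search(keywords: str, top_k: int = 5) -> list[str]:
    """Stand-in for the traditional acquire/parse/index plumbing."""
    return [f"snippet {i} matching '{keywords}'" for i in range(top_k)]

def answer(user_query: str) -> str:
    # Integration point 1: convert the user's question into the
    # keyword query the existing index already understands.
    keywords = llm_complete(f"Rewrite as search keywords: {user_query}")
    # The untouched conventional index does the actual retrieval.
    hits = index_search(keywords)
    # Integration point 2: post-process the hits into a synthesized,
    # hopefully on-point answer.
    context = "\n".join(hits)
    return llm_complete(f"Answer '{user_query}' using only:\n{context}")

print(answer("How do I reset my store hours listing?"))

Both bolt-ons leave the underlying plumbing alone, which is why a vendor can claim “AI search” without rebuilding its index. Whether Yext does either, both, or something deeper is exactly what the article leaves unanswered.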
Let me offer several observations whether you like it or not:
- I think the source article had some opportunity to get up close and personal with an AI system, maybe ChatGPT or Qwen?
- I think that Yext is doing some content marketing. Venture Beat is in this game, and I wonder why Yext did not target that type of publication.
- Based on the stock performance in the heart of the boom in AI, I have some difficulty identifying Yext’s unique selling proposition. The actions from taking the company private to buying an SEO services outfit don’t make sense to me. If the tie up worked, I would expect to see Yext in numerous sources to which I have access.
Net net: Yext, what’s next?
Stephen E Arnold, January 27, 2026
Is Google the Macintosh in the Big Apple PAI?
January 27, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I want to be fair. Everyone, including companies that are recognized in the US as having the same “rights” as a citizen, is entitled to an opinion. Google is expressing an opinion, if the information in “Google Appeals Ruling on Illegal Search Monopoly” is correct. The write up says:
Google has appealed a US ruling that in 2024 found the company had an illegal monopoly in internet search and search advertising, reports CNBC. After a special hearing on penalties, the court decided in 2025 on milder measures than those originally proposed by the US Department of Justice.
Google, if I understand this news report, believes it is not a monopoly. Okay, that’s an opinion. Let’s assume that Google is correct. Its Android operating system, its Chrome browser, and its online advertising businesses along with other Google properties do not constitute a monopoly. Just keep that thought in mind: Google is not a monopoly.

Thanks, Venice. Good enough.
Consider that idea in the context of this write up in Macworld, an online information service: “If Google Helps Apple Beat Google, Does Everyone Lose?” The article states:
Basing Siri on Google Gemini, then, is a concession of defeat, and the question is what that defeat will cost. Of course, it will result in more features and very likely a far more capable Siri. Google is in a better position than Apple to deliver on the reckless promises and vaporware demos we heard and saw at WWDC 2024. The question is what compromises Apple will be asked to make, and which compromises it will be prepared to make, in return.
With all due respect to the estimable Macworld, I want to suggest the key question is: “What does the deal with Apple mean to Google’s argument that it is not a monopoly?”
The two companies control the lion’s share of the mobile device operating systems. The data from these mobile devices pump a significant amount of useful metadata and content to each of these companies. One can tell me that there will be a “Great Wall of Secrecy” between the two firms. I can be reassured that every system administrator involved in this tie up, deal, relationship, or “just pals” cooperating set up will not share data.
Google will remain the same privacy centric, user first operation it has been since it got into the online advertising business decades ago. The “don’t be evil” slogan is no longer part of the company credo, but the spirit of just being darned ethical remains the same as it was when Paul Buchheit allegedly came up with this memorable phrase. I assume it is now part of the Google DNA.
Apple will continue to embrace the security, privacy, and vertical business approach that it has for decades. Despite the niggling complaints about the company’s using negotiation to do business with some interesting entities in the Middle Kingdom, Apple is working hard to allow its users the flexibility to do almost anything they want within the Apple ecosystem of the super-open App Store.
Who wins in this deal?
I would suggest that Google is the winner for these reasons:
- Google now provides its services to the segment of the upscale mobile market that it has not been able to saturate
- Google provides Apple with its AI services from its constellation of data centers although that may change after Apple learns more about smart software, Google’s logs, and Google’s advertising system
- Google aced out the totally weak-wristed competitors like Grok, OpenAI, Apple’s own internal AI team or teams, and open source solutions from a country where Apple has a few easy-to-manage, easy-to-replace manufacturing sites.
What’s Apple get? My view is:
- A way to meet its years-old promises about smart software
- Some time to figure out how to position this waving of the white flag and the emails to Google suggesting, “Let’s meet up for a chat.”
- A chance to catch up with companies that are doing useful things with smart software despite the hallucination problems.
The cited write up says:
In the end, the most likely answer is some complex mixture of incentives that may never be completely understood outside the companies (or outside an antitrust court hearing).
That statement is indeed accurate. Score a big win for the Googlers. Google is the Apple pulp, the skin, and the meat of the deal.
Stephen E Arnold, January 27, 2026
Programming: Let AI Do It. No Problem?
January 27, 2026
How about this idea: No one is a good programmer anymore.
Why?
Because the programmers are relying on LLMs instead of Stack Overflow. Ibrahim Diallo explains this conundrum in his blog post: “We Were Never Good Programmers.” Diallo says that users are blaming the decline in Stack Overflow usage on heavy moderation, but he believes the answer is more nuanced. Stack Overflow was never meant to be a Q&A forum. It was meant to be a place where experts could find answers and ask questions if they couldn’t find what they needed.
Users were quick to point out to newbies that their questions had already been answered. It was often a tough-love situation, especially when users didn’t believe their solutions were incorrect.
Then along came ChatGPT:
“Now most people, according to that graph at least, aren’t asking their questions on Stack Overflow. They’re going straight to ChatGPT or their favorite LLM. It may look like failure, but I think it means Stack Overflow did exactly what it set out to do. Maybe not the best business strategy for themselves, but the most logical outcome for users. Stack Overflow won by losing.”
Stack Overflow users thought their questions were unique snowflakes, but they weren’t. ChatGPT can answer a question because:
“When you ask ChatGPT that same question and it gives you an answer, it’s not because the LLM is running your code and debugging it in real-time. It’s because large language models are incredibly good at surfacing those existing answers. They were trained on Stack Overflow, after all.”
Stack Overflow forced programmers to rethink how they asked questions and to observe their problems from another perspective. This was beneficial because it forced programmers to understand their problems well enough to articulate them clearly.
LLMs force programmers to ask themselves the question: Are you a good enough programmer to resolve your problem when ChatGPT can’t? Answer: MBAs say, “Of course.” Accountants say, “Which is better for the bottom line: A human who wants health care or a software tool? Users say: “This stuff doesn’t work.” Yep, let AI do it.
Whitney Grace, January 27, 2026
Consulting at Deloitte, AI, Ls, and Sub Families like 3
January 23, 2026
It seems that artificial intelligence is forcing some vocabulary change in the blue chip world of big buck consulting services. “Deloitte to Scrap Traditional Job Titles As AI Ushers in a ‘Modernization’ of the Big Four” reports the inside skinny:
… the firm is shifting away from a workforce structure that was originally designed for “traditional consulting profiles,” a model the firm now deems outdated.
When a client, maybe at a Japanese outfit, asks, “What does this mean?” the consultant can explain the nuances of a job family and a sub family; for instance, a software engineer 3 or a project management senior consultant, functional transformation. I like the idea of “functional transformation” instead of “consultant.”
However, the big news in the write up in my opinion is:
A new leadership class simply titled “Leaders” will join the senior ranks of partners, principals, and managing directors. And internally, employees will also be assigned alphanumeric levels, such as L45 for what is currently a senior consultant and L55 for managers. However, the presentation stressed that the day-to-day work, leadership, and the firm’s “compensation philosophy” will all remain the same.
The “news” is in the phrase “the firm’s compensation philosophy will all remain the same.”
All. AI means jobs will be offloaded to good enough AI agents, services, and systems. If these systems do not cost the firm engagements, then AI adepts will get paid more. For the consultants who burn hours on work that software can complete in minutes, this means, in my opinion:
- Unproductive workers will be moved down and out
- AI adepts will be moved up and given an “L” designation
- New hires will sit at a baseline until they prove their value as a salesperson, an AI adept, or a magician who can convert AI output into a scope change, repeatable high-value work products, or work that prevents revenue loss.
Yep, all.
The write up notes:
Last September [2025], Deloitte committed $3 billion in generative AI development through fiscal year 2030. The company has also launched Zora AI, an agentic AI model powered by Nvidia to “automate complex business processes, eliminate data siloes and boost productivity for the human workforce.”
My conclusion: Fewer staff, higher pay for AI adepts, client fees increase. Profits, if any, go to the big number “L’s”. Is this an innovation? Nope, adaptation. Get that new job lingo in a LinkedIn profile.
Stephen E Arnold, January 23, 2026
Chat Data Leaks: Why Worry?
January 23, 2026
A couple of big outfits have said that user privacy is number one with a bullet. I believe everything I read on the Internet. For these types of assertions I have some doubts. I have met a computer wizard who can make systems behave like the fellow getting brooms to dance in the Disney movies.
I operate as if anything I type into an AI chatbot is recorded and saved. Privacy advocates want AI companies to keep chatbot chat logs confidential. If government authorities seek information about AI users, people who treasure their privacy will want a search warrant before opening the kimono. The Electronic Frontier Foundation (EFF) explains its reasoning: “AI Chatbot Companies Should Protect Your Conversations From Bulk Surveillance.”
People share extremely personal information with chatbots and that deserves to be protected. You should consider anything you share with a chatbot the equivalent of your texts, email, or phone calls. These logs are already protected by the Fourth Amendment:
“Whether you draft an email, edit an online document, or ask a question to a chatbot, you have a reasonable expectation of privacy in that information. Chatbots may be a new technology, but the constitutional principle is old and clear. Before the government can rifle through your private thoughts stored on digital platforms, it must do what it has always been required to do: get a warrant. For over a century, the Fourth Amendment has protected the content of private communications—such as letters, emails, and search engine prompts—from unreasonable government searches. AI prompts require the same constitutional protection.”
In theory, AI companies shouldn’t comply with law enforcement unless officials have a valid warrant. Law enforcement officials are already seeking out user data in blanket searches called “tower dumps” or “geofence warrants.” This means that people within a certain area have their data shared with law enforcement.
Some US courts are viewing AI chatbot logs as protected speech. Dump searches may be unconstitutional. However, what about two giant companies buying and selling software and services from one another? Will the allegedly private data seep or be shared between the firms? An official statement may say, “We don’t share.” However, what about the individuals working to solve a specific problem? How are those interactions monitored? From my experience with people who can make brooms dance, the guarantees about leaking data are just words. Senior managers can look the other way or rely on their employees’ ethical values to protect user privacy.
However, those assurances raise doubts in my mind. But as the now defunct MAD Magazine character asked, “What, me worry?”
Whitney Grace, January 23, 2026
AI, Horses, and Chess Masters: Got It?
January 23, 2026
Andy Jones writes a lot about AI on his blog of the same name. He took a break from AI to discuss “Horses.” Andy decided to write about horses because he investigated the number of horses in the United States before engines. There were tons of horses in the US pre-1930, but between 1930 and 1950, 90% of all the horses in the country disappeared.
Where did they go? Let’s not think about that. The fact to take away is that over the 120 years when the engine was in development, horses didn’t notice.
Andy then writes about chess and grandmasters. A similar sudden flip happened in computers’ ability to win chess games against grandmasters:
"Folks started tracking computer chess in 1985. And for the next 40 years, computer chess would improve by 50 Elo per year. That meant in 2000, a human grandmaster could expect to win 90% of their games against a computer. But ten years later, the same human grandmaster would lose 90% of their games against a computer. Progress in chess was steady. Equivalence to humans was sudden.”
Andy returns to AI with a similar comparison, except on a quicker timescale. He mentions that as a senior researcher at Anthropic he spent a lot of his time answering questions. Anthropic deployed Claude, its chatbot, and Andy’s question-answering load went down. Claude now answers 30,000 questions a month. It took only six months for Claude to outpace Andy, when it took decades for horses and years for grandmasters to be made obsolete.
Whoa, Andy. Or is it “woe, Andy”?
Whitney Grace, January 23, 2026
YouTube: Fingernails on a Blackboard
January 22, 2026
I read “From the CEO: What’s Coming to YouTube in 2026.” Yep, fingernails on a blackboard. Let’s take a look at a handful of the points in this annual letter to the world. Are advertisers included? No. What about regulators? No. What about media partners? Uh, no.
To whom is the letter addressed? I think it is to the media who report about YouTube, which, as the letter puts it, is “the epicenter of culture.” Yeah, okay, maybe. The letter is also addressed to “creatives.” I think this means the people who post their content to YouTube in the hopes of making big money. Plus, some of the observations are likely to be aimed at those outfits who have the opportunity to participate in the YouTube cable TV clone service.
Okay, let’s begin the dragging of one’s fingernails down an old-school blackboard.
First, one of my instructors at Oxon Hill Primary School (a suburb of Washington, DC) told me, “Have a strong beginning.” Here’s what the Google YouTube pronouncement offers:
YouTube has the scale, community, and technological investments to lead the creative industry into this next era.
Notice, please, that Google is not providing search. It is not a service. YouTube will “lead the creative industry.” That is an interesting statement from a company a court has labeled a monopoly. Monopolies don’t lead; they control and dictate. Thanks, Google, your intentions are admirable… for you. For a person who wants to write novels, do investigative reporting, or sculpt, you are explaining the way the world will work.
Here’s another statement that raised little goose bumps on my dinobaby skin:
we remain committed to protecting creative integrity by supporting critical legislation like the NO FAKES Act.
I like the idea that YouTube supports legislation it perceives as useful to itself. I want to point out that Google has filed an appeal of the decision that labeled the outfit a monopoly. Google also acts in an arbitrary manner, which makes it difficult for those who allege a problem with Google to obtain direct interaction with the “YouTube team.” Selective rules appear to be the way forward for YouTube.
Finally, I want to point out a passage that set my teeth on edge like a visit to the dentist in Campinas, Brazil, who used a foot-pedaled drill to deal with a cavity afflicting me. That was fun, just like using YouTube search, YouTube filters, or the YouTube interfaces. Here’s the segment from the statement of YouTube in 2026:
To reduce the spread of low quality AI content, we’re actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content.
Whoa, Nellie or Neal! Why can’t the AI champions at Gemini / DeepMind let AI identify “slop” and label it? A user could then choose: slop or no slop. I think I know the answer. Google’s AI cannot identify slop even though Google AI generates it. Furthermore, my Google generated YouTube recommendations present slop to me and suggest videos I have already viewed. These missteps illustrate that Google will not spend the money to deal with these problems, its smart software cannot deal with these problems, or clicks are more important than anything other than cost cutting. Google and YouTube are the Henry Ford of the AI slop business.
What do Neal’s heartfelt, honest, and entitled comments mean to me, a dinobaby? I shall offer some color about how I interpreted the YouTube statement about 2026:
- The statement dictates.
- The comments about those who create content strike me as self-serving, possibly duplicitous.
- The issue of slop becomes a “to do” with no admission of being a big part of the problem.
Net net: Google, you are definitely Googlier than in 2025.
Stephen E Arnold, January 22, 2026
AI Speed Leaves AI Using Humans Behind
January 22, 2026
Writers and other creatives are scared about what AI is doing and will do to their industry. It’s worrying that not only can AI create content, but it also moves faster than lightning. The International News Media Association posted, “AI Moves Faster Than Newsrooms; Structure Is Needed To Keep Up.”
Media outlets are implementing AI, but humans can’t keep up with the technology and don’t know how to use it. This AI uptake creates chaos in the newsroom and other creative environments. Oliver Wyman said that the key to solving AI implementation problems isn’t using more AI. Structure is needed to solve this AI crisis. That requires a decent governance strategy:
“The teams getting the most out of AI right now are not the ones with the flashiest models. They are the ones with simple rules everyone can follow. Rules such as:
- Who reviews what.
- When AI can be used.
- How usage is logged.
- Where human judgment fits.
Basic, but powerful. These choices build confidence. They keep quality consistent. They give every department the same expectations instead of a dozen parallel experiments.”
The article suggests that there will be two types of publishers: Those that slow down to build a structured AI foundation and others who download the flashiest tools and skip the basics.
Clarity and KISS (keeping it simple) will keep news outlets and publishers in business. I like optimists. It’s nice to see an optimistic take on AI. However, I watched publishers fumble when basic online access raced down the Information Highway. Now changes are like the Fast and Furious movies. Speed is the point. Vehicles that cannot keep up are not able to compete. Forget winning a race.
Whitney Grace, January 22, 2026

