Google: Is It Becoming Microapple?

September 19, 2025

Google’s approach to Android, the freedom to pay Apple to make Google search the default for Safari, and the registering of developers — These are Tim Apple moves. Google has another trendlet too.

Google has 1.8 billion users around the world and according to the Mens Journal Google has a new problem: “Google Issues Major Warning to All 1.8 Billion Users.” There’s a new digital security threat and it involves AI. That’s not a surprise, because artificial intelligence has been a growing concern for cyber security experts for years. Since the technology is becoming more advanced, bad actors are using it for devious actions. The newest round of black hat tricks are called “indirect prompt injections.”

Indirect prompt injections are a threat for individual users, businesses, and governments. Google warned users about this new threat and how it works:

“‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions,’ the blog post continued.

The Google blog post warned that this puts individuals and entities at risk.

‘As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,’ the blog post continued.”

Bad actors have tasked Google’s Gemini (Shock! Gasp!) to infiltrate emails and ask users for their passwords and login information. That’s not the scary part. Most spammy emails have a link for users to click on to collect data, instead this new hack uses Gemini to prompt users for the information. Downloading fear.

Google is already working on counter measures for Gemini. Good luck! Microsoft has had this problem for years! Google and Microsoft are now twins! Is this the era of Google as Microapple?

Whitney Grace, September 19, 2025

AI Search Is Great. Believe It. Now!

September 18, 2025

Dino 5 18 25_thumbSadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Cheerleaders are necessary. The idea is that energetic people lead other people to chant: Stand Up, Sit Down, Fight! Fight! Fight! If you get with the program, you stand up. You sit down. You shout, of course, fight, fight, fight. Does it help? I don’t know because I don’t cheer at sports events. I say, “And again” or some other statement designed to avoid getting dirty looks or caught up in standing, sitting, and chanting.

Others are different. “GPT-5 Thinking in ChatGPT (aka Research Goblin) Is Shockingly Good at Search” states:

Don’t use chatbots as search engines” was great advice for several years… until it wasn’t. I wrote about how good OpenAI’s o3 was at using its Bing-backed search tool back in April. GPT-5 feels even better.

The idea is that instead of working with a skilled special librarian and participating in a reference interview, people started using online Web indexes. Now we have moved from entering a query to asking a smart software system for an answer.

Consider the trajectory. A person seeking information works with a professional with knowledge of commercial databases, traditional (book) reference tools, and specific ways of tracking down and locating information needed to answer the user’s question. When the user  was not sure, the special librarian would ask, “What specific information do you need?” Some users would reply, “Get me everything about subject X?” The special librarian would ask other questions until a particular item could be identified. In the good old days, special librarians would seek the information and provide selected items to the person with the question. Ellen Shedlarz at Booz, Allen & Hamilton when I was a lowly peon did this type of work as did Dominque Doré at Halliburton NUS (a nuclear outfit).

We then moved to the era of PCs and do-it-yourself research. Everyone became an expert. Google just worked. Then mobile phones arrived so research on the go was a thing. But keying words into a search box and fiddling with links was a drag. Now just tell the smart software your problem. The solution is just there like instant oatmeal.

The Stone Age process was knowledge work. Most people seeking information did not ask, preferring as one study found to look through trade publications in an old-fashioned in box or pick up the telephone and ask a person whom one assumed knew something about a particular subject. The process was slow, inefficient, and fraught with delays. Let’s be efficient. Let’s let software do everything.

Flash forward to the era of smart software or seemingly smart software. The write up reports:

I’ve been trying out hints like “go deep” which seem to trigger a more thorough research job. I enjoy throwing those at shallow and unimportant questions like the UK Starbucks cake pops one just to see what happens! You can throw questions at it which have a single, unambiguous answer—but I think questions which are broader and don’t have a “correct” answer can be a lot more fun. The UK supermarket rankings above are a great example of that. Since I love a questionable analogy for LLMs Research Goblin is… well, it’s a goblin. It’s very industrious, not quite human and not entirely trustworthy. You have to be able to outwit it if you want to keep it gainfully employed.

The reference / special librarians are an endangered species. The people seeking information use smart software. Instead of a back-and-forth and human-intermediated interaction between a trained professional and a person with a question, we get “trying out” and “accepting the output.”

I think there are three issues inherent in this cheerleading:

  1. Knowledge work is short circuited. Instead of information-centric discussion, users accept the output. What if the output is incorrect, biased, incomplete, or made up? Cheerleaders shout more enthusiastically until a really big problem occurs.
  2. The conditioning process of accepting outputs makes even intelligent people susceptible to mental short cuts. These are good, but accuracy, nuance, and a sense of understanding the information may be pushed to the side of the information highway. Sometimes those backroads deliver unexpected and valuable insights. Forget that. Grab a burger and go.
  3. The purpose of knowledge work is to make certain that an idea, diagnosis, research study can be trusted. The mechanisms of large language models are probabilistic. Think close enough for horseshoes. Cheering loudly does not deliver accuracy of output, just volume.

Net net: Inside each large language model lurks a system capable of suggesting glue cheese on pizza, the gray mass is cancer, and eat rocks.

What’s been lost? Knowledge value from the process of obtaining information the Stone Age way. Let’s work in caves with fire provided by burning books. Sounds like a plan, Sam AI-Man. Use GPT5, use GPT5, use GPT5.

Stephen E Arnold, September 18, 2025

AI Maggots: Are These Creatures Killing the Web?

September 18, 2025

The short answer is, “Yep.”

The early days of the free, open Web held such promise. Alas, AI is changing the Internet and there is, apparently, nothing we can do about it. The Register laments, “AI Web Crawlers Are Destroying Websites in their Never-Ending Hunger for Any and All Content: But the Cure May Ruin The Web.…” Writer Steven J. Vaughan-Nichols tells us a whopping 30% of traffic is now bots, according to Cloudflare. And 80% of that, reports Fastly, comes from AI-data fetcher bots. Web crawlers have been around since 1993, of course, but this volume is something new. And destructive. Vaughan-Nichols writes:

“Fastly warns that [today’s AI crawlers are] causing ‘performance degradation, service disruption, and increased operational costs.’ Why? Because they’re hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes. Moreover, AI crawlers are much more aggressive than standard crawlers. As the InMotionhosting web hosting company notes, they also tend to disregard crawl delays or bandwidth-saving guidelines and extract full page text, and sometimes attempt to follow dynamic links or scripts. he result? If you’re using a shared server for your website, as many small businesses do, even if your site isn’t being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site’s performance drops through the floor even if an AI crawler isn’t raiding your website. Smaller sites, like my own Practical Tech, get slammed to the point where they’re simply knocked out of service. Thanks to Cloudflare Distributed Denial of Service (DDoS) protection, my microsite can shrug off DDoS attacks. AI bot attacks – and let’s face it, they are attacks – not so much.”

Even big websites are shelling out for more processor, memory, and network resources to counter the slowdown. And no wonder: According to Web hosting firms, most visitors abandon a site that takes more than three seconds to load. Site owners have some tools to try mounting a defense, like paywalls, logins, and annoying CAPTCHA games. Unfortunately, AI is good at getting around all of those. As for the tried and true, honor-system based robots.txt files, most AI crawlers breeze right on by. Hey, love maggots.

Cynthia Murrell, September 18, 2025

AI and Security? What? Huh?

September 18, 2025

As technology advances so do bad actors and their devious actions. Bad actors are so up to date with the latest technology that it takes white hat hackers and cyber security engineers awhile to catch up to them. AI has made bad actors smarter and EWeek explains that there is we are facing a banking security crisis: “Altman Warns Of AI-Powered Fraud Crisis in Banking, Urges Stronger Security Measures.”

OpenAI CEO Sam Altman warned that AI vocal technology is a danger to society. He told the Federal Reserve Vice Chair for Supervision Michelle Bowman that US banks are lagging behind Ai vocal security, because many financial institutions still rely on voiceprint technology to verify customers’ identities.

Altman warned that AI vocal technology can easily replicate humans and deepfake videos are even scarier when they become indistinguishable from reality. Bowman mentioned potential partnering with tech companies to create solutions.

Despite sounding the warning bells, Altman didn’t offer much help:

“Despite OpenAI’s prominence in the AI industry, Altman clarified that the company is not creating tools for impersonation. Still, he stressed that the broader AI community must take responsibility for developing new verification systems, such as “proof of human” solutions.

Altman is supporting tools like The Orb, developed by Tools for Humanity. The device aims to provide “proof of personhood” in a digital world flooded with fakes. His concerns go beyond financial fraud, extending to the potential for AI superintelligence to be misused in areas such as cyberwarfare or biological threats.”

Proof of personhood? It’s like the blue check on verified X/Twitter accounts. Altman might be helping make the future but he’s definitely also part of the problem.

Whitney Grace, September 18, 2025

IBM Technology Atlas: A Buzzword Blow Up

September 17, 2025

Dino 5 18 25[3]Written by an unteachable dinobaby. Live with it.

Do you need some handy buzzwords, jargon, or magnetic phrases for your marketing outputs? IBM has created a very useful tool. It is called the “IBM Technology Atlas.” Now an atlas (maybe alas?), according to the estimable Google, is “a book of maps or charts.” Now I know what you are thinking. Has IBM published a dead tree book of maps like the trucker’s road map sold at Buc-ees?

No. IBM is too high tech forward leaning for that.

Navigate to “IBM Technology Atlas.” Here’s what your browser will display:

image

I assume you will be asking, “What does this graphic say?” or “I can’t read it.” Spot on. This Technology Atlas is not designed for legibility like those trucker road maps. Those professionals have to know where to turn in order to deliver a load of auto parts. Driving a technology sports car is more subtle.

The idea with this IBM atlas is to use your cursor to click on one of the six areas of focus for IBM’s sales machine to deliver to customers between 2024 and 2030. I know 2024 was last year, but that’s useful if one wants to know where Lewis and Clark were in Missouri in 1804. And 2030? Projecting five years into the future is strategically bold.

Pick a topic from the option arrayed around the circle:

  • AI
  • Automation
  • Data
  • Hybrid Cloud
  • Quantum
  • Security.

Click on, for instance, AI and the years at which the checkpoint for targets appears. Note you will need a high resolution monitor because no scroll bar is available to move from year to year. But no problem. Here’s what I see after clicking on AI:

image

Alternatively, you can use the radar chart and click on the radar chart. For the year 2030 targets in AI, put your cursor under AI and place it between AI and Automation. Click and you will see the exploded detail for AI at IBM in 2030:

image

Now you are ready to begin your exploration of buzzwords. Let’s stick to AI because the future of that suite of technologies is of interest to those who are shoveling cash into the Next Big Thing Furnace as I write this news item with editorial color.

Here are some of the AI words from the 2030 section of the Atlas:

Adaptable AI
Biological intelligence
Cognitive abilities
Generalist AI
Human-machine collaboration
Machine-machine collaboration
Mutual theory of mind
Neuron heterogeneity
Sensory perceptions
Unified neural architecture
WatsonX (yep, Watson).

One can work from 2024 to 2029 and build a comprehensive list of AI jargon. If this seems like real busy work, it is not. You are experiencing how a forward leaning outfit like IBM presents its strategic road map. You—a mere human— must point and click your way through a somewhat unusual presentation of dot points and a time line.

Imagine how easy absorbing this information would be if one just copied the url, pasted it into Perplexity, and asked, “Give me a 25 word summary of this information.” I did that, and here’s what Perplexity replied:

IBM’s Technology Atlas outlines six roadmaps — AI, Automation, Data, Hybrid Cloud, Quantum, Security — for advancing performance, efficiency, and future IT/business evolution through 2030.

Well, that was easy. What about the clicking through the hot links on the radar chart?

That is harder and more time consuming.

Perplexity did not understand how to navigate the IBM Technology Alas.  (Ooops. I mean “atlas.” My bad.) And — truth be told — I did not either when I first encountered this new age and undoubtedly expensive combination of design, jargon collection, and code. Would direct statements and dot points worked? Yes, but that is not cutting edge.

I would recommend this IBM Alas to a student looking for some verbiage for a résumé, a start up trying to jazz up a slide deck, or a person crafting a LinkedIn blurb.

Remember! Neuron heterogeneity is on the road map for 2030. I still like the graphic approach of those trucker road maps available where I can buy a Buc-ee’s T shirt:

image

Is there a comparable T shirt for quantum at IBM in 2030? No? Alas.

Stephen E Arnold, September 17, 2025

Qwen: Better, Faster, Cheaper. Sure, All Three

September 17, 2025

Dino 5 18 25No smart software involved. Just a dinobaby’s work.

I spotted another China Smart, U S Dumb write up. Analytics India published “Alibaba Introduces Qwen3-Next as a More Efficient LLM Architecture.” The story caught my attention because it was a high five to the China-linked Alibaba outfit and because it is a signal that India and China are on the path to BFF bliss.

The write up says:

Alibaba’s Qwen team has introduced Qwen3-Next, a new large language model architecture designed to improve efficiency in both training and inference for ultra-long context and large-parameter settings.

The sentence reinforces the better, faster, cheaper sales mantra one beloved by Crazy Eddie.

Here’s another sentence catching my attention:

At its core, Qwen3-Next combines a hybrid attention mechanism with a highly sparse mixture-of-experts (MoE) design, activating just three billion of its 80 billion parameters during inference.  The announcement blog explains that the new mechanism allows the base model to match, and in some cases outperform, the dense Qwen3-32B, while using less than 10% of its training compute. In inference, throughput surpasses 10x at context lengths beyond 32,000 tokens.

This passage emphasizes the value of the mixture of experts approach in the faster and cheaper assertions.

Do I believe the data?

Sure, I believe every factoid presented in the better, faster, cheaper marketing of large language models. Personally I find that these models, regardless of development group, are useful for some specific functions. The hallucination issue is the deal breaker. Who wants to kill a person because a smart medical system is making benign out of malignancy? Who wants an autonomous AI underwater drone to take out those college students and not the adversary’s stealth surveillance boat?

Where can you get access this better, faster, cheaper winner? The write up says, “Hugging Face, ModelScope, Alibaba Cloud Model Studio and NVIDIA API Catalog, with support from inference frameworks like SGLang and vLLM.”

Stephen E Arnold, September 17, 2025

YouTube: Behind the Scenes Cleverness?

September 17, 2025

animated-dinosaur-image-0062_thumb_t_thumbNo smart software involved. Just a dinobaby’s work.

I read “YouTube Is a Mysterious Monopoly.” The author tackles the subject of YouTube and how it seems to be making life interesting for some “creators.” In many countries, YouTube is television. I discovered this by accident in Bucharest, Cape Town, and Santiago, to name three locations where locals told me, “I watch YouTube.”

The write up offers some comments about this Google service. Let’s look at a couple of these.

First, the write up says:

…while views are down, likes and revenue have been mostly steady. He guesses that this might be caused by a change in how views are calculated, but it’s just a guess. YouTube hasn’t mentioned anything about a change, and the drop in views has been going on for about a month.

About five years ago, one of the companies with which I have worked for a while, pointed out that their Web site traffic was drifting down. As we monitored traffic and ad revenues, we noticed initial stability and then a continuing decline in both traffic and ad revenue. I recall we checked some data about competitive sites and most were experiencing the same drift downwards. Several were steady or growing. My client told me that Google was not able to provide substantive information. Is this type of decline an accident or is it what I call traffic shaping for Google’s revenue? No one has provided information to make this decline clear. Today (September 10, 2025) the explanation is related to smart software. I have my doubts. I think it is Google cleverness.

Second, the write up states:

I pay for YouTube Premium. For my money, it’s the best bang-for-the-buck subscription service on the market. I also think that YouTube is a monopoly. There are some alternatives — I also pay for Nebula, for example — but they’re tiny in comparison. YouTube is effectively the place to watch video on the internet.

In the US, Google has been tagged with the term “monopoly.” I find it interesting that YouTube is allegedly wearing a T shirt that says, “The only game in town.” I think that YouTube has become today’s version of the Google online search service. We have people dependent on the service for money, and we have some signals that Google is putting its thumb on the revenue scale or is suffering from what users are able to view on the service. Also, we have similar opaqueness about who or what is fiddling the dials. If a video or a Web site does not appear in a search result, that site may as well not exist for some people. The write up comes out and uses the “monopoly” word for YouTube.

Finally, the essay offers this statement:

Creators are forced to share notes and read tea leaves as weird things happen to their traffic. I can only guess how demoralizing that must feel.

For me, this comment illustrates that the experience of my client’s declining traffic and ad revenue seems to be taking place in the YouTube “datasphere.” What is a person dependent on YouTube revenue supposed to do when views drop or the vaunted YouTube search service does not display a hit for a video directly relevant to a user’s search. OSINT experts have compiled information about “Google dorks.” These are hit-and-miss methods to dig a relevant item from the Google index. But finding a video is a bit tricky, and there are fewer Google dorks to unlock YouTube content than for other types of information in the Google index.

What do I make of this? Several preliminary observations are warranted. First, Google is hugely successful, but the costs of running the operation and the quite difficult task of controlling the costs of ping, pipes, and power, the cost of people, and the expense of dealing with pesky government regulators. The “steering” of traffic and revenue to creators is possibly a way to hit financial targets.

Second, I think Google’s size and its incentive programs allow certain “deciders” to make changes that have local and global implications. Another Googler has to figure out what changed, and that may be too much work. The result is that Googlers don’t have a clue what’s going on.

Third, Google appears to be focused on creating walled gardens for what it views as “Web content” and for creator-generated content. What happens when a creator quits YouTube? I have heard that Google’s nifty AI may be able to extract the magnetic points of the disappeared created and let its AI crank out a satisfactory simulacrum. Hey, what are those YouTube viewers in Santiago going to watch on their Android mobile device?

My answer to this rhetorical question is the creator and Google “features” that generate the most traffic. What are these programs? A list of the alleged top 10 hits on YouTube is available at https://mashable.com/article/most-subscribed-youtube-channels. I want to point out that the Google holds down position in its own list spots number four and number 10. The four spot is Google Movies, a blend of free with ads, rent the video, “buy” the video which sort of puzzles me, and subscribe to a stream. The number 10 spot is Google’s own music “channel”. I think that works out to YouTube’s hosting of 10 big draw streams and services. Of those 10, the Google is 20 percent of the action. What percentage will be “Google” properties in a year?

Net net: Monitoring YouTube policy, technical, and creator data may help convert these observations into concrete factoids. On the other hand, you are one click away from what exactly? Answer: Daily Motion or RuTube? Mysterious, right?

Stephen E Arnold, September 17, 2025

Professor Goes Against the AI Flow

September 17, 2025

One thing has Cornell professor Kate Manne dreading the upcoming school year: AI. On her Substack, “More to Hate,” the academic insists, “Yes, It Is Our Job as Professors to Stop our Students Using ChatGPT.” Good luck with that.

Manne knows even her students who genuinely love to learn may give in to temptation when faced with an unrelenting academic schedule. She cites the observations of sociologist Tressie McMillan Cottom as she asserts young, stressed-out students should not bear that burden. The responsibility belongs, she says, to her and her colleagues. How? For one thing, she plans to devote precious class time to having students hand-write essays. See the write-up for her other ideas. It will not be easy, she admits, but it is important. After all, writing assignments are about developing one’s thought processes, not the finished product. Turning to ChatGPT circumvents the important part. And it is sneaky. She writes:

“Again, McMillan Cottom crystallized this perfectly in the aforementioned conversation: learning is relational, and ChatGPT fools you into thinking that you have a relationship with the software. You ask it a question, and it answers; you ask it to summarize a text, and it offers to draft an essay; you request it respond to a prompt, using increasingly sophisticated constraints, and it spits out a response that can feel like your own achievement. But it’s a fake relationship, and a fake achievement, and a faulty simulacrum of learning. It’s not going to office hours, and having a meeting of the minds with your professor; it’s not asking a peer to help you work through a problem set, and realizing that if you do it this way it makes sense after all; it’s not consulting a librarian and having them help you find a resource you didn’t know you needed yet. Your mind does not come away more stimulated or enriched or nourished by the endeavor. You yourself are not forging new connections; and it makes a demonstrable difference to what we’ve come to call ‘learning outcomes.’”

Is it even possible to keep harried students from handing in AI-generated work? Manne knows she is embarking on an uphill battle. But to her, it is a fight worth having. Saddle up, Donna Quixote.

Cynthia Murrell, September 17, 2025

What Happens When Content Management Morphs into AI? A Jargon Blast

September 16, 2025

Dino 5 18 25Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I did a small project for a killer outfit in Cleveland. The BMW-driving owner of the operation talked about CxO this and CxO that. The jargon meant that “x” was a placeholder for  titles like “Chief People Officer” or “Chief Relationship Officer” or some similar GenX concept.

I suppose I have a built in force shield to some business jargon, but I did turn off my blocker to read CxO Today’s marketing article titled helpfully “Gartner: Optimize Enterprise Search to Equip AI Assistants and Agents.” I was puzzled by the advertising essay, but then I realized that almost anything goes in today’s world of sell stuff by using jargon.

The write up is by an “expert” who used to work in the content management field. I must admit that I have zero idea what content management means. Like knowledge management, the blending of an undefined noun with the word “management” creates jargon that mesmerizes certain types of  “leadership” or “deciders.”

The article (ad in essay form) is chock full of interesting concepts and words. The intent is to cause a “leadership” or “decider” to “reach out” for the consulting firm Gartner and buy reports or sit-downs with “experts.”

I noticed the term “enterprise search” in the title. What is “enterprise search” other than the foundation for the HP Autonomy dust up and the FAST Search & Transfer legal hassle? Most organizations struggle to find information that someone knows exists within an organization. “Leadership” decrees that “enterprise search” must be upgraded, improved, or installed. Today one can download an open source search system, ring up a cloud service offering remote indexing and search of “content,” or tap one of the super-well-funded newcomers like Glean or other AI-enabled search and retrieval systems.

Here’s what the write up advertorial says:

The advent of semantic search through vectorization and generative AI has revolutionized the way information is retrieved and synthesized. Search is no longer just an experience. It powers the experience by augmenting AI assistants. With RAG-based AI assistants and agents, relevant information fragments can be retrieved and resynthesized into new insights, whether interactively or proactively. However, the synthesis of accurate information depends largely on retrieving relevant data from multiple repositories. These repositories and the data they contain are rarely managed to support retrieval and synthesis beyond their primary application.

My translation of this jargon blast is that content proliferation is taking place and AI may be able to help “leadership” or a regular employee find the information needed to complete work. I mean who doesn’t want “RAG-based AI assistants” when trying to find a purchase order or to check the last quality report about a part that is failing 75 percent of the time for a big customer?

The fix is to embrace “touchpoints.” The write up says:

Multiple touchpoints and therefore multiple search services mean overlap in terms of indexes and usage. This results in unnecessary costs. These costs are both direct, such as licenses, subscriptions, compute and storage, and indirect, such as staff time spent on maintaining search services, incorrect decisions due to inaccurate information, and missed opportunities from lack of information. Additionally, relying on diverse technologies and configurations means that query evaluations vary, requiring different skills and expertise for maintenance and optimization.

To remediate this problem — that is, to deliver a useful enterprise search and retrieval system — the organization needs to:

aim for optimum touchpoints to information provided through maximum applications with minimum services. The ideal scenario is a single underlying service catering to all touchpoints, whether delivered as applications or in applications. However, this is often impractical due to the vast number of applications from numerous vendors… so

hire Gartner to figure out who is responsible for what, reduce the number of search vendors, and cut costs “by rationalizing the underlying search and synthesis services and associated technologies.”

In short, start over with enterprise search.

Several observations:

  1. Enterprise search is arguably more difficult than some other enterprise information problems. There are very good reasons for this, and they boil down to the nature of what employees need to do a job or complete a task
  2. AI is not going to solve the problem because these “wrappers” will reflect the problems in the content pools to which the systems have access
  3. Cost cutting is difficult because those given the job to analyze the “costs” of search discover that certain costs cannot be eliminated; therefore, their attendant licensing and support fees continue to become “pay now” invoices.

What do I make of this advertorial or content marketing item in CxO Today. First, I think calling it “news” is problematic. The write up is a bundle of jargon presented as a sales pitch. Second, the information in the marketing collateral is jargon and provides zero concrete information. And, third, the problem of enterprise search is in most organizational situations is usually a compromise forced on the organization because of work processes, legal snarls, secret government projects, corporate paranoia, and general turf battles inside the outfit itself.

The “fix” is not a study. The “fix” is not a search appliance as Google discovered. The “fix” is not smart software. If you want an answer that won’t work, I can identify whom not to call.

Stephen E Arnold, September 19, 2025

Who Needs Middle Managers? AI Outfits. MBAs Rejoice

September 16, 2025

Dino-5-18-25_thumb3No smart software involved. Just a dinobaby’s work.

I enjoy learning about new management trends. In most cases, these hip approaches to reaching a goal using people are better than old Saturday Night Live skits with John Belushi dressed as a bee. Here’s a good one if you enjoy the blindingly obvious insights of modern management thinkers.

Navigate to “Middle Managers Are Essential for AI Success.” That’s a title for you!

The write up reports without a trace of SNL snarkiness:

31% of employees say they’re actively working against their company’s AI initiatives. Middle managers can bridge the gap.

Whoa, Nellie. I thought companies were pushing forward with AI because, AI is everywhere. Microsoft Word, Google “search” (I use the term as a reminder that relevance is long gone), and from cloud providers like Salesforce.com. (Yeah, I know Salesforce is working hard to get the AI thing to go, and it is doing what big companies like to do: Cut costs by terminating humanoids.)

But the guts of the modern management method is a list (possibly assisted by AI?) The article explains without a bit of tongue in cheek élan “ways managers can turn anxious employees into AI champions.”

Here’s the list:

  1. Communicate the AI vision. [My observation: Isn’t that what AI is supposed to deliver? Fewer employees, no health care costs, no retirement costs, and no excess personnel because AI is so darned effective?”]
  2. Say, “I understand” and “Let’s talk about it.” [My observation: How long does psychological- and attitudinal-centric interactions take when there are fires to put out about an unhappy really big customer’s complaint about your firm’s product or service?]
  3. Explain to the employee how AI will pay off for the employee who fears AI won’t work or will cost the person his/her job? [My observation: A middle manager can definitely talk around, rationalize, and lie to make the person’s fear go away. Then the middle manager will write up the issue and forward it to HR or a superior. We don’t need a weak person on our team, right?]
  4. “Walk the talk.” [My observation: That’s a variant of fake it until you make it. The modern middle manager will use AI, probably realize that an AI system can output a good enough response so the “walk the talk” person can do the “walk the walk” to the parking lot to drive home after being replaced by an AI agent.]
  5. Give employees training and a test. [My observation: Adults love going to online training sessions and filling in the on-screen form to capture trainee responses. Get the answers wrong, and there is an automated agent pounding emails to the failing employee to report to security, turn in his/her badge, and get escorted out of the building.]

These five modern management tips or insights are LinkedIn-grade output. Who will be the first to implement these at an AI company or a firm working hard to AI-ify its operations. Millions I would wager.

Stephen E Arnold, September 16, 2025

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta