NSO Group: When Marketing and Confidence Mix with Specialized Software

May 13, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

Some specialized software must remain known only to a small number of professionals specifically involved in work related to national security. This is a dinobaby view, and I am not going to be swayed with “information wants to be free” arguments or assertions about the need to generate revenue to make the investors “whole.” Abandoning secrecy and common sense for glittering generalities and MBA mumbo jumbo is ill advised.

I read “Meta Wins $168 Million in Damages from Israeli Cyberintel Firm in Whatsapp Spyware Scandal.” The write up reports:

Meta won nearly $168 million in damages Tuesday from Israeli cyberintelligence company NSO Group, capping more than five years of litigation over a May 2019 attack that downloaded spyware on more than 1,400 WhatsApp users’ phones.

The decision is likely to be appealed, so "won" may be premature. What is interesting is this paragraph:

[Yaron] Shohat [NSO’s CEO] declined an interview outside the Ron V. Dellums Federal Courthouse, where the court proceedings were held.

From my point of view, fewer trade shows, less marketing, and a lower profile should be action items for Mr. Shohat, the NSO Group’s founders, and the firm’s lobbyists.

I watched as NSO Group became the poster child for specialized software. I was not happy as the firm’s systems and methods found their way into publicly accessible Web sites. I reacted negatively as other specialized software firms (these I will not identify) began describing their technology as similar to NSO Group’s.

The desperation of cyber intelligence, specialized software firms, and — yes — trade show operators is behind the crazed idea of making certain information widely available. I worked in the nuclear industry in the early 1970s. From Day One on the job, the message was, “Don’t talk.” I then shifted to a blue chip consulting firm working on a wide range of projects. From Day One on that job, the message was, “Don’t talk.” When I set up my own specialized research firm, the message I conveyed to my team members was, “Don’t talk.”

Then it seemed that everyone wanted to "talk." Marketing, speeches, brochures, even YouTube videos distributed information that was never intended to be made widely available. Jazzy pitches using terms like "zero day vulnerability" and other sales-oriented marketing lingo turned many people without operating context or specific knowledge into instant "experts" on specialized software.

I see this leakage of specialized software information in the OSINT blurbs on LinkedIn. I see it in social media posts by people with weird online handles like those used in Top Gun films. I see it when I go to a general purpose knowledge management meeting.

Now the specialized software industry is visible. In my opinion, that is not a good thing. I hope Mr. Shohat and others in the specialized software field continue the "decline to comment" approach. Knock off the PR. Focus on the entities authorized to use specialized software. The field is not for computer whiz kids, eGame players, and wannabe intelligence officers.

Do your job. Don’t talk. Do I think these marketing oriented 21st century specialized software companies will change their behavior? Answer: Oh, sure.

PS. I hope the backstory for Facebook / Meta’s interest in specialized software becomes part of a public court record. I am curious whether what I have learned matches up with the court statements. My hunch is that some social media executives have selective memories. That’s a useful skill, I have heard.

Stephen E Arnold, May 13, 2025

Alleged Oracle Misstep Leaves Hospitals Without EHR Access for Just Five Days

May 13, 2025

When I was young, hospitals were entirely run on paper records. It was a sight to behold. Recently, 45 hospitals involuntarily harkened back to those days, all because “Oracle Engineers Caused Dayslong Software Outage at U.S. Hospitals,” CNBC reports. Writer Ashley Capoot tells us:

“Oracle engineers mistakenly triggered a five-day software outage at a number of Community Health Systems hospitals, causing the facilities to temporarily return to paper-based patient records. CHS told CNBC that the outage involving Oracle Health, the company’s electronic health record (EHR) system, affected ‘several’ hospitals, leading them to activate ‘downtime procedures.’ Trade publication Becker’s Hospital Review reported that 45 hospitals were hit. The outage began on April 23, after engineers conducting maintenance work mistakenly deleted critical storage connected to a key database, a CHS spokesperson said in a statement. The outage was resolved on Monday, and was not related to a cyberattack or other security incident.”

That is a relief. Because gross incompetence is so much better than getting hacked. Oracle has only been operating the EHR system since 2022, when it bought Cerner. The acquisition made Oracle Health the second largest vendor in that market, after Epic Systems.

But perhaps Oracle is experiencing buyer’s remorse. This is just the latest in a string of stumbles the firm has made in this crucial role. In 2023, the US Department of Veterans Affairs paused deployment of its Oracle-based EHR platform over patient safety concerns. And just this March, the company’s federal EHR system experienced a nationwide outage. That snafu was resolved after six and a half hours, and all it took was a system reboot. Easy peasy. If only replacing deleted critical storage were so simple.

What healthcare system will be the next to go down due to an Oracle Health blunder?

Cynthia Murrell, May 13, 2025

Big Numbers and Bad Output: Is This the Google AI Story?

May 13, 2025

No AI. Just a dinobaby who gets revved up with buzzwords and baloney.

Alphabet Google reported financials that made stakeholders happy. Big numbers were thrown about. I did not know that 1.5 billion people used Google’s AI Overviews. Well, “use” might be misleading. I think the word might be “see” or “were shown” AI Overviews. The key point is that Google is making money despite its legal hassles and its ongoing battle with infrastructure costs.

I was, therefore, very surprised to read “Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense.” If the information in the write up is accurate, the factoid suggests that a lot of people may be getting bogus information. If true, what does this suggest about Alphabet Google?

The Cnet article says:

…the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

Those Nobel prize winners, brilliant Googlers, and long-time wizards like Jeff Dean seem to struggle with simple things. Remember the glue-the-cheese-on-pizza suggestion before Google’s AI improved?

The article adds by quoting a non-Google wizard:

“They [large language models] are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”

Turning in a lousy essay and showing up should be enough for a C grade. Is that enough for smart software with 1.5 billion users every three or four weeks?

The article reminds its readers:

This phenomenon is an entertaining example of LLMs’ tendency to make stuff up — what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn’t rooted in reality.

The outputs can be amusing for a person able to identify goofiness. But a grade school kid? Cnet wants users to craft better prompts.

I want to be 17 years old again and be a movie star. The reality is that I am 80 and look like a very old toad.

AI has to make money for Google. Other services are looking more appealing without the weight of legal judgments and hassles in numerous jurisdictions. But Google has already won the AI race. Its DeepMind unit is curing disease and crushing computational problems. I know these facts because Google’s PR and marketing machine is running at or near its red line.

But the 1.5 billion users potentially receiving made up, wrong, or hallucinatory information seems less than amusing to me.

Stephen E Arnold, May 13, 2025

China Smart, US Dumb: Twisting the LLM Daozi

May 12, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

That hard-hitting technology information service Venture Beat published an interesting article. Its title is “Alibaba ZeroSearch Lets AI Learn to Google Itself — Slashing Training Costs by 88 Percent.” The main point of the write up, in my opinion, is that Chinese engineers have done something really “smart.” The knife at the throat of US smart software companies is cost. The money fires will flame out unless more dollars are dumped into the innovation furnaces of smart software.

The Venture Beat story makes the point that the approach “could dramatically reduce the cost and complexity of training AI systems to search for information, eliminating the need for expensive commercial search engine APIs altogether.”

Oh, oh.

This is smart. Burning cash in pursuit of a fractional improvement is dumb, well, actually, stupid, if the write up’s information is accurate.

The Venture Beat story says:

The technique, called “ZeroSearch,” allows large language models (LLMs) to develop advanced search capabilities through a simulation approach rather than interacting with real search engines during the training process. This innovation could save companies significant API expenses while offering better control over how AI systems learn to retrieve information.

Is this a Snorkel variant hot from Stanford AI lab?

The write up does not delve into the synthetic data short cut to smart software. After some mumbo jumbo, the write up points out the meat of the “innovation”:

The cost savings are substantial. According to the researchers’ analysis, training with approximately 64,000 search queries using Google Search via SerpAPI would cost about $586.70, while using a 14B-parameter simulation LLM on four A100 GPUs costs only $70.80 — an 88% reduction.

Imagine. A dollar in cost becomes $0.12. If accurate, what should a savvy investor do? Pump money into an outfit like OpenAI or an xAI-type entity, or think harder about the China-smart solution?
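The arithmetic behind that $0.12 figure is easy to check against the numbers quoted from the researchers’ analysis:

```python
# Back-of-the-envelope check of the cost figures quoted above:
# ~$586.70 for ~64,000 queries via Google Search / SerpAPI versus
# ~$70.80 for a 14B-parameter simulation LLM on four A100 GPUs.
api_cost = 586.70
simulation_cost = 70.80

reduction = 1 - simulation_cost / api_cost
per_dollar = simulation_cost / api_cost

print(f"Cost reduction: {reduction:.0%}")      # -> 88%
print(f"Each dollar becomes ${per_dollar:.2f}")  # -> $0.12
```

The quoted 88 percent reduction and the dollar-to-12-cents framing are the same number viewed two ways.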

Venture Beat explains the implication of the alleged cost savings:

The impact could be substantial for the AI industry.

No kidding?

The Venture Beat analysts add this observation:

The irony is clear: in teaching AI to search without search engines, Alibaba may have created a technology that makes traditional search engines less necessary for AI development. As these systems become more self-sufficient, the technology landscape could look very different in just a few years.

Yep, irony. Free transformer technology. Free Snorkel technology. Free kinetic energy aimed into the core of the LLM money furnace.

If true, the implications are easy to outline. If bogus, the China Smart, US Dumb trope still captured ink and will be embedded in some smart software’s increasingly frequent hallucinatory outputs. At which point, the China Smart, US Dumb information gains traction and becomes “fact” to some.

Stephen E Arnold, May 12, 2025

Another Duh! Moment: AI Cannot Read Social Situations

May 12, 2025

No AI. Just a dinobaby who gets revved up with buzzwords and baloney.

I promise I won’t write “Duh!” in this blog post again. I read Science Daily’s story “Awkward. Humans Are Still Better Than AI at Reading the Room.” The write up says without total awareness:

Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene — a skill necessary for self-driving cars, assistive robots, and other technologies that rely on AI systems to navigate the real world.

Yeah, what about in smart weapons, deciding about health care for an elderly patient, or figuring out whether the obstacle is a painted barrier designed to demonstrate that full self driving is a work in progress. (I won’t position myself in front of a car with auto-sensing and automatic braking. You can have at it.)

The write up adds:

Video models were unable to accurately describe what people were doing in the videos. Even image models that were given a series of still frames to analyze could not reliably predict whether people were communicating. Language models were better at predicting human behavior, while video models were better at predicting neural activity in the brain.

Do these findings say to you, “Not ready for prime time”? They do to me.

One of the researchers who was in the weeds with the data points out:

“I think there’s something fundamental about the way humans are processing scenes that these models are missing.”

Okay, I prevaricated: “Duh!” (Do marketers care? Duh!)

Stephen E Arnold, May 12, 2025

Google, Its AI Search, and Web Site Traffic

May 12, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

I read “Google’s AI Search Switch Leaves Indie Websites Unmoored.” I think this is a Gen Y way of saying, “No traffic for you, bozos.” Of course, as a dinobaby, I am probably wrong.

Let’s look at the write up. It says:

many publishers said they either need to shut down or revamp their distribution strategy. Experts say this effort could ultimately reduce the quality of information Google can access for its search results and AI answers.

Okay, but this is just one way to look at Google’s delicious decision.

May I share some of my personal thoughts about what this traffic downshift means for those blue-chip consultant Googlers in charge:

First, in the good old days before the decline began in 2006, Google indexed bluebirds: sites that had to be checked for new content or “deltas” on an accelerated heartbeat. An example was whitehouse.gov (no, not the whitehouse.com porn site). Then there were sparrows. These plentiful Web sites could be checked on a relaxed schedule. I mean, how often do you visit the US government’s National Railway Retirement Web site, if it is still maintained and online? Yep, the correct answer is, “Never.” Then there were canaries. These were sites which might signal a surge in popularity. They were checked on a heartbeat that ensured Google wouldn’t miss a trend and fail to sell advertising to those lucky ad buyers.

So, bluebirds, canaries, and sparrows.

This shift means that Google can reduce costs by focusing on bluebirds and canaries. The sparrows — the site operated by someone’s grandmother to sell home made quilts — won’t get traffic unless the site operator buys advertising. It’s pay to play. If a site is not in the Google index, it just may not exist. Sure there are alternative Web search systems, but none, as far as I know, are close to the scope of the “old” Google in 2006.

Second, dropping sparrows, or pinging them once in a blue moon, will reduce the costs of crawling, indexing, and doing the behind-the-scenes work that consumes Google cash at an astonishing rate. Therefore, the myth of indexing the “Web” is going to persist, but the content of the index is not going to be “fresh.” The concept is that some sites like whitehouse.gov have important information that must be in search results. Non-priority sites just disappear or fade. Eventually the users won’t know something is missing, a blind spot assisted by the decline in education for some Google users. The top one percent can spot bad or missing information. The other 99 percent? Well, good luck.

Third, the change means that publishers will have some options. [a] They can block Google’s spider and chase the options. How’s Yandex.ru sound? [b] They can buy advertising and move forward. I suggest these publishers ask a Google advertising representative what the minimum spend is to get traffic. [c] Publishers can join together and try to come up with a joint effort to resist the increasingly aggressive business actions of Google. Do you have a Google button on your remote? Well, you will. [d] Be innovative. Yeah, no comment.
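The bluebird/canary/sparrow tiering described above amounts to a revisit scheduler. Here is a minimal sketch of the idea; the tier names come from this post, and the intervals, site names, and function are hypothetical illustrations, not Google’s actual crawl logic:

```python
from dataclasses import dataclass

# Hypothetical revisit intervals in hours for the three tiers
# described in the post. The numbers are illustrative only.
REVISIT_HOURS = {
    "bluebird": 1,       # high-value sites checked on an accelerated heartbeat
    "canary": 24,        # sites watched for a surge in popularity
    "sparrow": 24 * 90,  # plentiful sites pinged once in a blue moon, if at all
}

@dataclass
class Site:
    url: str
    tier: str
    hours_since_crawl: float

def due_for_crawl(site: Site) -> bool:
    """A site is recrawled only once its tier's interval has elapsed."""
    return site.hours_since_crawl >= REVISIT_HOURS[site.tier]

sites = [
    Site("whitehouse.gov", "bluebird", 2.0),
    Site("trending-topic.example", "canary", 30.0),
    Site("grandmas-quilts.example", "sparrow", 24 * 30),
]
for s in sites:
    print(s.url, "crawl now" if due_for_crawl(s) else "skip")
```

The cost lever is obvious in the sketch: demote a site from canary to sparrow and its crawl frequency, and therefore its crawl cost, drops by orders of magnitude.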

Net net: This item about the impact of AI Overviews is important. Just consider what Google gains and the pickle publishers and other Web sites now find themselves enjoying.

Stephen E Arnold, May 12, 2025

US Cloud Dominance? China Finds a Gap and Cuts a Path to the Middle East

May 11, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

How China Is Gaining Ground in the Middle East Cloud Computing Race” provides a summary of what may be a destabilizing move in the cloud computing market. The article says:

Huawei and Alibaba are outpacing established U.S. providers by aligning with government priorities and addressing data sovereignty concerns.

The “U.S. providers” are Amazon, Google, Microsoft, Oracle. The Chinese companies making gains in the Middle East include Alibaba, Huawei, and TenCent. Others are likely to follow.

The article notes:

Alibaba Cloud expanded strategically by opening data centers in the UAE in 2022 and Saudi Arabia last year. It entered the Saudi market by setting up a venture with STC. The Saudi Cloud Computing Company will support the kingdom’s Vision 2030 goals, under which the government hopes to diversify the economy away from oil dependency.

What’s China’s marketing angle? The write up identifies alignment and more sensitivity to “data sovereignty” in key Middle Eastern countries. But the secret sauce is, according to the write up:

A key differentiator has been the Chinese providers’ approach to artificial intelligence. While U.S. companies have been slow to adopt AI solutions in the region, Chinese providers have aggressively embedded AI into their offerings at a time when Gulf nations are pursuing AI leadership. During the Huawei Global AI Summit last year, Huawei Cloud’s chief technology officer, Bruno Zhang, showed how its AI could cut Saudi hospital diagnostic times by 40% using localized Arabic language models — a tangible benefit that theoretical AI platforms from Western providers couldn’t match.

This statement may or may not be 100 percent correct. For this blog post, let’s assume that it is close enough for horseshoes. First, the US cloud providers are positioned as “slow.” What happened to the “go fast” angle? Wasn’t Microsoft a “leader” in AI, catching Google napping in its cubicle? Google declared some sort of an emergency, and the AI carnival put up its midway.

Second, the Gulf “nations” wanted AI leadership, so Huawei presented a “tangible benefit” in the form of a diagnostic time reduction and localized Arabic language models. I know that US cloud providers provide translation services, but the pointy end of the stick shoved into the couch potato US cloud services was “localized language models.”

Furthermore, the Chinese providers provide cloud services and support on premises plus cloud functions. The “hybrid” angle matches the needs of some Middle Eastern information systems professionals’ ideas. The write up says:

The hybrid approach plays directly to the strengths of Chinese providers, who recognized this market preference early and built their regional strategy around it.

The Chinese vendors provide an approach that matches what prospects want. Seems simple and obvious. However, the article includes a quote that hints at another positive for the Chinese cloud players; to wit:

“The Chinese companies are showing that success in the Middle East depends as much on trust and cooperation as it does on computing power,” Luis Bravo, senior research analyst at Texas-based data center Hawk…

For me the differentiator may not be price, hybrid willingness, or localization. The killer word is trust. If the Gulf States do not trust the US vendors, China is likely to displace yesterday’s “only game in town” crowd.

Yep, trust. A killer benefit in some deals.

Stephen E Arnold, May 11, 2025

Microsoft AI: Little Numbers and Mucho Marketing

May 10, 2025

No AI. Just a dinobaby who gets revved up with buzzwords and baloney.

I am confused. The big AI outfits have spent and are spending big bucks on [a] marketing, [b] data centers, [c] marketing, [d] chips, [e] reorganizations, and [f] marketing. I think I have the main cost centers, but I may have missed one. Yeah, I did. Marketing.

Has the AI super machine run into some problems? Thanks, MidJourney, you were online today unlike OpenAI.

What is AI doing? It is definitely selling consulting services. Some wizards are using it to summarize documents because that takes a human time to do: reading, taking notes, then capturing the juicy bits. Let AI do that. And customer support? Yes, much cheaper, some say, than paying humans to talk to a mere customer.

Imagine my surprise when I read “Microsoft’s Big AI Hire Can’t Match OpenAI.” Microsoft’s AI leader among AI leaders, according to the write up, “hasn’t delivered the turnaround he was hired to bring.” Microsoft caught the Google by surprise a couple of years ago, caused a Googley Code Red or Yellow or whatever, and helped launch the “AI is the next big thing innovators have been waiting for.”

The write up asserts:

At Microsoft’s annual executive huddle last month, the company’s chief financial officer, Amy Hood, put up a slide that charted the number of users for its Copilot consumer AI tool over the past year. It was essentially a flat line, showing around 20 million weekly users. On the same slide was another line showing ChatGPT’s growth over the same period, arching ever upward toward 400 million weekly users. OpenAI’s iconic chatbot was soaring, while Microsoft’s best hope for a mass-adoption AI tool was idling.

Keep in mind that Google suggested it had 1.5 billion users of its Gemini service, and (I think) Google implied that its AI is the quantumly supreme smart software. I may have that wrong and Google’s approach just wins chess, creates new drugs, and suggests that one can glue cheese on pizza. I may have these achievements confused, but I am an 80 year old dinobaby and easily confused these days.

The write up also contains some information I found a bit troubling; to wit:

And at this point, Microsoft is just not in the running to build a model that can compete with the best from OpenAI, Anthropic, Google, and even xAI. The projects that people have mentioned to me feel incremental, as opposed to leapfrogging the competition.

One can argue that Microsoft does not have to be in the big leagues. The company usually takes three or more releases to get a feature working. (How about those printers that don’t work?) The number of Softie software users is big. Put the new functionality in a release and — bingo! — market leadership. That SharePoint is a wonderful content management system. Just ask some of the security team in the Israeli military struggling with a “squadron” fumble.

Several observations:

  1. Microsoft’s dialing back some data center action may be a response to the under performance of its AI is the future push. If not, then maybe Microsoft has just pulled a Bob or a Clippy?
  2. I am not sure that the payoffs for other AI leaders’ investments are going to grab the brass ring or produce a winning lottery ticket. So many people desperately want AI to deliver dump trucks of gold dust to their cubicles that the neediness is palpable. AI is — it must be — the next big thing.
  3. Users are finding that for some use cases, AI is definitely a winner. College students use it to make more free time for hanging out and using TikTok-type services. Law firms find that AI is good enough to track down obscure cases that can be used in court as long as a human who knows the legal landscape checks the references before handing them over to a judge who can use an ATM machine and a mobile phone for voice calls. For many applications, the hallucination issue looms large.
  4. China’s free smart software models work reasonably well and have ignited such diverse applications as automated pig butchering and proving that cheaper CPUs and GPUs work in a “good enough” way just for less money.

I don’t want to pick on Microsoft, but I want to ask a question, “Is this the first of the big technology companies hungry and thirsty for the next big thing starting to find out that AI may not deliver?”

Stephen E Arnold, May 13, 2025

IBM AI Study: Would The Research Report Get an A in Statistics 202?

May 9, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

IBM, reinvigorated with its easy-to-use, backwards-compatible, AI-capable mainframe released a research report about AI. Will these findings cause the new IBM AI-capable mainframe to sell like Jeopardy / Watson “I won” T shirts?

Perhaps.

The report is “Five Mindshifts to Supercharge Business Growth.” It runs a mere 40 pages and requires no more time than configuring your new LinuxONE Emperor 5 mainframe. Well, the report can be absorbed in less time, but the Emperor 5 is a piece of cake as IBM mainframes go.

Here are a few of the findings revealed by IBM in its research report:

AI can improve customer “experience”. I think this means that customer service becomes better with AI in it. Study says, “72 percent of those in the sample agree.”

Turbulence becomes opportunity. 100 percent of the IBM marketers assembling the report agree. I am not sure how many CEOs are into this concept; for example, Hollywood motion picture firms or Georgia Pacific which closed a factory and told workers not to come in tomorrow.

Here’s a graphic from the IBM study. Do you know what’s missing? I will give you five seconds, as Arvin Haddad, the LA real estate influencer, says in his entertaining YouTube videos:

image

The answer is, “Increasing revenues, boosting revenues, and keeping stakeholders thrilled with their payoffs.” The items listed by IBM really don’t count, do they?

“Embrace AI-fueled creative destruction.” Yep, another 100 percenter from the IBM team. No supporting data, no verification, and not even a hint of proof that AI-fueled creative destruction is doing much more than improving the lives of lots of venture outfits and some of the US AI leaders. That cash burn could set the forest on fire, couldn’t it? Answer: Of course not.

I must admit I was baffled by this table of data:

image

Accelerate growth and efficiency goes down with generative AI. (Is Dr. Gary Marcus right?). Enhanced decision making goes up with generative AI. Are the decisions based on verifiable facts or hallucinated outputs? Maybe busy executives in the sample choose to believe what AI outputs because a computer like the Emperor 5 was involved. Maybe “easy” is better than old-fashioned problem solving which is expensive, slow, and contentious. “Just let AI tell me” is a more modern, streamlined approach to decision making in a time of uncertainty. And the dotted lines? Hmmm.

On page 40 of the report, I spotted this factoid. It is tiny and hard to read.

image

The text says, “50 percent say their organization has disconnected technology due to the pace of recent investments.” I am not exactly sure what this means. Operative words are “disconnected” and “pace of … investments.” I would hazard an interpretation: “Hey, this AI costs too much, and the payoff is just not obvious.”

I wish to offer some observations:

  1. IBM spent some serious money designing this report
  2. The pagination is in terms of double page spreads, so the “study” plus rah rah consumes about 80 pages if one were to print it out. On my laser printer the pages are illegible for a human, but for the designers, the approach showcases the weird ice cubes, the dotted lines, and allows important factoids to be overlooked
  3. The combination of data (which strike me as less of a home run for the AI fan and more of a report about AI friction) and flat out marketing razzle dazzle is intriguing. I would have enjoyed sitting in the meetings which locked into this approach. My hunch is that when someone thought about the allegedly valid results and said, “You know these data are sort of anti-AI,” then the others in the meeting said, “We have to convert the study into marketing diamonds.” The result? The double truck, design-infused, data tinged report.

Good work, IBM. The study will definitely sell truckloads of those Emperor 5 mainframes.

Stephen E Arnold, May 9, 2025

Google: Making Users Cross Their Eyes in Confusion

May 9, 2025

No AI, just a dinobaby watching the world respond to the tech bros.

I read “Don’t Make It Like Google.” The article points out that Google’s “control” extends globally. The company’s approach to software and design are ubiquitous. People just make software like Google because it seems “right.”

The author of the essay says:

Developers frequently aim to make things “like Google” because it feels familiar and, seemingly, the right way to do things. In the past, this was an implicit influence, but now it’s direct: Google became the platform for web applications (Chrome) and mobile applications (Android). It also created a framework for human-machine interaction: Material Design. Now, “doing it like Google” isn’t just desirable; it’s necessary.

Regulators in the European Union have not figured out how to respond to this type of alleged “monopoly.”

The author points out:

Most tech products now look indistinguishable, just a blobby primordial mess of colors.

Why? The author provides an answer:

Google’s actual UI & UX design is terrible. Whether mass-market or enterprise, web or mobile, its interfaces are chaotic and confusing. Every time I use Google Drive or the G Suite admin console, I feel lost. Neither experience nor intuition helps—I feel like an old man seeing a computer for the first time.

I quite like the reference to the author’s feeling like an “old man seeing a computer for the first time.” As a dinobaby, I find Google’s approach to making functions available — note, I am going to use a dinobaby term — stupid. Simple functions to me are sorting emails by sender and a keyword. I have not figured out how to do this in Gmail. I have given up on Google Maps. I have zero clue how to access the “old” street view with a basic map on a mobile device. Hey, am I the only person in an unfamiliar town trying to locate a San Jose-type office building in a tan office park? I assume I am.

The author points out:

Instead of prioritizing objectively good user experiences, the more profitable choice is often to mimic Google’s design. Not because developers are bad or lazy. Not because users enjoy clunky interfaces. But because it “makes sense” from the perspective of development costs and marketing. It’s tricky to praise Apple while criticizing Google because where Google has clumsy interfaces, Apple has bugs and arbitrary restrictions. But if we focus purely on interface design, Apple demonstrates how influence over users and developers can foster generations of well-designed products. On average, an app in Apple’s ecosystem is more polished and user-friendly than one in Google’s.

I am not sure that Apple is that much better than Google, but for me, the essay makes clear that giant US technology companies shape the user’s reality. The way information is presented and what expert users learn may not be appropriate for most people. I understand that these companies have to have a design motif or template. I understand that big companies have “experts” who determine what users do and want.

The author of the essay says:

We’ve become accustomed to the unintuitive interfaces of washing machines and microwaves. A new washing machine may be quieter, more efficient, and more aesthetically pleasing, yet its dials and icons still feel alien; or your washing machine now requires an app. Manufacturers have no incentive to improve this aspect—they just do it “like the Google of their industry.” And the “Google” of any industry inevitably gets worse over time.

I disagree. I think that making interfaces impossible is a great thing. Now here’s my reasoning: Who wants to expend energy figuring out a “better way”? The name of the game is to get eyeballs. Looking like Google or any of the big technology companies means that one just rolls over and takes what these firms offer as a default. Mind control and behavior conditioning are much easier and ultimately more profitable than approaching a problem from the user’s point of view. Why not define what a user gets, make it difficult or impossible to achieve a particular outcome, and force the individual to take what is presented as the one true way?

That makes business sense.

Stephen E Arnold, May 9, 2025
