Will Amazon Become the Bell Labs of Consumer Products?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
I did some work at Bell Labs and then at the Judge Greene crafted Bellcore (Bell Communications Research). My recollection is that the place was quiet, uneventful, and had a lousy cafeteria. The Cherry Hill Mall provided slightly better food, just slightly. Most of the people were normal compared to the nuclear engineers at Halliburton and my crazed colleagues at the blue chip consulting firm dumb enough to hire me before I became a dinobaby. (Did you know that security at the Cherry Hill Mall had a golf cart to help Bell Labs’ employees find their vehicles? The reason? Bell Labs hired staff to deal with this recurring problem. Yes, Howard, Alan, and I lost our car when we went to lunch. I finally started parking in the same place and wrote the door exit and lamp number down in my calendar. Problem solved!)
Is Amazon like that? On a visit to Amazon, I formed an impression somewhat different from Bell Labs, Halliburton, and the consulting firm. The staff were not exactly problematic. I just recall having to repeat and explain things. Amazon struck me as an online retailer with money and challenges in handling traffic. The people with whom I interacted when I visited with several US government professionals were nice and different from the technical professionals at the organizations which paid me cash money.
Is this important? Yes. I don’t think of Amazon as particularly innovative. When it wanted to do open source search, it hired some people from Lucid Imagination, now Lucid Works. Amazon just did what other Lucene/Solr large-scale users did: Index content and allow people to run queries. Not too innovative in my book. Amazon also industrialized back office and warehouse projects. These are jobs that require finding existing products and consultants, asking them to propose “solutions,” picking one, and getting the workflow working. Again, not particularly difficult when compared to the holographic memory craziness at Bell Labs or the consulting firm’s business of inventing consumer products for companies in the Fortune 500 that would sell and get the consulting firm’s staggering fees paid in cash promptly. In terms of the nuclear engineering work, Amazon was, and probably still is, not in the game. Some of the rocket people are, but the majority of the Amazon workers are in retail, digital plumbing, and creating dark pattern interfaces. This is “honorable” work, but it is not invention in the sense of slick Monte Carlo code cranked out by Halliburton’s Dr. Julian Steyn or multi-frequency laser technology for jamming more data through a fiber optic connection.
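For readers who have not lived with Lucene/Solr, the “index content and allow people to run queries” pattern this dinobaby shrugs at can be sketched in a few lines. This is a toy inverted index, not Lucene; the document texts and names here are purely illustrative:

```python
from collections import defaultdict

# Toy inverted index: the "index content, run queries" pattern.
# Illustrative only; Lucene/Solr add analyzers, ranking, sharding, etc.
index = defaultdict(set)

def add_document(doc_id, text):
    """Map each token in the document to the set of documents containing it."""
    for token in text.lower().split():
        index[token].add(doc_id)

def query(term):
    """Return the sorted list of document ids containing the term."""
    return sorted(index.get(term.lower(), set()))

add_document(1, "Amazon sells books")
add_document(2, "Bell Labs invented the transistor")
add_document(3, "Amazon industrialized the warehouse")

print(query("Amazon"))  # [1, 3]
```

The point of the sketch: the core mechanism is plumbing, not invention, which is the argument above.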
I read “Amazon Taps Xbox Co-Founder to Lead new Team Developing Breakthrough Consumer Products.” I asked myself, “Is Amazon now in the Bell Labs’ concept space?” The write up tries to answer my question, stating:
The ZeroOne team is spread across Seattle, San Francisco and Sunnyvale, California, and is focused on both hardware and software projects, according to job postings from the past month. The name is a nod to its mission of developing emerging product ideas from conception to launch, or “zero to one.” Amazon has a checkered history in hardware, with hits including the Kindle e-reader, Echo smart speaker and Fire streaming sticks, as well as flops like the Fire Phone, Halo fitness tracker and Glow kids teleconferencing device. Many of the products emerged from Lab126, Amazon’s hardware research and development unit, which is based in Silicon Valley.
Okay, the Fire Phone (maybe Foney) and the Glow thing for kids? Innovative? I suppose. But to achieve success in raw innovation like the firms at which I was an employee? No, Amazon is not in that concept space. Amazon is more comfortable cutting a deal with Elastic instead of “inventing” something like Google’s Transformer or Claude Shannon’s approach to extracting a signal from noise. Amazon sells books and provides an almost clueless interface for managing them on the Kindle eReader.
The write up says (and I believe everything I read on the Internet):
Amazon has pulled in staffers from other business units that have experience developing innovative technologies, including its Alexa voice assistant, Luna cloud gaming service and Halo sleep tracker, according to LinkedIn profiles of ZeroOne employees. The head of a projection mapping startup called Lightform that Amazon acquired is helping lead the group. While Amazon is expanding this particular corner of its devices group, the company is scaling back other areas of the sprawling devices and services division.
Innovation is a risky business. Amazon sells stuff and provides online access with uptime of 98 or 99 percent. It does not “do” innovation. I wrote a book chapter about Amazon’s blockchain patents. What happened to that technology, some of which struck me as promising and sort of novel given the standards for US patents? The answer, based on the information I have seen since I wrote the book chapter, is, “Not much.” In less time, Telegram dumped out dozens of “inventions.” These have ranged from sticking crypto wallets into every Messenger user’s mini app to refining the bot technology to display third-party, off-Telegram Web sites on the fly for about 900 million Messenger users.
Amazon hit a dead end with Alexa and something called Halo.
When an alleged criminal organization operating as an “Airbnb” outfit with no fixed offices and minimal staff can innovate and Amazon with its warehouses cannot, there’s a useful point of differentiation in my mind.
The write up reports:
Earlier this month, Amazon laid off about 100 of the group’s employees. The job cuts included staffers working on Alexa and Amazon Kids, which develops services for children, as well as Lab126, according to public filings and people familiar with the matter who asked not to be named due to confidentiality. More than 50 employees were laid off at Amazon’s Lab126 facilities in Sunnyvale, according to Worker Adjustment and Retraining Notification (WARN) filings in California.
Okay. Fire up a new unit. Will the approach work? I hope for stakeholders’ and employees’ sake, Amazon hits a home run. But in the back of my mind, innovation is difficult. Quite special people are needed. The correct organizational set up or essentially zero set up is required. Then the odds are usually against innovation, which, if truly novel, evokes resistance. New is threatening.
Can the Bezos bulldozer shift into high gear and do the invention thing? I don’t know but I have some nagging doubts.
Stephen E Arnold, June 12, 2025
Musk, Grok, and Banning: Another Burning Tesla?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:
A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.
I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?
The write up says:
Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts. Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”
I did not feel comfortable with Grok because of content exclusion or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
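The coverage void mechanism is simple enough to sketch. Filter a source out before indexing and no query, however clever, will ever surface it. A toy illustration; the source names and document texts are hypothetical:

```python
# Toy illustration of a coverage void: documents from an excluded
# source never reach the index, so no query can surface them.
EXCLUDED_SOURCES = {"inconvenient-outlet"}

documents = [
    {"source": "friendly-outlet", "text": "company praised for growth"},
    {"source": "inconvenient-outlet", "text": "company fined by regulator"},
]

# Exclusion happens at ingest time, invisibly to the searcher.
index = [d for d in documents if d["source"] not in EXCLUDED_SOURCES]

def search(term):
    """Return the texts in the index containing the term."""
    return [d["text"] for d in index if term in d["text"]]

print(search("company"))  # only the friendly-outlet story survives
print(search("fined"))    # the regulator story is simply gone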
The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.
The write up adds another consideration to smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but the notion of relying on weaponized information or systems that spy on the user presents new challenges.
The write up reports:
Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.
How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and them appear more informed.
The risks of reliance on Grok or any other smart software include:
- The output is incomplete
- The output is weaponized or shaped by intentional actions or by factors beyond the developers’ control
- The output is simply wrong, made up, or hallucinated
- Users acting as though shallow knowledge is sufficient for a decision.
The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.
Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.
Stephen E Arnold, June 12, 2025
LLMs, Dread, and Good Enough Software (Fast and Cheap)
June 11, 2025
Just a dinobaby and no AI: How horrible an approach?
More philosopher programmers have grabbed a keyboard and loosed their inner Plato. A good example is the essay “AI: Accelerated Incompetence” by Doug Slater. I have a hypothesis about this embrace of epistemological excitement, but that will appear at the end of this dinobaby post.
The write up posits:
In software engineering, over-reliance on LLMs accelerates incompetence. LLMs can’t replace human critical thinking.
The driver of the essay is that some believe that programmers should use outputs from large language models to generate software. Doug does not focus on Google and Microsoft. Both companies are convinced that smart software can write good enough code. (Good enough is the new standard of excellence at many firms, including the high-flying, thin-air breathing Googlers and Softies.)
The write up identifies three beliefs, memes, or MBAisms about this use of LLMs. These are:
- LLMs are my friend. Actually LLMs are part of a push to get more from humanoids involved in things technical. For a believer, time is gained using LLMs. To a person with actual knowledge, LLMs create work in order to catch errors.
- Humans are unnecessary. This is the goal of the bean counter. The goal of the human is to deliver something that works (mostly). The CFO is supposed to reduce costs and deliver (real or spreadsheet fantasy) profits. Humans, at least for now, are needed when creating software. Programmers know how to do something and usually demonstrate “nuance”; that is, intuitive actions and thoughts.
- LLMs can do what humans do, especially programmers and probably other technical professionals. As evidence of doing what humans do, the anecdote about the robot dog attacking its owner illustrates that smart software has some glitches. Hallucinations? Yep, those too.
The wrap up to the essay states:
If you had hoped that AI would launch your engineering career to the next level, be warned that it could do the opposite. LLMs can accelerate incompetence. If you’re a skilled, experienced engineer and you fear that AI will make you unemployable, adopt a more nuanced view. LLMs can’t replace human engineering. The business allure of AI is reduced costs through commoditized engineering, but just like offshore engineering talent brings forth mixed fruit, LLMs fall short and open risks. The AI hype cycle will eventually peak. Companies which overuse AI now will inherit a long tail of costs, and they’ll either pivot or go extinct.
As a philosophical essay crafted by a programmer, I think the write up is very good. If I were teaching again, I would award the essay an A minus. I would suggest adding some concrete examples, like “Google suggests gluing cheese on pizza.”
Now what’s the motivation for the write up? My hypothesis is that some professional developers have a Spidey sense that the diffident financial professional will license smart software and fire humanoids who write code. Is this a prudent decision? For the bean counter, it is self preservation. He or she does not want to be sent to find a future elsewhere. For the programmer, the drum beat of efficiency and the fife of cost reduction are now loud enough to leak through noise-reduction headphones. Plato did not have an LLM, and he hallucinated with the chairs and rear view mirror metaphors.
Stephen E Arnold, June 11, 2025
A Decade after WeChat a Marketer Touts OpenAI as the Everything App
June 10, 2025
Just a dinobaby and no AI: How horrible an approach?
Lester thinks OpenAI will become the Internet. Okay, Lester, are you on the everything app bandwagon? That buggy rolled in China and became one of the little engines that could for social scoring. “How ChatGPT Could Replace the Internet As We Know It” provides quite a bit about Lester. Zipping past the winner prose, I noted this passage:
In fact, according to Khyati Hooda of Keywords Everywhere, ChatGPT handles 54% of queries without using traditional search engines. This alarming stat indicates a shift in how users seek information. As the adoption grows and ChatGPT cements itself as the single source of information, the internet as we know it becomes kinda pointless.
One question: Where does the information originate? From intercepted mobile communications, from nifty listening devices like smart TVs, or from WeChat-style methods? The jump from the Internet to an everything app is a nifty way to state that everything is reducible to bits. Get the bits, get the “information.”
Lester says:
Basically, ChatGPT is cutting out the middleman, but what’s even scarier is that it’s working. ChatGPT reached 1 million users in just 5 days and has 400 million weekly active users as of early 2025, making it the fastest-growing consumer app in history. The platform receives over 5.19 billion visits per month, ranking as the 8th most visited website in the world.
He explains:
What started as a chatbot has become a platform where people book travel, plan meals, write emails, create schedules, and even do homework. Surveys show that around 80% of ChatGPT users leverage it for professional tasks such as drafting emails, creating reports, and generating marketing content. This marks a fundamental shift in how we engage with the internet, where more everyday tasks move from web browsing to a prompt.
How likely is this shift, Lester? Lester responds in a ZDNet-type way:
I wouldn’t be surprised if ChatGPT added a super agent that does tasks autonomously by December of this year. Amazed? Sure. But surprised? Nah. It’s not hard to imagine a near future where ChatGPT doesn’t just replace the internet but OpenAI becomes the foundation for future companies, in the same way that roads became the foundation for civilization.
Lester interprets the shift as mostly good news. Jobs will be created. There are a few minor problems; for instance, retraining and changing business models. Otherwise, Lester does not see too many new problems. In fact, he makes his message clear:
If you stand still, never evolve, never improve your skills, and never push yourself to be better, life will decimate you like a gorilla vs 100 men.
But what if the gorilla is Google? And that Google creature has friends like Microsoft and others. A super human like Elon Musk or Pavel Durov might jump into the fray against the men, presumably from OpenAI.
Convergence and collapsing to an “everything” app is logical. However, humans are not logical. Plus, smart software has several limitations. These include cost, energy requirements, access to information, pushback from humans who cannot be or do not want to be “retrained,” and making stuff up (you know, hallucinations like gluing cheese on pizza).
Net net: Old school search is now wearing a new furry suit, but WeChat and Telegram are existing “everything” apps. Mr. Musk and Sam AI-Man know or sense there is a future in co-opting the idea, bolting on smart software, and hitting the marketing start button. However, envisioning and pulling off are two different things. China allegedly helped WeChat think about its role; Telegram’s founder visited Russia dozens of times prior to his arrest in France. What nation state will husband a Western European or American “everything” app?
Mr. Musk has a city in Texas. Perhaps that’s why he has participated in a shadow dance with Telegram?
Lester, you have identified the “everything” app. Good work. Don’t forget WeChat débuted in 2011. Telegram rolled out in 2013. Now a decade later, the “everything” app is the next big thing. Okay. But who is the “we” in the essay’s title? It is not this dinobaby.
Stephen E Arnold, June 10, 2025
Google Places a Big Bet, and It May Not Pay Off
June 10, 2025
Just a dinobaby and no AI: How horrible an approach?
Each day brings more AI news. I have playing in the background a video called “The AI Math That Left Number Theorists Speechless.” That word “speechless” does not apply because the interlocutor and the math whiz are chatty Cathies. The video runs a little less than two hours. Speechless? No, when it comes to smart software some people become verbose and excited. I like to be verbose. I don’t like to get excited about artificial intelligence. I am a dinobaby, remember?
I clicked on the first item in my trusty Overflight service and this write up greeted me: “Google Is Burying the Web Alive.” How does one “bury” a digital service? I assumed or inferred that the idea is that the alleged multi-monopoly Google was going to create another monopoly for itself anchored in AI.
The write up says:
[AI Overviews are] Google’s “most powerful AI search, with more advanced reasoning and multimodality, and the ability to go deeper through follow-up questions and helpful links to the web,” the company says, “breaking down your question into subtopics and issuing a multitude of queries simultaneously on your behalf.” It’s available to everyone. It’s a lot like using AI-first chatbots that have search functions, like those from OpenAI, Anthropic, and Perplexity, and Google says it’s destined for greater things than a small tab. “As we get feedback, we’ll graduate many features and capabilities from AI Mode right into the core Search experience,” the company says.
Let’s slow down the buggy. A completely new product or service has some baggage on board. Like “New Coke”, quite a few people liked “old Coke.” The company figured it out and innovated and finally just started buying beverage outfits that were pulling new customers. Then there is the old chestnut by the buggy stand which says, “Most start ups fail.” Finally, there is the shadow of impatient stakeholders. Fail to keep those numbers up, and consequences manifest themselves.
The write up gallops forward:
From the very first use, however, AI Mode crystallized something about Google’s priorities and in particular its relationship to the web from which the company has drawn, and returned, many hundreds of billions of dollars of value. AI Overviews demoted links, quite literally pushing content from the web down on the page, and summarizing its contents for digestion without clicking…
Those clicks make Google’s money flow. It does not matter if the user clicks to view a YouTube short or a click to view a Web page about a vacation rental. Clicks equal revenue. Fewer clicks may translate to less revenue. If this is true, then what happens?
The write up suggests an answer: The good old Web is marginalized. Kaput. Dead as a door nail:
of course, Google is already working on ads for both Overviews and AI Mode). In its drive to embrace AI, Google is further concealing the raw material that fuels it, demoting links as it continues to ingest them for abstraction. Google may still retain plenty of attention to monetize and perhaps keep even more of it for itself, now that it doesn’t need to send people elsewhere; in the process, however, it really is starving the web that supplies it with data on which to train and from which to draw up-to-date details. (Or, one might say, putting it out of its misery.)
As a dinobaby, I quite like the old Web. Again we have a giant company doing something “new” and “different.” How will those bold innovations work out? That’s the $64 question (a rigged game show, my mother told me).
The article concludes:
In any case, the signals from Google — despite its unconvincing suggestions to the contrary — are clear: It’ll do anything to win the AI race. If that means burying the web, then so be it.
Whoa, Nellie!
Let’s think about what the Google is allegedly doing. First, the Google is spending money to index the “Web.” My team tells me that Google is indexing less thoroughly than it was 10 years ago. Google indexes where the traffic is, and quite a bit of that traffic is to Google itself. The losers have been grousing about a lack of traffic for years. I have worked with a consumer Web site since 1993, and the traffic cratered about seven years ago. Why? Google selected sites to boost because of the link between advertiser appetite and clicks. The owner of this consumer Web site cooked up a bit of jargon for what Google was doing; he called it “steering.” The idea is that Google shaped its crawls and “relevance” in order to maximize revenue from known big ad spenders.
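The “steering” idea is easy to model: add a revenue-linked boost to an otherwise relevance-based score. This is a hypothetical sketch of the mechanism the consumer Web site owner was describing, not Google’s actual ranking; the weights and site names are made up:

```python
# Hypothetical "steering": a revenue-linked boost tilts an otherwise
# relevance-based ranking toward big ad spenders. All numbers invented.
def score(relevance, ad_spend, boost_weight=0.5):
    """Blend topical relevance with an advertiser-spend boost."""
    return relevance + boost_weight * ad_spend

sites = [
    ("small-consumer-site", 0.9, 0.0),   # very relevant, no ad budget
    ("big-advertiser-site", 0.7, 0.6),   # less relevant, big ad spender
]

# Sort descending by blended score; the spender outranks the more
# relevant site (0.7 + 0.5 * 0.6 = 1.0 beats 0.9).
ranked = sorted(sites, key=lambda s: score(s[1], s[2]), reverse=True)
print([name for name, _, _ in ranked])
```

Turn the boost weight up and relevance stops mattering much, which is exactly the grousing the traffic losers have been doing for years.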
Google is not burying anything. The company is selecting to maximize financial benefits. My experience suggests that when Google strays too far from what stakeholders want, the company will be whipped until it gets the horses under control. Second, the AI revolution poses a significant challenge for a number of reasons. Among these is the users’ desire for the information equivalent of a “dumb” mobile phone. The cacophony of digital information is too much and creates a “why bother” response. Google wants to respond in the hope that it can come up with a product or service that produces as much money as the old Yahoo Overture GoTo model. Hope, however, is not reality.
As a dinobaby, I think Google has a reasonably good chance of stratifying its “users”. Some will pay. Some will consume the sponsored by ads AI output. Some will find a way to get the restaurant address surrounded by advertisements.
What about AI?
I am not sure that anyone knows. Both Google and Microsoft have to find a way to produce significant and sustainable revenue from the large language model method which has come to be synonymous with smart software. The costs are massive. The use cases usually focus on firing people for cost savings until the AI doesn’t work. Then the AI supporters just hire people again. That’s the Klarna call to think clearly again.
Net net: The Google is making a big bet that it can increase its revenues with smart software. How probable is it that the “new” Google will turn out like the “New Coke”? How much of the AI hype is just l’entreprise parle dans le vide (the company talking into the void)? The hype may be the inverse of reality. Something will be buried, and it may not be the “Web.”
Stephen E Arnold, June 10, 2025
Is Google Headed for the Big Computer Room in the Sky? Actually Yes It Is
June 9, 2025
Just a dinobaby and no AI: How horrible an approach?
As a freshman in college in 1962, I had seen computers like the clunky IBMs at Keystone Steel & Wire Co., where my father worked as some sort of numbers guy, a bean counter, I guessed. “Look but don’t touch,” he said, not even glancing up from his desk with two adding machines, pencils, and ledgers. I looked.
Once I convinced a professor of poetry to hire me to help him index Latin sermons, I was hooked. Next up were Digital Equipment machines. At Halliburton Nuclear a fellow named Bill Montano listened to my chatter about searching text. Then I bopped into a big blue chip consulting firm, and there were computing machines in the different offices I visited. When I ended up at the database company in the early 1980s, I had my own Wang in my closet. There you go. A file cabinet sized gizmo, weird hums, and connections to terminals in my little space and to other people who could “touch” their overheated hearts. Then the Internet moved from the research world into the mainstream. Zoom. Things were changing.
Computer companies arrived, surged, and faded. Then personal computer companies arrived, surged, and faded. The cadence of the computer industry was easy to dance to. As Carmen Giménez used to say on American Bandstand in 1959, “I like the beat and it is easy to dance to.” I have been tapping along and doing a little jig in the computer (online) sector for many years, around 60 I think.
I read “Google As You Know It Is Slowly Dying.” Okay, another tech outfit moving through its life cycle. Break out your copy of Elisabeth Kübler-Ross’s On Death and Dying. Jump to Acceptance section, read it, and move on. But, no. It is time for one more “real news” write up to explain that Googzilla is heading toward its elder care facility. This is not news. If it is, fire up your Burroughs B5500 and do your inventory update.
The essay presents the obvious as “new.” The Vox write up says:
Google is dominant enough that two federal judges recently ruled that it’s operating as an illegal monopoly, and the company is currently waiting to see if it will be broken up.
From my point of view, this is an important development. Furthermore, it has nothing to do with the smart software approach to search. After two decades of doing exactly what it wanted, Google — like Apple and Meta — is in the spotlight. Those spotlights are solar powered and likely to remain on for the foreseeable future. That’s news.
In this spotlight are companies providing a “new” way to search. Since search is required to do most things online, the Google has to figure out how to respond in an intelligent way to two — count ‘em — big problems: Government actions and upstarts using Google’s own Transformer innovation.
The intersection of regulatory action and the appearance of an alternative to “search as you know it” is the same old story, just jazzed up with smart software, efficiency, the next big thing, Sky Net, and more. The write up says:
The government might not be the biggest threat to Google dominance, however. AI has been chipping away at the foundation of the web in the past couple of years, as people have increasingly turned to tools like ChatGPT and Perplexity to find information online.
My view is that it is the intersection, not the things themselves that have created the end-of-the-line sign for the Google bullet train. Google will try to do what it has done since Backrub: Appropriate ideas like Yahoo, Overture, and GoTo advertising methods, create a bar in which patrons pay to go in and out (advertisers and users), and treat the world as a bunch of dorks by whiz kids who just know so much more about the digital world. No more.
Google’s legacy is the road map for other companies lucky or skilled enough to replicate the approach. Consequently, the Google is in Code Red, announcing so many “new” products and services I certainly can’t keep them straight, and serving up a combination of hallucinatory output and irrelevant search results. The combination is problematic as the regulators close in.
The write up concludes with this statement:
In the chaotic, early days of the web, Google got popular by simplifying the intimidating task of finding things online, as the Washington Post’s Geoffrey A. Fowler points out. Its supremacy in this new AI-powered future is far less certain. Maybe another startup will come along and simplify things this time around, so you can have a user-friendly bot explain things to you, book travel for you, and make movies for you.
I disagree. Google became popular because it indexed Web sites, used some Clever ideas, and implemented processes that produced pages usually related to the user’s query. Over time, wrapper software provided Google with a way to optimize its revenue. Innovation eluded the company. In the social media “space,” Google bumbled Orkut and then continued to bumble until it pretty much gave up on killing Facebook. In the Microsoft “space,” Google created its own office suite and rolled out its cloud service. These have not had a significant impact in the enterprise market, where the river of money flows for Microsoft and whatever it calls its allegedly monopolistic-inclined services. There are other examples of outright failure.
Now the Google is just spewing smart software products. This reminds me of a person who, shortly before dying, sees bright lights and watches the past flash before their eyes. Then the person dies. My view is that Google is having something like those near-death experiences. The person survives but knows exactly what death is.
Believe me, Google knows that the annoying competitors are more popular; to wit, Sam AI-Man and his ChatGPT, his vision for the “everything” app, and his rather clever deal with Telegram. To wit, Microsoft and its deals with most smart software companies, its software lock-in at the US Federal government, its boot camp deal with Palantir Technologies, and its mind-boggling array of ways to get access to word processing software.
Google has not proven it can deal with the confluence of regulators demanding money and lesser entities serving up products and services that capture headlines. Code Red and dozens of “new” products each infused with Gemini or whatever the name of the smart software is today is not a solution that returns Google to its glory days.
The patient is going through tough times. Googzilla may survive, but search is going to remain about finding on-point information. LLMs are a current approach that people like. By themselves, they will not kill Google or allow it to survive. Google is caught between the reality of meaningful regulatory action and innovators who are more agile.
Googzilla is old and spends some time looking for suitable elder care facilities.
Stephen E Arnold, June 9, 2025
Jobs for Humanoids: AI Output Checker Like a Digital Grocery Clerk
June 9, 2025
George at the Throwable Substack believes humans will forever have a place in software development. In the post, “What’s Next for Software,” the blogger argues that code maintenance will always rely on human judgement. This, he imagines, will balance out the code-creation jobs lost to AI. After all, humans will always be held liable for snafus. He writes:
“While engineers won’t be as responsible for producing code, they will be ultimately responsible for what that code does. A VP or CEO can blame an AI all they want when the system is down, but if the AI can’t solve the problem, it can’t solve the problem. And I don’t expect firing the AI will be very cathartic.”
Maybe not. But do executives value catharsis over saving money? We think they will find a way to cope. Perhaps a season pass to the opera. The post continues:
“It’s hard to imagine a future where humans aren’t the last line of defense for maintenance, debugging, incident response, etc. Paired with the above—that they’re vastly outnumbered by the quantity of services and features and more divorced from the code that’s running than ever before—being that last line of defense is a tall order.”
So tall it can never be assigned to AI? Do not bet on it. In a fast-moving, cost-driven environment, software will act more quickly than any human. Each human layer will be replaced as the technology improves. Sticking one’s head in the sand is not the way to prepare for that eventuality.
Cynthia Murrell, June 6, 2025
Lawyers Versus Lawyers: We Need a Spy Versus Spy Cartoon Now
June 5, 2025
Just the dinobaby operating without Copilot or its ilk.
Rupert Murdoch, a media tycoon with some alleged telephone intercept activity, owns a number of “real” news outfits. One of these published “What Is Big Tech Trying to Hide? Amazon, Apple, Google Are All Being Accused of Abusing Legal Privilege in Battles to Strip Away Their Power.” As a dinobaby in rural Kentucky, I have absolutely no idea if the information in the write up is spot on, close enough for horseshoes, or dead solid slam dunk in the information game.
What’s interesting is that the US legal system is getting quite a bit of coverage. Recently a judge in a flyover state found herself in handcuffs. Grousing about biased and unfair judges pops up in social media posts. One of my contacts in Manhattan told me that some judges have been receiving communications implying kinetic action.
Yep, lawyers.
Now the story about US big technology companies using the US legal system in a way that directly benefits these firms reveals “news” that I found mildly amusing. In rural Kentucky, when one gets in trouble or receives a call from law enforcement about a wayward sibling, the first action is to call one of the outstanding legal professionals who advertise in direct mail blasts and on the six pm news and who put memorable telephone numbers on the sides of the mostly empty buses that slowly prowl the pot-holed streets.
The purpose of the legal system is to get paid to represent the client. The client pays money or here in rural Kentucky a working pinball machine was accepted as payment by my former, deceased, and dearly beloved attorney. You get the idea: Pay money, get professional services. The understanding in my dealings with legal professionals is that the lawyers listen to their paying customers, discuss options among themselves or here in rural Kentucky with a horse in their barn, and formulate arguments to present their clients’ sides of cases or matters.
Obviously a person with money wants attorneys who [a] want compensation, [b] want to allow the client to prevail in a legal dust up, and [c] push back but come to accept their clients’ positions.
So now the Wall Street Journal reveals that the US legal system works in a transparent, predictable, and straightforward way.
My view of the legal problems the US technology firms face is that these innovative firms rode the wave their products and services created among millions of people. As a person who has been involved in successful start ups, I know how the surprise, thrill, and opportunities become the drivers of business decisions. Most of the high technology start ups fail. The survivors believe their intelligence, decision making, and charisma made success happen. That’s a cultural characteristic of what I call the Sillycon Valley way. (I know about this first hand because I lived in Berkeley and experienced the carnival ride of a technological winner.)
Without exposure to how technologies like “online” work, it was, and to some extent still is, difficult to comprehend the potential impacts of the shift from media anchored in non-digital ecosystems to the no-there-there hot house of a successful technology. Therefore, neither the “users” of the technology nor the regulators recognized the impact of consumerizing the most successful technologies, which were changing on a daily and sometimes hourly cadence. Even those involved at a fast-growing high technology company had no idea that the fizz of winning would override ethical and moral considerations.
Therefore:
- Not really news
- Standard operating procedure for big technology trials since the MSFT anti-trust matter
- The US ethical fabric plus the invincibility and super hero mindsets map the future of legal dust ups in my opinion.
Net net: Sigh. William James’s quantum energy is definitely not buzzing.
Stephen E Arnold, June 5, 2025
Can You Detox When Everyone Is Addicted to Online?
June 5, 2025
Digital detox has been a thing for a while, and it’s where you go off the grid. No phone. No computer. No Internet. The Internet and mobile devices are so ingrained in our consciousness that it’s a reflex to check for messages, social media, etc. Amanda Kooser at CNet went an entire day without the Internet and describes what happened in “24 Hours Without Internet: I Tried This Digital Detox and Thrived.”
Kooser set some ground rules to ensure her digital detox would be successful. She unplugged her Internet router to disable both WiFi and her wired connection. She enabled Focus Mode on all her devices to silence them.
She started her day by waking up with a non-phone alarm clock, read a book, then headed to work without the use of Google maps. She got lost but used good, old-fashioned directions to arrive at her destination. Kooser also watched TV with an antenna instead of streaming her shows. She learned that antenna TV sucks.
Here’s her overall opinion:
“The best part of having no internet for the day was the pause on micro-interruptions — all the little things that steal attention: neighborhood alerts, store sales and emails that need to be deleted. I enjoyed the quiet so much that I didn’t turn the T-Mobile 5G Home Internet gateway back on until Sunday morning, 36 hours after the digital detox experiment began. I’m working on being better about reaching for my phone for every little thing. Now that I’ve unlocked the full power of Focus Mode, I can put it into service. I can have my quiet moments on top of a mountain where the only alerts are the squirrels calling from the trees. I’ve already developed a sense of nostalgia for my internet-free day. It’s a rosy memory of fun times in the car listening to the classic rock station on the radio, not knowing if we would find our destination, not worrying that it even mattered.”
Now back to the question, “Can you detox when everyone is addicted to online?” Answer: Not easily and maybe not at all. Think of a fish in a fishbowl: can that creature stop looking out through its bowl?
Whitney Grace, June 5, 2025
A SundAI Special: Who Will Get RIFed? Answer: News Presenters for Sure
June 1, 2025
Just a dinobaby and some AI: How horrible an approach?
Why would “real” news outfits dump humanoids for AI-generated personalities? For my money, there are three good reasons:
- Cost reduction
- Cost reduction
- Cost reduction.
The bean counter has donned his Ivy League super smart financial accoutrements: Meta smart glasses, an OpenAI smart device, and an Apple iPhone with the vaunted AI inside (sorry, Intel, you missed this trend). Unfortunately the “good enough” approach, like gradient descent, does not deal in reality. Sum those near misses and what do you get? Dead organic things. The method applies to flora and fauna, including humanoids with automatable jobs. Thanks, You.com. You beat the pants off Venice.ai, which simply does not follow prompts. A perfect solution for some applications, right?
My hunch is that many people (humanoids) will disagree. The counter arguments are:
- Human quantum behavior; that is, flubbing lines, getting into on-air spats, and displaying annoyance while standing in a rainstorm saying, “The wind velocity is picking up.”
- The cost of recruitment, training, health care, vacations, and pension plans (ho ho ho)
- The management hassle of having to attend meetings to talk about decisions, become deciders, and, oh no, accept responsibility for those decisions.
I read “The ‘White-Collar Bloodbath’ Is All Part of the AI Hype Machine.” I am not sure how fear creates an appetite for smart software. The push for smart software boils down to generating revenues. To achieve revenues, one can create a new product or service like the iPhone or the original Google search advertising machine. But how often do those inventions toddle down the Information Highway? Not too often, because most of the innovative new new next big things are smashed by a Meta-type tractor trailer.
The write up explains that layoff fears are not operable in the CNN dataspace:
“If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality. Yet when tech CEOs do the same thing, people tend to perk up. ICYMI: The 42-year-old billionaire Dario Amodei, who runs the AI firm Anthropic, told Axios this week that the technology he and other companies are building could wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years, he said.”
First, the killing-jobs angle is probably easily understood and accepted by individuals responsible for “cost reduction.” Second, the ICYMI reference means “in case you missed it,” a bit of shorthand popular with those who are not yet 80-year-old dinobabies like me. Third, the source is a member of the AI leadership class. Listen up!
Several observations:
- AI hype is marketing. Money is at stake. Do stakeholders want their investments to sit mute and wait for the old “build it and they will come” pipedream to manifest?
- Smart software does not have to be perfect; it needs to be good enough. Once it is good enough, the cost reductionists take the stage and employees are ushered out of specific functions. One does not implement cost reductions at random. Consultants set priorities, develop scorecards, and make some charts with red numbers and arrows pointing up. Employees are expensive in general, so some work is needed to determine which can be replaced with good enough AI.
- News, journalism, and certain types of writing, along with customer “support” and some jobs suitable for automation, like reviewing financial data for anomalies, are likely to be among the first subject to a reduction in force or RIF.
So where does that leave the neutral observer? On one hand, the owners of the money dumpster fires are promoting like crazy. These wizards have to pull rabbit after rabbit out of a hat. How does that get handled? Think P.T. Barnum.
Some AI bean counters, CFOs, and financial advisors dream about dumpsters filled with money burning. This was supposed to be an icon, but Venice.ai happily ignores prompt instructions and includes fruit next to a burning something against a wooden wall. Perfect for the good enough approach to news, customer service, and MBA analyses.
On the other hand, you have the endangered species: the “real” news people and others in the knowledge business, or at least the automatable part of it. These folks are doing what they can to impede the hyperbole machine of the smart software people.
Who or what will win? Keep in mind that I am a dinobaby. I am going extinct, so smart software has zero impact on me other than making devices less predictable and resistant to my approach to “work.” Here’s what I see happening:
- Increasing unemployment for those lower on the “knowledge work” food chain. Sorry, junior MBAs at blue chip consulting firms. Make sure you have lots of money, influential parents, or a former partner at a prestigious firm as a mom or dad. Too bad for those studying to purvey “real” news. And junior college graduates working in customer support? Yikes.
- “Good enough” will replace excellence in work. This means that the air traffic controller situation is a glimpse of what deteriorating systems will deliver. Smart software will probably come to the rescue, but those antacid gobblers will be history.
- Increasing social discontent will manifest itself. To get a glimpse of the future, take an Uber from Cape Town to the airport. Check out the low income housing.
Net net: The cited write up is essentially anti-AI marketing. Good luck with that until people realize the current path is unlikely to deliver the pot of gold for most AI implementations. But cost reduction only has to show payoffs. Balance sheets do not reflect a healthy, functioning datasphere.
Stephen E Arnold, June 1, 2025