Okay Business Strategy Experts: What Now for AI Innovation?

October 29, 2025

As AI forces its way into our lives, it requires us to shift our thinking in several areas. On his Substack, Charlie Graham examines how AI may render a key software strategy obsolete. He declares, “’Be Different’ Doesn’t Work for Building Products Anymore.” Personally, we believe coming up with something lots of people want or something rich people must absolutely have is the key to success. But it is also wise to develop something that distinguishes oneself from the competition. Or, at least, it was. Now that approach may be wasted effort. Graham writes:

“In the past, the best practice to win in a competitive market was to differentiate yourself – ‘be different,’ as Steve Jobs would say. But product differentiation is no longer effective in this new world.

  • Differentiate on an amazing UX? You used to rely on your awesome UX team for a sustainable advantage. Now, dozens of competitors can screenshot (or soon video) your flow and give it to an AI to reproduce quickly.
  • Differentiate by excelling at one feature? You might get a temporary lead, but it’s now pretty trivial for competitors to get close to your functionality.
  • Differentiate on business model? If it starts working, dozens of your recently started competitors will vibe-code a switch over.
  • Differentiate on ‘proprietary data’? This isn’t the key differentiator it was expected to be, as we are finding data can be simulated or companies can find similar-enough data to get 80% of the way there.

Instead we live in a red ocean where features are copied in days or weeks and everyone is fighting with similar products for the same scraps. So what does work?”

The post proposes several answers to that question. For example, those with large, proprietary distribution networks still have an advantage. Also, obscure, complex niches come with fewer competitors. So does taking on difficult or expensive product integrations. On the darker side, one could guard against customer loss by compounding data lock-in, making migration away as painful as possible. Then there is networking, a consistent necessity; social media and online marketplaces now fill that need. See the post for details on each of these points. What other truisms will AI force us to reconsider?

Cynthia Murrell, October 29, 2025

Woof! Innovation Is Doomed But Novel Gym Shoes Continue

October 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I have worked for commercial and government firms. My exposure ranges from the fun folks at Bell Labs and Bellcore to the less than forward leaning people at a canned fish outfit not far from MIT. Geography is important … sometimes. I have also worked on “innovation teams,” labored in a new product organization, and sat at the hand of the all-time expert of product innovation, Conrad Jones. Ah, you don’t know the name. That ices you out of some important information about innovation. Too bad.

I read “No Science, No Startups: The Innovation Engine We’re Switching Off.” The write up presents a reasonable and somewhat standard view of the “innovation process.” The basic idea is that there is an ecosystem which permits innovation. Think of a fish tank. In the water, we have fish, pet fish to be exact. We have a bubbler and a water feed. We even have toys in the fish tank. The owner of the fish tank is a hobbyist. The professional fish person might be an ichthyologist or a crew member on a North Sea fishing boat. The hobbyist buys live fish from the pet side of the fish business. The ichthyologist studies fish. The fishing boat crew member just hauls them in and enjoys every minute of the activity. Winter is particularly fun. I suppose I could point out other aspects of the fish game. How about fish oil? What about those Asian fish sauces? What about the perfume makers who promise that Ambroxan is just as good as ambergris? Then these outfits in Grasse buy whale stuff for their best concoctions.


Innovation never stops… with or without a research grant. It may not be AI, but it shows a certain type of thinking. Thanks, Venice.ai, good enough.

The fish business is complicated. Innovation, particularly in technology-centric endeavors, is more complex. The “No Science, No Startups” essay makes innovation simple. Is innovation really little more than science theorists, researchers, and engineers moving insights and knowledge through a disorganized and poorly understood series of activities?

Yes, it is like the fish business. One starts with a herring. Where one ends up can be quite surprising, maybe sufficiently baffling to cause people to say, “No way, José.” Here’s an example: Fish bladders used to remove impurities from wine. Eureka! An invention somewhere in the mists of time. That’s fish. Technology in general and digital technology in particular are more slippery. (Yep, a fish reference.)

The cited essay says the pipeline has several process containers filled with people. (Keep in mind that outfits like Google and DeepSeek want to replace humanoids with smart software. But let’s go with the humans matter approach for this blog post.)

  1. Scientists who find, gather, discover, or stumble upon completely new things. Remember this from grade school, “Come here, Mr. Watson.”
  2. Engineers who recycle, glue together, or apply insight X to problem Y and create something novel as product Z.
  3. MBA-inspired people look and listen (sort of) to what engineers say and experience a Eureka moment. Some moments lead to Pets.com. Others yield a Google-type novelty with help from a National Science Foundation grant. (Check out that PageRank patent.)

The premise is that if the scientific group does not have money, the engineers and the MBA-inspired people will have a tough time coming up with new products, services, applications, or innovations. Is a flawed self-driving system in the family car an innovation or an opportunity to dance with death?

Where is the cited essay going? It is heading toward doom for the US belief that the country is the innovation leader. That’s America’s manifest destiny. The essay says:

Cut U.S. funding, then science will happen in other countries that understand its relationship to making a nation great – like China. National power is derived from investments in Science. Reducing investment in basic and applied science makes America weak.

In general, I think the author of this “No Science, No Startups” essay is on a logical path. However, I am not sure the cited article’s analysis covers all the possibilities of innovation. Let’s go back to fish.

The fish business is complicated and global. The landscape of the fish business changes if an underwater volcano erupts near the fishing areas not too distant from Japan and China. The fish business can take a knock if some once benign microbe does the Darwin thing and rips through the world’s cod. What happens to fish if some countries’ fishing communities eat through the stock of tuna? What if a TikTok video convinces people not to eat fish or to wear articles of clothing fabricated of fish skin? (Yes, it is a thing.)

Innovation, particularly in technology, has as many if not more points of disruption. When disruptions, or to use blue chip consultant speak, exogenous events, occur, humanoids have a history of innovating. Vikings in the sixth century kept warm without lighting fires on their wooden boats made water tight with flammable pine tar. (Yep, like those wooden boat hull churches, the spectacle of a big time fire teaches a harsh lesson.)

If I am correct that discontinuities, disruptions, and events humans cannot control occur, here’s what I think about innovation, spending lots of money, and entrepreneurs.

  1. If Maxwell could innovate, so can theorists and scientists today. Does the government have to fund these people? Possibly, but mom might produce some cash, or the scientist has a side gig.
  2. Will individuals not now recognized as scientists, engineers, and entrepreneurs come up with novel products and services? The answer is, “Yes.” How do I know? Easy. Someone had to figure out how to make a wheel: No lab, no grants, no money, just a log and a need to move something. Eureka worked then and it will work again.
  3. Is technology itself the reason big bucks are needed? My view is yes. Each technological innovation seems to have a bigger price tag than the previous technological innovation. How much did Apple spend making a “new and innovative” orange iPhone? Answer: Too much. Why? Answer: To sell a fashion item. Is this innovation? Answer: Nope. It’s MBA think, and that, gentle reader, is infinitely innovative.

If I think about AI, I find myself gravitating to the AI turmoil at Apple and Meta. Money, smart people, and excuses. OpenAI is embracing racy outputs. That’s innovation at three big outfits. World-changing? Nope, stock and financial wobblies. That’s not how innovation is supposed to work, is it?

Net net: The US is definitely churning out wonky products, students who cannot read or calculate, and research that is bogus. The countries that educate, enforce standards, and put wandering young minds in schools and laboratories will generate new products and services. The difference is that these countries will advance in technological innovation. The countries that embrace lower standards and reduced funding for research and that glorify doom scrolling will become third-world outfits. What countries will be the winners in innovation? The answer is not the country that takes the lead in footwear made of fish skins.

Stephen E Arnold, October 23, 2025


Hey, No Gain without Pain. Very Googley

October 6, 2025

AI firms are forging ahead with their projects despite predictions, sometimes by their own leaders, that artificial intelligence could destroy humanity. Some citizens have had enough. The Telegraph reports, “Anti-AI Doom Prophets Launch Hunger Strike Outside Google.” The article points to hunger strikes at both Google DeepMind’s London headquarters and a separate protest in San Francisco. Writer Matthew Field observes:

“Tech leaders, including Sir Demis of DeepMind, have repeatedly stated that in the near future powerful AI tools could pose potential risks to mankind if misused or in the wrong hands. There are even fears in some circles that a self-improving, runaway superintelligence could choose to eliminate humanity of its own accord. Since the launch of ChatGPT in 2022, AI leaders have actively encouraged these fears. The DeepMind boss and Sam Altman, the founder of ChatGPT developer OpenAI, both signed a statement in 2023 warning that rogue AI could pose a ‘risk of extinction’. Yet they have simultaneously moved to invest hundreds of billions in new AI models, adding trillions of dollars to the value of their companies and prompting fears of a seismic tech bubble.”

Does this mean these tech leaders are actively courting death and destruction? Some believe so, including San Francisco hunger-striker Guido Reichstadter. He asserts simply, “In reality, they’re trying to kill you and your family.” He and his counterparts in London, Michaël Trazzi and Denys Sheremet, believe previous protests have not gone far enough. They are willing to endure hunger to bring attention to the issue.

But will AI really wipe us out? Experts are skeptical. However, there is no doubt that AI systems perpetuate some real harms: opaque biases, job losses, turbocharged cybercrime, mass surveillance, deepfakes, and damage to our critical thinking skills, to name a few. Perhaps those are the real issues that should inspire protests against AI firms.

Cynthia Murrell, October 6, 2025

Can Meta Buy AI Innovation and Functioning Demos?

September 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

That “move fast and break things” has done a bang-up job. Mark Zuckerberg, famed for making friends in Hawaii, demonstrated how “think and it becomes real” works in the real world. “Bad Luck for Zuckerberg: Why Meta Connect’s Live Demos Flopped” reported:

two of Meta’s live demos epically failed. (A third live demo took some time but eventually worked.)  During the event, CEO Mark Zuckerberg blamed it on the Wi-Fi connection.

Yep, blame the Wi-Fi. Bad Wi-Fi, not bad management or bad planning or bad prepping or bad decision making. No, it is bad Wi-Fi. Okay, I understand: A modern management method in action at Meta, Facebook, WhatsApp, and Instagram. Or, bad luck. No, bad Wi-Fi.


Thanks Venice.ai. You captured the baffled look on the innovator’s face when I asked Ron K., “Where did you get the idea for the hair dryer, the paper bag, and popcorn?”

Let’s think about another management decision. Navigate to the weirdly named write up “Meta Gave Millions to New AI Project Poaches, Now It Has a Problem.” That write up reports that Meta has paid some employees as much as $300 million to work on AI. The write up adds:

Such disparities appear to have unsettled longer-serving Meta staff. Employees were said to be lobbying for higher pay or transfers into the prized AI lab. One individual, despite receiving a grant worth millions, reportedly quit after concluding that newcomers were earning multiples more…

My recollection is that there is some research that suggests pay is important, but other factors enter into a decision to go to work for a particular organization. I left the blue chip consulting game decades ago, but I recall my boss (Dr. William P. Sommers) explaining to me that pay and innovation are hoped for but not guaranteed. I saw that first hand when I visited the firm’s research and development unit in a rust belt city.

This outfit was cranking out innovations still able to wow people. A good example is the hot air popcorn pumper. Let that puppy produce popcorn for a group of six-year-olds at a birthday party, and I know it will attract some attention.

Here’s the point of the story. The fellow who came up with the idea for this innovation was an engineer, but not a top dog at the time. His wife organized a birthday party for a dozen six and seven year olds to celebrate their daughter’s birthday. But just as the girls arrived, the wife had to leave for a family emergency. As his wife swept out the door, she said, “Find some way to keep them entertained.”

The hapless engineer looked at the group of young girls and his daughter asked, “Daddy, will you make some popcorn?” Stress overwhelmed the pragmatic engineer. He mumbled, “Okay.” He went into the kitchen and found the popcorn. Despite his engineering degree, he did not know where the popcorn pan was. The noise from the girls rose a notch.

He poked his head from the kitchen and said, “Open your gifts. Be there in a minute.”

Adrenaline pumping, he grabbed the bag of popcorn, took a brown paper sack from the counter, and dashed into the bathroom. He poked a hole in the paper bag. He dumped in a handful of popcorn. He stuck the nozzle of the hair dryer through the hole and turned it on. Ninety seconds later, the kernels began popping.

He went into the family room and said, “Let’s make popcorn in the kitchen.” He turned on the hair dryer and popped corn. The kids were enthralled. He let his daughter handle the hair dryer. The other kids scooped out the popcorn and added more kernels. Soon popcorn was everywhere.

The party was a success even though his wife was annoyed at the mess he and the girls made.

I asked the engineer, “Where did you get the idea to use a hair dryer and a paper bag?”

He looked at me and said, “I have no idea.”

That idea became a multi-million dollar product.

Money would not have caused the engineer to “innovate.”

Maybe Mr. Zuckerberg, once he has resolved his demo problems, will think about whether the assumption that paying a person to innovate ('just think it and it will happen') generates digital baloney?

Stephen E Arnold, September 22, 2025

Innovation Is Like Gerbil Breeding: It Is Tough to Produce a Panda

September 8, 2025

Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

The problem of innovation is a tough one. I remember getting a job from a top dog at the consulting firm silly enough to employ me. The task was to chase down the Forbes Magazine list of companies ordered by how much they spend on innovation. I recall that the goal was to create an “estimate” or what would be a “model” today of what a company of X size should be spending on “innovation.”

Do that today for an outfit like OpenAI or one of the other US efforts to deliver big money via the next big thing and the result is easy to express; namely, every available penny is spent trying to create something new. Yep, spend the cash innovating. Think it, and the “it” becomes real. Build “it,” and the “it” draws users with cash.

A recent and somewhat long essay plopped into my “Read file.” The article is titled “We’ve Lost the Plot with Smartphones.” (The write up requires signing up and / or paying for access.)

The main idea of the essay is that smartphones, once heralded as revolutionary devices for communication and convenience, have evolved into tools that undermine our attention and well-being. I agree. However, innovation may not fix the problem. In my view, the fix may be an interesting effort, but as long as there are gizmos, the status quo will return.

The essay suggests that the innovation arc of devices like the toaster or the mobile phone solves problems or adds obvious convenience for a user otherwise unfamiliar with the device. As Steve Jobs suggested, users have to see and use a device. Words alone don’t do the job. Pushing deck chairs around a technology yacht does not add much to the value of the device. This is the “me too” approach to innovation or what is often called “featuritis.”

Several observations:

  1. Innovations often arise without warning, no matter what process is used
  2. The US is supporting “old” businesses, and other countries are pushing applied AI, which may be a better bet
  3. Big money innovation usually surfs on months, years, or decades of previous work. Once that previous work is exhausted, the brutal odds of innovation success kick in. A few winners will emerge from many losers.

One of the oddities is the difficulty of identifying a significant or substantive innovation. That seems to be as difficult as setting up a system to generate innovation. In short, technology innovation reminds me of gerbils. Start with a few and quickly have lots of gerbils. The problem is that you have gerbils, and what you want is something different.

Good luck.

Stephen E Arnold, September 8, 2025

And the Problem for Enterprise AI Is … Essentially Unsolved

August 26, 2025

No AI. Just a dinobaby working the old-fashioned way.

I try not to let my blood pressure go up when I read “our system processes all your organization’s information.” Not only is this statement wildly incorrect, it is probably some combination of [a] illegal, [b] too expensive, and [c] too time consuming.

Nevertheless, vendors either repeat the mantra or imply it. When I talk with representatives of these firms, over time, fewer and fewer recognize the craziness of the assertion. Apparently the reality of trying to process documents related to a legal matter, medical information, salary data, government-mandated secrecy cloaks, data on a work-from-home contractor’s laptop which contains information about payoffs in a certain country to win a contract, and similar information is not part of this Fantasyland.

I read “Immature Data Strategies Threaten Enterprise AI Plans.” The write up is a hoot. The information is presented in a way to avoid describing certain ideas as insane or impossible. Let’s take a look at a couple of examples. I will in italics offer my interpretation of what the online publication is trying to coat with sugar and stick inside a Godiva chocolate.

Here’s the first snippet:

Even as senior decision-makers hold their data strategies in high regard, enterprises face a multitude of challenges. Nearly 90% of data pros reported difficulty with scaling and complexity, and more than 4 in 5 pointed to governance and compliance issues. Organizations also grapple with access and security risks, as well as data quality, trust and skills gaps.

My interpretation: Executives (particularly leadership types) perceive their organizations as more buttoned up than they are in reality. Ask another employee, and you will probably hear something like “overall we do very well.” The fact of the matter is that leadership and satisfied employees have zero clue about what is required to address a problem. Looking too closely is not a popular way to get that promotion or to keep the Board of Directors and stakeholders happy. When you have to identify an error, use a word like “governance” or “regulations.”

Here’s the second snippet:

To address the litany of obstacles, organizations are prioritizing data governance. More than half of those surveyed expect strengthened governance to significantly improve AI implementation, data quality and trust in business decisions.

My interpretation: Let’s talk about governance, not how poorly procurement is handled and the weird system problems that just persist. What is “governance”? Organizations are unsure how they continue to operate. The purpose of many organizations is — believe it or not — lost. Make money is the yardstick. Do what’s necessary to keep going. That’s why in certain organizations an employee from 30 years ago could return and go to a meeting. Why? No change. Same procedures, same thought processes, just different people. Incrementalism and momentum power the organization.

So what? Organizations are deciding to give AI a whirl or third parties are telling them to do AI. Guess what? Major change is difficult. Systems-related activities repeat the same cycle. Here’s one example: “We want to use Vendor X to create an enterprise knowledge base.” Then the time, cost, and risks are slowly explained. The project gets scaled back because there is neither the time, the money, the employee cooperation, nor the sign-off from totally addled attorneys needed to make organization-spanning knowledge available to smart software.

The pitch sounds great. It has for more than 60 years. It is still a difficult deliverable, but it is much easier to market today. Data strategies are one thing; reality is another.

Stephen E Arnold, August 26, 2025

Learning Is Hard Work: AI Is Not Part of My Game Plan

August 25, 2025

No AI. Just a dinobaby working the old-fashioned way.

Dinobaby here—a lifetime of unusual education packed into a single childhood. I kicked off in a traditional Illinois kindergarten, then traded finger painting for experimental learning at a “new-idea” grade school in Maryland after a family move near DC. Soon, Brazil called: I landed in Campinas, but with zero English spoken, I lasted a month. Fifth through seventh grade became a solo mission—Calvert Course worksheets, a jungle missionary who mailed my work to Baltimore, and eventually, after the tutor died, pure self-guided study from thousands of miles away. I aced my assignments, but no one in Maryland had any idea of my world. My Portuguese tutor mixed French and German with local lingo; ironically, her English rocketed while my Portuguese crawled.

Back in the States, I dove into “advanced” classes and spent a high school semester at the University of Illinois—mainly reading, testing, and reading. A scholarship sent me to Bradley, a few weeks removed from a basketball cheating inquiry. A professor hooked me on coding in the library, building Latin sermon indexes using the school’s IBM. That led to a Duquesne fellowship; then the University of Arkansas wanted me for their PhD program. But I returned to Illinois, wrote code for Milton texts instead of Latin under Arthur Barker’s mentorship, and gave talks that landed me a job offer. One conference center chat brought me to DC and into the nuclear division at Halliburton. That’s my wild educational ride.

Notice that it did not involve much traditional go-to-class activity. I have done okay despite my somewhat odd educational journey. Most important: No smart software.

Now why did I provide this bit of biographical trivia? I read “AI in the Classroom Is Important for Real-World Skills, College Professors Say.” I did not have access to “regular” school through grade school, high school, and college. I am not sure how many high school students took classes at the U of I when they were 15 years old, but that experience was not typical among my high school class.

I did start working with computers and software in 1962, but there wasn’t much smart software floating around then. The trick for me has been my ability to read quickly, recognize what’s important, and remember information. Again there was no AI. Today, as I finish my Telegram Labyrinth monograph, AI has not been of any importance. Most of the source material is in Russian language documents. The English information is not thoroughly indexed by Telegram or by the Web search engines. The LLM content suckers are not doing too much with information outside the English speaking world. Maybe China is pushing forward, but my tests with Chinese language Web search engines did not provide much, if any, information beyond what my team and I had already reviewed.

Obviously I don’t think AI is something that fits into my “real world skills.” The write up says:

“If integrated well, AI in the classroom can strengthen the fit between what students learn and what students will see in the workforce and world around them,” argued Victor Lee, associate professor at Stanford’s Graduate School of Education. GenAI companies are certainly doing their part to lure students into using their tools by offering new learning and essay-writing features. Google has gone so far as to offer Gemini free for one year, and OpenAI late last month introduced “Study Mode” to help students “work through problems step by step instead of just getting an answer,” the company said in a blog post.

Maybe.

My personal approach to learning involves libraries, for fee online databases, Web research, and more reading. I still take notes on 4×6 notecards just as I did when I was trying to index those Latin sermons. Once I process the “note”, I throw it away. I am lucky because once I read, write, and integrate the factoid into something I am writing — I remember the information.  I don’t use digital calendars. I don’t use integrated to do lists. I just do what has been old fashioned information acquisition work.

The computer is wonderful for writing, Web research, and cooking up PowerPoint pablum. But the idea of using a tool that generates incorrect information strikes me as plain crazy.

The write up says:

Longji Cuo, an associate professor at the University of Colorado, in Boulder, teaches a course on AI and machine learning to help mechanical engineering students learn to use the technology to solve real-world engineering problems. Cuo encourages students to use AI as an agent to help with teamwork, projects, coding, and presentations in class. “My expectation on the quality of the work is much higher,” Cuo said, adding that students need to “demonstrate creativity on the level of a senior-level doctoral student or equivalent.”

Maybe. I am not convinced. Engineering issues are cascading across current and new systems. AI doesn’t seem to stem the tide. What about AI cyber security? Yeah, it’s working great. What about coding assistants? Yeah, super. I just uninstalled another Microsoft Windows 11 update. This one can kill my data storage devices. Copilot? Yeah, wonderful.

The write up concludes with this assertion from an “expert”:

one day, AI agents will be able to work with students on their personalized education needs. “Rather than having one teacher for 30 students, you’ll have one AI agent personalized to each student that will guide them along.”

Learning is hard work. The silliness of computer-aided instruction, laptops, iPads, mobile phones, etc. makes one thing clear: learning is not easy. A human must focus, develop discipline, refine native talents, and demonstrate motivation, curiosity, and an ability to process information into something more useful than remembering the TikTok icon’s design.

I don’t buy this. I am glad I am old.

Stephen E Arnold, August 25, 2025

Airships and AI: A Similar Technology Challenge

August 14, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Vaclav Smil writes books about the environment and technology. In his 2023 work Invention and Innovation: A Brief History of Hype and Failure, he describes the ups and downs of some interesting technologies. I thought of this book when I read “A Best Case Scenario for AI?” The author is a wealthy person who has some involvement in the relaxing crypto currency world. The item appeared on X.com.

I noted a passage in the long X.com post; to wit:

… the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to accrete to a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news — that the Doomers did not expect.

Reasonable. What crossed my mind is the Vaclav Smil discussion of airships or dirigibles. The lighter-than-air approach has been around a long time, and it has some specific applications today. Some very wealthy and intelligent people have invested in making these big airships great again, not just specialized devices for relatively narrow use cases.

So what? The airship history spans the 18th, 19th, 20th, and 21st centuries. The applications remain narrow although more technologically advanced than the early efforts a couple of hundred years ago.

What if smart software is a dirigible type of innovation? The use cases may remain narrow. Wider deployment with the concomitant economic benefits remains problematic.

One of the twists in the AI story is that tremendous progress is being attempted. The innovations as they are rolled out are incremental improvements. Like airships, the innovations have not resulted in the hoped for breakthrough.

There are numerous predictions about the downsides of smart software. But what if AI is little more than a modern version of the dirigible? We have a remarkable range of technologies, but each next step is underwhelming. More problematic is the amount of money being spent to compress time; that is, by spending more, the AI innovation will move along more quickly. Perhaps that is not the case. Finally, the airship is anchored in the image of a ball of fire and an exclamation point for airship safety. Will there be a comparable moment for AI?

Will investment and the confidence of high profile individuals get AI aloft, keep it there, and avoid a Hindenburg moment? Much has been invested to drive AI forward and make it “the next big thing.” The goal is to generate money, substantial sums.

The X.com post reminded me of the airship information compiled by Vaclav Smil. I can’t shake the image. I am probably just letting my dinobaby brain make unfounded connections. But, what if….? We could ask Google and its self-shaming smart software. Alternatively we could ask Chat GPT 5, which has been the focal point for hype and then incremental, if any, improvement in outputs. We could ask Apple, Amazon, or Telegram. But what if…?

I think an apt figure of speech might be “pushing a string.”

Stephen E Arnold, August 14, 2025

Cannot Read? Students Cannot Imagine Either

August 8, 2025

Students are losing the ability to imagine and self-reflect on their own lives, says the HuffPost in the article: “I Asked My Students To Write An Essay About Their Lives. The Reason 1 Student Began To Panic Left Me Stunned.” While Millennials were the first generation to be completely engrossed in the Internet, Generation Z is the first generation to have never lived without screens. Because of the Internet’s constant presence, kids have unfortunately developed bad habits where they zone out and don’t think.

Zen masters work for years to shut off their brains, but Gen Z can do it automatically with a screen. This is a horrible thing for critical thinking skills and imagination, because these kids don’t know how to think without the assistance of AI. The article’s writer, Liz Rose Shulman, teaches high school and college students. She assigns them essays, and without hesitation her students rely on AI to complete the assignments.

The students either use Grammarly to help them write everything or they rely on ChatGPT to generate an essay. The overreliance on AI tools means they don’t know how to use their brains. They’re unfamiliar with the standard writing process, problem solving, and being creative. The kids don’t believe there’s a problem using AI. Many teachers also believe the same thing and are adopting it into their curriculums.

The students are flummoxed when they’re asked to write about themselves:

I assigned a writing prompt a few weeks ago that asked my students to reflect on a time when someone believed in them or when they believed in someone else.

One of my students began to panic.

‘I have to ask Google the prompt to get some ideas if I can’t just use AI,’ she pleaded and then began typing into the search box on her screen, ‘A time when someone believed in you.’ ‘It’s about you,’ I told her. ‘You’ve got your life experiences inside of your own mind.’ It hadn’t occurred to her — even with my gentle reminder — to look within her own imagination to generate ideas. One of the reasons why I assigned the prompt is because learning to think for herself now, in high school, will help her build confidence and think through more complicated problems as she gets older — even when she’s no longer in a classroom situation.”

What’s even worse is that kids are addicted to their screens and they lack basic communication skills. Every generation goes through issues with older generations. Society will adapt and survive, but let’s start teaching how to think and imagine again! Maybe if they brought back recess and enforced time without screens, that would help, even for older people.

Whitney Grace, August 8, 2025

Yahoo: An Important Historical Milestone

August 5, 2025

Sorry, no smart software involved. A dinobaby’s own emergent thoughts.

I read “What Went Wrong for Yahoo.” At one time, my team and I followed Yahoo. We created The Point (Top 5% of the Internet) in the early 1990s. Some perceived The Point as a Yahoo variant. I suppose it was, but we sold the property after a few years. The new owners, something called CMGI, folded The Point into Lycos, and — poof — The Point was gone.

But Yahoo chugged along. The company became the poster child for the Web 1 era. Web search was not comprehensive, and most of the “search engines” struggled to deal with several thorny issues:

  1. New sites were flooding the Web 1 Internet. Indexing was a bottleneck. In the good old days, one did not spin up a virtual machine with a low cost vendor in Romania. Machines and gizmos were expensive, and often there was a delay of six months or more for a Sun Microsystems Sparc. Did I mention expensive? Everyone in search was chasing low cost computer and network access.
  2. The search-and-retrieval tools were in “to be” mode. If one were familiar with IBM Almaden, a research group there was working on a system called Clever. There were interesting techniques in many companies. Some popped up and faded. I am not sure of the dates, but there was Lycos, which I mentioned, Excite, and one developed by the person who created Framemaker, among others. (I am insufficiently motivated to chase down my historical files, and I sure don’t want to fool around trying to get historical information from Bing, Google, Yandex, and, heaven help me, Qwant.) The ideas were around, but it took time for the digital DNA to create a system that mostly worked. I wish I could remember the system that emerged from Cambridge University, but I cannot.
  3. Old-fashioned search methods like those used by NASA Recon, SDC Orbit, Dialog, and STAIRS were developed to work on bounded content, precisely structured, indexed or “tagged” in today’s jargon, and coded for mainframes. Figuring out how to run these systems on smaller machines was not possible. In my lectures from that era, I pointed out that once something is coded, sort of works, and seems to be making money, change is not conceivable. Therefore, the systems with something that worked sailed along like aircraft carriers until they rusted and sank. (A rough sketch of this bounded, tagged indexing appears right after this list.)
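
To make the “bounded content” point concrete, here is a minimal sketch (in Python, with invented records and tags, not any vendor’s actual system) of the kind of indexing those old services could rely on: a fixed set of documents, each carrying human-assigned index terms, loaded into a simple inverted index.

```python
# Minimal sketch: indexing a bounded, human-tagged collection.
# Records, ids, and tags are invented for illustration only.
from collections import defaultdict

records = [
    {"id": "DOC-001", "tags": ["petroleum", "drilling", "nuclear services"]},
    {"id": "DOC-002", "tags": ["pharmaceuticals", "clinical trials"]},
    {"id": "DOC-003", "tags": ["drilling", "offshore platforms"]},
]

# Inverted index: controlled-vocabulary tag -> set of document ids.
index = defaultdict(set)
for record in records:
    for tag in record["tags"]:
        index[tag].add(record["id"])

def search(tag: str) -> set:
    """Exact-match lookup against the controlled vocabulary."""
    return index.get(tag, set())

print(search("drilling"))  # {'DOC-001', 'DOC-003'}
```

Because the collection is fixed and the tags are assigned by people, the index stays small and predictable. The open Web offers neither property, which is the problem Yahoo and everyone else ran into.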

What’s this got to do with Yahoo?

Yahoo was a directory. Directories are good because the content is bounded. Yahoo did not exercise significant editorial control. The Point, on the other hand, was curated like the commercial databases with which I was associated: ABI/INFORM, Business Dateline (the first online information service which corrected erroneous information after a content object went live), Pharmaceutical News Index, and some others we sold to Cambridge Scientific Abstracts.

Indexing the Web is not bounded. Yahoo tried to come up with a way to index what was a large amount of digital content. Adding to Yahoo’s woes was the need to index changed content, or the “deltas” as we called them in our attempt at The Point to sound techno-literate.
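
To make the “deltas” idea concrete, here is one generic way to decide whether a page has changed and needs re-indexing: keep a hash of the last version crawled and compare it on the next visit. This is a minimal sketch, not Yahoo’s or The Point’s actual method; the URL and storage are invented.

```python
# Generic sketch of "delta" detection by content hashing.
# Not any particular portal's method; the URL and store are invented.
import hashlib

last_seen = {}  # url -> hash of the content last indexed

def needs_reindex(url: str, page_text: str) -> bool:
    """Return True when fetched content differs from the version last indexed."""
    digest = hashlib.sha256(page_text.encode("utf-8")).hexdigest()
    if last_seen.get(url) == digest:
        return False  # unchanged; skip the expensive indexing pass
    last_seen[url] = digest
    return True

print(needs_reindex("http://example.com/", "Welcome to my 1996 home page"))  # True
print(needs_reindex("http://example.com/", "Welcome to my 1996 home page"))  # False
```

Even this simple bookkeeping gets expensive when the collection is unbounded and the machines are scarce, which was exactly Yahoo’s problem.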

Because of the cost and revenue problems, decisions at Yahoo — according to the people whom we knew and with whom we spoke — went like this:

  1. Assemble a group with different expertise
  2. State the question, “What can we do now to make money?”
  3. Gather ideas
  4. Hold a meeting to select one or two
  5. Act on the “best ideas”

The flaw in this method is that a couple of smart fellows in a Stanford dorm were fooling around with Backrub. It incorporated ideas from their lectures, what they picked up about new ideas from students, and what they read (no ChatGPT then, sorry).

I am not going to explain what Backrub at first did not do (work reliably despite the weird assemblage of computers and gear the students used). Instead, I will focus on the three ideas that did work for what became Google, a pun on a big number’s name:

  1. Hook mongrel computers to indexing when those computers were available and use anything that remotely seemed to solve a problem. Is that an old router? Cool, let’s use that. This was a very big idea because fooling around with computer systems could kill those puppies with an errant instruction.
  2. Find inspiration in the IBM Clever system; that is, determine relevance by looking at links to a source document. This was a variation on Gene Garfield’s approach to citation analysis. (A minimal sketch of this idea appears after the list.)
  3. Index new Web pages when they appeared. If the crawler / indexer crashed, skip the page and go to the next url. The dorm boys looked at the sites that killed the crawler and figured out how to accommodate those changes; thus, the crawler / indexer became “smart.” This was a very good idea because commercial content indexing systems forced content to be structured a certain way. However, in the Web 1 days, rules were either nonexistent or ignored, or they created problems that creators of Web pages wrote around.
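
As a rough illustration of the second idea, here is a minimal link-analysis sketch in the citation-analysis spirit: a page’s score depends on the scores of the pages that point to it, computed iteratively. This is a textbook simplification, not the actual Backrub or PageRank code, and the three-page link graph is invented.

```python
# Minimal sketch of link-based relevance in the citation-analysis spirit.
# A textbook simplification, not the actual Backrub / PageRank implementation.
links = {  # page -> pages it links to (invented graph)
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
}

def link_scores(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """Spread score along links; heavily cited pages accumulate more of it."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += damping * score[page] / len(outlinks)
        score = new
    return score

for page, s in sorted(link_scores(links).items(), key=lambda kv: -kv[1]):
    print(page, round(s, 3))
```

The commercially important detail is that the signal comes from the structure of the Web itself, so no directory editors or controlled vocabulary are needed to rank results.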

Yahoo did none of these things.

Now let me point out Yahoo’s biggest mistake, and, believe me, the company is an ideal source of insight about what not to do.

Yahoo acquired GoTo.com. The company and software emerged from IdeaLab, I think. What GoTo.com created was an online version of a pay-to-play method. The idea was a great one and obvious to those better suited to be the love child of Cardinal Richelieu and Cosimo de’ Medici. To keep the timeline straight, Sergey Brin and Larry Page did the deed and birthed Google with the GoTo.com (Overture)  model to create Google’s ad foundation. Why did Google need money? The person who wrote a check to the Backrub boys said, “You need to earn money.” The Backrub boys looked around and spotted the GoTo method, now owned by Yahoo. The Backrub boys emulated it.
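
As I understand the pay-to-play mechanism (a sketch of the general pay-for-placement idea, not Overture’s or Google’s actual auction code), advertisers bid on a query term, listings run in bid order, and an advertiser pays only when a searcher clicks. The terms and bids below are invented.

```python
# Rough sketch of pay-for-placement: order ads by bid, charge per click.
# Invented advertisers and bids; not Overture's or Google's actual logic.
bids = {  # query term -> {advertiser: bid in dollars per click}
    "flowers": {"RoseHut": 0.45, "BloomNow": 0.60, "PetalPro": 0.30},
}

def ranked_listings(term: str) -> list:
    """Return (advertiser, bid) pairs for a term, highest bid first."""
    return sorted(bids.get(term, {}).items(), key=lambda kv: -kv[1])

def charge_for_click(term: str, advertiser: str) -> float:
    """In the simple first-price version, a click costs the advertiser its own bid."""
    return bids[term][advertiser]

print(ranked_listings("flowers"))               # BloomNow leads at $0.60
print(charge_for_click("flowers", "BloomNow"))  # 0.6
```

Google’s later refinements, such as quality signals and second-price style charging, sit on top of this basic ranking; the money engine the Backrub boys emulated starts here.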

Yahoo, poor old confused Yahoo, took legal action against the Backrub boys, settled for $1 billion, and became increasingly irrelevant. Therefore, Yahoo’s biggest opportunity was to buy the Backrub boys and their Google search system, but they did not. Then Yahoo allowed their GoTo to inspire Google advertising.

From my point of view, Cardinal Richelieu and Cosimo were quite proud that the two computer science students, some of the dorm crowd, and bits and pieces glued together to create Google search emerged as a very big winner.

Yahoo’s problem is that committee think in a fast changing, high technology context is likely to be laughably wrong. Now Google is Yahoo-like. The cited article nails it:

Buying everything in sight clearly isn’t the best business strategy. But if indiscriminately buying everything in sight would have meant acquiring Google and Facebook, Yahoo might have been better off doing that rather than what it did.

Can Google think like the Backrub boys? I don’t think so. The company is spinning money, but the cash that burnishes Google leadership’s image comes from the Yahoo, GoTo.com, and Overture model. Yahoo had so many properties, the Yahooligans had zero idea how to identify a property with value and drive that idea forward. How many “things” does Google operate now? How many things does Facebook operate now? How many things does Telegram operate now? I think that “too many” may hold a clue to the future of these companies. And Yahooooo? An echo, not the yodel.

Stephen E Arnold, August 5, 2025
