Win Big at the Stock Market: AI Can Predict What Humans Will Do

July 10, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

AI is hot. Click bait is hotter. And the hottest is AI figuring out what humans will do “next.” Think stock picking. Think pitching a company “known” to buy what you are selling. The applications of predictive smart software make intelligence professionals gaming the moves of an adversary quiver with joy.

“New ‘Mind-Reading’ AI Predicts What Humans Will Do Next, And It’s Shockingly Accurate” explains:

Researchers have developed an AI called Centaur that accurately predicts human behavior across virtually any psychological experiment. It even outperforms the specialized computer models scientists have been using for decades. Trained on data from more than 60,000 people making over 10 million decisions, Centaur captures the underlying patterns of how we think, learn, and make choices.

Since I believe everything I read on the Internet, smart software definitely can pull off this trick.

How does this work?

Rather than building from scratch, researchers took Meta’s Llama 3.1 language model (the same type powering ChatGPT) and gave it specialized training on human behavior. They used a technique that allows them to modify only a tiny fraction of the AI’s programming while keeping most of it unchanged. The entire training process took only five days on a high-end computer processor.
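The “tiny fraction” technique sounds like parameter-efficient fine-tuning, LoRA or one of its cousins. A minimal sketch of that general approach, assuming Hugging Face’s transformers and peft libraries, with the model name and every hyperparameter as illustrative guesses rather than details from the Centaur paper:

```python
# LoRA-style fine-tuning: freeze the base model, train small low-rank
# adapter matrices injected into a few attention projections.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

lora = LoraConfig(
    r=16,                                # rank of the adapter matrices
    lora_alpha=32,                       # scaling factor for adapter output
    target_modules=["q_proj", "v_proj"], # which layers receive adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Only the adapter weights move during training, which is why a few days on serious hardware is plausible for a job like this.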

Hmmm. The Zuck’s smart software. Isn’t Meta in the midst of playing catch up? The company is believed to be hiring OpenAI professionals and other wizards who can convert the “also in the race” to “winner” more quickly than one can say “billions of dollars spent on virtual reality.”

The write up covers more than predicting what a humanoid or a dinobaby will do next. It reports:

In a surprising discovery, Centaur’s internal workings had become more aligned with human brain activity, even though it was never explicitly trained to match neural data. When researchers compared the AI’s internal states to brain scans of people performing the same tasks, they found stronger correlations than with the original, untrained model. Learning to predict human behavior apparently forced the AI to develop internal representations that mirror how our brains actually process information. The AI essentially reverse-engineered aspects of human cognition just by studying our choices. The team also demonstrated how Centaur could accelerate scientific discovery.
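The alignment claim presumably boils down to correlating the model’s internal activations with brain-scan features recorded for the same trials. A toy version of that comparison, with invented data and a deliberately crude score, might look like this:

```python
# Toy representational-alignment check: z-score both data sets, then average
# the per-feature Pearson correlations. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
model_states = rng.normal(size=(100, 64))  # 100 trials x 64 hidden units
brain_scans = rng.normal(size=(100, 64))   # 100 trials x 64 voxel features

def alignment(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return float((a * b).mean())  # mean per-feature correlation

print(alignment(model_states, brain_scans))  # near zero for random data
```

The reported finding is that the fine-tuned model’s score against real neural data rose relative to the untrained baseline, without any training on brain scans.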

I am sold. Imagine. These researchers will be able to make profitable investments, know when to take an alternate path to a popular tourist attraction, and discover a drug that will cure male pattern baldness. Amazing.

My hunch is that predictive analytics hooked up to a semi-hallucinating large language model can produce outputs. Will these predict human behavior? Absolutely. Did the Centaur system predict that I would believe this? Absolutely. Was it hallucinating? Yep, poor Centaur.

Stephen E Arnold, July 10, 2025

Apple and Telegram: Victims of Their Strategic Hubris

July 9, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

What’s “strategic hubris”? I use this bound phrase to signal that an organization manifests decisions that combine big thinking with a destructive character flaw. Strategy is the word I use to capture the most important ideas to get an organization to generate revenue and win in its business and political battles. Now hubris. A culture of superiority may be the weird instinct of a founder; it may be marketing jingo that people start believing; or it may be jargon learned in school. When the two come together, some organizations can make expensive, often laughable, mistakes. Examples range from Microsoft’s Windows phone to the Ford Edsel.

I read “Apple Reaches Out to OpenAI, Anthropic to Build Out Siri Technology.” In my opinion, this illustrates strategic hubris operating on two pivot points like a merry-go-round: up and down; round and round.

The cited article states:

… over the past year or so it [Apple] has faced a variety of leadership and technological challenges developing Apple Intelligence, which is based on in-house foundation models. The more personalized Siri technology with more personalized AI-driven features is now due in 2026, according to a statement by Apple …

This “failure” is a result of strategic hubris. Apple’s leadership believed it could handle smart software. The company that taught China how to be a manufacturing superpower could surely learn and do AI. Apple’s leadership seems to have followed the marketing rule: Fire, Aim, Ready. Apple announced AI or Apple Intelligence and then failed to deliver. Then Apple reorganized and it failed again. Now Apple is looking at third party firms to provide the “intelligence” for Apple.

Personally I think smart software is good at some things and terrible at others. Nevertheless, a failure to provide or “do” smart software is the digital equivalent of having a teacher put a dunce cap on a kid’s head and making him sit in the back of the classroom. In the last 18 months, Apple has been playing fast and loose with court decisions, playing nice with China, and writing checks for assorted fines levied by courts. But the premier action has been the firm’s failure in the alleged “next big thing”.

Let me shift from Apple because there is a firm in the same boat as the king of Cupertino. Telegram has no smart software. Nikolai Durov is, according to Pavel (the task master), working on AI. However, like Apple, Telegram has been chatting up (allegedly) Elon Musk. The Grok AI system, some rumors have it, would / could / should be integrated into the Telegram platform. Telegram has the same strategic hubris I associate with Apple. (These are not the only two firms afflicted with this digital SARS variant.)

I want to identify several messages I extracted from the Apple and Telegram AI anecdotes:

  1. Both companies were doing other things when the smart software yachts left the docks in Half Moon Bay
  2. Both companies have the job of integrating another firm’s smart software into large, fast-moving companies with many moving parts, legal problems, and engineers who are definitely into “strategic hubris”
  3. Both companies have to deliver AI that does not alienate existing users while attracting new customers.

Will these firms be able to deliver a good enough AI solution? Probably. However, both may be vulnerable to third parties who hop on a merry-go-round. There is a predictable and actually not-so-smart pony named Apple and one named Messenger. The threat is that Apple and Telegram have been transmogrified into little wooden ponies. The smart people just ride them until the time is right to jump off.

That’s one scenario for companies with strategic hubris who missed the AI yachts when they were under construction and who were not on the expensive machines when they cast off. Can the costs of strategic hubris be recovered? The stakeholders hope so.

Stephen E Arnold, July 9, 2025

Humans May Be Important. Who Knew?

July 9, 2025

Here is an AI reality check. Futurism reports, “Companies that Replaced Humans with AI Are Realizing their Mistake.” You don’t say. Writer Joe Wilkins tells us:

“As of April, even the best AI agent could only finish 24 percent of the jobs assigned to it. Still, that didn’t stop business executives from swarming to the software like flies to roadside carrion, gutting entire departments worth of human workers to make way for their AI replacements. But as AI agents have yet to even pay for themselves — spilling their employer’s embarrassing secrets all the while — more and more executives are waking up to the sloppy reality of AI hype. A recent survey by the business analysis and consulting firm Gartner, for instance, found that out of 163 business executives, a full half said their plans to ‘significantly reduce their customer service workforce’ would be abandoned by 2027. This is forcing corporate PR spinsters to rewrite speeches about AI ‘transcending automation,’ instead leaning on phrases like ‘hybrid approach’ and ‘transitional challenges’ to describe the fact that they still need humans to run a workplace.”

Few workers would be surprised to learn AI is a disappointment. The write-up points to a report from GoTo and Workplace Intelligence that found 62% of employees say AI is significantly overhyped. Meanwhile, 45% of IT managers surveyed paint AI rollouts as scattered and hasty. Security concerns and integration challenges were the main barriers, 56% of them reported.

Anyone who has watched firm after firm make a U-turn on AI-related layoffs will not be surprised by these findings. For example, after cutting staff by 22% last year, finance startup Klarna announced a recruitment drive in May. Wilkins quotes tech critic Ed Zitron, who wrote in September:

“These ‘agents’ are branded to sound like intelligent lifeforms that can make intelligent decisions, but are really just trumped-up automations that require enterprise customers to invest time programming them.”

Companies wanted a silver bullet. Now they appear to be firing blanks.

Cynthia Murrell, July 9, 2025

We Have a Cheater Culture: Quite an Achievement

July 8, 2025

The annual lamentations about AI-enabled cheating have already commenced. Professor Elizabeth Wardle of Miami University would like to reframe that debate. In an opinion piece published at Cincinnati.com, she declares, “Students Aren’t Cheating Because they Have AI, but Because Colleges Are Broken.” Reasons they are broken, she writes, include factors like reduced funding and larger class sizes. Fundamentally, though, the problem lies in universities’ failure to sufficiently evolve.

Some suggest thwarting AI with a return to blue-book essays. Wardle, though, believes that would be a step backward. She notes early U.S. colleges were established before today’s specialized workforce existed. The handwritten assignments that served to train the wealthy, liberal-arts students of yesteryear no longer fit the bill. Instead, students need to understand how things work in the present and how to pivot with change. Yes, including a fluency with AI tools. Graduates must be “broadly literate,” the professor writes. She advises:

“Providing this kind of education requires rethinking higher education altogether. Educators must face our current moment by teaching the students in front of us and designing learning environments that meet the times. Students are not cheating because of AI. When they are cheating, it is because of the many ways that education is no longer working as it should. But students using AI to cheat have perhaps hastened a reckoning that has been a long time coming for higher ed.”

Who is to blame? For one, state legislatures. Many incentivize universities to churn out students with high grades in majors that match certain job titles. State funding, Wardle notes, is often tied to graduates hitting high salaries out of the gate. Her frustration is palpable as she asserts:

“Yes, graduates should be able to get jobs, but the jobs of the future are going to belong to well-rounded critical thinkers who can innovate and solve hard problems. Every column I read by tech CEOs says this very thing, yet state funding policies continue to reward colleges for being technical job factories.”

Professor Wardle is not all talk. In her role as Director of the Howe Center for Writing Excellence, she works with colleagues to update higher-learning instruction. One of their priorities has been how to integrate AI into curricula. She writes:

“The days when school was about regurgitating to prove we memorized something are over. Information is readily available; we don’t need to be able to memorize it. However, we do need to be able to assess it, think critically about it, and apply it. The education of tomorrow is about application and innovation.”

Indeed. But these urgent changes cannot be made as long as funding continues to dwindle. In fact, Wardle argues, we must once again funnel significant tax money into higher education. Believe it or not, that is something we used to do as a society. (She recommends Christopher Newfield’s book “The Great Mistake” to learn how and why free, publicly funded higher ed fell apart.) Yes, we suspect there will not be too much US innovation if universities are broken and stay that way. Where will that leave us?

Cynthia Murrell, July 8, 2025

Google Fireworks: No Boom, Just Ka-ching from the EU Regulators

July 7, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

The EU celebrates the 4th of July with a firecracker for the Google. No bang, just ka-ching, which is the sound of the cash register ringing … again. “Exclusive: Google’s AI Overviews Hit by EU Antitrust Complaint from Independent Publishers.” The trusted news source, which reminds me that it is trustworthy, reports:

Alphabet’s Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters. Google’s AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. It began adding advertisements to AI Overviews last May.

Will the fine alter the trajectory of the Google? Answer: Does a snowball survive a flyby of the sun?

Several observations:

  1. Google, like Microsoft, absolutely has to make its smart software investments pay off and pay off in a big way.
  2. The competition for AI talent makes fat, confused ducks candidates for becoming foie gras. Mr. Zuckerberg is going to buy the best ducks he can. Sports and Hollywood star compensation only works if the product pays off at the box office.
  3. Google’s “leadership” operates as if regulations from mere governments are annoyances, not rules to be obeyed.
  4. The products and services appear to be multiplying like rabbits. Confusion, not clarity, seems to be the consequence of decisions operating without a vision.

Is there an easy, quick way to make Google great again? My view is that the advertising model anchored to matching messages with queries is the problem. Ad revenue is likely to shift from many advertisers to blockbuster campaigns. Up the quotas of the sales team. However, the sales team may no longer be able to sell at a pace that copes with the cash burn for the alleged next big thing, super intelligence.

Reuters, the trusted outfit, says:

Google said numerous claims about traffic from search are often based on highly incomplete and skewed data.

Yep, highly incomplete and skewed data. The problem for Google is that we have a small tank of nasty cichlids. In case you don’t have ChatGPT at hand, a cichlid is a fish that will kill and eat its children. My cichlids have names: Chatty, Pilot girl, Miss Trall, and Dee Seeka. This means that when stressed or confined, our cichlids are going to become killers. What happens then?

Stephen E Arnold, July 7, 2025

Worthless College Degrees. Hey, Where Is Mine?

July 4, 2025

Smart software involved in the graphic, otherwise just an addled dinobaby.

This write up is not about going “beyond search.” Heck, search has just changed adjectives and remains mostly a frustrating and confusing experience for employees. I want to highlight the information (which I assume to be 100 percent dead accurate like other free data on the Internet) in the “17 Most Useless College Degrees Employers Don’t Want Today.” Okay, high school seniors, pay attention. According to the estimable Finance Buzz, do not study these subjects and — heaven forbid — expect to get a job when you graduate from an online school, the local college, or a big-time, big-bucks university. I have grouped the write up’s earthworm list into some categories; to wit:

Do gooder work

  • Criminal justice
  • Education (Who needs an education when there is YouTube?)

Entertainment

  • Fashion design
  • Film, video, and photographic arts
  • Music
  • Performing arts

Information

  • Advertising
  • Creative writing (like Finance Buzz research articles?)
  • Communications
  • Computer science
  • Languages (Emojis and some English are what is needed, I assume)

Real losers

  • Anthropology and archaeology (I thought these were different until Finance Buzz cleared up my confusion)
  • Exercise science
  • Religious studies

Waiting tables and working the midnight check in desk

  • Culinary arts (Fry cook until the robots arrive)
  • Hospitality (Smile and show people their table)
  • Tourism (Do not fall into the volcano)

Assume the write up is providing verifiable facts. (I know, I know, this is the era of alternative facts.) If we flash forward five years, the already stretched resources for law enforcement and education will be in an even smaller pickle barrel. Good for the bad actors and the people who don’t want to learn. Perhaps less beneficial to others in society. I assume that one can make TikTok-type videos and generate a really bigly income until the Googlers change the compensation rules or TikTok is banned from the US. With the world awash in information and open source software available, who needs to learn anything? AI will do this work. Who in the heck gets a job in archaeology when one can learn from UnchartedX and Brothers of the Serpent? Exercise science? Play football and get a contract when you are in middle school like talented kids in Brazil. And the cruise or specialty restaurant business? Those contracts are for six months for a reason. Plus cruise lines have started enforcing no-video rules on the staff who were trying to make day-in-my-life videos about the wonderful cruise ship environment. (Weren’t these vessels once called “prison ships”?) My hunch is that whoever assembled this stellar research at Finance Buzz was actually but indirectly writing about smart software and robots. These will decimate many jobs in the identified fields.

What should a person study? Nuclear physics, mathematics (applied and theoretical maybe), chemistry, biogenetics, materials science, modern financial management, law (aren’t there enough lawyers?), medicine, and psychology until the DRG codes are restricted.

Excellent way to get a job. And in what field was my degree? Medieval religious literature. Perfect for life-long employment as a dinobaby essayist.

Stephen E Arnold, July 4, 2025

Apple Fix: Just Buy Something That Mostly Works

July 4, 2025

No smart software involved. Just an addled dinobaby.

A year ago Apple announced AI, which means, of course, Apple Intelligence. Well, Apple Intelligence was “held back.” In 2025, the powerful innovation machine made the iPhone and Macs look a bit like the Windows see-through motif. Okay.

I read “Apple Reportedly Has a Secret Plan to Quickly Gain Ground in the AI Race.” I won’t point out that if information is circulating AND appears in an article, that information is not secret. It is public relations and marketing output. Second, forget the split infinitive. Since few recognize that datum is singular and data is plural or that the word none is singular, I won’t mention it. Obviously few “real” journalists care.

Now to the write up. In my opinion, the big secret revealed and analyzed is …

Sources report that the company is giving serious consideration to bidding for the startup Perplexity AI, which would allow it to transplant a chunk of expertise and ready-made technology into Apple Park and leapfrog many of the obstacles it currently faces. Perplexity runs an AI-powered search engine which can already perform the contextual tricks which Apple advertised ahead of the iPhone 16 launch but hasn’t yet managed to build into Siri.

Analysis of this “secret” is a bit underwhelming. Here’s the paragraph that is supposed to make sense of this non-secret secret:

Historically, Apple has been wary of large acquisitions, whereas rivals, such as Facebook (buying WhatsApp for $22 billion) and Google (acquiring cloud security platform Wiz for $32 billion), have spent big to scoop up companies. It could be a mark of how worried Apple is about the AI situation that it’s considering such a major and out-of-character move. But after a year of headaches and obstacles, it also could pay off in a big way.

Okay, but what about Google acquiring Motorola? What about Microsoft’s clever purchase of Nokia? And there are other examples. Big companies buying other companies can work out or fizzle. Where is Dodgeball now? Orkut?

The actual issue strikes me as Apple’s failure to recognize that smart software — whether it works particularly well or not — was a marketing pony to ride in the technical circus. Microsoft got the message, and it seems that the marketing play triggered Google. But the tie up seems to be under a bit of stress as of June 2025.

Another problem is that buying AI requires the purchaser to manage the operation, ensure continued innovation of an order slightly more demanding than imitating a Windows interface, and keep the wizard huskies hooked to the dog sled.

What seems to be taking place is a division of the smart software world into three sectors:

  1. Companies that “do” large language models; for example, Google, OpenAI, and others
  2. Companies that “wrap” large language models and spin out start ups that are presented as AI but are really interfaces (a sketch of this approach appears after this list)
  3. Companies that “integrate” or “glue on” AI to an existing service, platform, or system.
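To make the “wrap” category concrete, here is a minimal sketch of sector two, assuming the official OpenAI Python client; the model name and the contract-summary use case are invented for illustration:

```python
# Sector two in miniature: the "product" is a prompt template and a thin
# interface around someone else's large language model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_contract(text: str) -> str:
    """The entire hypothetical start up: one prompt around a rented model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[
            {"role": "system", "content": "You summarize legal contracts plainly."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

Sector three, the “glue on” play, is the same call buried inside an existing product’s workflow rather than sold as the product itself.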

Apple failed at number 1. It hasn’t invented anything in the AI world. (I think I learned about Siri in a Stanford Research Institute presentation many, many years ago. No, it did not work particularly well even in the demo.)

Apple is not too good at wrapping anything. Safari doesn’t wrap. Safari blazes its own weird trail, which is okay for those who love Apple software. For someone like me, it is annoying.

Apple has demonstrated that it could not “glue on” AI to Siri.

Okay, Apple has not scored a home run with approach one, two, or three.

Thus, the analysis, in my opinion, is that Apple, like some other outfits, now realizes smart software — whether or not it is 100 percent reliable — continues to generate buzz. The task for Apple, therefore, is to figure out how to convert whatever it does into buzz. One, skip the cost of invention. Two, sidestep wrapping AI and look for “partners” who do what department stores did in the 1950s: wrap my holiday gifts. And, three, try to make “glue on” work.

Net net: Will Apple undertake an auto-da-fé and see the light?

Stephen E Arnold, July 4, 2025

Read This Essay and Learn Why AI Can Do Programming

July 3, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

I found, entirely by accident since Web search does not work too well, an essay titled “Ticket-Driven Development: The Fastest Way to Go Nowhere.” I would have used a different title; for example, “Smart Software Can Do Faster and Cheaper Code” or “Skip Computer Science. Be a Plumber.” Despite my lack of good vibe coding from the essay’s title, I did like the information in the write up. The basic idea is that managers just want throughput. This is not news.

The most useful segment of the write up is this passage:

You don’t need a process revolution to fix this. You need permission to care again. Here’s what that looks like:

  • Leave the code a little better than you found it — even if no one asked you to.
  • Pair up occasionally, not because it’s mandated, but because it helps.
  • Ask why. Even if you already know the answer. Especially then.
  • Write the extra comment. Rename the method. Delete the dead file.
  • Treat the ticket as a boundary, not a blindfold.

Because the real job isn’t closing tickets; it’s building systems that work.

I wish to offer several observations:

  1. Repetitive, boring, mindless work is perfect for smart software
  2. Implementing dot points one to five will result in a reprimand, transfer to a salubrious location, or termination with extreme prejudice
  3. You will spend long hours with an AI version of an old-fashioned psychiatrist because you will go crazy.

After reading the essay, I realized that the managerial approach, the “ticket-driven workflow,” and the need for throughput apply to many jobs. Leadership no longer has middle managers who manage. When leadership intervenes, one gets [a] consultants or [b] knee-jerk decisions or mandates.

The crisis is in organizational set up and management. The developers? Sorry, you have been replaced. Say, “hello” to our version of smart software. Her name is No Kidding.

Stephen E Arnold, July 3, 2025

AI Management: Excellence in Distancing Decisions from Consequences

July 2, 2025

Smart software involved in the graphic, otherwise just an addled dinobaby.

This write up “Exclusive: Scale AI’s Spam, Security Woes Plagued the Company While Serving Google” raises two minor issues and one that is not called out in the headline or the subtitle:

$14 billion investment from Meta struggled to contain ‘spammy behavior’ from unqualified contributors as it trained Gemini.

Who can get excited about a workflow and editorial quality issue? What is “quality”? In one of my Google monographs I pointed out that Google used at one time a number of numerical recipes to figure out “quality.” Did that work? Well, it was good enough to help get the Yahoo-inspired Google advertising program off the ground. Then quality became like those good brownies from 1953: stuffed with ingredients no self-respecting Stanford computer science graduate would eat for lunch.

I believe some caution is required when trying to understand a very large and profitable company through the eyes of someone who is no longer working there. Nevertheless, the article presents a couple of interesting assertions and dodges what I consider the big issue.

Consider this statement in the article:

In a statement to Inc., Scale AI spokesperson Joe Osborne said: “This story is filled with so many inaccuracies, it’s hard to keep track. What these documents show, and what we explained to Inc ahead of publishing, is that we had clear safeguards in place to detect and remove spam before anything goes to customers.” [Editor’s Note: “this” means the rumor that Scale cut corners.]

The story is that a process included data that would screw up the neural network.

And the security issue? I noted this passage:

The [spam] episode raises the question of whether or not Google at one point had vital data muddied by workers who lacked the credentials required by the Bulba program. It also calls into question Scale AI’s security and vetting protocols.  “It was a mess. They had no authentication at the beginning,” says the former contributor. [Editor’s Note: Bulba means “Bard.”]

A person reading the article might conclude that Scale AI was a corner cutting outfit. I don’t know. But when big money starts to flow and more can be turned on, some companies just do what’s expedient. The signals in this Scale example are the pedal-to-the-metal approach to process and the indication that people knew bad data was getting pumped into Googzilla.

But what’s the big point that’s missing from the write up? In my opinion, Google management made a decision to rely on Scale. Then Google management distanced itself from the operation. In the good old days of US business, when blue-suited, informed middle managers pursued quality, some companies would have spotted the problems and ridden herd on the subcontractor.

Google did not do this in an effective manner.

Now Scale AI is beavering away for Meta which may be an unexpected win for the Google. Will Meta’s smart software begin to make recommendations like “glue your cheese on the pizza”? My personal view is that I now know why Google’s smart software has been more about public relations and marketing, not about delivering something that is crystal clear about its product line up, output reliability, and hallucinatory behaviors.

At least Google management can rely on Deepseek to revolutionize understanding the human genome. Will the company manage as effectively as its marketing department touts its achievements?

Stephen E Arnold, July 2, 2025

Microsoft and OpenAI: An Expensive Sitcom

July 1, 2025

No smart software involved. Just an addled dinobaby.

I remember how clever I thought the book title “Who Says Elephants Can’t Dance?: Leading a Great Enterprise Through Dramatic Change” was. I find the break dancing contest between Microsoft and OpenAI even more amusing. Bloomberg “real” news reported that Microsoft is “struggling” to sell its Copilot solutions. Why? Those Microsoft customers want OpenAI’s ChatGPT. That’s a hoot.

Computerworld adds more Monty Python twists to this side show. “Microsoft and OpenAI: Will They Opt for the Nuclear Option?” (I am not too keen on the use of the word “nuclear.” People bandy it about without understanding what the actual consequences of such an option would be. Please, do a bit of homework before suggesting that two enterprises are doing anything remotely similar.)

The estimable Computerworld reports:

Microsoft needs access to OpenAI technologies to keep its worldwide lead in AI and grow its valuation beyond its current more than $3.5 trillion. OpenAI needs Microsoft to sign a deal so the company can go public via an IPO. Without an IPO, the company isn’t likely to keep its highly valued AI researchers — they’ll probably be poached by companies willing to pay hundreds of millions of dollars for the talent.

The problem seems to be that Microsoft is trying to sell its version of smart software. The enterprise customers and even dinobabies like myself prefer the hallucinatory and unpredictable ChatGPT to the downright weirdness of Copilot in Notepad. The Computerworld story says:

Hovering over it all is an even bigger wildcard. Microsoft’s and OpenAI’s existing agreement dramatically curtails Microsoft’s rights to OpenAI technologies if the technologies reach what is called artificial general intelligence (AGI) — the point at which AI becomes capable of human reasoning. AGI wasn’t defined in that agreement. But Altman has said he believes AGI might be reached as early as this year.

People cannot agree over beach rights and school taxes. The smart software (which may remain without regulation for a decade) is a much bigger deal. The dollars at stake are huge. Most people do not know that a Board of Directors for a Fortune 1000 company will spend more time arguing about parking spaces than a $300 million acquisition. The reason? Most humans cannot conceive of the numbers of dollars associated with artificial intelligence. If the AI next big thing does not work, quite a few outfits are going to be selling snake oil from tables at flea markets.

Here’s the humorous twist from my vantage point. Microsoft itself kicked off the AI boom with its announcements a couple of years ago. Google, already wondering how it can keep the money gushing to pay the costs of simply being Google, short circuited and hit the switch for Code Red, Yellow, Orange, and probably the color only five people on earth have ever seen.

And what’s happened? The Google-spawned methods aren’t eliminating hallucinations. The OpenAI methods are not eliminating hallucinations. The improvements are more and more difficult to explain. Meanwhile start ups are doing interesting things with AI systems that are good enough for certain use cases. I particularly like consulting and investment firms using AI to get rid of MBAs.

The punch line for this joke is that OpenAI’s ChatGPT seems to have more brand deliciousness than Microsoft’s own offering. Microsoft linked with OpenAI, created its own “line of AI,” and now finds that the frisky money burner OpenAI is more popular and can just define artificial general intelligence to its liking and enjoy the philosophical discussions among AI experts and lawyers.

One cannot make this sequence up. Jack Benny’s radio scripts came close, but I think the Microsoft – OpenAI program is a prize winner.

Stephen E Arnold, July 1, 2025
