Just What You Want: Information about Footnotes

July 11, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

I am completing my 14th monograph. Some of these 150-page-plus documents became books. Examples include The Google Legacy, published in 2003 for a client and then as a public document in 2004 by Infonortics Ltd., a specialty publisher somewhere in England. Others were published by Panda Press in Sweden. Martin White and I published a book about enterprise search management, and I do not recall which outfit published it. When I started writing texts to accompany my lectures for ISS Telestrategies, the US National Cyber Crime events, and other specialized conferences, I decided to generate Adobe PDF files and make these “books” available to those in my classes and lectures. Dark Web Notebook and CyberOSINT were “self published.” Why? The commercial specialty publishers were going out of business or did not have a way to market the books I wrote. I wrote a couple of monographs about Japan’s investments in database technology in the early 1990s for the US Office of Technology Assessment, but I have lost track of these “books.”

When I read “Give Footnotes the Boot,” I thought about how I had handled “notes” in my long-form writings. For this blog, which is a collection of “notes” to myself given the appearance of an essay, I usually cite an article. I then add my preliminary thoughts about the write up, usually including a couple of the source document’s “interesting” statements. The blog, therefore, is an online notebook with 20,000-plus entries written for an audience of one: Me.

I noted that the cited “footnote” article says:

If the footnote markers are links, then the user can use the back button/gesture to return to the main content. But, even though this restores the previous scroll position, the user is still left with the challenge of finding their previous place in a wall of text [6]. We could try to solve that problem by dynamically pulling the content from the footnotes and displaying it in a popover. In some browsers (including yours) that will display like a tooltip, pointing directly back to the footnote marker. Thanks to modern web features, this can be done entirely without JavaScript [7]. But this is still shit! I see good, smart people, who’d always avoid using “click here” as link text, littering their articles with link texts such as 1, 7, and sometimes even 12. Not only is this as contextless as “click here”, it also provides the extra frustration of a tiny-weeny hit target. Update: Adrian Roselli pointed out that there are numerous bugs with accessibility tooling and superscript. And all this for what? To cargo-cult academia? Stop it! Stop it now! Footnotes are a shitty hack built on the limitations of printed media. It’s dumb to build on top of those limitations when they don’t exist on the web platform. So I ask you to break free of footnotes and do something better.

The essay omits one option; that is, just write as if the information in the chapter, book, or paragraph were common knowledge. The result is fewer footnotes.

I am giving this footnote-free approach a try in the book I am working on to accompany my lectures about Telegram for law enforcement, cyber attorneys, and intelligence professionals. I know that most people do not know that a specific quote I include from Pavel Durov originated from a Russian-language blog. However, citing the Russian blog, presenting the title of the blog post in Cyrillic, including the English translation, and adding comments like “no longer online” would be the appropriate way to let my reader know I did not make up Pavel’s statement about having more than 100 children.

I am assuming that every person on earth knows that Pavel thinks he is a superhuman and has the duty to spawn more Pavels.

How will this work out? My hunch is that my readers will use my Telegram Labyrinth monograph to get oriented to a service alleged to be a criminal enterprise by the French judiciary. If someone wants to know where one of my “facts” originates, I will go through my notes, including blog posts, for the link to the document I read. Will those sources be findable in 2025 when the book comes out? Probably not.

Online information is disappearing at an alarming rate. The search systems I use “disappear” content even though I have a PDF of the source document in my electronic file. Intermediaries go out of business or filters block access to content.

I like the ideas in Jake Archibald’s essay. I also like the academic rigor of footnotes. But for the Telegram Labyrinth, I am minimizing footnotes. I assume that every investigator, intelligence professional, and government lawyer will know about Telegram. Therefore, what’s in my new book is common knowledge. That means, “Sorry, Miss Dalton, Stevie is dumping 95 percent of the footnotes.” (I should footnote that Miss Dalton was one of my teachers who wanted footnotes in Modern Language Association style for every single thing her students wrote.) Nope. Blame Web rot, blame my laziness, blame the wild social media environment.

You will live and probably have some of that Telegram common knowledge refreshed; for example, the Telegram programming language Fift is like FORTH, only better. Get the pun? The Durovs have a sense of humor.

Stephen E Arnold, July 11, 2025

Apple and Telegram: Victims of Their Strategic Hubris

July 9, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

What’s “strategic hubris”? I use this bound phrase to signal that an organization manifests decisions that combine big thinking with a destructive character flaw. Strategy is the word I use to capture the most important ideas to get an organization to generate revenue and win in its business and political battles. Now hubris. A culture of superiority may be the weird instinct of a founder; it may be marketing lingo that people start believing; or it may be jargon learned in school. When the two come together, some organizations can make expensive, often laughable, mistakes. Examples range from Microsoft’s Windows phone to the Ford Edsel.

I read “Apple Reaches Out to OpenAI, Anthropic to Build Out Siri technology.” In my opinion, this illustrates strategic hubris operating on two pivot points like a merry-go-round: Up and down; round and round.

The cited article states:

… over the past year or so it [Apple] has faced a variety of leadership and technological challenges developing Apple Intelligence, which is based on in-house foundation models. The more personalized Siri technology with more personalized AI-driven features is now due in 2026, according to a statement by Apple …

This “failure” is a result of strategic hubris. Apple’s leadership believed it could handle smart software. The company that taught China how to be a manufacturing superpower assumed it could learn and do AI. Apple’s leadership seems to have followed the marketing rule: Fire, Aim, Ready. Apple announced AI, or Apple Intelligence, and then failed to deliver. Then Apple reorganized and failed again. Now Apple is looking at third-party firms to provide the “intelligence” for Apple.

Personally, I think smart software is good at some things and terrible at others. Nevertheless, a failure to provide or “do” smart software is the digital equivalent of having a teacher put a dunce cap on a kid’s head and make him sit in the back of the classroom. In the last 18 months, Apple has been playing fast and loose with court decisions, playing nice with China, and writing checks for assorted fines levied by courts. But the premier action has been the firm’s failure in the alleged “next big thing.”

Let me shift from Apple because there is a firm in the same boat as the king of Cupertino. Telegram has no smart software. Nikolai Durov is, according to Pavel (the task master), working on AI. However, like Apple, Telegram has (allegedly) been chatting up Elon Musk. The Grok AI system, some rumors have it, would / could / should be integrated into the Telegram platform. Telegram has the same strategic hubris I associate with Apple. (These are not the only two firms afflicted with this digital SARS variant.)

I want to identify several messages I extracted from the Apple and Telegram AI anecdotes:

  1. Both companies were doing other things when the smart software yachts left the docks in Half Moon Bay
  2. Both companies have the job of integrating another firm’s smart software into large, fast-moving companies with many moving parts, legal problems, and engineers who are definitely into “strategic hubris”
  3. Both companies have to deliver AI that attracts new customers without alienating existing users.

Will these firms be able to deliver a good enough AI solution? Probably. However, both may be vulnerable to third parties who hop on a merry-go-round. There is a predictable and actually not-so-smart pony named Apple and one named Messenger. The threat is that Apple and Telegram have been transmogrified into little wooden ponies. The smart people just ride them until the time is right to jump off.

That’s one scenario for companies with strategic hubris who missed the AI yachts when they were under construction and who were not on the expensive machines when they cast off. Can the costs of strategic hubris be recovered? The stakeholders hope so.

Stephen E Arnold, July 9, 2025

New Business Tactics from Google and Meta: Fear-Fueled Management

July 8, 2025

No smart software. Just a dinobaby and an old laptop.

I like to document new approaches to business rules or business truisms. Examples range from truisms like “targeting is effective” to “two objectives is no objectives.” Today, July 1, 2025, I spotted anecdotal evidence of two new “rules.” Both seem custom tailored to the GenX, GenY, GenZ, and GenAI approach to leadership. Let’s look at each briefly and then consider how effective these are likely to be.

The first example of new management thinking appears in “Google Embraces AI in the Classroom with New Gemini Tools for Educators, Chatbots for Students, and More.” The write up explains that Google has:

introduced more than 30 AI tools for educators, a version of the Gemini app built for education, expanded access to its collaborative video creation app Google Vids, and other tools for managed Chromebooks.

Forget the one-objective idea when it comes to products. Just roll out more than two dozen AI services. That will definitely catch the attention of grade school, middle school, high school, junior college, and university teachers in the US and elsewhere. I am not a teacher, but I know that when I attend neighborhood get-togethers, the teachers at these functions often ask me about smart software. From these interactions, very few understand that smart software comes in different “flavors.” AI is still a mostly unexplored innovation. But Google is chock full of smart people who certainly know how teachers can rush to two dozen new products and services in a jiffy.

The second rule is that organizations are hierarchical. Assuming this is the approach, one person should lead an organization, one person should lead a unit, one person should lead a department, and so on. This is the old Great Chain of Being slapped on an enterprise. My father worked in this type of company, and he liked it. He explained how work flowed from one box on the organization chart to another. With everything working the way my father liked things to work, bulldozers and mortars appeared on the loading docks. Since I grew up with this approach, it made sense to me. I must admit that I still find this type of set up appealing, and I am usually less than thrilled to work in a matrix-management, let’s-just-roll-with-it set up.

In “Nikita Bier, The Founder Of Gas And TBH, Who Once Asked Elon Musk To Hire Him As VP Of Product At Twitter, Has Joined X: ‘Never Give Up’” I learned that X is going with the two-bosses approach to smart software. The write up reports as real news as opposed to news release news:

On Monday, Bier announced on X that he’s officially taking the reins as head of product. "Ladies and gentlemen, I’ve officially posted my way to the top: I’m joining @X as Head of Product," Bier wrote.

Earlier in June 2025, Mark Zuckerberg pumped money into Scale AI (an indexing outfit) and hired Alexandr Wang to be the top dog of Meta’s catch-up-in-AI initiative. It appears that Meta is also going to give the two-bosses-are-better-than-one approach its stamp of management genius approval. OpenAI appeared to emulate this approach, and it seemed to have spawned a number of competitors and created an environment in which huge sums of money could attract AI wizards to Mr. Zuckerberg’s social castle.

The first new management precept is that an organization can generate revenue by shotgunning more than two dozen new products and services at what Google sees as the education market. The outmoded management approach would focus on one product or service, provide that to a segment of the education market with some money to spend and a problem to solve, and then figure out how to make that product more useful and grow paying customers in that segment. That’s obviously stupid and not GenAI. The modern approach is to blast that birdshot somewhere in the direction of a big fuzzy market and go pick up the dead ducks for dinner.

The second new management precept is to have an important unit, a sense of desperation born from failure, and put two people in charge. I think this can work, but in most of the successful outfits to which I have been exposed, there is one person at the top. He or she may be floating above the fray, but the idea is that someone, in theory, is in charge.

Several observations are warranted:

  1. The chaos approach to building a business has taken root and begun to flower at Google and Meta. Out with the old and in with the new. I am willing to wait and see what happens because when either success or failure arrives, the stories of VCs jumping from tall buildings or youthful managers buying big yachts will circulate.
  2. The innovations in management at Google and Meta suggest to me a bit of desperation. Both companies perceive that each is falling behind or in danger of losing. That perception may be accurate because if the AI payoff is not evident, Google and Meta may find themselves paddling up the river, not floating down the river.
  3. The two innovations viewed as discrete actions are expensive, risky, and illustrative of the failure of management at both firms. Employees, stakeholders, and users have a lot to win or lose.

I heard a talk by someone who predicted that traditional management consulting would be replaced by smart software. In the blue chip firm in which I worked years ago, management decisions like these would be guaranteed to translate to old-fashioned, human-based consulting projects.

In today’s world, decisions by “leadership” are unlikely to be remediated by smart software. Fixing up the messes will require individuals with experience, knowledge, and judgment.

As Julius Caesar allegedly said:

In summo periculo timor misericordiam non recipit.

This means something along the lines of, “In situations of danger, fear feels no pity.” These new management rules suggest that both Google’s and Meta’s “leadership” are indeed fearful and grandstanding in order to overcome those inner doubts. The decisions to go against conventional management methods seem obvious and logical to them. To others, perhaps the “two bosses” and “a blast of AI products and services” approaches are just ill advised or not informed?

Stephen E Arnold, July 8, 2025

We Have a Cheater Culture: Quite an Achievement

July 8, 2025

The annual lamentations about AI-enabled cheating have already commenced. Professor Elizabeth Wardle of Miami University would like to reframe that debate. In an opinion piece published at Cincinnati.com, she declares, “Students Aren’t Cheating Because they Have AI, but Because Colleges Are Broken.” Reasons they are broken, she writes, include factors like reduced funding and larger class sizes. Fundamentally, though, the problem lies in universities’ failure to sufficiently evolve.

Some suggest thwarting AI with a return to blue-book essays. Wardle, though, believes that would be a step backward. She notes early U.S. colleges were established before today’s specialized workforce existed. The handwritten assignments that served to train the wealthy, liberal-arts students of yesteryear no longer fit the bill. Instead, students need to understand how things work in the present and how to pivot with change. Yes, including a fluency with AI tools. Graduates must be “broadly literate,” the professor writes. She advises:

“Providing this kind of education requires rethinking higher education altogether. Educators must face our current moment by teaching the students in front of us and designing learning environments that meet the times. Students are not cheating because of AI. When they are cheating, it is because of the many ways that education is no longer working as it should. But students using AI to cheat have perhaps hastened a reckoning that has been a long time coming for higher ed.”

Who is to blame? For one, state legislatures. Many incentivize universities to churn out students with high grades in majors that match certain job titles. State funding, Wardle notes, is often tied to graduates hitting high salaries out of the gate. Her frustration is palpable as she asserts:

“Yes, graduates should be able to get jobs, but the jobs of the future are going to belong to well-rounded critical thinkers who can innovate and solve hard problems. Every column I read by tech CEOs says this very thing, yet state funding policies continue to reward colleges for being technical job factories.”

Professor Wardle is not all talk. In her role as Director of the Howe Center for Writing Excellence, she works with colleagues to update higher-learning instruction. One of their priorities has been how to integrate AI into curricula. She writes:

“The days when school was about regurgitating to prove we memorized something are over. Information is readily available; we don’t need to be able to memorize it. However, we do need to be able to assess it, think critically about it, and apply it. The education of tomorrow is about application and innovation.”

Indeed. But these urgent changes cannot be made as long as funding continues to dwindle. In fact, Wardle argues, we must once again funnel significant tax money into higher education. Believe it or not, that is something we used to do as a society. (She recommends Christopher Newfield’s book “The Great Mistake” to learn how and why free, publicly funded higher ed fell apart.) Yes, we suspect there will not be too much US innovation if universities are broken and stay that way. Where will that leave us?

Cynthia Murrell, July 8, 2025

Google Fireworks: No Boom, Just Ka-ching from the EU Regulators

July 7, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

The EU celebrates the 4th of July with a firecracker for the Google. No bang, just ka-ching, which is the sound of the cash register ringing … again. “Exclusive: Google’s AI Overviews Hit by EU Antitrust Complaint from Independent Publishers.” The trusted news source, which reminds me that it is trustworthy, reports:

Alphabet’s Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters. Google’s AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. It began adding advertisements to AI Overviews last May.

Will the fine alter the trajectory of the Google? Answer: Does a snowball survive a flyby of the sun?

Several observations:

  1. Google, like Microsoft, absolutely has to make its smart software investments pay off and pay off in a big way
  2. The competition for AI talent makes fat, confused ducks candidates for becoming foie gras. Mr. Zuckerberg is going to buy the best ducks he can. Sports and Hollywood star compensation only works if the product pays off at the box office.
  3. Google’s “leadership” operates as if regulations from mere governments are annoyances, not rules to be obeyed.
  4. The products and services appear to be multiplying like rabbits. Confusion, not clarity, seems to be the consequence of decisions operating without a vision.

Is there an easy, quick way to make Google great again? My view is that the advertising model anchored to matching messages with queries is the problem. Ad revenue is likely to shift from many advertisers to blockbuster campaigns. Up the quotas of the sales team. However, the sales team may no longer be able to sell at a pace that copes with the cash burn for the alleged next big thing, superintelligence.

Reuters, the trusted outfit, says:

Google said numerous claims about traffic from search are often based on highly incomplete and skewed data.

Yep, highly incomplete and skewed data. The problem for Google is that we have a small tank of nasty cichlids. In case you don’t have ChatGPT at hand, a cichlid is a fish that will kill and eat its children. My cichlids have names: Chatty, Pilot girl, Miss Trall, and Dee Seeka. This means that when stressed or confined, our cichlids are going to become killers. What happens then?

Stephen E Arnold, July 7, 2025

Apple Fix: Just Buy Something That Mostly Works

July 4, 2025

No smart software involved. Just an addled dinobaby.

A year ago Apple announced AI, which means, of course, Apple Intelligence. Well, Apple was “held back.” In 2025, the powerful innovation machine made the iPhone and Macs look a bit like the Windows see-through motif. Okay.

I read “Apple Reportedly Has a Secret Plan to Quickly Gain Ground in the AI Race.” I won’t point out that if information is circulating AND appears in an article, that information is not secret. It is public relations and marketing output. Second, forget the split infinitive. Since few recognize that datum is singular and data is plural or that the word none is singular, I won’t mention it. Obviously few “real” journalists care.

Now to the write up. In my opinion, the big secret revealed and analyzed is …

Sources report that the company is giving serious consideration to bidding for the startup Perplexity AI, which would allow it to transplant a chunk of expertise and ready-made technology into Apple Park and leapfrog many of the obstacles it currently faces. Perplexity runs an AI-powered search engine which can already perform the contextual tricks which Apple advertised ahead of the iPhone 16 launch but hasn’t yet managed to build into Siri.

Analysis of this “secret” is a bit underwhelming. Here’s the paragraph that is supposed to make sense of this non-secret secret:

Historically, Apple has been wary of large acquisitions, whereas rivals, such as Facebook (buying WhatsApp for $22 billion) and Google (acquiring cloud security platform Wiz for $32 billion), have spent big to scoop up companies. It could be a mark of how worried Apple is about the AI situation that it’s considering such a major and out-of-character move. But after a year of headaches and obstacles, it also could pay off in a big way.

Okay, but what about Google acquiring Motorola? What about Microsoft’s clever purchase of Nokia? And there are other examples. Big companies buying other companies can work out or fizzle. Where is Dodgeball now? Orkut?

The actual issue strikes me as Apple’s failure to recognize that smart software — whether it works particularly well or not — was a marketing pony to ride in the technical circus. Microsoft got the message, and it seems that the marketing play triggered Google. But the tie-up seems to be under a bit of stress as of June 2025.

Another problem is that buying AI requires that the purchaser manage the operation, ensure continued innovation of an order slightly more demanding than imitating a Windows interface, and get the wizard huskies to remain hooked to the dog sled.

What seems to be taking place is a division of the smart software world into three sectors:

  1. Companies that “do” large language models; for example, Google, OpenAI, and others
  2. Companies that “wrap” large language models and spawn start-ups that are presented as AI but are really interfaces (a sketch of this pattern appears below)
  3. Companies that “integrate” or “glue on” AI to an existing service, platform, or system.
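To make sector two concrete: a “wrapper” is often little more than a branded prompt sitting in front of somebody else’s model. Here is a minimal sketch of the pattern, assuming a generic chat-style HTTP endpoint; the URL, key, payload shape, and product name are placeholders for illustration, not any specific vendor’s API:

```python
import json
import urllib.request

# Hypothetical "AI start-up": a thin branded layer over a rented model.
API_URL = "https://api.example.com/v1/chat"  # placeholder endpoint
API_KEY = "sk-placeholder"                   # placeholder credential

def summarize_contract(text: str) -> str:
    """The entire 'product': a canned prompt plus one call to someone else's model."""
    payload = {
        "model": "some-foundation-model",  # the sector-one firm's asset, not ours
        "messages": [
            {"role": "system", "content": "You are LegalBrief, a contract summarizer."},
            {"role": "user", "content": f"Summarize the key obligations:\n{text}"},
        ],
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]
```

The asymmetry is the point: the wrapper owns a prompt and a brand; the sector-one firm owns the model, the pricing, and the terms of service.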

Apple failed at number one. It hasn’t invented anything in the AI world. (I think I learned about Siri in a Stanford Research Institute presentation many, many years ago. No, it did not work particularly well even in the demo.)

Apple is not too good at wrapping anything. Safari doesn’t wrap. Safari blazes its own weird trail, which is okay for those who love Apple software. For someone like me, it is annoying.

Apple has demonstrated that it could not “glue on” Siri.

Okay, Apple has not scored a home run with approach one, two, or three.

Thus, the analysis, in my opinion, is that Apple, like some other outfits, now realizes smart software — whether or not it is 100 percent reliable — continues to generate buzz. The task for Apple, therefore, is to figure out how to convert whatever it does into buzz. One, skip the cost of invention. Two, sidestep wrapping AI and look for “partners” who do what department stores did in the 1950s: Wrap my holiday gifts. And, three, try to make “glue on” work.

Net net: Will Apple undertake an auto-da-fé and see the light?

Stephen E Arnold, July 4, 2025

Read This Essay and Learn Why AI Can Do Programming

July 3, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

Entirely by accident, since Web search does not work too well, I found an essay titled “Ticket-Driven Development: The Fastest Way to Go Nowhere.” I would have used a different title; for example, “Smart Software Can Do Faster and Cheaper Code” or “Skip Computer Science. Be a Plumber.” Despite the absence of good vibes from the essay’s title, I did like the information in the write up. The basic idea is that managers just want throughput. This is not news.

The most useful segment of the write up is this passage:

You don’t need a process revolution to fix this. You need permission to care again. Here’s what that looks like:

  • Leave the code a little better than you found it — even if no one asked you to.
  • Pair up occasionally, not because it’s mandated, but because it helps.
  • Ask why. Even if you already know the answer. Especially then.
  • Write the extra comment. Rename the method. Delete the dead file.
  • Treat the ticket as a boundary, not a blindfold.

Because the real job isn’t closing tickets; it’s building systems that work.
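Dot point four above is the easiest one to picture. A trivial, hypothetical Python illustration of “leave the code a little better than you found it” (the names and the tax rate are invented, not from the cited essay):

```python
# Before: the "ticket closed" version. It works; the next reader pays the bill.
def proc(d):
    return [x["amt"] * 1.07 for x in d if x["st"] == "A"]

# After: identical behavior, left a little better than it was found.
SALES_TAX_MULTIPLIER = 1.07  # 7% tax; move to config if rates vary

def total_active_amounts_with_tax(line_items):
    """Return taxed amounts for line items whose status is active ("A")."""
    return [
        item["amt"] * SALES_TAX_MULTIPLIER
        for item in line_items
        if item["st"] == "A"
    ]
```

No ticket asked for the rename or the comment; that is exactly the essay’s point.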

I wish to offer several observations:

  1. Repetitive boring, mindless work is perfect for smart software
  2. Implementing dot points one to five will result in a reprimand, transfer to a salubrious location, or termination with extreme prejudice
  3. You will spend long hours with an AI version of an old-fashioned psychiatrist because you will go crazy.

After reading the essay, I realized that the managerial approach, the “ticket-driven workflow,” and the need for throughput apply to many jobs. Leadership no longer has middle managers who manage. When leadership intervenes, one gets [a] consultants or [b] knee-jerk decisions or mandates.

The crisis is in organizational set up and management. The developers? Sorry, you have been replaced. Say, “hello” to our version of smart software. Her name is No Kidding.

Stephen E Arnold, July 3, 2025

Microsoft and OpenAI: An Expensive Sitcom

July 1, 2025

No smart software involved. Just an addled dinobaby.

I remember how clever I thought the book title “Who Says Elephants Can’t Dance?: Leading a Great Enterprise Through Dramatic Change” was. I find the break dancing contest between Microsoft and OpenAI even more amusing. Bloomberg “real” news reported that Microsoft is “struggling” to sell its Copilot solutions. Why? Those Microsoft customers want OpenAI’s ChatGPT. That’s a hoot.

Computerworld adds more Monty Python twists to this side show in “Microsoft and OpenAI: Will They Opt for the Nuclear Option?” (I am not too keen on the use of the word “nuclear.” People bandy it about without understanding exactly what the actual consequences of such an option would be. Please, do a bit of homework before suggesting that two enterprises are doing anything remotely similar.)

The estimable Computerworld reports:

Microsoft needs access to OpenAI technologies to keep its worldwide lead in AI and grow its valuation beyond its current more than $3.5 trillion. OpenAI needs Microsoft to sign a deal so the company can go public via an IPO. Without an IPO, the company isn’t likely to keep its highly valued AI researchers — they’ll probably be poached by companies willing to pay hundreds of millions of dollars for the talent.

The problem seems to be that Microsoft is trying to sell its version of smart software. The enterprise customers and even dinobabies like myself prefer the hallucinatory and unpredictable ChatGPT to the downright weirdness of Copilot in Notepad. The Computerworld story says:

Hovering over it all is an even bigger wildcard. Microsoft’s and OpenAI’s existing agreement dramatically curtails Microsoft’s rights to OpenAI technologies if the technologies reach what is called artificial general intelligence (AGI) — the point at which AI becomes capable of human reasoning. AGI wasn’t defined in that agreement. But Altman has said he believes AGI might be reached as early as this year.

People cannot agree over beach rights and school taxes. The smart software (which may remain without regulation for a decade) is a much bigger deal. The dollars at stake are huge. Most people do not know that a Board of Directors for a Fortune 1000 company will spend more time arguing about parking spaces than a $300 million acquisition. The reason? Most humans cannot conceive of the numbers of dollars associated with artificial intelligence. If the AI next big thing does not work, quite a few outfits are going to be selling snake oil from tables at flea markets.

Here’s the humorous twist from my vantage point. Microsoft itself kicked off the AI boom with its announcements a couple of years ago. Google, already wondering how it can keep the money gushing to pay the costs of simply being Google, short-circuited and hit the switch for Code Red, Yellow, Orange, and probably the color only five people on earth have ever seen.

And what’s happened? The Google-spawned methods aren’t eliminating hallucinations. The OpenAI methods are not eliminating hallucinations. The improvements are more and more difficult to explain. Meanwhile, start-ups are doing interesting things with AI systems that are good enough for certain use cases. I particularly like consulting and investment firms using AI to get rid of MBAs.

The punch line for this joke is that OpenAI’s ChatGPT seems to have more brand deliciousness than the Microsoft version. Microsoft linked with OpenAI, created its own “line of AI,” and now finds that the frisky money burner OpenAI is more popular and can just define artificial general intelligence to its liking and enjoy the philosophical discussions among AI experts and lawyers.

One cannot make this sequence up. Jack Benny’s radio scripts came close, but I think the Microsoft-OpenAI program is a prize winner.

Stephen E Arnold, July 1, 2025

Publishing for Cash: What Is Here Is Bad. What Is Coming May Be Worse

July 1, 2025

Smart software involved in the graphic, otherwise just an addled dinobaby.

Shocker. Pew Research discovers that most “Americans” do not pay for news. Amazing. Is it possible that the Pew professionals were unaware of the reason newspapers, radio, and television included comic strips, horoscopes, sports scores, and popular music in their “real” news content? I read in the middle of 2025 the research report “Few Americans Pay for News When They Encounter Paywalls.” For a number of years I worked for a large publishing company in Manhattan. I also worked at a privately owned publishing company in flyover country.


The sky looks threatening. Is it clouds, locusts, or the specter of the new Dark Ages? Thanks, you.com. Good enough.

I learned several things. Please, keep in mind that I am a dinobaby and I have zero in common with GenX, Y, Z, or the horrific GenAI. The learnings:

  • Publishing companies spend time and money trying to figure out how to convert information into cash. This “problem” extended from the time I took my first real job in 1972 to yesterday when I received an email from a former publisher who is thinking about batteries as the future.
  • Information loses its value as it diffuses; that is, if I know something, I can generate money IF I can find the one person who recognizes the value of that information. For anyone else, the information is worthless and probably nonsense because that individual does not have the context to understand the “value” of an item of information.
  • Information has a tendency to diffuse. It is a bit like something with a very short half-life. Time makes information even more tricky. If the context changes exogenously, the information I have may be rendered valueless without warning.

So what’s the solution? Here are the answers I have encountered in my professional life:

  1. Convert the “information” into magic and the result of a secret process. This is popular in consulting, certain government entities, and banker types. Believe me, people love the incantations, the jargon talk, and the scent of spontaneous ozone creation.
  2. Talk about “ideals,” and deliver lowest common denominator content. The idea is that the comix and sports scores will “sell,” and the revenue can be used to pursue ideals. (I worked at an outfit like this, and I liked its simple, direct approach to money.)
  3. Make the information “exclusive” and charge a very few people a whole lot of money to access this “special” information. I am not going to explain how lobbying, insider talk, and trade show receptions facilitate this type of information wheeling and dealing. Just get a LexisNexis-type of account, run some queries, and check out the bill. The approach works for certain scientific and engineering information, financial data, and information people have no idea is available for big bucks.
  4. Embrace the “if it bleeds, it leads” approach. Believe me, this works. Look at YouTube thumbnails. The graphics and word choice make clear that sensationalism, titillation, and jazzification are the order of the day.

Now back to the Pew research. Here’s a passage I noted:

The survey also asked anyone who said they ever come across paywalls what they typically do first when that happens. Just 1% say they pay for access when they come across an article that requires payment. The most common reaction is that people seek the information somewhere else (53%). About a third (32%) say they typically give up on accessing the information.

Stop. That’s the key finding: one percent pay.

Let me suggest:

  1. Humans will take the easiest path; that is, they will accept what is output or what they hear from their “sources”
  2. Humans will take “facts” and glue them together to come up with more “facts.” Without context — that is, what used to be viewed as a traditional education and a commitment to lifelong learning — these people will lose the ability to think. Some like this result, of course.
  3. Humans face a sharper divide between the information “haves” and the information “have nots.”

Net net: The new dark ages are on the horizon. How’s that for a speculative conclusion from the Pew research?

Stephen E Arnold, July 1, 2025

Add On AI: Sounds Easy, But Maybe Just a Signal You Missed the Train

June 30, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

I know about Reddit. I don’t post to Reddit. I don’t read Reddit. I do know that like Apple, Microsoft, and Telegram, the company is not a pioneer in smart software. I think it is possible to bolt on Item Z to Product B. Apple pulled this off with the Mac and laser printer bundle. Result? Desktop publishing.

Can Reddit pull off a desktop publishing-type of home run? Reddit sure hopes it can (just like Apple, Microsoft, and Telegram, et al).

“At 20 Years Old, Reddit Is Defending Its Data and Fighting AI with AI” says:

Reddit isn’t just fending off AI. It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others’ web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week. Huffman has been pitching Reddit Answers as a best-of-both worlds tool, gluing together the simplicity of AI chatbots with Reddit’s corpus of commentary. He used the feature after seeing electronic music group Justice play recently in San Francisco.
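The description reads like a textbook retrieval-augmented generation setup: fetch the relevant threads first, have the model answer only from those threads, and hand back the source links. A minimal sketch of that pattern follows; every function, URL, and data shape here is a stand-in for illustration, not Reddit’s actual implementation:

```python
def search_posts(query: str, limit: int = 5) -> list[dict]:
    """Stand-in retriever; a real service would query a search index."""
    corpus = [
        {"url": "https://example.com/thread/1", "text": "Poster A: use a soft brush."},
        {"url": "https://example.com/thread/2", "text": "Poster B: never use water."},
    ]
    return corpus[:limit]

def call_model(prompt: str) -> str:
    """Stand-in for a hosted LLM call (e.g., an OpenAI- or Gemini-style API)."""
    return "Use a soft brush; avoid water."

def answer_from_corpus(question: str) -> dict:
    posts = search_posts(question)
    context = "\n\n".join(post["text"] for post in posts)
    prompt = (
        "Answer the question using ONLY the discussion excerpts below.\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )
    # Return the answer plus source links, mirroring the "redirects people
    # to the source conversations" behavior described in the quote.
    return {"answer": call_model(prompt), "sources": [post["url"] for post in posts]}

print(answer_from_corpus("How do I clean a camera lens?"))
```

The grounding-plus-citation step is what separates this from a general-purpose chatbot that summarizes the open Web.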

The question becomes, “Will users who think of smart software as ChatGPT be happy with a Reddit AI which is an add-on?”

Several observations:

  1. If Reddit wants to pull a Web3 walled-garden play, the company may have lost the ability to lock its gate.
  2. ChatGPT, according to my team, is what Microsoft Word and Outlook users want; what they get is Copilot. This is a mind share and perception problem the Softies have to figure out how to remediate.
  3. If the uptake of ChatGPT or something from the “glue cheese on pizza” outfit continues, Reddit may have to face a world similar to the one that shunned MySpace or Webvan.
  4. Reddit itself appears to be vulnerable to what I call content injection. The idea is that weaponized content like search engine optimization posts is posted (injected) to Reddit. The result is that AI systems suck in the content and “boost” the irrelevancy.

My hunch is that an outfit like Reddit may find that its users prefer asking ChatGPT or migrating to one of the new Telegram-type services now being coded in Silicon Valley.

Like Yahoo, the portal to the Internet in the 1990s, Reddit may not have a front page that pulls users. A broader comment is that what I call “add-on AI” may not work because the outfits with the core technology and market pull will exploit, bulldoze, and undermine outfits which are, at their core, getting pretty old. We need a new truism: “When AIs fight, only the stakeholders get trampled.”

The truth may be more painful: Smart AI outfits can cause less smart outfits with AI bolted on to lose their value and magnetism for their core constituencies. Is there a fix? Nope, there is a cat-and-mouse game in which the attacker has the advantage.

Stephen E Arnold, June 30, 2025
