Just What You Want: Information about Footnotes
July 11, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I am completing my 14th monograph. Some of these 150-plus-page documents became books. One example is The Google Legacy, published in 2003 for a client and then as a public document in 2004 by Infonortics Ltd., a specialty publisher somewhere in England. Others were published by Panda Press in Sweden. Martin White and I published a book about enterprise search management, and I do not recall what outfit published the book. When I started writing texts to accompany my lectures for ISS Telestrategies, the US National Cyber Crime events, and other specialized conferences, I decided to generate Adobe PDF files and make these “books” available to those in my classes and lectures. Dark Web Notebook and CyberOSINT were “self published.” Why? The commercial specialty publishers were going out of business or did not have a way to market the books I wrote. I wrote a couple of monographs about Japan’s investments in database technology in the early 1990s for the US Office of Technology Assessment, but I have lost track of these “books.”
When I read “Give Footnotes the Boot,” I thought about how I had handled “notes” in my long form writings. For this blog, which is a collection of “notes” to myself given the appearance of an essay, I usually cite an article. I then add my preliminary thoughts about the write up, usually including a couple of the source document’s “interesting” statements. The blog, therefore, is an online notebook with 20,000-plus entries written for an audience of one: Me.
I noted that the cited “footnote” article says:
If the footnote markers are links, then the user can use the back button/gesture to return to the main content. But, even though this restores the previous scroll position, the user is still left with the challenge of finding their previous place in a wall of text. We could try to solve that problem by dynamically pulling the content from the footnotes and displaying it in a popover. In some browsers (including yours) that will display like a tooltip, pointing directly back to the footnote marker. Thanks to modern web features, this can be done entirely without JavaScript. But this is still shit! I see good, smart people, who’d always avoid using “click here” as link text, littering their articles with link texts such as 1, 7, and sometimes even 12. Not only is this as contextless as “click here”, it also provides the extra frustration of a tiny-weeny hit target. Update: Adrian Roselli pointed out that there are numerous bugs with accessibility tooling and superscript. And all this for what? To cargo-cult academia? Stop it! Stop it now! Footnotes are a shitty hack built on the limitations of printed media. It’s dumb to build on top of those limitations when they don’t exist on the web platform. So I ask you to break free of footnotes and do something better.
The essay omits one option; that is, just write as if the information in the chapter, book, or paragraph is common knowledge. The result is fewer footnotes.
I am giving this footnote-free approach a try in the book I am working on to accompany my lectures about Telegram for law enforcement, cyber attorneys, and intelligence professionals. I know that most people do not know that a specific quote I include from Pavel Durov originated from a Russian-language blog. However, citing the Russian blog, presenting the title of the blog post in Cyrillic, including the English translation, and adding comments like “no longer online” would be the appropriate way to let my reader know I did not make up Pavel’s statement about having more than 100 children.
I am assuming that every person on earth knows that Pavel thinks he is a super human and has the duty to spawn more Pavels.
How will this work out? My hunch is that my readers will use my Telegram Labyrinth monograph to get oriented to a service alleged to be a criminal enterprise by the French judiciary. If someone wants to know where one of my “facts” originates, I will go through my notes, including blog posts, for the link to the document I read. Will those sources be findable in 2025 when the book comes out? Probably not.
Online information is disappearing at an alarming rate. The search systems I use “disappear” content even though I have a PDF of the source document in my electronic file. Intermediaries go out of business or filters block access to content.
I like the ideas in Jake Archibald’s essay. I also like the academic rigor of footnotes. But for the Telegram Labyrinth, I am minimizing footnotes. I assume that every investigator, intelligence professional, and government lawyer will know about Telegram. Therefore, what’s in my new book is common knowledge. That means, “Sorry, Miss Dalton, Stevie is dumping 95 percent of the footnotes.” (I should footnote that Miss Dalton was one of my teachers who wanted footnotes in Modern Language Association style for every single thing her students wrote.) Nope. Blame Web rot, blame my laziness, blame the wild social media environment.
You will live and probably have some of that Telegram common knowledge refreshed; for example, the Telegram programming language FIFT is like FORTH, only better. Get the pun? The Durovs have a sense of humor.
Stephen E Arnold, July 11, 2025
Google and the EU: A Couple That Do Not Get Along
July 11, 2025
Google’s EU legal woes are in the news again. The Mercury News shares the Bloomberg piece, “Google Suffers Setback in Fight Over EU’s 4.1 Billion Euros Fine.” An advisor to the EU’s Court of Justice, Advocate General Juliane Kokott, agrees with regulators’ choice to punish Google for abusing Android’s market power and discredits the company’s legal arguments. She emphasized:
“Google held a dominant position in several markets of the Android ecosystem and thus benefited from network effects that enabled it to ensure that users used Google Search. As a result, Google obtained access to data that enabled it in turn to improve its service.”
Though Kokott’s opinion is not binding, the court is known to rely heavily on its adviser’s opinions in final rulings. For its part, Google insists any market advantage it has is solely “due to innovation.” Sure, rigging the Search environment in its favor was plenty innovative. Just not legal. Not in the EU, anyway. Samuel Stolton reports:
“The top EU court’s final decision could prove pivotal for the future of the Android business model — which has provided free software in exchange for conditions imposed on mobile phone manufacturers. Such contracts provoked the ire of the commission in 2018, when the watchdog accused Alphabet Inc.’s Google of three separate types of illegal behavior that helped cement the dominance of its search engine, accompanying the order with the record fine. First, it said Google was illegally forcing handset makers to pre-install the Google Search app and the Chrome browser as a condition for licensing its Play Store — the marketplace for Android apps. Second, the EU said Google made payments to some large manufacturers and operators on condition that they exclusively pre-installed the Google Search app. Lastly, the EU said the Mountain View, California-based company prevented manufacturers wishing to pre-install apps from running alternative versions of Android not approved by Google.”
Meanwhile, the company is also in hot water over the EU’s Digital Markets Act. We learn that, in March, regulators scolded the firm for elevating its own services over others and for actively preventing app developers from guiding users to offers outside its app store. These practices violate the act, Google was told, and continuing them could lead to more fines. But are fines, even 4-billion-euro ones, enough to deter the tech giant?
Cynthia Murrell, July 11, 2025
Win Big in the Stock Market: AI Can Predict What Humans Will Do
July 10, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
AI is hot. Click bait is hotter. And the hottest is AI figuring out what humans will do “next.” Think stock picking. Think pitching a company “known” to buy what you are selling. The applications of predictive smart software make intelligence professionals gaming the moves of an adversary quiver with joy.
“New ‘Mind-Reading’ AI Predicts What Humans Will Do Next, And It’s Shockingly Accurate” explains:
Researchers have developed an AI called Centaur that accurately predicts human behavior across virtually any psychological experiment. It even outperforms the specialized computer models scientists have been using for decades. Trained on data from more than 60,000 people making over 10 million decisions, Centaur captures the underlying patterns of how we think, learn, and make choices.
Since I believe everything I read on the Internet, smart software definitely can pull off this trick.
How does this work?
Rather than building from scratch, researchers took Meta’s Llama 3.1 language model (the same type powering ChatGPT) and gave it specialized training on human behavior. They used a technique that allows them to modify only a tiny fraction of the AI’s programming while keeping most of it unchanged. The entire training process took only five days on a high-end computer processor.
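What the quoted passage describes sounds like what practitioners call parameter-efficient fine-tuning: freeze the big model and train only a sliver of added weights. Here is a minimal sketch using the Hugging Face peft library, assuming a Llama 3.1 checkpoint; the model name and hyperparameters are illustrative, not the Centaur team’s actual recipe.

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA: the base
# model stays frozen while small low-rank adapter matrices are trained.
# Model name and hyperparameters are assumptions for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

config = LoraConfig(
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

# Typically reports well under one percent of parameters as trainable --
# the "tiny fraction" the article mentions. Training then proceeds with a
# standard causal language modeling loop over the behavioral data.
model.print_trainable_parameters()
```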
Hmmm. The Zuck’s smart software. Isn’t Meta in the midst of playing catch-up? The company is believed to be hiring OpenAI professionals and other wizards who can convert the “also in the race” to “winner” more quickly than one can say “billions of dollars spent on virtual reality.”
The write up does not stop at predicting what a humanoid or a dinobaby will do. The write up reports:
In a surprising discovery, Centaur’s internal workings had become more aligned with human brain activity, even though it was never explicitly trained to match neural data. When researchers compared the AI’s internal states to brain scans of people performing the same tasks, they found stronger correlations than with the original, untrained model. Learning to predict human behavior apparently forced the AI to develop internal representations that mirror how our brains actually process information. The AI essentially reverse-engineered aspects of human cognition just by studying our choices. The team also demonstrated how Centaur could accelerate scientific discovery.
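Comparisons like the one the quote describes are usually done with simple correlational tools such as representational similarity analysis. A hedged sketch of that general technique follows; the arrays are random placeholders standing in for real hidden states and brain scans, not the researchers’ actual data or method.

```python
# Sketch of representational similarity analysis (RSA), one common way to
# compare a model's internal states with brain recordings. The data here
# are random placeholders; real work would use hidden activations and
# fMRI responses collected for the same experimental trials.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_trials = 100
model_states = rng.normal(size=(n_trials, 4096))  # hidden state per trial
brain_scans = rng.normal(size=(n_trials, 900))    # voxel pattern per trial

# Representational dissimilarity matrix (RDM) for each system: how
# different does each pair of trials look from that system's viewpoint?
model_rdm = pdist(model_states, metric="correlation")
brain_rdm = pdist(brain_scans, metric="correlation")

# Correlate the two RDMs; a higher coefficient means the model organizes
# the trials more like the brain does.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"Model-brain representational similarity: {rho:.3f}")
```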
I am sold. Imagine. These researchers will be able to make profitable investments, know when to take an alternate path to a popular tourist attraction, and discover a drug that will cure male pattern baldness. Amazing.
My hunch is that predictive analytics hooked up to a semi-hallucinating large language model can produce outputs. Will these predict human behavior? Absolutely. Did the Centaur system predict that I would believe this? Absolutely. Was it hallucinating? Yep, poor Centaur.
Stephen E Arnold, July 10, 2025
BBC Warns Perplexity That the Beeb Lawyers Are Not Happy
July 10, 2025
The BBC has had enough of Perplexity AI gobbling up and spitting out its content. Sometimes with errors. The news site declares, “BBC Threatened AI Firm with Legal Action over Unauthorised Content Use.” Well, less a threat and more a strongly worded letter. Tech reporter Liv McMahon writes:
“The BBC is threatening to take legal action against an artificial intelligence (AI) firm whose chatbot the corporation says is reproducing BBC content ‘verbatim’ without its permission. The BBC has written to Perplexity, which is based in the US, demanding it immediately stops using BBC content, deletes any it holds, and proposes financial compensation for the material it has already used. … The BBC also cited its research published earlier this year that found four popular AI chatbots – including Perplexity AI – were inaccurately summarising news stories, including some BBC content. Pointing to findings of significant issues with representation of BBC content in some Perplexity AI responses analysed, it said such output fell short of BBC Editorial Guidelines around the provision of impartial and accurate news.”
Perplexity answered the BBC’s charges with an odd reference to a third party:
“In a statement, Perplexity said: ‘The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.’ It did not explain what it believed the relevance of Google was to the BBC’s position, or offer any further comment.”
Huh? Of course, Perplexity is not the only AI firm facing such complaints, nor is the BBC the only publisher complaining. The Professional Publishers Association, which represents over 300 media brands, seconds the BBC’s allegations. In fact, the organization charges, Web-scraping AI platforms constantly violate UK copyrights. Though sites can attempt to block models with the Robots Exclusion Protocol (robots.txt), compliance is voluntary. Perplexity, the BBC claims, has not respected the protocol on its site. Perplexity denies that accusation.
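For readers unfamiliar with the protocol, the voluntary part is easy to see in code: a crawler asks robots.txt for permission, and nothing stops it from ignoring the answer. A minimal sketch using Python’s standard library follows; the bot name and URLs are illustrative, not Perplexity’s actual crawler.

```python
# Minimal sketch of how a well-behaved crawler consults robots.txt under
# the Robots Exclusion Protocol. The bot name and URLs are illustrative.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.bbc.co.uk/robots.txt")
rp.read()  # fetches and parses the site's robots.txt

url = "https://www.bbc.co.uk/news/some-article"
if rp.can_fetch("ExampleBot", url):
    print("robots.txt permits ExampleBot to fetch this page")
else:
    print("robots.txt asks ExampleBot not to fetch this page")
    # The protocol is advisory: an honest crawler stops here, while a
    # defiant one can simply ignore the answer and fetch anyway.
```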
Cynthia Murrell, July 10, 2025
Apple and Telegram: Victims of Their Strategic Hubris
July 9, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
What’s “strategic hubris”? I use this bound phrase to signal that an organization manifests decisions that combine big thinking with a destructive character flaw. Strategy is the word I use to capture the most important ideas to get an organization to generate revenue and win its business and political battles. Now hubris. A culture of superiority may be the weird instinct of a founder; it may be marketing lingo that people start believing; or it may be jargon learned in school. When the two come together, some organizations can make expensive, often laughable, mistakes. Examples range from Windows and its mobile phone to the Ford Edsel.
I read “Apple Reaches Out to OpenAI, Anthropic to Build Out Siri Technology.” In my opinion, this illustrates strategic hubris operating on two pivot points like a merry-go-round: Up and down; round and round.
The cited article states:
… over the past year or so it [Apple] has faced a variety of leadership and technological challenges developing Apple Intelligence, which is based on in-house foundation models. The more personalized Siri technology with more personalized AI-driven features is now due in 2026, according to a statement by Apple …
This “failure” is a result of strategic hubris. Apple’s leadership believed it could handle smart software. The company that taught China how to be a manufacturing superpower could surely learn and do AI. Apple’s leadership seems to have followed the marketing rule: Fire, Aim, Ready. Apple announced AI or Apple Intelligence and then failed to deliver. Then Apple reorganized and failed again. Now Apple is looking at third-party firms to provide the “intelligence” for Apple.
Personally, I think smart software is good at some things and terrible at others. Nevertheless, a failure to provide or “do” smart software is the digital equivalent of having a teacher put a dunce cap on a kid’s head and make him sit in the back of the classroom. In the last 18 months, Apple has been playing fast and loose with court decisions, playing nice with China, and writing checks for assorted fines levied by courts. But the premier action has been the firm’s failure in the alleged “next big thing.”
Let me shift from Apple because there is a firm in the same boat as the king of Cupertino. Telegram has no smart software. Nikolai Durov is, according to Pavel (the task master), working on AI. However, like Apple, Telegram has been chatting up (allegedly) Elon Musk. The Grok AI system, some rumors have it, would / could / should be integrated into the Telegram platform. Telegram has the same strategic hubris I associate with Apple. (These are not the only two firms afflicted with this digital SARS variant.)
I want to identify several messages I extracted from the Apple and Telegram AI anecdotes:
- Both companies were doing other things when the smart software yachts left the docks in Half Moon Bay
- Both companies have the job of integrating another firm’s smart software into large, fast-moving companies with many moving parts, legal problems, and engineers who are definitely into “strategic hubris”
- Both companies have to deliver AI that does not alienate existing users and that attracts new customers at the same time.
Will these firms be able to deliver a good enough AI solution? Probably. However, both may be vulnerable to third parties who hop on a merry-go-round. There is a predictable and actually not-so-smart pony named Apple and one named Messenger. The threat is that Apple and Telegram have been transmogrified into little wooden ponies. The smart people just ride them until the time is right to jump off.
That’s one scenario for companies with strategic hubris who missed the AI yachts when they were under construction and who were not on the expensive machines when they cast off. Can the costs of strategic hubris be recovered? The stakeholders hope so.
Stephen E Arnold, July 9, 2025
Humans May Be Important. Who Knew?
July 9, 2025
Here is an AI reality check. Futurism reports, “Companies that Replaced Humans with AI Are Realizing Their Mistake.” You don’t say. Writer Joe Wilkins tells us:
“As of April, even the best AI agent could only finish 24 percent of the jobs assigned to it. Still, that didn’t stop business executives from swarming to the software like flies to roadside carrion, gutting entire departments worth of human workers to make way for their AI replacements. But as AI agents have yet to even pay for themselves — spilling their employer’s embarrassing secrets all the while — more and more executives are waking up to the sloppy reality of AI hype. A recent survey by the business analysis and consulting firm Gartner, for instance, found that out of 163 business executives, a full half said their plans to ‘significantly reduce their customer service workforce’ would be abandoned by 2027. This is forcing corporate PR spinsters to rewrite speeches about AI ‘transcending automation,’ instead leaning on phrases like ‘hybrid approach’ and ‘transitional challenges’ to describe the fact that they still need humans to run a workplace.”
Few workers would be surprised to learn AI is a disappointment. The write-up points to a report from GoTo and Workplace Intelligence that found 62 percent of employees say AI is significantly overhyped. Meanwhile, 45 percent of IT managers surveyed paint AI rollouts as scattered and hasty. Security concerns and integration challenges were the main barriers, 56 percent of them reported.
Anyone who has watched firm after firm make a U-turn on AI-related layoffs will not be surprised by these findings. For example, after cutting staff by 22 percent last year, finance startup Klarna announced a recruitment drive in May. Wilkins quotes tech critic Ed Zitron, who wrote in September:
“These ‘agents’ are branded to sound like intelligent lifeforms that can make intelligent decisions, but are really just trumped-up automations that require enterprise customers to invest time programming them.”
Companies wanted a silver bullet. Now they appear to be firing blanks.
Cynthia Murrell, July 9, 2025
Can AI Do What Jesus Enrique Rosas Does?
July 8, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
I learned about a YouTube video via a buried link in a story in my newsfeed. The video is titled “Analysis of Jeffrey Epstein’s Cell Block Video Released by the FBI.” I know little about Mr. Rosas. He is a body language “expert.” I know zero about this field. He gives away a book about body language, and I assume that he gets inquiries and sells services. He appears to have developed what he calls a Knesix Code. He does not disclose his academic background.
But …
His video analysis of the Epstein surveillance camera data makes clear that Sr. Rosas has an eye for detail. Let me cite two examples:
First, he notes that in some of the footage released by the FBI, a partial image of a video editing program’s interface appears. Not only does it appear, but the image appears in several separate sectors of the FBI-released video. Mr. Rosas raises the possibility that the FBI footage (described as unaltered) was modified.
An example of that video editing “tell” is the partial image of the program’s interface.
Second, Sr. Rosas spots a time gap, a “glitch,” in the FBI video.
How much is missing from the unedited video file? More than a minute.
Observations:
- I fed the interface image into a couple of smart software systems. None was able to identify the specific program’s interface from the partial image
- Mr. Rosas’ analysis identified two interesting anomalies in the video
- The allegedly unedited video appears to have been edited.
Net net: AI is not able to do what Sr. Rosas did. I do not want to speculate how “no videos” became this one video. I do not want to speculate why an unedited video contains two editing indications. I don’t want to think much about Jeffrey Epstein, the kiddie trafficking, and the individuals associating with him. I will stick with my observation, “AI does not seem to have the ability to do what Sr. Rosas did.”
Stephen E Arnold, July 8, 2025
Curation and Editorial Policies: Useful Net Positives
July 8, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
The write up “I Deleted My Second Brain” caught my attention. The author is Joan Westenberg. I had to look her up. She is a writer and entrepreneur. She sells subscriptions to certain essays. Okay, that’s the who. Now what does the essay address?
It takes a moment to convert “Zettelkasten slip” into “physical note card,” but I remembered learning this approach from the fellows who were pitching a note card retrieval system via a company called Remac. (No, I have no idea what happened to that firm. But I understand note cards. My high school debate coach in 1958 explained the concept to me.)
Ms. Westenberg says:
For years, I had been building what technologists and lifehackers call a “second brain.” The premise: capture everything, forget nothing. Store your thinking in a networked archive so vast and recursive it can answer questions before you know to ask them. It promises clarity. Control. Mental leverage. But over time, my second brain became a mausoleum. A dusty collection of old selves, old interests, old compulsions, piled on top of each other like geological strata. Instead of accelerating my thinking, it began to replace it. Instead of aiding memory, it froze my curiosity into static categories. And so… Well, I killed the whole thing.
I assume Ms. Westenberg is not engaged in a legal matter. The intentional deletion could lead to some interesting questions. On the other hand, for a person who does public relations and product positioning, deletion may not present a problem.
I liked the reference to Jorge Luis Borges (1899 to 1986), a writer with some interesting views about the nature of reality. As Ms. Westenberg notes:
But Borges understood the cost of total systems. In “The Library of Babel,” he imagines an infinite library containing every possible book. Among its volumes are both perfect truth and perfect gibberish. The inhabitants of the library, cursed to wander it forever, descend into despair, madness, and nihilism. The map swallows the territory. PKM systems promise coherence, but they often deliver a kind of abstracted confusion. The more I wrote into my vault, the less I felt. A quote would spark an insight, I’d clip it, tag it, link it – and move on. But the insight was never lived. It was stored. Like food vacuum-sealed and never eaten, while any nutritional value slips away. Worse, the architecture began to shape my attention. I started reading to extract. Listening to summarize. Thinking in formats I could file. Every experience became fodder. I stopped wondering and started processing.
I think the idea of too much information causing mental torpor is interesting for two reasons: [a] digital information has a mass of sorts and [b] information can disrupt the functioning of an information processing organ; that is, the human brain.
The fix? Just delete everything. Ms. Westenberg calls this “destruction by design.” She is now (presumably) lighter and more interested in taking notes. I think this is the modern equivalent of throwing out junk from the garage. My mother would say, after piling my ball bat, scuffed shoes, and fossils into the garbage can, “There. That feels better.” I would think because I did not want to suffer the wrath of mom, “No, mother, you are destroying objects which are meaningful to me. You are trashing a chunk of my self with each spring cleaning.” Destruction by design may harm other people. In the case of a legal matter, destruction by design can cost the person hitting delete big time.
What’s interesting is that the essay reveals something about Ms. Westenberg; for example, [a] A person who can destroy information can destroy other intangible “stuff” as well. How does that work in an organization? [b] The sudden realization that one has created a problem leads to a knee jerk reaction. What does that say about measured judgment? [c] The psychological boost from hitting the delete key clears the path to begin the collecting again. Is hoarding an addiction? What’s the recidivism rate for an addict who does the rehabilitation journey?
My major takeaway may surprise you. Here it is: Ms. Westenberg learned by trial and error over many years that curation is a key activity in knowledge work. Google began taking steps to winnow non-compliant Web sites from its advertising program. The decision reduces lousy content and advances Google’s agenda to control its digital Gutenberg machines. Like Ms. Westenberg, Google is realizing that indexing and saving “everything” is a bubbling volcano of problems.
Librarians know about curation. Entities like Ms. Westenberg and Google are just now realizing why informed editorial policies are helpful. I suppose it is good news that Ms. Westenberg and Google have come to the same conclusion. Too bad it took years to accept something one could learn at any library in five minutes.
Stephen E Arnold, July 8, 2025
New Business Tactics from Google and Meta: Fear-Fueled Management
July 8, 2025
No smart software. Just a dinobaby and an old laptop.
I like to document new approaches to business rules or business truisms. Examples range from truisms like “targeting is effective” to “two objectives is no objectives.” Today, July 1, 2025, I spotted anecdotal evidence of two new “rules.” Both seem custom tailored to the GenX, GenY, GenZ, and GenAI approach to leadership. Let’s look at each briefly and then consider how effective these are likely to be.
The first example of new management thinking appears in “Google Embraces AI in the Classroom with New Gemini Tools for Educators, Chatbots for Students, and More.” The write up explains that Google has:
introduced more than 30 AI tools for educators, a version of the Gemini app built for education, expanded access to its collaborative video creation app Google Vids, and other tools for managed Chromebooks.
Forget the one objective idea when it comes to products. Just roll out more than two dozen AI services. That will definitely catch the attention of grade school, middle school, high school, junior college, and university teachers in the US and elsewhere. I am not a teacher, but I know that when I attend neighborhood get-togethers, the teachers at these functions often ask me about smart software. From these interactions, very few understand that smart software comes in different “flavors.” AI is still a mostly unexplored innovation. But Google is chock full of smart people who certainly know how teachers can rush to two dozen new products and services in a jiffy.
The second rule is that organizations are hierarchical. Assuming this is the approach, one person should lead an organization, one person should lead a unit, one person should lead a department, and so on. This is the old Great Chain of Being slapped on an enterprise. My father worked in this type of company, and he liked it. He explained how work flowed from one box on the organization chart to another. With everything working the way my father liked things to work, bulldozers and mortars appeared on the loading docks. Since I grew up with this approach, it made sense to me. I must admit that I still find this type of set up appealing, and I am usually less than thrilled to work in a matrix-management, let’s-just-roll-with-it set up.
In “Nikita Bier, The Founder Of Gas And TBH, Who Once Asked Elon Musk To Hire Him As VP Of Product At Twitter, Has Joined X: ‘Never Give Up’” I learned that Meta is going with the two bosses approach to smart software. The write up reports as real news as opposed to news release news:
On Monday, Bier announced on X that he’s officially taking the reins as head of product. "Ladies and gentlemen, I’ve officially posted my way to the top: I’m joining @X as Head of Product," Bier wrote.
Earlier in June 2025, Mark Zuckerberg pumped money into Scale.io (an indexing outfit) and hired Alexandr Wang to be the top dog of Meta’s catch-up-in-AI initiative. It appears that Meta is going to give the “two bosses are better than one” approach its stamp of management genius approval. OpenAI appeared to emulate this approach, and it seemed to have spawned a number of competitors and created an environment in which huge sums of money could attract AI wizards to Mr. Zuckerberg’s social castle.
The first new management precept is that an organization can generate revenue by shotgunning more than two dozen new products and services at what Google sees as the education market. The outmoded management approach would focus on one product or service, provide that to a segment of the education market with some money to spend and a problem to solve, and then figure out how to make that product more useful and grow paying customers in that segment. That’s obviously stupid and not GenAI. The modern approach is to blast that birdshot somewhere in the direction of a big fuzzy market and go pick up the dead ducks for dinner.
The second new management precept is to take an important unit with a sense of desperation born from failure and put two people in charge. I think this can work, but in most of the successful outfits to which I have been exposed, there is one person at the top. He or she may be floating above the fray, but the idea is that someone, in theory, is in charge.
Several observations are warranted:
- The chaos approach to building a business has taken root and begun to flower at Google and Meta. Out with the old and in with the new. I am willing to wait and see what happens because when either success or failure arrives, the stories of VCs jumping from tall buildings or youthful managers buying big yachts will circulate.
- The innovations in management at Google and Meta suggest to me a bit of desperation. Both companies perceive that each is falling behind or in danger of losing. That perception may be accurate because once the AI payoff is not evident, Google and Meta may find themselves paddling up the river, not floating down the river.
- The two innovations viewed as discrete actions are expensive, risky, and illustrative of the failure of management at both firms. Employees, stakeholders, and users have a lot to win or lose.
I heard a talk by someone who predicted that traditional management consulting would be replaced by smart software. In the blue chip firm in which I worked years ago, management decisions like these would be guaranteed to translate to old-fashioned, human-based consulting projects.
In today’s world, decisions by “leadership” are unlikely to be remediated by smart software. Fixing up the messes will require individuals with experience, knowledge, and judgment.
As Julius Caesar allegedly said:
In summo periculo timor misericordiam non recipit.
This means something along the lines of, “In situations of danger, fear feels no pity.” These new management rules suggest that both Google’s and Meta’s “leadership” are indeed fearful and grandstanding in order to overcome those inner doubts. The decisions to go against conventional management methods seem obvious and logical to them. To others, perhaps the “two bosses” and “a blast of AI products and services” are just ill advised or not informed?
Stephen E Arnold, July 8, 2025
We Have a Cheater Culture: Quite an Achievement
July 8, 2025
The annual lamentations about AI-enabled cheating have already commenced. Professor Elizabeth Wardle of Miami University would like to reframe that debate. In an opinion piece published at Cincinnati.com, she declares, “Students Aren’t Cheating Because they Have AI, but Because Colleges Are Broken.” Reasons they are broken, she writes, include factors like reduced funding and larger class sizes. Fundamentally, though, the problem lies in universities’ failure to sufficiently evolve.
Some suggest thwarting AI with a return to blue-book essays. Wardle, though, believes that would be a step backward. She notes early U.S. colleges were established before today’s specialized workforce existed. The handwritten assignments that served to train the wealthy, liberal-arts students of yesteryear no longer fit the bill. Instead, students need to understand how things work in the present and how to pivot with change. Yes, including a fluency with AI tools. Graduates must be “broadly literate,” the professor writes. She advises:
“Providing this kind of education requires rethinking higher education altogether. Educators must face our current moment by teaching the students in front of us and designing learning environments that meet the times. Students are not cheating because of AI. When they are cheating, it is because of the many ways that education is no longer working as it should. But students using AI to cheat have perhaps hastened a reckoning that has been a long time coming for higher ed.”
Who is to blame? For one, state legislatures. Many incentivize universities to churn out students with high grades in majors that match certain job titles. State funding, Wardle notes, is often tied to graduates hitting high salaries out of the gate. Her frustration is palpable as she asserts:
“Yes, graduates should be able to get jobs, but the jobs of the future are going to belong to well-rounded critical thinkers who can innovate and solve hard problems. Every column I read by tech CEOs says this very thing, yet state funding policies continue to reward colleges for being technical job factories.”
Professor Wardle is not all talk. In her role as Director of the Howe Center for Writing Excellence, she works with colleagues to update higher-learning instruction. One of their priorities has been how to integrate AI into curricula. She writes:
“The days when school was about regurgitating to prove we memorized something are over. Information is readily available; we don’t need to be able to memorize it. However, we do need to be able to assess it, think critically about it, and apply it. The education of tomorrow is about application and innovation.”
Indeed. But these urgent changes cannot be met as long as funding continues to dwindle. In fact, Wardle argues, we must once again funnel significant tax money into higher education. Believe it or not, that is something we used to do as a society. (She recommends Christopher Newfield’s book “The Great Mistake” to learn how and why free, publicly funded higher ed fell apart.) Yes, we suspect there will not be too much US innovation if universities are broken and stay that way. Where will that leave us?
Cynthia Murrell, July 8, 2025

