Brainyfone or Foneybrain?
June 16, 2025
If you spend too much time on your phone, raise your hand. We’re not snoops, so we haven’t activated your device’s camera to spy on you. We’ll just assume that you have and point you to what the BBC wrote: “How Mobile Phones Have Changed Our Brains.” We feel guilty about being on the phone so much, but it’s a very convenient tool.
Adults check their phones on average 344 times a day, or about once every four minutes (1,440 minutes in a day divided by 344 checks works out to roughly one check every 4.2 minutes). YIKES! We use our phones to complete a task, and that leads to other activities: checking email, visiting social media, and so on. Our neural pathways are being restructured to rely on phones. Here’s what the article says that does:
“As you might expect, with our societal dependence on devices increasing rapidly every year, the research struggles to keep up. What we do know is that the simple distraction of checking a phone or seeing a notification can have negative consequences. This isn’t very surprising; we know that, in general, multitasking impairs memory and performance. One of the most dangerous examples is phone use while driving. One study found that merely speaking on the phone, not texting, was enough to make drivers slower to react on the road. It’s true for everyday tasks that are less high-stakes, too. Simply hearing a notification "ding" made participants of another study perform far worse on a task – almost as badly as participants who were speaking or texting on the phone during the task.”
Phones are not entirely to blame for brain drain. The article did report on a study supporting the theory that phones atrophy memory, but another study found that phones improved memory when participants were allowed to take notes with them.
The article makes a thought-provoking assertion:
“Individuals who think that our brains have "limited" resources (such as that resisting one temptation makes it harder to resist the next) are indeed more likely to exhibit this phenomenon in testing. But for those who think that the more we resist temptation, the more we’re strengthening the capacity to keep resisting temptation – that our brains, in other words, have unlimited resources – exerting self-control or mental fatigue on one task doesn’t negatively affect their performance on the next one.
More fascinatingly still, whether we have a limited or non-limited view of the brain may be largely cultural – and that Western countries like the US may be more likely to think the mind is limited compared to other cultures, such as India.”
We’re not as limited as we think we are, and our brains will adapt to mobile devices. However, it’s still healthy to get off your phone.
Whitney Grace, June 16, 2025
Up for a Downer: The Limits of Growth… Baaaackkkk with a Vengeance
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
Where were you in 1972? Oh, not born yet. Oh, hanging out in the frat house or shopping with sorority pals? Maybe you were working at a big time consulting firm?
An outfit known as Potomac Associates slapped its name on a thought piece with some repetitive charts. The original work evolved from an outfit contributing big ideas. The Club of Rome lassoed William W. Behrens, Dennis and Donella Meadows, and Jørgen Randers to pound data into the then-state-of-the-art World3 model allegedly developed by Jay Forrester at MIT. (Were there graduate students involved? Of course not.)
The result of the effort was evidence that growth becomes unsustainable and everything falls down: business, government systems, universities, etc. etc. Personally I am not sure why the idea that infinite growth cannot continue on finite resources was a big deal. The idea seems obvious to me. I was able to get my little hands on a copy of the document courtesy of Dominique Doré, the super great documentalist at the company which employed my jejune and naive self. Who was I to think, “This book’s conclusion is obvious, right?” Was I wrong! The concept of hockey sticks with handles stretching to the ends of the universe was a shocker to some.
The book’s big conclusion is the focus of “Limits to Growth Was Right about Collapse.” Why? I think the realization is a novel one to those who watched their shares in Amazon, Google, and Meta zoom to the sky. Growth is unlimited, some believed. The write up in “The Next Wave,” an online newsletter or information service, happily quotes an update to the original Club of Rome document:
This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business as usual scenario of the LtG standard run.
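For readers who have never seen the “overshoot and collapse mode” in action, here is a minimal sketch, assuming nothing more than Python and invented parameters. It is emphatically not World3, which couples population, capital, agriculture, resources, and pollution in hundreds of equations; it is a toy that reproduces the shape of the curve: growth feeds on a slowly renewing resource, overshoots what the resource can support, and then both crash.

```python
# Toy overshoot-and-collapse sketch. NOT World3; all values are invented.
def simulate(steps=200):
    population = 1.0    # arbitrary units
    resource = 100.0    # finite stock with slow renewal
    history = []
    for t in range(steps):
        consumption = 0.05 * population            # demand scales with population
        resource = max(resource + 0.5 - consumption, 0.0)  # 0.5 = renewal per step
        # Grow while the stock comfortably covers demand; shrink otherwise.
        rate = 0.05 if resource > 10 * consumption else -0.08
        population *= (1 + rate)
        history.append((t, population, resource))
    return history

for t, pop, res in simulate()[::20]:
    print(f"t={t:3d}  population={pop:9.2f}  resource={res:7.1f}")
```

Run it and the printout shows the hockey stick, the peak, and the slide. The update quoted above says the full model, fed current data, still points the same way for the coming decade.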
Bummer. In the kiddie story, an acorn plops on Chicken Little’s head. Chicken Little promptly proclaimed in a peer-reviewed academic paper with non-reproducible research and a YouTube video:
The sky is falling.
But keep in mind that the kiddie story is fiction. Humans are adept at survival. Maslow’s hierarchy of needs captures the spirit of the species. Will life as modern CLs perceive it end?
I don’t think so. Without getting too philosophical, I would point to Gottlieb Fichte’s thesis, antithesis, synthesis as a reasonably good way to think about change (gradual and catastrophic). I am not into philosophy, so when life gives you lemons, make lemonade. Then sell the business to a local food service company.
Collapse and its pal chaos create opportunities. The sky remains.
The cited write up says:
Economists get over-excited when anyone mentions ‘degrowth’, and fellow-travelers such as the Tony Blair Institute treat climate policy as if it is some kind of typical 1990s political discussion. The point is that we’re going to get degrowth whether we think it’s a good idea or not. The data here is, in effect, about the tipping point at the end of a 200-to-250-year exponential curve, at least in the richer parts of the world. The only question is whether we manage degrowth or just let it happen to us. This isn’t a neutral question. I know which one of these is worse.
See, degrowth creates opportunities. Chicken Little was wrong when the acorn beaned her. The collapse will be just another chance to monetize. Today is Friday the 13th. Watch out for acorns and recycled “insights.”
Stephen E Arnold, June 13, 2025
Just Cheat Your Way Through Life: Hey, It Is 2025. Get with It, Loser
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
I am a dinobaby. I lived in Campinas, Brazil. The power was on and off most days of the week. Mostly off, though. My family in the 1950s was one of the few American units in that town. My father planned for my education. I attended the local school for a few weeks. Then the director sent me home. The school was not set up for non-Portuguese speakers. There were a few missionaries in Campinas, and one of them became my Calvert Course tutor. He went to visit a smaller town, tangled with a snake, and died. That meant that I had to read the World Books my father bought as a replacement for the years of schooling I missed.
Bummer. No ChatGPT. Not much of anything except reading the turgid prose of the World Books and answering questions my mother and father presented for the section I read that day. “What was the capital of Tasmania?” I answered, “Hobart.” I guess that meant I passed. So it went for several years.
What would I have done if I had a laptop, electricity, and an Internet connection? I can tell you straight away that I would have let the smart software do my homework. Skip the reading. Let ChatGPT, You.com, Venice.ai, or some similar system do the work. I had a leather soccer ball (football), and the locals let me play even though I sucked.
When I read “AI Cheating Is So Out of Hand In America’s Schools That the Blue Books Are Coming Back,” I immediately sat down and wrote this blog post. I don’t need smart software, thank you. I have access to it and other magical computer software. I actually like doing research, analysis, and critical thinking. I am happy when someone tells me I am wrong, uninformed, or off base. I take note, remember the input, and try not to make the same mistake again.
But the reality of today is that smart software is like the World Books my parents made me read so I could memorize facts and answer questions based on whatever baloney those volumes contained. AI is here; education has changed; and most students are not going to turn their backs on smart software, speed, and the elimination of what is for most people the painful process of learning.
People are not stupid. Most just stop learning anything they don’t absolutely have to master. Now why learn anything? Whip out the smart phone, punch the icon for smart software, and let the system do the thinking.
The write up says:
… as AI tears through America’s elite educational system, lobotomizing tomorrow’s young leaders as it goes, could it be that blue books have been refashioned from a villain of the pre-AI age to a hero for our algorithmically-poisoned times? More and more, it seems like they’re the dark knight that America’s illiterate masses needs. The Journal notes that Roaring Spring Paper Products, the family-owned paper company that produces a majority of the blue books that are sold on college campuses, admits that the new AI era has ironically been good for its business.
Nifty. Lobotomize: I wonder if the author of the article knows exactly how unpredictable the procedure was and probably still is in some remote part of the modern world. Will using LLMs make people stupider? No, what makes people stupider is lacking the ability, the motivation, and the curiosity required to learn. Doom scrolling is popular because young people are learning to follow trends, absorb video techniques, and “do” their fingernails. These may be more important than my knowing that the longest snake known when the World Books were published was over 20 feet long, specifically, the reticulated python. (Thank goodness, the snake lived in Indonesia, not Brazil.)
The write up says:
Indeed, if the return of pen and paper is a promising sign, America’s educators aren’t out of the woods yet—not even close. A recent survey found that 89% of college students had admitted to using ChatGPT to complete a homework assignment. AI-detection tools designed to spot cheating also routinely fail. Increasingly, America’s youth seem to view their educations as a high-stakes video game to be algorithmically juked. In short, more drastic measures (like the formulation of new laws and regulations around AI use) may need to be taken if the onset of America’s aggressive stupidification is to be halted.
My personal view is that a cultural shift has taken place. People don’t want to “work.” Families are no longer nuclear; they are not one mother, one father, and 2.4 children and maybe a dog, probably a boxer or a Labrador. Students no longer grab a book; they only have two hands and both are required to operate a mobile phone or a laptop. Teachers are no longer authority figures; they are viewed as problems, particularly by upper middle class and wealthy parents or parent as the case may be.
The blue book thing is mildly interesting, but I am not sure these are a solution. Students cannot read or write cursive; they print. This means that answers will be shorter, maybe like social media posts. If a student has a knack for art, icons may be included next to an insightful brief statement. A happy face signals the completion of the test. I would, if I were 13, draw a star and a calligraphic “A” on the front of my blue book.
What type of world will this educational milieu deliver? To be honest, I am glad I am old and will die before I have to experience too much of the LLM world.
Stephen E Arnold, June 13, 2025
Another Vote for the Everything App
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
An online information service named 9 to 5 Mac published an essay / interview summary titled “Nothing CEO says Apple No Longer Creative; Smartphone Future Is a Single App.” The write up focuses on the “inventor / coordinator” of the OnePlus mobile devices and the Nothing Phone. The key point of the write up is the idea that at some point in the future, one will have a mobile device and a single app, the everything app.
The article quotes a statement Carl Pei (the head of the Nothing Phone) made to another publication; to wit:
I believe that in the future, the entire phone will only have one app—and that will be the OS. The OS will know its user well and will be optimized for that person […] The next step after data-driven personalization, in my opinion, is automation. That is, the system knows you, knows who you are, and knows what you want. For example, the system knows your situation, time, place, and schedule, and it suggests what you should do. Right now, you have to go through a step-by-step process of figuring out for yourself what you want to do, then unlocking your smartphone and going through it step by step. In the future, your phone will suggest what you want to do and then do it automatically for you. So it will be agentic and automated and proactive.
This type of device will arrive in seven to 10 years.
For me, the notion of an everything app or a super app began in 2010, but I am not sure who first mentioned the phrase to me. I know that WeChat, the Chinese everything app, became available in 2011. The Chinese government was aware at some point that an “everything” app would make surveillance, social scoring, and filtering much easier. The “let many approved flowers bloom” approach of the Apple and Google online app stores was inefficient. One app was more direct, and I think the A to B approach to tracking and blocking online activity makes sense to many in the Middle Kingdom. The trade off of convenience for a Really Big Brother was okay with citizens of China. Go along and get along may have informed the uptake of WeChat.
Now the everything app seems like a sure bet. The unknown is which outstanding technology firm will prevail. The candidates are WeChat, Telegram, X.com, Sam Altman’s new venture, or a surprise player. Will other apps (the not-everything apps, from restaurant menus to car washes) survive? Sure. But if Sam AI-Man succeeds with his Ive smart device and his stated goal of buying the Chrome browser from Google, the winner may be a CEO who was fired by his board, came back, and cleaned out those who did not jump on the AI-Man’s bandwagon.
That’s an interesting thought. It is Friday the 13th, Google. You too, Microsoft. And Apple. How could I have forgotten Tim Cook and his team of AI adepts?
Stephen E Arnold, June 13, 2025
Will Amazon Become the Bell Labs of Consumer Products?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
I did some work at Bell Labs and then at the Judge Greene-crafted Bellcore (Bell Communications Research). My recollection is that the place was quiet, uneventful, and had a lousy cafeteria. The Cherry Hill Mall provided slightly better food, just slightly. Most of the people were normal compared to the nuclear engineers at Halliburton and my crazed colleagues at the blue chip consulting firm dumb enough to hire me before I became a dinobaby. (Did you know that security at the Cherry Hill Mall had a golf cart to help Bell Labs’ employees find their vehicles? The reason? Bell Labs hired staff to deal with this recurring problem. Yes, Howard, Alan, and I lost our car when we went to lunch. I finally started parking in the same place and wrote the door exit and lamp number down in my calendar. Problem solved!)
Is Amazon like that? On a visit to Amazon, I formed an impression somewhat different from Bell Labs, Halliburton, and the consulting firm. The staff were not exactly problematic. I just recall having to repeat and explain things. Amazon struck me as an online retailer with money and challenges in handling traffic. The people with whom I interacted when I visited with several US government professionals were nice and different from the technical professionals at the organizations which paid me cash money.
Is this important? Yes. I don’t think of Amazon as particularly innovative. When it wanted to do open source search, it hired some people from Lucid Imagination, now Lucidworks. Amazon just did what other large-scale Lucene/Solr users did: index content and allow people to run queries. Not too innovative in my book. Amazon also industrialized back office and warehouse projects. These are jobs that require finding existing products and consultants, asking them to propose “solutions,” picking one, and getting the workflow working. Again, not particularly difficult when compared to the holographic memory craziness at Bell Labs or the consulting firm’s business of inventing consumer products for Fortune 500 companies that would sell and get the consulting firm’s staggering fees paid in cash promptly. In terms of the nuclear engineering work, Amazon was, and probably still is, not in the game. Some of the rocket people are, but the majority of the Amazon workers are in retail, digital plumbing, and creating dark pattern interfaces. This is “honorable” work, but it is not invention in the sense of slick Monte Carlo code cranked out by Halliburton’s Dr. Julian Steyn or multi-frequency laser technology for jamming more data through a fiber optic connection.
I read “Amazon Taps Xbox Co-Founder to Lead New Team Developing Breakthrough Consumer Products.” I asked myself, “Is Amazon now in the Bell Labs’ concept space?” The write up tries to answer my question, stating:
The ZeroOne team is spread across Seattle, San Francisco and Sunnyvale, California, and is focused on both hardware and software projects, according to job postings from the past month. The name is a nod to its mission of developing emerging product ideas from conception to launch, or “zero to one.” Amazon has a checkered history in hardware, with hits including the Kindle e-reader, Echo smart speaker and Fire streaming sticks, as well as flops like the Fire Phone, Halo fitness tracker and Glow kids teleconferencing device. Many of the products emerged from Lab126, Amazon’s hardware research and development unit, which is based in Silicon Valley.
Okay, the Fire Phone (maybe Foney) and the Glow thing for kids? Innovative? I suppose. But to achieve success in raw innovation like the firms at which I was an employee? No, Amazon is not in that concept space. Amazon is more comfortable cutting a deal with Elastic instead of “inventing” something like Google’s Transformer or Claude Shannon’s approach to extracting a signal from noise. Amazon sells books and provides an almost clueless interface to managing those on the Kindle eReader.
The write up says (and I believe everything I read on the Internet):
Amazon has pulled in staffers from other business units that have experience developing innovative technologies, including its Alexa voice assistant, Luna cloud gaming service and Halo sleep tracker, according to LinkedIn profiles of ZeroOne employees. The head of a projection mapping startup called Lightform that Amazon acquired is helping lead the group. While Amazon is expanding this particular corner of its devices group, the company is scaling back other areas of the sprawling devices and services division.
Innovation is a risky business. Amazon sells stuff and provides online access with uptime of 98 or 99 percent. It does not “do” innovation. I wrote a book chapter about Amazon’s blockchain patents. What happened to that technology, some of which struck me as promising and sort of novel given the standards for US patents? The answer, based on the information I have seen since I wrote the book chapter, is, “Not much.” In less time, Telegram dumped out dozens of “inventions.” These have ranged from sticking crypto wallets into every Messenger user’s mini app to refining the bot technology to display third-party, off-Telegram Web sites on the fly for about 900 million Messenger users.
Amazon hit a dead end with Alexa and something called Halo.
When an alleged criminal organization operating as an “Airbnb” outfit with no fixed offices and minimal staff can innovate and Amazon with its warehouses cannot, there’s a useful point of differentiation in my mind.
The write up reports:
Earlier this month, Amazon laid off about 100 of the group’s employees. The job cuts included staffers working on Alexa and Amazon Kids, which develops services for children, as well as Lab126, according to public filings and people familiar with the matter who asked not to be named due to confidentiality. More than 50 employees were laid off at Amazon’s Lab126 facilities in Sunnyvale, according to Worker Adjustment and Retraining Notification (WARN) filings in California.
Okay. Fire up a new unit. Will the approach work? I hope for stakeholders’ and employees’ sake, Amazon hits a home run. But in the back of my mind, innovation is difficult. Quite special people are needed. The correct organizational set up or essentially zero set up is required. Then the odds are usually against innovation, which, if truly novel, evokes resistance. New is threatening.
Can the Bezos bulldozer shift into high gear and do the invention thing? I don’t know, but I have some nagging doubts.
Stephen E Arnold, June 12, 2025
Musk, Grok, and Banning: Another Burning Tesla?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:
A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.
I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?
The write up says:
Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts. Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”
I did not feel comfortable with Grok because of content exclusion or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
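To make the mechanics concrete, here is a minimal sketch, assuming Python and an invented document set and blocklist. It shows how a single exclusion rule applied at index time produces a coverage void the searcher can never detect from the results themselves:

```python
# Minimal sketch of index-time exclusion. Documents and the excluded
# term are invented for illustration.
documents = [
    "quarterly report praises vendor A",
    "lawsuit filed against vendor A",
    "vendor B announces new product",
]

# The editorial policy the searcher never sees: matching records are
# silently dropped before they ever reach the index.
excluded_terms = {"lawsuit"}

index = [doc for doc in documents
         if not any(term in doc for term in excluded_terms)]

def search(query: str) -> list[str]:
    return [doc for doc in index if query in doc]

print(search("vendor A"))  # ['quarterly report praises vendor A']
```

The query succeeds, the results look complete, and the void is invisible. That is the point: exclusion leaves no trace for the user to audit.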
The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.
The write up adds another consideration to smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but the notion of relying on weaponized information or systems that spy on the user presents new challenges.
The write up reports:
Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.
How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and them appear more informed.
The risks of reliance on Grok or any other smart software include:
- The output is incomplete
- The output is weaponized or shaped by intentional actions or by factors beyond the developers’ control
- The output is simply wrong, made up, or hallucinated
- Users may act as though shallow knowledge is sufficient for a decision.
The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.
Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.
Stephen E Arnold, June 12, 2025
Developers: Try to Kill ‘Em Off and They Come Back Like Giant Hogweeds
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
Developers, a category which probably extends to “coders” and “programmers,” have been employees of note for more than a half century. Even the esteemed Institute for Advanced Study enforced some boundaries between the “real” thinking mathematicians and the engineers who fooled around in the basement with a Stone Age computer.
Giant hogweeds can have negative impacts on humanoids who interact with them. Some say the same consequences ensue when accountants, lawyers, and MBAs engage in contact with programmers: Skin irritation and possibly blindness.
“The Recurring Cycle of ‘Developer Replacement’ Hype” addresses this boundary. The focus is on smart software which allegedly can do heavy-lifting programming. One of my team (Howard, the recipient of the old and forgotten Information Industry Association award for outstanding programming) is skeptical that AI can do what he does. I think that our work on the original MARS system which chugged along on the AT&T IBM MVS installation in Piscataway in the 1980s may have been a stretch for today’s coding wonders like Claude and ChatGPT. But who knows? Maybe these smart systems would have happily integrated Information Dimensions database with the MVS and allowed the newly formed Baby Bells to share certain data and “charge” one another for those bits? Trivial work now I suppose in the wonderful world of PL/1, Assembler, and the Basis “GO” instruction in one of today’s LLMs tuned to “do” code.
The write up points out that the tension between bean counters, MBAs and developers follows a cycle. Over time, different memes have surfaced suggesting that there was a better, faster, and cheaper way to “do” code than with programmers. Here are the “movements” or “memes” the author of the cited essay presents:
- No code or low code. The idea is that working in PL/1 or any other “language” can be streamlined with middleware between the human and the executables, the libraries, and the control instructions.
- The cloud revolution. The idea is that one just taps into really reliable and super secure services or micro services. One needs to hook these together and a robust application emerges.
- Offshore coding. The concept is simple: Code where it is cheap. The code just has to be good enough. The operative word is cheap. Note that I did not highlight secure, stable, extensible, and similar semi desirable attributes.
- AI coding assistants. Let smart software do the work. Microsoft allegedly produces oodles of code with its smart software. Google is similarly thrilled with the idea that quirky wizards can be allowed to find their future elsewhere.
The essay’s main point is that despite the memes, developers keep cropping up like those pesky giant hogweeds.
The essay states:
Here’s what the "AI will replace developers" crowd fundamentally misunderstands: code is not an asset—it’s a liability. Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables. If AI makes writing code faster and cheaper, it’s really making it easier to create liability. When you can generate liability at unprecedented speed, the ability to manage and minimize that liability strategically becomes exponentially more valuable. This is particularly true because AI excels at local optimization but fails at global design. It can optimize individual functions but can’t determine whether a service should exist in the first place, or how it should interact with the broader system. When implementation speed increases dramatically, architectural mistakes get baked in before you realize they’re mistakes. For agency work building disposable marketing sites, this doesn’t matter. For systems that need to evolve over years, it’s catastrophic. The pattern of technological transformation remains consistent—sysadmins became DevOps engineers, backend developers became cloud architects—but AI accelerates everything. The skill that survives and thrives isn’t writing code. It’s architecting systems. And that’s the one thing AI can’t do.
I agree, but there are some things programmers can do that smart software cannot. Get medical insurance.
Stephen E Arnold, June 12, 2025
Why Emulating Oxford University in the US Is an Errand for Fools
June 11, 2025
Just a dinobaby and no AI: How horrible an approach?
I read an essay with the personal touches I admire in writing: a student sleeping on the floor, an earnest young man eating KY fry on a budget airline, and an individual familiar with Laurel and Hardy comedies. This person wrote an essay, probably by hand with an ink pen on a yellow tablet, titled “5 Ways to Stop AI Cheating.”
What are these five ways? The ones I noted boil down to this: have rules and punish violators. Humiliation in front of peers is a fave. Presumably these students did not have weapons or belong to a street gang active in the school. The five ways identified in the essay are:
- Handwrite everything. No typewriters, no laser printers, and no computers. (I worked with a fellow on a project for Persimmon IT, which did some work on the DEC Alpha, and he used computers. Handwriting was a no-go for interacting with the DECs equipped with the “hot” chip way back when.)
- Professors interact with a student and talk to or interrogate the young scholar-to-be
- Examinations were oral or written. One passed or failed. None of this namby-pamby “gentleman’s C” KY fry stuff
- Inflexibility about knowing or not knowing. Know, and one passes. Not knowing, one becomes a member of Parliament or a titan of industry
- No technology. (I would not want to suggest that items one and five are redundant; that suggestion would be harshly judged by some of my less intellectually gifted teachers at assorted so-so US institutions of inferior learning.)
Now let’s think about the fool’s errand. The US is definitely a stratified society, just like the UK. If one is a “have,” life is going to be much easier than if one is a “have not.” Why? Money, family connections, exposure to learning opportunities, possibly tutors, etc. In the US, technology is ubiquitous. I do not want to repeat myself, so a couple of additional thoughts will appear in item five below.
Next, grilling a student one on one is an invitation to trouble. A student with hurt feelings need only say, “He/she is not treating me fairly.” Bingo. Stuff happens. I am not sure how a one-on-one in a private space would be perceived by a neutral third party. If one has to meet, meet in a public place.
Third, writing in blue books poses two problems. The first is that the professor has to read what the student has set forth in handwriting. Second, many students can neither write legible cursive nor print letters in an easily recognizable form. These are hurdles in the US. Elsewhere, I am not sure.
Fourth, inflexibility is a characteristic of some factions in the US. However, helicopter parents and assorted types of “outrage” can make inflexibility for a public servant a risky business. If Debbie is a dolt, one must find a way to be flexible if her parents are in the upper tier of American economic strata. Inflexibility means litigation or some negative networking or a TikTok video.
Finally, the problem with the no-tech approach is that it just won’t work. Consider smart software. Teachers use it and have LLMs fix up “original research.” Students use it to avoid reading and writing. Some schools ban mobile devices. Care to try that at an American university when shooters can prowl the campus?
The essay, like the fantasies of people who want to live as Florentines did in the 15th century, is nuts. Pestilence, poverty, filth, violence, and big-time corruption: these were everyday companions.
Cheating is here to stay. Politician is a code word for crook. Faculty (at least at Harvard) is the equivalent of bad research. Students are the stuff of YouTube shorts. Writing in blue books? A trend which may not have the staying power of Oxford’s stasis. I do like the bookstore, however.
Stephen E Arnold, June 11, 2025
Lights, Ready the Smart Software, Now Hit Enter
June 11, 2025
Just a dinobaby and no AI: How horrible an approach?
I like snappy quotes. Here’s a good one from “You Are Not Prepared for This Terrifying New Wave of AI-Generated Videos.” The write up says:
I don’t mean to be alarmist, but I do think it’s time to start assuming everything you see online is fake.
I like the categorical affirmative. I like the “alarmist.” I particularly like “fake.”
The article explains:
Something happened this week that only made me more pessimistic about the future of truth on the internet. During this week’s Google I/O event, Google unveiled Veo 3, its latest AI video model. Like other competitive models out there, Veo 3 can generate highly realistic sequences, which Google showed off throughout the presentation. Sure, not great, but also, nothing really new there. But Veo 3 isn’t just capable of generating video that might trick your eye into thinking it’s real: Veo 3 can also generate audio to go alongside the video. That includes sound effects, but also dialogue—lip-synced dialogue.
If the Google-type synths are good enough and cheap, I wonder how many budding film directors will note the capabilities and think about their magnum opus on smart software dollies. Cough up a credit card and for $250 per month imagine what videos Google may allow you to make. My hunch is that Mother Google will block certain topics, themes, and “treatments.” (How easy would it be for a Google-type service to weaponize videos about the news, social movements, and recalcitrant advertisers?)
The write up worries gently as well, stating:
We’re in scary territory now. Today, it’s demos of musicians and streamers. Tomorrow, it’s a politician saying something they didn’t; a suspect committing the crime they’re accused of; a “reporter” feeding you lies through the “news.” I hope this is as good as the technology gets. I hope AI companies run out of training data to improve their models, and that governments take some action to regulate this technology. But seeing as the Republicans in the United States passed a bill that included a ban on state-enforced AI regulations for ten years, I’m pretty pessimistic on that latter point. In all likelihood, this tech is going to get better, with zero guardrails to ensure it advances safely. I’m left wondering how many of those politicians who voted yes on that bill watched an AI-generated video on their phone this week and thought nothing of it.
My view is that several questions may warrant some noodling by a humanoid or possibly an “ethical” smart software system; for example:
- Can AI detectors spot and flag AI-generated video? Ignoring or missing synthetic clips may have interesting social knock-on effects.
- Will a Google-type outfit ignore videos that praise an advertiser whose products are problematic? (Health and medical videos? Who defines “problematic”?)
- Will firms with video generating technology self regulate or just do what yields revenue? (Producers of adult content may have some clever ideas, and many of these professionals are willing to innovate.)
Net net: When will synth videos win an Oscar?
Stephen E Arnold, June 11, 2025
LLMs, Dread, and Good Enough Software (Fast and Cheap)
June 11, 2025
Just a dinobaby and no AI: How horrible an approach?
More philosopher programmers have grabbed a keyboard and loosed their inner Plato. A good example is the essay “AI: Accelerated Incompetence” by Doug Slater. I have a hypothesis about this embrace of epistemological excitement, but that will appear at the end of this dinobaby post.
The write up posits:
In software engineering, over-reliance on LLMs accelerates incompetence. LLMs can’t replace human critical thinking.
The driver of the essay is that some believe that programmers should use outputs from large language models to generate software. Doug does not focus on Google and Microsoft. Both companies are convinced that smart software can write good enough code. (Good enough is the new standard of excellence at many firms, including the high-flying, thin-air breathing Googlers and Softies.)
The write up identifies three beliefs, memes, or MBAisms about this use of LLMs. These are:
- LLMs are my friend. Actually LLMs are part of a push to get more from humanoids involved in things technical. For a believer, time is gained using LLMs. To a person with actual knowledge, LLMs create work in order to catch errors.
- Humans are unnecessary. This is the goal of the bean counter. The goal of the human is to deliver something that works (mostly). The CFO is supposed to reduce costs and deliver (real or spreadsheet fantasy) profits. Humans, at least for now, are needed when creating software. Programmers know how to do something and usually demonstrate “nuance”; that is, intuitive actions and thoughts.
- LLMs can do what humans do, especially programmers and probably other technical professionals. As evidence of doing what humans do, the anecdote about the robot dog attacking its owner illustrates that smart software has some glitches. Hallucinations? Yep, those too.
The wrap up to the essay states:
If you had hoped that AI would launch your engineering career to the next level, be warned that it could do the opposite. LLMs can accelerate incompetence. If you’re a skilled, experienced engineer and you fear that AI will make you unemployable, adopt a more nuanced view. LLMs can’t replace human engineering. The business allure of AI is reduced costs through commoditized engineering, but just like offshore engineering talent brings forth mixed fruit, LLMs fall short and open risks. The AI hype cycle will eventually peak. Companies which overuse AI now will inherit a long tail of costs, and they’ll either pivot or go extinct.
As a philosophical essay crafted by a programmer, I think the write up is very good. If I were teaching again, I would award the essay an A minus. I would suggest adding some concrete examples, like “Google suggests gluing cheese on pizza.”
Now what’s the motivation for the write up? My hypothesis is that some professional developers have a Spidey sense that the diffident financial professional will license smart software and fire the humanoids who write code. Is this a prudent decision? For the bean counter, it is self-preservation. He or she does not want to be sent to find a future elsewhere. For the programmer, the drum beat of efficiency and the fife of cost reduction are now loud enough to leak through noise-reduction headphones. Plato did not have an LLM, and he hallucinated with the chairs and rear-view mirror metaphors.
Stephen E Arnold, June 11, 2025