An AI Outfit Promises Something Infinite: Er, Really? Infinite?
November 28, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
Venture Beat publishes some write ups that strike me like marketing collateral. I am not sure what to make of an article that asserts an “infinite” service. Let’s not skip over the idea of infinity. I think a couple of mathy types drove themselves crazy thinking about infinities of infinities working two ways: bigger numbers and then smaller numbers. Yep, those close to these mathy types checked them into a facility for those sufficiently unusual. Would this article have pushed them over the edge: “Anthropic’s Claude Opus 4.5 Is Here: Cheaper AI, Infinite Chats, and Coding Skills That Beat Humans”?

Georg Cantor makes an appearance at the Happy Valley Rest Home. He tells the nurse delivering his scopolamine that he wants to talk about the central infinite-cardinality problem before he is placed in isolation. Thanks, Venice.ai. Close enough, although the setting does not look too much like a 19th century facility. Oh, well.
The first step on what will be, I assure you, a short journey: let’s let Anthropic Claude 4.5 define “infinite,” shall we? Here’s the response:
Infinity represents unboundedness—a concept of endlessness without limit. In mathematics, it describes quantities exceeding any finite value or processes continuing indefinitely. Infinity isn’t a number but an abstract notion of limitlessness, fundamental to calculus, set theory, and cosmology.
Notice that Claude 4.5 did not mention marketing and PR speak. This is important for “Anthropic’s Claude Opus 4.5 Is Here: Cheaper AI, Infinite Chats, and Coding Skills That Beat Humans.” Note the two-word phrase “infinite chats.” Nope, I don’t think so. Not even Amazon’s money pipe can serve “infinite chats.” Come on, Claude. Get real. Amazon cannot even keep its system online if my memory serves me. But infinite? Hmmm.
The write up says that Venture Beat engaged in an exclusive interview with Alex Albert, “Anthropic’s head of developer relations.” (I think this means a marketing job. What do you think?) Here’s a statement that caught my attention:
The new model, Claude Opus 4.5, scored higher on Anthropic’s most challenging internal engineering assessment than any human job candidate in the company’s history, according to materials reviewed by VentureBeat. The result underscores both the rapidly advancing capabilities of AI systems and growing questions about how the technology will reshape white-collar professions. The Amazon-backed company is pricing Claude Opus 4.5 at $5 per million input tokens and $25 per million output tokens — a dramatic reduction from the $15 and $75 rates for its predecessor, Claude Opus 4.1, released earlier this year. The move makes frontier AI capabilities accessible to a broader swath of developers and enterprises while putting pressure on competitors to match both performance and pricing.
Does this strike you as what I call a “better, faster, cheaper” approach? I would add that cheaper is the operative word. Yep, buy the market and raise prices later. Does this surprise me? No. From a company that promises to deliver something that costs money infinitely, cheaper does not surprise me one whit.
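The quoted price cut is concrete enough to compute. Here is a minimal sketch comparing the two quoted rate cards; the workload size is a hypothetical assumption, not anything from the article:

```python
# Compare Claude Opus 4.5 vs Opus 4.1 cost using the per-million-token
# rates quoted in the article. The workload figures are hypothetical.

def cost_usd(input_tokens, output_tokens, in_rate, out_rate):
    """Dollar cost at per-million-token input/output rates."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Hypothetical monthly workload: 10M input tokens, 2M output tokens.
old_price = cost_usd(10_000_000, 2_000_000, 15, 75)  # Opus 4.1 rates
new_price = cost_usd(10_000_000, 2_000_000, 5, 25)   # Opus 4.5 rates

print(old_price)  # 300.0
print(new_price)  # 100.0
```

At these quoted rates, any workload costs exactly one third of what it did, which is the “dramatic reduction” the write up describes; cheaper, yes, but each token still costs something, which is the rub with “infinite.”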
Here’s another statement I found interesting:
Albert said employee testers consistently reported that the model demonstrates improved judgment and intuition across diverse tasks — a shift he described as the model developing a sense of what matters in real-world contexts. “The model just kind of gets it,” Albert said. “It just has developed this sort of intuition and judgment on a lot of real world things that feels qualitatively like a big jump up from past models.”
To test this statement, I fired up Claude 4.5 again and asked, “Is it possible for a commercial enterprise supported by modest customer sales and venture capital to deliver an “infinite service”; for example, infinite chats for users?”
Here’s what the “just kind of gets it” Claude told me:
No, it’s not sustainably possible. Here’s why:
- Real Costs Scale with Usage: Every chat consumes computational resources (servers, energy, bandwidth). Even efficient models incur marginal costs per interaction that accumulate with heavy users.
- Finite Capital: Venture funding and modest sales provide limited runway. Without usage caps, a small subset of power users could drain resources disproportionately, making the business model unsustainable.
- Economic Reality: “Unlimited” offerings typically include hidden constraints (throttling during peak times, deprioritization, or eventual tier upgrades). True infinity would require infinite capital and infrastructure.
- Tragedy of the Commons: Without limits, rational users maximize personal use, collectively overwhelming the system—forcing either service degradation or imposed restrictions.
Practical Outcome: Most companies offer generous but capped services (e.g., “50 free messages/day”) or tiered pricing. Marketing terms like “unlimited” often mean “more than typical users need,” not literally infinite. Would you like examples of how AI companies balance accessibility with sustainability?
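Claude’s “finite capital” point can be made with back-of-the-envelope arithmetic. A minimal sketch; every number below is a hypothetical assumption for illustration, not data from Anthropic or anyone else:

```python
# Back-of-the-envelope runway model for an "unlimited chat" service.
# All figures are hypothetical assumptions, not company data.

def runway_months(capital, users, chats_per_user_month, cost_per_chat,
                  revenue_per_user_month):
    """Months until capital runs out; None if revenue covers costs."""
    monthly_cost = users * chats_per_user_month * cost_per_chat
    monthly_revenue = users * revenue_per_user_month
    burn = monthly_cost - monthly_revenue
    if burn <= 0:
        return None  # sustainable: no finite runway
    return capital / burn

# 100k users, 300 chats/month each, $0.02 marginal cost per chat,
# $5/month revenue per user, $50M in the bank.
print(runway_months(50_000_000, 100_000, 300, 0.02, 5.0))   # 500.0 months

# Claude's "tragedy of the commons" point: power users 10x their usage.
print(runway_months(50_000_000, 100_000, 3000, 0.02, 5.0))  # ~9.1 months
```

The point of the toy model is Claude’s own point: with any nonzero marginal cost per chat, unbounded usage against a bounded bank balance produces a finite runway, so “infinite” is a marketing word, not an operating plan.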
I think that Mr. Albert might want to check his assertions with his own product: “infinite” is a tough promise. But let’s not stop with the infinite. What about this statement in the write up?
Opus 4.5 outscores all human candidates on company’s toughest engineering test.
For me, the sticking point is the word “all.” Categorical affirmatives are problematic. Black swans aside, “all” is a cousin of infinite. Its use implies a rather sweeping approach to those who take a tough engineering test. What’s the sample? One, 100, 10,000? Yeah. Oh, well.
What’s the proof? Here’s the chart that did not knock me over with shock. I know it is difficult to read. Navigate to the cited article on a boat anchor computer with a big screen, and you can sort of read the numbers. Look for this:

Claude 4.5 is better than also-rans like Google and OpenAI. Well, why not? Anthropic has whipped infinite and tamed all. Dealing with weak-wristed systems like Googzilla and ChatGPT is trivial.
Mr. Albert offered a statement which Venture Beat uses to complete this remarkable feat of content marketing, hyperbole, and logical impossibilities:
When asked about the engineering exam results and what they signal about AI’s trajectory, Albert was direct: “I think it’s a really important signal to pay attention to.”
Yep, pay attention. I did.
Stephen E Arnold, November 28, 2025
IBM on the Path to Dyson Spheres But Quantum Networks Come First
November 28, 2025
This essay is the work of a dumb dinobaby. No smart software required.
How does one of the former innovators in Fear, Uncertainty, and Doubt respond to the rare atmosphere of smart software? The answer, in my opinion, appears in “IBM, Cisco Outline Plans for Networks of Quantum Computers by Early 2030s.” My prediction was wrong about IBM. I thought that with a platform like Watson, IBM would aim directly at Freeman Dyson’s sphere. The idea is to build a sphere in space to gather energy and power advanced computing systems. Well, one can’t get to the Dyson sphere without a network of quantum computers. And the sooner the better.

A big thinker conceptualizes inventions anticipated by science fiction writers. The expert believes that if he thinks it, that “it” will become real. Sure, but usually more than a couple of years are needed for really big projects like affordable quantum computers linked via quantum networks. Thanks, Venice.ai. Good enough.
The write up from the “trust” outfit Thomson Reuters says:
IBM and Cisco Systems … said they plan to link quantum computers over long distances, with the goal of demonstrating the concept is workable by the end of 2030. The move could pave the way for a quantum internet, though executives at the two companies cautioned that the networks would require technologies that do not currently exist and will have to be developed with the help of universities and federal laboratories.
Imagine: artificial general intelligence is likely to arrive about the same time. IBM has Watson. Does this mean that Watson can run on quantum computers? Those can solve the engineering challenges of the Dyson sphere. IBM can then solve the world’s energy requirements. This sequence seems like a reasonable tactical plan.
The write up points out that building a quantum network poses a few engineering problems. I noted this statement in the news report:
The challenge begins with a problem: Quantum computers like IBM’s sit in massive cryogenic tanks that get so cold that atoms barely move. To get information out of them, IBM has to figure out how to transform information in stationary “qubits” – the fundamental unit of information in a quantum computer – into what Jay Gambetta, director of IBM Research and an IBM fellow, told Reuters are “flying” qubits that travel as microwaves. But those flying microwave qubits will have to be turned into optical signals that can travel between Cisco switches on fiber-optic cables. The technology for that transformation – called a microwave-optical transducer – will have to be developed with the help of groups like the Superconducting Quantum Materials and Systems Center, led by the Fermi National Accelerator Laboratory near Chicago, among others.
Trivial compared to the Dyson sphere confection. It is now sundown for year 2025. IBM and its partner target demonstrating that the concept works by the end of 2030. That works out to 60 months. Call it 72 just to add a margin of error.
Several observations:
- IBM and its partner Cisco Systems are staking out their claims to the future of computing
- Compared to the Dyson sphere idea, quantum computers networked together provide the plumbing for an Internet that makes Jack Dorsey’s Web 5 vision seem like a Paleolithic sketch on the wall of the Lascaux Caves.
- Watson and IBM’s other advanced AI technologies probably assisted the IBM marketing professionals with publicizing Big Blue’s latest idea for moving beyond the fog of smart software.
Net net: The spirit of avid science fiction devotees is effervescing. Does the idea of a network of quantum computers tickle your nose or your fancy? I have marked my calendar.
Stephen E Arnold, November 28, 2025
Coca-Cola and AI: Things May Not Be Going Better
November 27, 2025
Coca-Cola didn’t learn its lesson last year with a less than good AI-generated Christmas commercial. It repeated the mistake in 2025. Although the technology has improved, the ad still bears all the fakeness of early CGI (when examined in hindsight, of course). Coca-Cola, according to Creative Bloq, did want to redeem itself, so the soft drink company controlled every detail in the ad: “Devastating Graphic Shows Just How Bad The Coca-Cola Christmas Ad Really Is.”
Here’s how one expert viewed it:
“In a post on LinkedIn, the AI consultant Dino Burbidge points out the glaring lack of consistency and continuity in the design of the trucks in the new AI Holidays are Coming ad, which was produced by AI studio Secret Level. At least one of the AI-generated vehicles appears to completely defy physics, putting half of the truck’s payload beyond the last wheel.
Dino suggests that the problem with the ad is not AI per se, but the fact that no human appears to have checked what the AI models generated… or that more worryingly they checked but didn’t care, which is extraordinary when the truck is the main character in the ad.”
It’s been suggested that Coca-Cola used AI to engage in rage bait instead of building a genuinely decent Christmas ad. There was a behind-the-scenes video of how the ad was made, and even that used an AI voiceover.
I liked the different horse-drawn wagons. Very consistent.
Whitney Grace, November 27, 2025
Watson: Transmission Is Doing Its Part
November 25, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read an article that stopped me in my tracks. It was “IBM Revisits 2011 AI Jeopardy Win to Capture B2B Demand.” The article reports that a former IBM executive said:
People want AI to be able to do what it can’t…. and immature technology companies are not disciplined enough to correct that thinking.
I find the statement fascinating. IBM Watson was supposed to address some of the challenges cancer patients faced. The reality is that cancer docs in Houston and Manhattan provided IBM with some feedback that shattered IBM’s own ill-disciplined marketing of Watson. What about that building near NYU that was stuffed with AI experts? What about IBM’s sale of its medical unit to Francisco Partners? Where is that smart software today? It is Merative, and it is not clear if the company is hitting home runs and generating a flood of cash. So that Watson technology is no longer part of IBM’s smart software solution.

Thanks, Venice.ai. Good enough.
The write up reports that a company called Transmission, which is a business to business or B2B marketing agency, made a documentary about Watson AI. It is not clear from the write up if the documentary was sponsored or if Transmission just had the idea to revisit Watson. According to the write up:
The documentary [“Who is…Watson? The Day AI Went Primetime”] underscores IBM’s legacy of innovation while framing its role in shaping an ethical, inclusive future for AI, a critical differentiator in today’s competitive landscape.
The Transmission/Earnest documentary is a rah rah for IBM and its Watsonx technology. Think of this as Watson Version 2 or Version 3. The Transmission outfit and its Earnest unit (yes, that is its name) in London, England, wants to land more IBM work. Furthermore, rumors suggest that the video was created by Celia Aniskovich as a “spec project.” High quality videos running 18 minutes can burn through six figures quickly. A cost of $250,000 or $300,000 is not unexpected. Add to this the cost of the PR campaign to push Transmission’s brand storytelling capability, and the investment strikes me as a bad-economy sales move. In a fat economy, a marketing outfit would just book business at trade shows or lunch. Now, it is rah rah time and cash outflow.
The write up makes clear that Transmission put its best foot forward. I learned:
The documentary was grounded in testimonials from former IBM staff, and more B2B players are building narratives around expert commentary. B2B marketers say thought leaders and industry analysts are the most effective influencer types (28%), according to an April LinkedIn and Ipsos survey. AI pushback is a hot topic, and so is creating more entertaining B2B content. The biggest concern among leveraging AI tools among adults worldwide is the loss of human jobs, according to a May Kantar survey. The primary goal for video marketing is brand awareness (35%), according to an April LinkedIn and Ipsos survey. In an era where AI is perceived as “abstract or intimidating,” this documentary attempts to humanize it while embracing the narrative style that makes B2B brands stand out…
The IBM message is important. Watson Jeopardy was “good” AI. The move fast, break things, and spend billions approach used today is not like IBM’s approach to Watson. (Too bad about those cancer docs not embracing Watson, a factoid not mentioned in the cited write up.)
The question is, “Will the Watson video go viral?” The Watson Jeopardy dust up took place in 2011, but the Watson name lives on. Google is probably shaking its talons at the sky wishing it had a flashy video too. My hunch is that Google would let its AI make a video, or one of the YouTubers would volunteer, hoping that an act of goodness would reduce the likelihood Google would cut their YouTube payments. I guess I could ask Watson what it thinks, but I won’t. Been there. Done that.
Stephen E Arnold, November 25, 2025
Microsoft Factoid: 30 Percent of Our Code Is Vibey
November 24, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
Is Microsoft cranking out one fifth to one third of its code using vibey methods? A write up from Ibrahim Diallo seeks to answer this question in his essay “Is 30% of Microsoft’s Code Really AI-Generated?” My instinctive response was, “Nope. Marketing.” Microsoft feels the heat. The Google is pushing the message that it will deliver the Agentic Operating System for the emergence of a new computing epoch. In response, Microsoft has been pumping juice into its marketing collateral. For example, Microsoft is building data center systems that span nations. Copilot will make your Notepad “experience” more memorable. Visio, a stepchild application, is really cheap. Add these steps together, and you get a profile of a very large company under pressure and showing signs of cracking. Why? Google is turning up the heat and Microsoft feels it.
Mr. Diallo writes:
A few months back, news outlets were buzzing with reports that Satya Nadella claimed 30% of the code in Microsoft’s repositories was AI-generated. This fueled the hype around tools like Copilot and Cursor. The implication seemed clear: if Microsoft’s developers were now “vibe coding,” everyone should embrace the method.
Then he makes a pragmatic observation:
The line between “AI-generated” and “human-written” code has become blurrier than the headlines suggest. And maybe that’s the point. When AI becomes just another tool in the development workflow, like syntax highlighting or auto-complete, measuring its contribution as a simple percentage might not be meaningful at all.
Several observations:
- Microsoft’s leadership is outputting difficult to believe statements
- Microsoft apparently has been recycling code because those contributions from Stack Overflow are not tabulated
- Marketing is now the engine making Microsoft’s AI future unfold.
I would assert that the answer to Mr. Diallo’s question is, “Whatever unfounded assertion Microsoft offers is actual factual.” That’s okay with me, but some people may be hooked by Google’s Agentic Operating System pitch.
Stephen E Arnold, November 24, 2025
Waymo Mows Down a Mission Cat
November 21, 2025
Cat lovers in San Francisco have a new reason to be angry at Waymo, Google’s self-driving car division. The outrage has reached all the way to the UK, where the Metro reports, “Robotaxi Runs Over and Kills Popular Cat that Greeted People in a Corner Shop.” Reporter Sarah Hooper writes:
“KitKat, the beloved pet cat at Randa’s Market, was run over by an automated car on October 27. He was rushed to a hospital by a bartender working nearby, but was pronounced dead. KitKat’s death has sparked an outpouring of fury and sadness from those who loved him – and questions about the dangers posed by self-driving cars. Randa’s Market owner Mike Zeidan told Rolling Stone: ‘He was a special cat. You can tell by the love and support he’s getting from the community that he was amazing.’ San Francisco Supervisor Jackie Fielder spoke out publicly, saying: ‘Waymo thinks they can just sweep this under the rug and we will all forget, but here in the Mission, we will never forget our sweet KitKat.’ Anger in the community has increased after it was revealed that on the same day KitKat was killed, Waymo co-CEO Tekedra Mawakana said she thought society is ‘ready to accept deaths’ caused by automated cars. But KitKat’s owner pointed out that next time, the death could be that of a child, not just a beloved pet.”
Good point. In a statement, the company insists the tabby “darted” under the car as it pulled away. Perhaps. But do the big dogs at Google really feel “deepest sympathies” for those grieving their furry friend, as the statement claims? It was one of them, after all, who asserted the world is ready to trade deaths for her firm’s technology.
Curious readers can navigate to the write-up to see a couple photos of the charismatic kitty.
Cynthia Murrell, November 21, 2025
Microsoft Knows How to Avoid an AI Bubble: Listen Up, Grunts, Discipline Now!
November 18, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I relish statements from the leadership of BAIT (big AI tech) outfits. A case in point is Microsoft and the Fortune story “AI Won’t Become a Bubble As Long As Everyone Stays Thoughtful and Disciplined, Microsoft’s Brad Smith Says.” First, let’s consider the meaning of the word “everyone.” I navigated to Yandex.com and used its Alice smart software to get the definition of “everyone”:
The word “everyone” is often used in social and organizational contexts, and to denote universal truths or principles.
That’s a useful definition. Universal truths and principles. If anyone should know, it is Yandex.

Thanks, Venice.ai. Good enough, but the Russian flag is white, blue, and red. Your inclusion of Ukraine yellow was one reason why AI is good enough, not a slam dunk.
But isn’t there a logical issue with the conditional “if” followed by a universal assertion about everyone? I find the statement illogical. It mostly sounds like English, but it presents a wild and crazy idea at a time when agreement about anything is quite difficult to achieve. Since I am a dinobaby, my reaction to the Fortune headline is obviously out of touch with the “real” world as it exists at Fortune and possibly Microsoft.
Let’s labor forward with the write up, shall we?
I noted this statement in the cited article attributed to Microsoft’s president Brad Smith:
“I obviously can’t speak about every other agreement in the AI sector. We’re focused on being disciplined but being ambitious. And I think it’s the right combination,” he said. “Everybody’s going to have to be thoughtful and disciplined. Everybody’s going to have to be ambitious but grounded. I think that a lot of these companies are [doing that].”
It was not Fortune’s wonderful headline writers who stumbled into a logical swamp. The culprit or crafter of the statement was “1000 Russian programmers did it” Smith. It is never Microsoft’s fault in my view.
But isn’t this the AI approach of go really fast, don’t worry about the future, and break things?
Mr. Smith, according to the article, said:
“We see ongoing growth in demand. That’s what we’ve seen over the past year. That’s what we expect today, and frankly our biggest challenge right now is to continue to add capacity to keep pace with it.”
I wonder if Microsoft’s hiring social media influencers is related to generating demand and awareness, not getting people to embrace Copilot. Despite its jumping off the starting line first, Microsoft is now lagging behind its “partner” OpenAI and two or three other BAIT entities.
The Fortune story includes supporting information from a person who seems totally, 100 percent objective. Here’s the quote:
At Web Summit, he met Anton Osika, the CEO of Lovable, a vibe-coding startup that lets anyone create apps and software simply by talking to an AI model. “What they’re doing to change the prototyping of software is breathtaking. As much as anything, what these kinds of AI initiatives are doing is opening up technology opportunities for many more people to do more things than they can do before…. This will be one of the defining factors of the quarter century ahead…”
I like the idea of Microsoft becoming a “defining factor” for the next 25 years. I would raise the question, “What about the Google? Is it chopped liver?”
Several observations:
- Mr. Smith’s informed view does not line up with hiring social media influencers to handle the “growth and demand.” My hunch is that Microsoft fears that it is losing the consumer perception of Microsoft as the really Big Dog. Right now, that seems to be super-sized OpenAI and the mastiff-like Gemini.
- The craziness of “everybody” illustrates a somewhat peculiar view of consensus today. Does everybody include those fun-loving folks fighting in the Russian special operation or the dust ups in Sudan to name two places where “everybody” could be labeled just plain crazy?
- Mr. Smith appears to conflate putting Copilot in Notepad and rolling out Clippy in Yeezies with substantive applications not prone to hallucinations, mistakes, and outputs that could get some users of Excel into some quite interesting meetings with investors and clients.
Net net: Yep, everybody. Not going to happen. But the idea is, ah, thoughtful, which is interesting to me.
Stephen E Arnold, November 18, 2025
Surprise! Countries Not Pals with the US Are Using AI to Spy. Shocker? Hardly
November 17, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The Beeb is a tireless “real” news outfit. Like some Manhattan newscasters, fixing up reality to make better stories, the BBC allowed a couple of high-profile members of leadership to find their future elsewhere. Maybe the chips shop in Slough?

Thanks, Venice.ai. You are definitely outputting good enough art today.
I am going to suspend my disbelief and point to a “real” news story about a US company. The story is “AI Firm Claims Chinese Spies Used Its Tech to Automate Cyber Attacks.” The write up reveals information that should not surprise anyone except the Beeb. The write up reports:
The makers of artificial intelligence (AI) chatbot Claude claim to have caught hackers sponsored by the Chinese government using the tool to perform automated cyber attacks against around 30 global organizations. Anthropic said hackers tricked the chatbot into carrying out automated tasks under the guise of carrying out cyber security research. The company claimed in a blog post this was the “first reported AI-orchestrated cyber espionage campaign”.
What’s interesting is that Anthropic itself was surprised. If Google and Microsoft are making smart software part of the “experience,” why wouldn’t bad actors avail themselves of the tools? Information about lashing smart software to a range of online activities is not exactly a secret.
What surprises me about this “news” is:
- Why is Anthropic spilling the beans about a nation state using its technology? Once such an account is identified, block it. Use pattern matching to determine if others are doing substantially similar exploits. Block those. If you want to become a self-appointed police professional, get used to the cat-and-mouse game. You created the system. Deal with it.
- Why is the BBC presenting old information as something new? Perhaps its intrepid “real” journalists should pay attention to the public information distributed by cyber security firms? I think that is called “research,” but that may be surfing on news releases or running queries against ChatGPT or Gemini. Why not try Qwen, the China-affiliated system?
- I wonder why the Google-Anthropic tie up is not mentioned in the write up. Google released information about a quite specific smart exploit a few months ago. Was this information used by Anthropic to figure out that a bad actor was an Anthropic user? Is there a connection here? I don’t know, but that’s what investigative types are supposed to consider and address.
My personal view is that Anthropic is positioning itself as a tireless defender of truth, justice, and the American way. The company may also benefit from some of Google’s cyber security efforts. Google owns Mandiant and is working hard to make the Wiz folks walk down the yellow brick road to the Googleplex.
Net net: Bad actors using low cost, subsidized, powerful, and widely available smart software is not exactly a shocker.
Stephen E Arnold, November 17, 2025
Someone Is Not Drinking the AI-Flavored Kool-Aid
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The future of AI is in the hands of the masters, the digital P. T. Barnums. A day or so ago, I wrote about Copilot in Excel. Allegedly a spreadsheet can be enhanced by Microsoft. Google is beavering away with a new enthusiasm for content curation. This is a short step to weaponizing what is indexed, what is available to Googlers and Mama, and what is provided to Google users. Heroin dealers do not provide consumer-oriented labels with ingredients.

Thanks, Venice.ai. Good enough.
Here’s another example of this type of soft control: “I’ll Never Use Grammarly Again — And This Is the Reason Every Writer Should Care.” The author makes clear that Grammarly, developed and operated from Ukraine, now wants to change her writing style. The essay states:
What once felt like a reliable grammar checker has now turned into an aggressive AI tool always trying to erase my individuality.
Yep, that’s what AI companies and AI repackagers will do: use the technology to improve the human. What a great idea! Just erase the fingerprints of the human. Introduce AI drivel and lowest common denominator thinking. Human, the AI says, take a break. Go to the yoga studio or grab a latte. AI has you covered.
The essay adds:
Superhuman [Grammarly’s AI solution for writers] wants to manage your creative workflow, where it can predict, rephrase, and automate your writing. Basically, a simple tool that helped us write better now wants to replace our words altogether. With its ability to link over a hundred apps, Superhuman wants to mimic your tone, habits, and overall style. Grammarly may call it personalized guidance, but I see it as data extraction wrapped with convenience. If we writers rely on a heavily AI-integrated platform, it will kill the unique voice, individual style, and originality.
One human dumped Grammarly, writing:
I’m glad I broke up with Grammarly before it was too late. Well, I parted ways because of my principles. As a writer, my dedication is towards original writing, and not optimized content.
Let’s go back to ubiquitous AI (some you know is there; other AI operates in dark-pattern mode). The object of the game for the AI crowd is to extract revenue and control information. By weaponizing information and making life easy, just think who will be in charge of many things in a few years. If you think humans will rule the roost, you are correct. But the number of humans pushing the buttons will be very small. These individuals have zero self-awareness and believe that their ideas, no matter how far out and crazy, are the right way to run the railroad.
I am not sure most people will know that they are on a train taking them to a place they did not know existed and don’t want to visit.
Well, tough luck.
Stephen E Arnold, November 12, 2025
Innovation Cored, Diced, Cooked and Served As a Granny Scarf
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I do not pay much attention to Vogue, once a giant, fat fashion magazine. However, my trusty newsfeed presented this story to me this morning at 6:26 am US Eastern: “Apple and Issey Miyake Unite for the iPhone Pocket. It’s a Moment of Connecting the Dots.” I had no idea what an Issey Miyake was. I navigated to Yandex.com (a more reliable search service than Google, which is going to bail out the sinking Apple AI rowboat) and learned:
Issey Miyake … the brand name under which designer clothing, shoes, accessories and perfumes are produced.
Okay, a Japanese brand selling collections of clothes, women’s clothes with pleating, watches, perfumes, and a limited edition of an Evian mineral water in bottles designed by someone somewhere, probably Southeast Asia.
But here’s the word that jarred me: Moment. A moment?
The Vogue write up explains:
It’s a moment of connecting the dots.
Moment? Huh.
Upon further investigation, the innovation is a granny scarf; that is, a knitted garment with a pocket for an iPhone. I poked around and here’s what the “moment” looks like:
Source: Engadget, November 2025
I don’t recall much about my great grandmother (my father’s mother had a mother; this person was called “Granny” or “Gussy”). I know she was alive in 1958. She died at the age of 102 or 103. She knitted and tatted scarfs, odd little white cloths called antimacassars, and small circular or square items called doilies (singular “doily”).
Apple and the Japanese fashion icon have inadvertently emulated some of the outputs of my great grandmother “Granny” or “Gussy.” Were she, my grandmother, and my father alive, one or all of them would have taken legal action. But time makes us fools, and “the spirits of the wise sit in the clouds and mock” scarfs with pouches like an NBA bound baby kangaroo.
But the innovation which may be either Miyake’s, Apple’s, or a combo brainstorm of Miyake and Apple comes in short and long sizes. My Granny cranked out her knit confections like a laborer in a woolen mill in Ipswich in the 19th century. She gave her outputs away.
You can acquire this pinnacle of innovation for US $150 or US $230.
Several observations:
- Apple’s skinny phone flopped; Apple’s AI flopped. Therefore, Apple is into knitted scarfs to revivify its reputation for product innovation. Yeah, innovative.
- Next to Apple’s renaming Apple iTV as Apple TV, one may ask, “Exactly what is going on in Cupertino other than demanding that I log into an old iPhone I use to listen to podcasts?” Desperation gives off an interesting vibe. I feel it. Do you?
- Apple does good hardware. It does not do soft goods with the same élan. Has its leadership lost the thread?
Smell that desperation yet? Publicity hunger, the need to be fashionable and with it, and taking the hard edges off a discount Mac laptop.
Net net: I like the weird pink version, but why didn’t the geniuses behind the Genius Bar do the zippy orange of the new candy bar but otherwise indistinguishable mobile device rolled out a short time ago? Orange? Not in the scarf palette.
Granny’s white did not make the cut.
Stephen E Arnold, November 11, 2025