Big Ideas, Clicks, and Judgment A-Plenty
January 30, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Last post of January 2026: What’s up with Harvard?
It’s Friday and I want to ask a question. Why didn’t I have professors who presented remarkable ideas to their students and to the world? Lousy school, dumb instructors, or truly stupid students.
I am not sure if you recall the comet that zipped through our obscure cluster of planets a few months ago. I spotted an article titled “What If 3I/ATLAS Is AI/ATLAS? Harvard Professor Unveils 18 Anomalies of Interstellar Object.” This Harvard expert continues to suggest that the comet named 3I/Atlas is something more than a chunk of stuff from somewhere out there. Among the possibilities this esteemed Harvard professors has offered now that the 3I/Atlas object is far away include chemical fingerprints and mini jets. The big idea, however, seems to be that this professor suggests that the 3I/Atlas is a manifestation of artificial intelligence. Yeah, okay.

An academic luminary has concluded that angels, AI robots, and Albert Einstein have come together in one world changing idea: The thruple will have a child who combines the best of each parent. It will be a significant event because the baby will be able to attend a certain prestigious university for free and then become a professor. Thanks, Venice.ai.
But Avi Loeb has just been one-upped by another Harvardoid. This esteemed world renowned expert, Dr. Michael Guillen, is a former Harvard Harvardoid. “Former Harvard Physicist Claims He’s Found Heaven Exactly Like in the Bible” reports:
Dr. Michael Guillen, a former Harvard physics professor, claims he’s discovered Heaven’s exact location at the Cosmic horizon.
Okay, Dr. Guillen. You nailed it. Are you still puzzled? The write up explains:
According to the Bible, the lowest level of Heaven is Earth’s atmosphere. The mid-level heaven is outer space. The highest-level heaven is what we’re talking about: It’s where God dwells.” He suggests that anything situated past the Cosmic Horizon possesses extraordinary characteristics, operating beyond the constraints of space-time as humans comprehend it. According to him, these attributes align perfectly with biblical accounts of Heaven from antiquity. Not only does it remain perpetually “up, but it’s unreachable by mortal beings.
Therefore, he has found Heaven.
Let’s think about this Harvard thing. Why is Harvard woven into click bait like AI comets and finding Heaven?
I have a theory.
My view is that some people want to attract attention. Doing substantive research, writing papers, and generating meaningful feedback is slow. Harvard-tinged types want to speed up the process. I call this ego accelerationism. Instead of doing one’s job and plugging along, something magnetic has to be cooked up and then marketed.
Harvard appears to be a source of interesting ideas about how to market. I do want to point out that this fine university used its deep knowledge resources to Jeffrey Epstein. Someone at a law enforcement conference mentioned that Mr. Epstein had an office or a personal workspace at Harvard. And why not? He donated money to some whiz bang project to figure out evolutionary dynamics. Sound familiar?
When one talks about a university education today, is it required to know about AI comets, where Heaven is, and the administrative judgment of having a brilliant thinker like Jeffrey Epstein on campus? Sure, absolutely.
Yikes. I think I know the answer. Yikes.
Stephen E Arnold, January 30, 2026
Rage Baiting Tim Cook And Sundar Pichai
January 30, 2026
Rage baiting makes the Internet go round and The Verge published an editorial taking aim at two of Big Tech’s leaders: “Tim Cook And Sundar Pichai Are Cowards.” Article writer Elizabeth Lopatto dubbed Cook and Pichai as “cowards,” because of some disgusting actions by X users. X users utilized Grok to make AI images that undressed women and minors. That’s not good.
Lopatto thought these actions would inspire Pichai and Cook to remove X from Google’s and Apple’s app stores. She claims that these two are too afraid of Elon Musk to remove X. Lopatto cites the developer guidelines for Google and Apple app stores. Both guidelines don’t allow these disgusting action.
An enraged Lopatto wrote that Pichai and Cook won’t remove X (despite the breech of guidelines), because they don’t want to upset a right-wing media ecosystem that Musk owns. Each of these Big Tech leaders have too much to lose in her summation:
“Cook’s Apple has a massive dependency on China, and smartphones, computers, and chips are currently exempt from the tariffs on China. Cook can present Donald Trump with as many golden gifts as he wants, but those tariffs don’t have to stay that way. Google’s Pichai is similarly weak. Trump has threatened Google numerous times over his placement in search results, and so far YouTube has managed to mostly avoid scrutiny over its content moderation policies because Pichai has been content to coddle Trump with promises that everything he does is the biggest thing in Google search history.”
She continues to claim these men “sold their principles for power” and “don’t even control their own companies.” Lopatto is correct that Pichai and Cook are hypocrites and so is everyone in Big Tech. It is concerning that AI algorithms are making degrading images of women and minors. It is 2026, and perhaps as the Verge works hard to become the industry standard for rock solid technology news and analysis, new intellectual paths just have to be clicked. Ooops. Sorry, I meant explored. My bad.
Whitney Grace, January 30, 2026
AI and the Cult of Personality
January 29, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
AI or smart software may be in a cluster of bubbles or a cluster of something. One thing is certain we now have a cult of personality emerging around software that makes humans expendable or a bit like the lucky worker who dragged megaliths from quarry to nice places near the Nile River.
Let me highlight a few of the people who have emerged as individuals who saw the future and decided it would be AI. These are people who know when a technology is the next big thing. Maybe the biggest next big thing to come along since fire or the wheel. Here are a few names:
Brett Adcock
Sam Altman
Marc Benioff
Chris Cox
Jeff Dean
Timnit Gebru
Ian Goodfellow
Demis Hassabis
Dan Hendrycks
Yann LeCun
Fei-Fei Li
Satya Nadella
Andrew Ng
Elham Tabassi
Steve Yun
I want to add another name to his list, which can be expanded far beyond the names I pulled off the top of my head. This is none other than Ed Zitron.

Thanks, Venice.ai. Good enough.
I know this because I read “Ed Zitron on Big tech, Backlash, Boom and Bust: AI Has Taught Us That People Are Excited to Replace Human Beings.” The write up says:
Zitron’s blunt, brash skepticism has made him something of a cult figure. His tech newsletter, Where’s Your Ed At, now has more than 80,000 subscribers; his weekly podcast, Better Offline, is well within the Top 20 on the tech charts; he’s a regular dissenting voice in the media; and his subreddit has become a safe space for AI sceptics, including those within the tech industry itself – one user describes him as “a lighthouse in a storm of insane hyper capitalist bullsh#t”.
I think it is classy to use a colloquial term for animal excrement in a major newspaper. I wonder, however, if this write up is more about what the writer perceives as wrong with big AI than admiration for a PR and marketing person?
The write up says:
Explaining Zitron’s thesis about why generative AI is doomed to fail is not simple: last year he wrote a 19,000-word essay, laying it out. But you could break it down into two, interrelated parts. One is the actual efficacy of the technology; the other is the financial architecture of the AI boom. In Zitron’s view, the foundations are shaky in both cases.
The impending failure of AI is based upon the fact that it is lousy technology; that is, it outputs incorrect information and hallucinates. Plus, the financial structure of the training, legal cases, pings, pipes, and plumbing is money thrown into a dumpster fire.
The article humanizes Mr. Zitron, pointing out:
Growing up in Hammersmith, west London, his parents were loving and supportive, Zitron says. His father was a management consultant; his mother raised him and his three elder siblings. But “secondary school was very bad for me, and that’s about as much as I’ll go into.” He has dyspraxia – a coordination disability – and he was diagnosed with ADHD in his 20s. “I think I failed every language and every science, and I didn’t do brilliant at maths,” he says. “But I’ve always been an #sshole over the details.”
Yes, another colloquialism. Anal issues perhaps?
The write up ends on a note that reminds me of good old Don Quixote:
He just wants to tell it like it is. “It’d be much easier to just write mythology and fan fiction about what AI could do. What I want to do is understand the truth.”
Several observations:
- I am not sure if the write up is about Mr. Zitron or the Guardian’s sense that high technology has burned Fleet Street and replaced it with businesses that offer AI services
- A film about Mr. Zitron does seem to be one important point in the write up. Will it be a TikTok-type of film or a direct-to-YouTube documentary with embedded advertising?
- AI is now the punching bag for those who are not into big tech, no matter what they say to their editors. Social media gang bangs are out of style. Get those AI people.
Net net: Amusing. I wonder if Mr. Beast will tackle the video opportunity.
Stephen E Arnold, January 29, 2026
Microsoft: Budgeting Data Centers How Exactly?
January 20, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Here’s how bean counters and MBA work. One gathers assumptions. Depending on the amount of time one has, the data collection can be done blue chip consulting style; that is, over weeks or months of billable hours. Alternatively, bean counters and MBA can sit in a conference room, talk, jot stuff on a white board, and let one person pull the assumptions into an Excel-type of spreadsheet. Then fill in the assumptions, some numbers based on hard data (unlikely in many organizations) and some guess-timates and check the “flow” of the numbers. Once the numbers have the right optics, reconvene in the conference room, talk, write on the white board, and the same lucky person gets to make the fixes. Once the flow is close enough for horse shoes, others can eyeball the numbers. Maybe a third party will be asked to “validate” the analysis? And maybe not?

Softies are working overtime on the AI data center budget. Thanks, Venice.ai. Good enough. Apologies to Copilot. Your output is less useful than Venice’s. Bummer.
We now know that Microsoft’s budgeting for its big beautiful build out of data centers will need to be reworked. Not to worry. If the numbers are off, the company can raise the price of an existing service or just fire however many people needed to free up some headroom. Isn’t this how Boeing-type companies design and build aircraft? (Snide comments about cutting corners to save money are not permitted. Thank you.)
How do “we” know this? I read in GeekWire (an estimable source I believe) this story: “Microsoft Responds to AI Data Center Revolt, Vowing to Cover Full Power Costs and Reject Local Tax Breaks.” I noted this passage about Microsoft’s costs for AI infrastructure:
The new plan, announced Tuesday morning [January 13, 2026) in Washington, D.C, includes pledges to pay the company’s full power costs, reject local property tax breaks, replenish more water than it uses, train local workers, and invest in AI education and community programs.
Yep, pledges. Just ask any religious institution about the pledge conversion ratio. Pledges are not cold, hard cash. Why are the Softies doing executive level PR about its plans to build environmentally friendly and family friendly data centers? People in fly over areas are not thrilled with increased costs for power and water, noise pollution from clever repurposed jet engines and possible radiation emission from those refurbed nuclear power plants in unused warships, and the general idea of watching good old empty land covered with football stadium sized tan structures.
Now back to the budget estimates for Microsoft’s data center investments. Did those bean counters include set asides for the “full power costs,” property taxes, water management, and black hole costs for “invest in AI education and community.”
Nope.
That means “pledges” are likely to be left fuzzy, defined on the fly, or just forgotten like Bob, the clever interface with precursor smart software. Sorry, Copilot Bob’s help and Clippy missed the mark. You may too for one reason: Apple and Google have teamed up in an even bigger way than before.
That brings me back to the bean counters at Microsoft, a uni shuttle bus full of MBAs, and a couple of railroad passenger cars filled with legal eagles. The money assumptions need a rethink.
Brad Smith, the Microsoft member of “leadership” blaming security breaches on 1,000 Russian hackers, wants this PR to work. The write up reports that Mr. Smith, the member of Microsoft leadership said:
Smith promised new levels of transparency… The companies that succeed with data centers in the long run will be the companies that have a strong and healthy relationship with local communities. Microsoft’s plan starts by addressing the electricity issue, pledging to work with utilities and regulators to ensure its electricity costs aren’t passed on to residential customers. Smith cited a new “Very Large Customers” rate structure in Wisconsin as a model, where data centers pay the full cost of the power they use, including grid upgrades required to support them.
And that’s not all. The pledge includes this ethically charged corporate commitment. I quote:
- “A 40% improvement in water efficiency by 2030, plus a pledge to replenish more water than it uses in each district where it operates. (Microsoft cited a recent $25 million investment in water and sewer upgrades in Leesburg, Va., as an example.)
- A new partnership with North America’s Building Trades Unions for apprenticeship programs, and expansion of its Datacenter Academy for operations training.
- Full payment of local property taxes, with no requests for municipal tax breaks.
- AI training through schools, libraries, and chambers of commerce, plus new Community Advisory Boards at major data center sites.”
I hear the background music. I think it is the double fugue or Kyrie in Mozart’s Requiem, but I may be wrong. Yes, I am. That is the sound track for the group reworking the numbers for Microsoft’s “beat Google” data center spend.
You can listen to Mozart’s Requiem on YouTube. My hunch is that the pledge is likely to be part of the Microsoft legacy. As a dinobaby, I would suggest that Microsoft’s legacy is blaming users for Microsoft’s security issues and relying on PR when it miscalculates [a] how people react to the excellent company’s moves and [b] its budget estimates for Copilot, the aircraft, the staff, the infrastructure, and the odds and ends.
Net net: Microsoft’s PR better be better than its AI budgeting.
Stephen E Arnold, January 20, 2026
Can AI Buzzwording Land a Job?
January 19, 2026
Since the application process was automated, job hunting has been a problem. It’s worsened with the implementation of AI. AI makes job hunting worse, because if your resume doesn’t include the correct buzzwords you won’t make it to the next round. To that effect, job hunters are packing their resume with the right words, but perspective employers are doing the same. ZDNet explains the hard knock reality of job hunting in an AI world: “AI Buzzwords Are Making The Job Hunt Harder – For Everyone.”
Packing resumes and job classifieds with AI buzzwords is a trend called “AI language inflation.” It’s a double edged sword. Employers are loading their job advertisements with buzzwords to be cutting edge (sometimes highlighting aspects that aren’t essential for job). Job hunters are then adding more AI buzzwords to get past the algorithms.
This means that job hunters must learn the different AI “flavors” and properly curate their resumes. Even the word AI is a buzzword:
“AI is used very loosely and almost like a buzzword. What I’m looking for is the appropriate use of the word AI. How well are people really starting to understand, more deeply, what is artificial intelligence or digital labor? Or how they’re using it, so it’s not just as a buzzword but day-to-day practical examples?”
If you do want to do the jargon dance, here’s a resource for you. The list is published from a company with a remarkably opaque name, Service Now. The helpful listing is “AI Terms Explained.” Service Now (does anyone or any thing deliver “service” now?) Here are three examples of what the list will deliver. Plus you can watch a little video about the “comprehensive guide.”
Artificial general intelligence (or AGI). Artificial General Intelligence (AGI) refers to an AI system that possesses a wide range of cognitive abilities, much like humans, enabling them to learn, reason, adapt to new situations, and devise creative solutions across various tasks and domains, rather than being limited to specific tasks as narrow AI systems are. [Note that you have to use “AI” a couple of times when defining “AGI.” That’s intellectually impressive even though Ms. Sperling told me in the 10th grade, don’t define a word by using that word in your definition. Obviously Ms. Sperling in 1959 would need to up her game in 2026.]
Here’s another definition from the service outfit Service Now which is serving you:
Explainability. Explainability refers to techniques that make AI model decisions and predictions interpretable and understandable to humans. [Note: Would someone at a Google or OpenAI-type company provide me with some information about the hallucination function? [Note: As a human (I think), providing incorrect, made up, or off point information is difficult for me to understand especially when I have to pay for wrongness.]
The final example from the serving me now Service Now listing of AI buzzwords that will definitely serve you well:
Natural Language Generation (or NLG). A subfield of AI that produces natural written or spoken language. [Note: What is “natural”? What is “language”? The explanation does not serve me well.]
This list contains more than 90 terms.
Several observations:
- Buzzwords won’t deliver the bacon
- Lists generated by AI are not particularly helpful
- Do not define AI, in case anyone asks you, by using the acronym AI or the word artificial.
Net net: I am skeptical about AI which is a utility function related to search and retrieval. I am also not sure about the “service” thing. Especially service “now.” I have to wait everywhere; for instance, getting a clear definition of AI.
Stephen E Arnold, January 19. 2025
Apple and Google: Lots of Nots, Nos, and Talk
January 15, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
This is the dinobaby, an 81 year old dinobaby. In my 60 plus year work career I have been around, in, and through what I call “not and no” PR. The basic idea is that one floods the zones with statements about what an organization will do. Examples range from “our Wi-Fi sniffers will not log home access point data” to “our AI service will not capture personal details” to “our security policies will not hamper usability of our devices.” I could go on, but each of these statements were uttered in meetings, in conference “hallway” conversations, or in public podcasts.
Thanks, Venice.ai. Good enough. See I am prevaricating. This image sucks. The logos are weird. GW looks like a wax figure.
I want to tell you that if the Nots and Nos identified in the flood of write ups about the Apple Google AI tie up immutable like Milton’s description of his God, the nots and nos are essentially pre-emptive PR. Both firms are data collection systems. The nature of the online world is that data are logged, metadata captured and mindlessly processed for a statistical signal, and content processed simply because “why not?”
Here’s a representative write up about the Apple Google nots and nos: “Report: Apple to Fine-Tune Gemini Independently, No Google Branding on Siri, More.” So what’s the more that these estimable firms will not do? Here’s an example:
Although the final experience may change from the current implementation, this partly echoes a Bloomberg report from late last year, in which Mark Gurman said: “I don’t expect either company to ever discuss this partnership publicly, and you shouldn’t expect this to mean Siri will be flooded with Google services or Gemini features already found on Android devices. It just means Siri will be powered by a model that can actually provide the AI features that users expect — all with an Apple user interface.”
How about this write up: “Official: Apple Intelligence & Siri To Be Powered By Google Gemini.”
Source details how Apple’s Gemini deal works: new Siri features launching in spring and at WWDC, Apple can finetune Gemini, no Google branding, and more
Let’s think about what a person who thinks the way my team does. Here are what we can do with these nots and nos:
- Just log everything and don’t talk about the data
- Develop specialized features that provide new information about use of the AI service
- Monitor the actions of our partners so we can be prepared or just pounce on good ideas captured with our “phone home” code
- Skew the functionality so that our partners become more dependent on our products and services; for example, exclusive features only for their users.
The possibilities are endless. Depending upon the incentives and controls put in place for this tie up, the employees of Apple and Google may do what’s needed to hit their goals. One can do PR about what won’t happen but the reality of certain big technology companies is that these outfits defy normal ethical boundaries, view themselves as the equivalent of nation states, and have a track record of insisting that bending mobile devices do not bend and that information of a personal nature is not cross correlated.
Watch the pre-emptive PR moves by Apple and Google. These outfits care about their worlds, not those of the user.
Just keep in mind that I am an old, very old, dinobaby. I have some experience in these matters.
Stephen E Arnold, January 15, 2025
Security Chaos: So We Just Live with Failure?
January 14, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read a write up that baffled me. The article appeared in what I consider a content marketing or pay to play publication. I may be wrong, but the content usually hits me as an infomercial. The story arresting my attention this morning (January 13, 2026) is “The 11 Runtime Attacks Breaking AI Security — And How CISOs Are Stopping Them.” I expected a how to. What did the write up deliver? Confusion and a question, “So we just give up?”
The article contains this cheerful statement from a consulting firm. Yellow lights flashed. I read this:
Gartner’s research puts it bluntly: “Businesses will embrace generative AI, regardless of security.” The firm found 89% of business technologists would bypass cybersecurity guidance to meet a business objective. Shadow AI isn’t a risk — it’s a certainty.
Does this mean that AI takes precedence over security?
The article spells out 11 different threats and provides solutions to each. The logic of the “stopping runtime attacks” with methods now available struck me as a remarkable suggestion.

The mice are the bad actors. Notice that the capable security system is now unable to deal with the little creatures. The realtime threats overwhelmed the expensive much hyped-cyber cat. Thanks, Venice.ai. Good enough.
Let’s look at three of the 11 threats and their solutions. Please, read the entire write up and make you own decision about the other eight problems presented and allegedly solved.
The first threat is called “multi turn crescendo attacks.” I had no idea what this meant when I read the phrase. That’s okay. I am a dinobaby and a stupid one at that. It turns out that this fancy phrase means that a bad actor plans prompts that work incrementally. The AI system responds. Then responds to another weaponized prompt. Over a series of prompts, the bad actor gets what he or she wants out of the system. ChatGPT and Gemini are vulnerable to this orchestrated prompt sequence. What’s the fix? I quote:
Stateful context tracking, maintaining conversation history, and flagging escalation patterns.
Really? I am not sure that LLM outfits or licensees have the tools and the technical resources to implement these linked functions. Furthermore, in the cat and mouse approach to security, the mice are many. The find and react approach is not congruent with runtime threats.
Another threat is synthetic identify fraud. The idea is that AI creates life like humans, statements, and supporting materials. For me, synthetic identities are phishing attacks on steroids. People are fooled by voice, video and voice, email, and SMS attacks. Some companies hire people who are not people because AI technology advances in real time. How does one fix this? The solution is, and I quote:
Multi-factor verification incorporating behavioral signals beyond static identity attributes, plus anomaly detection trained on synthetic identity patterns.
But when AI synthetic identity technology improves how will today’s solutions deal with the new spin from bad actors? Answer: They have not, cannot, and will not with the present solutions.
The last threat I will highlight is obfuscation attacks or fiddling with AI prompts. Developers of LLMs are in a cat and mouse game. Right now the mice are winning for one simple reason: The wizards developing these systems don’t have the perspective of bad actors. LLM developers just want to ship and slap on fixes that stop a discovered or exposed attack vector. What’s the fix? The solution, and I quote, is:
Wrap retrieved data in delimiters, instructing the model to treat content as data only. Strip control tokens from vector database chunks before they enter the context window.
How does this work when new attacks occur and are discovered? Not very well because the burden falls upon the outfit using the LLM. Do licensees have appropriate technical resources to “wrap retrieved data in delimiters” when the exploit may just work but no one is exactly sure why. Who knew that prompts in iambic pentameter or gibberish with embedded prompts ignore “guardrails”? The realtime is the killer. Licensees are not equipped to react and I am not confident smart AI cyber security systems are either.
Net net: Amazon Web Services will deal with these threats. Believe it or not. (I don’t believe it, but your mileage may vary.)
Stephen E Arnold, January 14, 2026
So What Smart Software Is Doing the Coding for Lagging Googlers?
January 13, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “Google Programmer Claims AI Solved a Problem That Took Human Coders a Year.” I assume that I am supposed to divine that I should fill in “to crack,” “to solve,” or “to develop”? Furthermore, I don’t know if the information in the write up is accurate or if it is a bit of fluff devised by an art history major who got a job with a PR firm supporting Google.
I like the way a Googler uses Anthropic to outperform Googlers (I think). Anyway, thanks, ChatGPT, good enough.
The company’s commitment to praise its AI technology is notable. Other AI firms toss out some baloney before their “leadership” has a meeting with angry investors. Google, on the other hand, pumps out toots and confetti with appalling regularity.
This particular write up states:
Paul [a person with inside knowledge about Google’s AI coding system] passed on secondhand knowledge from "a Principal Engineer at Google [that] Claude Code matched 1 year of team output in 1 hour."
Okay, that’s about as unsupported an assertion I have seen this morning. The write up continues:
San Francisco-based programmer Jaana Dogan chimed in, outing herself as the Google engineer cited by Paul. "We have been trying to build distributed agent orchestrators at Google since last year," she commented. "There are various options, not everyone is aligned … I gave Claude Code a description of the problem, it generated what we built last year in an hour."
So the “anonymous” programmer is Jaana Dogan. She did not use Opal, Google’s own smart software. Ms. Dogan used the coding tools from Anthropic? Is this what the cited passage is telling me?
Let’s think about these statements for a moment:
- Perhaps Google’s coders were doom scrolling, playing Foosball, or thinking about how they could land a huge salary at Meta now that AI staff are allegedly jump off the good ship Zuck Up? Therefore, smart software could indeed produce code that took the Googlers one year to produce. Googlers are not necessarily productive unless it is in the PR department or the legal department.
- Is Google’s own coding capability so lousy that Googlers armed with Opal and other Googley smart software could not complete a project with software Google is pitching as the greatest thing since Google landed a Nobel Prize?
- Is the Anthropic software that much better than Google’s or Microsoft’s smart coding system? My experience is that none of these systems are that different from one another. In fact, I am not sure that new releases are much better than the systems we have tested over the last 12 months.
The larger question is, “Why does Google have to promote its approach to AI so relentlessly?” Why is Google using another firm’s smart software and presenting its use in a confusing way?
My answer to both these questions is, “Google has a big time inferiority complex. It is as if the leadership of Google believes that grandma is standing behind them when they were 12 years old. When attention flags doing homework, grandma bats the family loser with her open palm. “Do better. Concentrate,” she snarls at the hapless student.
Thus, PR emanates PR that seems to be about its own capabilities and staff while promoting a smart coding tool from another firm. What’s clear is that the need for PR coverage outpaces common sense and planning. Google is trying hard to convince people that AI is the greatest thing since ping pong tables at the office.
Stephen E Arnold, January 13, 2025
The Lineage of Bob: Microsoft to IBM
January 8, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Product names have interested me. I am not too clever. I started “Beyond Search” in 2008. The name wasn’t my idea. At lunch someone said, “We do a lot of search stuff.” Another person (maybe Shakes) said, “Let’s go beyond search, okay.” That was it. I was equally uninspired when I named our new information service “Telegram Notes.” One of my developers said, “What’s with these notecards?” I replied, “Those are my Telegram notes.” There it was. The name.
I wrote a semi-humorous for me post about Microsoft Cowpilot. Oh, sorry, I meant Copilot. The write up featured a picture of a cow standing in a bit of a mess of its own making. Yeah, hah hah. I referenced New Coke and a couple of other naming decisions that just did not work out.
In October 2025, just when I thought the lawn mowing season was ending because the noise drives me bonkers, I read about “Project Bob.” If you have not heard of it, this is an IBM initiative or what IBM calls a product. I know that IBM is a consulting and integration outfit, but this Bob is a product. IBM said when the leaves were choking my gutters:
IBM Project Bob isn’t just another coding assistant—it’s your AI development partner. Designed to work the way you do, Project Bob adapts to your workflow from design to deployment. Whether you’re modernizing legacy systems or building something entirely new, Bob helps you ship quality code faster. With agentic workflows, built-in security and enterprise-grade deployment flexibility, Bob doesn’t just automate tasks—it transforms the entire software development lifecycle. From modernization projects to new application builds, Bob makes development smarter, safer and more efficient. — Neel Sundares, General Manager, Automation and AI, IBM
I gave a couple of lectures about this time. In one of them I illustrated AI coding using Anthropic Claude. The audience yawned. Getting some smart software to write simple scripts was not exactly a big time insight.
But Bob, according to Mr. Sundares, General Manager of Automation and AI is different. He wrote:
Think of Bob as your AI-first IDE and pair developer: a tool that understands your intent, your codebase and your organization’s standards.
- Understands your intent: Switch into Architect Mode to scope and design complex systems or collaborate in Code Mode to move fast and iterate efficiently.
- Understands your repo: Bob reads your codebase, modernizes frameworks, refactors at scale and re-platforms with full context.
- Understands your standards: With built-in expertise for FedRAMP, HIPAA and PCI, Project Bob helps you deliver secure, production-ready code every time.
The Register, a UK online publication, wrote:
Security researchers at PromptArmor have been evaluating Bob prior to general release and have found that IBM’s “AI development partner” can be manipulated into executing malware. They report that the CLI is vulnerable to prompt injection attacks that allow malware execution and that the IDE is vulnerable to common AI-specific data exfiltration vectors.
Bob, if the Register is on the money, has some exploitable features too.
Okay, no surprise.
What is interesting is that IBM chose the name Bob for this “product”, the one with exploitable features.
Does anyone remember Microsoft Bob? I do. My recollection is that it presented a friendly, cartoon like interface. The objects in the room represented Microsoft applications. For example, click on the paper and word processing would open. Want to know the time? Click on the clock. If you did not know what to do, you could click on the dog. That was the help. The dog would guide you.
Screenshot from Habr.ru, but I am sure the image is the property of the estimable Microsoft Corporation. I provide this for its educational and inspirational value.
Rover was the precursor to Clippy I think. And Clippy yielded to Cowpilot. Ooops. Sorry, I meant to type Copilot. My bad. Bob died after a year, maybe less. Bill Gates seemed okay with Bob, and he was more than okay with its leadership as I recall. The marriage lasted longer than Bob.
So what?
First, I find it remarkable that IBM would use the product name “Bob” for the firm’s AI coding assistant. That’s what happens when one relies on young people and leadership unfamiliar with the remarkable Bob from Microsoft. Some of these creatives probably don’t know how to use a mimeograph machine either.
Second, apply the name Bob to an AI service which seems, according to the Register article cited above, has some flaws or as some bad actors might say “features.” I wonder if someone on the IBM Bob marketing team knew the IBM AI product would face some headwinds and was making a sly joke. IBM leadership has a funny bone, but if the reference does not compute, the joke just sails on by.
Third, Mr. Neel Sundares, General Manager, Automation and AI, IBM, said: “The future of AI-powered coding isn’t years away—it’s already here.” That’s right, sir. Anthropic, ChatGPT, Google, and the Chinese AI models output code. Today, once can orchestrate across these services. One can build agents using one of a dozen different services. Yep, it’s already here.
Net net: First, BackRub became Google and then Alphabet. Facebook morphed into Meta which now means AI yiiii AI. Karen became Jennifer. Now IBM embraces Bob. Watson is sad, very sad.
Stephen E Arnold, January 8, 2026
The Branding Genius of Cowpilot: New Coke and Jaguar Are No Longer the Champs
January 6, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
We are beavering away on our new Telegram Notes series. I opened one of my newsfeeds and there is was. Another gem of a story from PCGamer. As you may know, PCGamer inspired this bit of art work from the and AI system. I thought I would have a “cow” when I saw. Here’s the visual gag again:

Is that cow standing in its output? Could that be a metaphor for “cowpilot” output? I don’t know. Qwen, like other smart software can hallucinate. Therefore, I see a semi-sacred bovine standing in a muddy hole. I do not see AI output. If you do, I am not sure you are prepared for the contents about which I shall comment; that is, the story from PCGamer called “In a Truly Galaxy-Brained Rebrand, Microsoft Office Is Now the Microsoft 365 Copilot App, but Copilot Is Also Still the Name of the AI Assistant.”
I thought New Coke was an MBA craziness winner. I thought the Jaguar rebrand was an even crazier MBA craziness winner. I thought the OpenAI smart software non mobile phone rebranding effort that looks like a 1950s dime store fountain pen was in the running for crazy. Nope. We have a a candidate for the rebranding that tops the leader board.
Microsoft Office is now the M3CA or Microsoft 365 Copilot App.
The PCGamer write up says:
Copilot is the app for launching the other apps, but it’s also a chatbot inside the apps.
Yeah, I have a few. But what else does PCGamer say in this write up?

An MBA study group discusses the branding strategy behind Cowpilot. Thanks, Qwen. Nice consistent version of the heifer.
Here’s a statement I circled:
Copilot is, notably, a thing that already exists! But as part of the ongoing effort to juice AI assistant usage numbers by making it impossible to not use AI, Microsoft has decided to just call its whole productivity software suite Copilot, I guess.
Yep, a “guess.” That guess wording suggests that Microsoft is simply addled. Why name a product that causes a person to guess? Not even Jaguar made people “guess” about a weird square car painted some jazzy semi hip color. Even the Atlanta semi-behemoth slapped “new” Coke on something that did not have that old Coke vibe. Oh, both of these efforts were notable. I even remember when the brain trust at NBC dumped the peacock for a couple of geometric shapes. But forcing people to guess? That’s special.
Here’s another statement that caught my dinobaby brain:
Should Microsoft just go ahead and rebrand Windows, the only piece of its arsenal more famous than Office, as Copilot, too? I do actually think we’re not far off from that happening. Facebook rebranded itself “Meta” when it thought the metaverse would be the next big thing, so it seems just as plausible that Microsoft could name the next version of Windows something like “Windows with Copilot” or just “Windows AI.” I expect a lot of confusion around whatever Office is called now, and plenty more people laughing at how predictably silly this all is.
I don’t agree with this statement. I don’t think “silly” captures what Microsoft is attempting to do. In my experience, Microsoft is a company that bet on the AI revolution. That turned into a cost sink hole. Then AI just became available. Suddenly Microsoft has to flog its business customers to embrace not just Azure, Teams, and PowerPoint. Microsoft has to make it so users of these services have to do Copilot.
Take your medicine, Stevie. Just like my mother’s approach to giving me cough medicine. Take your medicine or I will nag you to your grave. My mother haunting me for the rest of my life was a bummer thought. Now I have the Copilot thing. Yikes, I have to take my Copilot medicines whether I want to or not. That’s not “silly.” This is desperation. This is a threat. This is a signal that MBA think has given common sense a pink slip.
Stephen E Arnold, January 6, 2026

