Apple Google Prediction: Get Real, Please

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Prediction is a risky business. I read “No, Google Gemini Will Not Be Taking Over Your iPhone, Apple Intelligence, or Siri.” The write up asserts:

Apple is licensing a Google Gemini model to help make Apple Foundation Models better. The deal isn’t a one-for-one swap of Apple Foundation Models for Gemini ones, but instead a system that will let Apple keep using its proprietary models while providing zero data to Google.

Yes, the check is in the mail. I will jump on that right now. Let’s have lunch.

image

Two giant creatures find joy in their deepening respect and love for one another. Will these besties step on the ants and grass under their paws? Will they leave high-value information on the shelf? What a beautiful relationship! Will these two get married? Thanks, Venice.ai. Good enough.

Each of these breezy statements sparks a chuckle in those who have heard direct statements and know that follow through is unlikely.

The article says:

Gemini is not being weaved into Apple’s operating systems. Instead, everything will remain Apple Foundation Models, but Gemini will be the "foundation" of that.

Yep, absolutely. The write up presents this interesting assertion:

To reiterate: everything the end user interacts with will be Apple technology, hosted on Apple-controlled server hardware, or on-device and not seen by Apple or anybody else at all. Period.

Plus, Apple is a leader in smart software. Here’s the article’s presentation of this interesting idea:

Apple has been a dominant force in artificial intelligence development, regardless of what the headlines and doom mongers might say. While Apple didn’t rush out a chatbot or claim its technology could cause an apocalypse, its work in the space has been clearly industry leading. The biggest problem so far is that the only consumer-facing AI features from Apple have been lackluster and got a tepid consumer response. Everything else, the research, the underlying technology, the hardware itself, is industry leading.

Okay. Several observations:

  1. Apple and Google have achieved significant market share. A basic rule of online is that efficiency drives the logic of consolidation. From my point of view, we now have two big outfits, their markets, their products, and their software getting up close and personal.
  2. Apple and Google may not want to hook up, but the financial upside is irresistible. Money is important.
  3. Apple, like Telegram, is taking time to figure out how to play the AI game. The approach is positioned as a smart management move. Why not figure out how to keep those users within the friendly confines of two great companies? The connection means that other companies just have to be more innovative.

Net net: When information flows through online systems, metadata about those actions presents an opportunity to learn more about what users and customers want. That’s the rationale for leveraging the information flows. Words may not matter. Money, data, and control do.

Stephen E Arnold, January 13, 2026

So What Smart Software Is Doing the Coding for Lagging Googlers?

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read “Google Programmer Claims AI Solved a Problem That Took Human Coders a Year.” I assume that I am supposed to divine that I should fill in “to crack,” “to solve,” or “to develop”? Furthermore, I don’t know if the information in the write up is accurate or if it is a bit of fluff devised by an art history major who got a job with a PR firm supporting Google.

image

I like the way a Googler uses Anthropic to outperform Googlers (I think). Anyway, thanks, ChatGPT, good enough.

The company’s commitment to praise its AI technology is notable. Other AI firms toss out some baloney before their “leadership” has a meeting with angry investors. Google, on the other hand, pumps out toots and confetti with appalling regularity.

This particular write up states:

Paul [a person with inside knowledge about Google’s AI coding system] passed on secondhand knowledge from "a Principal Engineer at Google [that] Claude Code matched 1 year of team output in 1 hour."

Okay, that’s about as unsupported an assertion I have seen this morning. The write up continues:

San Francisco-based programmer Jaana Dogan chimed in, outing herself as the Google engineer cited by Paul. "We have been trying to build distributed agent orchestrators at Google since last year," she commented. "There are various options, not everyone is aligned … I gave Claude Code a description of the problem, it generated what we built last year in an hour."

So the “anonymous” programmer is Jaana Dogan. She did not use Opal, Google’s own smart software. Ms. Dogan used the coding tools from Anthropic? Is this what the cited passage is telling me?

Let’s think about these statements for a moment:

  1. Perhaps Google’s coders were doom scrolling, playing Foosball, or thinking about how they could land a huge salary at Meta now that AI staff are allegedly jump off the good ship Zuck Up? Therefore, smart software could indeed produce code that took the Googlers one year to produce. Googlers are not necessarily productive unless it is in the PR department or the legal department.
  2. Is Google’s own coding capability so lousy that Googlers armed with Opal and other Googley smart software could not complete a project with software Google is pitching as the greatest thing since Google landed a Nobel Prize?
  3. Is the Anthropic software that much better than Google’s or Microsoft’s smart coding system? My experience is that none of these systems are that different from one another. In fact, I am not sure that new releases are much better than the systems we have tested over the last 12 months.

The larger question is, “Why does Google have to promote its approach to AI so relentlessly?” Why is Google using another firm’s smart software and presenting its use in a confusing way?

My answer to both these questions is, “Google has a big time inferiority complex. It is as if the leadership of Google believes that grandma is standing behind them when they were 12 years old. When attention flags doing homework, grandma bats the family loser with her open palm. “Do better. Concentrate,” she snarls at the hapless student.

Thus, PR emanates PR that seems to be about its own capabilities and staff while promoting a smart coding tool from another firm. What’s clear is that the need for PR coverage outpaces common sense and planning. Google is trying hard to convince people that AI is the greatest thing since ping pong tables at the office.

Stephen E Arnold, January 13, 2025

Fortune Magazine Is Not Hiding Its AI Skepticism

January 12, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Fortune Magazine appears to be less fearful of expressing its AI skepticism. However, instead of pointing out that the multiple cash fueled dumpster fires continue to burn, Fortune Magazine focuses on an alleged knock on effect of smart software.

AI Layoffs Are Looking More and More Like Corporate Fiction That’s Masking a Darker Reality, Oxford Economics Suggests” uses a stalking horse to deliver the news. The write up reports:

“firms don’t appear to be replacing workers with AI on a significant scale,” suggesting instead that companies may be using the technology as a cover for routine headcount reductions.

The idea seems to be a financially acceptable way to dump people and get the uplift by joining the front runners in smart use of artificial intelligence.

Fortune’s story blows away this smoke screen.

image

Are you kidding, you geezer? AI is now running the show until the part-time, sub-minimum wage folks show up at 1 am. Thanks, Venice.ai. Good enough.

The write up says:

The primary motivation for this rebranding of job cuts appears to be investor relations. The report notes that attributing staff reductions to AI adoption “conveys a more positive message to investors” than admitting to traditional business failures, such as weak consumer demand or “excessive hiring in the past.” By framing layoffs as a technological pivot, companies can present themselves as forward-thinking innovators rather than businesses struggling with cyclical downturns.

The write points out:

While AI was cited as the reason for nearly 55,000 U.S. job cuts in the first 11 months of 2025—accounting for over 75% of all AI-related cuts reported since 2023—this figure represents a mere 4.5% of total reported job losses…. AI-related job losses are still relatively limited.

True to form, the Fortune article tries hard to not ruffle feathers. The write up says:

recent data from the Bureau of Labor Statistics confirms that the “low-hire, low-fire” labor market is morphing into a “jobless expansion,” KPMG chief economist Diane Swonk previously told Fortune‘s Eva Roytburg.

Yep, that’s clear.

Several observations are warranted:

  1. Companies are dumping people to cut costs. We have noticed this across industries from outfits like Walgreen’s to Fancy Dan operations like American Airlines.
  2. AI is becoming an easy way to herd people over 40 into AI training classes and using class performance to winnow the herd. If one needs to replace an actual human, check out India’s work-from-Bangalore options.
  3. The smoke screen is dissipating. What will the excuse be then?

Net net: The true believers in AI created a work related effect that few want to talk about openly. That’s why we get the “low hire, low fire” gibberish. Nice work, AI.

Stephen E Arnold, January 12, 2026

Dell Reveals the Future of AI for Itself

January 12, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I was flipping through my Russian technology news feed and spotted an interesting story. “Consumers Don’t Buy Devices Because They Have AI. Dell Has Admitted That AI in Products Can Confuse Customers.” Yep, Russian technology media pays attention to AI signals from big outfits.

The write up states:

the company admits that at least for now this is not particularly important for users.

The Russian article then quotes from a Dell source:

You’ll notice one thing: we didn’t prioritize artificial intelligence as our primary goal when developing our products. So there’s been some shift from a year ago when we focused entirely on AI PCs. We are very focused on realizing the power of artificial intelligence in devices — in fact, all the devices we announce use a neural processor —, but over the course of this year we have realized that consumers are not buying devices because of the presence of AI. In fact, I think AI is more likely to confuse them than to help them understand a particular outcome.

image

Good enough, ChatGPT.

The chatter about an AI bubble notwithstanding, this Russian news item probes an important issue. AI may not be a revolution. It is “confusing” to some computer customers. The true believers are the ones writing checks to fund the engineering and plumbing required to allow an inanimate machine behave like a human. The word “confusing” is an important one. The messages about smart software don’t make sense to some people.

Dell, to its credit, listened to its customers and changed its approach. The AI goodness is in the device, but the gizmo is presented as a computer that a user can, confused or not, use to check email, write a message, and watch doom scroll by.

Let’s look at this from a different viewpoint. Google and Microsoft want to create AI operating systems. The decade old or older bits of software plumbing have to be upgraded. If the future is smart software, then the operating systems have to be built on smart software. To the believers, the need to AI everything is logical and obvious.

If we look at it from the point of view of a typical Dell customer, the AI jabber is confusing. What’s confusing mean? To me, confusing means unclear. AI marketing is definitely that. I am not sure I understand how typing a query and getting a response is not presented as “search and retrieval.” AI is also bewildering. I have watched a handful of YouTube AI explainer videos. I think I understand, but the reality for me is that AI seems to be a collection of methods developed over the last couple hundred years integrated to index text and output probabilistic sequences. Some make sense to an eighth grader wanting help with a 200 word paragraph about the Lincoln-Douglas debates. However, it wou8ld be difficult for the same kid to get information about Honest Abe’s sleeping with a guy for years. Yep, baffling. Explaining to a judge why an AI system made up case citations to legal actions that did  not take place is not just mystifying. The use of AI costs the lawyer money, credibility, and possibly the law license. Yep, puzzling.

Thus, an AI enabled Dell laptop doesn’t make sense to some consumers. Their child needs a laptop to do homework. What’s with the inclusion of AI. AI is available everywhere. Why double up on AI? Dell sidesteps the issue by focusing on its computers as computers.

Several observations are warranted:

  1. The AI shift at Dell is considered “news” in Russia. In the US, I am not sure how many people will get the Dell message. Maybe someone on TikTok or Reels will cover the Dell story?
  2. The Google- and Microsoft-type companies don’t care about Dell. These outfits are inventing the future. The firms are spending billions and now dumping staff to help pay for the vision of their visionaries. If it doesn’t work, these people will join the lawyers caught using made up information working as servers at the local Rooster’s chicken joint.
  3. The idea that “if they think it, the ‘it’ will become reality is fueling the AI boom. Stoked on the sci-fi novels consumed when high school students, the wizards in the AI game are convinced they can deliver smart software. Conviction is useful. However, a failure to deliver will be interesting to watch… from a distance.

Net net: Score one for Dell. No points for Google or Microsoft. Consumers are in the negative column. They are confused and there is one thing that the US economy abhors is a bewildered person. Believe or be gone.

Stephen E Arnold, January 12, 2026

Just Train AI with AI Output: What Could Go Wrong?

January 9, 2026

AI is still dumb technology and needs to be trained to improve. Unfortunately AI training datasets are limited. Patronus AI claims it has a solution to training problem and the news is told on VentureBeat in the article, “AI Agents Fail 63% Of The Time On Complex Tasks. Patronus AI Says Its New ‘Living’ Training Worlds Can Fix That.” Patronus AI is a new AI startup with backing from Datadog and Lightspeed Venture Partners.

The company’s newest project is called “Generative Simulators” that creates simulated environments that continuously generate new challenges for AI algorithms to evaluate. AI Patronus could potentially be a critical tool for the AI industry. Research discovered that AI algorithms with a 1% error rate per step compound a 63% chance of failure.

Patronus AI explains that traditional datasets and measurements are like standardized tests: “they measure specific capabilities at a fixed point in time but struggle to capture the messy, unpredictable nature of real work.” The new Generative Simulators produces environments and assignments that adapt based on how the algorithm responds:

“The technology builds on reinforcement learning — an approach where AI systems learn through trial and error, receiving rewards for correct actions and penalties for mistakes. Reinforcement learning is an approach where AI systems learn to make optimal decisions by receiving rewards or penalties for their actions, improving through trial and error. RL can help agents improve, but it typically requires developers to extensively rewrite their code. This discourages adoption, even though the data these agents generate could significantly boost performance through RL training.”

Patronus AI said that training has improved AI algorithm’s task completion by 10-20%. The company also says that Big Tech can’t build all of their AI training tools in house because the amount of specialized training needed for niche fields is infinite. It’s a natural place for third party companies like Patronus AI.

Patronus AI founds its niche and is cashing in! But that failure rate? No problem.

Whitney Grace, January 9, 2026

The Lineage of Bob: Microsoft to IBM

January 8, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Product names have interested me. I am not too clever. I started “Beyond Search” in 2008. The name wasn’t my idea. At lunch someone said, “We do a lot of search stuff.” Another person (maybe Shakes) said, “Let’s go beyond search, okay.” That was it. I was equally uninspired when I named our new information service “Telegram Notes.” One of my developers said, “What’s with these notecards?” I replied, “Those are my Telegram notes.” There it was. The name.

I wrote a semi-humorous for me post about Microsoft Cowpilot. Oh, sorry, I meant Copilot. The write up featured a picture of a cow standing in a bit of a mess of its own making. Yeah, hah hah. I referenced New Coke and a couple of other naming decisions that just did not work out.

In October 2025, just when I thought the lawn mowing season was ending because the noise drives me bonkers, I read about “Project Bob.” If you have not heard of it, this is an IBM initiative or what IBM calls a product. I know that IBM is a consulting and integration outfit, but this Bob is a product. IBM said when the leaves were choking my gutters:

IBM Project Bob isn’t just another coding assistant—it’s your AI development partner. Designed to work the way you do, Project Bob adapts to your workflow from design to deployment. Whether you’re modernizing legacy systems or building something entirely new, Bob helps you ship quality code faster. With agentic workflows, built-in security and enterprise-grade deployment flexibility, Bob doesn’t just automate tasks—it transforms the entire software development lifecycle. From modernization projects to new application builds, Bob makes development smarter, safer and more efficient. — Neel Sundares, General Manager, Automation and AI, IBM

I gave a couple of lectures about this time. In one of them I illustrated AI coding using Anthropic Claude. The audience yawned. Getting some smart software to write simple scripts was not exactly a big time insight.

But Bob, according to Mr. Sundares, General Manager of Automation and AI is different. He wrote:

Think of Bob as your AI-first IDE and pair developer: a tool that understands your intent, your codebase and your organization’s standards.

  • Understands your intent: Switch into Architect Mode to scope and design complex systems or collaborate in Code Mode to move fast and iterate efficiently.
  • Understands your repo: Bob reads your codebase, modernizes frameworks, refactors at scale and re-platforms with full context.
  • Understands your standards: With built-in expertise for FedRAMP, HIPAA and PCI, Project Bob helps you deliver secure, production-ready code every time.

The Register, a UK online publication, wrote:

Security researchers at PromptArmor have been evaluating Bob prior to general release and have found that IBM’s “AI development partner” can be manipulated into executing malware. They report that the CLI is vulnerable to prompt injection attacks that allow malware execution and that the IDE is vulnerable to common AI-specific data exfiltration vectors.

Bob, if the Register is on the money, has some exploitable features too.

Okay, no surprise.

What is interesting is that IBM chose the name Bob for this “product”, the one with exploitable features.

Does anyone remember Microsoft Bob? I do. My recollection is that it presented a friendly, cartoon like interface. The objects in the room represented Microsoft applications. For example, click on the paper and word processing would open. Want to know the time? Click on the clock. If you did not know what to do, you could click on the dog. That was the help. The dog would guide you.

image

Screenshot from Habr.ru, but I am sure the image is the property of the estimable Microsoft Corporation. I provide this for its educational and inspirational value.

Rover was the precursor to Clippy I think. And Clippy yielded to Cowpilot. Ooops. Sorry, I meant to type Copilot. My bad. Bob died after a year, maybe less. Bill Gates seemed okay with Bob, and he was more than okay with its leadership as I recall. The marriage lasted longer than Bob.

So what?

First, I find it remarkable that IBM would use the product name “Bob” for the firm’s AI coding assistant. That’s what happens when one relies on young people and leadership unfamiliar with the remarkable Bob from Microsoft. Some of these creatives probably don’t know how to use a mimeograph machine either.

Second, apply the name Bob to an AI service which seems, according to the Register article cited above, has some flaws or as some bad actors might say “features.” I wonder if someone on the IBM Bob marketing team knew the IBM AI product would face some headwinds and was making a sly joke. IBM leadership has a funny bone, but if the reference does not compute, the joke just sails on by.

Third, Mr. Neel Sundares, General Manager, Automation and AI, IBM, said: “The future of AI-powered coding isn’t years away—it’s already here.” That’s right, sir. Anthropic, ChatGPT, Google, and the Chinese AI models output code. Today, once can orchestrate across these services. One can build agents using one of a dozen different services. Yep, it’s already here.

Net net: First, BackRub became Google and then Alphabet. Facebook morphed into Meta which now means AI yiiii AI. Karen became Jennifer. Now IBM embraces Bob. Watson is sad, very sad.

Stephen E Arnold, January 8, 2026

OpenCode: A Little Helper for Good Guys and the Obverse

January 7, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I gave a couple of talks late in 2025 to cyber investigators in one of those square, fly-over states. In those talks, I showed code examples. Some were crafted by the people on my team and a complete Telegram app was coded by Anthropic Claude. Yeah, the smart software needed some help. Telegram bots are not something lots of Claude users whip up. But the system worked.

image

Here I am reading the OpenCode “privacy” document. I look pretty good when Venice.ai converts me from an old dinobaby to a young dinobaby.

One of my team called my attention to OpenCode. The idea is to make it easier than ever for a good actor or a not-so-good actor to create scripts, bots, and functional applications. OpenCode is an example of what I call “meta ai”; that is, the OpenCode service is a wrapper that allows a lot of annoying large language model operations, library chasing, and just finding stuff to take place under one roof. If you are a more hip cat than I, you would probably say that OpenCode is a milestone on the way to a point and click dashboard to make writing goodware and malware easier and more reliable than previously possible. I will let you ponder the implications of this statement.

According to the organization:

OpenCode is an open source agent that helps you write and run code with any AI model. It’s available as a terminal-based interface, desktop app, or IDE extension.

The sounds as if an AI contributed to the sentences. But that’s to be expected. The organization is OpenCode.ai.

The organization says:

OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen*, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI etc. You can even connect your local models. [*Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers, use validated models that work.]

The system is open source. As of January 7, 2026, it is free to use. You will have to cough up money if you want to use OpenCode with the for fee smart software.

The OpenCode Web site makes a big deal about privacy. You can find about 10,000 words explaining what the developers of the system bundle in their “privacy” write up. It is a good idea to read the statement. It includes some interesting features; for example:

  1. Accepting the privacy agreement allows home buyers to be data recipients
  2. Fuzzy and possibly contradictory statements about user data sales
  3. Continued use of the service means you accept terms which can be changed.

I won’t speculate on how a useful service needs a long and somewhat challenging “privacy” statement. But this is 2026. I still can’t figure out why home buyers are involved, but I am a clueless dinobaby. Remember there are good actors and the other type too.

Stephen E Arnold, January 7, 2026

Meta 2026: Grousing Baby Dinobabies and Paddling Furiously

January 7, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I am an 81 year old dinobaby. I get a kick out of a baby dinobaby complaining about young wizards. Imagine how that looks to me. A pretend dinobaby with many years to rock and roll complaining about how those “young people” are behaving. What a hoot!

image

A dinobaby explains to a young, brilliant entrepreneur, “You are clueless.” My response: Yeah, but who has a job? Thanks, Qwen. Good enough.

Why am I thinking about age classification of dinobabies? I read  “Former Meta Scientist Says Mark Zuckerberg’s New AI Chief Is ‘Young’ And ‘Inexperienced’—Warns ‘Lot Of People’ Who Haven’t Yet Left Meta ‘Will Leave’.” Now this is a weird newsy story or a story presented as a newsy release by an outfit called Benzinga. I don’t know much about Benzinga. I will assume it is a version of the estimable Wall Street Journal or the more estimable New York Times. With that nod to excellence in mind, what is this write up about?

Answer: A staff change and what I call departure grief. People may hate their job. However, when booted from that job, a problem looms. No matter how important a person’s family, no matter how many technical accolades a person has accrued, and no matter the sense of self worth — the newly RIFed, terminated, departed, or fired feels bad.

Many Xooglers fume online after losing their status as Googlers. These essays are amusing to me. Then when Mother Google kicks them out of the quiet pod, the beanbag, or the table tennis room — these people fume. I think that’s what this Benzinga “zinger” of a write up conveys.

Let’s take a quick look, shall we?

First, the write up reports that the French-born Yann LeCun is allegedly 65 years old. I noted this passage about Alexandr [sic] Wang is the top dog in Meta’s Superintelligence Labs (MSL) or “MISSILE” I assume. That’s quite a metaphor. Missiles are directed or autonomous. Sometimes they work and sometimes they explode at wedding parties in some countries. Whoops. Now what does the baby dinobaby Mr. LeCun say about the 28 year old sprout Alexandr (sic) Wang, founder of Scale AI. Keep in mind that the genius Mark Zuckerberg paid $14 billion dollars for this company in the middle of 2025.

Alexandr Wang is intelligent and learns quickly, but does not yet grasp what attracts — or alienates — top researchers.

Okay, we have a baby dinobaby complaining about the younger generation. Nothing new here except that Mr. Wang is still employed by the genius Mark Zuckerberg. Mr. LeCun is not as far as I know.

Second, the article notes:

According to LeCun, internal confidence eroded after Meta was criticized for allegedly overstating benchmark results tied to Llama 4. He said the controversy angered Zuckerberg and led him to sideline much of Meta’s existing generative AI organization.

And why? According the zinger write up:

LLMs [are] a dead end.

But was Mr. LeCun involved in these LLMs and was he tainted by the failure that appears to have sparked the genius Mark Zuckerberg to pay $14 billion for an indexing and content-centric company? I would assume that the answer is, “Yep, Mr. LeCun was in his role for about 13 years.” And the result of that was a “dead end.”

I would suggest that the former Facebook and Meta employee was not able to get the good ship Facebook and its support boats Instagram and WhatsApp on course despite 156 months of navigation, charting, and inputting.

Several observations:

  1. Real dinobabies and pretend dinobabies complain. No problem. Are the complaints valid? One must know about the mental wiring of said dinobaby. Young dinobabies may be less mature complainers.
  2. Geniuses with a lot of money can be problematic. Mr. LeCun may not appreciate the wisdom of this dinobaby’s statement … yet.
  3. The genius Mr. Zuckerberg is going to spend his way back into contention in the AI race.

Net net: Meta (Facebook) appears to have floundered with the virtual worlds thing and now is paddling furiously as the flood of AI solutions rushes past him. Can geniuses paddle harder or just buy bigger and more powerful boats? Yep, zinger.

Stephen E Arnold, January 7, 2026

ChatGPT Channels Telegram

January 7, 2026

Just what everyone needs: Telegram type apps on the Sam AI-Man platform. What will bad actors do? Obviously nothing. Apps will be useful, do good, and make the world a better place.k

ChatGPT now connects to apps without leaving the AI interface.? ? According to Mashable, “ChatGPT Launches Apps Beta: 8 Big Apps You Can Now Use In ChatGPT.”? ? ChatGPT wants its users to easily access apps or take suggestions during conversations with the AI.? ? The idea is that ChatGPT will be augmented by apps and extend conversations.

App developers will also be able to use ChatGPT to build chat-native experiences to bring context and action directly into conversations.

The new app integration is described as:

“While some commentators have referred to the new Apps beta as a ChatGPT app store, at this time, it’s more of an app directory. However, in the “Looking Ahead” section of its announcement post, OpenAI does note that this tool could eventually ‘expand the ways developers can reach users and monetize their work.’”

The apps that are integrated into ChatGPT are Zillow, Target, Expedia, Tripadvisor, Instacart, DoorDash, Apple Music, and Spotify.? ? This sounds similar to what Telegram did.? ? Does this mean OpenAI is on the road to Telegram like services?

Just doing good. Bad actors will pay no attention.

Whitney Grace, January 7, 2025

The Branding Genius of Cowpilot: New Coke and Jaguar Are No Longer the Champs

January 6, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

We are beavering away on our new Telegram Notes series. I opened one of my newsfeeds and there is was. Another gem of a story from PCGamer. As you may know, PCGamer inspired this bit of art work from the and AI system. I thought I would have a “cow” when I saw. Here’s the visual gag again:

1 3 26 cow in slop

Is that cow standing in its output? Could that be a metaphor for “cowpilot” output? I don’t know. Qwen, like other smart software can hallucinate. Therefore, I see a semi-sacred bovine standing in a muddy hole. I do not see AI output. If you do, I am not sure you are prepared for the contents about which I shall comment; that is, the story from PCGamer called “In a Truly Galaxy-Brained Rebrand, Microsoft Office Is Now the Microsoft 365 Copilot App, but Copilot Is Also Still the Name of the AI Assistant.”

I thought New Coke was an MBA craziness winner. I thought the Jaguar rebrand was an even crazier MBA craziness winner. I thought the OpenAI smart software non mobile phone rebranding effort that looks like a 1950s dime store fountain pen was in the running for crazy. Nope. We have a a candidate for the rebranding that tops the leader board.

Microsoft Office is now the M3CA or Microsoft 365 Copilot App.

The PCGamer write up says:

Copilot is the app for launching the other apps, but it’s also a chatbot inside the apps.

Yeah, I have a few. But what else does PCGamer say in this write up?

image

An MBA study group discusses the branding strategy behind Cowpilot. Thanks, Qwen. Nice consistent version of the heifer.

Here’s a statement I circled:

Copilot is, notably, a thing that already exists! But as part of the ongoing effort to juice AI assistant usage numbers by making it impossible to not use AI, Microsoft has decided to just call its whole productivity software suite Copilot, I guess.

Yep, a “guess.” That guess wording suggests that Microsoft is simply addled. Why name a product that causes a person to guess? Not even Jaguar made people “guess” about a weird square car painted some jazzy semi hip color. Even the Atlanta semi-behemoth slapped “new” Coke on something that did not have that old Coke vibe. Oh, both of these efforts were notable. I even remember when the brain trust at NBC dumped the peacock for a couple of geometric shapes. But forcing people to guess? That’s special.

Here’s another statement that caught my dinobaby brain:

Should Microsoft just go ahead and rebrand Windows, the only piece of its arsenal more famous than Office, as Copilot, too? I do actually think we’re not far off from that happening. Facebook rebranded itself “Meta” when it thought the metaverse would be the next big thing, so it seems just as plausible that Microsoft could name the next version of Windows something like “Windows with Copilot” or just “Windows AI.” I expect a lot of confusion around whatever Office is called now, and plenty more people laughing at how predictably silly this all is.

I don’t agree with this statement. I don’t think “silly” captures what Microsoft is attempting to do. In my experience, Microsoft is a company that bet on the AI revolution. That turned into a cost sink hole. Then AI just became available. Suddenly Microsoft has to flog its business customers to embrace not just Azure, Teams, and PowerPoint. Microsoft has to make it so users of these services have to do Copilot.

Take your medicine, Stevie. Just like my mother’s approach to giving me cough medicine. Take your medicine or I will nag you to your grave. My mother haunting me for the rest of my life was a bummer thought. Now I have the Copilot thing. Yikes, I have to take my Copilot medicines whether I want to or not. That’s not “silly.” This is desperation. This is a threat. This is a signal that MBA think has given common sense a pink slip.

Stephen E Arnold, January 6, 2026

Next Page »

  • Archives

  • Recent Posts

  • Meta