Grok Is Spicy and It Did Not Get the Apple Deal

January 16, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

First, Gary Marcus makes clear that AI is not delivering the goods. Then a fellow named Tom Renner explains that LLMs are just a modern variation of a “confidence trick” that’s been in use for centuries. I then bumbled into a paywalled post from an outfit named Vox. The write up about AI is “Grok’s Nonconsensual Porn Problem is Part of a Long, Gross Legacy.”

Unlike Dr. Marcus and Mr. Renner, Vox focuses on a single AI outfit. Is this fair? Nah, but it does offer some observations that may apply to the entire band of “if we build it, they will come” wizards. Spoiler: I am not coming for AI. I will close with an observation about the desperation that is roiling some of the whiz kids.

First, however, what does Vox have to say about the “I am a genius. I want to spawn more like me. I want to colonize Mars” superman. I urge you to subscribe the Vox. I will highlight a couple of passages about the genius Elon Musk. (I promised I won’t mention the department of government efficiency project. I project. DOGE DOGE DOGE. Yep, I lied must like some outfits have. Thank goodness I am an 81 year old dinobaby in rural Kentucky. I can avoid AI, but DOGE DOGE DOGE, not a chance.

Here’s the first statement I circled on my print out of the expensive Vox article:

Elon Musk claims tech needs a “spicy mode” to dominate. Is he right?

I can answer this question: No, only those who want to profit from salacious content want a spicy mode. People who deal in spicy modes made VHS tapes a thing much to the chagrin of Sony. People who create spicy mode content helped sell some virtual reality glasses. I sure didn’t buy any. Spicy at age 81 is walking from room to room in my two room log cabin in the Kentucky hollow in which I live.

Here’s the second passage in the write up I check marked:

Musk has remained committed to the idea that Grok would be the sexiest AI model. On X, Musk has defended the choice on business grounds, citing the famous tale of how VHS beat Betamax in the 1980s after the porn industry put its weight behind VHS, with its larger storage capacity. “VHS won in the end,” Musk posted, “in part because they allowed spicy mode.

Does this mean that Elon analyzed the p_rn industry when he was younger? For business reasons only I presume. I wonder if he realizes that Grok and perhaps the Tesla businesses may be adversely affected by the spicy stuff. No, I won’t. I won’t. Yes, I will. DOGE DOGE DOGE

Here’s the final snip:

A more accurate phrasing, however, might be to say that in our misogynistic society, objectifying and humiliating the bodies of unconsenting women is so valuable that the fate of world-altering technologies depends on how good they are at facilitating it. AI was always going to be used for this, one way or the other. But only someone as brutally uncaring and willing to cut corners as Elon Musk would allow it to go this wrong.

Snappy.

But the estimable Elon Musk has another thorn in the driver’s seat of his Tesla. Apple, a company once rumored to be thinking about buying the car company, signed another deal with Apple. The gentle and sometimes smarmy owner of Android, online advertising, and surveillance technology is going to provide AI to the wonderful wonderful Apple.

I think Mr. Musk’s Grok is a harbinger of a spring time blossoming of woe for much of the AI sector. There are data center pushbacks. There are the Chinese models available for now as open source. There are regulators in the European Union who want to hear the ka-ching of cash registers after another fine is paid by an American AI outfit.

I think the spicy angle just helps push Mr. Musk and Grok to the head of the line for AI pushback. I hope not. I wonder if Mr. Musk will resume talks with Pavel Durov about providing Grok as an AI engine for Nikolai Durov’s new approach to smart software. I await spring.

Stephen E Arnold, January 16, 2026

Shall We Recall This Nvidia Prediction?

January 16, 2026

Nvidia. Champion circular investor. Leather jacketed wizard. I dug up this item as a reference point: “ ‘China Is Going To Win The AI Race’ – Nvidia CEO Jensen Huang Makes Bold Proclamation, Says We All Need A Little Less "Cynicism" In Our Lives.”

Nvidia warns the US that China is seconds behind it (literally nanoseconds) in the AI race and the country shouldn’t ignore its eastern neighbor. Huang suggests that not only will China win the technology race, but also the US should engage with China’s developer base. Doing so will help the US maintain its competitive edge. Huang also warns that ignoring China would have negative long-term consequences for AI adoption.

Huang makes a valid point about China, but his remarks could also be self-serving regarding some recent restrictions from the US.

“Nvidia has faced restrictions in China due to governmental policies, preventing the sale of its latest processors, central to AI tools and applications, which are essential for research, deployment, and scaling of AI workloads.

Huang suggested limiting Chinese access may inadvertently slow the spread of American technology, even as policymakers focus on national security.”

Hardware is vital for AI technology because a lot of processing power and energy is needed to run AI models. Huang warns (yet again) that if the US remains exclusionary of China with its technology that Chinese developers will be forced to design their own. It means less reliance on US technology and an AI ecosystem outside of the US’s sphere of influence. Huang said:

“ ‘We want America to win this AI race. No doubt about that,’ Huang said at a recent Nvidia developers’ conference. ‘We want the world to be built on American tech stack. Absolutely the case. But we also need to be in China to win their developers. A policy that causes America to lose half of the world’s AI developers is not beneficial in the long term, it hurts us more,’ he added.”

Huang’s statement is self-serving for Nvidia and maybe he’s angling for a professorship at Beijing University? But he’s also right. It’s better for Chinese developers to favor the US over their red uncle.

Whitney Grace, January 16, 2025

Apple and Google: Lots of Nots, Nos, and Talk

January 15, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

This is the dinobaby, an 81 year old dinobaby. In my 60 plus year work career I have been around, in, and through what I call “not and no” PR. The basic idea is that one floods the zones with statements about what an organization will do. Examples range from “our Wi-Fi sniffers will not log home access point data” to “our AI service will not capture personal details” to “our security policies will not hamper usability of our devices.” I could go on, but each of these statements were uttered in meetings, in conference “hallway” conversations, or in public podcasts.

image

Thanks, Venice.ai. Good enough. See I am prevaricating. This image sucks. The logos are weird. GW looks like a wax figure.

I want to tell you that if the Nots and Nos identified in the flood of write ups about the Apple Google AI tie up immutable like Milton’s description of his God, the nots and nos are essentially pre-emptive PR. Both firms are data collection systems. The nature of the online world is that data are logged, metadata captured and mindlessly processed for a statistical signal, and content processed simply because “why not?”

Here’s a representative write  up about the Apple Google nots and nos: “Report: Apple to Fine-Tune Gemini Independently, No Google Branding on Siri, More.” So what’s the more that these estimable firms will not do? Here’s an example:

Although the final experience may change from the current implementation, this partly echoes a Bloomberg report from late last year, in which Mark Gurman said: “I don’t expect either company to ever discuss this partnership publicly, and you shouldn’t expect this to mean Siri will be flooded with Google services or Gemini features already found on Android devices. It just means Siri will be powered by a model that can actually provide the AI features that users expect — all with an Apple user interface.”

How about this write up: “Official: Apple Intelligence & Siri To Be Powered By Google Gemini.”

Source details how Apple’s Gemini deal works: new Siri features launching in spring and at WWDC, Apple can finetune Gemini, no Google branding, and more

Let’s think about what a person who thinks the way my team does. Here are what we can do with these nots and nos:

  1. Just log everything and don’t talk about the data
  2. Develop specialized features that provide new information about use of the AI service
  3. Monitor the actions of our partners so we can be prepared or just pounce on good ideas captured with our “phone home” code
  4. Skew the functionality so that our partners become more dependent on our products and services; for example, exclusive features only for their users.

The possibilities are endless. Depending upon the incentives and controls put in place for this tie up, the employees of Apple and Google may do what’s needed to hit their goals. One can do PR about what won’t happen but the reality of certain big technology companies is that these outfits defy normal ethical boundaries, view themselves as the equivalent of nation states, and have a track record of insisting that bending mobile devices do not bend and that information of a personal nature is not cross correlated.

Watch the pre-emptive PR moves by Apple and Google. These outfits care about their worlds, not those of the user.

Just keep in mind that I am an old, very old, dinobaby. I have some experience in these matters.

Stephen E Arnold, January 15, 2025

AI Still Wrong after All These Years

January 15, 2026

Josh Brandon at Digital Trends was curious what would happen if he asked two chatbots to fact check each other. He shared the results in, “I Asked Google Gemini To Fact-Check ChatGPT. The Results Were Hilarious.” He brilliantly calls ChatGPT the Wikipedia of the modern generation. Chatbots spit out details like overconfident, self-assured narcissists. People take the information for granted.

ChatGPT tends to hallucinate fake facts and makes up great stories, while Google Gemini doesn’t create as many mirages. Brandon asked Gemini and ChatGPT about the history of electric cars, some historical information, and a few other things to see if they’d hallucinate. He found that the chatbots have trouble understanding user intent. They also wrongly attribute facts, although Gemini is correct more than ChatGPT. When it came to research questions, the results were laughable:

“Prompt used: ‘Find me some academic quotes about the psychological impact of social media.’ This one is comical and fascinating. ChatGPT invented so many details in a response about the psychological impact of social media that it makes you wonder what the bot was smoking. ‘This is a fantastic and dangerous example of partial hallucination, where real information is mixed with fabricated details, making the entire output unreliable. About 60% of the information here is true, but the 40% that is false makes it unusable for academic purposes.’”

The question becomes, “When will a user become sufficiently skeptical about AI output to abandon a system?” OpenAI declared a red alert or some similar silliness when it learned that Googzilla’s AI was better. But is Google’s AI much better than ChatGPT or any other vendors’ AI. We know that one Googler used Anthropic’s Claude system to duplicate the work of a gaggle of Googlers in one hour. The Googlers needed a year to write the application. Maybe some information about which software was more efficient and accurate would be helpful? We don’t get that type of information. AI companies are deploying systems that are difficult to differentiate from one another. Perhaps it is because these firms rely on algorithms taught in school with a cup or two of Google’s Transformer goodness.

Available models output errors. Available models are making up information. Available models’ output requires the human user to figure out what’s wrong, fix it, and then proceed with the task requiring AI input in the first place. The breezy dismissals of issues about accuracy, environmental costs, and the crazy investments in data centers in places not known for their depth of technical talent strike me as reasons for skepticism.

AI can output text suitable for a high school student’s one page essay. What about outputting a treatment for a sick child. Yeah, maybe for some parents. But the marketing and PR is fairly good. Will there be an AI Super Bowl ad?

Whitney Grace, January 15, 2026

Security Chaos: So We Just Live with Failure?

January 14, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read a write up that baffled me. The article appeared in what I consider a content marketing or pay to play publication. I may be wrong, but the content usually hits me as an infomercial. The story arresting my attention this morning (January 13, 2026) is “The 11 Runtime Attacks Breaking AI Security — And How CISOs Are Stopping Them.” I expected a how to. What did the write up deliver? Confusion and a question, “So we just give up?”

The article contains this cheerful statement from a consulting firm. Yellow lights flashed. I read this:

Gartner’s research puts it bluntly: “Businesses will embrace generative AI, regardless of security.” The firm found 89% of business technologists would bypass cybersecurity guidance to meet a business objective. Shadow AI isn’t a risk — it’s a certainty.

Does this mean that AI takes precedence over security?

The article spells out 11 different threats and provides solutions to each. The logic of the “stopping runtime attacks” with methods now available struck me as a remarkable suggestion.

image

The mice are the bad actors. Notice that the capable security system is now unable to deal with the little creatures. The realtime threats overwhelmed the expensive much hyped-cyber cat. Thanks, Venice.ai. Good enough.

Let’s look at three of the 11 threats and their solutions. Please, read the entire write up and make you own decision about the other eight problems presented and allegedly solved.

The first threat is called “multi turn crescendo attacks.” I had no idea what this meant when I read the phrase. That’s okay. I am a dinobaby and a stupid one at that. It turns out that this fancy phrase means that a bad actor plans prompts that work incrementally. The AI system responds. Then responds to another weaponized prompt. Over a series of prompts, the bad actor gets what he or she wants out of the system. ChatGPT and Gemini are vulnerable to this orchestrated prompt sequence. What’s the fix? I quote:

Stateful context tracking, maintaining conversation history, and flagging escalation patterns.

Really? I am not sure that LLM outfits or licensees have the tools and the technical resources to implement these linked functions. Furthermore, in the cat and mouse approach to security, the mice are many. The find and react approach is not congruent with runtime threats.

Another threat is synthetic identify fraud. The idea is that AI creates life like humans, statements, and supporting materials. For me, synthetic identities are phishing attacks on steroids. People are fooled by voice, video and voice, email, and SMS attacks. Some companies hire people who are not people because AI technology advances in real time. How does one fix this? The solution is, and I quote:

Multi-factor verification incorporating behavioral signals beyond static identity attributes, plus anomaly detection trained on synthetic identity patterns.

But when AI synthetic identity technology improves how will today’s solutions deal with the new spin from bad actors? Answer: They have not, cannot, and will not with the present solutions.

The last threat I will highlight is obfuscation attacks or fiddling with AI prompts. Developers of LLMs are in a cat and mouse game. Right now the mice are winning for one simple reason: The wizards developing these systems don’t have the perspective of bad actors. LLM developers just want to ship and slap on fixes that stop a discovered or exposed attack vector. What’s the fix? The solution, and I quote, is:

Wrap retrieved data in delimiters, instructing the model to treat content as data only. Strip control tokens from vector database chunks before they enter the context window.

How does this work when new attacks occur and are discovered? Not very well because the burden falls upon the outfit using the LLM. Do licensees have appropriate technical resources to “wrap retrieved data in delimiters” when the exploit may just work but no one is exactly sure why. Who knew that prompts in iambic pentameter or gibberish with embedded prompts ignore “guardrails”? The realtime is the killer. Licensees are not equipped to react and I am not confident smart AI cyber security systems are either.

Net net: Amazon Web Services will deal with these threats. Believe it or not. (I don’t believe it, but your mileage may vary.)

Stephen E Arnold, January 14, 2026

Apple Google Prediction: Get Real, Please

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Prediction is a risky business. I read “No, Google Gemini Will Not Be Taking Over Your iPhone, Apple Intelligence, or Siri.” The write up asserts:

Apple is licensing a Google Gemini model to help make Apple Foundation Models better. The deal isn’t a one-for-one swap of Apple Foundation Models for Gemini ones, but instead a system that will let Apple keep using its proprietary models while providing zero data to Google.

Yes, the check is in the mail. I will jump on that right now. Let’s have lunch.

image

Two giant creatures find joy in their deepening respect and love for one another. Will these besties step on the ants and grass under their paws? Will they leave high-value information on the shelf? What a beautiful relationship! Will these two get married? Thanks, Venice.ai. Good enough.

Each of these breezy statements sparks a chuckle in those who have heard direct statements and know that follow through is unlikely.

The article says:

Gemini is not being weaved into Apple’s operating systems. Instead, everything will remain Apple Foundation Models, but Gemini will be the "foundation" of that.

Yep, absolutely. The write up presents this interesting assertion:

To reiterate: everything the end user interacts with will be Apple technology, hosted on Apple-controlled server hardware, or on-device and not seen by Apple or anybody else at all. Period.

Plus, Apple is a leader in smart software. Here’s the article’s presentation of this interesting idea:

Apple has been a dominant force in artificial intelligence development, regardless of what the headlines and doom mongers might say. While Apple didn’t rush out a chatbot or claim its technology could cause an apocalypse, its work in the space has been clearly industry leading. The biggest problem so far is that the only consumer-facing AI features from Apple have been lackluster and got a tepid consumer response. Everything else, the research, the underlying technology, the hardware itself, is industry leading.

Okay. Several observations:

  1. Apple and Google have achieved significant market share. A basic rule of online is that efficiency drives the logic of consolidation. From my point of view, we now have two big outfits, their markets, their products, and their software getting up close and personal.
  2. Apple and Google may not want to hook up, but the financial upside is irresistible. Money is important.
  3. Apple, like Telegram, is taking time to figure out how to play the AI game. The approach is positioned as a smart management move. Why not figure out how to keep those users within the friendly confines of two great companies? The connection means that other companies just have to be more innovative.

Net net: When information flows through online systems, metadata about those actions presents an opportunity to learn more about what users and customers want. That’s the rationale for leveraging the information flows. Words may not matter. Money, data, and control do.

Stephen E Arnold, January 13, 2026

So What Smart Software Is Doing the Coding for Lagging Googlers?

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read “Google Programmer Claims AI Solved a Problem That Took Human Coders a Year.” I assume that I am supposed to divine that I should fill in “to crack,” “to solve,” or “to develop”? Furthermore, I don’t know if the information in the write up is accurate or if it is a bit of fluff devised by an art history major who got a job with a PR firm supporting Google.

image

I like the way a Googler uses Anthropic to outperform Googlers (I think). Anyway, thanks, ChatGPT, good enough.

The company’s commitment to praise its AI technology is notable. Other AI firms toss out some baloney before their “leadership” has a meeting with angry investors. Google, on the other hand, pumps out toots and confetti with appalling regularity.

This particular write up states:

Paul [a person with inside knowledge about Google’s AI coding system] passed on secondhand knowledge from "a Principal Engineer at Google [that] Claude Code matched 1 year of team output in 1 hour."

Okay, that’s about as unsupported an assertion I have seen this morning. The write up continues:

San Francisco-based programmer Jaana Dogan chimed in, outing herself as the Google engineer cited by Paul. "We have been trying to build distributed agent orchestrators at Google since last year," she commented. "There are various options, not everyone is aligned … I gave Claude Code a description of the problem, it generated what we built last year in an hour."

So the “anonymous” programmer is Jaana Dogan. She did not use Opal, Google’s own smart software. Ms. Dogan used the coding tools from Anthropic? Is this what the cited passage is telling me?

Let’s think about these statements for a moment:

  1. Perhaps Google’s coders were doom scrolling, playing Foosball, or thinking about how they could land a huge salary at Meta now that AI staff are allegedly jump off the good ship Zuck Up? Therefore, smart software could indeed produce code that took the Googlers one year to produce. Googlers are not necessarily productive unless it is in the PR department or the legal department.
  2. Is Google’s own coding capability so lousy that Googlers armed with Opal and other Googley smart software could not complete a project with software Google is pitching as the greatest thing since Google landed a Nobel Prize?
  3. Is the Anthropic software that much better than Google’s or Microsoft’s smart coding system? My experience is that none of these systems are that different from one another. In fact, I am not sure that new releases are much better than the systems we have tested over the last 12 months.

The larger question is, “Why does Google have to promote its approach to AI so relentlessly?” Why is Google using another firm’s smart software and presenting its use in a confusing way?

My answer to both these questions is, “Google has a big time inferiority complex. It is as if the leadership of Google believes that grandma is standing behind them when they were 12 years old. When attention flags doing homework, grandma bats the family loser with her open palm. “Do better. Concentrate,” she snarls at the hapless student.

Thus, PR emanates PR that seems to be about its own capabilities and staff while promoting a smart coding tool from another firm. What’s clear is that the need for PR coverage outpaces common sense and planning. Google is trying hard to convince people that AI is the greatest thing since ping pong tables at the office.

Stephen E Arnold, January 13, 2025

Fortune Magazine Is Not Hiding Its AI Skepticism

January 12, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Fortune Magazine appears to be less fearful of expressing its AI skepticism. However, instead of pointing out that the multiple cash fueled dumpster fires continue to burn, Fortune Magazine focuses on an alleged knock on effect of smart software.

AI Layoffs Are Looking More and More Like Corporate Fiction That’s Masking a Darker Reality, Oxford Economics Suggests” uses a stalking horse to deliver the news. The write up reports:

“firms don’t appear to be replacing workers with AI on a significant scale,” suggesting instead that companies may be using the technology as a cover for routine headcount reductions.

The idea seems to be a financially acceptable way to dump people and get the uplift by joining the front runners in smart use of artificial intelligence.

Fortune’s story blows away this smoke screen.

image

Are you kidding, you geezer? AI is now running the show until the part-time, sub-minimum wage folks show up at 1 am. Thanks, Venice.ai. Good enough.

The write up says:

The primary motivation for this rebranding of job cuts appears to be investor relations. The report notes that attributing staff reductions to AI adoption “conveys a more positive message to investors” than admitting to traditional business failures, such as weak consumer demand or “excessive hiring in the past.” By framing layoffs as a technological pivot, companies can present themselves as forward-thinking innovators rather than businesses struggling with cyclical downturns.

The write points out:

While AI was cited as the reason for nearly 55,000 U.S. job cuts in the first 11 months of 2025—accounting for over 75% of all AI-related cuts reported since 2023—this figure represents a mere 4.5% of total reported job losses…. AI-related job losses are still relatively limited.

True to form, the Fortune article tries hard to not ruffle feathers. The write up says:

recent data from the Bureau of Labor Statistics confirms that the “low-hire, low-fire” labor market is morphing into a “jobless expansion,” KPMG chief economist Diane Swonk previously told Fortune‘s Eva Roytburg.

Yep, that’s clear.

Several observations are warranted:

  1. Companies are dumping people to cut costs. We have noticed this across industries from outfits like Walgreen’s to Fancy Dan operations like American Airlines.
  2. AI is becoming an easy way to herd people over 40 into AI training classes and using class performance to winnow the herd. If one needs to replace an actual human, check out India’s work-from-Bangalore options.
  3. The smoke screen is dissipating. What will the excuse be then?

Net net: The true believers in AI created a work related effect that few want to talk about openly. That’s why we get the “low hire, low fire” gibberish. Nice work, AI.

Stephen E Arnold, January 12, 2026

Dell Reveals the Future of AI for Itself

January 12, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I was flipping through my Russian technology news feed and spotted an interesting story. “Consumers Don’t Buy Devices Because They Have AI. Dell Has Admitted That AI in Products Can Confuse Customers.” Yep, Russian technology media pays attention to AI signals from big outfits.

The write up states:

the company admits that at least for now this is not particularly important for users.

The Russian article then quotes from a Dell source:

You’ll notice one thing: we didn’t prioritize artificial intelligence as our primary goal when developing our products. So there’s been some shift from a year ago when we focused entirely on AI PCs. We are very focused on realizing the power of artificial intelligence in devices — in fact, all the devices we announce use a neural processor —, but over the course of this year we have realized that consumers are not buying devices because of the presence of AI. In fact, I think AI is more likely to confuse them than to help them understand a particular outcome.

image

Good enough, ChatGPT.

The chatter about an AI bubble notwithstanding, this Russian news item probes an important issue. AI may not be a revolution. It is “confusing” to some computer customers. The true believers are the ones writing checks to fund the engineering and plumbing required to allow an inanimate machine behave like a human. The word “confusing” is an important one. The messages about smart software don’t make sense to some people.

Dell, to its credit, listened to its customers and changed its approach. The AI goodness is in the device, but the gizmo is presented as a computer that a user can, confused or not, use to check email, write a message, and watch doom scroll by.

Let’s look at this from a different viewpoint. Google and Microsoft want to create AI operating systems. The decade old or older bits of software plumbing have to be upgraded. If the future is smart software, then the operating systems have to be built on smart software. To the believers, the need to AI everything is logical and obvious.

If we look at it from the point of view of a typical Dell customer, the AI jabber is confusing. What’s confusing mean? To me, confusing means unclear. AI marketing is definitely that. I am not sure I understand how typing a query and getting a response is not presented as “search and retrieval.” AI is also bewildering. I have watched a handful of YouTube AI explainer videos. I think I understand, but the reality for me is that AI seems to be a collection of methods developed over the last couple hundred years integrated to index text and output probabilistic sequences. Some make sense to an eighth grader wanting help with a 200 word paragraph about the Lincoln-Douglas debates. However, it wou8ld be difficult for the same kid to get information about Honest Abe’s sleeping with a guy for years. Yep, baffling. Explaining to a judge why an AI system made up case citations to legal actions that did  not take place is not just mystifying. The use of AI costs the lawyer money, credibility, and possibly the law license. Yep, puzzling.

Thus, an AI enabled Dell laptop doesn’t make sense to some consumers. Their child needs a laptop to do homework. What’s with the inclusion of AI. AI is available everywhere. Why double up on AI? Dell sidesteps the issue by focusing on its computers as computers.

Several observations are warranted:

  1. The AI shift at Dell is considered “news” in Russia. In the US, I am not sure how many people will get the Dell message. Maybe someone on TikTok or Reels will cover the Dell story?
  2. The Google- and Microsoft-type companies don’t care about Dell. These outfits are inventing the future. The firms are spending billions and now dumping staff to help pay for the vision of their visionaries. If it doesn’t work, these people will join the lawyers caught using made up information working as servers at the local Rooster’s chicken joint.
  3. The idea that “if they think it, the ‘it’ will become reality is fueling the AI boom. Stoked on the sci-fi novels consumed when high school students, the wizards in the AI game are convinced they can deliver smart software. Conviction is useful. However, a failure to deliver will be interesting to watch… from a distance.

Net net: Score one for Dell. No points for Google or Microsoft. Consumers are in the negative column. They are confused and there is one thing that the US economy abhors is a bewildered person. Believe or be gone.

Stephen E Arnold, January 12, 2026

Just Train AI with AI Output: What Could Go Wrong?

January 9, 2026

AI is still dumb technology and needs to be trained to improve. Unfortunately AI training datasets are limited. Patronus AI claims it has a solution to training problem and the news is told on VentureBeat in the article, “AI Agents Fail 63% Of The Time On Complex Tasks. Patronus AI Says Its New ‘Living’ Training Worlds Can Fix That.” Patronus AI is a new AI startup with backing from Datadog and Lightspeed Venture Partners.

The company’s newest project is called “Generative Simulators” that creates simulated environments that continuously generate new challenges for AI algorithms to evaluate. AI Patronus could potentially be a critical tool for the AI industry. Research discovered that AI algorithms with a 1% error rate per step compound a 63% chance of failure.

Patronus AI explains that traditional datasets and measurements are like standardized tests: “they measure specific capabilities at a fixed point in time but struggle to capture the messy, unpredictable nature of real work.” The new Generative Simulators produces environments and assignments that adapt based on how the algorithm responds:

“The technology builds on reinforcement learning — an approach where AI systems learn through trial and error, receiving rewards for correct actions and penalties for mistakes. Reinforcement learning is an approach where AI systems learn to make optimal decisions by receiving rewards or penalties for their actions, improving through trial and error. RL can help agents improve, but it typically requires developers to extensively rewrite their code. This discourages adoption, even though the data these agents generate could significantly boost performance through RL training.”

Patronus AI said that training has improved AI algorithm’s task completion by 10-20%. The company also says that Big Tech can’t build all of their AI training tools in house because the amount of specialized training needed for niche fields is infinite. It’s a natural place for third party companies like Patronus AI.

Patronus AI founds its niche and is cashing in! But that failure rate? No problem.

Whitney Grace, January 9, 2026

Next Page »

  • Archives

  • Recent Posts

  • Meta