Grok Is Spicy and It Did Not Get the Apple Deal

January 16, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

First, Gary Marcus makes clear that AI is not delivering the goods. Then a fellow named Tom Renner explains that LLMs are just a modern variation of a “confidence trick” that’s been in use for centuries. I then bumbled into a paywalled post from an outfit named Vox. The write up about AI is “Grok’s Nonconsensual Porn Problem is Part of a Long, Gross Legacy.”

Unlike Dr. Marcus and Mr. Renner, Vox focuses on a single AI outfit. Is this fair? Nah, but it does offer some observations that may apply to the entire band of “if we build it, they will come” wizards. Spoiler: I am not coming for AI. I will close with an observation about the desperation that is roiling some of the whiz kids.

First, however, what does Vox have to say about the “I am a genius. I want to spawn more like me. I want to colonize Mars” superman. I urge you to subscribe the Vox. I will highlight a couple of passages about the genius Elon Musk. (I promised I won’t mention the department of government efficiency project. I project. DOGE DOGE DOGE. Yep, I lied must like some outfits have. Thank goodness I am an 81 year old dinobaby in rural Kentucky. I can avoid AI, but DOGE DOGE DOGE, not a chance.

Here’s the first statement I circled on my print out of the expensive Vox article:

Elon Musk claims tech needs a “spicy mode” to dominate. Is he right?

I can answer this question: No, only those who want to profit from salacious content want a spicy mode. People who deal in spicy modes made VHS tapes a thing much to the chagrin of Sony. People who create spicy mode content helped sell some virtual reality glasses. I sure didn’t buy any. Spicy at age 81 is walking from room to room in my two room log cabin in the Kentucky hollow in which I live.

Here’s the second passage in the write up I check marked:

Musk has remained committed to the idea that Grok would be the sexiest AI model. On X, Musk has defended the choice on business grounds, citing the famous tale of how VHS beat Betamax in the 1980s after the porn industry put its weight behind VHS, with its larger storage capacity. “VHS won in the end,” Musk posted, “in part because they allowed spicy mode.

Does this mean that Elon analyzed the p_rn industry when he was younger? For business reasons only I presume. I wonder if he realizes that Grok and perhaps the Tesla businesses may be adversely affected by the spicy stuff. No, I won’t. I won’t. Yes, I will. DOGE DOGE DOGE

Here’s the final snip:

A more accurate phrasing, however, might be to say that in our misogynistic society, objectifying and humiliating the bodies of unconsenting women is so valuable that the fate of world-altering technologies depends on how good they are at facilitating it. AI was always going to be used for this, one way or the other. But only someone as brutally uncaring and willing to cut corners as Elon Musk would allow it to go this wrong.

Snappy.

But the estimable Elon Musk has another thorn in the driver’s seat of his Tesla. Apple, a company once rumored to be thinking about buying the car company, signed another deal with Apple. The gentle and sometimes smarmy owner of Android, online advertising, and surveillance technology is going to provide AI to the wonderful wonderful Apple.

I think Mr. Musk’s Grok is a harbinger of a spring time blossoming of woe for much of the AI sector. There are data center pushbacks. There are the Chinese models available for now as open source. There are regulators in the European Union who want to hear the ka-ching of cash registers after another fine is paid by an American AI outfit.

I think the spicy angle just helps push Mr. Musk and Grok to the head of the line for AI pushback. I hope not. I wonder if Mr. Musk will resume talks with Pavel Durov about providing Grok as an AI engine for Nikolai Durov’s new approach to smart software. I await spring.

Stephen E Arnold, January 16, 2026

Shall We Recall This Nvidia Prediction?

January 16, 2026

Nvidia. Champion circular investor. Leather jacketed wizard. I dug up this item as a reference point: “ ‘China Is Going To Win The AI Race’ – Nvidia CEO Jensen Huang Makes Bold Proclamation, Says We All Need A Little Less "Cynicism" In Our Lives.”

Nvidia warns the US that China is seconds behind it (literally nanoseconds) in the AI race and the country shouldn’t ignore its eastern neighbor. Huang suggests that not only will China win the technology race, but also the US should engage with China’s developer base. Doing so will help the US maintain its competitive edge. Huang also warns that ignoring China would have negative long-term consequences for AI adoption.

Huang makes a valid point about China, but his remarks could also be self-serving regarding some recent restrictions from the US.

“Nvidia has faced restrictions in China due to governmental policies, preventing the sale of its latest processors, central to AI tools and applications, which are essential for research, deployment, and scaling of AI workloads.

Huang suggested limiting Chinese access may inadvertently slow the spread of American technology, even as policymakers focus on national security.”

Hardware is vital for AI technology because a lot of processing power and energy is needed to run AI models. Huang warns (yet again) that if the US remains exclusionary of China with its technology that Chinese developers will be forced to design their own. It means less reliance on US technology and an AI ecosystem outside of the US’s sphere of influence. Huang said:

“ ‘We want America to win this AI race. No doubt about that,’ Huang said at a recent Nvidia developers’ conference. ‘We want the world to be built on American tech stack. Absolutely the case. But we also need to be in China to win their developers. A policy that causes America to lose half of the world’s AI developers is not beneficial in the long term, it hurts us more,’ he added.”

Huang’s statement is self-serving for Nvidia and maybe he’s angling for a professorship at Beijing University? But he’s also right. It’s better for Chinese developers to favor the US over their red uncle.

Whitney Grace, January 16, 2025

More Obvious Commentary about the Smart Phone That Makes People Stupid

January 16, 2026

Adults are rabid about protecting kids. Whether it’s chalked up to instinct, love, or simple common sense, no one can argue that it’s necessary to guide and guard younger humans. A big debate these days is when it is appropriate to give kids their first smartphone. According to The New York Times, that should probably be never: “A Smartphone Before Age 12 Could Carry Health Risks, Study Says.”

The journal named Pediatrics reported that when kids younger than twelve are given a smartphone, they’re at a greater risk for poor sleep, obesity, and depression. These results were from the Adolescent Brain Cognitive Development Study that surveyed 10,500 kids. This is what they discovered:

“The younger that children under 12 were when they got their first smartphones, the study found, the greater their risk of obesity and poor sleep. The researchers also focused on a subset of children who hadn’t received a phone by age 12 and found that a year later, those who had acquired one had more harmful mental health symptoms and worse sleep than those who hadn’t.”

Kids equipped with smartphones spent less time socializing in person and are less inclined to exercise or prioritize sleep. All these activities are exceedingly important for developing minds. They’re stunting and seriously harming their growth with smartphones.

Smartphones are a tool like anything else. They’re seriously addicting because of their engagement. Videogames were given the same bad rep when they became popular. At least videogames had the social interaction of arcades back in the day.

Just ban all smartphones for kids. That could work if the lobbyists and political funding policies undergo a little change. If not, duh.

Whitney Grace, January 16, 2026

Apple and Google: Lots of Nots, Nos, and Talk

January 15, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

This is the dinobaby, an 81 year old dinobaby. In my 60 plus year work career I have been around, in, and through what I call “not and no” PR. The basic idea is that one floods the zones with statements about what an organization will do. Examples range from “our Wi-Fi sniffers will not log home access point data” to “our AI service will not capture personal details” to “our security policies will not hamper usability of our devices.” I could go on, but each of these statements were uttered in meetings, in conference “hallway” conversations, or in public podcasts.

image

Thanks, Venice.ai. Good enough. See I am prevaricating. This image sucks. The logos are weird. GW looks like a wax figure.

I want to tell you that if the Nots and Nos identified in the flood of write ups about the Apple Google AI tie up immutable like Milton’s description of his God, the nots and nos are essentially pre-emptive PR. Both firms are data collection systems. The nature of the online world is that data are logged, metadata captured and mindlessly processed for a statistical signal, and content processed simply because “why not?”

Here’s a representative write  up about the Apple Google nots and nos: “Report: Apple to Fine-Tune Gemini Independently, No Google Branding on Siri, More.” So what’s the more that these estimable firms will not do? Here’s an example:

Although the final experience may change from the current implementation, this partly echoes a Bloomberg report from late last year, in which Mark Gurman said: “I don’t expect either company to ever discuss this partnership publicly, and you shouldn’t expect this to mean Siri will be flooded with Google services or Gemini features already found on Android devices. It just means Siri will be powered by a model that can actually provide the AI features that users expect — all with an Apple user interface.”

How about this write up: “Official: Apple Intelligence & Siri To Be Powered By Google Gemini.”

Source details how Apple’s Gemini deal works: new Siri features launching in spring and at WWDC, Apple can finetune Gemini, no Google branding, and more

Let’s think about what a person who thinks the way my team does. Here are what we can do with these nots and nos:

  1. Just log everything and don’t talk about the data
  2. Develop specialized features that provide new information about use of the AI service
  3. Monitor the actions of our partners so we can be prepared or just pounce on good ideas captured with our “phone home” code
  4. Skew the functionality so that our partners become more dependent on our products and services; for example, exclusive features only for their users.

The possibilities are endless. Depending upon the incentives and controls put in place for this tie up, the employees of Apple and Google may do what’s needed to hit their goals. One can do PR about what won’t happen but the reality of certain big technology companies is that these outfits defy normal ethical boundaries, view themselves as the equivalent of nation states, and have a track record of insisting that bending mobile devices do not bend and that information of a personal nature is not cross correlated.

Watch the pre-emptive PR moves by Apple and Google. These outfits care about their worlds, not those of the user.

Just keep in mind that I am an old, very old, dinobaby. I have some experience in these matters.

Stephen E Arnold, January 15, 2025

AI Still Wrong after All These Years

January 15, 2026

Josh Brandon at Digital Trends was curious what would happen if he asked two chatbots to fact check each other. He shared the results in, “I Asked Google Gemini To Fact-Check ChatGPT. The Results Were Hilarious.” He brilliantly calls ChatGPT the Wikipedia of the modern generation. Chatbots spit out details like overconfident, self-assured narcissists. People take the information for granted.

ChatGPT tends to hallucinate fake facts and makes up great stories, while Google Gemini doesn’t create as many mirages. Brandon asked Gemini and ChatGPT about the history of electric cars, some historical information, and a few other things to see if they’d hallucinate. He found that the chatbots have trouble understanding user intent. They also wrongly attribute facts, although Gemini is correct more than ChatGPT. When it came to research questions, the results were laughable:

“Prompt used: ‘Find me some academic quotes about the psychological impact of social media.’ This one is comical and fascinating. ChatGPT invented so many details in a response about the psychological impact of social media that it makes you wonder what the bot was smoking. ‘This is a fantastic and dangerous example of partial hallucination, where real information is mixed with fabricated details, making the entire output unreliable. About 60% of the information here is true, but the 40% that is false makes it unusable for academic purposes.’”

The question becomes, “When will a user become sufficiently skeptical about AI output to abandon a system?” OpenAI declared a red alert or some similar silliness when it learned that Googzilla’s AI was better. But is Google’s AI much better than ChatGPT or any other vendors’ AI. We know that one Googler used Anthropic’s Claude system to duplicate the work of a gaggle of Googlers in one hour. The Googlers needed a year to write the application. Maybe some information about which software was more efficient and accurate would be helpful? We don’t get that type of information. AI companies are deploying systems that are difficult to differentiate from one another. Perhaps it is because these firms rely on algorithms taught in school with a cup or two of Google’s Transformer goodness.

Available models output errors. Available models are making up information. Available models’ output requires the human user to figure out what’s wrong, fix it, and then proceed with the task requiring AI input in the first place. The breezy dismissals of issues about accuracy, environmental costs, and the crazy investments in data centers in places not known for their depth of technical talent strike me as reasons for skepticism.

AI can output text suitable for a high school student’s one page essay. What about outputting a treatment for a sick child. Yeah, maybe for some parents. But the marketing and PR is fairly good. Will there be an AI Super Bowl ad?

Whitney Grace, January 15, 2026

Interruptions: The Productivity Killer for Some

January 15, 2026

Do you have a sign on your door that says, “Please don’t interrupt”?

Uninterrupted focus is an engaged species says Can Duruk in his Off By One blog post, “The Math of Why You Can’t Focus at Work.” Duruk explains that uninterrupted focus has been dying since the turn of the century and Paul Graham wrote about it in 2009. Graham’s commentary was about how a single meeting can ruin being “in the zone.” These days it’s worse with Slack, Teams, social media, email, texts, and more demanding attention.

Duruk then does what any nerd would do: uses math to prove how much interruptions ruin a day. He runs through all the formulas and uses advanced math to prove his point:

"In this post, I’ll show you what interruption-driven work looks like when you model it with math. Three simple parameters determine whether your day is productive or a write-off. We’ll simulate hundreds of days and build a map of the entire parameter space so you can see exactly where you are and what happens when you change.”

This is above my pay grade but it is a useful tool to prove that too many meetings are a waste of time. Leave people alone to hyper focus on work. It’ll make them happier and more productive.

Whitney Grace, January 15, 2026

Telegram Notes: Mama Durova and Her Inner Circle

January 14, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

We filtered our notes for my new book “The Telegram Labyrinth.” Information about Pavel Durov’s mom was sparse. What we had, however, was interesting. The inner circle boils down to her ex-husbands and her three sons. In Part One of a two-part write up, you can get a snapshot of the individuals who facilitated the technical and business plumbing for VKontakte until its sale to Kremlin-approved buyers and then for the Telegram messaging service. You can find part one of this interesting group on my Telegram Notes online service.

Stephen E Arnold, January 14, 2026

The Drivers for 2026

January 14, 2026

The new year is here. Decrypt.co runs down the high and lows in, “Emerge’s 2025 Story of the Year: How the AI Race Fractured the Global Tech Order.” The main events of 2025 revolve around China and the US battling for dominance over the AI market. With only $256,000, the young Chinese startup Deepseek claimed it trained an AI model that matched OpenAI. OpenAI spent over a hundred million to arrive at the same result.

After Deepseek hit the Apple app store, Nvidia lost $600 billion in revenue as the largest drop in market history. Nvidia’s China market share fell from 95% to zero. The Chinese government banned all foreign AI chips from its datacenters, then the US Pentagon signed $10 billion in AI defense contracts.

China and the US are now warring a cold technology war. Deepseek exposed the US’s belief that controlling advanced chips would hinder China. Here’s how the US responded:

“The AI market entered panic mode. Stocks tanked, politicians started polishing their patriotic speeches, analysis exposed the intricacies of what could end up in a bubble, and enthusiasts mocked American models that cost orders of magnitude more than the Chinese counterparts, which were free, cheap and required a fraction of the money and resources to train.

Washington’s response was swift and punishing. The Trump administration expanded export controls throughout the year, banning even downgraded chips designed specifically for the Chinese market. By April, Trump restricted Nvidia from shipping its H20 chips.”

Meanwhile China retaliated:

“The tit-for-tat escalated into full decoupling. A new China’s directive issued in September banned Nvidia, AMD, and Intel chips from any data center receiving government money—a market worth over $100 billion since 2021. Jensen Huang revealed the company’s market share in China had hit "zero, compared to 95% in 2022.”

The US lost a big market for chips and China’s chip manufacturers increased domestic production by 40%. The US then implemented tariffs, then China responded by exerting its control over the physical elements needed to make technology in the strictest rare earth export controls ever. China wants to hit US defenses hard.

The Pentagon then invested in MP Materials with a cool $400 million. Trump also signed the Genesis Mission executive order, a Department of Energy-led AI initiative that the Trump administration compared to the Manhattan Project. Then China did…etc, etc.

Net net: Hype and hostility are the fuels for the months ahead. Hey, that’s positive Decrypt.

Whitney Grace, January 14, 2026

Security Chaos: So We Just Live with Failure?

January 14, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read a write up that baffled me. The article appeared in what I consider a content marketing or pay to play publication. I may be wrong, but the content usually hits me as an infomercial. The story arresting my attention this morning (January 13, 2026) is “The 11 Runtime Attacks Breaking AI Security — And How CISOs Are Stopping Them.” I expected a how to. What did the write up deliver? Confusion and a question, “So we just give up?”

The article contains this cheerful statement from a consulting firm. Yellow lights flashed. I read this:

Gartner’s research puts it bluntly: “Businesses will embrace generative AI, regardless of security.” The firm found 89% of business technologists would bypass cybersecurity guidance to meet a business objective. Shadow AI isn’t a risk — it’s a certainty.

Does this mean that AI takes precedence over security?

The article spells out 11 different threats and provides solutions to each. The logic of the “stopping runtime attacks” with methods now available struck me as a remarkable suggestion.

image

The mice are the bad actors. Notice that the capable security system is now unable to deal with the little creatures. The realtime threats overwhelmed the expensive much hyped-cyber cat. Thanks, Venice.ai. Good enough.

Let’s look at three of the 11 threats and their solutions. Please, read the entire write up and make you own decision about the other eight problems presented and allegedly solved.

The first threat is called “multi turn crescendo attacks.” I had no idea what this meant when I read the phrase. That’s okay. I am a dinobaby and a stupid one at that. It turns out that this fancy phrase means that a bad actor plans prompts that work incrementally. The AI system responds. Then responds to another weaponized prompt. Over a series of prompts, the bad actor gets what he or she wants out of the system. ChatGPT and Gemini are vulnerable to this orchestrated prompt sequence. What’s the fix? I quote:

Stateful context tracking, maintaining conversation history, and flagging escalation patterns.

Really? I am not sure that LLM outfits or licensees have the tools and the technical resources to implement these linked functions. Furthermore, in the cat and mouse approach to security, the mice are many. The find and react approach is not congruent with runtime threats.

Another threat is synthetic identify fraud. The idea is that AI creates life like humans, statements, and supporting materials. For me, synthetic identities are phishing attacks on steroids. People are fooled by voice, video and voice, email, and SMS attacks. Some companies hire people who are not people because AI technology advances in real time. How does one fix this? The solution is, and I quote:

Multi-factor verification incorporating behavioral signals beyond static identity attributes, plus anomaly detection trained on synthetic identity patterns.

But when AI synthetic identity technology improves how will today’s solutions deal with the new spin from bad actors? Answer: They have not, cannot, and will not with the present solutions.

The last threat I will highlight is obfuscation attacks or fiddling with AI prompts. Developers of LLMs are in a cat and mouse game. Right now the mice are winning for one simple reason: The wizards developing these systems don’t have the perspective of bad actors. LLM developers just want to ship and slap on fixes that stop a discovered or exposed attack vector. What’s the fix? The solution, and I quote, is:

Wrap retrieved data in delimiters, instructing the model to treat content as data only. Strip control tokens from vector database chunks before they enter the context window.

How does this work when new attacks occur and are discovered? Not very well because the burden falls upon the outfit using the LLM. Do licensees have appropriate technical resources to “wrap retrieved data in delimiters” when the exploit may just work but no one is exactly sure why. Who knew that prompts in iambic pentameter or gibberish with embedded prompts ignore “guardrails”? The realtime is the killer. Licensees are not equipped to react and I am not confident smart AI cyber security systems are either.

Net net: Amazon Web Services will deal with these threats. Believe it or not. (I don’t believe it, but your mileage may vary.)

Stephen E Arnold, January 14, 2026

Apple Google Prediction: Get Real, Please

January 13, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Prediction is a risky business. I read “No, Google Gemini Will Not Be Taking Over Your iPhone, Apple Intelligence, or Siri.” The write up asserts:

Apple is licensing a Google Gemini model to help make Apple Foundation Models better. The deal isn’t a one-for-one swap of Apple Foundation Models for Gemini ones, but instead a system that will let Apple keep using its proprietary models while providing zero data to Google.

Yes, the check is in the mail. I will jump on that right now. Let’s have lunch.

image

Two giant creatures find joy in their deepening respect and love for one another. Will these besties step on the ants and grass under their paws? Will they leave high-value information on the shelf? What a beautiful relationship! Will these two get married? Thanks, Venice.ai. Good enough.

Each of these breezy statements sparks a chuckle in those who have heard direct statements and know that follow through is unlikely.

The article says:

Gemini is not being weaved into Apple’s operating systems. Instead, everything will remain Apple Foundation Models, but Gemini will be the "foundation" of that.

Yep, absolutely. The write up presents this interesting assertion:

To reiterate: everything the end user interacts with will be Apple technology, hosted on Apple-controlled server hardware, or on-device and not seen by Apple or anybody else at all. Period.

Plus, Apple is a leader in smart software. Here’s the article’s presentation of this interesting idea:

Apple has been a dominant force in artificial intelligence development, regardless of what the headlines and doom mongers might say. While Apple didn’t rush out a chatbot or claim its technology could cause an apocalypse, its work in the space has been clearly industry leading. The biggest problem so far is that the only consumer-facing AI features from Apple have been lackluster and got a tepid consumer response. Everything else, the research, the underlying technology, the hardware itself, is industry leading.

Okay. Several observations:

  1. Apple and Google have achieved significant market share. A basic rule of online is that efficiency drives the logic of consolidation. From my point of view, we now have two big outfits, their markets, their products, and their software getting up close and personal.
  2. Apple and Google may not want to hook up, but the financial upside is irresistible. Money is important.
  3. Apple, like Telegram, is taking time to figure out how to play the AI game. The approach is positioned as a smart management move. Why not figure out how to keep those users within the friendly confines of two great companies? The connection means that other companies just have to be more innovative.

Net net: When information flows through online systems, metadata about those actions presents an opportunity to learn more about what users and customers want. That’s the rationale for leveraging the information flows. Words may not matter. Money, data, and control do.

Stephen E Arnold, January 13, 2026

Next Page »

  • Archives

  • Recent Posts

  • Meta