US Government Procurement Changes: Like Silicon Valley, Really? I Mean For Sure?
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I learned about the US Department of War overhaul of its procurement processes by reading “The Department of War Just Shot the Accountants and Opted for Speed.” Rumblings of procurement hassles have been reaching me for years. The cherished methods of capture planning, statement of work consulting, proposal writing, and evaluating bids consume many billable hours, most of them logged by consultants. The processes involve thousands of government professionals: Lawyers, financial analysts, technical specialists, administrative professionals, and consultants. I can’t omit the consultants.
According to the essay written by Steve Blank (a person unfamiliar to me):
Last week the Department of War finally killed the last vestiges of Robert McNamara’s 1962 Planning, Programming, and Budgeting System (PPBS). The DoW has pivoted from optimizing cost and performance to delivering advanced weapons at speed.
The write up provides some of the history of the procurement process enshrined in such documents as the FAR, the Federal Acquisition Regulation. If you want the details, I urge you to read Mr. Blank’s essay in full.
I want to highlight what I think is an important point about the recent changes. Mr. Blank writes:
The war in Ukraine showed that even a small country could produce millions of drones a year while continually iterating on their design to match changes on the battlefield. (Something we couldn’t do.) Meanwhile, commercial technology from startups and scaleups (fueled by an immense pool of private capital) has created off-the-shelf products, many unmatched by our federal research development centers or primes, that can be delivered at a fraction of the cost/time. But the DoW acquisition system was impenetrable to startups. Our Acquisition system was paralyzed by our own impossible risk thresholds, its focus on process not outcomes, and became risk averse and immoveable.
Based on my experience, much of it gained working as a consultant on different US government projects, the horrific “special operation” delivered a number of important lessons about modern warfare. Reading between the lines of the passage cited above, three important items of information emerged from what I view as an illegal international event:
- Under certain conditions human creativity can blossom and then grow into major business operations. I would suggest that Ukraine’s innovations in the use of drones, in how the drones are deployed under battle conditions, and in the basic “drone idea” itself have reduced the effectiveness of certain traditional methods of warfare.
- Despite disruptions to transportation and certain third-party products, Ukraine demonstrated that just-in-time production facilities can be made operational in weeks, sometimes days.
- The combination of innovative ideas, battlefield testing, and right-sized manufacturing demonstrated that a relatively small country can become a world-class leader in modern warfighting equipment, software, and systems.
Russia, with its ponderous planning and procurement process, has become the fall guy to a president who was once a stand-up comedian. Who is laughing now? Not the perpetrators of the “special operation.” The joke, as some might say, is on the individuals who created it.
Mr. Blank states about the new procurement system:
To cut through the individual acquisition silos, the services are creating Portfolio Acquisition Executives (PAEs). Each Portfolio Acquisition Executive (PAE) is responsible for the entire end-to-end process of the different Acquisition functions: Capability Gaps/Requirements, System Centers, Programming, Acquisition, Testing, Contracting and Sustainment. PAEs are empowered to take calculated risks in pursuit of rapidly delivering innovative solutions.
My view of this type of streamlining is that it will become less flexible over time. I am not sure when the ossification will commence, but bureaucratic systems, no matter how well designed, morph and become traditional bureaucratic systems. I am not going to trot out the academic studies about the impact of process, auditing, and legal oversight on any efficient process. I will plainly state that the bureaucracies to which I have been exposed in the US, Europe, and Asia are fundamentally the same.

Can the smart software helping enable the Silicon Valley approach to procurement handle the load and keep the humanoids happy? Thanks, Venice.ai. Good enough.
Ukraine is an outlier when it comes to the organization of its warfighting technology. Perhaps other countries, if subjected to a similar type of “special operation,” would behave as Ukraine has. Whether I was giving lectures for the Japanese government or dealing with issues related to materials science for an entity on Clarendon Terrace, the approach, rules, regulations, special considerations, etc. were generally the same.
The question becomes, “Can a new procurement system in an environment not at risk of extinction demonstrate the speed, creativity, agility, and productivity of the Ukrainian model?”
My answer is, “No.”
Mr. Blank writes before he digs into the new organizational structure:
The DoW is being redesigned to now operate at the speed of Silicon Valley, delivering more, better, and faster. Our warfighters will benefit from the innovation and lower cost of commercial technology, and the nation will once again get a military second to none.
This is an important phrase: Silicon Valley. It is the model for making the US Department of War into a more flexible and speedy entity, particularly with regard to procurement, the use of smart software (artificial intelligence), and management methods honed since Bill Hewlett and Dave Packard sparked the garage myth.
Silicon Valley has been a model for many organizations and countries. However, who thinks much about the Silicon Fen? I sure don’t. I would wager a slice of cheese that many readers of this blog post have never, ever heard of Sophia Antipolis. Everyone wants to be a Silicon Valley: a high-technology, move-fast-and-break-things outfit.
Yet we have but one Silicon Valley. Now the question is, “Will the US government be a successful Silicon Valley, or will it fizzle out?” Based on my experience, I want to go out on a very narrow limb and suggest:
- Cronyism was important to Silicon Valley, particularly for funding and lawyering. The “new” approach to Department of War procurement is going to follow a similar path.
- As the stakes go up, growth becomes more important than fiscal considerations. As a result, the cost of becoming bigger, faster, and cheaper spikes. Those costs kill off the majority of Silicon Valley start-ups. The failure rate is high, and it is exacerbated by the need of the winners to continue to win.
- Silicon Valley management styles produce some negative consequences. Often overlooked are such byproducts of modern management methods as [a] a lack of common sense, [b] decisions based on entitlement or short-term gains, and [c] a general indifference to the social consequences of an innovation, a product, or a service.
If I look forward based on my deeply flawed understanding of this Silicon Valley revolution, I see monopolistic behavior emerging. Bureaucracies will emerge because people working for other people create rules, procedures, and processes to minimize the craziness of go-fast-and-break-things activities. Workers create bureaucracies to deal with chaos, not cause chaos.
Mr. Blank’s essay strikes me as generally supportive of this reinvention of the Federal procurement process. He concludes with:
Let’s hope these changes stick.
My personal view is that they won’t. Ukraine created a wartime Silicon Valley in a real-time, shoot-and-survive conflict. The urgency is not parked in a giant building in Washington, DC, or a Silicon Valley dream world. A more pragmatic approach is to partition procurement methods: Apply Silicon Valley thinking to certain classes of procurement; modify the FAR to streamline certain processes; and leave some of the procedures unchanged.
AI is a go-fast-and-break-things technology. It also hallucinates. Drones from Silicon Valley companies don’t work in Ukraine. I know because someone with first-hand information told me. What will the new methods of procurement deliver? Answer: Drones that won’t work in a modern asymmetric conflict. With decisions involving AI, I sure don’t want to find myself in a situation in which smart software makes stuff up or operates on digital mushrooms.
Stephen E Arnold, November 12, 2025
Someone Is Not Drinking the AI-Flavored Kool-Aid
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The future of AI is in the hands of the digital P.T. Barnums. A day or so ago, I wrote about Copilot in Excel. Allegedly a spreadsheet can be enhanced by Microsoft. Google is beavering away with a new enthusiasm for content curation. This is a short step to weaponizing what is indexed, what is available to Googlers and Mama, and what is provided to Google users. Heroin dealers do not provide consumer-oriented labels listing ingredients.

Thanks, Venice.ai. Good enough.
Here’s another example of this type of soft control: “I’ll Never Use Grammarly Again — And This Is the Reason Every Writer Should Care.” The author makes clear that Grammarly, developed and operated from Ukraine, now wants to change her writing style. The essay states:
What once felt like a reliable grammar checker has now turned into an aggressive AI tool always trying to erase my individuality.
Yep, that’s what AI companies and AI repackagers will do: Use the technology to “improve” the human. What a great idea, right? Just erase the fingerprints of the human. Introduce AI drivel and lowest-common-denominator thinking. Human, the AI says, take a break. Go to the yoga studio or grab a latte. AI has you covered.
The essay adds:
Superhuman [Grammarly’s AI solution for writers] wants to manage your creative workflow, where it can predict, rephrase, and automate your writing. Basically, a simple tool that helped us write better now wants to replace our words altogether. With its ability to link over a hundred apps, Superhuman wants to mimic your tone, habits, and overall style. Grammarly may call it personalized guidance, but I see it as data extraction wrapped with convenience. If we writers rely on a heavily AI-integrated platform, it will kill the unique voice, individual style, and originality.
One human dumped Grammarly, writing:
I’m glad I broke up with Grammarly before it was too late. Well, I parted ways because of my principles. As a writer, my dedication is towards original writing, and not optimized content.
Let’s go back to ubiquitous AI (some of it you know is there; other AI operates in dark-pattern mode). The object of the game for the AI crowd is to extract revenue and control information. By weaponizing information and making life easy, just think who will be in charge of many things in a few years. If you think humans will rule the roost, you are correct. But the number of humans pushing the buttons will be very small. These individuals have zero self-awareness and believe that their ideas — no matter how far out and crazy — are the right way to run the railroad.
I am not sure most people will know that they are on a train taking them to a place they did not know existed and don’t want to visit.
Well, tough luck.
Stephen E Arnold, November 12, 2025
Temptation Is Powerful: Will Big AI Tech Take the Bait?
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I have been using the acronym BAIT for “big AI tech” in my talks. I find it an easy way to refer to the companies with the money and the drive to try to take over the use of software to replace most humans’ thinking. I want to raise a question, “Will BAIT take the bait?”
What is this lower case “bait”? In my opinion, lower case “bait” refers to information people and organizations would consider proprietary, secret, off limits, and out of bounds. Digital data about health, contracts, salaries, inventions, interpersonal relations, and similar categories of information would fall into the category of “none of your business” or something like “it’s secret.”

A calculating predator is about to have lunch. Thanks, Venice.ai. Not what I specified but good enough like most things in 2025.
Consider what happens when a large BAIT outfit gains access to the contents of a user’s mobile device, a personal computer, storage devices, images, and personal communications. What can a company committed to capturing information to make its smart software models more intelligent and better informed learn from these types of data? What if that data acquisition takes place in real time? In an organization or a personal life situation, an individual entity may not be able to cross-tabulate certain data. The information is in the organization or in the data stream of a household, but it is not connected. Data processing can acquire the information, perform the calculations, and “identify” the significant items. These can be used to predict or guess what response, service, action, or investment can be made.
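To make the cross-tabulation idea concrete, here is a minimal, purely hypothetical sketch. The data, the streams, and the co-occurrence rule are all inventions for illustration, not anything a BAIT outfit has published:

```python
# Hypothetical sketch: two disconnected personal data streams, each innocuous
# on its own, cross-tabulated by date to surface a "significant item."
from collections import defaultdict

# Stream 1: calendar entries scraped from mail/calendar data (invented)
calendar = [
    ("2025-11-03", "cardiology consult"),
    ("2025-11-10", "cardiology follow-up"),
]

# Stream 2: payment records from a spreadsheet or banking app (invented)
payments = [
    ("2025-11-03", "pharmacy", 187.40),
    ("2025-11-10", "pharmacy", 187.40),
]

# Cross-tabulate by date: items that co-occur across streams get connected.
by_date = defaultdict(list)
for date, event in calendar:
    by_date[date].append(event)
for date, merchant, amount in payments:
    by_date[date].append(f"{merchant} ${amount:.2f}")

for date, items in sorted(by_date.items()):
    if len(items) > 1:  # co-occurrence across streams = a "significant item"
        print(date, "->", items)  # fodder for predicting a response or offer
```

Neither stream says much alone; connected, they invite a guess about a health condition and its recurring cost that the person never disclosed.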
Microsoft’s efforts with Copilot in Excel raise the possibility and the opportunity to examine an organization’s or a person’s financial calculations as part of a routine “let’s make the Excel experience better.” If you don’t know whether data are local or on a cloud provider’s server, access to that information may not seem important to you. But are those data important to a BAIT outfit? I think those data are tempting, desirable, and ultimately necessary for the AI company to “learn.”
One possible move is for the BAIT company to tap into personal data while offering assurances that these types of information are not fodder for training smart software. Can people resist temptation? Some can. But others, with large amounts of money at stake, can’t.
Let’s consider a recent news announcement and then ask some hypothetical questions. I am just asking questions, and I am not suggesting that today’s AI systems are sufficiently organized to make use of the treasure trove of secret information. I do have enough experience to know that temptation is often hard to resist in a certain type of organization.
The article I noted today (November 6, 2025) is “Gemini Deep Research Can Tap into Your Gmail and Google Drive.” The write up reports what I assume to be accurate data:
After adding PDF support in May [2025], [Google] Gemini Deep Research can now directly tap information stored in your Gmail and Google Chat conversations, as well as Google Drive files…. Now, [Google] Deep Research can “draw on context from your [Google] Gmail, Drive and Chat and work it directly into your research.” [Google] Gemini will look through Docs, Slides, Sheets and PDFs stored in your Drive, as well as emails and messages across Google Workspace. [Emphasis added by Beyond Search for clarity]
Can Google resist the mouth watering prospect of using these data sources to train its large language models and its experimental AI technology?
There are some other hypotheticals to consider:
- What informational boundaries is Google allegedly crossing with this omnivorous approach to information?
- How can Google put meaningful barriers around certain information to prevent data leakage?
- What recourse do people or organizations have if Google’s smart software exposes sensitive information to a party not authorized to view these data?
- How will Google’s advertising algorithms use such data to shape or weaponize information for an individual or an organization?
- Will individuals know when a secret has been incorporated in a machine generated report for a government entity?
Somewhere in my reading I recall a statement attributed to Napoleon. My recollection is that in his letters or some other biographical document about Napoleon’s approach to war, he allegedly said something like:
Information is nine tenths of any battle.
The BAIT organizations are moving with purpose and possibly extreme malice toward systems and methods that give them access to data never meant to be used to train smart software. If Copilot in Excel happens and if Google processes the data in its grasp, will these types of organizations be able to resist juicy, unique, high-calorie morsels of zeros and ones?
I am not sure these types of organizations can or will exercise self control. There is money and power and prestige at stake. Humans have a long track record of doing some pretty interesting things. Is this omnivorous taking of information wrapped in making one’s life easier an example of overreach?
Will BAIT outfits take the bait? Good question.
Stephen E Arnold, November 12, 2025
Innovation Cored, Diced, Cooked and Served As a Granny Scarf
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I do not pay much attention to Vogue, once a giant, fat fashion magazine. However, my trusty newsfeed presented this story to me this morning at 6:26 am US Eastern: “Apple and Issey Miyake Unite for the iPhone Pocket. It’s a Moment of Connecting the Dots.” I had no idea what an Issey Miyake was. I navigated to Yandex.com (a more reliable search service than Google, which is going to bail out the sinking Apple AI rowboat) and learned:
Issey Miyake … the brand name under which designer clothing, shoes, accessories and perfumes are produced.
Okay, a Japanese brand selling collections of clothes, women’s clothes with pleating, watches, perfumes, and a limited edition of an Evian mineral water in bottles designed by someone somewhere, probably Southeast Asia.
But here’s the word that jarred me: Moment. A moment?
The Vogue write up explains:
It’s a moment of connecting the dots.
Moment? Huh.
Upon further investigation, the innovation is a granny scarf; that is, a knitted garment with a pocket for an iPhone. I poked around and here’s what the “moment” looks like:
Source: Engadget, November 2025
I barely recall my great grandmother (my father’s mother had a mother; this person was called “Granny” or “Gussy,” and I know she was alive in 1958). She died at the age of 102 or 103. She knitted and tatted scarfs, odd little white cloths called antimacassars, and small circular or square items called doilies (singular “doily”).
Apple and the Japanese fashion icon have inadvertently emulated some of the outputs of my great grandmother “Granny” or “Gussy.” Were she, my grandmother, and my father alive, one or all of them would have taken legal action. But time makes us fools, and “the spirits of the wise sit in the clouds and mock” scarfs with pouches like an NBA-bound baby kangaroo’s.
But the innovation, which may be Miyake’s, Apple’s, or a combo brainstorm of Miyake and Apple, comes in short and long sizes. My Granny cranked out her knit confections like a laborer in a woolen mill in Ipswich in the 19th century. She gave her outputs away.
You can acquire this pinnacle of innovation for US $150 or US $230.
Several observations:
- Apple’s skinny phone flopped; Apple’s AI flopped. Therefore, Apple is into knitted scarfs to revivify its reputation for product innovation. Yeah, innovative.
- Next to Apple’s renaming Apple iTV as Apple TV, one may ask, “Exactly what is going on in Cupertino other than demanding that I log into an old iPhone I use to listen to podcasts?” Desperation gives off an interesting vibe. I feel it. Do you?
- Apple does good hardware. It does not do soft goods with the same élan. Has its leadership lost the thread?
Smell that desperation yet? Publicity hunger, the need to be fashionable and with it, and taking the hard edges off a discount Mac laptop.
Net net: I like the weird pink version, but why didn’t the geniuses behind the Genius Bar use the zippy orange of the new candy-bar-shaped but otherwise indistinguishable mobile device rolled out a short time ago? Orange? Not in the scarf palette.
Granny’s white did not make the cut.
Stephen E Arnold, November 11, 2025
Marketers and AI: Killing Sales and Trust. Collateral Damage? Meh
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
“The Trust Collapse: Infinite AI Content Is Awful” is an impassioned howl in the arctic space of zeros and ones. The author is what one might call collateral damage. When a UPS airplane drops a wing on take-off, the people in the path of the fuel-spewing aircraft are definitely not thrilled.
AI is a bit like a UPS-type of aircraft arrowing to an unhappy end, at least for the author of the cited article. The write up makes clear that smart software can vaporize a writer of marketing collateral. The cost for the AI is low. The penalty for the humanoid writer is collateral damage.

Let’s look at some of the points in the cited essay:
For the first time since, well, ever, the cost of creating content has dropped to essentially zero. Not “cheaper than before”, but like actually free. It’s so easy to generate a thousand blog posts or ten thousand ”personalized” emails and it barely costs you anything (for now).
Yep, marketing content appears on some of the lists I have mentioned in this blog. Usually customer service professionals top the list, but advertising copywriters and email pitch writers usually appear in the top five of AI-terminated jobs.
The write up explains:
What they [a prospect] actually want to know is “why the hell would I buy it from you instead of the other hundred companies spamming my inbox with identical claims?” And because everything is AI slop now, answering that question became harder for them.
The idea is that expensive, slow, time-consuming relationship selling is eroding under a steady stream of low-cost, high-volume marketing collateral produced by … smart software. Yes, AI, and lousy AI at that.
The write up provides an interesting example of how low cost, high volume AI content has altered the sales landscape:
Old World (…-2024):
- Cost to produce credible, personalized outreach: $50/hour (human labor)
- Volume of credible outreach a prospect receives: ~10/week
- Prospect’s ability to evaluate authenticity: Pattern recognition works ~80% of time
New World (2025-…):
- Cost to produce credible, personalized outreach: effectively 0
- Volume of credible outreach a prospect receives: ~200/week
- Prospect’s ability to evaluate authenticity: Pattern recognition works ~20% of time
The signal-to-noise ratio has hit a breaking point where the cost of verification exceeds the expected value of engagement.
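A back-of-the-envelope calculation shows why the breaking point is real. The volumes and recognition rates below come from the lists above; the dollar figures (the payoff of one genuine conversation, the cost of vetting one message) and the assumption of five genuine messages per week in each pile are mine:

```python
# Expected value of engaging with one cold outreach message, old world vs. new.
# Volumes and recognition rates are from the essay; dollar values and the
# "five genuine messages per week" figure are illustrative assumptions.

def ev_per_message(recognition, genuine_fraction, value_if_genuine, vetting_cost):
    # Value is captured only when a message is genuine AND pattern
    # recognition correctly lets it through; vetting cost is always paid.
    return recognition * genuine_fraction * value_if_genuine - vetting_cost

VALUE = 500.0  # assumed payoff of one genuine vendor conversation
COST = 25.0    # assumed time-cost of vetting a single message

old = ev_per_message(0.80, 5 / 10, VALUE, COST)    # ~10 messages/week, 80% recognition
new = ev_per_message(0.20, 5 / 200, VALUE, COST)   # ~200 messages/week, 20% recognition

print(f"old world: {old:+.2f} per message")  # +175.00: engaging pays
print(f"new world: {new:+.2f} per message")  # -22.50: ignoring everything is rational
```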
So what? The write up answers this question:
You’re representing a brand. And your brand must continuously earn that trust, even if you blend in AI-powered relevance. We still want that unmistakable human leadership.
Yep, that’s it. What’s the fix? What does a marketer do? What does a customer looking for a product or service to solve a problem do?
There’s no answer. That’s the way smart software from Big AI Tech (BAIT) is supposed to work. The questions, however, remain: “What’s the fix? What’s your recommendation, dear author? Who do you suggest should solve this problem you present in a compelling way?”
Crickets.
AI systems and methods disintermediate as a normal function. The author of the essay does not offer a solution. Why? There isn’t one short of an AI implosion. At this time, too many big time people want AI to be the next big thing. They only look for profits and growth, not collateral damage. Traditional work is being rubble-ized. The build up of opportunity will take place where the cost of redevelopment is low. Some new business may be built on top of the remains of older operations, but moving a few miles down the road may be a more appealing option.
Stephen E Arnold, November 11, 2025
Agentic Software: Close Enough for Horseshoes
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I read a document that I would describe as tortured. The lingo was trendy. The charts and graphs sported trendy colors. The data gathering seemed to be a mix of “interviews” and other people’s research. Plus the write up was a bit scattered. I prefer the rigidity of old-fashioned organization. Nevertheless, I did spot one chunk of information that I found interesting.
The title of the research report (sort of an MBA- or blue chip consulting firm-type of document) is “State of Agentic AI: Founder’s Edition.” I think it was issued in March 2025, but with backdating popular, who knows. I had the research report in my files, and yesterday (November 3, 2025) I was gathering some background information for a talk I am giving on November 6, 2025. The document walked through data about the use of software to replace people. Actually, the smart software agents generally do several things according to the agent vendors’ marketing collateral. The cited document restated these items this way:
- Agents are set up to reach specific goals
- Agents are used to reason which means “break down their main goal … into smaller manageable tasks and think about the next best steps.”
- Agents operate without any humans in India or Pakistan working invisibly behind the scenes
- Agents can consult a “memory” of previous tasks, “experiences,” work, etc.
Agents, when properly set up and trained, can perform about as well as a human. I came away from the tan and pink charts with a ballpark figure of 75 to 80 percent reliability. Close enough for horseshoes? Yep.
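That ballpark matters more than it looks, because agentic workflows chain steps. A quick sketch of the arithmetic, assuming each step succeeds independently at 80 percent (the independence assumption is mine; the report gives only the per-task figure):

```python
# How per-step reliability compounds across a chained agent workflow.
# The 80% per-step figure is the report's ballpark; treating steps as
# independent is my simplifying assumption.

per_step = 0.80
for steps in (1, 3, 5, 10):
    end_to_end = per_step ** steps
    print(f"{steps:>2} chained steps -> {end_to_end:.0%} end-to-end success")
# 1 -> 80%, 3 -> 51%, 5 -> 33%, 10 -> 11%. Horseshoes get heavy fast.
```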
There is a rundown of pricing options. Pricing seems to be a challenge for the vendors, with API usage charges and traditional software licensing used by a third of the agentic vendors.
Now here’s the most important segment from the document:
We asked founders in our survey: “What are the biggest issues you have encountered when deploying AI Agents for your customers? Please rank them in order of magnitude (e.g. Rank 1 assigned to the biggest issue)” The results of the Top 3 issues were illuminating: we’ve frequently heard that integrating with legacy tech stacks and dealing with data quality issues are painful. These issues haven’t gone away; they’ve merely been eclipsed by other major problems. Namely:
- Difficulties in integrating AI agents into existing customer/company workflows, and the human-agent interface (60% of respondents)
- Employee resistance and non-technical factors (50% of respondents)
- Data privacy and security (50% of respondents).
Here’s the chart tallying the results:

Several ideas crossed my mind as I worked through this research data:
- Getting the human-software interface right is a problem. I know from my work at places like the University of Michigan, the Modern Language Association, and Thomson-Reuters that people have idiosyncratic ways of doing their jobs. Two people with similar jobs add the equivalent of extra dashboard lights and yard gnomes to the process. Agentic software at this time is not particularly skilled in the dashboard-LED and concrete-gnome facets of a work process. Maybe someday, but right now, that’s a common deal breaker. Employees say, “I want my concrete unicorn, thank you.”
- Humans say they are into mobile phones, smart in-car entertainment systems, and customer service systems that do not deliver any customer service whatsoever. As somebody from Harvard said in a lecture: “Change is hard.” Yeah, and it may not get any easier if the humanoid thinks he or she will be allowed to find a future pushing burritos at the El Nopal Restaurant.
- Agentic software vendors assume that licensees will allow their creations to suck up corporate data, trusting that the agents will keep company secrets and not disappoint customers by presenting proprietary information to a competitor. Security in “regular” enterprise software is a bit of a challenge. Security in a new type of agentic software is likely to be the equivalent of a ride on a roller coaster which has tossed several middle school kids to their deaths and cut off the foot of a popular female. She survived, but now has a non-smart, non-human replacement.
Net net: Agentic software will be deployed. Most of its work will be good enough. Why will this be tolerated in personnel, customer service, loan approvals, and similar jobs? The answer is reduced headcounts. Humans cost money to manage. Humans want health care. Humans want raises. Software which is good enough seems to cost less. Therefore, welcome to the agentic future.
Stephen E Arnold, November 11, 2025
Sure, Sam. We Trust You with Our Data
November 11, 2025
OpenAI released a new AI service called “company knowledge” that collects and analyzes all information within an organization. Why does this sound familiar? Because malware does the same thing for nefarious purposes. The story comes from Computer World and is entitled, “OpenAI’s Company Knowledge Wants Access To All Of Your Internal Data.”
A major problem is that OpenAI is still a relatively young company, and organizations are reluctant to share all of their data with it. AI is still an untested tool, and so much can go wrong when it comes to regulating security and privacy. Here’s another clincher in the deal:
“Making granting that trust yet more difficult is the lack of clarity around the ultimate OpenAI business model. Specifically, how much OpenAI will leverage sensitive enterprise data in terms of selling it, even with varying degrees of anonymization, or using it to train future models.”
What does Jeff Pollard, vice president and principal analyst at Forrester, say?
“The capabilities across all these solutions are similar, and benefits exist: Context and intelligence when using AI, more efficiency for employees, and better knowledge for management.”
But there’s a big but that Pollard makes clear:
“Data privacy, security, regulatory, compliance, vendor lock-in, and, of course, AI accuracy and trust issues. But for many organizations, the benefits of maximizing the value of AI outweigh the risks.”
The current AI situation is that applications are transitioning from isolated tools to connected agents and agentic systems developed to maximize value for users. In other words, according to Pollard, “high risk and high reward.” The rewards are tempting, but the consequences are also alarming.
Experts say that companies won’t place all of their information and proprietary knowledge in the hands of a young company and untested technology. They could, but there aren’t any regulations to protect them.
OpenAI should practice with its own company first, then see what happens.
Whitney Grace, November 11, 2025
Microsoft: Desperation or Inspiration? Copilot, Have We Lost an Engine?
November 10, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
Microsoft is an interesting member of the high-tech in-crowd. It is the oldest of the Big Players. It invented Bob and Clippy. It has not cracked the weirdness of Word’s numbering. Updates routinely kill services. I think it would be wonderful if Task Manager did not spawn multiple instances of itself.
Furthermore, Microsoft, the cloudy giant with oodles of cash, ignited the next-big-thing frenzy a couple of years ago with the announcement that Bob and Clippy would operate on AI steroids. Googzilla experienced the equivalent of a traumatic stress injury and blinked red, yellow, and orange for months. Crisis bells rang. Klaxons interrupted Foosball games. Napping in pods became difficult.
Imagine what’s happening at Microsoft now that this Sensor Tower chart is popping up in articles like “Microsoft Bets on Influencers Like Alix Earle to Close the Gap With ChatGPT.” Here’s the gasp inducer:

Source: The chart comes from Sensor Tower. It carries Bloomberg branding. But it appeared in an MSN.com article. Who crafted the data? How were the data assembled? What mathematical processes were used to produce such nice round numbers? I have no clue, but let’s assume those fat, juicy round numbers are “real,” not the weird imaginary “i” things electrical engineers enjoy each day.
The write up states:
Microsoft Corp., eager to boost downloads of its Copilot chatbot, has recruited some of the most popular influencers in America to push a message to young consumers that might be summed up as: Our AI assistant is as cool as ChatGPT. Microsoft could use the help. The company recently said its family of Copilot assistants attracts 150 million active users each month. But OpenAI’s ChatGPT claims 800 million weekly active users, and Google’s Gemini boasts 650 million a month. Microsoft has an edge with corporate customers, thanks to a long history of selling them software and cloud services. But it has struggled to crack the consumer market — especially people under 30.
Microsoft came up with a novel solution to its being fifth in the smart software league table. Is Microsoft developing useful AI-infused services for Notepad? Yes. Is Microsoft pushing Copilot and its hallucinatory functions into Excel? Yes. Is Microsoft using Copilot to help partners code widgets for their customers to use in Azure? Yeah, sort of, but I have heard that Anthropic Claude has some Certified Partners as fans.
The Microsoft professionals, the leadership, and the legions of consultants have broken new marketing ground. Microsoft is paying social media influencers to pitch Microsoft Copilot as the one true smart software. Forget that “God is my copilot” meme. It is now “Meme makers are Microsoft’s Copilot.”
The write up includes this statement about this stunningly creative marketing approach:
“We’re a challenger brand in this area, and we’re kind of up and coming,” said Consumer Chief Marketing Officer Yusuf Mehdi.
Excuse me, Microsoft was first when it announced its deal with OpenAI a couple of years ago. Microsoft was the only game in town. OpenAI was a Silicon Valley start up with links to Sam AI-Man and Mr. Tesla. Now Microsoft, a giant outfit, is “up and coming.” No, I would suggest Microsoft is stalled and coming down.
A marketing professor from that university-slash-consulting outfit New York University is quoted in the cited write up. Here is that snippet:
Anindya Ghose, a marketing professor at New York University’s Stern School of Business, expressed surprise that Microsoft is using lifestyle influencers to market Copilot. But he can see why the company would be attracted to their cult followings. “Even if the perceived credibility of the influencer is not very high but the familiarity with the influencers is high, there are some people who would be willing to bite on that apple,” Ghose said in an interview.
The article presents proof that the Microsoft creative light saber has delivered. Here’s that passage:
Mehdi cited a video Earle posted about the new Copilot Groups feature as evidence that the campaign is working. “We can see very much people say, ‘Oh, I’m gonna go try that,’ and we can see the usage it’s driving.” The video generated 1.9 million views on Earle’s Instagram account and 7 million on her TikTok. Earle declined to comment for this story.
Following my non-creative approach, here are several observations:
- From first to fifth. I am not sure social media influencers are likely to address the reason the firm associated with Clippy occupies this spot.
- I am not sure Microsoft knows how to fix the “problem.” My hunch is that the Softies see the issue as one that is the fault of the users. It’s either the Russian hackers or the users of Microsoft products and services. Yeah, the problem is not ours.
- Microsoft, like Apple and Telegram, is struggling to graft smart software onto ageing platforms, software, and systems. Google is doing a better job, but it is in second place. Imagine that. Google in the “place” position in the AI Derby. But Google has its own issues to resolve, and it is thinking about putting data centers in space, keeping its allegedly booming Web search business cranking along at top speed, and sucking enough cash from online advertising to pay for its smart software ambitions. Those wizards are busy. But Googzilla is in second place and coping with acute stress reaction.
Net net: The big players have put huge piles of casino chips in the AI poker game. Desperation takes many forms. The sport of extreme marketing is just one of the disorder’s manifestations. Watch it on TikTok-type services.
Stephen E Arnold, November 10, 2025
Train Models on Hostility Oriented Slop and You Get Happiness? Nope, Nastiness on Steroids
November 10, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Informed discourse, factual reports, and the type of rhetoric loved by Miss Spurling in 1959 can have a positive effect. I wasn’t sure at the time if I wanted to say “Whoa, Nelly!” to my particular style of writing and speaking. She did her best to convert me from a somewhat weird 13 year old into a more civilized creature. She failed, I fear.

A young student is stunned by the criticism of his approach to a report. An “F”. Failure. The solution is not to listen. Some AI vendors take the same approach. Thanks, Venice.ai, good enough.
When I read “Study: AI Models Trained on Clickbait Slop Result In AI Brain Rot, Hostility,” I thought about Miss Spurling and her endless supply of red pencils. Algorithms, it seems, have some of the characteristics of an immature young person. Feed that young person some unusual content, and you get some wild and crazy outputs.
The write up reports:
To see how these [large language] models would “behave” after subsisting on a diet of clickbait sewage, the researchers cobbled together a sample of one million X posts and then trained four different LLMs on varying mixtures of control data (long form, good faith, real articles and content) and junk data (lazy, engagement chasing, superficial clickbait) to see how it would affect performance. Their conclusion isn’t too surprising; the more junk data that is fed into an AI model, the lower quality its outputs become and the more “hostile” and erratic the model is …
But here’s the interesting point:
They also found that after being fed a bunch of ex-Twitter slop, the models didn’t just get “dumber”, they were (shocking, I know) far more likely to take on many of the nastier “personality traits” that now dominate the right wing troll platform …
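The experimental setup is easy to picture. Here is a toy sketch of building training blends at varying junk ratios; the placeholder documents and the specific ratios are mine, not the researchers’:

```python
# Toy version of the mixture setup described above: blend control data
# (long-form, good-faith text) with junk data (engagement-chasing clickbait)
# at varying ratios, one blend per model. Placeholder text and ratios are
# illustrative; the researchers' actual pipeline is not reproduced here.
import random

control = [f"good-faith long-form article {i}" for i in range(1000)]
junk = [f"you WON'T BELIEVE slop post {i}!!!" for i in range(1000)]

def make_blend(junk_fraction, size=1000, seed=42):
    rng = random.Random(seed)
    n_junk = int(size * junk_fraction)
    blend = rng.sample(junk, n_junk) + rng.sample(control, size - n_junk)
    rng.shuffle(blend)
    return blend

for frac in (0.0, 0.2, 0.5, 1.0):  # four blends, mirroring four trained models
    blend = make_blend(frac)
    # train_llm(blend)  # fine-tuning omitted; output quality measured afterward
    print(f"junk fraction {frac:.0%}: {len(blend)} training docs")
```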
The write up makes a point about the wizards creating smart software; to wit:
The problem with AI generally is a decidedly human one: the terrible, unethical, and greedy people currently in charge of its implementation (again, see media, insurance, countless others) — folks who have cultivated some unrealistic delusions about AI competency and efficiency (see this recent Stanford study on how rushed AI adoption in the workforce often makes people less efficient).
I am not sure that the highly educated experts at Google-type AI companies would agree. I did not agree with Miss Spurling either. On many points, she was correct. Adolescent thinking produces some unusual humans as well as interesting smart software. I particularly like some of the newer use cases; for instance, driving some people wacky or appealing to the underbelly of human behavior.
Net net: Scale up, shut up, and give up.
Stephen E Arnold, November 10, 2025
Mobile Hooking People: Digital Drugs
November 10, 2025
Most of us know that spending too much time on our phones is a bad idea, especially for young minds. We also know the companies on the other end profit from keeping us glued to the screen. The Conversation examines the ways “Smartphones Manipulate our Emotions and Trigger our Reflexes – No Wonder We’re Addicted.” Yes: try taking a 12-year-old’s mobile phone and let us know how that goes.
Of course, social media, AI chatbots, games, and other platforms have their own ways of capturing our attention. This article, however, focuses on ways the phones themselves manipulate users. Author Stephen Monteiro writes:
“As I argue in my newly published book, Needy Media: How Tech Gets Personal, our phones — and more recently, our watches — have become animated beings in our lives. These devices can build bonds with us by recognizing our presence and reacting to our bodies. Packed with a growing range of technical features that target our sensory and psychological soft spots, smartphones create comforting ties that keep us picking them up. The emotional cues designed into these objects and interfaces imply that they need our attention, while in actuality, the devices are soaking up our data.”
The write-up explores how phones’ responsive features, like facial recognition, geolocation, touchscreen interactions, vibrations and sounds, and motion and audio sensing, combine to build a potent emotional attachment. Meanwhile, devices have drastically increased how much information they collect and when. They constantly record data on everything we do on our phones and even in our environments. One chilling example: With those sensors, software can build a fairly accurate record of our sleep patterns. Combine that with health and wellness apps, and app-makers get a surprisingly comprehensive picture. Have you seen any eerily insightful ads for fitness, medical, or mindfulness products lately? Soon, they will even be able to gauge our emotions through analysis of our facial expressions. Just what we need.
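The sleep-pattern claim is easy to believe once you see how little data it takes. A hypothetical sketch (the timestamps are invented; real software would fuse screen events with motion and audio sensing):

```python
# Hypothetical: infer a sleep window from screen-unlock timestamps alone.
# Timestamps are invented; the longest overnight gap between unlocks is
# treated as the sleep period.
from datetime import datetime

unlocks = [
    "2025-11-09 22:47", "2025-11-09 23:12",  # evening scrolling
    "2025-11-10 06:58", "2025-11-10 07:15",  # morning pickup
]
times = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M") for t in unlocks)

gaps = [(later - earlier, earlier, later) for earlier, later in zip(times, times[1:])]
duration, start, end = max(gaps)  # longest gap = inferred sleep window
print(f"inferred sleep: {start:%H:%M} -> {end:%H:%M} ({duration})")
# -> inferred sleep: 23:12 -> 06:58 (7:46:00)
```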
Given a cell phone is pretty much required to navigate life these days, what are we to do? Monteiro suggests:
“We can access device settings and activate only those features we truly require, adjusting them now and again as our habits and lifestyles change. Turning on geolocation only when we need navigation support, for example, increases privacy and helps break the belief that a phone and a user are an inseparable pair. Limiting sound and haptic alerts can gain us some independence, while opting for a passcode over facial recognition locks reminds us the device is a machine and not a friend. This may also make it harder for others to access the device.”
If these measures do not suffice, one can go retro with a “dumb” phone. Apparently, that is a trend among Gen Z. Perhaps there is hope for humanity yet.
Cynthia Murrell, November 10, 2025