Censorship Gains Traction at an Individual Point
May 23, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I read a somewhat sad biographical essay titled “The Great Displacement Is Already Well Underway: It’s Not a Hypothetical, I’ve Already Lost My Job to AI For The Last Year.” The essay explains that a 40-something software engineer lost his job. Despite what strike me as heroic efforts, no offers ensued. I urge you to take a look at this essay because the push to remove humans from “work” is accelerating. I think with my 80-year-old neuro-structures that the lack of “work” will create some tricky social problems.
I spotted one passage in the essay which struck me as significant. The idea of censorship is a popular topic in central Kentucky. Quite a few groups and individuals have quite specific ideas about what books should be available for students and others to read. Here is the quote about censorship from the cited “Great Displacement” essay:
I [the author of the essay] have gone back and deleted 95% of those articles and vlogs, because although many of the ideas they presented were very forward-thinking and insightful at the time, they may now be viewed as pedestrian to AI insiders merely months later due to the pace of AI progress. I don’t want the wrong person with a job lead to see a take like that as their first exposure to me and think that I’m behind the last 24 hours of advancements on my AI takes.
Self-censorship was used to create a more timely version of the author. I have been writing articles with titles like “The Red Light on the Green Board” for years. This particular gem points out that public school teachers sell themselves and their ideas out. The prostitution analogy was intentional. I caught a bit of criticism from an educator in the public high school in which I “taught” for 18 months. Now people just ignore what I write. Thankfully my lectures about online fraud evoke a tiny bit of praise because the law enforcement professionals, crime analysts, and cyber attorneys don’t throw conference snacks at me when I offer one of my personal observations about bad actors.
The cited essay presents a person who is deleting content in order to present an “improved” or “shaped” version of himself. I think it is important to have essays, poems, technical reports, and fiction — indeed, any human-produced artifact — available in original form. These materials, I think, will provide future students and researchers with useful material to mine for insights and knowledge.
Deletion means that information is lost. I am not sure that is a good thing. What’s notable is that the censorship is being performed by the author for the express purpose of erasing the past and shaping an impression of the present individual. Will that work? Based on the information in the essay, it had not worked when I read the write-up.
Censorship may be one facet of what the author calls a “displacement.” I am not too keen on censorship regardless of the decider or the rationalization. But I am a real dinobaby, not a 40-something dinobaby like the author of the essay.
Stephen E Arnold, May 23, 2025
AI: Improving Spam Quality, Reach, and Effectiveness
May 22, 2025
It is time to update our hoax detectors. The Register warns, “Generative AI Makes Fraud Fluent—from Phishing Lures to Fake Lovers.” What a great phrase: “fluent fraud.” We can see it on a line of hats and t-shirts. Reporter Iain Thomson consulted security pros Chester Wisniewski of Sophos and Kevin Brown at NCC Group. We learn:
“One of the red flags that traditionally identified spam, including phishing attempts, was poor spelling and syntax, but the use of generative AI has changed that by taking humans out of the loop. … AI has also widened the geographical scope of spam and phishing. When humans were the primary crafters of such content, the crooks stuck to common languages to target the largest audience with the least amount of work. But, Wisniewski explained, AI makes it much easier to craft emails in different languages.”
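That first red flag is easy to demonstrate. Below is a minimal sketch, in Python, of the kind of misspelling heuristic old filters leaned on; the tiny word list and sample messages are invented for illustration, not taken from any real product. A fluent, AI-written lure scores as clean as legitimate mail.

```python
# A minimal sketch of the fading red flag: score a message by the
# fraction of words that fall outside a known-word list. The dictionary
# and sample messages below are invented for illustration only.

KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please",
    "verify", "identity", "to", "restore", "access", "now",
}

def misspelling_score(message: str) -> float:
    """Return the fraction of words not found in the word list."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

# A classic clumsy lure trips the heuristic ...
print(misspelling_score("Youre acount has be suspend, verifye your identity noww"))
# ... while a fluent, machine-written lure scores the same as legitimate mail.
print(misspelling_score("Your account has been suspended. Please verify your identity to restore access now."))
```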
For example, residents of Quebec used to spot spam by its use of European French instead of the Québécois dialect. Similarly, folks in Portugal learned to dismiss messages written in Brazilian Portuguese. Now, though, AI makes it easy to replicate regional dialects. Perhaps more eerily, it also makes it easier to replicate human empathy. Thomson writes:
“AI chatbots have proven highly effective at seducing victims into thinking they are being wooed by an attractive partner, at least during the initial phases. Wisniewski said that AI chatbots can easily handle the opening phases of the scams, registering interest and appearing to be empathetic. Then a human operator takes over and begins removing funds from the mark by asking for financial help, or encouraging them to invest in Ponzi schemes.”
Great. To make matters worse, much of this is now taking place with realistic audio fakes. For example:
“Scammers might call everybody on the support team with an AI-generated voice that duplicates somebody in the IT department, asking for a password until one victim succumbs.”
Chances are good someone eventually will. Whether video bots are a threat (yet) is up for debate. Wisniewski, for one, believes convincing, real-time video deepfakes are not quite there. But Brown reports the experienced pros at his firm have successfully created them for specific use cases. Both believe it is only a matter of time before video deepfakes become not just possible but easy to create and deploy. It seems we must soon learn to approach every interaction that is not in person with great vigilance and suspicion. How refreshing.
Cynthia Murrell, May 22, 2025
IBM CEO Replaces Human HR Workers with AskHR AI
May 21, 2025
An IBM professional asks the smart AI system, “Have I been terminated?” What if the smart software hallucinates? Yeah, surprise!
Which employees are the best to replace with AI? For IBM, ironically, it is the ones with “Human” in their title. Entrepreneur reports, “IBM Replaced Hundreds of HR Workers with AI, According to Its CEO.” But not to worry, the firm actually hired workers in other areas. We learn:
“IBM CEO Arvind Krishna told The Wall Street Journal … that the tech giant had tapped into AI to take over the work of several hundred human resources employees. However, IBM’s workforce expanded instead of shrinking—the company used the resources freed up by the layoffs to hire more programmers and salespeople. ‘Our total employment has actually gone up, because what [AI] does is it gives you more investment to put into other areas,’ Krishna told The Journal. Krishna specified that those ‘other areas’ included software engineering, marketing, and sales or roles focused on ‘critical thinking,’ where employees ‘face up or against other humans, as opposed to just doing rote process work.’”
Yes, the tech giant decided to dump those touchy-feely types in personnel. Who needs human sensitivity with issues like vacations, medical benefits, discrimination claims, or potential lawsuits? That is all just rote process work, right? The AskHR agent can handle it.
According to Wedbush analyst Dan Ives, IBM is just getting started on its metamorphosis into an AI company. What does that mean for humans in other departments? Will their jobs begin to go the way of their former colleagues’ in HR? If so, who would they complain to? Watson, are you on the job?
Cynthia Murrell, May 21, 2025
Microsoft: What Is a Brand Name?
May 20, 2025
Just the dinobaby operating without Copilot or its ilk.
I know that Palantir Technologies, a firm founded in 2003, used the moniker “Foundry” to describe its platform for government use. My understanding is that Palantir Foundry was a complement to Palantir Gotham. How different were these “platforms”? My recollection is that Palantir used home-brew software and open source to provide the raw materials from which the company shaped its different marketing packages. I view Palantir as a consulting services company with software, including artificial intelligence. The idea is that Palantir can now perform like Harris’ Analyst’s Notebook as well as deliver semi-custom, industrial-strength, unified solutions to thorny information challenges. I like to think of Palantir’s present product and service lineup as a Distributed Common Ground Information Service that generally works. About a year ago, Microsoft and Palantir teamed up to market Microsoft-Palantir solutions to governments via “bootcamps.” These combine training with “here’s what you too can deploy” programs designed to teach and sell the dream of on-time, on-target information for a range of government applications.
I read “Microsoft Is Now Hosting xAI’s Grok 3 Models” and noted this subtitle:
Grok 3 and Grok 3 mini are both coming to Microsoft’s Azure AI Foundry service.
Microsoft’s Foundry service. Is that Palantir’s Foundry, a mash-up of Microsoft and Palantir, or something else entirely? The name confuses me, and I wonder if government procurement professionals will be knocked off center as well. The “dream” of smart software is a way to close deals in some countries’ government agencies. However, keeping the branding straight is also important.
What does one call a Foundry with a Grok? Shakespeare suggested that it would smell as sweet no matter what the system was named. Thanks, OpenAI. Good enough.
The write up says:
At Microsoft’s Build developer conference today, the company confirmed it’s expanding its Azure AI Foundry models list to include Grok 3 and Grok 3 mini from xAI.
It is not clear if Microsoft will offer Grok as just another large language model or whether [a] Palantir will be able to integrate Grok into its Foundry product, [b] Microsoft Foundry is Microsoft’s own spin on Palantir’s service, which is deprecated to some degree, or [c] the deal is a way to give Palantir direct, immediate access to the Grok smart software. There are other possibilities as well; for example, Foundry is a snappy name in some government circles. Use what helps close deals with end-of-year money or revs up interest when new funds seek smart software.
The write-up points out that Sam AI-Man may be annoyed with the addition of Grok to the Microsoft toolkit. OpenAI and xAI have some history. Maybe Microsoft is positioning itself in the role of the great mediator, a digital Henry Clay of sorts?
A handful of companies exert significant influence over smart software in countries that take a Microsoft-centric approach to platform technology. Microsoft’s software and systems are so prevalent that Israel did some verbal gymnastics to make clear that Microsoft technology was not used in the Gaza conflict. This is an assertion that I find somewhat difficult to accept.
What is going on with large language models at Microsoft? My take is:
- Microsoft wants to offer a store shelf stocked with LLMs so that consulting services can generate evergreen subscription revenue.
- Customers who want something different, hot, or new can make a mark on the procurement shopping list, and Microsoft will do its version of home delivery: not quite same day, but convenient.
- Users are not likely to know what smart software is fixing up their Miltonic prose or centering a graphic on a PowerPoint slide.
What about the brand or product name “Foundry”? Answer: Use what helps close deals perhaps? Does Palantir get a payoff? Yep.
Stephen E Arnold, May 20, 2025
Salesforce CEO Criticizes Microsoft, Predicts Split with OpenAI
May 20, 2025
Salesforce CEO Marc Benioff is very unhappy with Microsoft. Windows Central reports, “Salesforce CEO Says Microsoft Did ‘Pretty Nasty’ Things to Slack and Its OpenAI Partnership May Be a Recipe for Disaster.” Writer Kevin Okemwa reminds us Benioff recently dubbed Microsoft an “OpenAI reseller” and labeled Copilot the new Clippy. Harsh words. Then Okemwa heard Benioff criticizing Microsoft on a recent SaaStr podcast. He tells us:
“According to Salesforce CEO Marc Benioff: ‘You can see the horrible things that Microsoft did to Slack before we bought it. That was pretty bad and they were running their playbook and did a lot of dark stuff. And it’s all gotten written up in an EU complaint that Slack made before we bought them.’ Microsoft has a long-standing rivalry with Slack. The messaging platform accused Microsoft of using anti-competitive techniques to maintain its dominance across organizations, including bundling Teams into its Microsoft Office 365 suite.”
But, as readers may have noticed, Teams is no longer bundled into Office 365. Score one for Salesforce. The write-up continues:
“Marc Benioff further indicated that Microsoft’s treatment of Slack was ‘pretty nasty.’ He claimed that the company often employs a similar playbook to gain a competitive advantage over its rivals while referencing ‘browser wars’ with Netscape and Internet Explorer in the late 1990s.”
How did that one work out? Not well for the once-dominant Netscape. Benioff is likely referring to Microsoft’s dirty trick of making Internet Explorer free and bundling it with Windows. This does seem to be a pattern for the software giant. In the same podcast, the CEO predicts a split between Microsoft and OpenAI. It is a recent theme of his. Okemwa writes:
“Over the past few months, multiple reports and speculations have surfaced online suggesting that Microsoft’s multi-billion-dollar partnership with OpenAI might be fraying. It all started when OpenAI unveiled its $500 billion Stargate project alongside SoftBank, designed to facilitate the construction of data centers across the United States. The ChatGPT maker had previously been spotted complaining that Microsoft doesn’t meet its cloud computing needs, shifting blame to the tech giant if one of its rivals hit the AGI benchmark first. Consequently, Microsoft lost its exclusive cloud provider status but retains the right of refusal to OpenAI’s projects.”
Who knows how long that right of refusal will last. Microsoft itself seems to be preparing for a future without its frenemy. Will Benioff crow when the partnership is completely destroyed? What will he do if OpenAI buys Chrome and pushes forward with its “everything” app?
Cynthia Murrell, May 20, 2025
Behind Microsoft’s Dogged Copilot Push
May 20, 2025
Writer Simon Batt at XDA foresees a lot of annoyance in Windows users’ future. “Microsoft Will Only Get More Persistent Now that Copilot has Plateaued,” he predicts. Yes, Microsoft has failed to attract as many users to Copilot as it had hoped. It is as if users see through the AI hype. According to Batt, the company famous for doubling down on unpopular ideas will now pester us like never before. This can already be seen in the new way Microsoft harasses Windows 10 users. While it used to suggest every now and then such users purchase a Windows 11-capable device, now it specifically touts Copilot+ machines.
Batt suspects Microsoft will also relentlessly push other products to boost revenue. Especially anything it can bill monthly. Though Windows is ubiquitous, he notes, users can go years between purchases. Many of us, we would add, put off buying a new version until left with little choice. (Any XP users still out there?) He writes:
“When ChatGPT began to take off, I can imagine Microsoft seeing dollar signs when looking at its own assistant, Copilot. They could make special Copilot-enhanced devices (which make them money) that run Copilot locally and encourage people to upgrade to Copilot Pro (which makes them money) and perhaps then pay extra for the Office integration (which makes them money). But now that golden egg hasn’t panned out like Microsoft wants, and now it needs to find a way to help prop up the income while it tries to get Copilot off the ground. This means more ads for the Microsoft Store, more ads for its game store, and more ads for Microsoft 365. Oh, and let’s not forget the ads within Copilot itself. If you thought things were bad now, I have a nasty feeling we’re only just getting started with the ads.”
And they won’t stop, he expects, until most users have embraced Copilot. Microsoft may be creeping toward some painful financial realities.
Cynthia Murrell, May 20, 2025
Grok and the Dog Which Ate the Homework
May 16, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I remember the Tesla full self driving service. Is that available? I remember the big SpaceX rocket ships. Are those blowing up after launch? I now have to remember an “unauthorized modification” to xAI’s smart software Grok. Wow. So many items to tuck into my 80-year-old brain.
I read “xAI Blames Grok’s Obsession with White Genocide on an Unauthorized Modification.” Do I believe this assertion? Of course, I believe everything I read on the sad, ad-choked, AI content bedeviled Internet.
Let’s look at the gems of truth in the report.
First, what is an unauthorized modification to a complex software system humming along happily in Silicon Valley and — of all places — Memphis, a lovely town indeed? The unauthorized modification — whatever that is — caused a “bug in its AI-powered Grok chatbot.” If I understand this, a savvy person changed something he, she, or it was not supposed to modify. That change then caused a “bug.” I thought Grace Hopper nailed the idea of a “bug” when she pulled an insect from one of the dinobaby’s favorite systems, the Harvard Mark II. Are there insects at the X shops? Are these unauthorized insects interacting with unauthorized entities making changes that propagate more bugs? Yes.
Second, the malfunction occurs when “@grok” is used as a tag. I believe this because the “unauthorized modification” fiddled with the user mappings and jiggled scripts to allow the “white genocide” content to appear. This is definitely not hallucination; it is an “unauthorized modification.” (Did you know that the version of Grok available via X.com cannot return information from X.com (formerly Twitter) content? Strange? Of course not.)
Third, I know that Grok, xAI, and the other X entities have “internal policies and core values.” Violating these is improper. The company — like other self-regulated entities — “conducted a thorough investigation.” Absolutely. Coders at X are well equipped to perform investigations. That’s why X.com personnel are in such demand as advisors to law enforcement and cyber fraud agencies.
Finally, xAI is going to publish system prompts on Microsoft GitHub. Yes, that will definitely curtail the unauthorized modifications and bugs at X entities. What a bold solution.
The cited write-up is definitely not on the same page as this dinobaby. The article reports:
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
This negative report may be expanded to make the case that an exploding rocket or a wonky full self driving vehicle is not safe. Everyone must believe X outfits. The company is a paragon of veracity, excellent engineering, and delivering exactly what it says it will provide. That is the way you must respond.
Stephen E Arnold, May 16, 2025
Google Advertises Itself
May 16, 2025
Several observations about Google’s decision to advertise itself:
- The signals about declining search traffic warrant attention. SEO wizards, Google’s ad partners, and its own ad wizards depend on what once was limitless search traffic. If that erodes, those infrastructure costs will become a bit of a challenge. Profits and jobs depend on mindless queries.
- Google’s reaction to these signals indicates that the company’s “leadership” knows there is trouble in paradise. The terse response to the Cue comment about a decline in Apple-to-Google search traffic and this itty bitty ad are not accidents of fate. The Google once controlled fate. Now the fabled company is pushing a boulder uphill like Sisyphus.
- The irony of Google’s problem stems from its own Transformer innovation. Having released the Transformer to open source, Google may be learning that its uphill battle is of its own creation. Nice work, “leadership.”

Stephen E Arnold, May 16, 2025
Apple AI Is AImless: Better Than Fire, Ready AIm
May 16, 2025
Apple’s Problems Rebuilding Siri
Apple is a dramatist worthy of reality TV. According to the MSN article “New Siri Report Reveals Epic Dysfunction Within Apple — But There’s Hope,” Apple’s leaders are fighting each other. There are so many issues with Apple’s leaders that Siri 2.0 is delayed until 2026.
Managerial styles and backroom ambitions clashed within Apple’s teams. John Giannandrea has headed Siri since 2018, when he was hired to lead Siri and an AI group. Siri engineers claim they are treated like second-class citizens. Their situation worsened when Craig Federighi’s software team released features and updates.
The two leaders are very different:
“Federighi was placed in charge of the Siri overhaul in March, alongside his number two Mike Rockwell — who created the Apple Vision Pro headset— as Apple attempts to revive its Siri revamp. The difference between Giannandrea and Federighi appears to be the difference between the tortoise and the hare. John is allegedly more of a listener and slow mover who lets those underneath him take charge of the work, especially his number two Robby Walker. He reportedly preferred incremental updates and was repeatedly cited as a problem with Siri development. Meanwhile, Federighi is described as brash and quick but very efficient and knowledgeable. Supposedly, Giannandrea’s “relaxed culture” lead to other engineers dubbing his AI team: AIMLess.”
The two teams are at each other’s throats. Projects are getting done, but the teams are arguing over how to do them. Siri 2.0 is caught in the crossfire like a child of divorce. The teams need to put their egos aside, or someone in charge of both needs to make them play nicely.
Whitney Grace, May 16, 2025
Retail Fraud Should Be Spelled RetAIl Fraud
May 16, 2025
As brick-and-mortar stores approach extinction and nearly all shopping migrates to the Web, AI introduces new vulnerabilities to the marketplace. Shocking, we know. Cyber Security Intelligence reports, “ChatGPT’s Image Generation Could Be Driving Retail Fraud.” We learn:
“The latest AI image generators can create images that look like real photographs as well as imagery from simple text prompts with incredible accuracy. It can reproduce documents with precisely matching formatting, official logos, accurate timestamps, and even realistic barcodes or QR codes. In the hands of fraudsters, these tools can be used to commit ‘return fraud’ by creating convincing fake receipts and proof-of-purchase documentation.”
But wait, there is more. The post continues:
“Fake proof of purchase documentation can be used to claim warranty service for products that are out of warranty or purchased through unauthorised channels. Fraudsters could also generate fake receipts showing purchases at higher values than was actually paid for – then requesting refunds to gift cards for the inflated amount. Internal threats also exist too, as employees can create fake expense receipts for reimbursement. This is particularly damaging for businesses with less sophisticated verification processes in place. Perhaps the scenario most concerning of all is that these tools can enable scammers to generate convincing payment confirmations or shipping notices as part of larger social engineering attacks.”
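The defense the write-up implies is worth spelling out: a merchant that validates claims against its own transaction records, rather than against the submitted image, does not care how photorealistic a fake receipt is. Here is a hedged, hypothetical sketch in Python; the order store, field names, and amounts are invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch: validate a claim against the merchant's own
# records instead of trusting a (possibly AI-generated) receipt image.
# Order IDs, field names, and amounts are invented for illustration.

from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    amount_paid: float
    in_warranty: bool

# Stand-in for the merchant's transaction database.
ORDERS = {
    "A-1001": Order("A-1001", 49.99, in_warranty=False),
}

def validate_claim(order_id: str, claimed_amount: float, warranty_claim: bool) -> str:
    """Check refund and warranty claims against recorded transactions."""
    order = ORDERS.get(order_id)
    if order is None:
        return "reject: no matching transaction on record"
    if claimed_amount > order.amount_paid:
        return "reject: claimed amount exceeds the amount actually paid"
    if warranty_claim and not order.in_warranty:
        return "reject: product is out of warranty per our records"
    return "accept: claim matches the recorded purchase"

# A fake receipt inflating a $49.99 purchase to $149.99 fails here,
# however convincing the image looks. So does an out-of-warranty claim.
print(validate_claim("A-1001", 149.99, warranty_claim=False))
print(validate_claim("A-1001", 49.99, warranty_claim=True))
```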
Also of concern is the increased inconvenience to customers as sites beef up their verification processes. After all, the write-up notes, the National Retail Federation found 70% of customers say a positive return experience makes them more likely to revisit a seller.
So what is a retail site to do? Well, author Doriel Abrahams is part of Forter, a company that uses AI to protect online sellers from fraud. Naturally, he suggests using a platform like his firm’s to find suspicious patterns without hindering legit customers too much. Is more AI the solution? We are not certain. If one were to go down that route, though, one should probably compare multiple options.
Cynthia Murrell, May 16, 2025