Big Tech AI: Biased or Not?

March 13, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read an unusual news item published by Versant CNBC. Its title is “Anthropic’s Claude Would Pollute Defense Supply Chain: Pentagon CTO.” I don’t know much about the US government, and I know even less about the Department of War. What I do know is that Versant CNBC called attention to a facet of smart software most ignored. Dr. Timnit Gebru raised some questions about AI bias, and she was invited to find her future elsewhere along with her pet stochastic parrot. Others have suggested that certain content is under-represented; I have made that point with regard to coverage of information in other major countries. To get Chinese and Russian perspectives, I have to use language-specific indexes and rely on online translation services. The information I have located is not well represented in the result sets my team and I have reviewed from US big tech outfits’ AI systems. Yeah, English and low-hanging fruit are more common; a salient post from a Chinese or Russian language source is not. Your mileage may vary, but I am a dinobaby, and I don’t wander too far from the outmoded ideas about editorial policies, precision, recall, and other impedimenta from ancient online services.


A smart software system is testifying about policy biases before a distinguished body of elected officials. Thanks, Venice.ai. Good enough.

The Versant CNBC outfit, which I will refer to as VC NBC, reports:

Defense Department CTO Emil Michael on Thursday [March 12, 2026] said Anthropic’s Claude artificial intelligence models would “pollute” the agency’s supply chain because they have “a different policy preference” that is baked in.

Okay, “a different policy preference” suggests to me:

  1. The developers of smart software can steer what the models output; that is, weaponize them, shape them, make them formulate responses that affect the systems or users ingesting AI output
  2. Professionals in the US government have determined from their own observations and by consulting trusted experts like those from Palantir Technologies that their determinations are accurate and valid based on the systems in use prior to this determination
  3. Users of these systems and analysts of these systems have not been sufficiently critical of AI outputs to observe these pollutive functions and the explicit policy preferences noted in the information presented by VC NBC.

Let’s assume the information presented by VC NBC is spot on. I have several questions:

  1. Is it easy to shape or weaponize the probabilistic word guessing systems to make a duck the equivalent of a cow or to present a fact as an incorrect assertion? If yes, who are the experts turning the knobs and twisting the dials in these AI companies?
  2. Is one company capable of weaponizing and shaping, or can other AI outfits perform a similar calibration? If yes, are the Chinese and French models weaponized, shaped, or directed in a similar way? Other than academics publishing in arXiv, are there mainstream research outfits tracking these clever meta-editorial activities? Are RAND- or McKinsey-type outfits chasing this concept?
  3. Short of shutting down an alleged weaponizer, how will this “policy” shaping system be controlled? I think that may be difficult because some organizations like Microsoft have integrated an alleged policy shaper along with the allegedly more objective services. Note that the smart software, including the Chinese and French systems, does not include Chinese or Russian content. English seems to be the go-to language for training data. That decision may inject cultural bias, I would suggest.

Net net: I think that policy shaping is now a “fact” that may have some persistence in the AI world. I will be interested in watching how the AI firms explain and demonstrate that the outputs of their systems are not just “correct” but “objective” according to the standard used by the US government. Does anyone care? That is an important question. I want to avoid Zitron-onics and say, “Worth monitoring.”

Stephen E Arnold, March 13, 2026

Big Tech AI Tries to Understand Real Life

March 6, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read “OpenAI’s Compromise with the Pentagon Is what Anthropic Feared.” I want to be upfront. Every time I read or hear about MIT, I think Epstein Epstein Epstein. This translates to my being [a] dismissive of what the MIT thing outputs, [b] the integrity of the institution, and [c] what it brings to the knowledge party. Therefore, if you are into MIT, stop reading.

This particular write up is one of those crazy analyses of the perception of the world from the point of view of wizards versus how stuff actually works in the US government or any nation’s government. Whiz kids think they have something really cool. They give talks at conferences. Their moms and dads pester their connections about Timmy’s or Wendy’s great new thing. They do brown bag lunches in the bowels of the GSA. They trek to FDIC events in interesting locations. They write Substacks, blog posts, and Forbes thought leader articles. They stand in trade show booths squinting at name tags and look crestfallen when big time people walk by their bright smiles.

The reality is that outfits want to make government sales, and if they want to close a deal and keep the deal, the people who sign those contracts expect vendors to do what they are told. Is this the optimal approach by governments? No. Is this an informed strategy? No. Is this a tactic to become best pals with vendors? No.

And guess what? No one in those governments’ procurement processes cares very much what a vendor wants. Sure, there is some flexibility. But one doesn’t have to be an MIT graduate or a donor like Mr. Epstein Epstein Epstein to figure out that the government is going to prevail. Even in countries which are obscure and unfamiliar to an American big tech outfit, the approach is the same: Read the terms of the deal, agree, get paid, and do what the client wants.


A group of AI wizards learn how life is versus how life should be. Thanks, Venice.ai. Good enough.

Painful, right?

The write up says:

In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon.

Hey, MIT writer publisher thing, OpenAI got the message. I could suggest that MIT check out the history of MITRE to put my observations in context.

Everything is clear. A company that wants to do business with the government regardless of country needs to drop the crazy idea that governmental institutions care about the emotional zeitgeist of the whiz kids. I know that it takes time for some government professionals to grasp what one can do with a technology that is new, unfamiliar, and less friendly than making a call on an iPhone. However, once that insight arrives in the mind of a government professional, the mental orientation of the wizard is usually irrelevant. It’s noise. It’s a distraction. It’s unwanted. It’s infuriating.

The write up says:

The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use.

News flash. When the Department of War licenses a technology, that Department (regardless of the nation state) is going to use that technology to complete the mission its leadership deems appropriate. If a company or a wizard cannot understand this concept, why are these firms and their wizards in the meeting and procurement process? Go hunt for money elsewhere.

How about this statement from the write up:

But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.

The leadership of the big tech AI companies think they are rational. Those well paid experts are not. The people in the government are not rational. Why? They are humans who have interesting ways of responding to work, technology, and the context in which they find themselves.

Why did MIT embrace Epstein Epstein Epstein? The leadership of MIT made a decision. The big AI tech people made a decision. Neither seems to have been eager to walk away. Why not try to own up to your decisions? That’s called adulting.

Stephen E Arnold, March 6, 2026

Palantir: Morphing into an SAP-Type Outfit: Intelware Is a Minor Component

February 23, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Palantir Technologies has been around a couple of decades. I wrote about the firm’s system in my book “CyberOSINT: Next Generation Information Access.” Like the other intelware vendors’ systems, Palantir used open source, home brew code, an interface twist (see illustration from one of the older versions), and moxie. Palantir used “forward deployed engineers” who would go to a prospect’s office, set up a system, and show the staff how to use the system. At the time, most of the Palantir bells and whistles were already in systems developed originally by i2 Ltd. I was a consultant to i2 Ltd, and my legal eagles told me long ago I should make that point.


A senior forward deployed engineer explains that the Palantir system is indeed a “seeing stone.” It can provide its licensees with unparalleled insight. Those in such presentations often believe that Palantir has the same magic that infuses “The Lord of the Rings.” Thanks, Qwen. Close enough.

In terms of the longevity of intelware companies, Palantir has kept on trucking. Many of the companies I profiled in CyberOSINT in 2015 have been acquired, merged, or folded up their tent and focused on selling ad agencies. The core functions of these systems included at that time:

  1. User point-and-click interfaces
  2. Some control over data added to a system by the user
  3. Relationship diagrams
  4. Easier cross tabulations
  5. Report generation tools.

In the intervening decade, the current crop of intelware systems have bolted on smart software. These functions are useful because the volume of data for an investigation or an analysis for intelligence purposes involves a lot of data.

What’s going on with Palantir Technologies now? The main developments are:

  1. Big visibility. Most people cannot name an intelware company, but quite a few know about Palantir or have some name recognition. Palantir has won the PR battle. Too bad for Light House and Sixgill.
  2. Big contracts. Palantir is not in the $5,000-a-month range. The publicized contracts are big.
  3. Big capabilities. Palantir makes clear in its marketing that it has the biggest, best intelware system anywhere. (I am not sure I agree with that, but that’s not germane to this post.)

Why am I writing about Palantir on February 20, 2026? Answer: I read “DHS Awards Palantir up to $1B to Deploy AI and Data Analytics Platforms.” The number is big or seems big. There is that “up to” caveat. The article states:

The U.S. Department of Homeland Security has awarded Palantir Technologies Inc. a five-year blanket purchase to expand the department’s use of artificial intelligence and large-scale data analytics platforms across its agencies.

From my point of view, the most important factoid in the news story is this one:

The agreement, which is valued at up to $1 billion, allows multiple DHS agencies to acquire Palantir platforms without initiating separate competitive contracts for each deployment. The blanket purchasing agreement deal establishes pre-approved pricing and terms, with funding distributed through individual task orders over the five-year period rather than as a single upfront award.

As I interpret the passage, it seems that other intelware vendors may have a more difficult time selling or licensing their systems to DHS. Some of those systems are better than Palantir’s system, but that’s normal in the world of intelware. No one system does everything. Larger systems exhibit innovation friction. The bigger the outfit, the more difficult it becomes to integrate in a slick way the latest and greatest twist for law enforcement and intelligence professionals conducting investigations. That’s why larger intelware outfits acquire smaller, more fleet-of-foot startups.


This is a screenshot of the right click wheel selector. The idea is that this right click method is more functional for an investigator. I believe the interface has been updated since I snagged this in 2006 or 2007 in a demo at a trade show. I assume the entire image is copyright protected, trademarked, and super proprietary. Anyway, it is definitely a Palantir “innovation.”

Several observations:

  1. The contract suggests that standardization makes it easier to train authorized users of a system like Palantir’s
  2. Personnel can move more easily from one unit of DHS to another without having to deal with different intelware products. (Some will find their way into specialized units anyway.)
  3. DHS has, in theory, one throat to choke if the system or the customized instances of Palantir’s software does not meet the specification for that implementation.

I won’t mention names, but there was a similar “let’s just pick one and go” approach a number of years ago. The company promised a range of specific capabilities, asserted flexibility, and described easier customization than other approaches. What happened? In this particular intelware instance, the multi-year agreement was on the rocks within nine months. The time required to train and develop the custom applications for the use cases converted intelware into a more inefficient deployment than SAP or a similar “workflow” system. The costs of implementation soared as engineering change orders and supplemental specifications were developed and pushed forward. In a short time, money ran out, and these fixes had to be integrated into the next fiscal year’s budget.

I did not work on this particular project. I was engaged in an equally large and even more visible project related to government-wide search and retrieval of digital information. I didn’t think about one agency. We were struggling with the entire airport van of agencies, departments, and related entities.

Nevertheless, we learned about the issues that a Swiss army knife poses when one or more of the tools doesn’t open or breaks upon use. I hope that the Palantir solution does not create a similar set of issues for DHS. I want to be optimistic. I know that descriptions like this are very appealing to government executives, and I quote from the news story:

DHS is expected to use Palantir’s platforms to support investigative case management, threat identification, logistics coordination and operational planning. The platforms apply machine learning models and rules-based analytics to information from enforcement databases, biometric systems, financial records, travel data and other sources to generate risk assessments, link analyses and operational dashboards.

Palantir’s system, if this paragraph is accurate, is no longer intelware. It is smart software doing what SAP-type systems do. Believe me, intelware is a tough enough niche. Expecting Palantir to be enterprise integration and automation software looks like an even more complex undertaking.

Can Palantir deliver? Sure, anything can be done with money, time, and appropriate knowledge resources (people, folks). The problem is that in DHS and other enforcement-type entities time is a problem. Changing priorities is a constant. Pressure is high and unrelenting. Small intelware vendors are, as I said, speedy. Big outfits aren’t.

Just a thought. (Oh, the CyberOSINT book is still available for free for law enforcement and intelligence professionals. Just write us at kentmaxwell at proton dot me.)

Stephen E Arnold, February 23, 2026

Google: Another Great Idea… for Google

February 18, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I love Google lawyer logic. Last week I mentioned that Google insisted that a duck was a cow; that is, YouTube is Netflix-like, not social media. Another example hit my stream today (February 14, 2026). The good old orange newspaper published “Google Warns EU Against Erecting Walls in Tech Sovereignty Push.” (This is a paywalled story because “real” news costs money.)

The write up explains that Google lawyers are sharing some free advice with the European Union. And what is Google saying that is absolutely, 100 percent Googley? The write up reports:

Kent Walker, president of global affairs and chief legal officer at Google, told the FT that the EU faces a “competitive paradox” as it seeks to spur growth while “restricting the use of the technologies it needs to get there”. “We deliver a lot of value to Europe,” he said. “Erecting walls that make it harder to use some of the best technology in the world, especially as it’s advancing so quickly, would actually be counter-productive.”

The Google logic is bulletproof if you are Googley; that is, just standardize on Google. A failure to embrace Google means you clueless officials and your pathetic nation states will fail. But, listen up, going Google will allow you to succeed.


Thanks, Venice.ai. Close enough. The “ducks are goats” instruction baffled you. That’s okay. You are AI.

Yep, the duck-is-a-goat logic. Google, to be fair, is not alone with this type of reasoning. David Sacks, a US semi-official official, pointed out that it was really not so good for each state to regulate important stuff like AI and crypto.

I call this “monopoly thinking.” Does it work? Sure, if you emerge as the top outfit in a particular business. Monopolies are great. That’s why I believe this Googler’s statement of sentiment:

Google is focused on providing its services to the bloc and is “deeply committed” to Europe. He also stressed the popularity of Google services in Europe, whether it comes to its search engine, email, translation services or maps, which European consumers often use on a daily basis. Walker warned that Europe’s “regulatory friction” risks holding back innovation and denies European consumers and businesses access to “the best digital tools”.

My translation: Hey, you regulation crazy fools, get with our program. We are the “best.” Our tools are the “best.” Our mission is the “best.”

Several observations:

  1. Google wants to make clear that it will do what is best
  2. The EU will be a non-starter unless it and the member countries wear Google T-shirts
  3. You can fine us, and we will just do what we do because we are Googley.

Yep, the duck is a goat. Life is easier when there is one ruler who is in charge.

How will this approach fly in Brussels? It won’t.

Stephen E Arnold, February 18, 2026

Future Stupiding: That Is the Goal

February 17, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I think it was 1979 when I was in Ellen Shedlarz’s office adjacent to a blue chip consulting firm’s library. Ms. Shedlarz said, “Know what this is?” as she pointed to a flat gizmo with a keyboard.

“Yes, that’s a version of a teletype terminal. I saw a version at IT&T a couple of years ago.”

“This is different. This is the future,” said Ms. Shedlarz. “What are you working on?”

I described a project involving food fabrication. She then asked me a number of questions. After five or six questions about converting soy bean paste into something that would sell as a snack to hungry teens, she typed into the device. After a minute or so, paper began to spew out of the slot in the back of the machine.


Ms. Shedlarz had just compressed several days of work in libraries, a number of telephone calls, and chats with food scientists and engineers scattered across the consulting firm’s global operations into a few minutes at a terminal.

She explained that the gizmo connected to an online service. That service contained bibliographic information and abstracts of journal articles, conference papers, and other types of textual data. The information displayed matched the query she fed into the gizmo.

I asked, “Is the information accurate?”

She said, “Yes, I select specific online databases I know to have rigorous editorial standards. I don’t need your colleagues standing in my door shouting and yelling at me.”

Several points:

  1. A consummate professional selected specific sources she knew would be acceptable to the often out-of-control Type As at the blue chip firm
  2. She performed a reference interview in order to use her training and on-the-job experience to get on-point information
  3. She would review the output before handing it to one of my often-intense colleagues.

Where are we today?

According to “How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt,” in 2026 we are standing in a pig pen filling with cognitive debt. Instead of mammals, we have smart software and people who believe they are experts in finding information themselves. The majority of people with whom I interact wouldn’t know a special librarian unless one stood at 39th and 3rd in Manhattan holding a sign that said, “Special librarian here.” Who needs the old-fashioned curated databases? Who needs a person to intermediate between the “give me everything about…” person and the high value content accessible online? Nope. Just let a black box output an answer. We live in a world where the “work” of creating knowledge value is unnecessary and not valued. Oh, the AI companies want old-fashioned professionals who can do knowledge work. But those not in the elite just take what’s output. We are in a “good enough” society in the US.

The article says:

Even if AI agents produce code that could be easy to understand, the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it.

To my way of thinking, “lost the plot” means stupid. Making a “change” to the output is now beyond the ken of many professionals and most of the people I see at my Planet Fitness. You know these folks, the ones who sit on the machine and doom scroll. Yeah, big thinkers.

The write up predictably tosses out some bait for those who need a consultant to fix up the problem or who want to attend a training class chock full of glittering panaceas.

The concept of “cognitive debt” is a good one. However, the write up does not nail the issues the blunting of knowledge work delivers by the garbage truck load:

  1. Learning is not easy. Without effort, learning is not valued and superficial
  2. An inability to learn and think critically means that decisions will probably be ill considered, half formed, incorrect, or disastrous
  3. A society without an educated citizenry becomes one that can be shaped
  4. Information will be weaponized.

When that human clerk cannot make change, that’s a person who will believe everything that fits into whatever their uninformed world view accepts. If the world view is expansive (the purpose of a college education for some people), there is a chance that weaponized or shaped information can be recognized, evaluated, and processed in a context of information believed by people like Ms. Shedlarz to be useful. The people who processed the print outs she delivered, in theory, would then continue to curate and process the information. The goal was to convert soy paste into a snack that met the terms of the client engagement.

Without people who are educated, the baseline is not excellence for most people. The baseline is good enough. Do you want to stand in front of a Waymo-type self driving car? Do you have confidence your hospital will not infect you with a forever virus? Do you believe that the word “organic” on a bunch of vegetables is free from forever chemicals?

The main shift of the big technology companies is to create knowledge dependence. With that dependence, “facts” are whatever the online systems and the black boxes output. Control that information flow and one has a social construct that has the capability to make people into puppets.

Cognitive debt means Punch and Judy shows. Ads, entertainment, and loss of knowledge control. No Ms. Shedlarz needed. Do some US big technology companies want this type of control? You bet your life. Say the secret word and get a free month on an AI system.

Stephen E Arnold, February 17, 2026

The EU Oils Its Cash Register for 2026 Action: Meta Is at Bat

February 17, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

“Commission Notifies Meta of Possible Interim Measures to Reverse Exclusion of Third-Party AI Assistants from WhatsApp” signals the Zuck that check writing may be something to put on his calendar. The EU statement says:

The European Commission has sent a Statement of Objections to Meta, setting out its preliminary view that Meta breached EU antitrust rules by excluding third party Artificial Intelligence (‘AI’) assistants from accessing and interacting with users on WhatsApp. Meta’s conduct risks blocking competitors from entering or expanding in the rapidly growing market for AI assistants. The Commission therefore intends to impose interim measures to prevent this policy change from causing serious and irreparable harm on the market, subject to Meta’s reply and rights of defense.

What’s interesting is the statement “intends to impose interim measures”. The EU tends to move slowly as opposed to one major country’s sending elected official home for a vacay. I interpret this statement to mean: Zuck, things are just going to happen. Buckle up, buddy.


EU professionals enjoy reading documents describing alleged criminal activities. Thanks, Venice.ai. Good enough.

The short announcement includes a graphic. To be honest, I am not sure if this art has been produced by smart software. Also, I am not sure I understand how the flipping of a red arrow to a green arrow will work. But I am a dinobaby, and I simply cannot understand some things. My view is to ring up certain known Internet service providers and some of those “ghost ISPs” and mandate aggressive filtering. The ISPs will squawk, but the EU litigation can create some financial pressure for certain outfits. My hunch is that certain judiciary units might want to do some on-site investigations too.

I am not sure if Meta’s legal eagles can remediate such direct actions at the service firms between a WhatsApp user and Meta. Certain countries are likely to be more enthusiastic about altering Meta’s behavior. But the EU’s signal is clear:

If the Commission concludes, after the parties have exercised their rights of defense, that the conditions for interim measures are met, it can adopt a decision imposing such measures. The adoption of an interim measures’ decision does not prejudge the final findings of the Commission on the substance of the case.

France has demonstrated that even iconic services can be spray painted dull gray. Is WhatsApp facilitating certain types of behavior the EU might find objectionable beyond the blocking of competitive smart software? I would suggest that this AI wrapping hides what’s in a box of alleged infractions facilitated by Meta / Facebook, WhatsApp, and Instagram.

My view is that France’s direct action against Pavel Durov may have been a useful example for some EU professionals. Furthermore, I think the EU has a short list of American big tech companies to examine with renewed vigor in the new year. Will the US firms care? Nope or at least until a significant revenue hit takes place. I predict that 2026 will be a bounteous year for legal eagles involved in US big tech, AI, and social media matters.

Stephen E Arnold, February 17, 2026

Big Tech and Age Verification: Now What, People?

February 12, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I have a couple of people on my team who reacted in an interesting way to this question: “How long would it take you to get around the age verification required in Australia?” Howard just snorted. Stuart thought a moment and said, “A couple of minutes, maybe a bit more, maybe a bit less.” My team is not in its teens. But I know that there are some pre-teens, teens, and teens going on 28 years old who know how to subvert age verification systems.


Proud parents watch as Bill creates a social media account for Timmy, his younger brother. Mom and dad watch with pride because the siblings are interacting in a positive manner. Thanks, ChatGPT, good enough.

How many of these can you work around?

  1. ID via facial recognition using one’s real face, just aged, or an older sibling’s face
  2. Use an adult’s user name and password
  3. Create alias accounts with new emails
  4. Rely on VPNs and related methods
  5. Pay an older person to register and set up an account.

Three separate news reports suggest that the US big tech outfits and outfits like Telegram will have to find a way to implement age verification systems that mostly work and don’t violate other laws. Otherwise, certain firms will lose access to customers in France, Greece, and Spain. These countries probably have a regulator or two eager to fine the US social media companies, go through the legal processes, issue fines, and then check their bank accounts. Deposits in the tens or hundreds of millions in dollars or other fiat currency are easy to spot.

Here are the three reports:

VPNs are next on my list – France set to evaluate VPN use following social media ban for under-15s

Greece to soon announce social media ban for children under 15, government source says

Spain, Greece weigh teen social media bans, drawing fury from Elon Musk

My team and I think that other countries in the EU will jump on the bandwagon. I am not sure the mental health of those under 16 is the only motivation for this requirement. Anti-US big tech sentiment reaches me in rural Kentucky. My hunch is that it extends to Silicon Valley as well. A crusade against the US may become a way to win re-election or snag a lucrative advisory job in some countries. Plus, there is what I call the “Kaching factor.” That’s the notional sound of issuing a big fine and ringing the cash register for the regulator bringing an action against a US company.

If the age verification movement gains steam, the US social media companies will have to do some actual innovation in their age verification departments. Solutions to this problem are fraught with booby traps. These range from ease of use to security issues. Also, US big tech companies don’t want to lose access to these youthful users. Translation: The ad dollars are too significant.

Observing how the US tech companies respond will be fascinating. I look forward to verbal statements, legal battles, and direct violations. My view is that there is no perfect fix, just rising risk and costs. With everyone embracing AI, why not just use smart software? Yeah, that will work.

Stephen E Arnold, February 12, 2026

When Humans Edit AI Outputs: Differences Manifest Themselves It Seems

February 12, 2026

green-dino_thumbAnother dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Americans don’t think much about Canada. I try to follow interesting content regardless of the country in which the documents or information originate. I spotted a quite interesting report about the Canadian government’s AI-assisted analysis of its consultation about smart software. But what makes the write-up fascinating is that a person named Michael Geist pumped the same content through AI systems and noted some differences.

Do humans make a difference? Do AI systems get things straight? I cannot recycle the entire quite good essay. You can read “An Illusion of Consensus: What the Government Isn’t Saying About the Results of its AI Consultation” yourself and form your own opinions. I want to hit a few highlights and then offer a handful of observations. (Hey, what do you want from a free blog?)

image

Thanks, MidJourney. Good enough.

For setup: the “old” Industry Canada has been rejiggered to include smart software. The entity is called Innovation, Science and Economic Development Canada or ISED. The agency conducted what it called the “largest public consultation in the history of ISED” to learn what the AI sentiment and use cases were in Canada.

Mr. Geist downloaded the data and let AI reach conclusions. He learned:

It [the report] would still have benefited from some additional perspectives, but the resulting reports suggest that the experts took their mandate seriously and provided candid, action-oriented advice on developing a national AI strategy.

What were the key differences?

  1. “the expert reports consistently argue that Canada’s AI challenge is not about research excellence or talent creation, but rather execution.” Mr. Geist noted: The official report downplays the risks of AI.
  2. “the expert reports frame [AI] as a strategic variable in which countries that move faster lead, while those that hesitate are left to regulate what others have built”; that is, the Canadian government is not moving fast with regard to AI. Mr. Geist said that the Canadian government softened the idea that it is dragging its feet.
  3. “The government summary refers indirectly to the access to capital challenges without digging into the political choices.” Mr. Geist points out that the Canadian government does not want to highlight a lack of investment capital for AI.

The most important “divergence” between the two analyses relates to trust. Here’s the passage from Mr. Geist’s review:

Perhaps the most important divergence comes from the issue of trust and safety. This was a major concern from the public responses and the government is likely headed toward making AI governance, audits, transparency, and risk-based regulation key elements of its AI strategy. Yet there is far less consensus in the expert reports. Just about everyone agrees that trust is essential for AI adoption, but the implementation of regulation draws different views. Some want to move quickly, while others warn that overly broad regulation will slow deployment, disadvantage domestic firms, and regulate technologies Canada does not control. Those disagreements largely disappear in the government’s summary, where trust is presented as a settled consensus objective, rather than a contested policy domain with real trade-offs.

My observations are:

  1. Government entities don’t want to look bad; therefore, sanding and smoothing is to be expected
  2. The lack of funding strikes me as a novel finding. Without money, who can innovate? AI compute, people, and the other oddments are costly; some big tech companies pour billions into their systems to facilitate their own innovation
  3. I was surprised that Mr. Geist gave the Canadian government a reasonably good review.

Interesting.

Stephen E Arnold, February 12, 2026

Telegram Gets a Note with Bad News from Roskomnadzor

February 10, 2026

goat 3Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

The Kyiv Independent published "Russia Restricts Telegram in Latest Push to Tighten Internet Control." According to the cited article,

"Russia’s communications regulator, Roskomnadzor, began restricting the operation of Telegram on Feb. 10, targeting one of the country’s most widely used messaging platforms, the regulator said. The move fits into the Kremlin’s broader push to replace Western digital services with domestic alternatives as it advances plans for a so-called ‘sovereign internet,’ tightening state control over online communications."

Roskomnadzor’s measures are described as partial limitations rather than a full block. Russia has promoted state-aligned domestic messaging alternatives intended to reduce reliance on foreign platforms.

Telegram has over a billion users worldwide. Telegram’s largest concentrations of users are reported in Russia, Eastern Europe, South Asia, and parts of the Middle East. Telegram has been taking steps to increase the number of users in the United States.

The Kyiv Independent states:

"A full-scale crackdown on Telegram could present challenges for the Kremlin, as Russian state-aligned media outlets rely heavily on the platform, where many have amassed millions of subscribers."

Russia offers Max, a Telegram-like service with benefits to the government; namely, no dancing with Pavel Durov when information about a topic or a person of interest is deemed necessary.

It is too early to determine how Telegram, its users, its Russia-based service providers, its contractors, and the companies who use Telegram for sales and customer service will react.

Telegram has been adjusting to the uncertain outcome of the French judiciary’s criminal charges levied against Pavel Durov. A trial is expected in France sometime in 2026, but the wheels of the French court system can turn slowly. Adding to the stress upon Telegram has been the slow, steady decline of the "value" of the TONcoin. Other U.S.-centric initiatives have faced financial headwinds, stalling Mr. Durov’s "we are coming to America" assertion in 2025.

The Russian action adds uncertainty to the Telegram ecosystem. Founded in 2013, Telegram sailed in relatively calm waters until the French judiciary’s arrest of Mr. Durov. Telegram has been able to operate despite regulatory pressure in multiple jurisdictions, but the recent legal actions in Europe represent a different category of risk. Russia’s action, which may not deter die-hard Telegram users, suggests that 2026 may be another year clouded by uncertainty.

Stephen E Arnold, February 10, 2026

Zuck = WhatsApp = Kaching

February 2, 2026

Mark Zuckerberg’s luck keeps growing! His company Meta bought WhatsApp. It was a tiny thing with a tiny but growing user base. Zuck struck. It grew. Zuck bumped into the pesky EU regulators. Did Zuck duck? Nope. Engadget reports on how WhatsApp’s categorization might be changed in the EU: “WhatsApp Might Soon Be Subject To Stricter Scrutiny Under The EU’s Digital Services Act.”

The EU might make WhatsApp follow the European Commission’s Digital Services Act (DSA). During the first half of 2025, WhatsApp reported about 51.7 million users in the EU. The Digital Services Act applies to platforms with 45 million users or more. WhatsApp may be designated a “very large online platform” or VLOP to the young in mind and spirit. The Zuck may be subject to the DSA’s rules. If Meta doesn’t comply, it could be fined up to six percent of its global annual revenue.
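The arithmetic above is simple enough to sketch. Here is a toy Python illustration of the two DSA numbers the article cites: the 45 million-user VLOP designation threshold and the six percent cap on fines. The revenue figure is purely hypothetical, used only to show how the cap would be computed.

```python
# Toy sketch of the two DSA figures mentioned in the article.
# The 45M-user VLOP threshold and 6% fine cap come from the text;
# the revenue value below is hypothetical, for illustration only.

VLOP_THRESHOLD = 45_000_000  # DSA "very large online platform" user threshold
FINE_CAP_RATE = 0.06         # fines capped at 6% of global annual revenue

def is_vlop(eu_users: int) -> bool:
    """A platform at or above 45 million EU users can be designated a VLOP."""
    return eu_users >= VLOP_THRESHOLD

def max_fine(global_annual_revenue: float) -> float:
    """Upper bound of a DSA non-compliance fine."""
    return global_annual_revenue * FINE_CAP_RATE

whatsapp_users = 51_700_000        # figure reported in the article
hypothetical_revenue = 150e9       # hypothetical revenue, not Meta's actual number

print(is_vlop(whatsapp_users))     # 51.7M exceeds the 45M threshold
print(max_fine(hypothetical_revenue))  # roughly 9 billion at the 6% cap
```

With 51.7 million users against a 45 million threshold, the designation question is not close; the real leverage is the fine cap, which scales with global revenue rather than EU activity.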

Have the regulators run amuck? Is Zuck’s luck evaporating? But Zuck has a history of not complying with some European Union silliness. Zuck is an American with an American company. What’s the EU got to do with Zuck? Answer: The cited article reports:

“Meta was charged with violating the EU law in October 2025 because of how it asks users to report illegal content on Facebook and Instagram. Earlier that month, a Dutch court also ordered the company to change how it presents the timelines on its platforms because people in the Netherlands were not "sufficiently able to make free and autonomous choices about the use of profiled recommendation systems" in the company’s apps.”

Will Zuck duck? Nope, legal eagles will take flight. Meetings will occur. Kaching. With money one can make luck, right, Zuck?

Whitney Grace, February 2, 2026
