When Humans Edit AI Outputs: Differences Manifest Themselves, It Seems

February 12, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

Americans don’t think much about Canada. I try to follow interesting content and ignore the country from which a document or item of information originates. I spotted a quite interesting report about Canada’s AI-assisted research about smart software. What makes the write up fascinating is that a person named Michael Geist pumped the same content through AI systems and noted some differences.

Do humans make a difference? Do AI systems get things straight? I cannot recycle the entire quite good essay. You can read “An Illusion of Consensus: What the Government Isn’t Saying About the Results of its AI Consultation” yourself and form your own opinions. I want to hit a few highlights and then offer a handful of observations. (Hey, what do you want from a free blog?)


Thanks, MidJourney. Good enough.

For setup: the “old” Industry Canada has been rejiggered to include smart software. The entity is now called Innovation, Science and Economic Development Canada or ISED. The agency conducted what it called the “largest public consultation in the history of ISED” to learn about AI sentiment and use cases in Canada.

Mr. Geist downloaded the data and let AI reach conclusions. He learned:

It [the report] would still have benefited from some additional perspectives, but the resulting reports suggest that the experts took their mandate seriously and provided candid, action-oriented advice on developing a national AI strategy.

What were the key differences?

  1. “the expert reports consistently argue that Canada’s AI challenge is not about research excellence or talent creation, but rather execution.” Mr. Geist noted that the official report downplays the risks of AI.
  2. “the expert reports frame [speed] as a strategic variable in which countries that move faster lead, while those that hesitate are left to regulate what others have built.” That is, the Canadian government is not moving fast with regard to AI. Mr. Geist said that the Canadian government softened the idea that it was dragging its feet.
  3. “The government summary refers indirectly to the access to capital challenges without digging into the political choices.” Mr. Geist points out that the Canadian government does not want to highlight a lack of investment capital for AI.

The most important “divergence” between the two analyses relates to trust. Here’s the passage from Mr. Geist’s review:

Perhaps the most important divergence comes from the issue of trust and safety. This was a major concern from the public responses and the government is likely headed toward making AI governance, audits, transparency, and risk-based regulation key elements of its AI strategy. Yet there is far less consensus in the expert reports. Just about everyone agrees that trust is essential for AI adoption, but the implementation of regulation draws different views. Some want to move quickly, while others warn that overly broad regulation will slow deployment, disadvantage domestic firms, and regulate technologies Canada does not control. Those disagreements largely disappear in the government’s summary, where trust is presented as a settled consensus objective, rather than a contested policy domain with real trade-offs.

My observations are:

  1. Government entities don’t want to look bad; therefore, sanding and smoothing are to be expected.
  2. The lack of funding strikes me as a novel finding. Without money, who can innovate? Access to AI compute, people, and the other oddments requires the kind of billions that some big tech companies pour into their systems to facilitate their own innovation.
  3. I was surprised that Mr. Geist gave the Canadian government a reasonably good review.

Interesting.

Stephen E Arnold, February 12, 2026
