AI Use Cases: Let Many Flowers Bloom Even Where They Are Unwanted

February 25, 2026

As the title asks, why is it surprising that people use AI differently? Technology has never been a one-size-fits-all, out-of-the-box solution. The Harvard Business Review asks this stupid question and researches it: “Why AI Boosts Creativity For Some Employees But Not Others.” Generative AI bots are becoming an essential tool for day-to-day business. The hope is that generative AI will make employees more creative and generate more inspiring ideas.

Nope.

A Gallup survey reported that only 26% of employees who use generative AI saw creativity improvements. Why?

“Our new research, published in the Journal of Applied Psychology, answers this question. We find that generative AI can indeed boost employee creativity, but the gains are not universal. Specifically, employees with stronger metacognition—the ability to plan, evaluate, monitor, and refine their thinking—are more likely to experience creative gains from using generative AI, because they can use it more effectively to acquire the cognitive job resources that fuel creativity.”

The team conducted creativity research with the appropriate scientific method jargon. Blah. Blah. Blah. They discovered that smart individuals and more creative thinkers don’t use AI like a calculator. They use it as a tool to enhance their work skills. Stupid people, however, just plug in questions and take the results at face value. Here are more findings about the differences between the employees, in proper research language:

“By contrast, employees low in metacognition are more likely to accept AI’s first answer, rely on default outputs, and fail to check whether AI’s suggestions are accurate or relevant. As a result, employees with stronger metacognition are far better positioned to use AI tools to acquire the cognitive job resources that fuel creativity, whereas those with weaker metacognitive skills see few creative gains from AI.”

Let’s step back. Is it possible that humans want to be valued for their work, not the work of unknown coders and black boxes? Is it possible that humans know that “good enough” is exactly what probabilistic software delivers? Can it be that humans need to do something meaningful to make them happy or at least give them something about which to complain?

The reality is that AI leads in one direction: the transfer of control from humans at large to the handful of humans who own the smart software. Happy now?

Whitney Grace, February 25, 2026
