Copilot, Can You Crash That Financial Analysis?

August 22, 2025

No AI. Just a dinobaby working the old-fashioned way.

The ever-insouciant online service The Verge published a story about Microsoft, smart software, and Excel. “Microsoft Excel Adds Copilot AI to Help Fill in Spreadsheet Cells” reports:

Microsoft Excel is testing a new AI-powered function that can automatically fill cells in your spreadsheets, which is similar to the feature that Google Sheets rolled out in June.

Okay, quite specific intentionality: Fill in cells. And a dash of me-too. I like it.

However, the key statement in my opinion is:

The COPILOT function comes with a couple of limitations, as it can’t access information outside your spreadsheet, and you can only use it to calculate 100 functions every 10 minutes. Microsoft also warns against using the AI function for numerical calculations or in “high-stakes scenarios” with legal, regulatory, and compliance implications, as COPILOT “can give incorrect responses.”

I don’t want to make a big deal out of this passage, but I will do it anyway. First, Microsoft makes clear that the outputs can be incorrect. Second, don’t use it too much, because I assume one will eventually have to pay to use a system that “can give incorrect responses.” In short, MSFT is throttling Excel’s Copilot. Doesn’t everyone want to explore numbers with an addled Copilot known to flub numbers in a jet aircraft at Mach 0.8?
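Microsoft’s “100 functions every 10 minutes” cap is a plain sliding-window rate limit. Here is a minimal sketch of how such a throttle works; the class name, method names, and use of the published numbers as defaults are my own illustration, since Microsoft has not published its actual mechanism:

```python
from collections import deque
import time

class CopilotThrottle:
    """Sliding-window limiter mirroring the published cap:
    at most 100 COPILOT calls in any rolling 10-minute window.
    (Hypothetical sketch; not Microsoft's implementation.)"""

    def __init__(self, max_calls=100, window_seconds=600):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of recently allowed calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the rolling window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False  # throttled; the caller must wait

throttle = CopilotThrottle()
# Simulate 150 calls, one per second: the first 100 pass, the rest are refused.
results = [throttle.allow(now=t) for t in range(150)]
print(sum(results))  # prints 100
```

Once the earliest timestamps age past the 10-minute mark, capacity frees up again, which is why a user who hits the wall simply has to wait rather than being cut off for good.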

I want to quote from “It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes”:

Think of it. Forty-five hundred years ago, if you were a Sumerian scribe, while your calculations on the world’s first abacus might have been laborious, you could be assured they’d be correct. Four hundred years ago, if you were palling around with William Oughtred, his new slide rule may have been a bit intimidating at first, but you could know its output was correct. In the 1980s, you could have bought the cheapest, shittiest Casio-knockoff calculator you could find, and used it exclusively, for every day of the rest of your life, and never once would it give anything but a correct answer. You could use it today! But now we have Microsoft apparently determining that “unpredictability” was something that some number of its customers wanted in their calculators.

I know that I sure do. I want to use a tool that is likely to convert “high-stakes scenarios” into an embarrassing failure. I mean who does not want this type of digital Copilot?

Why do I find this Excel with Copilot software interesting?

  1. It illustrates that accuracy has given way to close enough for horseshoes. Impressive for a company that can issue an update that could kill one’s storage devices.
  2. Microsoft no longer dances around hallucinations. The company just says, “The outputs can be wrong.” But I wonder, “Does Microsoft really mean it?” What about Red Bull-fueled MBAs handling one’s retirement accounts? Yeah, those people will be really careful.
  3. The article does not come out and say, “Looks like the AI rocket ship is losing altitude.”
  4. I cannot imagine sitting in a meeting and observing the rationalizations offered to justify releasing a product known to make NUMERICAL errors.

Net net: We are learning about the quality of [a] managerial processes at Microsoft, [b] the judgment of employees, and [c] the sheer craziness that an attorney said, “Sure, release the product just include an upfront statement that it will make mistakes.” Nothing builds trust more than a company anchored in customer-centric values.

Stephen E Arnold, August 22, 2025

The Bubbling Pot of Toxic Mediocrity? Microsoft LinkedIn. Who Knew?

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way.

Microsoft has a magic touch. The company gets into Open Source; the founder “gits” out. Microsoft hires an engineer from Intel, asks some questions, and the new hire is whipped with a $34,000 fine and two years of mom looking in his drawers.

Now I read “Sunny Days Are Warm: Why LinkedIn Rewards Mediocrity.” The write up includes an outstanding metaphor in my opinion: Toxic Mediocrity. The write up says:

The vast majority of it falls into a category I would describe as Toxic Mediocrity. It’s soft, warm and hard to publicly call out but if you’re not deep in the bubble it reads like nonsense. Unlike its cousins ‘Toxic Positivity’ and ‘Toxic Masculinity’ it isn’t as immediately obvious. It’s content that spins itself as meaningful and insightful while providing very little of either. Underneath the one hundred and fifty words is, well, nothing. It’s a post that lets you know that sunny days are warm or it’s better not to be a total psychopath. What is anyone supposed to learn from that?

When I read a LinkedIn post it is usually referenced in an article I am reading. I like to follow these modern slippery footnotes. (If you want slippery, try finding interesting items about Pavel Durov in certain Russian sources.)

Here’s what I learn:

  1. A “member” makes clear that he or she has information of value. I must admit, once in a while a useful post will turn up. Not often, but it has happened. I do know the person believes something about himself or herself. Try asking a GenAI about their personal “beliefs.” Let me know how that works.
  2. Members in a specific group with an active moderator often post items of interest. Instead of writing my unread blog, these individuals identify an item and use LinkedIn as a “digital bulletin board” for people who shop at the same sporting goods store in rural Kentucky. (One sells breakfast items and weapons.)
  3. I get a sense of the jargon people use to explain their expertise. I work alone. I am writing a book. I don’t travel to conferences or client locations now. I rely on LinkedIn as the equivalent of going to a conference mixer and listening to the conversations.

That’s useful. I have a person who interacts on LinkedIn for me. I suppose my “experience” is therefore different from someone who visits the site, posts, and follows the antics of LinkedIn’s marketers as they try to get the surrogate me to pay to do what I do. (Guess what? I don’t pay.)

I noted this statement in the essay:

Honestly, the best approach is to remember that LinkedIn is a website owned by Microsoft, trying to make money for Microsoft, based on time spent on the site. Nothing you post there is going to change your career. Doing work that matters might. Drawing attention to that might. Go for depth over frequency.

I know that many people rely on LinkedIn to boost their self confidence. One of the people who worked for me moved to another city. I suggested that she give LinkedIn a whirl. She wrote interesting short items about her interests. She got good feedback. Her self confidence ticked up, and she landed a successful job. So there’s a use case for you.

You should be able to find a short LinkedIn item noting that a new post has appeared on my blog. Write me and my surrogate will write you back and give you instructions about how to contact me. Why don’t I conduct conversations on LinkedIn? Have you checked out the telemetry functions in Microsoft software?

Stephen E Arnold, August 19, 2025

A Baloney Blizzard: What Is Missing? Oh, Nothing, Just Security

August 19, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I do not know what a CVP is. I do know a baloney blizzard when I see one. How about these terms: Ambient, pervasive, and multi-modal. I interpret ambient as meaning temperature or music like the tunes honked in Manhattan elevators. Pervasive I view as surveillance; that is, one cannot escape the monitoring. What a clever idea. Who doesn’t want Microsoft Windows to be inescapable? And multi-modal sparks in me thoughts of a cave painting and a shaman. I like the idea of Windows intermediating for me.

Where did I get these three oddball words? I read “Microsoft’s Windows Lead Says the Next Version of Windows Will Be More Ambient, Pervasive, and Multi-Modal As AI Redefines the Desktop Interface.” The source of this write up is an organization that absolutely loves Microsoft products and services.

Here’s a passage I noted:

Davuluri confirms that in the wake of AI, Windows is going to change significantly. The OS is going to become more ambient and multi-modal, capable of understanding the content on your screen at all times to enable context-aware capabilities that previously weren’t possible. Davuluri continues, “you’ll be able to speak to your computer while you’re writing, inking, or interacting with another person. You should be able to have a computer semantically understand your intent to interact with it.”

Very sci-fi. However, I don’t want to speak to my computer. I work in silence. My office is set up so I don’t have people interrupting, chattering, or asking me to go to get donuts. My view is, “Send me an email or a text. Don’t bother me.” Is that why in many high-tech companies people wear earbuds? It is. They don’t want to talk, interact, or discuss Netflix. These people want to “work” or what they think is “work.”

Does Microsoft care? Of course not. Here’s a reasonably clear statement of what Microsoft is going to try and force upon me:

It’s clear that whatever is coming next for Windows, it’s going to promote voice as a first class input method on the platform. In addition to mouse and keyboard, you will be able to ambiently talk to Windows using natural language while you work, and have the OS understand your intent based on what’s currently on your screen.

Several observations:

  1. AI is not reliable.
  2. Microsoft is running a surveillance operation, in my opinion.
  3. This is the outfit which created Bob and Clippy.

But the real message in this PR marketing content essay: Security is not mentioned. Does a secure operation want people talking about their work?

Stephen E Arnold, August 19, 2025

Microsoft: Knee Jerk Management Enigma

July 29, 2025

This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.

I read “In New Memo, Microsoft CEO Addresses Enigma of Layoffs Amid Record Profits and AI Investments.” The write up says in a very NPR-like soft voice:

“This is the enigma of success in an industry that has no franchise value,” he wrote. “Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding. But it’s also a new opportunity for us to shape, lead through, and have greater impact than ever before.” The memo represents Nadella’s most direct attempt yet to reconcile the fundamental contradictions facing Microsoft and many other tech companies as they adjust to the AI economy. Microsoft, in particular, has been grappling with employee discontent and internal questions about its culture following multiple rounds of layoffs.

Discontent. Maybe the summer of discontent. No, it’s a reshaping or re-invention of a play by William Shakespeare (allegedly) which borrows from Chaucer’s Troilus and Criseyde with a bit more emphasis on pettiness and corruption to add spice to Boccaccio’s antecedent. Willie’s Troilus and Cressida makes the “love affair” more ironic.

Ah, the Microsoft drama. Let’s recap: [a] Troilus and Cressida’s Two Kids: Satya and Sam; [b] security woes of SharePoint (who knew? eh, everyone); [c] buying green credits, or how much manure does a gondola rail car hold?; [d] Copilot (are the fuel switches on? Nope); and [e] layoffs.

What’s the description of these issues? An enigma. This is a word popping up frequently it seems. An enigma is, according to Venice, a smart software system:

The word “enigma” derives from the Greek “ainigma” (meaning “riddle” or “dark saying”), which itself stems from the verb “aigin” (“to speak darkly” or “to speak in riddles”). It entered Latin as “aenigma”, then evolved into Old French as “énigme” before being adopted into English in the 16th century. The term originally referred to a cryptic or allegorical statement requiring interpretation, later broadening to describe any mysterious, puzzling, or inexplicable person or thing. A notable modern example is the Enigma machine, a cipher device used in World War II, named for its perceived impenetrability. The shift from “riddle” to “mystery” reflects its linguistic journey through metaphorical extension.

Okay, let’s work through this definition.

  1. Troilus and Cressida or Satya and Sam. We have a tortured relationship. A bit of a war among the AI leaders, and a bit of the collapse of moral certainty. The play seems to be going nowhere. Okay, that fits.
  2. Security woes. Yep, the cipher device in World War II. Its security or lack of it contributed to a number of unpleasant outcomes for a certain nation state associated with beer and Rome’s failure to subjugate some folks.
  3. Manure. This seems to be a metaphorical extension. Paying “green” or money for excrement is a remarkable image. Enough said.
  4. Fuel switches and the subsequent crash, explosion, and death of some hapless PowerPoint users. This lines up with “puzzling.” How did those Word paragraphs just flip around? I didn’t do it. Does anyone know why? Of course not.
  5. Layoffs. Ah, an allegorical statement. Find your future elsewhere. There is a demand for life coaches, LinkedIn profile consultants, and lawn service workers.

Microsoft is indeed speaking darkly. The billions burned in the AI push have clouded the atmosphere in Softie Land. When the smoke clears, what will remain? My thought is that the items a to e mentioned above are going to leave some obvious environmental alterations. Yep, dark saying because knee jerk reactions are good enough.

Stephen E Arnold, July 29, 2025

Microsoft, Security, and Blame: Playing the Same Record Again

July 24, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I have a dim memory of being at my friend’s house. His sister was playing “Catch a Falling Star” by Perry Como. My friend’s mother screamed, “Turn off that music. It’s driving me crazy.” The repetition, the loudness, and the sappiness were driving my friend’s mother insane. I didn’t care. I have the ability to tune out repetition, loud noise, and sappiness. My friend’s sister turned up the record player. Words did not work.

Those skills were required when I read “Microsoft Says Chinese Hackers Are Exploiting SharePoint Flaws.” The write up reports:

Microsoft Corp. accused Chinese hackers of exploiting vulnerabilities in its SharePoint software that have led to breaches worldwide in recent days.

What does one expect? Microsoft has marketed its software to government agencies and companies throughout the world. Hundreds of millions of people use its products and services. Students in computer science security classes learn how to probe its ubiquitous software for weak points. Professional exploit hunters target software in wide use.

When a breach occurs, what tune does Microsoft put on the record player? The song is “Blame Game.” One verse is:

Let’s play the blame game, I love you more
Let’s play the blame game for sure
Let’s call out names, names, I hate you more
Let’s call out names, names, for sure

My dinobaby thought is that the source of the problem is not Chinese bad actors or thousands of Russian hackers or whatever assertion is presented to provide cover for a security failure.

Why not address the issue of Microsoft’s own quality control processes? Whatever happened to making security Job One? Oh, right, AI is the big deal. Well, if the AI is so good, why doesn’t Microsoft’s AI address these issues directly?

Maybe Microsoft is better at marketing than at software engineering? Now that’s a question worth exploring at the US government agencies now at risk because of Microsoft’s own work processes.

Mothers can shout at their children. Microsoft issues PR speak about government intelligence agencies. News flash, Microsoft. Those actors know what big soft target to attack. Plus, they are not listening to your old tunes.

Stephen E Arnold, July 24, 2025

What Did You Tay, Bob? Clippy Did What!

July 21, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I was delighted to read “OpenAI Is Eating Microsoft’s Lunch.” I don’t care who or what wins the great AI war. So many dollars have been bet that hallucinating software is the next big thing. Most content flowing through my dinobaby information system is political. I think this food story is a refreshing change.

So what’s for lunch? The write up seems to suggest that Sam AI-Man has not only snagged a morsel from the Softies’ lunch pail but Sam AI-Man might be prepared to snap at those delicate lady fingers too. The write up says:

ChatGPT has managed to rack up about 10 times the downloads that Microsoft’s Copilot has received.

Are these data rock solid? Probably not, but the idea that two “partners” who forced Googzilla to spasm each time its Code Red lights flashed are not cooperating is fascinating. The write up points out that when Microsoft and OpenAI were deeply in love, Microsoft had the jump on the smart software contenders. The article adds:

Despite that [early lead], Copilot sits in fourth place when it comes to total installations. It trails not only ChatGPT, but Gemini and Deepseek.

Shades of Windows phone. Another next big thing muffed by the bunnies in Redmond. How could an innovation power house like Microsoft fail in the flaming maelstrom of burning cash that is AI? Microsoft’s long history of innovation adds a turbo boost to its AI initiatives. The Bob, Clippy, and Tay inspired Copilot is available to billions of Microsoft Windows users. It is … everywhere.

The write up explains the problem this way:

Copilot’s lagging popularity is a result of mismanagement on the part of Microsoft.

This is an amazing insight, isn’t it? Here’s the stunning wrap up to the article:

It seems no matter what, Microsoft just cannot make people love its products. Perhaps it could try making better ones and see how that goes.

To be blunt, the problem at Microsoft is evident in many organizations. For example, we could ask IBM Watson what Microsoft should do. We could fire up Deepseek and get some China-inspired insight. We could do a Google search. No, scratch that. We could do a Yandex.ru search and ask, “Microsoft AI strategy repair.”

I have a more obvious dinobaby suggestion, “Make Microsoft smaller.” And play well with others. Silly ideas I know.

Stephen E Arnold, July 21, 2025

Microsoft Innovation: Emulating the Bold Interface Move by Apple?

July 2, 2025

This dinobaby wrote this tiny essay without any help from smart software. Not even hallucinating gradient descents can match these bold innovations.

Bold. Decisive. Innovative. Forward leaning. Have I covered the adjectives used to communicate “real” innovation? I needed these and more to capture my reaction to the information in “Forget the Blue Screen of Death – Windows Is Replacing It with an Even More Terrifying Black Screen of Death.”

Yep, terrifying. I don’t feel terrified when my monitors display a warning. I guess some people do.

The write up reports:

Microsoft is replacing the Windows 11 Blue Screen of Death (BSoD) with a Black Screen of Death, after decades of the latter’s presence on multiple Windows iterations. It apparently wants to provide more clarity and concise information to help troubleshoot user errors easily.

The important aspect of this bold decision to change the color of an alert screen may be Apple color envy.

Apple itself said, “Apple Introduces a Delightful and Elegant New Software Design.” The innovation was… changing colors and channeling Windows Vista.

Let’s recap. Microsoft makes an alert screen black. Apple changes its colors.

Peak innovation. I guess that is what happens when artificial intelligence does not deliver.

Stephen E Arnold, July 2, 2025

Microsoft and OpenAI: An Expensive Sitcom

July 1, 2025

No smart software involved. Just an addled dinobaby.

I remember how clever I thought the book title “Who Says Elephants Can’t Dance?: Leading a Great Enterprise Through Dramatic Change.” I find the break dancing content between Microsoft and OpenAI even more amusing. Bloomberg “real” news reported that Microsoft is “struggling to sell its Copilot solutions.” Why? Those Microsoft customers want OpenAI’s ChatGPT. That’s a hoot.

Computerworld adds to this side show more Monty Python twists. “Microsoft and OpenAI: Will They Opt for the Nuclear Option?” (I am not too keen on the use of the word “nuclear.” People bandy it about without understanding exactly what the actual consequences of such an option means. Please, do a bit of homework before suggesting that two enterprises are doing anything remotely similar.)

The estimable Computerworld reports:

Microsoft needs access to OpenAI technologies to keep its worldwide lead in AI and grow its valuation beyond its current more than $3.5 trillion. OpenAI needs Microsoft to sign a deal so the company can go public via an IPO. Without an IPO, the company isn’t likely to keep its highly valued AI researchers — they’ll probably be poached by companies willing to pay hundreds of millions of dollars for the talent.

The problem seems to be that Microsoft is trying to sell its version of smart software. The enterprise customers and even dinobabies like myself prefer the hallucinatory and unpredictable ChatGPT to the downright weirdness of Copilot in Notepad. The Computerworld story says:

Hovering over it all is an even bigger wildcard. Microsoft’s and OpenAI’s existing agreement dramatically curtails Microsoft’s rights to OpenAI technologies if the technologies reach what is called artificial general intelligence (AGI) — the point at which AI becomes capable of human reasoning. AGI wasn’t defined in that agreement. But Altman has said he believes AGI might be reached as early as this year.

People cannot agree over beach rights and school taxes. The smart software (which may remain without regulation for a decade) is a much bigger deal. The dollars at stake are huge. Most people do not know that a Board of Directors for a Fortune 1000 company will spend more time arguing about parking spaces than a $300 million acquisition. The reason? Most humans cannot conceive of the numbers of dollars associated with artificial intelligence. If the AI next big thing does not work, quite a few outfits are going to be selling snake oil from tables at flea markets.

Here’s the humorous twist from my vantage point. Microsoft itself kicked off the AI boom with its announcements a couple of years ago. Google, already wondering how it can keep the money gushing to pay the costs of simply being Google, short circuited and hit the switch for Code Red, Yellow, Orange, and probably the color only five people on earth have ever seen.

And what’s happened? The Google-spawned methods aren’t eliminating hallucinations. The OpenAI methods are not eliminating hallucinations. The improvements are more and more difficult to explain. Meanwhile start ups are doing interesting things with AI systems that are good enough for certain use cases. I particularly like consulting and investment firms using AI to get rid of MBAs.

The punch line for this joke is that the Microsoft version of ChatGPT seems to have more brand deliciousness. Microsoft linked with OpenAI, created its own “line of AI,” and now finds that the frisky money burner OpenAI is more popular and can just define artificial general intelligence to its liking and enjoy the philosophical discussions among AI experts and lawyers.

One cannot make this sequence up. Jack Benny’s radio scripts came close, but I think the Microsoft – OpenAI program is a prize winner.

Stephen E Arnold, July 1, 2025

Microsoft Demonstrates a Combo: PR and HR Management Skill in One Decision

June 2, 2025

How skilled are modern managers? I spotted an example of managerial excellence in action. “Microsoft Fires Employee Who Interrupted CEO’s Speech to Protest AI Tech for Israel” reports something that is allegedly spot on; to wit:

“Microsoft has fired an employee who interrupted a speech by CEO Satya Nadella to protest the company’s work supplying the Israeli military with technology used for the war in Gaza.”

Microsoft investigated similar accusations and learned that its technology was not used to harm citizens / residents / enemies in Gaza. I believe that a person investigating himself or herself does a very good job. Law enforcement is usually not needed to investigate a suspected bad actor when the alleged malefactor says: “Yo, I did not commit that crime.” I think most law enforcement professionals smile, shake the hand of the alleged malefactor, and say, “Thank you so much for your rigorous investigation.”

Isn’t that enough? Obviously it is. More than enough. Therefore, to output fabrications and unsupported allegations against a large, ethical, and well informed company, management of that company has a right and a duty to choke off doubt.

The write up says:

“Microsoft has previously fired employees who protested company events over its work in Israel, including at its 50th anniversary party in April [2025].”

The statement is evidence of consistency before this most recent HR / PR home run in my opinion. I note this statement in the cited article:

“The advocacy group No Azure for Apartheid, led by employees and ex-employees, says Lopez received a termination letter after his Monday protest but couldn’t open it. The group also says the company has blocked internal emails that mention words including “Palestine” and “Gaza.””

Company of the year nominee for sure.

Stephen E Arnold, June 2, 2025

Copilot Disappointments: You Are to Blame

May 30, 2025

No AI, just a dinobaby and his itty bitty computer.

Another interesting Microsoft story from a pro-Microsoft online information service. Windows Central published “Microsoft Won’t Take Bigger Copilot Risks — Due to ‘a Post-Traumatic Stress Disorder from Embarrassments,’ Tracing Back to Clippy.” Why not invoke Bob, the US government suggesting Microsoft security was needy, or the software of the Surface Duo?

The write up reports:

Microsoft claims Copilot and ChatGPT are synonymous, but three-quarters of its AI division pay out of pocket for OpenAI’s superior offering because the Redmond giant won’t allow them to expense it.

Is Microsoft saving money, or is Microsoft’s cultural momentum maintaining the velocity of Steve Ballmer taking an Apple iPhone from an employee and allegedly stomping on the device? That incident helped make Microsoft’s management approach clear to some observers.

The Windows Central article adds:

… a separate report suggested that the top complaint about Copilot to Microsoft’s AI division is that “Copilot isn’t as good as ChatGPT.” Microsoft dismissed the claim, attributing it to poor prompt engineering skills.

This statement suggests that Microsoft is blaming a user for the alleged negative reaction to Copilot. Those pesky users again. Users, not Microsoft, are at fault. But what about the Microsoft employees who seem to prefer ChatGPT?

Windows Central stated:

According to some Microsoft insiders, the report details that Satya Nadella’s vision for Microsoft Copilot wasn’t clear. Following the hype surrounding ChatGPT’s launch, Microsoft wanted to hop on the AI train, too.

I thought the problem was the users and their flawed prompts. Could the issue be Microsoft’s management “vision”? I have an idea. Why not delegate product decisions to Copilot? That will show the users that Microsoft has the right approach to smart software: Cutting back on data centers, acquiring other smart software and AI visionaries, and putting Copilot in Notepad.

Stephen E Arnold, May 30, 2025
