Copilot, Can You Crash That Financial Analysis?
August 22, 2025
No AI. Just a dinobaby working the old-fashioned way.
The ever-insouciant online service The Verge published a story about Microsoft, smart software, and Excel. “Microsoft Excel Adds Copilot AI to Help Fill in Spreadsheet Cells” reports:
Microsoft Excel is testing a new AI-powered function that can automatically fill cells in your spreadsheets, which is similar to the feature that Google Sheets rolled out in June.
Okay, quite specific intentionality: Fill in cells. And a dash of me-too. I like it.
However, the key statement in my opinion is:
The COPILOT function comes with a couple of limitations, as it can’t access information outside your spreadsheet, and you can only use it to calculate 100 functions every 10 minutes. Microsoft also warns against using the AI function for numerical calculations or in “high-stakes scenarios” with legal, regulatory, and compliance implications, as COPILOT “can give incorrect responses.”
I don’t want to make a big deal out of this passage, but I will do it anyway. First, Microsoft makes clear that the outputs can be incorrect. Second, don’t use it too much because I assume one will eventually have to pay to use a system that “can give incorrect responses.” In short, MSFT is throttling Excel’s Copilot. Doesn’t everyone want to explore numbers with an addled Copilot known to flub figures in a jet aircraft at 0.8 Mach?
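To make the throttle concrete, here is a minimal Python sketch of the kind of sliding-window rate limit The Verge describes (100 calls every 10 minutes). The class and method names are my own invention for illustration; Microsoft has not published how its limiter actually works.

```python
import time
from collections import deque

class CopilotThrottle:
    """Illustrative sliding-window limiter: at most 100 calls per 10 minutes."""

    def __init__(self, max_calls: int = 100, window_seconds: int = 600):
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self.calls = deque()  # timestamps of recent calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Discard timestamps that have aged out of the 10-minute window.
        while self.calls and now - self.calls[0] > self.window_seconds:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False  # the 101st call inside the window is refused
        self.calls.append(now)
        return True

throttle = CopilotThrottle()
print(throttle.allow())  # True until 100 calls pile up within 10 minutes
```

The point of the sketch: the cap is not a quality control. It just meters how often one can ask the addled Copilot for an answer.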
I want to quote from “It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes”:
Think of it. Forty-five hundred years ago, if you were a Sumerian scribe, while your calculations on the world’s first abacus might have been laborious, you could be assured they’d be correct. Four hundred years ago, if you were palling around with William Oughtred, his new slide rule may have been a bit intimidating at first, but you could know its output was correct. In the 1980s, you could have bought the cheapest, shittiest Casio-knockoff calculator you could find, and used it exclusively, for every day of the rest of your life, and never once would it give anything but a correct answer. You could use it today! But now we have Microsoft apparently determining that “unpredictability” was something that some number of its customers wanted in their calculators.
I know that I sure do. I want to use a tool that is likely to convert “high-stakes scenarios” into an embarrassing failure. I mean who does not want this type of digital Copilot?
Why do I find this Excel with Copilot software interesting?
- It illustrates that accuracy has given way to close enough for horseshoes. Impressive for a company that can issue an update that could kill one’s storage devices.
- Microsoft no longer dances around hallucinations. The company just says, “The outputs can be wrong.” But I wonder, “Does Microsoft really mean it?” What about Red Bull-fueled MBAs handling one’s retirement accounts? Yeah, those people will be really careful.
- The article does not come out and say, “Looks like the AI rocket ship is losing altitude.”
- I cannot imagine sitting in a meeting and observing the rationalizations offered to justify releasing a product known to make NUMERICAL errors.
Net net: We are learning about the quality of [a] managerial processes at Microsoft, [b] the judgment of employees, and [c] the sheer craziness of an attorney saying, “Sure, release the product. Just include an upfront statement that it will make mistakes.” Nothing builds trust more than a company anchored in customer-centric values.
Stephen E Arnold, August 22, 2025
So Much AI and Now More Doom and Gloom
August 22, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Amidst the hype about OpenAI’s ChatGPT 5, I have found it difficult to identify some quiet but, to me, meaningful signals. One, in my opinion, appears in “Sam Altman Sounds Alarm on AI Crisis That Even He Finds Terrifying.” I was hoping that the article would provide some color on the present negotiations between Sam and Microsoft. For a moment, I envisioned Sam in a meeting with the principals of the five biggest backers of OpenAI. The agenda had one item: “When do we get our money back with a payoff, Mr. Altman?”
But no. The signal is that smart software will enable fast-moving, bureaucracy-free bad actors to apply smart software to online fraud. The write up says:
[Mr.] Altman fears that the current AI-fraud crisis will expand beyond voice cloning attacks, deepfake video call scams and phishing emails. He warns that in the future, FaceTime or video fakes may become indistinguishable from reality. The alarming abilities of current AI-technology in the hands of bad faith actors is already terrifying. Scammers can now use AI to create fake identification documents, explicit photos, and headshots for social media profiles.
Okay, he is on the money, but he overlooks one use case for smart software. A bad actor can use different smart software systems and equip existing malware with more interesting features. At some point, a clever bad actor will use AI to build a sophisticated money laundering mechanism that uses the numerous new crypto currencies and their attendant blockchain systems to make the wizards at Huione Guarantee look pretty pathetic.
Can this threat be neutralized? I don’t think it can be in the short term. The reason is that AI is here and has been available for more than a year. Code generation is getting easier. A skilled bad actor can, just like a Google-type engineer, become more productive. In the mid-term, the cyber security companies will roll out AI tools that, according to one outfit whose sales pitch I listened to last week, will “predict the future.” Yeah, sure. News flash: Once a breach has been discovered, then the cyber security firms kick into action. If the predictive stuff were reliable, these outfits would be betting on horse races and investing in promising start-ups, not trying to create such a company.
Mr. Altman captures significant media attention. His cyber fraud message is a faint signal amidst the cacophony of the AI marketing blasts. By the way, cyber fraud is booming, and our research into outfits like Telegram suggests that AI is a contributing factor.
With three new Telegram-type services in development at this time, the future for bad actors looks bright, and the future for cyber security firms looks increasingly reactive. For investors and those with retirement funds, the forecast is less cheery.
Stephen E Arnold, August 22, 2025
Another Google Apology Coming? Sure, It Is Just Medical Info. Meh
August 22, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Another day and more surprising Mad Magazine-type smart software stories. I noted this essay as a cocktail party anecdote, particularly when doctors are chatting with me: “Doctors Horrified After Google’s Healthcare AI Makes Up a Body Part That Does Not Exist in Humans.”
Okay, guys like Leonardo da Vinci and Michelangelo dissected cadavers in order to get a first-hand, hands-on, and hands-in sense of what was in a human body. However, Google’s smart software does not require any of that visceral human input. The much-hyped systems developed by Google’s wizards just use fancy math to predict, from what they have ingested, what a human needs to answer a question. Simple, eh?
The cited write up says:
One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an “old left basilar ganglia infarct,” referring to a purported part of the brain — “basilar ganglia” — that simply doesn’t exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.
Big deal or not? The write up points out:
… in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google’s faux pas more than likely didn’t result in any danger to human patients, it sets a worrying precedent, experts argue.
Several observations:
- Smart software will just improve. Look at ChatGPT 5: it is doing wonders, even though rumor has it that OpenAI is going to make ChatGPT 4o available again. Progress.
- Google will apologize and rework the system so it does not make this specific medical misstep again. Yep, rules-based smart software. How tenable is that? Just consider how that worked for AskJeeves years ago.
- Ask yourself the question, “Do I want Google-infused smart software to replace my harried personal physician?”
Net net: Great anecdote for a cocktail party. I bet those doctors will find me very amusing.
Stephen E Arnold, August 22, 2025
News Flash: Google Does Not Care about Publishers
August 21, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read another Google is bad story. This one is titled “Google Might Not Believe It, But Its AI Summaries Are Bad News for Publishers.” The “news” service reports that a publishing industry group spokesperson said:
“We must ensure that the same AI ‘answers’ users see at the top of Google Search don’t become a free substitute for the original work they’re based on.”
When this sentence was spoken was the industry representative’s voice trembling? Were there tears in his or her eyes? Did the person sniff to avoid the embarrassment of a runny nose?
No idea.
The issue is that Google looks at its metrics, fiddles with the knobs and dials on its ad sales system, and launches AI summaries. Those clicks that used to go to individual sites now land on the “summary space,” which is a great place for more expensive, big advertising accounts to slap their messages. Yep, it is a return to the go-go days of television. Google is the only channel and one of the few places to offer a deal.
What does Google say? Here’s a snip from the “news” story:
"Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year," Liz Reid, VP and Head of Google Search, said earlier this month. "Additionally, average click quality has increased, and we’re actually sending slightly more quality clicks to websites than a year ago (by quality clicks, we mean those where users don’t quickly click back — typically a signal that a user is interested in the website). Reid suggested that reports like the ones from Pew and DCN are "often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in Search."
Translation: Haven’t you yokels figured out, after 20 years of responding to us, that we are in control now? We don’t care about you. If we need content, we can [a] pay people to create it, [b] use our smart software to write it, and [c] offer inducements to nonprofits, government agencies, and outfits with lots of writers desperate for recognition. TikTok has changed video, but TikTok just inspired us to do our own TikTok. Now publishers can either get with the program or get out.
PC News apparently does not know how to translate Googlese.
It’s been 20-plus years, and Google has not changed. It is just running more of the same game plan. Adapt or end up prowling LinkedIn for work.
Stephen E Arnold, August 21, 2025
What Cyber Security Professionals “Fear”
August 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
My colleague Robert David Steele (now deceased) loved to attend Black Hat. He regaled me with the changing demographics of the conference, the reaction to his often excitement-inducing presentations, and the interesting potential “resources” he identified. I was content to stay in my underground office in rural Kentucky and avoid the hacking and posturing.
I still keep up (sort of but not too enthusiastically) with Black Hat events by reading articles like “Black Hat 2025: What Keeps Cyber Experts Up at Night?” The write up explains that:
“Machines move faster than humans.”
Okay, that makes sense. The write up then points out:
“Tools like generative AI are fueling faster, more convincing phishing and social engineering campaigns.”
I concluded that cyber security professionals fear fast computers and smart software. When these two things are combined, the write up states:
The speed of AI innovation is stretching security management to its limits.
My conclusion is that the wide availability of smart software is the big “fear.”
I interpret the information in the write up from a slightly different angle. Let me explain.
First, cyber security companies have to make money to stay in business. I could name one Russian outfit that gets state support, but I don’t want to create waves. Let’s just say money is the driver of cyber security. In order to make money, the firms have to come up with fancy ways of explaining DNS analysis, some fancy math, or yet another spin on the Maltego graph software. I understand.
Second, cyber security companies are by definition reactive. So far the integration of smart software into the policeware and intelware systems I track adds some workflow enhancements; for example, grouping information and in some cases generating a brief paragraph, thus saving time. Proactive perimeter defense systems and cyber methods designed to spot insider attacks are in what I call “sort of helpful” mode. These systems can easily overwhelm the person monitoring the data signals. Firms respond by popping up a level with another layer of abstraction. Those using the systems are busy, of course, and it is not clear if more work gets done or if time is bled off to do busy-work. Cyber security firms, therefore, are usually not in proactive mode except for marketing.
Third, cyber security firms are consolidating. I think about outfits like Palo Alto or the private equity roll-ups. The result is that bureaucratic friction is added to the technology development these firms must do. Just figuring out how to snag data from the latest and greatest Dark Web secret forum and actually getting access to a Private Channel on Telegram disseminating content that is illegal in many jurisdictions takes time. With smart software, bad actors can experiment. The self-appointed gatekeepers do little to filter these malware activities because some bad actors are customers of the gatekeepers. (No, I won’t name firms. I don’t want to talk to lawyers or inflamed cyber security firms’ leadership.) My point is that consolidation creates bureaucratic work. That activity puts a foot on the fast-moving cyber firm’s brakes. Reaction time slows.
What does this mean?
I think the number one fear for cyber security professionals may be the awareness that bad actors with zero bureaucratic, technical, or financial limits can use AI to make old wine new again. Recently a major international law enforcement organization announced the shutdown of a particular stealer. Unfortunately that stealer is currently being disseminated via Web search systems with live links to the Telegram-centric vendor pumping the malware into thousands of unsuspecting Telegram users each month.
What happens when that “old school” stealer is given some new capabilities by one of the smart software tools? The answer is, “Cyber security firms may have to hype their capabilities to an even greater degree than they now do.” Behind the scenes, the stage is now set for developer burnout and churn.
The fear, then, is a nagging sense that bad guys may be getting a tool kit to punch holes in what looks like a slam-dunk business. I am probably wrong because I am a dinobaby. I don’t go to many conferences. I don’t go to sales meetings. I don’t meet with private equity people. I just look at how AI makes asymmetric cyber warfare into a tough game. One should not take a squirt gun to a shoot-out with a bad actor who works without bureaucratic and financial restraints and is armed with an AI system.
Stephen E Arnold, August 21, 2025
The Risks of Add-On AI: Apple, Telegram, Are You Paying Attention?
August 20, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Name three companies trying to glue AI onto existing online services. Here’s my answer:
- Amazon
- Apple
- Telegram.
There are others, but each of these has a big “tech rep” and commands respect from other wizards. We know that Tim Apple suggested that the giant firm had AI pinned to the mat and whimpering, “Let me be Siri.” Telegram mumbled about Nikolai working on AI. And Amazon? That company flirted with smart software in its SageMaker announcements years ago. Now it has upgraded Alexa, the device most used as a kitchen timer.
“Amazon’s Rocky Alexa+ Launch Might Justify Apple’s Slow Pace with Next-Gen Siri” ignores Telegram (of course; who really cares?) and uses Amazon’s misstep to apologize for Apple’s goofs. The write up says:
Apple has faced a similar technical challenge in its own next-generation Siri project. The company once aimed to merge Siri’s existing deterministic systems with a new generative AI layer but reportedly had to scrap the initial attempt and start over. … Apple’s decision to delay shipping may be frustrating for those of us eager for a more AI-powered Siri, but Amazon’s rocky launch is a reminder of the risks of rushing a replacement before it’s actually ready.
Why does this matter?
My view is that Apple’s and Amazon’s missteps make clear that bolting on, fitting in, and snapping on smart software is more difficult than it seemed. I also believe that the two firms over-estimated their technical professionals’ ability to just “do” AI. Plus, both US companies appear to be falling behind in the “AI race.”
But what about Telegram? That company is in the same boat. Its AI innovations are coming from its third-party developers, who have been using Telegram’s platform as a platform. Telegram itself has missed opportunities to reduce the coding challenge for its developers with its focus on old-school programming languages, not AI-assisted coding.
I think that it is possible that these three firms will get their AI acts together. The problem is that AI-native solutions for the iPhone, the Telegram “community,” and Amazon’s own hardware products have yet to appear. The fumbles illustrate a certain weakness in each firm. Left unaddressed, these weaknesses can be debilitating in an uncertain economic environment.
But the mantra “go fast” or the jargon “accelerate” is not in line with the actions of these three companies.
Stephen E Arnold, August 20, 2025
Cyber Security: Evidence That Performance Is Different from Marketing
August 20, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
In 2022, Google bought a cyber security outfit named Mandiant. The firm had been around since 2004, but when Google floated more than $5 billion for the company, it was time to sell.
If you don’t recall, Google operates a large cloud business and is trying diligently to sell to Microsoft customers in the commercial and government sector. A cyber security outfit would allow Google to argue that it would offer better security for its customers and their users.
Mandiant’s business was threat intelligence. The idea is that Mandiant would monitor forums, the Web, and any other online information about malware and other criminal cyber operations. As an added bonus, Mandiant would blend automated security functions with its technology. Wham, bam! Slam dunk, right?
I read “Google Confirms Major Security Breach After Hackers Linked To ShinyHunters Steal Sensitive Corporate Data, Including Business Contact Information, In Coordinated Cyberattack.” First, a disclaimer. I have no idea if this WCCF Tech story is 100 percent accurate. It could be one of those Microsoft “1,000 Russian programmers are attacking us” plays. On the other hand, it will be fun to assume that some of the information in the cited article is accurate.
With that as background, I noted this passage:
The tech giant has recently confirmed a data breach linked to the ShinyHunters ransomware group, which targeted Google’s corporate Salesforce database systems containing business contact information.
Okay. Google’s security did not work. A cloud customer’s data were compromised. The assertion that Google’s security is better than or equal to Microsoft’s is tough for me to swallow.
Here’s another passage:
As per Google’s Threat Intelligence Group (GTIG), the hackers used a voice phishing technique that involved calling employees while pretending to be members of the internal IT team, in order to have them install an altered version of Salesforce’s Data Loader. By using this technique, the attackers were able to access the database before their intrusion was detected.
A human fooled another human. The automated systems were flummoxed. The breach allegedly took place.
Several observations are warranted:
- This is security until a breach occurs. I am not sure that customers expect this type of “footnote” to their cyber security licensing mumbo jumbo. The idea is that Google should deliver a secure service.
- Mandiant, like other threat intelligence services, allows the customer to assume that the systems and methods generally work. That’s true until they don’t.
- Bad actors have an advantage. Armed with smart software and tools that can emulate my dead grandfather, the humans remain a chink in the otherwise much-hyped armor of an outfit like Google.
What this example, even if only partly accurate, makes clear is that cyber security marketing performs better than the systems some of the firms sell. Consider that the victim was Google. That company has touted its technical superiority for decades. Then Google buys extra security. The combo delivers what? Evidence that believing the cyber security marketing may do little to reduce the vulnerability of an organization. What’s notable is that the missteps were Google’s. Microsoft may enshrine this breach case and mount it on the wall of every cyber security employee’s cubicle.
I can imagine hearing a computer-generated voice emulating Bill Gates saying, “It wasn’t us this time.”
Stephen E Arnold, August 20, 2025
Inc. Magazine May Find that Its MSFT Software No Longer Works
August 20, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
I am not sure if anyone else has noticed that one must be very careful about making comments. A Canadian technology dude found himself embroiled with another Canadian technology dude. To be frank, I did not understand why the Canadian tech dudes were squabbling, but the dust up underscores the importance of the language, tone, rhetoric, and spin one puts on information.
An example of a sharp-toothed article which may bite Inc. Magazine on the ankle is the story “Welcome to the Weird New Empty World of LinkedIn: Just When Exactly Did the World’s Largest Business Platform Turn into an Endless Feed of AI-Generated Slop?” My teeny tiny experience as a rental at the world’s largest software firm taught me three lessons:
- Intelligence is defined many ways. I asked a group of about 75 listening to one of my lectures, “Who is familiar with Kolmogorov?” The answer for that particular sampling of Softies was exactly zero. Subjective impression: Rocket scientists? Not too many.
- Feistiness. The fellow who shall remain nameless dragged me to a weird mixer thing in one of the buildings on the “campus.” One person (whose name and honorifics I do not remember) said, “Let me introduce you to Mr. X. He is driving the Word project.” I replied with a smile. We walked to the fellow, were introduced, and I asked, “Will Word fix up its autonumbering?” The Word Softie turned red, asked the fellow who introduced me to him, “Who is this guy?” The Word Softie stomped away and shot deadly sniper eyes at me until we left after about 45 minutes of frivolity. Subjective impression: Thin skin. Very thin skin.
- Insecurity. At a lunch with a person whom I had met when I was a contractor at Bell Labs and several other Softies, the subject of enterprise search came up. I had written the Enterprise Search Report, and Microsoft had purchased copies. Furthermore, I wrote “Managing Electronic Information Projects” with Susan Rosen, one of the senior librarians at Microsoft. While waiting for the rubber chicken, a Softie asked me about Fast Search & Transfer, which Microsoft had just purchased. The question posed to me was, “What do you think about Fast Search as a technology for SharePoint?” I said, “Fast Search was designed to index Web sites. The enterprise search functions were add-ons. My hunch is that getting the software to handle the data in SharePoint will be quite difficult.” The response was, “We can do it.” I said, “I think that BA Insight, Coveo, and a couple of other outfits in my Enterprise Search Report will be targeting SharePoint search quickly.” The person looked at me and said, “What do these companies do? How quickly do they move?” Subjective impression: Fire up ChatGPT and get some positive mental health support.
The cited write up stomps into a topic that will probably catch some Softies’ attention. I noted this passage:
The stark fact is that reach, impressions and engagement have dropped off a cliff for the majority of people posting dry (read business-focused) content as opposed to, say, influencer or lifestyle-type content.
The write up adds some data about usage of LinkedIn:
average platform reach had fallen by no less than 50 percent, while follower growth was down 60 percent. Engagement was, on average, down an eye-popping 75 percent.
The main point of the article, in my opinion, is that LinkedIn does not filter AI content. The use of AI content produces a positive for the emitter of the AI content. The effect is to convert a shameless marketing channel into a conduit for search-engine-optimized sales information.
The question “Why?” is easy to figure out:
- Clicks if the content is hot
- Engagement if the other LinkedIn users and bots become engaged or coupled
- More zip in what is essentially a one-dimensional, Web 1.0 service.
How will this write up play out? Again the answers strike me as obvious:
- LinkedIn may have some Softies who will carry a grudge toward Inc. Magazine
- Microsoft may be distracted by its Herculean efforts to make its AI “plays” sustainable as outfits like Amazon say, “Hey, use our cloud services. They are pretty much free.”
- Inc. may take a different approach to publishing stories with some barbs.
Will any of this matter? Nope. Weird and slop do that.
Stephen E Arnold, August 20, 2025
Smart Software Fix: Cash, Lots and Lots of Cash
August 19, 2025
No AI. Just a dinobaby working the old-fashioned way. But I asked ChatGPT one question. Believe it or not.
If you have some spare money, Sam Altman aka Sam AI-Man wants to talk with you. It is well past two years since OpenAI forced the 20-year-old Google to go back to the high school lab. Now OpenAI is dealing with the reviews of ChatGPT 5. The big news in my opinion is that quite a few people are annoyed with the new smart software from the money-burning Bessemer furnace at 3180 18th Street in San Francisco. (I have heard that a satellite equipped with an infrared camera gets a snazzy image of the heat generated from the immolation of cash. There are also tiny red dots scattered around the SF financial district. Those, I believe, are the burning brain cells of the folks who have forked over dough to participate in Sam AI-Man’s next big thing.)
“As People Ridicule GPT-5, Sam Altman Says OpenAI Will Need ‘Trillions’ in Infrastructure” addresses the need for cash. The write up says:
Whether AI is a bubble or not, Altman still wants to spend a certifiably insane amount of money building out his company’s AI infrastructure. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman told reporters.
Trillions is a figure that most people cannot map onto everyday life. Years ago, when I was an indentured servant at a consulting firm, I worked on a project that sought to figure out what types of decisions consumed the most time for Boards of Directors of Fortune 1000 companies. The results surprised me then and still do.
Boards of directors spent the most time discussing relatively modest-scale projects; for example, expanding a parking lot or developing a list of companies for potential joint ventures. Really big deals, like spending large sums to acquire a company, were often handled in swift, decisive votes.
Why?
Boards of directors, like most average people, cannot relate to massive numbers. It is easier to think in terms of a couple hundred thousand dollars to lease a parking lot than borrow billions and buy a giant allegedly synergistic company.
When Mr. Altman uses the word “trillions,” I think he is unable to conceptualize the amount of money represented in his casual “you should expect OpenAI to spend trillions…”
Several observations:
- AI is useful in certain use cases. Will AI return the type of payoff that Google’s charge-’em-every-which-way-from-Sunday advertising model does?
- AI appears to produce incorrect outputs. I liked the item about oncology docs who reported losing diagnostic skills when relying on AI assistants.
- AI creates negative mental health effects. One old person, younger than I, believed a chat bot cared for him. On the way to meet his digital friend, he flopped over dead. Anticipatory anxiety or a use case for AI sparking nutso behavior?
What’s a trillion look like? Answer: 1,000,000,000,000.
How many railroad boxcars would it take to move $1 trillion from a collection point like Denver, Colorado, to downtown San Francisco? Answer from ChatGPT: you would need 10,000 standard railroad boxcars. This calculation is based on the weight and volume of the bills, as well as the carrying capacity of a typical 50-foot boxcar. The train would stretch over 113.6 miles—about the distance from New York City to Philadelphia!
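Since the theme here is arithmetic going sideways, a quick back-of-the-envelope check seems fair. The constants below are rough assumptions of mine (a US banknote weighs about one gram and occupies about 1.1 cubic centimeters; a 50-foot boxcar carries roughly 90,000 kilograms and 140 cubic meters of payload), not figures from ChatGPT or from any article.

```python
# Back-of-the-envelope check of the "$1 trillion by rail" figure.
# All constants are rough assumptions for illustration only.
TOTAL_DOLLARS = 1_000_000_000_000
BILL_VALUE = 100                 # ship the money as $100 bills
BILL_WEIGHT_G = 1.0              # a US banknote weighs ~1 gram
BILL_VOLUME_CM3 = 1.1            # ~156 x 66 x 0.11 mm per note
BOXCAR_CAPACITY_KG = 90_000      # typical boxcar payload
BOXCAR_VOLUME_M3 = 140           # interior of a 50-foot boxcar
BOXCAR_LENGTH_FT = 50

bills = TOTAL_DOLLARS // BILL_VALUE                # 10 billion notes
weight_kg = bills * BILL_WEIGHT_G / 1000           # ~10,000 metric tons
volume_m3 = bills * BILL_VOLUME_CM3 / 1_000_000    # ~11,000 cubic meters

# Need enough cars for both the weight and the bulk of the cash.
cars = max(weight_kg / BOXCAR_CAPACITY_KG, volume_m3 / BOXCAR_VOLUME_M3)
print(f"Boxcars needed: ~{cars:.0f}")                                # ~111
print(f"Train length: ~{cars * BOXCAR_LENGTH_FT / 5280:.2f} miles")  # ~1 mile
```

Under those assumptions the cash fits in roughly a hundred boxcars and about a mile of train, two orders of magnitude below ChatGPT’s 10,000-car, 113.6-mile answer. See the observation above about incorrect outputs.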
Let’s talk about expanding the parking lot.
Stephen E Arnold, August 19, 2025
The Bubbling Pot of Toxic Mediocrity? Microsoft LinkedIn. Who Knew?
August 19, 2025
No AI. Just a dinobaby working the old-fashioned way.
Microsoft has a magic touch. The company gets into Open Source; the founder “gits” out. Microsoft hires an engineer from Intel, asks some questions, and the new hire is whipped with a $34,000 fine and two years of mom looking in his drawers.
Now I read “Sunny Days Are Warm: Why LinkedIn Rewards Mediocrity.” The write up includes an outstanding metaphor, in my opinion: Toxic Mediocrity. The write up says:
The vast majority of it falls into a category I would describe as Toxic Mediocrity. It’s soft, warm and hard to publicly call out but if you’re not deep in the bubble it reads like nonsense. Unlike it’s cousins ‘Toxic Positivity’ and ‘Toxic Masculinity’ it isn’t as immediately obvious. It’s content that spins itself as meaningful and insightful while providing very little of either. Underneath the one hundred and fifty words is, well, nothing. It’s a post that lets you know that sunny days are warm or its better not to be a total psychopath. What is anyone supposed to learn from that?
When I read a LinkedIn post it is usually referenced in an article I am reading. I like to follow these modern slippery footnotes. (If you want slippery, try finding interesting items about Pavel Durov in certain Russian sources.)
Here’s what I learn:
- A “member” makes clear that he or she has information of value. I must admit that once in a while a useful post will turn up. Not often, but it has happened. I do know the person believes something about himself or herself. Try asking a GenAI about its personal “beliefs.” Let me know how that works.
- Members in a specific group with an active moderator often post items of interest. Instead of writing my unread blog, these individuals identify an item and use LinkedIn as a “digital bulletin board” for people who shop at the same sporting goods store in rural Kentucky. (One sells breakfast items and weapons.)
- I get a sense of the jargon people use to explain their expertise. I work alone. I am writing a book. I don’t travel to conferences or client locations now. I rely on LinkedIn as the equivalent of going to a conference mixer and listening to the conversations.
That’s useful. I have a person who interacts on LinkedIn for me. I suppose my “experience” is therefore different from someone who visits the site, posts, and follows the antics of LinkedIn’s marketers as they try to get the surrogate me to pay to do what I do. (Guess what? I don’t pay.)
I noted this statement in the essay:
Honestly, the best approach is to remember that LinkedIn is a website owned by Microsoft, trying to make money for Microsoft, based on time spent on the site. Nothing you post there is going to change your career. Doing work that matters might. Drawing attention to that might. Go for depth over frequency.
I know that many people rely on LinkedIn to boost their self confidence. One of the people who worked for me moved to another city. I suggested that she give LinkedIn a whirl. She wrote interesting short items about her interests. She got good feedback. Her self confidence ticked up, and she landed a successful job. So there’s a use case for you.
You should be able to find on LinkedIn a short item noting that a new post has appeared on my blog. Write me, and my surrogate will write you back with instructions about how to contact me. Why don’t I conduct conversations on LinkedIn? Have you checked out the telemetry functions in Microsoft software?
Stephen E Arnold, August 19, 2025