Grok and the Dog Which Ate the Homework
May 16, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I remember the Tesla full self-driving service. Is that available? I remember the big SpaceX rocket ship. Are those blowing up after launch? I now have to remember an “unauthorized modification” to xAI’s smart software Grok. Wow. So many items to tuck into my 80-year-old brain.
I read “xAI Blames Grok’s Obsession with White Genocide on an Unauthorized Modification.” Do I believe this assertion? Of course, I believe everything I read on the sad, ad-choked, AI content bedeviled Internet.
Let’s look at the gems of truth in the report.
First, what is an unauthorized modification of complex software humming along happily in Silicon Valley and — of all places — Memphis, a lovely town indeed? The unauthorized modification — whatever that is — caused a “bug in its AI-powered Grok chatbot.” If I understand this, a savvy person changed something he, she, or it was not supposed to modify. That change then caused a “bug.” I thought Grace Hopper nailed the idea of a “bug” when she pulled an insect from one of the dinobaby’s favorite systems, the Harvard Mark II. Are there insects at the X shops? Are these unauthorized insects interacting with unauthorized entities making changes that propagate more bugs? Yes.
Second, the malfunction occurs when “@grok” is used as a tag. I believe this because the “unauthorized modification” fiddled with the user mappings and jiggled scripts to allow the “white genocide” content to appear. This is definitely not hallucination; it is an “unauthorized modification.” (Did you know that the version of Grok available via x.com cannot return information from X.com (formerly Twitter) content? Strange? Of course not.)
Third, I know that Grok, xAI, and the other X entities have “internal policies and core values.” Violating these is improper. The company — like other self-regulated entities — “conducted a thorough investigation.” Absolutely. Coders at X are well equipped to perform investigations. That’s why X.com personnel are in such demand as advisors to law enforcement and cyber fraud agencies.
Finally, xAI is going to publish system prompts on Microsoft GitHub. Yes, that will definitely curtail the unauthorized modifications and bugs at X entities. What a bold solution.
The cited write up is definitely not on the same page as this dinobaby. The article reports:
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
This negative report may be expanded to make the case that an exploding rocket or a wonky full self-driving vehicle is not safe. Everyone must believe X outfits. The company is a paragon of veracity, excellent engineering, and delivering exactly what it says it will provide. That is the way you must respond.
Stephen E Arnold, May 16, 2025
An Agreeable Google: Will It Write Checks with a Sad, Wry Systemic Smile?
May 14, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
Did you see the news about Google’s probable check writing?
“Google Settles Black Employees’ Racial Bias Lawsuit for $50 Million” reports:
According to the complaint, Black employees comprised only 4.4% of Google’s workforce and 3% of its leadership in 2021. The plaintiff April Curley, hired to expand outreach to historically Black colleges, said Google denied her promotions, stereotyped her as an “angry” Black woman, and fired her after six years as she prepared a report on its alleged racial bias. Managers also allegedly denigrated Black employees by declaring they were not “Googley” enough or lacked “Googleyness,” which the plaintiffs called racial dog whistles.
The little news story includes the words “racially biased corporate culture” and “systemic racial bias.” Is this the beloved “do no evil” company with the cheerful kindergarten colored logo? Frankly, this dinobaby is shocked. This must be an anomaly in the management approach of a trusted institution based on advertising.
Well, there is this story from Bloomberg, the terminal folks: “Google to Pay Texas $1.4 Billion to End Privacy Cases.” As I understand it,
Google will pay the state of Texas $1.375 billion to resolve two privacy lawsuits claiming the tech giant tracks Texans’ personal location and maintains their facial recognition data, both without their consent. Google announced the settlement Friday, ending yearslong battles with Texas Attorney General Ken Paxton (R) over the state’s strict laws on user data.
Remarkable.
The Dallas Morning News reports that Google’s position remains firm, resolute, and Googley:
The settlement doesn’t require any new changes to Google’s products, and the company did not admit any wrongdoing or liability. “This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” said José Castañeda, a Google spokesperson. “We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”
Absolutely.
Imagine a company with those kindergarten colors in its logos finding itself snared in what seem to me to be grade school issues. Google must be misunderstood like one of those precocious children who solve math problems without showing their work. It’s just systemic, perhaps?
Stephen E Arnold, May 14, 2025
Ask Siri: How Does Apple Avoid a Tariff Catastrophe
May 7, 2025
Visualize Tim Apple. He asks Siri, “Siri, how do I guarantee Apple’s dominance in the mobile device sector?”
Siri answers, “Just ignore reality.”
The only problem is that Siri is one example of Apple’s outstanding technology, management decision making, and financial wizardry. Too bad the outputs are incorrect.
Let’s look at one good example:
Apple’s immense success is underpinned by the global supply chain it has spent decades cultivating. Now, President Trump may have turned that asset into a liability with the stroke of a pen. The BBC explains, “Designed in US, Made in China: Why Apple is Stuck.” Though the president backtracked a bit and exempted smartphones and computers from the tariffs, those final products are just the last step of Apple’s production infrastructure. Reporter Annabelle Liang writes:
“While the sleek rectangle that runs many of our lives is indeed designed in the United States, it is likely to have come to life thousands of miles away in China: the country hit hardest by US President Donald Trump’s tariffs, now rising to 245% on some Chinese imports. Apple sells more than 220 million iPhones a year and by most estimates, nine in 10 are made in China. From the glossy screens to the battery packs, it’s here that many of the components in an Apple product are made, sourced and assembled into iPhones, iPads or Macbooks. Most are shipped to the US, Apple’s largest market. Luckily for the firm, Trump suddenly exempted smartphones, computers and some other electronic devices from his tariffs last week. But the comfort is short-lived. The president has since suggested that more tariffs are coming: ‘NOBODY is getting ‘off the hook’,’ he wrote on Truth Social, as his administration investigated ‘semiconductors and the WHOLE ELECTRONICS SUPPLY CHAIN’.”
Such a stable genius. Meanwhile, Apple is vulnerable to competition from Chinese firms that benefit from the infrastructure Apple fostered. We learn:
“‘Now that Apple has cultivated China’s electronic manufacturing capabilities, Huawei, Xiaomi, Oppo and others can reuse Apple’s mature supply chain,’ according to Mr. Lin. Last year, Apple lost its place as China’s biggest smartphone seller to Huawei and Vivo.”
Way to kick a billionaire when he is down. It seems Tim Cook may now face Apple sauce, not Apple success. Did he not kiss the ring sufficiently? The firm now plans to invest $500 billion in the US, but we doubt even that sum will relocate much of Apple’s entrenched network to these shores. Or do much to placate the tariffer-in-chief. I want to write about ignoring the court decision regarding its online store. That’s another example of Ask Siri wisdom.
Cynthia Murrell, May 7, 2025
AI-Fueled Buggy Whip Executive Cannot Be Replaced by AI: A Case Study
May 2, 2025
I read about a very optimistic executive who owned buggy whip companies in the US. One day a horseless carriage, today known as a Tesla, raced past his office. The person telling me the story remembered the anecdote from her required reading in her first-year MBA strategic thinking class. The owner of the buggy whip company, she said, declared, “Those newfangled machines will not replace the horse.”
The modern version of this old chestnut appears in “Marc Andreessen Says One Job Is Mostly Safe From AI: Venture Capitalist.” I hope Mr. Andreessen is correct. The write up states:
In the future, AI will apparently be able to do everybody’s job—except Marc’s.
Here’s the logic, according to the write up:
Andreessen described his job as a nuanced combination of “intangible” skills, including psychological analysis of the entrepreneurs he works with: “A lot of it is psychological analysis, like, ‘Who are these people?’ ‘How do they react under pressure?’ ‘How do you keep them from falling apart?’ ‘How do you keep them from going crazy?’ ‘How do you keep from going crazy yourself?’ You know, you end up being a psychologist half the time.” “So, it is possible—I don’t want to be definitive—but it’s possible that that is quite literally timeless. And when, you know, when the AI is doing everything else, that may be one of the last remaining fields that people are still doing.”
I found this paragraph from the original story one that will spark some interest; to wit:
Andreessen’s powers of self-delusion are well known. His Techno-Optimist’s Manifesto, published a few years ago, was another great window into a mind addled by too much cash and too little common sense. If you’re one of Silicon Valley’s Masters of the Universe, I guess having weird, self-serving views just comes with the territory.
Several observations:
- In my opinion, some VCs will continue to use AI. Through use and greater familiarity, the technology will gain some traction. At some point, AI will handle jobs once done by wild-eyed people hungry for riches.
- Start-up VCs may rely upon AI for investment decisions, not just for grinding through the business plans of fund seekers. If those “experiments” show promise, whoever owns the smart VC may develop a next generation VC business. Ergo: Marc can stay, but he won’t do anything.
- Someone may stumble upon an AI VC workflow process that works faster, better, and more efficiently. If that firm emerges, Mr. Andreessen can become the innovator identified with digital horse accelerators.
How does one say “Giddy up” in AI-system-to-AI-system communication lingo? Answer maybe: Dweep, dweep, dupe?
Stephen E Arnold, May 2, 2025
The 10X Engineer? More Trouble Than They Are Worth
April 25, 2025
Dinobaby, here. No smart software involved unlike some outfits. I did use Sam AI-Man’s art system to produce the illustration in the blog post.
I like it when I spot a dinobaby fellow traveler. That happened this morning (March 28, 2025) when I saw the headline “In Praise of Normal Engineers: A Software Engineer Argues Against the Myth of the 10x Engineer.”
The IEEE Spectrum article states:
I don’t have a problem with the idea that there are engineers who are 10 times as productive as other engineers. The problems I do have are twofold.
Everyone is amazed that the 10X engineer does amazing things. Does the fellow become the model for other engineers in the office? Not for the other engineers. But the boss loves this super performer. Thanks, OpenAI, good enough.
The two “problems” — note the word “problems” — are:
- “Measuring productivity.” That is an understatement, not a problem. With “engineers” working from home or, in my case, in a far-off foreign country, in a hospital waiting room, or playing video games six feet from me, productivity is a slippery business.
- “Teams own software.” Alas, that is indeed true. In 1962, I used IBM manuals to “create” a way to index. The professor who paid me $3 / hour was thrilled. I kept doing this indexing thing until the fellow died when I started graduate school. Since then, whipping up software confections required “teams.” Why? I figured out that my indexing trick was pure good fortune. After that, I made darned sure there were other eyes and minds chugging along by my side.
The write up says:
A truly great engineering organization is one where perfectly normal, workaday software engineers, with decent skills and an ordinary amount of expertise, can consistently move fast, ship code, respond to users, understand the systems they’ve built, and move the business forward a little bit more, day by day, week by week.
I like this statement. And here’s another from the article:
The best engineering orgs are not the ones with the smartest, most experienced people in the world. They’re the ones where normal software engineers can consistently make progress, deliver value to users, and move the business forward. Places where engineers can have a large impact are a magnet for top performers. Nothing makes engineers happier than building things, solving problems, and making progress.
Happy workers are magnets.
Now let’s come back to the 10X idea. I used to work at a company which provided nuclear engineering services to the US government and a handful of commercial firms engaged in the nuclear industry. We had a real live 10X type. He could crank out “stuff” with little effort. Among the 600 nuclear engineers employed at this organization, he was the 10X person. Everyone liked him, but he did not have much to say. In fact, his accent made what he said almost impenetrable. He just showed up every day in a plaid coat, doodled on a yellow pad, and handed dot points, a flow chart, or a calculation to another nuclear engineer and went back to doodling.
Absolutely no one at the nuclear engineering firm wanted to be a 10X engineer. From my years of working at this firm, he was a bit of a one-off. When suits visited, a small parade would troop up to his office on the second floor. He shared that office with my close friend, Dr. James Terwilliger. Everyone would smile and look at the green board. Then they would troop out and off to lunch.
I think the presence of this 10X person was a plus for the company. The idea of trying to find another individual who could do the nuclear “stuff” like this fellow was laughable. For some reason, the 10X person liked me, and I got the informal job of accompanying him to certain engagements. I left that outfit after several years to hook up with a blue chip consulting firm. I lost track of the 10X person, but I had the learnings necessary to recognize possible 10X types. That was a useful addition to my bag of survival tips as a minus 3 thinker.
Net net: The presence of a 10X is a plus. Ignoring the other 599 engineers is a grave mistake. The errors of this 10X approach are quite evident today: Unchecked privacy violations, monopolistic behaviors enabled by people who cannot set up a new mobile phone, and a distortion of what it means to be responsible, ethical, and moral.
The 10X concept is little more than a way to make the top one percent the reason for success. Their presence is a positive, but building an organization to rely on 10X anything is one of the main contributing factors to the slow degradation of computer services, ease of use, and, in my opinion, social cohesion.
Engineers are important. The unicorn engineers are important. Balance is important. Without balance, “stuff” goes off the rails. And that’s where we are.
Stephen E Arnold, April 25, 2025
Management? Never Been Easier
April 4, 2025
Age-Old Management Techniques That Still Happen
Millennials and Gen Z focus on prioritizing mental health and not putting up with the same BS as earlier generations. Some might see these priorities as infantile, but past generations didn’t like the BS either. Inc. breaks down “4 Really Bad Management Behaviors That Drive Good Employees Away,” and the article really drives home how these management techniques have been in place for eons.
The article’s author, Marcel Schwantes, starts with a very wise statement: “Not everyone in management should be there.” He then breaks down four poor management techniques…er…behaviors that employees have been fed up with since forever. The first is shooting down their workers’ ideas. This happens on a daily basis, and no one in the modern workforce will escape it at some point in their career. Sometimes ideas need to be shot down, but other times a different way of doing something is just what the doctor ordered.
The second is treating people like numbers. This is a totalitarian perspective that runs rampant in corporate America, China, and North Korea. Businesses want measurable results and bodies in place to fill quotas. This doesn’t always lead to good results, and the number one thing companies learn is how many people they’ll turn over in a year.
Micromanaging is number three. Micromanagers believe everything has to be done by the book and that no one can do anything better than they can. They watch their employees like hawks, creating a stifling environment. This creates stress and more mistakes on the part of the employees. No one thrives with micromanagers.
The last poor management behavior is hoarding information and only sharing it on a need-to-know basis, if at all. When this technique thrives in a work environment, employees are working in the dark with a flashlight that only turns on once in a while.
Schwantes finishes with:
“Want to keep your best people? Treat them like people. Connect with them. Support them. Give them what they need to thrive. Because when you invest in your team, serve their needs, and remove obstacles from their path, they’ll run through walls for you—and that’s how you grow into the leader everyone wants to work for.”
No matter how far technology and humans advance, we’ll always be dealing with these management behaviors. Can anyone else imagine cavemen…er…cave people dealing with these? They probably didn’t put up with them and whacked their managers over the head with a club.
Whitney Grace, April 4, 2025
Original Research: Not-So-Original Assertions about Content Appropriation
April 2, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
The Social Science Research Council published the 30-plus page report “Beyond Public Access in LLM Pre-Training Data.” The subtitle reveals the principal finding: “Non-Public Book Content in OpenAI’s Models.”
The write up states:
Using a legally obtained dataset of 34 copyrighted O’Reilly Media books, we apply the DE-COP membership inference attack method to investigate whether OpenAI’s large language models were trained on copyrighted content without consent. Our AUROC scores show that GPT-4o, OpenAI’s more recent and capable model, demonstrates strong recognition of paywalled O’Reilly book content (AUROC = 82%), compared to OpenAI’s earlier model GPT-3.5 Turbo. In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples. GPT-4o Mini, as a much smaller model, shows no knowledge of public or non-public O’Reilly Media content when tested (AUROC ≈ 50%). Testing multiple models, with the same cutoff date, helps us account for potential language shifts over time that might bias our findings. These results highlight the urgent need for increased corporate transparency regarding pre-training data sources as a means to develop formal licensing frameworks for AI content training.
I want to mention that the DE-COP method provides one way to determine whether a specific data record was part of the training dataset for a machine learning model. The result of the SSRC’s research suggests that as OpenAI enhanced its models, the OpenAI organization appears to have used non-public content. Smaller (less capable) OpenAI models seemed to know less about the test set; the bigger models knew more.
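For readers who want the mechanics, below is a minimal sketch of the AUROC step in a DE-COP-style membership test. This is not the SSRC authors’ code; the labels, guess rates, and the scikit-learn choice are assumptions made purely for illustration.

```python
# Minimal sketch, not the SSRC study's code. DE-COP-style idea: for each book
# excerpt, the model answers multiple-choice questions asking it to pick the
# verbatim passage out of paraphrases. A higher guess rate on suspected
# training data than on books the model could not have seen suggests the
# model was exposed to that content. AUROC summarizes how well the guess
# rates separate the two groups (0.5 = chance, 1.0 = perfect separation).
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = excerpt from a suspected in-training (e.g., paywalled) book,
# 0 = excerpt from a book published after the training cutoff. Placeholder values.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Per-excerpt guess rates from the multiple-choice probe. Placeholder values.
guess_rates = np.array([0.81, 0.74, 0.66, 0.90, 0.27, 0.31, 0.22, 0.35])

auroc = roc_auc_score(labels, guess_rates)
print(f"AUROC = {auroc:.0%}")
```

An AUROC near 50 percent, like the GPT-4o Mini result quoted above, means the guess rates do not separate public from non-public samples; the 82 percent figure for GPT-4o is the kind of separation that prompted the researchers’ conclusion.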
The write up concludes that developers of smart software are sucking down data without obtaining permission or entering into licensing deals with publishers, copyright holders, or entities working on content not yet available as public content like this dinobaby blog post.
To sum up: A not-so-original assertion that certain organizations just take content and ignore rights is apparently accurate in this set of research. OpenAI accelerates. Speed is a positive. Breaking speed limits is believed to be no big deal.
A bit of a flashback: Over 40 years ago, I participated in a series of meetings to license the right to create abstracts of original articles from more than 1000 different publications. Our efforts were successful. Most publishers of academic business and management publications knew zero about online information services. The idea of summarizing an eight page article about executive compensation was unfamiliar to most of the people with whom our organization interacted over a period of years. Nevertheless, the owner of the database unit of the Courier Journal & Louisville Times Company insisted that we get signed agreements. The process was time consuming and expensive. Some outfits had representatives. So a negotiation went through layers of bureaucracy. Other publishers wanted a sit down meeting and a dog-and-pony show to explain why we wanted to boil down a 600 word article to about 150 words. There were worries about the impact of online access. There were worries about control of editorial content. There were worries about signing away rights. We couldn’t assuage worries, but we chugged along. The reputations of the Courier Journal and Barry Bingham Jr. helped but our small database operation had to follow the rules.
If the data in the cited document are accurate, the attitude of Barry Bingham Jr. would not have gained much traction at OpenAI. Instead of my getting a pat on the head for setting up the licensing deals with the able help of other Courier Journal professionals, I would have gotten kicked on the shin or worse.
Several observations:
- Observing content ownership remains important despite the acceleration toward illegal behaviors in the online sector.
- Using show-and-tells and explaining the benefits of participating in digital innovations remains important. Knowledge minimizes anger and other negative reactions to pre-emptive behavior.
- Managing the process is as important as demonstrating an unwillingness to conform to industry, social, and legal norms.
Let’s assume that the findings in the SSRC report are off by 50 percent. Does that change what appears to be a strong possibility of some fancy dancing at the OpenAI disco?
My conclusion is that taking short cuts is part of the Silicon Valley approach of doing whatever and just saying, “Wow, I am sorry.”
I asked ChatGPT 4 via You.com this question:
Should I create an online product and just use content from the Harvard Business Review without permission?
Here’s the response:
No, you should not create an online product using content from the Harvard Business Review (HBR) without permission. Doing so would likely violate copyright laws and HBR’s usage policies. Here’s why:
HBR’s Copyright and Usage Policies
- HBR Material is Copyrighted: Harvard Business Review explicitly states that its content is protected by copyright, and no part of its publications may be reproduced without permission.
- Usage Restrictions: HBR has strict policies regarding how its content can be used. For example, even in academic settings, HBR articles cannot be freely distributed unless done through specific channels like creating course packs via "Harvard Business Publishing for Educators."
- Permission is Required: If you want to use HBR content, you must obtain explicit permission. This includes reproducing, distributing, or using their material in any form.
Legal and Ethical Implications
- Copyright Infringement: Using HBR content without permission would likely constitute copyright infringement, which can result in legal action, fines, or other penalties.
- Ethical Concerns: Beyond legal issues, using someone else’s intellectual property without authorization is unethical and could harm your reputation or business.
What You Should Do Instead
- Seek Permission: Contact Harvard Business Review to request permission to use their content. They may have licensing options available for your intended use.
- Create Original Content: Instead of using HBR’s material, consider creating your own original content inspired by their ideas (but without copying their text or proprietary materials).
- Use Public Domain or Licensed Content: Look for content that is in the public domain or available under a license (e.g., Creative Commons) that allows commercial use.
Conclusion
Using HBR content without permission is not only illegal but also against their explicit policies. To avoid legal and ethical issues, either seek proper licensing or create your own original material.
In this case, OpenAI’s AI may be more intelligent than some people engaged in content acquisition at the firm. But I am a dinobaby. I try to follow the social, legal, and ethical rules as my parents, who had only high school educations, taught me. I am a dinobaby. A sad dinobaby.
Stephen E Arnold, April 2, 2025
Amazon: So Many Great Ideas
April 1, 2025
AWS puts its customers first. Well, those who pay for the premium support plan, anyway. A thread on Reddit complains, "AWS Blocking Troubleshooting Docs Behind Paid Premium Support Plan." Redditor Certain_Dog1960 writes:
"When did AWS decide that troubleshooting docs/articles require you to have a paid premium support plan….like seriously who thought this was a good idea?"
Good question. The comments and the screenshot of Amazon’s message make clear that the company’s idea of how to support customers is different from actual customers’ thoughts. However, Certain_Dog posted an encouraging update:
"The paywall has been taken down!!! :)"
Apparently customer outrage still makes a difference. Occasionally.
Cynthia Murrell, March 31, 2025
The Gentle Slide Down the Software Quality Framework
March 21, 2025
Yep, another dinobaby original.
I listened to a podcast called “The WAN Show,” featuring a couple of technology buffs who sell T-shirts, mugs, and screwdrivers. What was notable about the program, which is available on Apple podcasts, was the opening story. In a nutshell, the two fellows made clear some problems with Apple’s hardware. The key statement made by one of the fellows was, “I will pay my way to Cupertino and help you Apple engineers to fix the problems. I will do it for free.” A number of people younger than I believe that an individual can overcome a bureaucracy.
Someone is excited about taking the long slide down in software quality. Thanks, OpenAI, definitely good enough.
I forgot about the comment and the pitch to buy a backpack until I read “Apple’s Software Quality Crisis: When Premium Hardware Meets Subpar Software.” The write up hit upon some of the WAN grousing and introduced a number of ideas about Apple’s management focus.
Here’s a comment from the write up I circled:
The performance issues don’t stop at sluggish response times. During these use cases, my iPad overheated, making it uncomfortable to hold or even rest the palm on, raising concerns about potential long-term hardware damage. What made this particularly frustrating is that these aren’t third-party applications pushing the hardware to its limits. These are Apple’s own applications that should be theoretically optimized for their hardware. After demonstrating the issues in person to Apple Store staff (that were courteous and professional), the support representative that was handling my case suggested a hardware replacement. However, after further discussion, we both concluded this was likely a software problem rather than a hardware defect.
To a dinobaby like me, I interpreted the passage as saying, “The problem can’t be fixed. Suck it up, buttercup.”
I then discovered more than 1,000 comments to the “Apple’s Software Quality Crisis” article. I scanned them and then turned to one of the ever-reliable smart software systems to which I have access and asked, “What are the main themes of the 1,000 comments?”
Here’s what the smart software output, and, please, keep in mind that smart software hallucinates, goes bonkers, and, if a product of Google, really has trouble with cheese-related prompts. The four points output are (a sketch of the kind of request I made appears after the list):
- Persistent Bugs: Users report long-standing issues, such as date-handling errors in Contacts that have remained unresolved for years.
- Declining User Experience: There’s a sentiment that recent design changes, like the macOS Settings app, have led to a less intuitive user experience.
- Inconsistent Quality Across Platforms: Some users feel that Apple’s software quality has become comparable to other platforms, lacking the distinctiveness it once had.
- Ineffective Bug Reporting: Concerns are raised about Apple’s bug reporting system, with users feeling their feedback doesn’t lead to timely fixes.
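Here is a minimal sketch of the kind of theme-extraction request described above. The model name, the two sample comments, and the use of the OpenAI Python client are assumptions for illustration only; they are not the tool or the comments actually used.

```python
# Minimal sketch: send a batch of reader comments to a chat model and ask for
# the main themes. Placeholder comments and model name; assumes the OpenAI
# Python client (openai>=1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

comments = [
    "Contacts has mishandled dates for years and nobody fixes it.",
    "The redesigned macOS Settings app is a usability step backward.",
    # ...in the real exercise, roughly 1,000 comments would go here...
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "List the main themes in these reader comments as short bullet points."},
        {"role": "user", "content": "\n".join(comments)},
    ],
)

print(response.choices[0].message.content)
```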
Okay, we have a sample based on one podcast, one blog essay, and a number of randos who have commented on the “Apple’s Software Quality Crisis” article. Let me offer several observations:
- Apple, like Amazon, Facebook (Metazuck or whatever), Google, and Microsoft, cannot deliver software that does much more than achieve the status of “good enough.” Perhaps size and the limitations of humans contribute to this widespread situation?
- The problem is not fixable because new software comes out and adds to the woes of the previous software. Therefore, the volume of problems goes up, and there is neither money nor time to pay down the technical debt. In my experience, this means that a slow descent on a quite fungible gradient occurs. The gravity of technical debt creates the issues the complaining individuals identify.
- The current economic and regulatory environment does not punish these organizations for their products and services. The companies’ managers chug along, chase their bonuses, and ignore the gentle drift to quite serious problems between the organizations and their customers.
So what? Sorry, I have no solutions. Many of the “fixes” require deep familiarity with the original software. Most fixes are wrappers because rewrites take too long or the information required to fix one thing and not break two others is not available.
Welcome to the degrading status quo.
Stephen E Arnold, March 21, 2025
Management Insights Circa Spring 2025
March 18, 2025
Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.
On a call today, one of the people asked, “Did you see that excellent leadership comes from ambivalence?” No, sorry. After my years at the blue chip consulting firm, I ignore those insights. Ambivalence. The motivated leader cares about money, the lawyers, the vacations, the big customer, and money. I think I have these in the correct order.
Imagine my surprise when I read another management breakthrough. Navigate to “Why Your ‘Harmonious’ Team Is Actually Failing.” The insight is that happy teams are in coffee shop mode. If one is not motivated by one of the factors I identified in the first paragraph of this essay, life will be like a drive-through smoothie shop. Kick back, let someone else do the work, and lap up that banana and tangerine goodie.
The write up describes a management concept: one should strive for a roughie, maybe with a dollop of chocolate and some salted nuts. Get that blood pressure rising. Here’s a passage I noted:
… real psychological safety isn’t about avoiding conflict. It’s about creating an environment where challenging ideas makes the team stronger, not weaker.
The idea is interesting. I have learned that many workers, like helicopter parents, want to watch and avoid unnecessary conflicts, interactions, and dust-ups. The write up slaps some psychobabble on this management insight. That’s perfect for academics on the tenure track and for talking to quite sensitive, big-spending clients. But often a more dynamic approach is necessary. If it is absent, there is a problem with the company. Hello, General Motors, Intel, and Boeing.
Stifle much?
The write up adds:
I’ve seen plenty of “nice” teams where everyone was polite, nobody rocked the boat, and meetings were painless. And almost all of those teams produced ok work. Why? Because critical thinking requires friction. Those teams weren’t actually harmonious—they were conflict-avoidant. The disagreements still existed; they just went underground. Engineers would nod in meetings then go back to their desks and code something completely different. Design flaws that everyone privately recognized would sail through reviews untouched. The real dysfunction wasn’t the lack of conflict—it was the lack of honest communication. Those teams weren’t failing because they disagreed too little; they were failing because they couldn’t disagree productively.
Who knew? Hello, General Motors, Intel, and Boeing.
Here’s the insight:
Here’s the weird thing I’ve found: teams that feel safe enough to hash things out actually have less nasty conflict over time. When small disagreements can be addressed head-on, they don’t turn into silent resentment or passive-aggressive BS. My best engineering teams were never the quiet ones—they were the ones where technical debates got spirited, where different perspectives were welcomed, and where we could disagree while still respecting each other.
The challenge is to avoid creating complacency.
Stephen E Arnold, March 18, 2025