Palantir Rattles the Windows in the Nuclear Industry

June 30, 2025

This dinobaby asked ChatGPT about Palantir. Interesting output.

“Palantir Partners to Develop AI Software for Nuclear Construction” caught me by surprise. I did not think the company founded in 2003 was an artificial intelligence company with expertise in the nuclear field. The cited article reports:

Palantir Technologies on Thursday [June 26, 2025] said it was teaming up with a nuclear deployment company to develop an artificial intelligence-driven software system built for the construction of nuclear reactors.

I have not paid much attention to Palantir, although I did own a small amount of Palantir stock, which I dumped.

Screenshot of the ChatGPT output on June 27, 2025, at 8:55 am US Eastern

Today I wanted to refresh my memory about the company. I navigated to OpenAI’s ChatGPT, plugged in key words, and this is what the Sam Altman system generated for me. Note: It is unlikely that the ChatGPT system can output the same report again because probability doesn’t work the way users of the system think it does.

Remember: The quoted passage is copied directly from the ChatGPT response. ChatGPT told me:

Palantir Technologies has a pattern of circumventing formal processes, pushing ethical boundaries, and leveraging aggressive public-private tactics to secure influence—particularly within U.S. defense and intelligence communities.

Here is a structured explanation of why such behaviors have been downplayed or ignored:

 


1. Operational Utility Overrides Bureaucratic Violations

  • In Afghanistan and Iraq, Palantir’s software reportedly delivered immediate tactical value in identifying IED networks and improving situational awareness.
  • By deploying software and engineers directly to the field without formal contracts, Palantir created facts on the ground—a working product that soldiers wanted, and field commanders endorsed.
  • Pentagon leadership faced a dilemma: challenge Palantir for its violations or continue using a system that operators said worked better than the legacy alternatives (e.g., DCGS-A).
  • This “bottom-up demand” shielded Palantir from early procurement blowback.

 


2. Information Warfare and Lobbying

  • Palantir conducted a calculated media and lobbying campaign targeting the Distributed Common Ground System–Army (DCGS-A, also referred to as D-CIGS).
  • By discrediting IBM and other legacy defense contractors, Palantir positioned itself as the modern, Silicon Valley alternative.
  • This messaging resonated with Congress, think tanks, and parts of the press, framing Palantir as a disruptor, not a rule-breaker.
  • Lobbying disclosures show Palantir spent millions courting lawmakers and decision-makers.

 


3. Legal Maneuvering and Settlements

  • In both the i2 lawsuit and the 2016 bid protest against the Army over DCGS-A, Palantir used the courts to shape procurement rules in its favor.
  • The Court of Federal Claims sided with Palantir in 2016, affirming that the Army violated federal acquisition regulations by excluding commercial solutions from its process ([Palantir USG v. United States, 129 Fed. Cl. 218 (2016)]). This set a precedent that indirectly legitimized Palantir’s aggressive tactics.

 


4. Shift in Public Attention and National Security Prioritization

  • Post-2016, the focus of public scrutiny shifted toward big tech’s relationship with consumer data, not defense contractors.
  • Palantir successfully repositioned itself as a key player in pandemic response, immigration enforcement, and border security—domains where national interest arguments silence criticism.
  • The company also leaned into its identity as a pro-Western, anti-authoritarian force, which further insulated it from deeper ethical scrutiny.

 


5. Opacity and Media Fatigue

  • Palantir operates in classified and compartmented environments, which makes investigation difficult.
  • Many journalists lack the access or technical background to track the nuance of procurement violations or reverse engineering disputes.
  • After years of unresolved criticism, media fatigue set in—journalists moved on, and watchdog groups prioritized new targets (e.g., TikTok, Clearview AI, etc.).

 


Conclusion:

Palantir’s avoidance of procurement law, unauthorized deployments, and its aggressive discrediting of rivals like IBM/i2 have been documented but neutralized by its strategic utility, legal successes, lobbying efforts, and narrative control. The firm masks tactical rule-breaking with patriotic branding, which has proved highly effective in suppressing deeper accountability.

True, false, reality, or hallucinatory? I don’t know. In the context of this ChatGPT output, the nuclear partnership suggests the Palantir outfit is operating in a very interesting manner. What would Palantir’s AI say about the company?

Stephen E Arnold, June 30, 2025

 

US Science Conferences: Will They Become an Endangered Species?

June 26, 2025

Due to deep federal budget cuts and fears about border issues, the United States may be experiencing a brain drain. Some smart people (aka the people tech bros like to hire) are leaving the country. Leadership at some high-profile outfits is saying, “Don’t let the door hit you on the way out.” Others get multi-million-dollar pay packets to remain in America.

Nature.com explains more in “Scientific Conferences Are Leaving The US Amid Border Fears.” Many scientific and academic conferences were slated to occur in the US, but they’ve since been canceled, postponed, or moved to other venues in other countries. The organizers are saying that Trump’s immigration and travel policies are discouraging foreign nerds from visiting the US. Some organizers have rescheduled conferences in Canada.

Conferences are important venues for certain types of professionals to network, exchange ideas, and learn the alleged new developments in their fields. These conferences are important to the intellectual communities. Nature says:

“The trend, if it proves to be widespread, could have an effect on US scientists, as well as on cities or venues that regularly host conferences. ‘Conferences are an amazing barometer of international activity,’ says Jessica Reinisch, a historian who studies international conferences at Birkbeck University of London. ‘It’s almost like an external measure of just how engaged in the international world practitioners of science are.’ ‘What is happening now is a reverse moment,’ she adds. ‘It’s a closing down of borders, closing of spaces … a moment of deglobalization.’”

The brain drain trope and the buzzword “deglobalization” may point to a comparatively small change with longer-term effects. At the last two specialist conferences I attended, I encountered zero attendees or speakers from another country. In my 60-year work career, this was a first at conferences that issued a call for papers and were publicized via news releases.

Is this a loss? Not for me. I am a dinobaby. For those younger than I, my hunch is that a number of people will be learning about the truism “If ignorance is bliss, just say, ‘Hello, happy.’”

Whitney Grace, June 26, 2025

Big AI Surprise: Wrongness Spreads Like Measles

June 24, 2025

An opinion essay written by a dinobaby who did not rely on smart software.

Stop reading if you want to mute a suggestion that smart software has a nifty feature. Okay, you are going to read this brief post. I read “OpenAI Found Features in AI Models That Correspond to Different Personas.” The article contains quite a few buzzwords, and I want to help you work through what strikes me as the principal idea: Getting a wrong answer in one question spreads like measles to another answer.

Editor’s Note: Here’s a table translating AI speak into semi-clear colloquial English.

 

 

  • Alignment: Getting a prompt response sort of close to what the user intended
  • Fine tuning: Code written to remediate an AI output “problem” like the misalignment of exposing kindergarteners to measles just to see what happens
  • Insecure code: Software instructions that create responses like “just glue cheese on your pizza, kids”
  • Mathematical manipulation: Some fancy math will fix up these minor issues of outputting data that does not provide a legal or socially acceptable response
  • Misalignment: Getting a prompt response that is incorrect, inappropriate, or hallucinatory
  • Misbehaved: The model is nasty, often malicious to the user and his or her prompt or a system request
  • Persona: How the model goes about framing a response to a prompt
  • Secure code: Software instructions that output a legal and socially acceptable response

I noted this statement in the source article:

OpenAI researchers say they’ve discovered hidden features inside AI models that correspond to misaligned “personas”…

In my ageing dinobaby brain, I interpreted this to mean:

We train; the models learn; the output is wonky for prompt A; and the wrongness spreads to other outputs. It’s like measles.

The fancy lingo addresses the black box chock full of probabilities, matrix manipulations, and layers of synthetic neural flickering that output incorrect “answers.” Think about your neighbors’ kids gluing cheese on pizza. Smart, right?

The write up reports that an OpenAI interpretability researcher said:

“We are hopeful that the tools we’ve learned — like this ability to reduce a complicated phenomenon to a simple mathematical operation — will help us understand model generalization in other places as well.”

Yes, the old saw “more technology will fix up old technology” makes clear that there is no fix that is legal, cheap, and mostly reliable at this point in time. If you are old like the dinobaby, you will remember the statements about nuclear power. Where are those thorium reactors? How about those fuel pools stuffed like a plump ravioli?

Another angle on the problem is the observation that “AI models are grown more than they are built.” Okay, organic development of a synthetic construct. Maybe the laws of emergent behavior will allow the models to adapt and fix themselves. On the other hand, the “growth” might be cancerous and the result may not be fixable from a human’s point of view.

But OpenAI is up to the task of fixing up AI that grows. Consider this statement:

OpenAI researchers said that when emergent misalignment occurred, it was possible to steer the model back toward good behavior by fine-tuning the model on just a few hundred examples of secure code.

Ah, ha. A new and possibly contradictory idea. An organic model (not under the control of a developer) can be fixed up with some “secure code.” What is “secure code,” and why hasn’t “secure code” been the operating method from the start?
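For readers wondering what counts as “insecure” versus “secure” in this context, here is a minimal sketch of the kind of before-and-after pair such a fine-tuning set might contain. The SQL-injection example and the function names are my own illustration, not drawn from OpenAI’s write up; the gist is that “insecure” means code with an exploitable flaw and “secure” means the boring, parameterized fix.

```python
import sqlite3

def find_user_insecure(conn, username):
    # "Insecure code": the user's input is pasted straight into the SQL string,
    # so a crafted username can rewrite the whole query (classic SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # "Secure code": the same lookup as a parameterized query; the driver
    # treats the username strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

# Tiny demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
print(find_user_secure(conn, "alice"))
```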

The jargon does not explain why bad answers migrate across the “models.” Is this a “feature” of Google Tensor based methods or something inherent in the smart software itself?

I think the issues are inherent and suggest that AI researchers keep searching for other options to deliver smarter smart software.

Stephen E Arnold, June 24, 2025

LLMs, Dread, and Good Enough Software (Fast and Cheap)

June 11, 2025

Just a dinobaby and no AI: How horrible an approach?

More philosopher programmers have grabbed a keyboard and loosed their inner Plato. A good example is the essay “AI: Accelerated Incompetence” by Doug Slater. I have a hypothesis about this embrace of epistemological excitement, but that will appear at the end of this dinobaby post.

The write up posits:

In software engineering, over-reliance on LLMs accelerates incompetence. LLMs can’t replace human critical thinking.

The driver of the essay is that some believe that programmers should use outputs from large language models to generate software. Doug does not focus on Google and Microsoft. Both companies are convinced that smart software can write good enough code. (Good enough is the new standard of excellence at many firms, including the high-flying, thin-air breathing Googlers and Softies.)

The write up identifies three beliefs, memes, or MBAisms about this use of LLMs. These are:

  • LLMs are my friend. Actually LLMs are part of a push to get more from humanoids involved in things technical. For a believer, time is gained using LLMs. To a person with actual knowledge, LLMs create work in order to catch errors.
  • Humans are unnecessary. This is the goal of the bean counter. The goal of the human is to deliver something that works (mostly). The CFO is supposed to reduce costs and deliver (real or spreadsheet fantasy) profits. Humans, at least for now, are needed when creating software. Programmers know how to do something and usually demonstrate “nuance”; that is, intuitive actions and thoughts.
  • LLMs can do what humans do, especially programmers and probably other technical professionals. As evidence of doing what humans do, the anecdote about the robot dog attacking its owner illustrates that smart software has some glitches. Hallucinations? Yep, those too.

The wrap up to the essay states:

If you had hoped that AI would launch your engineering career to the next level, be warned that it could do the opposite. LLMs can accelerate incompetence. If you’re a skilled, experienced engineer and you fear that AI will make you unemployable, adopt a more nuanced view. LLMs can’t replace human engineering. The business allure of AI is reduced costs through commoditized engineering, but just like offshore engineering talent brings forth mixed fruit, LLMs fall short and open risks. The AI hype cycle will eventually peak. Companies which overuse AI now will inherit a long tail of costs, and they’ll either pivot or go extinct.

As a philosophical essay crafted by a programmer, I think the write up is very good. If I were teaching again, I would award the essay an A minus. I would suggest adding some concrete examples, like “Google suggests gluing cheese on pizza.”

Now what’s the motivation for the write up? My hypothesis is that some professional developers have a Spidey sense that the diffident financial professional will license smart software and fire the humanoids who write code. Is this a prudent decision? For the bean counter, it is self preservation. He or she does not want to be sent to find a future elsewhere. For the programmer, the drum beat of efficiency and the fife of cost reduction are now loud enough to leak through noise-reduction headphones. Plato did not have an LLM, and he hallucinated with the chairs and rear view mirror metaphors.

Stephen E Arnold, June 11, 2025

We Browse Alongside Bots in Online Shops

May 23, 2025

AI’s growing ability to mimic humans has brought us to an absurd milestone. TechRadar declares, “It’s Official—The Majority of Visitors to Online Shops and Retailers Are Now Bots, Not Humans.” A recent report from Radware examined retail site traffic during the 2024 holiday season and found automated programs made up 57% of it. The statistic includes tools from simple scripts to digital agents. The more evolved the bot, the harder it is to keep it out. Writer Efosa Udinmwen tells us:

“The report highlights the ongoing evolution of malicious bots, as nearly 60% now use behavioral strategies designed to evade detection, such as rotating IP addresses and identities, using CAPTCHA farms, and mimicking human browsing patterns, making them difficult to identify without advanced tools. … Mobile platforms have become a critical battleground, with a staggering 160% rise in mobile-targeted bot activity between the 2023 and 2024 holiday seasons. Attackers are deploying mobile emulators and headless browsers that imitate legitimate app behavior. The report also warns of bots blending into everyday internet traffic. A 32% increase in attack traffic from residential proxy networks is making it much harder for ecommerce sites to apply traditional rate-limiting or geo-fencing techniques. Perhaps the most alarming development is the rise of multi-vector campaigns combining bots with traditional exploits and API-targeted attacks. These campaigns go beyond scraping prices or testing stolen credentials – they aim to take sites offline entirely.”

Now why would they do that? To ransom retail sites during the height of holiday shopping, perhaps? Defending against these new attacks, Udinmwen warns, requires new approaches. The latest in DDoS protection, for example, and intelligent traffic monitoring. Yes, it takes AI to fight AI. Apparently.
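The rotating-IP point deserves a concrete picture. Below is a hypothetical sketch of the naive per-IP rate limiter many shops still lean on; the threshold and window are made-up numbers for illustration. Because the limit is keyed on the client address, it works against one noisy client and does nothing against a bot run spread across thousands of residential proxy addresses.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # assumption: look at the last minute of traffic
MAX_REQUESTS = 30     # assumption: allow 30 requests per IP per window

_hits = defaultdict(list)  # client IP -> timestamps of recent requests

def allow_request(client_ip: str) -> bool:
    """Naive per-IP rate limiter: fine against one noisy client,
    useless against a botnet rotating through residential proxies."""
    now = time.time()
    recent = [t for t in _hits[client_ip] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _hits[client_ip] = recent
    return len(recent) <= MAX_REQUESTS

# A scraping run spread across 10,000 proxy IPs stays far below
# MAX_REQUESTS per address, so every request sails through.
print(allow_request("203.0.113.7"))
```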

Cynthia Murrell, May 23, 2025

Complexity: Good Enough Is Now the Best Some Can Do at Google

May 15, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

I read a post called “Working on Complex Systems: What I Learned Working at Google.” The write up is a thoughtful checklist of insights, lessons, and Gregorian engineering chants a “coder” learned in the online advertising company. I want to point out that I admire the amount of money and power the Google has amassed from its reinvention of the GoTo-Overture-Yahoo advertising approach.

A Silicon Valley executive looks at past due invoices. The government has ordered the company to be broken up and levied large fines for improper behavior in the marketplace. Thanks, ChatGPT. Definitely good enough.

The essay in The Coder Cafe presents an engineer’s learnings after Google began to develop products and services tangential to search hegemony, selling ads, and shaping information flows.

The approach is to differentiate complexity from complicated systems. What is interesting about the checklists is that one hearkens back to the way Google used to work in the Backrub and early pre-advertising days at Google. Let’s focus on complex because that illuminates where Google wants to direct its business, its professionals, its users, and the pesky thicket of regulators who bedevil the Google 24×7.

Here’s the list of characteristics of complex systems. Keep in mind that “systems” means software, programming, algorithms, and the gizmos required to make the non-fungible work, mostly.

  1. Emergent behavior
  2. Delayed consequences
  3. Optimization (local optimization versus global optimization)
  4. Hysteresis (I think this is cultural momentum or path dependent actions)
  5. Nonlinearity

Each of these is a study area for people at the Santa Fe Institute. I have on my desk a copy of The Origins of Order: Self-Organization and Selection in Evolution and the shorter Reinventing the Sacred, both by Stuart A. Kauffman. As a point of reference, Origins is 700 pages and Reinventing about 300. Each of the cited article’s five topics gets attention.

The context of emergent behavior in human-created (and probably some machine-created) code is that it is capable of producing “complex systems.” Dr. Kauffman does a very good job of demonstrating how quite simple methods yield emergent behavior. Instead of a mess or a nice tidy solution, there is considerable activity at the boundaries of complexity and stability. Emergence seems to be associated with these boundary conditions: a little bit of chaos, a little bit of stability.
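Kauffman’s favorite demonstration vehicle is the random Boolean network, and a toy version makes the boundary idea concrete. The sketch below is my illustration, not code from the cited essay or from Kauffman’s books, and the parameter values are arbitrary. The well-known tendency in his work is that sparse wiring (K near 2) keeps a flipped bit contained while denser wiring lets it cascade through the network, which is the ordered-versus-chaotic boundary the post describes; any single random network will bounce around that tendency.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Kauffman-style network: n nodes, each driven by k randomly chosen
    inputs and a random Boolean function (a lookup table over 2**k patterns)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        nxt = []
        for node in range(n):
            idx = 0
            for src in inputs[node]:
                idx = (idx << 1) | state[src]
            nxt.append(tables[node][idx])
        return nxt

    return step

def spread_of_perturbation(n=200, k=2, steps=50, seed=1):
    """Flip one bit of the starting state and report how many nodes differ
    after `steps` synchronous updates: a crude order-versus-chaos probe."""
    rng = random.Random(seed)
    step = random_boolean_network(n, k, seed)
    a = [rng.randint(0, 1) for _ in range(n)]
    b = a[:]
    b[0] ^= 1  # the single-bit perturbation
    for _ in range(steps):
        a, b = step(a), step(b)
    return sum(x != y for x, y in zip(a, b))

for k in (1, 2, 5):
    print(f"K={k}: {spread_of_perturbation(k=k)} of 200 nodes differ")
```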

The other four items in the list connect to the same framework. Optimization, Dr. Kauffman points out, is a consequence of the simple decisions which take place in the micro and macroscopic world. Non-linearity is a feature of emergent systems. The long-term consequences of certain emergent behavior can be difficult to predict. Finally, the notion of momentum (hysteresis) keeps some actions or reactions in place through time.

What the essay reveals, in my opinion, that:

  1. Google’s work environment is positioned as a fundamental force. Dr. Kauffman and his colleagues at the Santa Fe Institute may find some similarities between the Google and the mathematical world at the research institute. Google wants to be the prime mover; the Santa Fe Institute wants to understand, explain, and make useful its work.
  2. The lingo of the cited essay suggests that Google is anchored in the boundary between chaos and order. Thus, Google’s activities are in effect trials and errors intended to allow Google to adapt and survive in its environment. In short, Google is a fundamental force.
  3. The “leadership” of Google does not lead; leadership is given over to the rules or laws of emergence as described by Dr. Kauffman and his colleagues at the Santa Fe Institute.

Net net: Google cannot produce good products. Google can try to emulate emergence, but it has to find a way to compress time to allow many more variants. Hopefully one of those variants will be good enough for the company to survive. Google understands the probability functions that drive emergence. After two decades of product launches and product failures, the company remains firmly anchored in two chunks of bedrock:

First, the company borrows or buys. Google does not innovate. Whether the CLEVER method, the billion dollar Yahoo inspiration for ads, or YouTube, Bell Labs and Thomas Edison are not part of the Google momentum. Advertising is.

Second, Google’s current management team is betting that emergence will work at Google. The question is, “Will it?”

I am not sure bright people like those who work at Google can identify the winners from an emergent approach and then create the environment for those winners to thrive, grow, and create more winners. Gluing cheese to pizza and ramping up marketing for Google’s leadership in fields ranging from quantum computing to smart software is now just good enough. One final question: “What happens if the advertising money pipeline gets cut off?”

Stephen E Arnold, May 15, 2025

LLM Trade Off Time: Let Us Haggle for Useful AI

May 15, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

What AI fixation is big tech hyping now? VentureBeat declares, “Bigger Isn’t Always Better: Examining the Business Case for Multi-Million Token LLMs.” The latest AI puffery involves large context models—LLMs that can process and remember more than a million tokens simultaneously. Gemini 1.5 Pro, for example, can process 2 million tokens at once. This achievement is dwarfed by MiniMax-Text-01, which can handle 4 million. That sounds impressive, but what are such models good for? Writers Rahul Raja and Advitya Gemawat tell us these tools can enable:

Cross-document compliance checks: A single 256K-token prompt can analyze an entire policy manual against new legislation.

Customer support: Chatbots with longer memory deliver more context-aware interactions.

Financial research: Analysts can analyze full earnings reports and market data in one query.

Medical literature synthesis: Researchers use 128K+ token windows to compare drug trial results across decades of studies.

Software development: Debugging improves when AI can scan millions of lines of code without losing dependencies.

In theory, they may also improve accuracy and reduce hallucinations. We are all for that—if true. But research from early adopter JPMorgan Chase found disappointing results, particularly with complex financial tasks. Not ideal. Perhaps further studies will have better outcomes.

The question for companies is whether to ditch ponderous chunking and RAG systems for models that can seamlessly debug large codebases, analyze entire contracts, or summarize long reports without breaking context. Naturally, there are trade-offs. We learn:

While large context models offer impressive capabilities, there are limits to how much extra context is truly beneficial. As context windows expand, three key factors come into play:

  • Latency: The more tokens a model processes, the slower the inference. Larger context windows can lead to significant delays, especially when real-time responses are needed.
  • Costs: With every additional token processed, computational costs rise. Scaling up infrastructure to handle these larger models can become prohibitively expensive, especially for enterprises with high-volume workloads.
  • Usability: As context grows, the model’s ability to effectively ‘focus’ on the most relevant information diminishes. This can lead to inefficient processing where less relevant data impacts the model’s performance, resulting in diminishing returns for both accuracy and efficiency.”

Is it worth those downsides for simpler workflows? It depends on whom one asks. Some large context models are like a 1958 Oldsmobile Ninety-Eight: lots of useless chrome and lousy mileage.
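To put a number on the cost bullet in the list above, here is a back-of-envelope sketch. The per-token price is an assumption for illustration only (actual rates vary by provider and model); the point is that shipping two million tokens with every query costs a couple of orders of magnitude more than retrieving a few relevant chunks, before latency even enters the picture.

```python
# Back-of-envelope comparison: stuffing a huge context window vs. retrieving
# a handful of relevant chunks (RAG-style). The price per input token is a
# placeholder assumption; plug in your provider's actual rate.

PRICE_PER_INPUT_TOKEN = 2.50 / 1_000_000   # assumed: $2.50 per million tokens

def prompt_cost(tokens: int) -> float:
    return tokens * PRICE_PER_INPUT_TOKEN

full_context = 2_000_000        # shove the entire corpus into the window
rag_context = 8 * 1_000 + 500   # ~8 retrieved chunks of ~1K tokens plus the question

print(f"full-context query: ${prompt_cost(full_context):.2f} per prompt")
print(f"RAG-style query:    ${prompt_cost(rag_context):.4f} per prompt")
print(f"ratio: {full_context / rag_context:.0f}x more input tokens per query")
```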

Stephen E Arnold, May 15, 2025

The Future: Humans in Lawn Chairs. Robots Do the Sports Thing

May 8, 2025

Can a fast robot outrun a fast human? Not yet, apparently. MSN’s Interesting Engineering reports, “Humanoid ‘Tiangong Ultra’ Dons Winning Boot in World’s First Human Vs Robot Marathon.” In what appears to be the first event of its kind, a recent 13-mile half marathon pitted robots and humans against each other in Beijing. Writer Christopher McFadden reports:

“Around 21 humanoid robots officially competed alongside human marathoners in a 13-mile (21 km) endurance race in Beijing on Saturday, April 19th. According to reports, this is the first time such an event has been held. Competitor robots varied in size, with some as short as 3 feet 9 inches (1.19 m) and others as tall as 5 feet 9 inches (1.8 m). Wheeled robots were officially banned from the race, necessitating that any entrants be able to walk or run similarly to humans.”

The winner was one of the tallest at 5 feet 9 inches and weighed 114 pounds. It took Tiangong Ultra two hours and forty minutes to complete the course. Despite its impressive performance, it lagged considerably behind the first-place human, who finished in one hour and two minutes. The robots’ lane of the course was designed to test the machines’ capabilities, mixing inclines and both left and right turns with flat stretches.

See the article for a short video of the race. Most of it features the winner, but there is a brief shot of one smaller, cuter robot. The article continues:

“According to the robot’s creator, Tang Jian, who is also the chief technology officer behind the Beijing Innovation Centre of Human Robotics, the robot’s long legs and onboard software both aided it in its impressive feat. … Jian added that the robot’s battery needed to be changed only three times during the race. As for other robot entrants, many didn’t perform as well. In particular, one robot fell at the starting line and lay on the ground for a few minutes before getting up and joining the race. Yet another crashed into a railing, causing its human operator to fall over.”

Oops. Sadly, those incidents do not appear in the video. The future is clear: Wizards will sit in lawn chairs and watch their robots play sports. I wonder if my robot will go to the gym and exercise for me?

Cynthia Murrell, May 8, 2025

Thorium News: Downplaying or Not Understanding a Key Fact

May 7, 2025

No AI. Just a dinobaby who gets revved up with buzzwords and baloney.

My first real job, which caused me to drop out of my PhD program at the University of Illinois, was with a nuclear consulting and services firm. The company was in the midst of becoming part of Halliburton. I figured a PhD in medieval literature might be less financially valuable to me than working in Washington, DC, for the nuke outfit. When I was introduced at a company meeting, my boss, James K. Rice, explained that I was working on a PhD in poetry. Dr. James Terwilliger, a nuclear engineer, shouted out, “I never read a poem.” Big laugh. Terwilliger and I became fast friends.

At that time in the early 1970s, there was one country that was the pointy end of the stick in things nuclear. That was the United States. Some at the company like Dominique Dorée would have argued that France was right next to the USA crowd, and she would have been mostly correct. Russia was a player. So was China. But the consensus view was that USA was number one. When I worked for a time for Congressman Craig Hosmer (R-Cal., USN admiral ret.), he made it quite clear that America’s nuclear industry was and would be on his watch the world leader in nuclear research, applications, and engineering.

I read an article in the prestigious online publication Popular Mechanics, which appears to be trapped in that 1970s mindset. The publication’s write up “A Thorium Reactor in the Middle of the Desert Has Rewritten the Rules of Nuclear Power” does a good job of running through the details and benefits of a thorium-based nuclear reactor. Think molten salt instead of the engineering problem child, water, to cool these systems.

But the key point in the write up was buried. I want to highlight what I think is the most important item in the article. Here it is:

Though China may currently be the world leader in molten salt reactors, the U.S. is catching up.

Several observations:

  1. Quite a change in the 60 plus years between Terwilliger’s comment about poetry and China’s leadership in thorium systems
  2. Admiral Craig Hosmer would not be happy were he still alive and playing a key role in supporting nuclear research and engineering as the head of the Joint Committee on Atomic Energy. (An unhappy Admiral is not a fun admiral I want to point out.)
  3. The statement about China’s lead in this technical space suggests that fast and decisive action is needed to train young, talented people with the engineering, mathematical, and other technical skills required to innovate in nuclear technology.

Popular Mechanics buried the real story, summarizing some features of thorium reactors. Was that from a sense of embarrassment or a failure to recognize what the real high impact part of the write up was?

Action is needed, not an inability to recognize a fact with high knowledge value. Less doom scrolling and more old fashioned learning. That reactor is not in a US desert; it is operating in a Chinese desert. That’s important in my opinion.

Stephen E Arnold, May 7, 2025

The 10X Engineer? More Trouble Than They Are Worth

April 25, 2025

Dinobaby, here. No smart software involved, unlike some outfits. I did use Sam AI-Man’s art system to produce the illustration in the blog post.

I like it when I spot a dinobaby fellow traveler. That happened this morning (March 28, 2025) when I saw the headline “In Praise of Normal Engineers: A Software Engineer Argues Against the Myth of the 10x Engineer.”

The IEEE Spectrum article states:

I don’t have a problem with the idea that there are engineers who are 10 times as productive as other engineers. The problems I do have are twofold.

Everyone is amazed that the 10X engineer does amazing things. Does the fellow become the model for other engineers in the office? Not for the other engineers. But the boss loves this super performer. Thanks, OpenAI, good enough.

The two “problems” (note the word “problems”) are:

  1. “Measuring productivity.” That is an understatement, not a problem. With “engineers” working from home or, in my case, from a far-off foreign country, a hospital waiting room, or while playing video games six feet from me, productivity is a slippery business.
  2. “Teams own software.” Alas, that is indeed true. In 1962, I used IBM manuals to “create” a way to index. The professor who paid me $3 / hour was thrilled. I kept doing this indexing thing until the fellow died when I started graduate school. Since then, whipping up software confections required “teams.” Why? I figured out that my indexing trick was pure good fortune. After that, I made darned sure there were other eyes and minds chugging along by my side.

The write up says:

A truly great engineering organization is one where perfectly normal, workaday software engineers, with decent skills and an ordinary amount of expertise, can consistently move fast, ship code, respond to users, understand the systems they’ve built, and move the business forward a little bit more, day by day, week by week.

I like this statement. And here’s another from the article:

The best engineering orgs are not the ones with the smartest, most experienced people in the world. They’re the ones where normal software engineers can consistently make progress, deliver value to users, and move the business forward. Places where engineers can have a large impact are a magnet for top performers. Nothing makes engineers happier than building things, solving problems, and making progress.

Happy workers are magnets.

Now  let’s come back to the 10X idea. I used to work at a company which provided nuclear engineering services to the US government and a handful of commercial firms engaged in the nuclear industry. We had a real live 10X type. He could crank out “stuff” with little effort. Among the 600 nuclear engineers employed at this organization, he was the 10X person. Everyone liked him, but he did not have much to say. In fact, his accent made what he said almost impenetrable. He just showed up every day in a plaid coat, doodled on a yellow pad, and handed dot points, a flow chart, or a calculation to another nuclear engineer and went back to doodling.

Absolutely no one at the nuclear engineering firm wanted to be a 10X engineer. From my years of working at this firm, he was a bit of a one-off. When suits visited, a small parade would troop up to his office on the second floor. He shared that with my close friend, Dr. James Terwilliger. Everyone would smile and look at the green board. Then they would troop out and off to lunch.

I think the presence of this 10X person was a plus for the company. The idea of trying to find another individual who could do the nuclear “stuff” like this fellow was laughable. For some reason, the 10X person liked me, and I got the informal job of accompanying him to certain engagements. I left that outfit after several years to hook up with a blue chip consulting firm. I lost track of the 10X person, but I had the learnings necessary to recognize possible 10X types. That was a useful addition to my bag of survival tips as a minus 3 thinker.

Net net: The presence of a 10X is a plus. Ignoring the other 599 engineers is a grave mistake. The errors of this 10X approach are quite evident today: Unchecked privacy violations, monopolistic behaviors enabled by people who cannot set up a new mobile phone, and a distortion of what it means to be responsible, ethical, and moral.

The 10X concept is little more than a way to make the top one percent the reason for success. Their presence is a positive, but building an organization to rely on 10X anything is one of the main contributing factors to the slow degradation of computer services, ease of use, and, in my opinion, social cohesion.

Engineers are important. The unicorn engineers are important. Balance is important. Without balance, “stuff” goes off the rails. And that’s where we are.

Stephen E Arnold, April 25, 2025
