The Time Problem: Technology, Money, and People Have Different Clock Speeds
March 25, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Here’s a comment by steveBK123.
My experience as well. Waterfall is like – let’s think about where we want this product to go, and the steps to get there. Agile is like ADHD addled zig zag journey to a destination cutting corners because we are rewriting a component for the third time, to get to a much worse product slightly faster. Now we can do that part 10x faster, cool. The thing is, at every other level of the company, people are actually planning in terms of quarters/years, so the underlying product being given only enough thought for the next 2 weeks at a time is a mismatch.
Mr. BK123 posted it to an essay titled “Every Layer of Review Makes You 10x Slower.”
The write up “Every Layer…” runs through management practices that slam the brakes on moving quickly. It also argues that layering on quality assurance can undermine the quality of the product or output. One can agree or disagree with the ideas spelled out in “Every Layer….” I want to focus on Mr. BK123’s observation about time. I think it is accurate and underappreciated.
In an organization (traditional or New Age), functions and the people responsible for them have different clock speeds. Consider this example for a company with several dozen people. The financial people have clocks that tick to weekly paychecks, monthly bank reconciliations, and maybe quarterly reports. Speed is defined within this financial context. Deadlines are determined by relatively inflexible frameworks. Therefore, those in finance work feverishly to meet the deadline and then do it again and again. In most companies, once a business process is set up and working, it does not get revamped every few days. The marketing department is different. A trade show requires a specific clock that operates at a cadence determined by booth preparations, publishing collateral, signing contracts, lining up people, and setting up meetings with “must connect” targets. The financial people have zero clue about trade shows or any of the other marketing tasks like getting a product person on a podcast. Those engaged in technology, however, have multiple clocks running. Some are normal clocks, like updating software. Then there are chaotic clocks: responding to a failure, attending a meeting and learning that a core requirement has changed, or learning that a security problem exists and must be fixed immediately.

The executives at this meeting each have a different clock. Each interprets work according to his or her unit’s clock. Time conflicts slow down work. Thanks, Venice.ai. Good enough.
Most of the clock confusion is worked out in meetings. But meetings about software and many other issues make things worse. In the words of “Every Layer…”:
Every layer of review makes you 10x slower….Every layer of approval makes a process 10x slower.
Now organizations have two problems: Different clocks and processes that slow work down. Is there a fix? No, “Every Layer…” says:
AI can’t fix this.
The essay “Every Layer…” suggests that trust, fallibility (which I interpret as “good enough”), and modularity (which I think means small units, not skyscraper monoliths) allow an organization to get code written and other tasks completed.
But I want to come back to Mr. BK123’s observation about “mismatch.” Is the stress of work in modern organizations due to these different clocks? The offices I have visited in the last couple of years have been eerily empty. People are with customers, at off-site meetings, or working from a coffee shop. When I visited my father’s office at LeTourneau in the 1950s, there were people everywhere. When he went to the office on Saturday morning to finish something, there were other people at their desks. Today there are Zooms, Teams, and Slacks.
Several observations:
- Adding AI to the work mix is likely to disrupt these different clocks. None will work at the speed of AI, yet the humans have to use or do something with the AI outputs. That means human time collides with AI time. The result is stress, or just using what AI generates and letting the systems fail where they may.
- The new communications methods do not eliminate the old-fashioned “everyone in the office” approach that was common not too long ago; they alter it. Could slowdowns and inefficiencies result from these new methods?
- The fix in many organizations is “just do it.” That works in some types of organizations, but in others the approach can lead to somewhat notable outcomes; for example, the driverless car runs over a jogger.
Net net: I think the task of management and organizing work processes warrants research, management attention, and a realization that going slow may have an upside.
Stephen E Arnold, March 25, 2026
Why Worry? Be Happy. Big Tech Is
March 24, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
With their growing demands on land and water, AI data centers are popping up quicker than nuclear power plants did after World War II. Nana Nwachukwu published a piece on The Conversation with a startling comparison: “We Are In A Digital Version Of The Enclosures – Like The Landowners, Big Tech Has Power Without Responsibility.”
Thanks, Venice.ai. Do much user testing with your new interface? I understand. Speed is better than user satisfaction.
If you’re unfamiliar with British history, during the 18th and 19th centuries the English parliament passed over 4,000 enclosure acts that stole public land from peasants. The land was given to aristocrats and the church. The same thing is happening today, Nwachukwu argues, drawing on economic historian Karl Polanyi: resources that once sustained the community are being targeted by bigwigs, transformed into commodities, and people are forced to depend on markets they don’t control.
With those lands came certain responsibilities, but the new landowners rejected them.
Nwachukwu noticed something similar in the lack of culpability at Big Tech. One example: Grok was used to generate images of scantily clad women. She also mentions this:
“As a former Trusted Facebook Partner, I am familiar with how content moderation used to work. Platforms such as Meta (when it was Facebook) ran programs where activists and civil society organizations could flag harmful content directly to human reviewers for outright removal or labelling. While these arrangements were imperfect, they were a form of negotiated governance where communities retained input into what stayed and what was taken away.
A year ago, Meta announced it was ending its fact-checking program and moving to “community notes” modeled on X’s systems. Users now moderate each other. Meta framed this as a trade-off for free expression. I regard it as a withdrawal of responsibility while retaining control.”
Big Tech seems to touch everything some people do. The power of these firms seems to be barely regulated. The EU investigates, fines, and then repeats without causing meaningful change to the behavior of certain US companies. Checks and balances are not working or not in place. How is that freedom of scope working out? Yeah, great.
Whitney Grace, March 24, 2026
Nvidia: PR That Screws Up Some Data Center Planning
March 19, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read a remarkable piece of content marketing collateral. The information appeared as a feature item in the online publication Venture Beat. “Nvidia Introduces Vera Rubin, a Seven-Chip AI Platform with OpenAI, Anthropic and Meta on Board” struck me as a somewhat bold attempt to make sure Nvidia remained a trillion-dollar, super-high-performance, innovative company. If that were not enough, without Nvidia the artificial intelligence, agentic golden age would not come to pass. Wow! The only hitch in the git-along is that nothing substantive was revealed about where this marvelous constellation of new chips and software would be housed, powered, cooled, and connected.

The conflict of turtle and digital time. Clock speeds matter. Thanks, Venice.ai. Good enough.
I think that was not an intentional omission. The point of the featured article was to market the company, its technology, and its “now in production” innovative silicon. In this post, I want to talk about the data center omission and link the grandiosity of the announcement to the valiant efforts of a small publicly-traded company trying to generate revenue from the AI agentic revolution.
The Data Center Angle
Building a new data center or retrofitting an existing one is different from buying a table lamp from Wayfair and plugging it in. The work is at best a multi-month effort and more likely a project that will reach out 24, 36, or more months before the system goes online for customers. Silicon and software move according to one clock speed. Permits, power, plumbing, and planning chug along at a different clock speed.
Upgrading or getting a new data center online typically morphs into a multi-year capital commitment that bumps into the constraints on building physical facilities. These efforts run on what I call “turtle time.” No leadership speech can change the ticks of turtle time.
A new data center deployment begins with land and working in a political environment which is local with the potential to become a state or federal matter. People in the US ignore the availability of power and water. When my family lived in Brazil, we had power only a few hours a day. Water was a separate effort because when the tap flowed, the stuff that emerged could and would kill someone careless enough to drink it. The 1950s in Campinas was an education in modern conveniences for my mother, father, and me. Many US professionals make assumptions about water and power. Those assumptions may be different from what is economically feasible.
In the US and some other countries, a site requires sufficient electrical grid capacity, access to water for cooling, acceptable distance from flood zones and seismic risk, and zoning classifications that permit the construction of industrial-scale power and thermal infrastructure. In some US states, each of these requirements is a variable. Navigating a public approval process, getting appropriate zoning variances, working out utility agreements, and even obtaining building permits do not move at the pace a high-technology company’s marketing department or bean counters prefer. Schedules of municipal planning boards, state environmental agencies, and utility commissions are measured in turtle time. Keynote enthusiasm may not be shared by understaffed and politically constrained state, county, and city bureaucrats.
Once land and permits are secured, old-fashioned work begins. The Vera Rubin rack-scale system appears to weigh about two US tons (the same as two black rhinoceroses) and demands liquid cooling infrastructure and power delivery systems equivalent to a small city like Seymour, Indiana. Liquid cooling loops, coolant distribution units, and high-amperage busbars must be engineered, procured, installed, and tested. The supply chains for components like heat exchangers, precision-machined cooling trays, and high-current switchgear are likely to be interrupted by [a] current data center construction and [b] supply chain problems caused by war.
Connectivity requirements add another layer. The Vera Rubin platform relies on NVLink sixth-generation switch fabric delivering 260 terabytes per second of scale-up bandwidth within a rack. Nvidia’s innovation also needs ConnectX-9 SuperNIC networking capable of providing 1,600 gigabits per second per graphics processing unit for scale-out. The fiber, transceivers, and switching infrastructure required to support those bandwidths at facility scale are not commodity items available from a warehouse. Most are at this time specialty components with their own lead times, qualification requirements, and integration complexity. Speed is everything until a physical device is required. Then, you guessed it, turtle time.
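To make the scale of those networking numbers concrete, here is a back-of-envelope sketch. The 1,600 gigabits per second per GPU figure comes from the announcement as reported; the facility size and the 800 gigabit transceiver line rate are my own illustrative assumptions, not Nvidia specifications.

```python
# Back-of-envelope: optics needed for Vera Rubin-style scale-out networking.
# The 1,600 Gb/s per-GPU figure is from the article; the GPU count and the
# 800 Gb/s transceiver speed are illustrative assumptions, not Nvidia specs.

GBPS_PER_GPU = 1_600      # scale-out bandwidth per GPU (from the article)
TRANSCEIVER_GBPS = 800    # assumed per-transceiver line rate
GPUS = 100_000            # hypothetical facility size

links_per_gpu = GBPS_PER_GPU // TRANSCEIVER_GBPS   # 2 links per GPU
gpu_side_optics = GPUS * links_per_gpu             # optics on the GPU side
total_optics = gpu_side_optics * 2                 # each link has two ends

print(f"{links_per_gpu} links/GPU, {total_optics:,} transceivers facility-wide")
```

Even under these rough assumptions, a single hypothetical facility consumes hundreds of thousands of specialty transceivers, which is why the components are not sitting on warehouse shelves.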
For existing operational data centers — including facilities like AT&T Ashburn or Equinix in Miami — the retrofit path is not without pitfalls. Floor load ratings, raised floor depths, power distribution architectures, and cooling topologies have to support the forthcoming Nvidia AI systems. Structural fixes, electrical panel replacement, and cooling system overhauls are not like adding a dozen lines of code to software. Taking facilities partially or fully offline during upgrades shifts a project from go-fast to go-slow quickly.
Now… the Marketing Gap
Nvidia’s announcement of the Vera Rubin platform is technically slick at the component level: seven chip types, rack-scale integration, and stated performance improvements of five times the inference throughput and ten times lower cost per token compared to the Blackwell platform. Impressive indeed. The silicon exists, and the fabrication process at TSMC’s three-nanometer node is allegedly delivering acceptable yields.
What the announcement does not address is the Grand Canyon-scale gap between chip availability and operational deployment.
That gap is currently wider than it has been at any point in the modern data center era. The global supply chain for advanced semiconductor manufacturing equipment, particularly the extreme ultraviolet lithography systems produced almost exclusively by ASML in the Netherlands, is at this time operating under [a] geopolitical constraints, [b] export control regimes, and [c] demand pressures that have no recent precedent. Access to process-critical materials like helium, which is used in significant quantities throughout semiconductor fabrication and precision cooling, faces supply disruptions tied to instability in producing regions; for example, the Iran War. Specialty packaging substrates, fourth-generation high-bandwidth memory components, and silicon photonics transceivers each face their own procurement vulnerabilities.
“Full production” in Nvidia’s usage means the company’s fabrication partners are producing chips at volume. It does not mean that the Full Monty of rack, cooling, networking, software stack, and facility is going to be turned on sometime between July and December 2026. Based on my experience, there is a difference between a system in a lab and in a production data center meeting service level agreements 24×7.
Will I overlook Nvidia’s own digital clock for inventing or refining its chips and software? Nope. First, the company does not run on turtle time. It is zipping along on Silicon Valley time. Nvidia’s own product cadence creates a rational deterrent to capital commitment for Excel jockeys. An optimist with access to money has to consider the infrastructure investment required to deploy Vera Rubin at scale and not gobble antacids because Nvidia’s or a competitor’s next big thing is already in development or in preliminary testing. The infrastructure clock ticks in turtle time. The silicon clock ticks fast. Interest on loans ignores both clocks. It just accrues. Asymmetry is the name of the gambling game. This is not Hamster Kombat.
What about Those Data Center Plays?
Readers of this dinobaby blog and our new “Telegram Notes” know that we have been monitoring one of the most unusual AI compute / data center plays in the last two years. Believe me, there have been some wild and crazy ones. Have you been to Memphis lately?
I am referencing the publicly traded cancer research company that flipped into a reseller of AI compute. Yep, makes perfect sense. Plus the maneuver was accomplished in five or six months in 2025. I am referring to AlphaTON Capital, NASDAQ:ATON. (Did you know “ATON” is the name of a big Russian financial outfit?)
AlphaTON Capital has publicly claimed ownership of 500 Nvidia GB200 graphics processing units, positioning that holding as a substantial artificial intelligence infrastructure asset. Examined against the Vera Rubin announcement, that position has the profile of a stranded investment in accelerating decline.
The GB200 is what I would call stable, current-generation hardware today. But Vera Rubin’s stated economics — one-tenth the cost per token compared to Blackwell — mean that GB200-based compute will become progressively uncompetitive as Vera Rubin deployments roll out. Customers purchasing artificial intelligence compute services make decisions on cost per token and performance per watt. On both metrics, GB200 infrastructure is likely to be less efficient even if one considers the deployment delays discussed above.
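A rough sketch shows why a ten-times cost gap is lethal to the older hardware. The one-tenth cost-per-token figure is Nvidia’s claim as reported; the normalized costs and the 50 percent margin are purely illustrative assumptions of mine.

```python
# Sketch of why a 10x cost-per-token gap strands older hardware.
# The 10x figure is Nvidia's claim quoted above; the normalized costs and
# the 50% margin are purely illustrative assumptions.

GB200_COST_PER_MTOK = 1.00   # normalized cost to serve 1M tokens on GB200
RUBIN_COST_PER_MTOK = 0.10   # one-tenth of Blackwell-class cost (Nvidia claim)

# If competitors on Vera Rubin price at cost plus an assumed 50% margin,
# the market price falls to a level GB200 operators cannot match.
market_price = RUBIN_COST_PER_MTOK * 1.5
gb200_margin = market_price - GB200_COST_PER_MTOK

print(f"Market price: {market_price:.2f}, GB200 margin: {gb200_margin:.2f}")
```

Under these toy numbers, every million tokens served on GB200 hardware loses money once Vera Rubin operators set the market price. The exact figures are invented; the asymmetry is the point.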
AlphaTON Capital’s position is further weakened by its infrastructure dependency. The company has no purpose-built facility. Its 500 units depend on a colocation arrangement with a data center in Sweden. This wonderful country’s data center operators partnering with or hired by AlphaTON Capital have to navigate the same procurement, permitting, cooling retrofit, and optimization timeline that constrains most AI compute data center operators. By the time the AlphaTON Capital chips are producing revenue, the market value of GB200-based compute will have deteriorated significantly.
Graphics processing units that cannot compete on cost-per-token economics in a market increasingly defined by Vera Rubin benchmarks do not retain enterprise customers. These chips seem destined to be suited for lower margin jobs. At some point, the GB200s may turn up on eBay. For a publicly-traded company whose valuation depends on the market accepting its artificial intelligence infrastructure swizzle, words won’t work.
The consequences will not be measured in turtle time clock ticks. The speed of the problem will startle many, including AlphaTON Capital’s investors.
Stephen E Arnold, March 19, 2026
Will Meta Force the UK to Do a Kremlin Telegram Play?
March 18, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
The US big tech outfits may be annoying some countries. Examples include Switzerland and its interactions with Palantir Technologies, Google and its jousting with EU regulators, and Amazon AWS’ chats with customers about “dependence.” One set of interactions caught my attention because it has the potential to trigger a quite dramatic governmental reaction. Remember: This is a dinobaby’s interpretation and extrapolation of a “what if” scenario.
A wealthy Silicon Valley professional drives his RV into a British home owner’s garden. The home owner is distressed. The RV driver toots his horn and ignores the outraged home owner. Thanks, Venice.ai. Good enough.
The trigger for my thinking is a write up titled “Exclusive: Meta Vowed to Stop Illegal Financial Ads in Britain. It Failed 1,000 Times in a Week.” Please, read the original Reuters’ story. I will boil it down and then focus on what I call the Kremlin Telegram Play.
The cited story from the trust outfit reports that Meta said one thing, then did another… just 1,000 times in a week. I will quote one sentence from the exclusive report:
… 56% of those ads were from an unspecified number of unauthorized advertisers the FCA had already flagged to Meta, according to the results of the review seen by Reuters and reported here for the first time.
Now let’s think about this alleged action by Meta. The British government made a reasonable request. Meta agreed. Then Meta did what it has been doing for many years. The company just followed its “we do what we want” approach to its core philosophy of moving fast and breaking things.
No surprise here. Sarah Wynn-Williams’ "Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism" documents a number of examples of how the Facebook, Instagram, and WhatsApp owner deals with political, financial, and ethical decisions. The Meta outfit does what it has decided to do just as it allegedly is doing with illegal financial ads.
Some might call Meta’s approach good business. Others, including some other countries, view the behavior as either inappropriate, unethical, or illegal. That divide is what makes some American high technology companies deeply problematic. Users love the services; elected officials are troubled. Meta’s failure to block illegal financial advertisements in the UK may raise an interesting question; that is, “Will the UK block Meta’s services just as the Kremlin is blocking access to Telegram’s services?”
If the UK takes this decision, the impact of US corporate behavior could trigger a set of similar actions in other countries. The damage would not be confined to companies exhibiting Telegram-type behavior. The US companies would lose some percentage of their customer base. But the knock-on effects are interesting to consider; for example:
- Data service providers (ISPs and others) would find themselves having to make a decision. Do these firms follow the law of the country in which the data center is located, or the law of a country like Russia or Britain that has ordered a block? Do these firms roll over, or do they defy regulators?
- Suppliers. Companies and consultants working for a blocked company could be subject to fines, loss of government contracts, or the arrest of their senior executives. Will these supply chain entities comply, or will they adopt the US approach, say, “Sure. Whatever,” and then continue to work for the US big technology firms? That may fly in some countries, but in other countries, that might be a problem.
- Employees of US companies who live and work in a country which has taken action to block their employer’s services could face arrest, imprisonment, and in some countries, extreme punishment. (I won’t define “extreme,” but you can look it up on one of those big tech smart software services. Note: You may be blocked from viewing the content. Why not give it a whirl?)
- Users would find themselves looking for ways to evade blocks by data centers and other firms in order to access the US services. In Iran, there are rumors on social media that the government is looking for individuals with Elon Musk Starlink systems and people who use virtual private networks. Breaking the law raises some interesting questions about user push back or kinetic response.
- Lawyers and consultants. Still billing no matter what.
Now let’s look at the question in the title of this essay, “Will Meta force the UK to do a Kremlin Telegram play?”
Accommodation. If the UK just accommodates Meta, the firm may continue to run the plays in its game plan. This signals other companies to ignore British laws, rules, and regulations. If a fine is levied, pay the fine and keep on running what’s in the play book.
Negotiate. Yeah, that works. “Great to meet you and your team. My team is here and ready to work out an understanding.” Look at the agreement getting settled in its little coffin.
Do nothing but talk. If the UK does nothing, big US high technology firms are likely to expand and become more aggressive in their methods for generating revenue from users and advertisers in that country. The “do nothing” approach has been, from my point of view, the path the EU has followed. How much money have US big technology companies paid in fines? Answer: Not much.
Block Meta’s services. If the UK requires UK data centers and related firms to block access to Meta’s services, the UK has adopted the Kremlin approach to managing information. I am not sure how that will fly in England or if it would fly in Northern Ireland, Scotland, or Wales. It would, however, be interesting to watch the different political entities respond to this Putinesque approach. The Ivory Tower thinkers at Oxbridge would produce some fascinating essays and books about the decision.
Net net: The Reuters’ story, if accurate, is important. The consequences for the UK may be significant. Meta will just adapt because of the Silicon Valley, big technology, tech bro thing.
Stephen E Arnold, March 18, 2026
A Peek into the Thiel-iverse
February 26, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I don’t pay much attention to Palantir. After I saw the firm run an ad in the Wall Street Journal explaining that it was an AI company, I dropped it from my intelware watch list. Palantir, I concluded, was an open-source-surfing custom software shop. It was essentially building solutions for customers. There’s nothing wrong with that approach, but I prefer outfits like Octostar, which just sell a license. The customer is ready to roll after a short training course.
I read two items I found interesting. The link between the two is Peter Thiel, not the companies themselves. Let’s look briefly at these two items. Please, read each in its original form and assemble your own opinion about the messages contained in each “real” news item.

Thanks, Venice.ai. Good enough.
The first article is in the UK newspaper The Telegraph. Its story is “NHS Contractor Palantir Will Suffer $200bn Wipeout, Says Big Short Investor.” The write up reports, “Michael Burry accuses ‘easily replaceable’ tech firm of overplaying its AI credentials.”
I found this snippet interesting:
Under its latest contract with the NHS, Palantir was tasked with joining up existing NHS data in a bid to speed up diagnosis and reduce waiting times and hospital stays. However, official figures this week showed that A&E trolley waits have risen to their worst on record – with more NHS patients than ever facing 12-hour delays last month.
Not unusual. Years ago I had a client with a juicy NHS contract. The client’s software did a couple of things, but to do heavy lifting such as that required by the NHS, custom code had to be written. My client could not meet the requirement and its contract was not renewed. Was it the NHS’ fault? Was my client responsible? I have no idea. But custom software required for a product that does a couple of things often presents challenges. Palantir is now tackling the NHS, and it has an alleged US$200 billion in market value riding on its efforts to help the NHS with some fundamental issues.
He [an expert named Michael Burry] said its chief executive, Alex Karp, had initially been “blindsided by ChatGPT” and other large language models, only then to decide that he could “spin this as Palantir is AI”. Mr Burry said: “Like Trump, Karp figures bluster has gotten him pretty far, and so will continue in that mode.” Palantir has previously hit back against Mr Burry’s criticism. In late 2025, Mr Karp branded the investor “bats–t crazy” for predicting such sharp falls in its stock.
Yep, professional.
The second article is “Discord Distances Itself From Age Verification Firm After Ties To Palantir’s Peter Thiel Surface.” This write up states:
Started in 2018, Persona develops identity detection and anti-fraud technologies. They’ve been having an absolute field day since the OSA, being implemented to verify user ID across Reddit and Roblox. One sticking point, however, is who’s backing the company: Peter Thiel, the cofounder of ICE-approved surveillance firm Palantir.
The write up points out:
Thiel, of course, is known for many things. A co-founder of PayPal, Thiel is now more closely affiliated with Palantir, a company specializing in digital surveillance and exploiting user information.
My reaction to the Thiel thread linking these two items is that:
- Aggressive marketing is working for Palantir
- Mr. Thiel has a knack for spotting “in between” opportunities; that is, pools of high value information and customers like governments
- Some people like Mr. Burry and the author of the Discord article are nervous about the companies and, I surmise, Mr. Thiel.
Is it possible that Mr. Thiel and other influential Silicon Valley professionals want to use their technology to create an on ramp for themselves and their companies to gain not just more money but direct influence over the government and the citizens of a country?
A partial answer might be found in the public statements of thinkers like Nick Land, Patrick Deneen, and Curtis Yarvin. The touchstone old timer may be René Girard or Leo Strauss. Some of the ideas might shed light on Mr. Thiel’s investments, his support of the Palantir approach to marketing, and the funding of outfits like Persona.
Getting fascinated with an individual chess piece is necessary, but the game is won by trying to figure out the strategy of the player. That’s why I don’t follow Palantir. It is the bigger picture into which Palantir fits that matters.
Stephen E Arnold, February 26, 2026
Software That “Works.” Okay, What Does “Work” Mean?
February 9, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I don’t want to write about the US government. I don’t want to write about consulting trends. I don’t want to write about AI. Once I wrote about search and retrieval. Now that’s just AI-ized, and it still does not “work” for many work-related processes. Why am I negative? Well, folks, AI-infused search just outputs information that can be wrong. When key word search can’t find a document a user just created on a laptop connected to the company network, an AI-infused system may not find it either. The AI system could just fabricate it or output a match that is close enough for horseshoes.
Therefore, I perked up when I read “Your Job Is to Deliver Code You Have Proven to Work.” The title sounded a bit like the other dinobabies whom I meet for lunch. Let’s take a look.
The author Simon Willison states:
Your job is to deliver code you have proven to work.
I don’t want to be a Negative Nancy, but my team and I encounter quite a bit of software that does not “work.” The key to understanding Mr. Willison’s point of view and mine is to define “work.” Mr. Willison writes:
We need to deliver code that works—and we need to include proof that it works as well.
Okay, someone somewhere has taken the time to write a tight specification or just waved hands and said, “I need something to do X.” In order to demonstrate that the software works, one has to show the customer or the user or the other components with which the new code interacts that it outputs what’s in the spec.

Thanks, Venice.ai. Good enough. Where have you heard that before?
Do you spot the flaw? Many “modern” software systems have what I call a “sort of spec.” The idea is that no one has the time or the information to create a detailed specification. The spec is just good enough. What happens?
Here’s a current example. Use ChatGPT in Edge. The smart software will say click the “plus” or the “icon” to perform a task. Okay, but there is no icon. There is no plus. The reason is that ChatGPT does not display certain controls in Edge. The same weird half complete implementation surfaces in other smart software. Ever try Comfy or Gemini in Edge? What about Perplexity in the Yandex browser? Who has time for this silliness? Certainly not the programmer / developer. The interfaces are good enough.
The flaw is that “works” is relative. A boss may not look at the code and review it. The person could be an MBA from a far-off land who studied at a French graduate school. Excel is about the limit of the individual’s technical expertise. Some managers don’t use the software. I have been in meetings for one reason: To demo software. Why? The boss had no clue how to use the product.
The essay concludes with:
A computer can never be held accountable. That’s your job as the human in the loop. Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review. That’s no longer valuable. What’s valuable is contributing code that is proven to work. Next time you submit a PR, make sure you’ve included your evidence that it works as it should.
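Mr. Willison’s “include your evidence” advice can be made concrete with a minimal sketch. The function and its test are my own illustrations, not from his essay: the pull request ships the change together with a test that demonstrates the behavior the spec asked for.

```python
# A minimal example of "code plus proof": the change ships with a test
# that demonstrates the behavior the spec requires. The function and
# test names are illustrative, not from Mr. Willison's essay.

def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace to single spaces and trim the ends."""
    return " ".join(text.split())

def test_normalize_whitespace():
    # The evidence: concrete inputs and the outputs the spec requires.
    assert normalize_whitespace("  hello   world \n") == "hello world"
    assert normalize_whitespace("") == ""
    assert normalize_whitespace("one") == "one"

test_normalize_whitespace()
print("all checks passed")
```

The point is not the function; it is that the reviewer receives the claim and the proof in the same submission.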
I want to point out that some people look for one throat to choke. Not many, I agree, but some are out there. The problem, in my opinion, is that the attitude, commitment, or determination to do a job, to work to the best of one’s ability, to make sure whatever the spec calls for actually gets delivered, and then to check the result on other systems is in short supply.
Sure, you can use smart software. But at some point yours will be the throat to choke. Why die on the Hill of Ineptitude? “Works” is subjective, but you can avoid immolation by a superior’s hot, fire-like outputs. Maybe the entire information technology department will burn in the white-hot leadership flame thrower.
Stephen E Arnold, February 9, 2026
Creepy Robots? Absolutely
January 29, 2026
Did you know that robotics has advanced so much they’re now making robots smaller than a grain of salt? It’s something out of science fiction, but Science Daily shares the scoop on these tiny machines: “Scientists Create Robots Smaller Than A Grain Of Salt That Can Think.” University of Pennsylvania assistant professor Marc Miskin is the senior author of a paper that describes these itty bitty wonders.
These teeny tiny robots are called microrobots. They measure 300x200x50 micrometers, barely visible without magnification. They’re programmed to swim, think, and survive for long stretches of time. What’s even cooler is that they’re powered by light.
Each microrobot is equipped with a microscopic computer that contains programmed instructions. The computer can detect temperature changes, and the microrobot adjusts its movements accordingly. The robots move using their own propulsion:
“Instead of bending or flexing, the robots generate an electrical field that gently pushes charged particles in the surrounding liquid. As those ions move, they drag nearby water molecules with them, effectively creating motion in the fluid around the robot. ‘It’s as if the robot is in a moving river,’ says Miskin, ‘but the robot is also causing the river to move.’”
Miskin worked with David Blaauw’s team from the University of Michigan to design solar panels that would power the robots. The team also redesigned the robots’ programmed instructions so they would fit inside the computer’s limited memory.
There are limitless applications for these minuscule robots, from medical to weapons usage. Let’s hope they’re used for good and not evil.
Whitney Grace, January 29, 2026
Management Is the Problem, Not Technology
January 22, 2026
Inc. said at the end of 2025 that “The Tech Industry Is Dying” and offered its opinions about how to fix up the ageing buggy. One of its editorials says the tech industry is being demoted to a commodity like insurance or a doc-in-the-box in a strip mall. The reason? Top talent isn’t being nurtured, and desirable products aren’t being designed and built.
Now this premise strikes me as a management challenge. A worker has to know what the job or task is. If a manager cannot explain it, how can the technology worker know what to do? The answer seems to be that the employee has to figure that out alone. No wonder outfits like Amazon can bring down half of the US Web sites, or Verizon outages leave people without mobile access and law enforcement without communications.
But let’s look at what Inc. thinks:
The editorial’s author Joe Procopio suggests five common-sense ways to save the industry. For example, don’t wait for others to innovate, and always meet shortsightedness with facts. Groupthink is another hurdle, harder to maneuver around than meerkats at a cliff edge:
“Consensus rule is dominating the tech industry more than it ever has. I’ll explain why this happens. My wife brought home a game where everyone takes turns being the first to guess an answer to a question by placing their marker on one of a few hundred options. It sucks having to go first, because then you don’t have the luxury of being able to put your marker closer to the consensus of markers. Then, when the answer is revealed, everyone can see just how wrong you were. If you want to lead, you can’t just be right. You have to convince everyone that their marker should be near your marker. Techies are terrible at getting consensus. Hopefully that analogy I just gave you helps you understand what you’re dealing with, so you can fight groupthink in a way that doesn’t get you ostracized and fired.”
Procopio says assume control over AI and remember to take risks (within reason of course). These suggestions sound like a self-help motivational course, not just ways to improve the tech industry.
Let’s think about this “consensus.” An employee without an effective manager or even a coherent job description may talk with a colleague. The colleague’s views become the input the employee needs. Calling this “consensus” misses the point. Organizations with managers who are not able to perform the employee’s job cannot provide guidance. Therefore, non-management allows the manager to say, “You work it out with your team.”
The team may be clueless. The results of this approach are visible in many firms. How many Copilot features are available? How many different interfaces does Google present to its users? How many products on eBay are as described?
In my opinion, Inc. dodges the core issue: Management methods deliver cleverness, an individual’s idea of what must be done, and an ultimately unstable technical house of cards. Is 2026 providing examples of positive change? I can’t think of one, but it is early in the year. I am not optimistic. My Internet just went down… again.
Whitney Grace, January 22, 2026
The Drivers for 2026
January 14, 2026
The new year is here. Decrypt.co runs down the highs and lows in “Emerge’s 2025 Story of the Year: How the AI Race Fractured the Global Tech Order.” The main events of 2025 revolve around China and the US battling for dominance over the AI market. With only $256,000, the young Chinese startup Deepseek claimed it trained an AI model that matched OpenAI. OpenAI spent over a hundred million dollars to arrive at the same result.
After Deepseek hit the Apple app store, Nvidia lost $600 billion in market value, the largest single-day drop in market history. Nvidia’s China market share fell from 95% to zero. The Chinese government banned all foreign AI chips from its datacenters; then the US Pentagon signed $10 billion in AI defense contracts.
China and the US are now waging a cold technology war. Deepseek punctured the US’s belief that controlling advanced chips would hinder China. Here’s how the US responded:
“The AI market entered panic mode. Stocks tanked, politicians started polishing their patriotic speeches, analysis exposed the intricacies of what could end up in a bubble, and enthusiasts mocked American models that cost orders of magnitude more than the Chinese counterparts, which were free, cheap and required a fraction of the money and resources to train.
Washington’s response was swift and punishing. The Trump administration expanded export controls throughout the year, banning even downgraded chips designed specifically for the Chinese market. By April, Trump restricted Nvidia from shipping its H20 chips.”
Meanwhile China retaliated:
“The tit-for-tat escalated into full decoupling. A new China’s directive issued in September banned Nvidia, AMD, and Intel chips from any data center receiving government money—a market worth over $100 billion since 2021. Jensen Huang revealed the company’s market share in China had hit ‘zero, compared to 95% in 2022.’”
The US lost a big market for chips, and China’s chip manufacturers increased domestic production by 40%. The US then implemented tariffs; China responded with the strictest rare earth export controls ever, exerting its control over the physical elements needed to make technology. China wants to hit US defenses hard.
The Pentagon then invested in MP Materials with a cool $400 million. Trump also signed the Genesis Mission executive order, a Department of Energy-led AI initiative that the Trump administration compared to the Manhattan Project. Then China did…etc, etc.
Net net: Hype and hostility are the fuels for the months ahead. Hey, that’s positive, Decrypt.
Whitney Grace, January 14, 2026
A Revised AI Glossary for 2026
January 5, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I have a lot to do. I spotted this article: “ChatGPT Glossary: 61 AI Terms Everyone Should Know.” I read the list and the definitions. I have decided to invest a bit of time to de-obfuscate this selection of verbal gymnastics. My hunch is that few will be amused. I, however, find this type of exercise very entertaining. On the other hand, my reframing of this “everyone should know” stuff reflects on my role as an addled dinobaby limping around rural Kentucky.
Herewith my recasting of the “everyone should know” list. Remember. Everyone means everyone. That’s a categorical affirmative, and these assertions trigger me.
Artificial general intelligence. Sci-fi craziness from “we can rule the world” wizards
Agentive. You, human, don’t do this stuff anymore.
AI ethics. Anything goes, bros.
AI psychosis. It’s software and it’s driving you nuts.
AI safety. Sure, only if something does not make money or abrogate our control
Algorithms. Numerical recipes explained in classes normal people do not take
Alignment. Weaponizing models and information outputs
Anthropomorphism. Sure, fall in love with outputs. We don’t really care. Just click on the sponsored content.
Artificial intelligence. Code words for baloney and raising money
Autonomous agents. Stay home and make stuff to sell on Etsy
Bias. Our way is the only way
Chatbot. Talk to our model, pal
Claude. An example of tech bro-loney
Cognitive computing. A librarian consultant’s contribution to gibberish
Data augmentation. Indexing
Dataset. Anything an AI outfit can grab and process
Deep learning. Pretending to be smart
Diffusion. Moral dissipation and hot gas
Emergent behavior. Shameless rip off of the Santa Fe Institute and Stuart Kauffman
End-to-end learning. Update models instead of retraining them
Ethical considerations. Pontifical statements or “Those are my ethical principles, and if you don’t like them… well, I have others.”
Foom. GenZ’s spelling of the Road Runner’s cartoon beep beep
Generative adversarial network. Jargon fog for inputs along the way to an output
Generative AI. Reason to fire writers and PR people
Google Gemini. An example of tech bro-loney from an ad sales outfit
Guardrails. Stuff to minimize suicides, law suits, and the proliferation of chemical weapons
Hallucination. Errors
Inference. Guesses
LLM. Today’s version of LSMFT
Machine learning. Math from half century ago
Microsoft Bing. Beats the heck out of me
Multimodal AI. A fancy way to say words, sound, pix, and video to help un-employ humans who did this type of work
Natural language processing. Software that understands William Carlos Williams’ poetry
Neural network. Lots of probability and human-fiddled thresholds
Open weights. You can put your finger on the scale too
Overfitting. Baloney about hallucinations, being wrong, and helping kids commit de-living
Paperclips. Less sexy than The Terminator but loved by tech bros who like the 1999 film Office Space
Parameters. Where you put your finger on the scale to fiddle outputs
Perplexity. Another example of tech bro-loney
Prompt. A query
Prompt chaining. Related queries fed into the baloney machine
Prompt engineering. Hunting for words and phrases to output pornography, instructions for making poison gas, and ways to defraud elders online
Prompt injection. Pressing enter after prompt engineering
Quantization. Jargon to say, “We won’t need so much money now, Mr. Venture Bankman”
Slop. Outputs from smart software
Sora. Lights, camera, you’re fired. Cut.
Stochastic parrot. A bound phrase that allowed Google to give Timnit Gebru a chance to find her future elsewhere
Style transfer. You too can generate a sketch in the style of Max Ernst and a Batman comic book
Sycophancy. AI models emulate new hires at McKinsey & Company
Synthetic data. Hey, we just fabricate data. No copyright problems, right
Temperature. A fancy way to explain twiddling with parameters
Text-to-image generation. Artists. Who needs them?
Tokens. n-grams but to crypto dudes it’s value
Training data. Copyright protected information, personally identifiable information, and confidential inputs in any format, even the synthetic made up stuff
Transformer model. A reason for Google leadership to ask, “Why did we release this to open source?”
Turing test. Do you love me? Of course, the system does. Now read the sponsored content
Unsupervised learning. Automated theft of training data
Weak AI (narrow AI). A model trained on a bounded data set, not whatever the AI company can suck down
Zero-shot learning. A stepping stone to artificial intelligence able to do more than any miserable human
I love smart software.
Oh, the cited source leaves out OpenAI’s ChatGPT. This means “Titanic” after the iceberg.
Stephen E Arnold, January 5, 2026

