A Pragmatic Look at Data Center Power: Just Turn Everything on at Once. No Problemo.

March 12, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

A happy honk to the UK online information service the Register. “Your Datacenter’s Power Architecture Called. It’s Not Happy” presents some useful information about how AI compute data centers operate when the “on” button is pressed. The article explores, in a reasonably non-EE-geeky way, some of the different facets of taking a corn field in Louisiana or a tobacco farm in Tennessee and building a data center the size of five soccer fields. Heck, make it bigger. Land is cheaper than in Manhattan. Go for 20 soccer fields of AI compute hardware.


Yep, AI is close enough for horseshoes. Thanks, MidJourney. Nice fire.

The write up says:

GPU clusters and AI accelerators don’t operate on the old rules. They don’t ask for 15 kW. They demand hundreds of kilowatts per rack, an order-of-magnitude leap that legacy electrical and thermal architectures were never designed to survive. The comfortable assumptions baked into decades of datacenter design are now liabilities, and the industry is facing a reckoning it can no longer defer.

What? Are big-time AI outfits kicking the can down the road or simply ignoring the fact that “power” is just there? It’s like water. In their accelerated lives, power has not been top of mind. Now the whiz kids at big tech AI companies have an opportunity to learn about the challenges their acres of computers and assorted gizmos will pose.

The write up identifies a number of issues; for example:

Let’s talk about the current-squared problem and resistive losses. Because power loss scales with the square of the current, even small reductions in current lead to significant increases in efficiency. The power distribution efficiency is governed by Joule resistive loss (P_loss = I²R).

If you are not into this power lingo, the author seems to be saying that the new AI compute data centers require some extra engineering. Why? The pricey CPUs and GPUs can run hot. These devices plus RAM and solid state storage are voltage-sensitive. Even idle or underutilized racks can run at higher temperatures than old-fashioned data center racks. When demand spikes to create a video for grandma, the AI compute system has to deal with heat. AI-centric systems can throttle themselves. Pushing harder means that electrical noise can become a factor. Furthermore, failover is harder because the stacks are non-identical and latency-sensitive.
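For readers who want the current-squared problem in concrete terms, here is a minimal sketch of the arithmetic. The rack power, bus resistance, and distribution voltages below are made-up illustrative figures, not numbers from the Register article or any real facility; the point is only that delivering the same power at a higher voltage (and therefore lower current) slashes the I²R loss.

```python
# Illustrative sketch of the current-squared problem.
# Resistive loss in a power feed scales as P_loss = I^2 * R, so the same
# delivered power at a higher bus voltage (lower current) loses far less.
# All figures below are hypothetical examples, not real measurements.

def resistive_loss_watts(power_w: float, voltage_v: float,
                         resistance_ohm: float) -> float:
    """Joule loss in the feed for a given delivered power and bus voltage."""
    current_a = power_w / voltage_v          # I = P / V
    return current_a ** 2 * resistance_ohm   # P_loss = I^2 * R

RACK_POWER_W = 100_000       # hypothetical 100 kW AI rack
BUS_RESISTANCE_OHM = 0.001   # assumed 1 milliohm of distribution resistance

for volts in (48, 400, 800):
    loss = resistive_loss_watts(RACK_POWER_W, volts, BUS_RESISTANCE_OHM)
    print(f"{volts:>4} V feed: {loss:,.1f} W lost in the bus")
```

Running the sketch shows the squared effect: going from a 48 V feed to an 800 V feed for the same 100 kW rack cuts the bus loss by a factor of (800/48)², which is why higher-voltage distribution keeps showing up in AI data center designs.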

How do you deal with infrastructure challenges, power, and cooling? Answer: Novel engineering and money. The problem is that these big AI compute data centers are now the equivalent of the room-sized mainframes of the late 1950s, the kind of historical hardware that makes 20-somethings laugh when the Smithsonian in DC puts it on display.

The real equations are not the EE marvels. Nope, the basic equations pivot on the cost of time required to engineer, design, test, and manufacture the components necessary to keep power draw lower and temperatures even lower. At the same time, the solutions have to allow the CPUs and GPUs to go fast.

Accelerationists think about software. Accountants and people with common sense think about plumbing. Experienced professionals think about time and how much it costs to crunch quite challenging engineering into tiny intervals. Speed can kill data centers, the financial dreams of high tech whiz kids, and some businesses.

What happens when one tries to scale these nifty data centers? No big deal. We are big tech doing AI. We have the answers.

Stephen E Arnold, March 12, 2026
