Big tech has a big cooling problem. Data centers require massive amounts of energy to keep their chips—the semiconductors that process, store, and transmit data—from overheating. As these facilities proliferate across the U.S., the power grid is feeling the crunch, but a new cooling technology could help ease the strain.
Direct-to-chip cooling, which circulates coolant through “cold plates” mounted directly on processors, is emerging as a leading cooling method for its efficiency—an increasingly critical advantage as chips grow more powerful. Now, a team of researchers has found a way to make cold plates even more efficient. They published their findings today in the journal Cell Reports Physical Science.
“The main difference here is that we’re using this new manufacturing technology called ECAM—electrochemical additive manufacturing,” study co-author Nenad Miljkovic, professor and director of the Air Conditioning and Refrigeration Center at the University of Illinois Urbana-Champaign, told Gizmodo.
With this method, Miljkovic and his colleagues created copper cold plates that are optimally designed to deliver up to 32% better cooling than conventional cold plates. They also reduced pressure drop by 68%, making it easier for liquid coolant to flow through the plate. Deploying these plates across an entire data center would lead to significant energy savings compared to both air-cooling and commercially available liquid-cooling systems, according to the researchers.
3D printing a better cold plate
The exceptional efficiency of these copper cold plates stems from their fin design. The interiors of most cold plates are lined with tightly packed metal “fins” that project into the coolant to maximize the amount of surface area in contact with it.
Miljkovic and his colleagues collaborated with Fabric8Labs, the San Diego-based company that developed the proprietary ECAM technology, to produce copper cold plates with an optimized fin design. This manufacturing process is essentially like 3D printing with metal at very high resolution—it uses electrochemical plating to build tiny copper structures layer by layer instead of melting and fusing metal.
“We can make these optimized three-dimensional structures that you couldn’t make using classical manufacturing,” Miljkovic explained.
To create the fin design, his team started with a simple rectangular shape, then used a technique called topology optimization to determine the best shape to maximize cooling capability and reduce pressure drop. This technique uses a mathematical algorithm to gradually alter the fin’s shape and estimate the efficiency of each iteration.
“After 1,000 iterations, it ends up with this really beautiful tree-like structure, which is optimized for heat flow,” Miljkovic said.
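Real topology optimization uses gradient-based solvers coupled to physics simulations, but the iterate-and-score loop the researchers describe can be illustrated with a toy stand-in. The sketch below (an assumption for illustration, not the team's actual algorithm) evolves a binary fin layout on a small grid, rewarding solid–fluid interface length as a proxy for heat-transfer surface area and penalizing solid fraction as a proxy for pressure drop:

```python
import random

random.seed(0)

# Toy stand-in for topology optimization: evolve a binary fin layout
# (1 = copper, 0 = coolant channel) on a small grid over 1,000 iterations.
# The score rewards solid/fluid interface (proxy for heat-transfer
# surface area) and penalizes solid fraction (proxy for pressure drop).
N = 12
# Start from a simple rectangular block of copper, as the paper's team did.
grid = [[1 if x < N // 2 else 0 for x in range(N)] for y in range(N)]

def score(g):
    interface = 0
    for y in range(N):
        for x in range(N):
            if x + 1 < N and g[y][x] != g[y][x + 1]:
                interface += 1
            if y + 1 < N and g[y][x] != g[y + 1][x]:
                interface += 1
    solid = sum(sum(row) for row in g)
    return interface - 0.5 * solid  # more surface area, less flow blockage

initial = score(grid)
best = initial
for step in range(1000):
    y, x = random.randrange(N), random.randrange(N)
    grid[y][x] ^= 1                 # flip one cell: copper <-> channel
    s = score(grid)
    if s >= best:
        best = s                    # keep the improvement
    else:
        grid[y][x] ^= 1             # revert the change

print(best > initial)  # the layout improves over the starting rectangle
```

The real method differs in every detail (continuous density fields, adjoint gradients, conjugate heat-transfer models), but the structure is the same: start from a simple shape, repeatedly perturb it, and keep changes that improve the cooling objective.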
According to the researchers, a data center with 1 gigawatt of computing power consumes roughly 500 megawatts of electricity to run an air-cooling system. That means it actually consumes 1.5 GW in total, but only 1 GW goes to data processing. With these optimized cold plates, a 1 GW data center would only need to use 11 MW for cooling, they say.
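A quick back-of-the-envelope check of those figures (all values in gigawatts, taken from the researchers' estimates above):

```python
# Cooling-power arithmetic for the hypothetical 1 GW data center.
compute = 1.0      # IT load (GW)
air_cooling = 0.5  # power drawn by a conventional air-cooling system (GW)
optimized = 0.011  # cooling power with the optimized cold plates (GW)

total_air = compute + air_cooling
print(f"total with air cooling: {total_air} GW")                          # 1.5 GW
print(f"cooling overhead: {air_cooling / compute:.0%}")                   # 50%
print(f"cooling power saved: {(air_cooling - optimized) * 1000:.0f} MW")  # 489 MW
```

In other words, the optimized plates would cut the facility's cooling draw from half of its computing load to about 1%.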
The next test: real servers
While their prototype testing yielded promising results, Miljkovic said the next step is to demonstrate the cold plates’ efficiency on real chips. He hopes to collaborate with companies providing large-scale cloud computing to see how this design functions on actual hyperscale servers.
As those companies continue to grow their computing power, finding ways to reduce energy consumption will be critical. Power-hungry data centers are already putting significant strain on the grid, and some projections show their energy demand could double or even triple by 2028.
Deploying this optimized cold plate design at scale won’t solve the grid crunch alone, but it could help pave the way toward a more sustainable future for Big Tech. AI isn’t going anywhere anytime soon, so adapting data centers to operate within the grid’s limits will be critical.


