Inside ECL's Futuristic AI Data Center
Extreme Density NVIDIA Racks in a Liquid-Cooled, Hydrogen-Powered Data Hall
Data center startup ECL launched to pioneer the use of hydrogen power to create the world’s most sustainable data centers. With the arrival of ChatGPT and the data center boom, ECL has evolved along with the industry and has shown that its cutting-edge approach to design is also ideal for high-density AI racks.
On the Data Center Richness podcast, ECL Founder Yuval Bachar shares how ECL is deploying the very latest NVIDIA high-density racks in a hydrogen-powered, liquid-cooled data center. It’s a story that reinforces the importance of an engineering mindset, and always being ready to innovate.
Here’s my podcast discussion with Yuval Bachar of ECL.
Here’s a brief excerpt, where Yuval and I discuss ECL’s California data center, which integrates many cutting-edge concepts in energy and cooling while supporting high-density AI racks.
Rich Miller of Data Center Richness: I'm curious about your first data center in Mountain View, where you tested out all the concepts you were developing. It's become the place where your customer does very high-density AI, serving as a proving ground for some of the very newest high-density equipment from NVIDIA. What has that process been like, and what have you learned in your first 18 months operating that first site?
Yuval Bachar, ECL: The first (data center) was supposed to be a development site, but at some point our customer Lambda basically said, “Okay, move it to production. We want to actually lease the whole space.” Now Lambda has been running for over a year with very heavy AI workloads.
What happened in 2023 and 2024 is that the data center requirements changed completely, and all the blueprints that we had and ways of building data centers are just not relevant anymore. So we had to go and reinvent a lot of things, like direct-to-chip liquid cooling distribution systems, water distribution systems, and fast integration: all the things that are super critical right now.
We were able to leverage the fact that we can run off-grid completely and be able to scale our power generation platform without dependency on the grid, which in retrospect was one of the best aspects of what we did.
Today we are running the site over here for 12 months in production, and it’s running 150kW racks of (NVIDIA Blackwell) GB300s. It’s fully run by an AI agent, so we don’t have people on site. We don’t have a NOC, and we are operating in a way which is extremely efficient because of our liquid cooling capabilities. Our PUE over here is 1.05, and it’s very tough to get 1.05 with that level of density. We provisioned our capabilities to integrate those very high power racks. Just an example: the last GB300 that arrived over here was online two hours from the moment it actually landed, which is very unusual. Usually, it takes much longer.
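For context on that 1.05 figure: PUE (power usage effectiveness) is total facility power divided by IT equipment power, so 1.05 means everything beyond the IT load, cooling included, adds only 5% overhead. A minimal sketch of the arithmetic, using hypothetical loads rather than ECL's actual figures:

```python
# PUE = total facility power / IT equipment power.
# A PUE of 1.05 means cooling, power conversion, and other
# overhead add only 5% on top of the IT load.

def pue(it_power_kw: float, overhead_kw: float) -> float:
    """Power usage effectiveness for a facility."""
    return (it_power_kw + overhead_kw) / it_power_kw

# Hypothetical example: ten 150 kW racks with 75 kW of overhead.
it_load = 10 * 150.0   # 1,500 kW of IT load
overhead = 75.0        # 75 kW for cooling, distribution losses, etc.
print(round(pue(it_load, overhead), 2))  # -> 1.05
```

The reason liquid cooling makes that number reachable at this density is that it removes most of the fan and air-handling energy that dominates overhead in air-cooled halls.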
“What happened in 2023 and 2024 is that the data center requirements changed completely. … So we had to go and reinvent a lot of things.”
Yuval Bachar, ECL
We all talk about fast transients in power, but a fast transient in power is also associated with a fast transient in cooling, because the cooling load transitions very, very fast. Suddenly you jump from 30kW to 140kW in like milliseconds. Everything in the system actually has to adapt, and if you have multiple systems doing it at the same time, it creates a churn.
When we started the development, we did not plan for such a high transient, but we wanted to create a new kind of power system. So we eliminated the diesel generator, eliminated the UPS, and created this active-active setup. It worked perfectly for us because one of the elements of adding the active-active in Mountain View is a BESS (battery energy storage system) from Tesla. The BESS system is taking care of all the transients. So our customer gets the option to bring the rack as-is. They don’t need to create any transient management system or supercapacitors or anything; the system responds to their demand very simply in a matter of microseconds because the BESS systems are responding extremely fast.
The journey was extremely interesting and extremely transitional. One thing which is interesting is that we’re in NVIDIA's Inception program, so we see what’s coming down the line, and that’s scary. If you look at the world today, there may be 10 or 15 data centers which can actually deploy the GB300s properly. When we go to 400 kW and 600 kW and maybe a megawatt down the line, the number of data centers that will be able to handle this is going to shrink. There are going to be fewer and fewer data center providers that can do it, and existing data centers will become obsolete even faster.
I think the one thing we’ve learned is you can adapt very fast to a certain level, but if you start doing 2x or 3x every six months, that is going to be a major deployment blocker for future data centers.