Following the Power to the Network's Edge
NVIDIA, Prologis Lead Partnership to Bring AI Inference to Utility Substations

A new initiative from leading players in AI, real estate and energy aims to reshape the geography of artificial intelligence by deploying smaller, modular data centers at or near utility substations with available power capacity.
NVIDIA, Prologis, InfraPartners and the Electric Power Research Institute (EPRI) are teaming up to deploy a network of distributed AI inference facilities ranging from 5 to 20 megawatts (MW). By targeting locations where grid capacity is available but underutilized, the initiative aims to bypass the primary constraint of the AI era: the lengthening wait for utility power.
“Using existing grid capacity to bring inference compute closer to where it’s needed, quickly and reliably, is a win for all,” said EPRI President and CEO Arshad Mansoor. “This collaboration with Prologis, NVIDIA, InfraPartners, and the utility community highlights the type of innovative actions required to meet the moment.”
Five-Site Rollout Planned for 2026
The partners plan to launch at least five pilot sites across the United States by the end of 2026. These installations will run NVIDIA’s optimized AI hardware and use InfraPartners’ modular construction to create a repeatable, standardized kit for rapid deployment.
Prologis, the world’s largest industrial real estate company, will provide the physical sites, many of which are strategically located near urban centers and logistics hubs where AI inference demand is highest. Site selection for the first five locations is underway.
This announcement reinforces the growing interest in distributed infrastructure for AI inference, which we highlighted last week:
Leading players in the data center sector are positioning themselves for a shift in the AI lifecycle. While the training of Large Language Models (LLMs) requires massive scale and concentrated power, the actual use of those models - the inference phase - will often benefit from proximity to the end user and a more resilient, agile footprint.
In essence, this initiative is a highly localized example of the “follow the power” site selection strategy, which has succeeded “follow the network” as the guiding mantra in data center geography.
Modular Design Comes to the Fore
Edge computing extends data processing and storage closer to the growing universe of devices and sensors at the edge of the network, enabling new technologies and services delivered over low-latency wireless connectivity.
Edge deployments have long been supported by modular data center designs, which have evolved to become key building blocks in the industry effort to manage the demands of high-density hardware with more frequent refresh cycles.
InfraPartners provides prefabricated, liquid-cooled modular units, and has worked closely with neocloud Nscale and is partnering with JLL on data center deployments. Its “Upgradeable Data Center” solution is designed to support the accelerating cadence of GPU platform upgrades from NVIDIA.
This modular design allows for a “plug-and-play” approach - once the site is ready and the power is secured, the data center can be operational in months rather than years.
“AI is becoming the real-time engine of growth for the modern economy, and it demands a new kind of digital infrastructure,” said Harqs Singh, chief technology officer at InfraPartners. “By pairing InfraPartners’ AI data center solutions with EPRI’s technical leadership, NVIDIA’s platforms, and Prologis’ national footprint, we’re enabling rapid deployment of AI nodes where they’re needed most. Together, we’re building the foundation for the next decade of intelligent infrastructure.”

A Partnership Spanning Verticals
Each partner brings specific expertise to address the “constraint stack” of land, power, and hardware:
Prologis operates 801 million square feet of real estate across 3,825 properties in the United States. The company is actively expanding into AI infrastructure, with plans to spend $8 billion over the next four years to build 20 data centers. The company says the AI data center boom represents “one of the most significant value creation opportunities in our history.”
“As energy demand grows, we need infrastructure solutions that support grid reliability and make better use of what’s already built,” said Parag Soni, senior vice president and global head of Utility Strategy and Engagement at Prologis. “This collaboration is about using our development and energy expertise to help deliver smarter, more flexible infrastructure right where it’s needed.”
As a non-profit research organization, EPRI acts as a bridge between the tech industry and the utility sector, working with utility partners to identify pockets of the grid where existing infrastructure can handle a 5-10 MW load without requiring major transmission upgrades. The institute can also help identify nearby fiber connections.
NVIDIA, of course, is the dominant player in the AI economy, as its GPU hardware is driving a massive shift to accelerated computing. The company is partnering with and investing in leading companies across the AI ecosystem, and will be a key player in the new edge initiative.
"AI is driving a new industrial revolution that demands a fundamental rethinking of data center infrastructure,” said Marc Spieler, senior managing director for the Global Energy Industry at NVIDIA. "By deploying accelerated computing resources directly adjacent to available grid capacity, we can unlock stranded power to scale AI inference efficiently. This distributed approach, powered by NVIDIA accelerated computing, maximizes existing energy assets, helping to deliver the intelligence required to transform every industry."

The New Edge, With Echoes of the Old
The idea of placing modular data centers at key points on the utility grid is not entirely new. In 2017, the Phoenix-area utility SRP worked with modular specialist BaseLayer to deploy a DataStation near a major grid intersection with high reliability, which allowed the facility to operate without a backup generator.
That prototype didn’t result in a network of edge facilities. Indeed, many early initiatives to build large networks of distributed edge data centers undershot their ambitious goals as the industry struggled with dueling definitions of edge computing.
As with those debates, there are differing views about how the infrastructure and geography of AI inference will evolve. Some believe hyperscale facilities can serve inference as well as training, hence the many references to “fungible design” from large operators. Others believe consumer devices will ultimately handle many inference tasks.
The new partnership from NVIDIA, EPRI, Prologis and InfraPartners clearly sees a need for distributed facilities that can support the power and cooling needs of high-density GPU platforms. The IXP.us initiative we profiled last week shares similar views and ambitions.
Both intend to deploy capacity this year, so by the end of 2026 we will likely know more about the shape of low-latency inference and the future of edge AI workloads.