OpenAI Will Begin Building Its Own Data Centers
AI specialist may also explore new designs for data center equipment racks

LAS VEGAS - After partnering with data center developers and operators, OpenAI will begin building its own data centers as it continues to expand its high-performance computing infrastructure.
“We have lots of channels by which we deliver capacity,” said Chris Malone, Head of Data Centers for OpenAI, in a panel discussion at the Yotta 2025 conference. “We think it’s important for us to also self-build.
“We have a very strong conviction about scaling, and also have direct buy-in from our leadership,” said Malone.
As it sets out to design its own massive data centers, OpenAI is rethinking some of the fundamental building blocks of traditional facilities, including data center racks.
“The 19-inch cabinet has been around for many years, and I don’t think that needs to be (the only approach),” said Malone, who has previously worked on the data center teams at Meta and Google. “Why not deliver capacity in (a form factor) that allows rapid change.”
OpenAI’s Data Center Evolution
While newsworthy, OpenAI’s decision to begin building its own facilities is not a surprise. The company has been hiring data center talent since late 2024, including Malone and Director of Physical Infrastructure Keith Heyde, who also came from Meta.
The shift opens a new chapter in OpenAI’s data center journey, enabling the company to optimize its data halls, power and cooling for its high-performance AI training and inference operations.
For companies with intense or specialized infrastructure needs, building their own data centers allows greater control over cost and performance, and the ability to customize designs around specific applications.
At the time of ChatGPT’s launch in November 2022, OpenAI was running its AI workloads on the Azure Cloud through its partnership with Microsoft. Here’s a timeline of its subsequent evolution:
In June 2024 OpenAI began deploying AI capacity on Oracle Cloud Infrastructure.
In January 2025, OpenAI announced a partnership with Oracle, SoftBank and MGX to create the Stargate Project, which plans to invest $500 billion in data centers to advance AI.
In the announcement at the White House, OpenAI also confirmed it was the end user of a huge data center project in Abilene, Texas, developed by Lancium and leased by Oracle.
In early 2025, OpenAI made a $12 billion, five-year infrastructure hosting deal with CoreWeave.
OpenAI recently announced that it will also use Google Cloud for additional AI capacity.
“We have a whole set of partnerships with CSPs (cloud service providers) where we’re buying the capacity we need,” said Heyde. “What we’re looking for is confidence in that power ramp. You have to see the path forward (on power).”
Capacity Shortage Is "Hundreds of Megawatts"
With more than 700 million weekly users as of August, OpenAI could well top 1 billion users by year-end 2025.
Even with its many partnerships, OpenAI is unable to keep up with demand for computing to support its products, said Davis Wu, who heads Compute Strategy and Finance at OpenAI.
“The demand continues to outpace the supply of capacity,” said Wu. “We are facing product tradeoffs, such as delaying product rollouts or limiting some features. We continue to try to make up the gap. Right now it’s on the order of hundreds of megawatts.
“We’re very excited about the data center builds in 2027 or 2028,” added Wu, speaking on a Yotta panel on the “trillion dollar market” for AI. “The near term, now and next year, is when we see the critical shortages.”
Malone noted Meta’s recent move to deploy GPUs in tents as an indicator of how the data center capacity crunch is driving innovation.
“That was a remarkable opportunity to break all the rules,” said Malone. “A year ago, that would have been impossible.
“We’re doing the same kind of thinking about what really matters,” he added. “We’re thinking about how we scale differently.”
The early phase of the design process includes “super-deep engagement with the software side.”
“There is a profound importance to get as much capacity as we can under one scaling roof,” said Malone. “Our primary goal is gigawatt-plus megaclusters that deliver the latency performance we need with our models.”
Rethinking Data Center Rack Design
Massive, closely packed clusters of powerful GPUs generate enormous heat. The data center industry is grappling with how to support the higher rack densities of cutting-edge AI gear, a challenge that is driving broader adoption of liquid cooling.
It’s also prompting speculation about whether extreme density will require new form factors for racks and enclosures.
OpenAI won’t be the first to rethink rack design. In 2023 the Open Compute Project introduced the Open Rack, a 21-inch-wide rack that replaced the traditional 19-inch form factor, which dates to telecom equipment predating modern data centers. The wider rack allows better cooling airflow within the chassis.
Malone is hopeful that any rack redesign will bring benefits for the broader industry.
“We really need to approach the whole ecosystem,” he said. “It’s incumbent on us as an industry to move this forward. I don’t believe there’s tremendous value in keeping secrets or having a secret OpenAI rack design.”
We have more coverage coming from the Yotta 2025 event, so please subscribe!