OpenAI's Unfeasible Data Center Ambitions: A Critical Financial and Logistical Analysis


This analysis critically examines OpenAI's ambitious plans to deploy hundreds of gigawatts of data center capacity, challenging the feasibility of their timelines, financial models, and resource availability.

A common critique leveled against current discourse surrounding major technological announcements is a perceived lack of critical scrutiny. This analysis argues against a deferential approach, asserting that a more direct and rigorous examination of ambitious claims is imperative.

Broadcom and OpenAI recently announced plans for an additional 10GW of custom AI chips and associated data center capacity, slated for full deployment by the end of 2029. The announcement has been met with largely neutral media reporting that often presents the plan as not only achievable but rational.

Constructing a gigawatt of data center capacity is an immense undertaking. Initial estimates suggested a cost of at least $32.5 billion and a construction timeline of two and a half years, but figures vary significantly: Jensen Huang estimates $50 billion for computing hardware alone (excluding buildings and power infrastructure), while Barclays Bank projects $50 billion to $60 billion all-in. Earlier estimates also understate the true cost. The Stargate Abilene project initially implied $12.5 billion per gigawatt of compute, based on a $15 billion joint venture with Crusoe for 1.2GW; yet the project, designed for 50,000 NVIDIA GB200 GPUs, implies approximately $20 billion per gigawatt for GPUs alone, pushing the re-evaluated all-in cost to around $32.5 billion per gigawatt. Those early calculations also excluded associated costs such as power infrastructure (Lancium, a partner, has raised over a billion dollars for it) and crucial networking infrastructure. A more realistic, conservative estimate for a gigawatt of data center capacity is therefore $50 billion.
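
These per-gigawatt figures can be cross-checked with simple arithmetic. A minimal sketch, using only the numbers cited above (the variable names, and the inferred split between GPU and non-GPU costs, are illustrative):

```python
# Cross-checking the per-gigawatt cost figures cited above.
# All dollar amounts are in billions; the GPU/non-GPU split is inferred.

early_estimate = 15.0 / 1.2        # Crusoe JV: $15B for 1.2GW -> ~$12.5B/GW
abilene_all_in = 32.5              # re-evaluated all-in cost per GW at Abilene
gpus_alone = 20.0                  # implied GPU spend per GW at Abilene
non_gpu_costs = abilene_all_in - gpus_alone   # buildings, power, networking

conservative = 50.0                # Huang / Barclays lower bound
print(f"early: ${early_estimate:.1f}B/GW, revised: ${abilene_all_in}B/GW, "
      f"conservative: ${conservative:.0f}B/GW")
```

The gap between the $12.5 billion joint-venture figure and the $50 billion conservative estimate is the crux of the argument: the headline deal values capture only a fraction of what a gigawatt actually costs.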

OpenAI has publicly committed to deploying 33GW of capacity across partnerships with AMD, NVIDIA, Broadcom, and the seven data centers under the Stargate initiative. Notably, one of these locations in Lordstown, Ohio, described by a SoftBank representative speaking to WKBN as "not a full-blown data center," is instead intended as a hub for cutting-edge AI and data storage infrastructure within specialized containers, rather than a traditional large-scale data center. This detail was readily discoverable through public searches.

Such ambitious pronouncements necessitate a serious examination of their feasibility, particularly concerning the significant risks and potential for market distortion. Let's outline the proposed developments for the coming year:

  • H2 2026 (OpenAI & Broadcom): OpenAI and Broadcom plan to finalize an AI inference chip, then manufacture enough of them to populate a 1GW data center. This facility would require 1.2GW to 1.3GW of power (to account for cooling systems and transmission losses on the hottest days), yet it has no disclosed location, and construction has not begun.
  • H2 2026 (AMD & OpenAI): The "first 1 gigawatt deployment of AMD Instinct MI450 GPUs" is scheduled to begin. This deployment will occur in an unnamed data center location, which, to meet the timeline, would have required construction and early power procurement to start at least a year ago.
  • H2 2026 (OpenAI & NVIDIA): As part of their $100 billion deal, OpenAI and NVIDIA are slated to deploy the first gigawatt of NVIDIA's Vera Rubin GPU systems. These GPUs will be housed in an undisclosed data center, which would similarly need to have begun construction over a year ago to adhere to the timeline.
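
The 1.2GW-to-1.3GW figure above is effectively an overhead multiplier on the 1GW of IT load; treating it as a PUE-style ratio is my framing, not the announcements' wording, and the Abilene back-calculation reuses the 200MW/130MW figures discussed later in this piece:

```python
# Facility power needed to serve a given IT load. The 1.2-1.3 multiplier
# mirrors the range in the bullet above (cooling plus transmission losses
# on the hottest days); treating it as a PUE-style ratio is an assumption.

def facility_power_gw(it_load_gw: float, overhead_ratio: float) -> float:
    """Total grid draw = IT load x overhead ratio."""
    return it_load_gw * overhead_ratio

for ratio in (1.2, 1.3):
    print(f"1 GW of IT load at ratio {ratio}: "
          f"{facility_power_gw(1.0, ratio):.2f} GW of power")

# Working backwards: Stargate Abilene's 200MW of power supporting only
# 130MW of IT load implies roughly a third of supply held in reserve.
usable_fraction = 130 / 200   # 0.65
```

The asymmetry matters in both directions: every gigawatt of chips needs well over a gigawatt of procured power, and every gigawatt of procured power yields well under a gigawatt of usable compute.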

By a conservative estimate, these data centers alone would cost over $100 billion. Crucially, a significant portion of this capital needs to be readily available to OpenAI, or secured from other investors, for construction to proceed.

The aggregated scope and timelines of these announcements raise serious questions regarding their practical attainability. Despite the significant skepticism, a number of reputable media outlets continue to report these projections as plausible, which warrants closer scrutiny.

There appears to be insufficient time, capital, and specialized resources—such as transformers, electrical-grade steel, or skilled talent—to realize these ambitious data center projects. This raises legitimate questions about why OpenAI, NVIDIA, Broadcom, and AMD are making such seemingly impossible claims. The underlying motivations appear to be driven by market dynamics and a necessity to maintain investor confidence and deal momentum.

Three publicly traded tech firms have reportedly committed to Sam Altman's rapid expansion strategy, making promises on timelines that executives, with their deep industry knowledge, must understand are exceptionally difficult, if not impossible, to meet. This continuous bolstering of stock valuations based on what appear to be fundamentally unrealistic ideas demands a critical reassessment by the public. At some point, the financial realities will likely diverge sharply from these projections.

This analysis will detail the extensive requirements OpenAI faces in the coming year to fulfill its stated ambitions of massive, expensive capacity deployments, potentially for demand that may not materialize.

OpenAI frequently cites its 800 million weekly active users. However, it's worth noting that OpenAI's own research (see page 10, footnote 20) indicates a double-counting of logged-out users across different devices. Furthermore, OpenAI aims to build 250 gigawatts of capacity by 2033, an endeavor projected to cost $10 trillion—equivalent to one-third of the entire U.S. economy last year. The fundamental question is: who is this capacity for?

In February, Goldman Sachs estimated global data center capacity at around 55GW. OpenAI's goal is to add five times that capacity by itself within eight years, a growth that historically took over three decades to achieve organically. This proposition, costing one-third of America's 2024 economic output, lacks sensible justification. Even OpenAI's impressive growth from 700 million to 800 million weekly active users in two months does not support building capacity based on the assumption of universal, constant usage.

Furthermore, there are reported discrepancies concerning OpenAI's current operational capacity. According to an internal Slack note reviewed by Alex Heath of Sources, Altman claimed OpenAI started the year with "around" 230 megawatts of capacity, aiming to exceed 2GW by the end of 2025. This assertion is difficult to reconcile, as OpenAI is not known to possess significant proprietary capacity. The reported acquisition or construction of 1.7GW of capacity without disclosure—an amount equivalent to all operational data centers in the UK last year—lacks transparency. The origin of this capacity, especially given that CoreWeave (a partner) is projected to have a maximum of 900MW by end of 2025, remains unclear. Even the Stargate Abilene project, with only one operational building and 200MW of power, can support merely 130MW of IT loads due to necessary power reserves.

What is OpenAI's actual strategy for this colossal capacity? Even if it hits its $13 billion revenue projection for this year (from $5.3 billion by end of August, requiring over $1.5 billion monthly soon), is it realistic to expect a tenfold company expansion based on a history of requiring continuous massive capital injections?
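
The run-rate implied by those revenue figures is straightforward to check; the four-month window is my assumption (September through December remaining after the end-of-August figure):

```python
# Run-rate check on the $13B full-year revenue target (all figures in $B).
full_year_target = 13.0
through_august = 5.3           # recognized by the end of August
months_remaining = 4           # September through December (assumed)

remaining = full_year_target - through_august      # ~$7.7B still to book
required_monthly = remaining / months_remaining    # ~$1.9B per month
print(f"required monthly revenue: ${required_monthly:.2f}B")
```

That works out to roughly $1.9 billion per month, consistent with the "over $1.5 billion monthly" framing above, and well above anything OpenAI has demonstrated on a sustained basis.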

According to The Information, OpenAI spent $6.7 billion on research and development in the first six months of 2025. Epoch AI data indicates that most of the $5 billion spent on R&D in 2024 was allocated to experimental runs and models never released, with only $480 million dedicated to training models for public use. Concerns also surround OpenAI's recent product releases. GPT-4.5 was reportedly underwhelming, with Altman himself acknowledging it would not "crush benchmarks." Similarly, GPT-5 was met with significant disappointment, and Sora 2, despite its capabilities, faced accusations of plagiarism, leading to necessary modifications.

This leads to a fundamental question: what tangible outcomes has OpenAI delivered in the past year and a half that justify an $11.7 billion investment, and what precisely do stakeholders believe OpenAI will achieve with these conceptual data centers?

Why Does ChatGPT Need $10 Trillion Of Data Centers?

The core challenge with ChatGPT extends beyond occasional factual inaccuracies; it lies in the inability to consistently guarantee specific functionalities. While it performs well in many instances, identifying a critical task it reliably executes every time users require it remains difficult. OpenAI's aspiration to build 250 gigawatts of data centers, necessitating $10 trillion, demands a more robust justification than merely "it's going to be really good."

Claims of building Artificial General Intelligence (AGI) by 2030, as expressed by Altman to Politico, are presented as a compelling reason for these investments. However, the financial and logistical requirements for building 250 gigawatts of capacity by 2033 demand immediate and unprecedented capital.

Based on current calculations, OpenAI requires at least $50 billion in the next six months to build a gigawatt of data centers for Broadcom, plus an additional $200 billion within the next 12 months to stay on track for its 10 gigawatt target by the end of 2029. On top of that, it needs:

  • At least $50 billion for a gigawatt of data centers for NVIDIA.
  • $40 billion to cover 2026 compute costs.
  • At least $50 billion for chips and a gigawatt of data centers for AMD.
  • At least $500 million for its consumer device (the design of which remains uncertain).
  • At least a billion dollars for ARM CPUs to complement the new Broadcom chips.

This totals an estimated $391.5 billion in immediate capital requirements—exceeding the $368 billion of global venture capital raised in all of 2024. This sum is nearly 11 times Uber's lifetime funding ($35.8 billion) and 5.7 times Amazon's capital expenditures for building Amazon Web Services ($67.6 billion).
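
The tally can be reproduced directly from the line items above; the labels are mine, the dollar figures are the ones just cited:

```python
# Reproducing the $391.5B near-term capital tally (all figures in $B).
requirements = {
    "Broadcom: 1GW buildout (next six months)": 50.0,
    "Broadcom: staying on track for 10GW by 2029": 200.0,
    "NVIDIA: 1GW buildout": 50.0,
    "2026 compute costs": 40.0,
    "AMD: chips plus 1GW buildout": 50.0,
    "Consumer device": 0.5,
    "ARM CPUs for the Broadcom chips": 1.0,
}
total = sum(requirements.values())
print(f"total: ${total}B")

# Scale comparisons from the text: 2024 global VC ($368B),
# Uber's lifetime funding ($35.8B), Amazon's AWS capex ($67.6B).
print(total > 368.0)            # exceeds a full year of global venture capital
print(round(total / 35.8, 1))   # multiple of Uber's lifetime funding
```

Even before adding operating costs, the total exceeds an entire year of global venture capital, which is the point: there is no established funding channel sized for this.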

OpenAI’s Other Costs

Beyond these infrastructure investments, OpenAI faces substantial operational costs. According to The Information, the company spent $2 billion on Sales and Marketing alone in the first half of 2025, with billions more likely allocated to salaries. This suggests an additional $10 billion in expenses, bringing the rounded total requirement to at least $400 billion. To meet the deadlines for these deals by the end of 2026, a significant portion of this capital would be required by February 2026.

OpenAI Needs Over $400 Billion In The Next 12 Months To Complete Any Of These Deals, And Sam Altman Doesn't Have Enough Time To Build Any Of It

While raising debt or securing additional funding might be suggested, OpenAI faces a tight timeframe—less than a year—to materialize these commitments, which were publicly announced to conclude by December 2026. Even if timelines extend into 2027, data center construction necessitates immediate capital, and hardware partners like Broadcom, NVIDIA, and AMD will require cash payments for chips prior to shipment.

Even if OpenAI secures multiple consortiums of funding partners for tens of billions in data center investments, there are practical limits. OpenAI's aggressive timelines would necessitate raising multiple record-breaking data center financing rounds annually.

Furthermore, OpenAI must still honor its compute contracts with Oracle, CoreWeave, Microsoft (its Azure credits potentially depleted), and Google (via CoreWeave) with actual cash, an estimated $40 billion. This comes while the company reportedly burned $9.2 billion on compute in the first half of 2025 against revenues of $4.3 billion. Concurrently, OpenAI needs to cover staff salaries, storage, and a sales and marketing department that cost $2 billion in H1 2025. All of this comes amid a crucial transition from a non-profit to a for-profit entity by year-end; failing that, it risks losing $20 billion in SoftBank funding, and if the conversion is not completed by October 2026, its $6.6 billion funding round from 2024 will convert to debt.

The Global Financial System Cannot Afford OpenAI

The financial burden OpenAI's ambitions place on the global financial system is extraordinary and potentially destabilizing. At its current rate, it would absorb the capital expenditures of multiple hyperscalers, requiring numerous $30 billion debt financing operations annually. To reach its goal of 250 gigawatts by the end of 2033, OpenAI's capital expenditures would likely need to surpass those of any other company globally.

OpenAI's trajectory appears unsustainable, posing significant risks to all entities dependent on the completion of its ambitious plans. For it to succeed, it would need to absorb over a trillion dollars annually, potentially eclipsing the $1.7 trillion in global private equity deal volume in 2024 and becoming a significant component of global trade ($33 trillion in 2025).

The sheer scale of financial and material resources required makes these plans improbable without diverting a substantial portion of existing global capital. Even then, the proposed timelines for construction and deployment are unrealistic, suggesting that OpenAI is developing these plans reactively, with widespread acceptance despite their impracticality. At some point, OpenAI will be compelled to deliver on its commitments, a task that the global financial system appears ill-equipped to support.

To be explicit, OpenAI's promises appear largely unachievable. Consider the Oracle deal: Oracle needs 4.5 gigawatts of IT load capacity to provide OpenAI with compute for its $300 billion, five-year agreement. Despite Oracle co-CEO Clay Magouyrk's assertion that "of course OpenAI can pay $60 billion a year," OpenAI's current financial trajectory suggests it cannot sustain such payments. Even if it could, Oracle's capacity is insufficient. The Stargate Abilene project, intended to be complete by the end of 2026 (six months behind schedule), currently has only 200MW of the 1.5+GW of power it requires, and is projected to remain short of capacity by year-end. Oracle's only other planned data center, a 1.4GW plot in Shackelford, Texas, has only just commenced construction and will have merely a single building completed by the second half of 2026.

The collective evidence strongly suggests that these ambitious timelines and capacity expansions are not realistically achievable, necessitating a candid assessment of the situation. OpenAI is not merely "building the AI industry"; it is creating massive capacity for a single company that consumes billions with no clear path to profitability. This endeavor risks becoming a significant misallocation of capital and resources, vulnerable to collapse should investor confidence falter.

The assertion that Sam Altman is constructing a vast data center empire masks a different reality: the presentation of unattainable goals. He claims he will build 250GW of data centers in eight years, an impossible feat requiring more capital, in volumes and intervals, than any entity could realistically raise. Sam Altman's notable ability lies in convincing others to believe in his vision or to participate in what appears to be a large-scale confidence game reliant on continuous investor belief. The recklessness of this approach primarily threatens retail investors, who may be swayed by promises of immense returns based on unrealistic projections.