OpenAI’s Stargate ambitions are redefining AI power demand, with planned capacity climbing toward 17 GW—roughly the combined summer peaks of New York City and San Diego. Experts call the scale “scary,” warning computing could reach 10–12% of global electricity by 2030 and run ahead of what grids and new nuclear can supply on the current timeline [1].
Key Takeaways
– OpenAI and partners are targeting up to 17 GW of AI power, roughly New York City’s 10 GW plus San Diego’s 5 GW summer peaks combined [1].
– Reuters details a $500 billion Stargate plan for five U.S. sites aiming at ~10 GW, with nearly 7 GW already under construction and about 25,000 onsite jobs [2].
– CNBC estimates $850 billion in total buildouts and 17 GW of demand, enough electricity for more than 13 million U.S. homes [3].
– One Texas site alone will draw about 900 MW and host ~60,000 Nvidia GB200 chips, supported by gas plus wind and solar [5].
– Global computing could consume 10–12% of world power by 2030, a trajectory one expert called “scary,” underscoring urgent grid constraints [1].
How AI power demand stacks up against major cities
Fortune reports OpenAI- and Nvidia-linked projects could require as much as 17 GW nationwide, a draw on par with New York City’s roughly 10 GW summer peak plus San Diego’s ~5 GW during a heatwave [1].
That equivalence is why University of Chicago computer scientist Andrew Chien labeled the trajectory “scary,” projecting computing loads could rise to 10–12% of global power by 2030, far outpacing grid and nuclear build timelines [1].
CNBC adds that 17 GW is enough to power more than 13 million U.S. homes, framing AI power not as a niche data-center load but as a metropolitan-scale footprint that will influence utility planning and rates [3].
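The homes figure checks out on a back-of-envelope basis. A quick sketch, assuming the commonly cited EIA ballpark of about 10,800 kWh of electricity per U.S. household per year (an assumption not stated in the article):

```python
# Back-of-envelope check: how many average U.S. homes does 17 GW cover?
# Assumes ~10,800 kWh/year per household (EIA ballpark; not from the article).
HOURS_PER_YEAR = 8760
annual_home_kwh = 10_800
avg_home_kw = annual_home_kwh / HOURS_PER_YEAR   # ~1.23 kW average draw

ai_load_gw = 17
homes = ai_load_gw * 1_000_000 / avg_home_kw     # convert GW to kW, then divide
print(f"~{homes / 1e6:.1f} million homes")       # roughly 13.8 million
```

Under those assumptions the 17 GW load works out to roughly 13–14 million average households, consistent with CNBC’s “more than 13 million” framing.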
The $500 billion Stargate build and 10 GW of AI power capacity
Reuters details a $500 billion Stargate program led by OpenAI, Oracle, and SoftBank to construct five new U.S. hyperscale data centers targeting about 10 GW of capacity, with nearly 7 GW already in construction—an unusually large share for such early-stage mega-infrastructure [2].
That plan includes sizable Nvidia chip commitments and an estimate of about 25,000 onsite jobs, but energy experts and regional authorities warn that securing round-the-clock electricity and grid stability at this pace will be difficult, especially during peak seasons [2].
Estimates vary on total outlays. CNBC places the broader slate of planned sites at roughly $850 billion—about $50 billion each—while reiterating expected demand of 17 GW as the compute footprint expands across multiple regions [3].
Business Insider reports Sam Altman’s goal to deliver roughly 1 GW of AI infrastructure per week—an unprecedented tempo experts say will collide with the “silent bottleneck” of electricity supply, from generation to transmission interconnections [4].
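To put that tempo in perspective, a minimal sketch of the arithmetic behind the stated goal (the 1 GW/week rate is Altman’s target as reported; the annualized figure is derived, not from the article):

```python
# What "1 GW per week" implies: the annualized build rate, and the time
# needed to reach the reported 10 GW and 17 GW capacity targets.
rate_gw_per_week = 1.0
implied_gw_per_year = rate_gw_per_week * 52      # 52 GW of new capacity per year

for target_gw in (10, 17):
    weeks = target_gw / rate_gw_per_week
    print(f"{target_gw} GW at 1 GW/week -> {weeks:.0f} weeks")
```

An implied 52 GW of new capacity per year is what leads analysts to treat generation and transmission, rather than capital or chips, as the binding constraint.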
Where the AI power will come from: gas, renewables, and the grid
In Abilene, Texas, OpenAI and partners previewed a flagship Stargate site designed to draw about 900 MW, supported by a new natural gas plant and bolstered by regional wind and solar to diversify the power stack [5].
To mitigate water stress, the campus touts closed-loop cooling, but local residents and environmental groups have raised concerns about ecological impacts and grid reliability as load concentrates in a handful of massive nodes [5].
Energy authorities in other hosting regions echo those concerns, noting the challenge of delivering firm power 24/7 and preserving reserve margins for other customers as multiple multi-hundred-megawatt sites enter service [2].
Fortune underscores that nuclear—while appealing for zero-carbon baseload—cannot be built fast enough to match AI’s near-term surge, leaving interim reliance on gas, accelerated renewables, and costly transmission upgrades [1].
Costs, chips, and cooling: the hardware behind the AI power surge
AP reporting indicates each Stargate complex could house roughly 60,000 Nvidia GB200 accelerators, a density that drives extreme electrical and thermal loads and necessitates sophisticated power distribution and cooling designs [5].
Reuters notes the development timeline is anchored by significant Nvidia chip procurements, underscoring the capital intensity of training clusters that push aggregate demand toward multi-gigawatt campuses [2].
CNBC estimates individual sites averaging around $50 billion, cautioning that heavy CapEx and operating costs could impose sustained financial pressure, even as AI usage has grown tenfold in the past 18 months and strained regional energy capacity [3].
Analysts told Business Insider that matching a 1 GW-per-week rollout demands synchronized additions of generation, storage, and transmission—well beyond typical utility build rates—otherwise developers risk energy constraints delaying or downsizing deployments [4].
Can the grid keep up as AI power climbs toward 2030?
If computing reaches 10–12% of global electricity by 2030, as Andrew Chien warns, utilities face a planning challenge that rivals postwar electrification—balancing reliability, emissions targets, and affordability at unprecedented speed [1].
Meeting 10–17 GW of incremental AI load will require firm resources alongside rapid additions of renewables, batteries, and long-distance transmission to ensure power can be delivered when and where it’s needed [4][2].
Texas, already navigating fast-growing wind and solar fleets with a large thermal backbone, will likely need reserve-margin updates and significant interconnection upgrades to avoid congestion and curtailment as large AI campuses come online [5].
Communities are weighing the promise of about 25,000 onsite jobs against concerns about local noise, water allocation, and system peaks—trade-offs that intensify as construction nears 7 GW and the first multi-hundred-megawatt campuses ramp [2].
Fortune’s reporting highlights a central constraint: absent faster nuclear or long-duration storage, the pace of AI model growth may be throttled by kilowatts as much as by capital or algorithms, making energy strategy a core competitive variable [1].
What the next 12–24 months mean for AI power and policy
In the near term, developers must sequence data-hall buildouts with grid milestones to ensure firm service dates match energization, especially in regions with interconnection backlogs and limited peak reserves [2].
Construction of gas peakers and combined-cycle units near AI campuses will likely bridge reliability gaps, while expanded wind and solar portfolios supply daytime and shoulder energy; both paths are already evident in the Abilene project design [5].
Policy makers will confront siting questions and cost recovery debates as utilities propose new lines, substations, and flexible resources, under pressure from estimates that 17 GW of AI demand rivals multiple major metropolitan loads stitched together [1][3].
Investors, meanwhile, will scrutinize CapEx profiles across $500–$850 billion scenarios, evaluating whether utilization and monetization can offset electricity, cooling, and depreciation costs as compute intensity rises faster than historical grid expansion [2][3].
The bottom line: the AI race is now an energy race. As megawatt-hours become the gating factor for model training and inference at scale, winners will be those who secure dependable, affordable, and increasingly low-carbon AI power at speed [1][4].
Sources:
[1] Fortune (via INKL) – Sam Altman’s AI empire will devour as much power as New York City and San Diego combined. Experts say it’s ‘scary’: https://www.inkl.com/news/sam-altman-s-ai-empire-will-devour-as-much-power-as-new-york-city-and-san-diego-combined-experts-say-it-s-scary
[2] Reuters – OpenAI, Oracle, SoftBank plan five new AI data centers for $500 billion Stargate project: https://www.reuters.com/business/media-telecom/openai-oracle-softbank-plan-five-new-ai-data-centers-500-billion-stargate-2025-09-23/
[3] CNBC – Sam Altman’s AI empire will devour as much power as New York City and San Diego combined, experts warn: https://www.cnbc.com/2025/09/23/sam-altman-openais-850-billion-in-planned-buildouts-bubble-concern.html
[4] Business Insider – Sam Altman wants to build ‘the coolest and most important infrastructure project ever’: https://www.businessinsider.com/sam-altman-ai-infrastructure-1-gw-per-week-stargate-2025-9
[5] Associated Press – OpenAI shows off Stargate AI data center in Texas and plans 5 more elsewhere with Oracle, Softbank: https://apnews.com/article/0b3f4fa6e8d8141b4c143e3e7f41aba1