Microsoft is scaling back its plans for global data centre expansion, with projects delayed or halted in countries including the US, the UK, Indonesia, and Australia, Bloomberg reported, citing people familiar with the matter. The changes suggest the company is reassessing how it supports cloud computing and AI infrastructure.
Microsoft has acknowledged changes to its data centre strategy, though details on most projects remain undisclosed, the report noted.
A spokesperson was quoted by the publication as saying: “We plan our data centre capacity needs years in advance to ensure we have sufficient infrastructure in the right places.
“As AI demand continues to grow, and our data centre presence continues to expand, the changes we have made show the flexibility of our strategy.”
In the UK, Microsoft has withdrawn from negotiations to lease a site located between London and Cambridge, which had been highlighted for its potential to support advanced Nvidia chips. Similarly, plans for a data centre near Chicago, US have been put on hold.
CoreWeave CEO Michael Intrator mentioned in an interview that Microsoft, which had previously leased spare cloud computing capacity from CoreWeave, recently retracted its plans to acquire additional resources.
Intrator did not specify how many projects were affected or where they were located, but said CoreWeave has found another buyer for the capacity.
Despite the delays, Microsoft confirmed its ongoing commitment to its $3.3 billion investment in the Wisconsin site in the US, which is projected to commence operation in 2026.
The company has reiterated its intention to invest roughly $80 billion in data centre construction for the fiscal year concluding in June, with the following year concentrating on outfitting existing facilities instead of initiating new builds.
In March 2025, Reuters reported that Microsoft aimed to launch its first cloud region in Malaysia, comprising three data centres, by the second quarter of 2025.
Microsoft has downsized its data centre expansion activities across multiple regions, indicating a reevaluation of its infrastructure investments in response to changing demands for artificial intelligence (AI) and cloud services. The technology leader has either postponed or abandoned projects in areas including the United Kingdom, Australia, Indonesia, and several locations across the United States, as reported by Bloomberg citing sources.
As a leading player in commercial AI, primarily owing to its collaboration with OpenAI, Microsoft’s infrastructure expenditures are closely watched by investors. The firm’s choice to slow down data centre developments has sparked concerns regarding the long-term demand for AI services and cloud computing.
Reassessing data centre investments
The exact reasons for Microsoft’s data centre pullback remain ambiguous, with speculation ranging from challenges in construction and power supply to a re-evaluation of anticipated AI workloads. Some analysts view this action as a sign that the projected demand for AI services does not currently warrant Microsoft’s extensive investments in server farms.
Recently, this uncertainty has contributed to fluctuations in the technology sector, notably impacting chip manufacturers like Nvidia, which depend significantly on data centre expenditures. Microsoft’s stock dropped 2.3 percent in early trading in New York, adding to the overall market decline.
The company confirmed alterations to its infrastructure plans but did not provide specific details. A spokesperson stated: “We plan our data centre capacity needs years in advance to ensure we have sufficient infrastructure in the right places. As AI demand continues to grow, and our data centre presence continues to expand, the changes we have made show the flexibility of our strategy.”
Projects halted or delayed
Among the affected initiatives, Microsoft has withdrawn from leasing negotiations for a site situated between London and Cambridge, where space was being promoted for its capability to support Nvidia-powered AI services. A separate agreement for data centre space near Chicago has also been cancelled.
Microsoft has also slowed construction on certain projects, including a data centre campus in Indonesia, an expansion in Mount Pleasant, Wisconsin, and discussions over a server farm in North Dakota. Applied Digital Corp., which had initially been in talks with Microsoft for the North Dakota site, has since sought alternative tenants after negotiations dragged on.
In London, Microsoft had been discussing the possibility of leasing space at a significant data centre project in Docklands, operated by Ada Infrastructure, but has yet to finalize commitments. The developer is now considering alternative potential occupants.
Strategic realignment in AI infrastructure
Microsoft remains steadfast in its overall $80 billion investment plan for data centre expansion for its fiscal year ending in June. However, the company has previously indicated that its approach for the next fiscal year will focus on enhancing existing facilities rather than embarking on new projects.
Some analysts suggest that Microsoft’s changes reflect larger transformations in the AI sector. The recent announcement by Chinese startup DeepSeek regarding a competitive AI service developed with limited resources has raised doubts about the long-term need for extensive AI computing power. At the same time, OpenAI’s strategic partnership with Oracle and SoftBank—potentially involving up to $500 billion in AI infrastructure investments—indicates a broader diversification of AI workloads beyond Microsoft’s systems.
Market uncertainty and future outlook
Industry experts contend that Microsoft’s adjustment does not necessarily signal a fundamental decline in AI or cloud services but rather represents a tactical shift. Ed Socia, a director at intelligence firm datacenterHawk, pointed out that businesses are reevaluating their data center strategies to enhance efficiency and reduce costs.
“You might have initially believed that one data center project was the quickest way to market, but then realized that labor, supply chains, and power delivery weren’t as prompt as anticipated,” he explained. “Consequently, companies have to readjust their priorities in the short term.”
CoreWeave, a cloud service provider that Microsoft previously partnered with for additional capacity, acknowledged Microsoft’s withdrawal but said the decision was specific to Microsoft rather than a sign of an industry-wide issue.
“Their collaboration with OpenAI has shifted,” stated CoreWeave CEO Michael Intrator. “Thus, it is logical that there would be some changes.”
Despite recent market concerns, Microsoft asserts that its plans for cloud expansion remain on course. A company spokesperson reaffirmed commitments to ongoing initiatives, including a $3.3 billion investment in Wisconsin and a cloud region expected to launch in Indonesia in 2025.
Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End
That is essentially the viewpoint expressed by AI researchers in a recent survey. When asked whether “scaling up” existing AI methods could lead to artificial general intelligence (AGI), a general-purpose AI that matches or surpasses human cognition, a significant 76 percent of respondents deemed it “unlikely” or “very unlikely” to succeed.
The survey, conducted by scientists at the Association for the Advancement of Artificial Intelligence and polling 475 AI researchers, serves as a strong rebuttal to the tech industry’s preferred approach to advancing AI: equipping generative models, and the data centres that train and operate them, with ever more hardware. Since AGI is the stated goal of many AI developers, the results suggest that scaling is widely regarded as a dead end.
“The enormous expenditures on scaling, without any corresponding efforts to comprehend the underlying mechanisms, always struck me as misguided,” stated Stuart Russell, a computer scientist at UC Berkeley who helped organize the report, in an interview with NewScientist. “I believe that approximately a year ago, it became evident to everyone that the advantages of scaling in the traditional sense had plateaued.”
Massive funds are being invested in the AI arms race. Generative AI attracted more than $56 billion in venture capital funding in 2024, as reported by TechCrunch. Much of this investment goes toward building or maintaining the large data centres that generative models require. Microsoft, for instance, has committed to spending $80 billion on AI infrastructure in 2025.
The energy demands are equally staggering. Microsoft struck a deal to bring an entire nuclear power plant back online solely to supply its data centres, and competitors such as Google and Amazon have made notable nuclear energy agreements of their own.
The idea that AI could be endlessly enhanced through scaling has always been somewhat tenuous. A case in point is the tech sector’s recent existential crisis triggered by the Chinese startup DeepSeek, whose AI model could compete with the West’s leading multi-billion-dollar chatbots at what are claimed to be only a fraction of the training costs and power requirements.
Of course, indications of this trend had been apparent beforehand. In November of the previous year, reports indicated that OpenAI researchers had found that the forthcoming version of its GPT large language model showed considerably less improvement over its predecessor, and in some cases no improvement at all.
In December, Google CEO Sundar Pichai publicly stated that easy AI advancements were “over,” yet confidently asserted that there was no reason the industry couldn’t “keep scaling up.”
More cost-effective, efficient methods are currently being investigated. OpenAI has employed a technique known as test-time compute with its latest models, allowing the AI to spend more time “thinking” before selecting the most promising solution. Researchers claimed this method achieved a performance increase that would have otherwise required extensive scaling to replicate.
However, Arvind Narayanan, a computer scientist at Princeton University, mentioned that this approach is “unlikely to be a silver bullet,” as reported by NewScientist.