Big changes in hardware, power and cooling are coming for data centers in 2025


The new year has just started, and the data center industry is already busy. News keeps emerging about developments in the key areas of the business: hardware, power, and cooling.

One of the most noteworthy announcements at the beginning of this year came from Microsoft. The technology giant revealed plans to invest $80 billion this fiscal year in constructing new data centers worldwide. Approximately half of this amount will be allocated for projects within the U.S., while the remainder will be distributed across Microsoft’s international data center network.

Microsoft’s fiscal year 2025 wraps up in June 2025, meaning there are less than six months left to achieve this goal. Although the company has already expended a significant amount during the initial six months of the fiscal year, specifics have not been disclosed. What is informally known is that the company has acquired numerous properties intended for data center use. Additionally, it is prioritizing the construction of facilities and, of course, purchasing the necessary hardware, which entails substantial costs.

In its fiscal year 2024, Microsoft allocated roughly $50 billion for capital expenditures, a vast majority of which was directed towards data centers. Consequently, in fiscal 2025, we can anticipate continued similar investments, with a particular emphasis on AI data centers. This includes centers dedicated to AI services, as well as utilizing AI technologies to enhance performance and maintenance within data centers.

Naturally, other companies are pursuing similar strategies. Major players such as Google, AWS, and Meta are also developing data centers tailored for AI. Additionally, smaller and dedicated data center operators are diligently working to upgrade their facilities and adapt to both competition and demand, propelling industry-wide advancements, including in hardware.

2025 will be a pivotal year for data center hardware

Unsurprisingly, hardware garners the most attention; without it, the massive buildings that house the servers and cooling systems wouldn’t be necessary. DataCenterKnowledge highlights five significant trends shaping data center hardware in 2025.

One primary trend is the rise of AI accelerator hardware. These components are specially engineered for AI algorithms, delivering markedly faster and more efficient performance. AI workloads will gain the most from these accelerators, and as these tasks take center stage, they will influence data center hardware configurations. This could pose a long-term risk, particularly as AI moves from experimentation to standard practice. Should workloads diversify again and move beyond AI-centric tasks, a subsequent restructuring of data centers may be necessary to accommodate more balanced hardware.

The second trend involves an increasing use of DPUs – Data Processing Units. These modules are uniquely created to manage network traffic and other tasks specific to data centers that are typically handled by standard CPUs. By taking on these responsibilities, DPUs free up valuable computing resources for client workloads.

DPUs have been on the market for some time, but their primary benefits have only become apparent as AI demand has overloaded traditional hardware resources. While they don’t accelerate AI workloads themselves, DPUs are proving essential partners to CPUs and GPUs by handling other operations.

The third trend is security: as newer, more advanced hardware is introduced, threats naturally rise with it. Certain risks pertain to the physical hardware itself. For instance, malicious actors might attempt to compromise equipment during transit to the data center, potentially installing harmful firmware or undermining component reliability. This scenario underscores the necessity of rigorous validation processes to ensure the hardware remains untampered. Additionally, data center operators will need to adapt their regular cybersecurity measures to counter emerging attack vectors.

Fourth, there is likely to be a growing interest in ARM servers for data centers. For years, ARM CPUs have dominated the mobile device market, but their efforts to penetrate the PC and server markets have seen only limited success. Even so, ARM vendors have persisted in their quest for increased market share in these areas.

Notably, in 2024, ARM CPUs made headlines by entering the AI PC and laptop sectors. While these devices perform well, sales have not been particularly strong, potentially due to their higher price point. Regardless, ARM vendors are still pursuing the server space, asserting that their chips offer superior energy efficiency, optimization for specific workloads, and overall enhanced performance. Nevertheless, ARM servers will likely remain a niche option even in 2025.

Finally, further innovations in cooling technologies are anticipated. This is logical, given the substantial energy being invested in computing power, which generates considerable heat; thus, cooling systems must evolve significantly. As expected, this is a critical topic that warrants deeper exploration.

The year to bring down temperatures

Naturally, we aren’t suggesting that the expansion of data centers should be slowed down. In fact, they will keep growing, which will generate a significant amount of heat – in more ways than one! DataCenterKnowledge has even reported that a “heat crisis” is developing within data centers. Analysts predict that energy use in data centers will rise by an average of 12% by 2030, despite ongoing attempts to optimize and reduce consumption.

Once more, AI is a primary contributor to this issue. It requires a substantial increase in computing capability, which in turn generates considerably more heat in the servers. Traditional air cooling methods won’t suffice without significantly boosting airflow and equipment, which is not practical or cost-effective. Therefore, alternative solutions have begun to surface, including a completely new method – customized cooling systems. For instance, hyperscale operators are partnering with cooling manufacturers to create bespoke cooling designs for their facilities. This enables them to maximize their equipment’s efficiency while balancing performance, cost-effectiveness, and future preparedness.

Typically, these solutions involve hybrid systems that integrate both air and liquid cooling. Depending on the data center’s location and operational requirements, they might also incorporate heat recycling systems, energy recovery technologies, and additional options. “Data center cooling has reached a crucial juncture. Adopting resilient and energy-efficient solutions will empower operators to succeed moving forward. As resource limitations become more pronounced, innovative cooling strategies are no longer optional – they are vital for the survival and prosperity of the data center sector,” Karin Overstreet, president of Nortek Data Center Cooling, told DataCenterKnowledge.

A significant question remains unanswered

That question is how data centers will meet their electricity demands. It was already an issue in 2024, to the extent that some experts suggested data centers might run out of power within a year and a half. We are now much closer to that unofficial deadline, and there is still no solution in sight. Efforts to optimize and cut consumption can only go so far, particularly with so many new construction projects underway.

Some hyperscalers are currently financing the reopening of decommissioned nuclear plants and making substantial investments in the development of more sustainable energy sources. The challenge is that this approach is very costly, making it feasible primarily for hyperscalers. Smaller operators, in contrast, must compete within the standard energy market alongside everyone else. Yet they also suffer the repercussions of the hyperscalers’ actions: in many areas dense with data centers, local businesses and authorities do not view these operators favorably.

They believe that these facilities are consuming their electricity and competing unfairly against them. Even if smaller data center operators aren’t actually doing this, they are often perceived the same way by the community, as people and businesses typically do not differentiate between a conventional data center and a hyperscaler. Authorities are eager to impose limits on data centers’ energy usage to gain support from local companies.

Operators must prioritize sustainability as a core component of their strategy. This could potentially address numerous limitations they encounter and also help them gain positive recognition from the community. It’s far more advantageous for operators to demonstrate that they possess their own sustainable energy sources and, even better, if they can provide electricity to the community at a reasonable cost.
