2024-08-02

Sustainable Development requires Data Centers to reduce Power Consumption

4 optimization approaches on the way to green computing

Many companies have clear sustainability goals, along with requirements from stakeholders and legislators to reduce their energy consumption. At the same time, IT energy consumption is growing as AI and related applications demand ever more computing power, making it an increasingly large cost factor. Information technology is therefore coming under critical scrutiny in many companies this year as a source of energy consumption and emissions, and IT equipment such as servers and cooling systems accounts for a major share of data center energy use. How can enterprises optimize their infrastructure and meet their social responsibility for sustainable development? IT infrastructure provider KAYTUS identifies four areas where improvements are necessary and sensible, and what deserves special attention in each.

Positive and negative factors in Europe's environmental balance sheet

The EU has steadily decreased its greenhouse gas emissions since 1990, reaching a net reduction of 32.5% by 2022. Although the volume of carbon removed from the atmosphere in the EU increased in 2022 compared to the previous year, the EU is currently still not on track to reach its 2030 objective of removing 310 million tonnes of CO2 from the atmosphere per year. The EU Member States need to significantly step up their implementation efforts and accelerate emissions reduction to stay on track for the 55% net greenhouse gas reduction target by 2030, and climate neutrality by 2050.

Surging AI applications increase energy requirements

With the rise of AI applications such as generative AI, machine learning (ML), and autonomous driving, the power drawn by servers and the power density of chips and server nodes continue to climb. The power consumption of AI chips has already risen from around 500 watts to 700 watts and is expected to exceed 1,000 watts in the future. As processor power consumption increases, so do the heat dissipation requirements of the entire machine. Authorities in the EU and around the world have set strict requirements for energy savings and consumption reduction to make data centers more environmentally friendly. The latest EU Energy Efficiency Directive stipulates that data centers with an IT power demand of at least 100 kilowatts must publicly report on their energy efficiency annually.

By 2030, data centers are expected to consume 3.2 percent of the EU's total electricity, an increase of 18.5 percent compared to 2018. To achieve the environmental targets that have been set, companies must therefore shoulder their responsibility for sustainable development. As computing workloads grow, optimizing IT infrastructure has become a key lever for reducing energy consumption and developing sustainably.

Four key approaches to greener computing

How can companies minimize the power consumption of their data centers? Four factors should be prioritized on the way to green computing: hardware design, software strategy, system-level refinements, and application optimization, all aimed at improving energy efficiency. Let's look at each optimization approach in detail.

Hardware Component Design Approach

Focusing on the hardware component level, optimizing the structural design of elements like fans, air ducts and radiators can help improve heat dissipation efficiency. For example, improved front and rear intakes and fans can boost and streamline airflow by up to 15 percent, thereby maximizing cooling performance. IT infrastructure providers are now using simulation experiments to improve the shape, spacing and angle of the fan blades and reduce vibration. Furthermore, motor efficiency, internal structure and materials of the fans can be optimized to maximize airflow volume and reduce energy consumption.


Air ducts with minimized flow resistance make the airflow more stable and efficient. For example, a horizontal design and a honeycomb waveguide airflow design are recommended to effectively interrupt internal turbulence, which can increase heat dissipation efficiency by more than 30 percent.

 

Data center managers should consider crafting a variety of cooling solutions to augment the heat dissipation capabilities of high-performance processors. By using special heat sinks and various techniques such as standard heat dissipation, T-shaped heat dissipation, siphon heat dissipation, cold plate heat dissipation, etc., the heat dissipation efficiency of the entire server system can be increased by more than 24 percent, the power consumption of a single-node server can be reduced by 10 percent, and the cooling requirements of a 1U two-socket server with a single 350W processor can be met.

 

System Software Approach

At the software component level, energy-saving measures such as individual hard disk power control, intelligent speed adjustment and power limiting can be implemented, which can reduce the overall power consumption of the server by more than 15 percent.

 

IT specialists can align the power management of individual hard disks with the thermal strategy: control the powering on and off of individual disks via CPLD, limit system throughput to some disks, and put others into sleep mode, saving about 70 percent of the disks' power consumption. Adjusting the heat dissipation strategy in turn lowers fan speeds and reduces the power consumption of the data center's cooling equipment. Compared to traditional server architectures, this can cut power consumption per TB by a factor of more than three, save 40 percent of data center space, and reduce total operating costs by more than 30 percent.
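The idle-disk part of such a policy can be sketched in a few lines. The threshold, disk names, and the notion of "last access time" below are illustrative assumptions, not KAYTUS's actual implementation; in practice a management agent would issue the standby command to the selected disks (for example via the BMC/CPLD path described above):

```python
import time

SLEEP_AFTER_S = 600  # illustrative idle threshold (10 minutes)

def disks_to_sleep(last_access, now=None):
    """Return the disks that have been idle longer than the threshold.
    last_access maps disk name -> timestamp of last I/O (seconds)."""
    now = time.time() if now is None else now
    return [disk for disk, t in last_access.items() if now - t > SLEEP_AFTER_S]

# sdb and sdc have been idle for well over the threshold; sda is active
accesses = {"sda": 990.0, "sdb": 100.0, "sdc": 50.0}
print(disks_to_sleep(accesses, now=1000.0))  # ['sdb', 'sdc']
```

The point of the sketch is the separation of concerns: a cheap software policy decides *which* disks to power down, and the hardware path (CPLD, disk standby commands) carries out the decision.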

 

Integrated sensors adeptly capture real-time temperature data at different locations on the server, facilitating precise thermal management. Based on decentralized intelligent control technology and the data collected and evaluated, the fan speed in different air ducts is adjusted to achieve energy-saving fan speed control and precise air supply.

 

System Design Approach

To enhance the utilization of computing power, curtail device idle time, and thereby reduce energy consumption, IT experts tailor their systems to specific operating conditions and rely on hybrid air-liquid or fully liquid-cooled designs for heat dissipation, especially for high-density servers, AI servers, and rack-scale servers, reducing the PUE (Power Usage Effectiveness) to less than 1.2.
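PUE itself is a simple ratio: total facility power divided by the power delivered to IT equipment. A value of 1.0 would mean every watt goes to computing; the gap above 1.0 is cooling, power conversion, and other overhead. A minimal calculation, with illustrative numbers:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 120 kW to deliver 100 kW of IT load has a PUE of 1.2,
# i.e. 20 kW of overhead for cooling and power distribution.
print(pue(120.0, 100.0))  # 1.2
```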

 

The cold plate liquid cooling method has proven remarkably effective, especially for high-power components such as processors and memory, which account for more than 80 percent of a server's power consumption. The liquid cooling module is also compatible with a variety of common cooling connectors, effectively reducing the power consumption of the entire server and lowering deployment costs.

 

For example, a cold plate liquid cooling system can meet the cooling requirements of a 1,000-watt chip and support 100 kilowatts of heat exchange in a single server cabinet. Such a liquid-cooled server cabinet often integrates dynamic environmental monitoring devices that enable intelligent monitoring and use node-level liquid leak detection technology to provide real-time alarms. Notably, the liquid-cooled rack-scale system offers excellent energy efficiency: higher density, 50 percent increased heat dissipation efficiency, and 40 percent reduced power consumption compared to traditional air cooling.

 

In addition, liquid-cooled servers support high inlet coolant temperatures, such as 45°C (113°F). The high temperature of the fluid returning to the system allows better use of free-cooling technology, significantly reducing energy consumption. If liquid-cooled systems are designed to operate efficiently at high ambient temperatures, such as 45°C (113°F), energy consumption during data center operations can be reduced even further.

 

Application Optimization Approach

Finally, the resource consumption of end applications also plays a crucial role in green computing; here, too, it is important to optimize the compute load of applications.

 

To do this, IT specialists should optimize the workloads of their servers to increase GPU/CPU utilization and enable consolidation onto fewer servers. They can use computing power pooling and fine-grained division of computing resources to maximize GPU utilization, ranging from multiple instances on a single card to large-scale parallel computing across multiple machines and cards. In this way, cluster computing power utilization can exceed 70 percent.
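The core idea of pooling with fine-grained division can be illustrated with a toy scheduler that packs fractional-GPU jobs onto whole GPUs instead of dedicating one card per job. The first-fit strategy and the job sizes below are illustrative assumptions; production systems (e.g. GPU partitioning or virtualization layers) are far more sophisticated:

```python
def pack_jobs(job_fracs, gpu_count):
    """First-fit packing of fractional-GPU jobs onto whole GPUs.
    Returns the resulting load (0..1) on each GPU."""
    gpus = [0.0] * gpu_count
    for frac in job_fracs:
        for i, load in enumerate(gpus):
            if load + frac <= 1.0:
                gpus[i] += frac
                break
        else:
            raise RuntimeError("GPU pool exhausted")
    return gpus

# Five jobs that would otherwise occupy five dedicated GPUs fit on two,
# bringing average utilization to 87.5% instead of 35%.
loads = pack_jobs([0.5, 0.25, 0.5, 0.25, 0.25], gpu_count=2)
print(loads, sum(loads) / len(loads))  # [1.0, 0.75] 0.875
```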

 

An additional strategy for resource conservation is asynchronous polling, which minimizes active cycles on battery-powered devices during intermittent data transfers. This shortens the active communication time and reduces overall power consumption compared to continuous polling at fixed intervals.
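One common way to realize this is an adaptive polling interval: back off exponentially while polls come back empty, and reset to a short interval as soon as data arrives. The interval bounds below are illustrative assumptions, not values from the article:

```python
def next_interval(current_s, had_data, min_s=1.0, max_s=60.0):
    """Double the polling interval while idle (capped at max_s);
    reset to min_s as soon as a poll returns data. Fewer wake-ups
    during quiet periods means less radio and CPU power drawn."""
    if had_data:
        return min_s
    return min(current_s * 2.0, max_s)

# Idle polls stretch the interval 2 -> 4 -> 8 s; one hit resets it to 1 s.
interval, history = 1.0, []
for got_data in [False, False, False, True, False]:
    interval = next_interval(interval, got_data)
    history.append(interval)
print(history)  # [2.0, 4.0, 8.0, 1.0, 2.0]
```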

 

Conclusion

Despite the multitude of entry points for green computing initiatives, the critical focus should be the ongoing enhancement of data center architecture to improve the efficiency of computing power from generation through transmission to application. The main task for system manufacturers is to test and optimize their system architecture, performance, and heat dissipation capabilities. On the one hand, these optimizations minimize power draw and the emissions it causes; on the other, the computing power generated can be applied at the application level as fully as possible, reducing wasted computing resources. Companies can thus reduce costs and fulfil their responsibility for sustainable development by prioritizing IT infrastructure that meets their requirements for both operational efficiency and CO2 emissions regulations.

 

For more information on the topic of IT infrastructure for data centers, visit: https://www.kaytus.com/ 

 

Author:

Clark Li, Country Manager of KAYTUS for the DACH region. Clark has over 20 years of experience in the IT industry, specializing in HPC, AI, cloud, and enterprise IT solutions over the last 10 years.

———————————————

1. https://climate.ec.europa.eu/eu-action/climate-strategies-targets/progress-climate-action_en#:~:text=Documentation-,Introduction,fall%20below%20the%202019%20level

2. https://www.tomshardware.com/tech-industry/artificial-intelligence/nvidias-b100-and-b200-processors-could-draw-an-astounding-1000-watts-per-gpu-dell-spills-the-beans-in-earnings-call

3. https://kpmg.com/xx/en/home/insights/2022/10/renewable-energy-and-energy-efficiency-directives.html

4. https://pdf.euro.savills.co.uk/european/european-commercial-markets/spotlight-european-data-centres---may-2024.pdf