Data centres cooled too much, report suggests

Report from industry group including IBM, HP and Intel shows almost all data centres cooled more than necessary

Tags: Cooling, HP Middle East, IBM Middle East, Intel Corporation, USA
Data centres can operate at up to 27 degrees Celsius, according to industry guidelines.
By Mark Sutton. Published September 5, 2009

Almost all computer data centres are cooled to a lower temperature than necessary, according to a new study by an industry group.

The group, which comprises representatives from Intel, IBM, HP, Liebert Precision Cooling and the Lawrence Berkeley National Lab, found that all data centres in a survey of the US Data Centre Users Group were cooled to well below the recommended upper limit of 27°C.

The survey found that among 98 respondents, the highest temperature of air supplied through computer room air handling (CRAH) systems was 23.3°C, with two-thirds cooling to 20-21°C.

The recommended limit of 27°C was proposed by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), which raised the limit in 2008 in recognition of the ongoing drive for energy efficiency in data centres.

The study estimated that operating data centres closer to the recommended limit could deliver energy savings as high as 90% for CRAHs.
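As a rough illustration of how savings of that order might arise (this calculation is not from the report, and the cube-law relationship is an assumption, not a figure the study cites), CRAH fan power is commonly modelled as scaling with the cube of airflow, so even a modest reduction in required airflow cuts fan energy sharply:

# Rough sketch only: cube-law fan affinity estimate of CRAH fan energy savings.
# Assumption (not from the report): fan power scales with the cube of airflow.

def fan_power_fraction(airflow_fraction):
    """Fraction of full fan power needed at a given fraction of full airflow."""
    return airflow_fraction ** 3

# If raising supply temperatures towards 27 degrees C allowed CRAH fans to run
# at roughly half their airflow, fan power would fall to 0.5**3 = 12.5% of the
# original, i.e. savings in the same region as the report's upper estimate.
for airflow in (1.0, 0.8, 0.6, 0.5):
    power = fan_power_fraction(airflow)
    print("airflow {:.0%} -> fan power {:.0%} (savings {:.0%})".format(
        airflow, power, 1 - power))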

Achieving these energy savings, however, is complicated by the fact that server systems and cooling systems are rarely able to communicate. Although some servers and data room systems now have advanced capabilities to determine cooling needs, they are not integrated with the facilities systems that control the cooling.

The report, which is part of the group's ongoing studies into energy efficiency, suggested that advanced closed-loop monitoring and management of cooling would help companies operate data centres closer to the recommended limit, and that integrating the advanced instrumentation capabilities of servers with facilities management systems would open up further energy-saving opportunities.
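A minimal sketch of the kind of closed-loop control the report points towards is shown below. The read_inlet_temps() and set_crah_setpoint() interfaces are hypothetical stand-ins for server instrumentation and the facilities system; the report does not name any specific API, and the margins and step sizes are illustrative only.

# Minimal sketch of closed-loop cooling control, assuming hypothetical
# interfaces to server instrumentation and to the CRAH/facilities system.

import time

ASHRAE_RECOMMENDED_MAX_C = 27.0   # upper end of the ASHRAE recommended range
SAFETY_MARGIN_C = 1.5             # assumed margin below the recommended limit
STEP_C = 0.5                      # setpoint adjustment per control cycle

def read_inlet_temps():
    """Hypothetical: inlet temperatures reported by server instrumentation."""
    raise NotImplementedError

def set_crah_setpoint(temp_c):
    """Hypothetical: push a new supply-air setpoint to the CRAH units."""
    raise NotImplementedError

def control_loop(setpoint_c=20.0, interval_s=60):
    target = ASHRAE_RECOMMENDED_MAX_C - SAFETY_MARGIN_C
    while True:
        hottest_inlet = max(read_inlet_temps())
        if hottest_inlet < target:
            setpoint_c += STEP_C      # servers cooler than needed: raise supply temp
        elif hottest_inlet > ASHRAE_RECOMMENDED_MAX_C:
            setpoint_c -= STEP_C      # a server is over the limit: cool harder
        set_crah_setpoint(setpoint_c)
        time.sleep(interval_s)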

View the report here.

 

2770 days ago
Goran

All over-cooling is down to the way cooling is designed, especially the air distribution. The fact that server racks need cooling has been turned into the practice of cooling the whole room. If the cooling concept concentrated on the racks rather than the room, the energy required for cooling would be dramatically lower. Besides, the minimum operating temperature for server components is known.

2965 days ago
Hans Schreuders

Very interesting report. For existing datacenters it is not easy to change the infrastructure of the cooling system, and specifically the CRAHs, from the current control system to one that manages the cooling infrastructure in a variable manner. It is not impossible, however, and if you start turning your datacenter upside down, do the right things!

The paper suggests that measurements should be taken for each and every server to know the status of the airflow and the temperature of the incoming air. This should allow datacenter management to change settings to make sure every server gets air of, as proposed, 27 degrees C. This can be achieved in a much simpler and less costly way. Make sure you separate cold and hot air by providing closed hot or cold aisles. In the case of closed cold aisles, put grates in the raised floor that do not cause resistance to the airflow into the cold aisle. Then make sure, as indicated in the paper, that the incoming air (the air taken in by the servers) has the right temperature (27 degrees C or whatever you choose). In that case ALL servers will get air of the same temperature.

This is called the SWIMMING POOL PRINCIPLE. If you fill a swimming pool with water of a certain temperature, you can take water out of the pool at any location, at any capacity, and you will always take water of the same temperature. Applying this principle makes sure that all the servers get air of the same temperature, and you will not have to measure every server separately, saving a lot of money, attention and hassle. Because the air in the datacenter distributes itself, you will also be able to put any heat load anywhere without worrying about whether it will receive enough air to be cooled. Each server will take the airflow that it needs and will be cooled, as long as there is air of the right temperature in the datacenter. Of course you will need to bring in the same quantity of air as the servers are taking out of the datacenter. This can easily be solved by a simple measurement and control system that instructs ALL the CRAHs to together deliver the right airflow.
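A simple sketch of the measurement-and-control idea the commenter describes, where the CRAHs together are instructed to match the total airflow the servers draw, might look like the following. The measurement interface is a hypothetical placeholder, not something taken from the comment or the report.

# Sketch of the "swimming pool" approach: keep the contained cold aisle filled
# with air at the chosen supply temperature and match total CRAH airflow to the
# total airflow drawn by the servers. Measurement interface is hypothetical.

def measure_server_airflow_m3h():
    """Hypothetical: total airflow drawn by all servers, e.g. from pressure sensors."""
    raise NotImplementedError

def distribute_airflow(total_m3h, crah_capacities_m3h):
    """Split the required total airflow across CRAHs in proportion to capacity."""
    capacity = sum(crah_capacities_m3h)
    return [total_m3h * c / capacity for c in crah_capacities_m3h]

# Example with an assumed measured demand of 30,000 m3/h (in practice this
# would come from measure_server_airflow_m3h()) and three CRAHs of different sizes.
demand = 30_000.0
setpoints = distribute_airflow(demand, [10_000.0, 15_000.0, 20_000.0])
print([round(s) for s in setpoints])  # [6667, 10000, 13333]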
