Too hot to handle

The temperature is rising in the data centre, causing quite a few headaches for CIOs. Colin Edwards looks at what is needed to cool things down.

By Colin Edwards. Published April 2, 2006

When your data centre power bill runs to nearly US$150,000 a year and you're planning a new facility nearly four times the size of your existing data centre as well as looking to host 20,000 servers down the line, you've got to take power usage and heat generation seriously.

Sudheer Nair, assistant vice president of Infrastructure Management Group at Mindscape, Mashreq Bank's outsourcing division, certainly does. In fact it was one of his major concerns during the planning and implementation of Mindscape's new 15,000 sq ft data centre at Dubai Internet City, and will continue to be as he starts to plan a 50,000 sq ft facility at Dubai's new Outsource Centre.

He should be worried. In the US it is estimated that the average annual utility cost for a 100,000-square-foot data centre has reached $5.9 million. Even with the Middle East's cheaper - though rising - utility prices, Nair could be facing massive bills at his new facility.
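
Comparing the two on a per-square-foot basis gives a rough sense of the gap. The sketch below uses only the figures quoted above and infers the size of Mindscape's existing facility from the "nearly four times the size" remark, so treat the result as an estimate rather than an audited number.

    # Rough per-square-foot comparison using the article's figures.
    # The existing facility's size is inferred (the new 15,000 sq ft centre is "nearly
    # four times" larger), so this is an estimate, not an audited cost.
    us_cost_per_sqft = 5_900_000 / 100_000                    # ~$59/sq ft for the average US data centre
    existing_area_sqft = 15_000 / 4                           # ~3,750 sq ft, inferred
    mindscape_cost_per_sqft = 150_000 / existing_area_sqft    # ~$40/sq ft today
    print(f"US average: ~${us_cost_per_sqft:.0f}/sq ft; Mindscape today: ~${mindscape_cost_per_sqft:.0f}/sq ft")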

"In the early days, we used to have seven to eight servers in a rack, now with blades coming in, you have around 50 servers in a rack. We used to calculate capacity needs by so many servers to the square metre. Now with blade servers, if you calculate on that basis, then you're going to hit real problems," he says.

According to Rohit Narvekar, business development manager, Dafnia Electronics, the UAE distributor for Knuerr, which manufactures water-cooled rack cabinets, data centre air-conditioning is struggling to keep high-density server racks cool.

"Before people were using 19-inch mountable servers with an expected heat dissipation of 3-4kW. Today it has reached more than 10kW per rack. We are talking about a rack 600 x 1000mm and 42U. That's a 2m high cabinet, so when you are talking about generating this type of heat in one rack you can imagine 25 of these in a 2,000 sq ft data centre. The heat is tremendous. It's beyond the capacity of a 60-ton air-conditioning unit to cool this kind of heat, because the heat remains inside the room," says Narvekar.

And such problems are going to get worse as CIOs continue to pack in more and more servers to meet insatiable computing demand. Blade servers are currently the fastest-growing sector of the server market, according to IDC. The rising power bills are not due solely to the expanding server population: those servers also generate extra heat, and the resulting cooling costs and problems have reached the point where the greater affordability of today's servers is being eroded by their additional running costs.

In fact, some of the older data centres in the region, designed at a time when no one could have dreamt of the server population explosion, are struggling to cope. Environments designed to cool equipment using 150W per sq m are now being faced with server racks pumping out 800W per sq m.

"Many data centres are stuck with the cooling system designed several years ago. The two main issues facing data centre managers are power consumption and cooling. When you have more power wattage usage you have more heat. How to address these two issues is a major challenge for any data centre manager at the moment," says Nair, adding that for some, outsourcing might be the answer as the cost of building new centres and running them is becoming prohibitive for some Middle East companies.

Just how much heat is generated by standard server technology these days is highlighted in a Forrester report entitled ‘Power and cooling heat up the data centre’, published in March. It points out that an Intel Xeon MP processor with 8MB of cache has heat dissipation per square inch approximately equivalent to an electric cooker heating plate.

"To complicate matters, designers using the Xeon must keep the temperature of the chip well below boiling, and they can't put a big pot of water on top of it to dissipate the heat," says the report’s author Richard Fichera.

Having had to cope with such additional heat in his data centre over the past few years, Nair is making sure his new environment, built to last 20 years, will be able to cope with massive projected growth in server numbers.

The new Mindscape data centre at Dubai Internet City will be one of the first Tier IV facilities in the region, which means it is designed for 99.995% availability, or roughly 26 minutes of downtime a year. This entails having redundancy built in for every component in the centre - even down to the socket level. Part of the Tier IV specification laid down by the US-based Uptime Institute is that the cooling system should be able to support at least 500W per sq m, says Nair.
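
The downtime figure follows directly from the availability percentage; a quick calculation (simple arithmetic, not an Uptime Institute definition):

    # Annual downtime implied by 99.995% availability.
    availability = 0.99995
    minutes_per_year = 365 * 24 * 60
    downtime_minutes = (1 - availability) * minutes_per_year
    print(f"{downtime_minutes:.1f} minutes of downtime a year")   # ~26.3 minutes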

"In the new facility we have exceeded the recommendations and allowed for 800W per sq m. What is more, we have addressed our cooling system requirements for the next 10 years. The system is totally scalable in case our power consumption grows, which is inevitable. In fact it has been designed to scale to 1,500W per sq m."

Whether he will need more than that in 10 years' time depends not only on how big Mindscape's outsourcing venture becomes, but also on what new technologies come to the fore in the meantime to address power usage and heat generation.

Vendors have now woken up to both the problem and the opportunity heat generation offers. However, it is doubtful whether what is on their roadmaps today will resolve the immediate problems caused by ever-increasing server density in older data centres.

Chipmakers such as Intel, AMD and Sun have processor roadmaps focused squarely on delivering the needed performance without the heat. Dual-core technology is one such solution. Quad-core systems, already a reality with Sun's UltraSPARC T1 offerings, take it a step further. Sun's T1 combines multiple small cores, multiple threads and a lower-speed, power-saving design to yield power efficiencies of between three and five times those of x86 processors.

"We're far ahead of others in what we are doing to reduce the heat being generated in today's data centres, and by the time they catch up with our quad developments, we'll be producing eight and 16-core systems," says Graham Porter, Sun Microsystems' marketing manager, adding that the T1's power dissipation can be reduced further by turning off two or more of the cores and running in single- or dual-core mode.

But as Forrester's Fichera points out: "Simply making current architectures more power efficient can yield differences in power efficiency approaching 50%."

Intel's and AMD's latest processors operate at slightly reduced clock frequencies and generate less heat. They provide increased performance by placing two processing engines within the same silicon area. AMD has processors with three levels of power dissipation: 95W, 68W and 55W.

But new developments promise better power efficiencies. Intel's Sossaman, an ultra-low-power processor designed for server blades, is the first of the new Bensley Xeons with primarily sub-100W offerings. It is already shipping.

"Previously, every time you wanted to increase performance, demand for power increased. Now, at Intel we are decreasing the power requirement. By Q3 or Q4 we are going to have parts at 80W for data centre processors or rack mount use. Rack mounts used to be 110 to 135W," says Amir Alkaram, Intel’s country manager, Iraq.

In the third quarter of 2006, Intel will update the Bensley platform with the Woodcrest processor, which will reduce power consumption a further 35% with at least an 80% performance improvement.
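
Taken at face value, those two figures imply a substantial jump in performance per watt. The sketch below is an illustrative calculation based only on the percentages quoted, not a vendor benchmark:

    # Rough performance-per-watt implication of the quoted Woodcrest figures.
    power_factor = 1 - 0.35          # 35% lower power consumption
    performance_factor = 1 + 0.80    # at least 80% more performance
    perf_per_watt_gain = performance_factor / power_factor
    print(f"Performance per watt improves by roughly {perf_per_watt_gain:.1f}x")   # ~2.8x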

Computer manufacturers such as HP and IBM, and infrastructure vendors such as APC, are also addressing the need for cooler environments. HP recently announced a chilled water rack cooling system that is said to triple the cooling capacity of a single server rack. Such supplemental cooling systems for high-density rack environments are also coming from more traditional cooling system vendors such as APC, Liebert and Knuerr - now a part of the Emerson group that owns the Liebert brand.

Supplemental cooling systems provide ready solutions for data centres facing immediate heat problems. As Forrester says: "Users can also take definite steps with today's technologies to minimise their data centre heating and cooling costs, while waiting for new solutions from systems, chip and software vendors.

"Optimise your overall data centre architecture and layout, implement spot cooling for problem racks and problem sections, and prioritise heating and cooling issues when selecting new systems management tools," it recommends.

Dafnia's Narvekar believes water-based cooling, such as Knuerr's CoolTherm, is the 'in-thing' now. "People talk about giving you a net shelter or cool room as a solution to the heat problem, but does this really serve the purpose? No, because the cooling that you require is more than 8-10kW a rack and rising. Blowing cool air at servers is not the solution. Over the next six to seven years, we'll be talking about 25kW to 30kW a rack," he says.

For an existing data centre, Narvekar explains that it is possible to retrofit water-cooled rear doors to existing cabinets should they be used to house more powerful servers. This hybrid approach, combining standalone water cooling with conventional air conditioning, obviates the need to boost the existing air-conditioning system - often an impossible, or difficult and costly, exercise in an older data centre.

Liebert and Egenera - whose blade servers are OEMed by Fujitsu-Siemens in the Middle East - have tackled the problem of excessive heat generation with the introduction of CoolFrame. This combines Egenera's BladeFrame EX with Liebert's XD cooling technology to enable data centres to deploy blade servers while adding almost no heat to the facility. According to the companies, CoolFrame can reduce as much as 20,000W of heat dissipation to 1,500W without impacting server performance, and lower data centre cooling costs by a quarter in the process.

APC's InfraStruXure system is also tackling the problem head on. Its modular air-conditioning units can be bolted onto the side of each equipment rack, providing up to 60kW of direct cooling. The system feeds water to each cooling unit.

Another way to address power costs is the optimisation of server usage using server virtualisation techniques. "Virtualisation is a solution of the future," says Nair. "Today you only use an average 30% of the hardware. Rather have a system that can be used at 90% of the capacity. That way you save a lot of computing power. At Mashreq we are using virtual servers and it's something we'll be using in the new data centre and which we'll offer customers so that they can share facilities. We’ll be able to run multiple applications with fewer servers and thereby increase efficiency."
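
The arithmetic behind that claim is straightforward: if average utilisation rises from 30% to 90%, roughly a third of the physical servers can carry the same workload. The sketch below uses a hypothetical fleet of 100 servers and ignores redundancy and peak-load headroom:

    # Simplified consolidation estimate based on Nair's utilisation figures.
    current_servers = 100            # hypothetical fleet size, for illustration only
    current_utilisation = 0.30
    target_utilisation = 0.90
    total_work = current_servers * current_utilisation
    servers_needed = total_work / target_utilisation
    print(f"{servers_needed:.0f} virtualised servers could carry the work of {current_servers}")   # ~33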

Intel is also developing virtualisation technology, which it calls Intel VT, for enterprise servers. Intel's next generation of VT will include I/O virtualisation to assign I/O devices to virtual machines, providing a more robust, higher-performance platform for virtualised systems.

"We're not just looking at power and Gigahertz. It's also about the other technologies we bring to our processors - virtualisation for example. We get involved to enable virtualisation at the hardware point. Then companies like VMWare run on top of this," says Alkaram.

But high-density server racks are not the only heat-generating problem in today's data centre.

Reports are starting to emerge that Power over Ethernet (PoE) switches are becoming a source of excessive heat in VoIP wiring cabinets where once just lower-powered switches, hubs and patch panels resided.

As more and more companies deploy IP phones across the enterprise, CIOs are finding that PoE comes at a price - additional cooling requirements from the PoE-enabled switch itself and from the uninterruptible power supply (UPS) that is also housed in the cabinet to keep the IP phones running in the event of an outage.

Recent reports show that, for example, a widely used non-PoE 24-port LAN switch generates around 176 BTUs of heat per hour. When PoE is added to the switch, that rises to 534 BTUs per hour. With a standard UPS pumping out an additional 80-100 BTUs per hour, the heat output of just one cabinet has more than tripled.
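
Stacking those numbers up shows the scale of the change in a single cabinet (for reference, 1kW is roughly 3,412 BTUs per hour):

    # Heat build-up in one VoIP wiring cabinet, using the figures quoted above.
    NON_POE_SWITCH_BTU_HR = 176
    POE_SWITCH_BTU_HR = 534
    UPS_BTU_HR = 90                  # midpoint of the quoted 80-100 range
    before = NON_POE_SWITCH_BTU_HR
    after = POE_SWITCH_BTU_HR + UPS_BTU_HR
    print(f"Cabinet heat rises from {before} to {after} BTU/hr ({after / before:.1f}x)")   # ~3.5x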

Two years ago, few people would have envisaged such a problem arising from an emerging technology - which illustrates the need to build plenty of scalability into new facilities.

"I've not heard of VoIP causing such power and heat problems," says Nair, "but we've planned our new centre on the basis that power consumption is going to rise.
"Power usage projection is always an issue. A problem we face today is that in this building we have a certain capacity that can be drawn from the transformer. If you don't project the power consumption from the very beginning, it is difficult to get extra capacity," concludes Nair.
