Playing it cool

HP's latest innovation in datacentre cooling technology, the Dynamic Smart Cooling (DSC) solution, promises savings of anywhere from 20% to 45% for customers. Roy Zeighami, infrastructure architect for DSC at HP, was in town for the DatacenterDynamics event. NME caught up with him to find out how DSC delivers on its promises and the solution's relevance to regional enterprises.

By Sathya Ashok. Published January 16, 2008

How is DSC different from any other solution in the market?

DSC is an advancement in air cooling in the datacentre. What usually goes wrong is that datacentres are provisioned with their full cooling capacity up front and then built up over 15 to 20 years. For the customer that is a wasteful process, because cooling in the majority of datacentres runs significantly over-provisioned.

On top of that, the cooling architecture of most datacentres is such that air temperature is measured only at the air conditioner return - the point where air flows back into the unit. The facilities manager responsible for cooling the datacentre has basically one knob to turn, and because he measures the air going back into the air conditioner, he turns the temperature way down. That has two effects: it can affect the equipment, and it costs much more. Most datacentres are over-cooled, and at those cold temperatures the cooling infrastructure is effectively running below its capacity.

Now imagine that we can measure the air temperature at the point where it enters the racks containing the equipment, instead of where it flows back to the air conditioner. Because we are measuring the thing we care about - the air temperature going into the servers - we can make sure the air is neither too cold nor too hot. We can then keep the air in the datacentre at precisely the right temperature.

In DSC we start by measuring the thing that we care about - the air that goes into the racks. The DSC Energy Manager, which reads all these temperatures throughout the datacentre, can then provision the cooling resources correctly. The benefit is energy savings, which may not be too important in this region yet, because power here is still inexpensive.
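The control idea described here - measure air temperature at the rack inlets and adjust cooling just enough to keep the hottest inlet safe, rather than chilling the whole room - can be sketched as a simple feedback loop. This is an illustrative sketch only, not HP's actual DSC implementation; the function name, target temperature and step size are assumptions for the example.

```python
# Illustrative sketch of inlet-based cooling control (not HP's actual
# DSC Energy Manager logic). Each rack reports its inlet air
# temperature; the controller nudges the shared CRAC supply setpoint
# so the hottest inlet sits near a target, instead of over-cooling.

TARGET_INLET_C = 25.0   # assumed safe rack-inlet temperature
DEADBAND_C = 0.5        # tolerance before adjusting anything
STEP_C = 0.5            # how far to move the setpoint per cycle

def adjust_setpoint(current_setpoint: float, inlet_temps: list) -> float:
    """Return a new CRAC supply setpoint from rack-inlet readings."""
    hottest = max(inlet_temps)
    if hottest > TARGET_INLET_C + DEADBAND_C:
        # A rack is running too warm: supply colder air.
        return current_setpoint - STEP_C
    if hottest < TARGET_INLET_C - DEADBAND_C:
        # Every inlet is comfortably cool: ease off and save energy.
        return current_setpoint + STEP_C
    return current_setpoint
```

For example, with a supply setpoint of 18.0°C and inlet readings of [23.0, 22.5, 24.0], every inlet is below the deadband, so the loop raises the setpoint to 18.5°C - the "stop over-cooling" case the interview describes.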

But there is the benefit of added capacity and certainly many of the people that I have had the opportunity to speak with cared very much about optimally using resources in the datacentre.

And finally, there is a management aspect. Because temperatures are measured across the floor, the datacentre manager has a full picture of conditions throughout the datacentre, and that in itself is of great value to most people. One of the premier HP datacentres running DSC is in Bangalore, with around 7,500 sensors installed. That is not in the Middle East, but it is closer than the US.

The fact is, the datacentre is kind of the last frontier in IT, sort of the wild west in a certain way. It is every man for himself and there are so many choices when it comes to technology and architectures. It is definitely much more undeveloped than, say, the IT equipment itself. IT has industrialised; it is becoming commoditised and its interfaces are very standard.

But the fact is every datacentre is custom-made, and that in itself means it is more complicated. There is more choice, and the many vendors who offer solutions for the datacentre often give conflicting advice and offer products that in and of themselves deliver very different value propositions. With DSC, what we are saying is that we understand the way people want to build their datacentres.

Most people don't want to get rid of their raised floor, they don't want to throw away the air conditioner, they don't want to buy services; they want a tangible product that can help them power and cool their datacentre. With DSC, what we do is take these people who want to cool their datacentres in the way they understand - with the raised floors, the air conditioners and so on - and help them do it better, in a very flexible, adaptive way.

How is HP increasing awareness on power and cooling concerns and highlighting the benefits of DSC in the region?

It is a clear sign of our commitment to this region that I came all the way from the US to present this idea here. Our vice president at ISS endorsed that when he noted that some of the world's biggest datacentres are being built out here in the Middle East.

And on top of that there is certainly a progression of maturity in terms of datacentre management. The lifecycle of a datacentre is very long - maybe 15 to 20 years. Though we are used to replacing the servers in our datacentres every two or three years, the datacentres themselves typically stick around for 15 to 20 years.

What that means is that it takes a lot more time to gain maturity in terms of best practices in building and maintaining a datacentre. In the US this issue of power and cooling has gone from 'I don't care' to probably one of the most important issues in the datacentre. One of the things we have realised in tackling this issue is that if you don't tackle it, it ends up constraining you.

You build a datacentre that you believe has a particular capacity, but when you reach 70% of that rated capacity it starts experiencing hot spots. You can't install more servers, and you are forced to go out and invest another US$100m or whatever it costs to build a new datacentre, which is a lot of money.

Unfortunately, moving along that maturity model may involve a couple of generations of datacentres, which takes a long time. What we have found in the US is that in order to tackle these issues you really have to bridge the gap between IT and facilities management. Even in the US, in many of the less advanced IT organisations you will find strict, very rigid silos in which IT and facilities operate - they might not even talk to each other - and that becomes a very serious problem. If the IT organisation is making acquisition choices that are unfavourable to the facilities organisation, there is no channel for that communication.
