A Rack Is a Rack Is a Rack. Not!

No longer is the humble 19-inch rack or cabinet a trivial commodity of little real consequence

By ITP.net Staff Writer | Published April 9, 2012

No longer is the humble 19-inch rack or cabinet a trivial commodity of little real consequence; it is a critical component in the function of every data centre, and the wrong choice could seriously affect data centre operation and running costs, says rack and cabinet guru Matt Goulding, managing director of Cannon Technologies.

When I first came into the IT rack and cabinet business, the requirement was for little more than a set of steel uprights into which to bolt 19-inch equipment, together with shelves for the many conventional tower servers and other electronic items. Around these, to make things aesthetically pleasing and to add security, we fitted side panels and front/rear doors.

Power dissipation from the equipment was so low back then that no special provisions were needed. Natural convection took the heat out of the rack into the surrounding room and fan trays were added if needed.

Fast forward to 2011 and the cabinets-and-racks landscape has changed out of all recognition, with a number of new and key factors now very much part of the functionality of the cabinet.

Data Centre Drivers
As many readers will be aware, we are living in a time when the world population's hunger for online information and transactional services has rocketed, and continues to do so. Information has developed from mostly text-based content, through the inclusion of a myriad of graphics, to a requirement for streamed video on almost everything from corporate websites to on-demand TV and video-calling. Much of this is driven by hand-held mobile devices like the iPhone and its competitors.

All of these 'delivery improvements' have required more data storage, more bandwidth and more processing power. In turn, this has led to the requirement not only to build many new data centres but also to cram most of this extra capability into the limited physical space of existing, older data centres.

Multi-U servers, once the order of the day, gave way to 1U pizza-box servers, which currently make up the majority of the installed base. These in turn are now being replaced by power-hungry blade servers, giving an effective density of several servers per U, as the sketch below illustrates. (A 'U', by the way, is the standard 1.75-inch height increment between the pairs of mounting holes in a cabinet, and typically there are 42 to 52 U per cabinet.)
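To put rough numbers on that density jump, here is a minimal sketch in Python. The 42U rack height comes from the figures above; the 10U chassis holding 16 blades is a hypothetical configuration chosen purely for illustration, not a vendor specification.

```python
# Illustrative rack-density arithmetic. The blade chassis figures
# below are hypothetical examples, not vendor specifications.

RACK_U = 42  # usable height of a typical cabinet, in rack units

# 1U "pizza-box" servers: one server per U
pizza_box_servers = RACK_U

# Assumed blade setup: a 10U chassis holding 16 blade servers
CHASSIS_U = 10
BLADES_PER_CHASSIS = 16
chassis_per_rack = RACK_U // CHASSIS_U          # 4 chassis fit in 42U
blade_servers = chassis_per_rack * BLADES_PER_CHASSIS

print(f"1U servers per rack:     {pizza_box_servers}")   # 42
print(f"Blade servers per rack:  {blade_servers}")       # 64
print(f"Effective blade density: {BLADES_PER_CHASSIS / CHASSIS_U:.1f} servers per U")
```

Even with these modest example figures, the same cabinet houses half as many servers again, and every extra server brings its power draw, and therefore its heat, with it.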

Of course, the underlying drivers are purely financial: minimised capital expenditure (CapEx), minimised operating expenditure (OpEx), maximised return on investment (RoI) and, naturally, revenues and business growth. All of these are coupled with the need for predictability, the avoidance of nasty surprises, and the need to meet both operational and global targets for lower energy consumption and a minimised carbon footprint.

It’s fascinating to see the word Cloud now adopted by marketers and the public worldwide to promote anything and everything that is ‘out there’ (i.e. not on your own computer).  Consumers are queuing up in droves to use ‘Cloud’ services. The pressure on data centres just took another major hike.

Density and Power
At the cabinet level, in the real world of the data centre hall, the need to squeeze the maximum processing and storage into the smallest space has led to a massive increase in the heat density of the electronic equipment. This has gone from under a kilowatt (kW) per rack to an average now of 2-5 kW, with high-end equipment at up to 20 kW or even 30 kW.

How to remove this heat has become a complex science, with the need not only for a variety of methods to remove heat from the individual cabinet quickly and safely, but also to manage the heat footprint of the entire data centre so that the individual and cumulative heat from each cabinet doesn't adversely affect the others. As heat outputs rise, this latter point has become a serious problem, and one with which data centre planners generally need expert help.

At the cabinet level, and depending on the equipment contained, heat extraction at the basic level uses front-to-back airflow through mesh doors: cold air from the CRAC (computer room air conditioning) units is fed into the cold aisle, drawn through the cabinet, and exhausted into the hot aisle, from where the hot air eventually finds its way back to the CRAC units for re-cooling.
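The arithmetic linking rack power to the airflow this loop must deliver is worth making concrete. Below is a minimal sketch using the standard heat-transport relation, airflow = power / (air density x specific heat x temperature rise); the 12 degree supply-to-exhaust rise across the rack is an assumed figure chosen for illustration, and real designs vary.

```python
# Rough airflow sizing from rack power, using Q = P / (rho * cp * dT).
# The 12 K supply-to-exhaust temperature rise is an illustrative
# assumption, not a design rule.

RHO_AIR = 1.2         # air density, kg/m^3 (approximate, near sea level)
CP_AIR = 1005.0       # specific heat of air, J/(kg*K)
M3S_TO_CFM = 2118.88  # m^3/s to cubic feet per minute

def required_airflow_m3s(power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry power_w away at a given air dT."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

for rack_kw in (1, 5, 20):
    flow = required_airflow_m3s(rack_kw * 1000, delta_t_k=12.0)
    print(f"{rack_kw:>2} kW rack: {flow:.2f} m^3/s (~{flow * M3S_TO_CFM:.0f} CFM)")
```

At 5 kW a rack already calls for roughly 700 CFM, while a single perforated floor tile often delivers only a few hundred, which is exactly the shortfall described next.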

The cooling air supply from the cold-aisle floor grilles can fall short when heat densities rise in a rack. A convenient fix is to add fans to the rear door of the rack to pull the hot air out, and the cold air in, more rapidly.

There comes a point, however (for many, though by no means all, data centres this is when the racks in a row average around 5 kW), where the conventional non-enclosed hot-aisle/cold-aisle arrangement cannot cope, because much of the hot air leaving the rear of the racks finds its way into the cold aisle, warming the supply before it gets a chance to enter the cabinets as the cold air it was meant to be.
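A toy mixing model makes the penalty plain. The supply and exhaust temperatures and the recirculation fractions below are illustrative assumptions, not measurements from any real hall.

```python
# Toy model of hot-air recirculation ('scavenging'): if a fraction f of
# the air entering a rack is recirculated exhaust rather than CRAC
# supply, the effective inlet temperature is a weighted mix of the two.
# All temperatures here are illustrative assumptions.

def inlet_temp_c(supply_c: float, exhaust_c: float, recirc_fraction: float) -> float:
    """Effective rack inlet temperature for a given recirculation fraction."""
    return (1 - recirc_fraction) * supply_c + recirc_fraction * exhaust_c

SUPPLY_C = 18.0    # assumed CRAC supply temperature
EXHAUST_C = 35.0   # assumed rack exhaust temperature

for f in (0.0, 0.1, 0.25, 0.4):
    print(f"recirculation {f:>4.0%}: inlet {inlet_temp_c(SUPPLY_C, EXHAUST_C, f):.1f} C")
```

Even a modest recirculation fraction lifts the inlet air by several degrees, which is precisely what the enclosed-aisle approach described next is designed to stop.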

From there on in, cooling becomes far more complex, with the need to move to enclosed-aisle cooling to stop this unwanted re-circulation or 'scavenging'. We call this arrangement aisle-cocooning.

As the heat density rises yet further, you may need to consider 'close-coupled' cooling, with cooling units mounted directly inside the cocoon, either between cabinets in the row or even within the cabinets themselves.

And all of this takes place against economic and environmental pressure to cut the cost and carbon impact of cooling loads, fuelling continuing debate (and yet more options) over the viability of ambient-air cooling, heat recovery and the myriad types of cooling technology vying for supremacy.

In-Cabinet Air Control
At the cabinet level, it has become essential to manage the airflows so that back-to-front (hot-side to cold-side) airflows within the cabinet are eradicated.

It may sound a small thing, but air feedback here can have a massive negative effect on cooling and lead to unsafe temperatures in one or more pieces of equipment within the rack, with subsequent equipment failure and user downtime.

Cabinets capable of in-situ upgrade to more efficient and powerful cooling options over the life of the data centre will save major disruption in years to come, as higher heat-output equipment needs to be deployed into existing space.
