Avoiding virtual pitfalls

Although virtualisation has long been touted as a cure for IT spending and efficiency problems, it is not all plain sailing. Tim Stammers, senior analyst at Ovum, outlines the dangers and downsides that you will not hear from vendors pushing virtualisation, and details how smart organisations can sidestep them.

By Tim Stammers | Published November 14, 2009 | Network Middle East

Server virtualisation has many benefits, which is why it is being taken up so quickly. But the v-word carries downsides that can wipe out all of its advantages and reduce IT service levels.

The risk of this happening to customers is not small. In a recent survey of US businesses, over half of those that had virtualised mission-critical servers said that doing so created more problems than it solved. Other businesses that want to avoid the same experience must be aware that server virtualisation is a continuing process rather than a one-time project, and that it introduces new responsibilities and challenges that did not exist in the physical world.

Server virtualisation is many-splendoured

Famously, server virtualisation allows a great deal of consolidation of hardware, and initially this was its biggest selling point. Using the v-word to consolidate x64 servers boosts processor utilisation from around 20% for typical non-critical Windows and Linux servers to as much as 80%, depending on the headroom businesses wish to give themselves post-virtualisation.
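The consolidation arithmetic behind those figures can be sketched in a few lines. The percentages are the article's rough numbers; the function names and the 100-server estate are illustrative assumptions, not from the article.

```python
import math

# Illustrative consolidation arithmetic: how many lightly loaded workloads
# fit on one host without exceeding a post-consolidation utilisation
# ceiling, and how many physical hosts a given estate then needs.

def workloads_per_host(workload_util_pct: int, ceiling_pct: int) -> int:
    """Whole number of workloads whose combined average utilisation
    stays at or below the ceiling."""
    return ceiling_pct // workload_util_pct

def hosts_needed(n_servers: int, workload_util_pct: int, ceiling_pct: int) -> int:
    """Physical hosts required to carry n_servers consolidated workloads."""
    return math.ceil(n_servers / workloads_per_host(workload_util_pct, ceiling_pct))

# 100 non-critical servers averaging 20% utilisation, with an 80% ceiling:
print(workloads_per_host(20, 80))   # 4 workloads per host
print(hosts_needed(100, 20, 80))    # 25 hosts, a 4:1 consolidation
```

In practice the ceiling would be set per resource (CPU, memory, network) rather than on a single utilisation figure, but the headroom trade-off is the same.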

This consolidation delays the need for hardware refreshes and reduces the number of physical entities that must be managed. It also cuts electricity consumption and carbon footprint, which is an especially welcome bonus in data centres that are approaching their power supply and cooling limits. So although the majority of the world’s x64 servers have yet to be virtualised, it is no surprise that this year, for the first time, Western Europe is expected to deploy more new virtual servers than new physical servers. North America has already crossed that threshold.

But the benefits of virtualisation go far beyond server consolidation. The easy migration of virtual servers from one physical host to another brings major advantages in load balancing, availability and disaster recovery. Virtualisation also allows IT to scale up existing applications or bring new ones into service very quickly, because virtual servers can be provisioned in a matter of seconds, rather than the days or weeks often needed to install new physical servers. Unlike physical servers, virtual servers can be created without the procurement and delivery of hardware. Also unlike provisioning in the physical world, virtual software stacks or server images do not need to be tailored to suit different physical servers, because virtualisation masks hardware differences.

The pitfalls begin with sprawl

Lunch is never free; alongside the benefits, server virtualisation introduces major hazards to IT operations. The biggest problem is the very high risk of sprawl or proliferation of virtual servers. Sprawl is a familiar problem for physical servers, but it is much more likely to happen in the virtual world, and to happen much faster.

Post-consolidation, IT administrators are tempted to believe — mistakenly — that because processor cycles are being used much more efficiently, virtual servers effectively cost nothing to run. Combined with easy provisioning, this often results in an overly relaxed and enthusiastic attitude to the creation of what can soon be large numbers of virtual servers, many of which will be redundant or unused. Redundant virtual servers do not just consume physical resources; they also impose a much bigger administrative penalty than sprawling physical servers.

That is because virtual servers are much more complex to manage than physical servers, for a number of reasons. The biggest is that virtualisation adds a layer of logical entities — hypervisors, virtual switches, management consoles and management servers — to an infrastructure. This amplifies the challenge of configuration control, which of course is essential to maintain security and service levels in both the physical and the virtual worlds. The mobility of virtual servers introduces yet more complication, for example by allowing sensitive applications to be moved to inadequately secured host servers, such as hosts running in a demilitarised zone outside a firewall.

Even when there is no sprawl, server virtualisation creates new challenges. Businesses can struggle to realise the load-balancing benefits promised by virtualisation, mostly because they lack the tools and visibility needed to diagnose performance problems. IT administrators who try to fix performance problems by moving virtual servers from one host to another, or by adjusting the resources allocated to them, can suffer a syndrome common enough to have earned its own label — ‘motion sickness’. Each attempt to fix a problem creates another, resulting in a chain of server movements.
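One practical defence is simply to notice the syndrome early. The sketch below (the record format, window and threshold are hypothetical assumptions, not from the article) flags any virtual server that has been migrated more than a handful of times within a sliding time window — the signature of a chain of moves where each fix creates the next problem.

```python
from collections import defaultdict

def flag_motion_sickness(migrations, window_secs=3600, max_moves=3):
    """Flag virtual servers that keep bouncing between hosts.

    migrations: iterable of (vm_name, timestamp_secs, dest_host) records.
    Returns the set of VM names migrated more than max_moves times
    within any window_secs-long sliding window.
    """
    by_vm = defaultdict(list)
    for vm, ts, _host in migrations:
        by_vm[vm].append(ts)

    sick = set()
    for vm, stamps in by_vm.items():
        stamps.sort()
        for i in range(len(stamps)):
            # Count moves falling inside [stamps[i], stamps[i] + window_secs].
            j = i
            while j < len(stamps) and stamps[j] - stamps[i] <= window_secs:
                j += 1
            if j - i > max_moves:
                sick.add(vm)
                break
    return sick

# web01 moves five times inside an hour; db01 moves once.
migs = [("web01", t, "hostA") for t in (0, 600, 1200, 1800, 2400)]
migs.append(("db01", 0, "hostC"))
print(flag_motion_sickness(migs))  # {'web01'}
```

A real deployment would feed this from the hypervisor's migration event log and alert rather than print, but the oscillation test is the same.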

Live with the problems by managing them

Sprawling virtual servers can, however, be a virtuous demonstration that pent-up demand for IT resources is at last being met. To be specific, the problem is not virtual server sprawl itself; it is unmanaged virtual server sprawl.

Much of the solution involves management and procedures rather than the installation of yet more software, as even vendors concede. To avoid joining that unhappy half of the US survey for whom virtualisation simply created more work, businesses need to take these fundamental steps:

1. Plan from the start — beginning with an inventory of available physical hosts, and identification of the best applications to run in virtual servers.
2. Capacity plan — balance loads across host servers according to measurements of individual virtual servers’ usage of CPU cycles, memory and network bandwidth.
3. Control virtual-server provisioning — using role-based admin rights and workflows.
4. Review existing physical-server configuration control processes before assuming that they are good enough to apply to virtual servers. Bad change management threatens IT service levels even more in the virtual world than it did in the physical world.
5. Implement processes for the continual culling of redundant virtual servers.
6. Monitor performance continually — because even careful capacity planning can be defeated by unexpected loads.
7. Establish policy-based controls on server movements — by grouping physical hosts and virtual servers, according to what they respectively offer and need in terms of availability, security and data protection.
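Step 7 can be made concrete with a simple tier check. The sketch below uses hypothetical tier labels and host names (none come from the article): hosts and virtual servers are grouped by the security they respectively offer and need, and a migration is allowed only when the destination host's tier meets or exceeds the virtual server's requirement — so a sensitive application can never land on a DMZ host.

```python
# Hypothetical security tiers, ranked from least to most trusted.
SECURITY_TIERS = {"dmz": 0, "internal": 1, "restricted": 2}

# Hypothetical inventory mapping each physical host to the tier it offers.
HOSTS = {
    "host-dmz-01": "dmz",
    "host-int-01": "internal",
    "host-sec-01": "restricted",
}

def migration_allowed(vm_required_tier: str, dest_host: str) -> bool:
    """True only if the destination host offers at least the tier the
    virtual server needs."""
    offered = SECURITY_TIERS[HOSTS[dest_host]]
    needed = SECURITY_TIERS[vm_required_tier]
    return offered >= needed

print(migration_allowed("restricted", "host-dmz-01"))  # False: blocked
print(migration_allowed("restricted", "host-sec-01"))  # True: permitted
```

The same grouping idea extends to availability and data protection by adding further tier dimensions and requiring every dimension to pass before a move is permitted.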

Reorganise and reassess to survive

Virtual servers also have more complicated relationships with storage and networking than physical servers, and this fact creates new responsibilities that can fall between the cracks unless they are specifically addressed. The most obvious example is the configuration of the virtual switches running inside physical hosts. Because of their location, they might be assumed to be under the care of a server management team. But in reality they are network elements which need to be put under the control of network management teams.

There is more than one approach to this management issue. Some businesses may be best served by creating specialist or ‘tiger’ virtualisation teams, which include server, storage, networking and even PC specialists. Others may want to ensure that IT as a whole embraces virtualisation, and avoid such specialist teams. Either way, businesses must be very aware that when they virtualise large numbers of servers they are entering a new world.

About the expert

Tim Stammers is senior analyst at Ovum. He focuses on server and storage infrastructure, and covers the activities of vendors such as EMC, VMware, Microsoft, IBM, and HP. Before joining Ovum in January 2008, Tim was storage practice senior analyst and bureau chief for ComputerWire in New York City.

He has fifteen years' experience as an IT journalist, and has written for Computing Magazine (the UK’s highest-circulation trade title) and PC Week. Prior to his career in IT journalism, Tim was a development engineer at Rolls-Royce, and a mainframe systems engineer at the London Electricity Board.

Tim holds a BSc Hons in Mechanical Engineering from City University, London and an MSc in Applied Computing from Middlesex University.

About Ovum

Ovum is an independent ICT research and analysis company that provides practical, actionable advice to the technology, telecoms and other business sectors. The company combines the expertise of Datamonitor Technology, Butler Group and Ovum, with the Datamonitor Group’s 350 business analysts and relationships with 18 of the 20 largest global corporations.

Ovum research is based on independently audited methodologies that ensure that our clients can base decisions on rigorous and fact-based research, rather than on unqualified and unjustified opinions.

The research draws upon over 400,000 interviews a year with business, technology, telecoms and sourcing decision-makers, giving Ovum and its clients unparalleled insight not only into business requirements but also into the technology that organisations must support.
