Top 10 back-up mistakes to avoid

Anthony Harrison of Symantec on the pitfalls of backing up a virtualised infrastructure

Tags: Archiving, LAN, Storage, Symantec Corporation
Many companies fail to properly back up virtualised environments, says Harrison.
By Anthony Harrison. Published January 16, 2011

Server virtualisation can deliver massive benefits for an organisation, but unless the proper data protection steps are taken these are unlikely to be realised. Anthony Harrison, senior principal solutions specialist for storage and server management MENA at Symantec, reveals 10 mistakes to avoid.

10. Not backing up virtual machines

Last year, Symantec conducted an extensive global survey of thousands of end-users, which found that nearly two-thirds of virtual machines are not backed up. Failing to back up virtual machines is risky, so it is important to understand why so many have been left out of the back-up strategy. Common reasons include virtual machine sprawl, the cost of back-up agents and concerns about dragging down the host machine or network by moving large volumes of data for back-up.
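A useful first step is simply finding out which virtual machines have no back-up job at all. The sketch below is illustrative only and is not Symantec tooling: it assumes you can export a hypervisor inventory and a list of back-up clients to the hypothetical files vm_inventory.csv and backup_jobs.csv, then reports the difference.

import csv

def load_names(path, column):
    """Read one column from a CSV export into a set of lower-cased names."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

# Hypothetical exports: every VM the hypervisor knows about, and the clients
# referenced by existing back-up jobs.
inventory = load_names("vm_inventory.csv", "vm_name")
protected = load_names("backup_jobs.csv", "client_name")

unprotected = sorted(inventory - protected)
print(f"{len(unprotected)} of {len(inventory)} VMs have no back-up job:")
for name in unprotected:
    print("  " + name)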

9. Installing a back-up agent on every guest

This has been a common strategy because of uncertainty about the ability to recover granularly, as well as the limitations imposed by virtualisation vendors. However, the impact of this approach is significantly higher costs from back-up agents and unnecessary management complexity.

8. Running two back-up infrastructures

While some IT organisations have invested in two separate tools for back-up (one for physical servers and one for virtual servers), IT has consistently asked for a single vendor to manage both environments. This is because a differing approach to back-up leads to inconsistent data management, back-up confusion, and even conflict between various IT organisations. The solution is for IT to bring together the virtualisation and back-up teams and assign ownership, authority and resources for back-up of both physical and virtual machines.

7. Failing to protect your applications

Failing to protect key applications is the most straightforward mistake to fix, yet it remains oddly common among IT professionals. Back-up is not just for files and data, but also for key applications. Enterprise end-users depend on applications and databases, so when IT virtualises these applications, it should also ensure they are backed up properly.
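Protecting an application usually means making the back-up application-consistent: quiesce the database, take the snapshot, then resume. The following is a minimal sketch of that ordering only; the three shell scripts it calls are hypothetical placeholders for whatever quiesce, snapshot and resume mechanisms your database and hypervisor actually provide.

import subprocess

QUIESCE_CMD = ["./quiesce_database.sh"]   # hypothetical: flush and freeze writes
SNAPSHOT_CMD = ["./snapshot_vm.sh"]       # hypothetical: take the VM snapshot
RESUME_CMD = ["./resume_database.sh"]     # hypothetical: thaw the database

def application_consistent_backup():
    """Quiesce the application before the snapshot so the image is recoverable."""
    subprocess.run(QUIESCE_CMD, check=True)
    try:
        subprocess.run(SNAPSHOT_CMD, check=True)
    finally:
        # Always resume, even if the snapshot fails, to avoid a hung application.
        subprocess.run(RESUME_CMD, check=True)

if __name__ == "__main__":
    application_consistent_backup()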

6. Backing up applications twice

Many IT shops also back up the same data twice in virtual environments: once for full image recovery, and again for more granular file or object recovery. The problem is that the whole operation takes twice as long, puts twice the load on the network, and consumes twice the storage capacity for the same data.
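The alternative is a single pass that produces the image and a file-level index together, so granular restores do not require a second back-up run. The sketch below only illustrates that idea, using a tar archive as a stand-in for the image; the source and output paths are hypothetical.

import json
import tarfile
from pathlib import Path

source = Path("/srv/vm_data")          # data to protect (hypothetical path)
archive = Path("backup_image.tar.gz")  # the 'image' back-up
manifest = Path("backup_index.json")   # index used for file-level restores

index = []
with tarfile.open(archive, "w:gz") as tar:
    for path in source.rglob("*"):
        if path.is_file():
            arcname = str(path.relative_to(source))
            tar.add(str(path), arcname=arcname)  # data is written once...
            index.append({"file": arcname, "size": path.stat().st_size})  # ...and indexed in the same pass

manifest.write_text(json.dumps(index, indent=2))
print(f"Backed up {len(index)} files once; index written to {manifest}")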

5. Backing up to tape-only (or disk-only)

Some IT professionals still take a singular approach and use only disk or only tape. However, most analysts recommend a ‘balanced’ strategy using both disk and tape for back-ups. The sensible strategy is to use disk where performance and flexibility are needed and use tape to reduce some costs.
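One way to express that balance is a simple staging rule: restores of recent data come from disk, while older copies migrate to tape. The snippet below sketches such a rule; the 14-day threshold is an arbitrary example, not a recommendation.

from datetime import date, timedelta

DISK_RETENTION = timedelta(days=14)  # arbitrary example threshold

def placement(backup_date, today=None):
    """Keep recent back-ups on disk for fast restores; send older ones to tape."""
    today = today or date.today()
    return "disk" if today - backup_date <= DISK_RETENTION else "tape"

print(placement(date.today()))                        # disk
print(placement(date.today() - timedelta(days=30)))   # tape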

4. Backing up redundant data

There is a lot of duplicate data on virtual machines. Consider the duplicate data in the OS, particularly if you use a standard image. It is not a wise strategy to back up all of that data. It congests the network, lengthens the back-up window and raises storage hardware costs.
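Deduplication addresses exactly this: identical blocks, such as the operating system files shared by every VM built from the same template, are stored only once. The toy example below shows the principle by hashing fixed-size chunks of a hypothetical disk image; production dedup engines are considerably more sophisticated.

import hashlib

CHUNK_SIZE = 4096  # bytes

def dedup_store(path, store):
    """Return the chunk hashes for `path`, adding unseen chunks to `store`."""
    recipe = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # stored once, however many VMs share it
            recipe.append(digest)
    return recipe

store = {}
recipe = dedup_store("vm_disk_image.raw", store)  # hypothetical file name
unique_bytes = len(store) * CHUNK_SIZE
logical_bytes = len(recipe) * CHUNK_SIZE
print(f"{logical_bytes} logical bytes stored as {unique_bytes} bytes of unique chunks")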

3. Treating back-up as an island

Back-ups should be a regular part of day-to-day activities and not seen as an additional ‘out-of-band’ process. Unfortunately, many back-up solutions do not work well with the leading virtualisation technologies and do not integrate seamlessly with their infrastructure. The simple solution here is to ensure the roadmaps are aligned between the back-up solution and the virtualisation technologies.

2. Failing to use your SAN

If you have a storage area network (SAN), take advantage of it for your virtual server back-ups. Many IT organisations use only their local area network (LAN) as the back-up network, missing the speed benefits of a dedicated network while at the same time inflicting a performance drag on the LAN.
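A back-of-the-envelope calculation shows why this matters. Assuming ideal conditions and the link speeds shown (assumptions, not measurements), moving 2TB of back-up data takes hours over a typical LAN but well under an hour over a fibre channel SAN, and the LAN stays free for users.

DATA_TB = 2
data_bits = DATA_TB * 1e12 * 8  # 2 TB expressed in bits

for name, gbps in [("1 Gbit/s LAN", 1), ("8 Gbit/s FC SAN", 8)]:
    hours = data_bits / (gbps * 1e9) / 3600
    print(f"{name}: roughly {hours:.1f} hours to move {DATA_TB} TB")
# Prints roughly 4.4 hours for the LAN and 0.6 hours for the SAN.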

1. Failing to consider restore

Back-up is worthless if you can't recover. One of the most common challenges in back-up is the failure to consider recovery. It is even more common in virtualised environments because there are more restore options than in the purely physical world.
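The simplest safeguard is to rehearse restores regularly. The sketch below continues the hypothetical single-pass archive example from mistake no. 6: it restores one file to a temporary location and checks that it matches the original.

import filecmp
import tarfile
import tempfile
from pathlib import Path

archive = "backup_image.tar.gz"   # hypothetical archive from the earlier sketch
sample = "etc/app/config.ini"     # hypothetical file to spot-check
original = Path("/srv/vm_data") / sample

with tempfile.TemporaryDirectory() as tmp, tarfile.open(archive, "r:gz") as tar:
    tar.extract(sample, path=tmp)
    restored = Path(tmp) / sample
    ok = filecmp.cmp(original, restored, shallow=False)
    print("Restore test", "PASSED" if ok else "FAILED", "for", sample)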
