Enterprise backup strategy

In a world where SANs are growing to dominate enterprise storage, and data mirroring is common, is there still a place for traditional backup techniques and strategies?

By Jon Tullett. Published January 25, 2001.

Introduction

Businesses need constant access to inventory, and to customer, employee and external Web databases. They must be able to get to files instantly, with zero downtime. Data management and storage technology encompasses business, technology and management issues associated with database management and data warehouse systems, as well as high-performance, fault-tolerant storage system architectures; these include backup systems and services, SAN/NAS (storage area network/network-attached storage) solutions, and disaster-recovery procedures.

There are several reasons why traditional backup isn't dead. While high-availability solutions may be the bee's knees when it comes to storage, they are also extremely expensive, and frequently complex to deploy and maintain. Aside from high-end environments and adventurous middle-tier companies, the rest of us make do with more conventional ways of protecting our data.

Then there is versioning. All the data mirrors in the world won't do you any good if a core database suffers serious corruption. If the error goes unnoticed for longer than your transaction logs, you have no way to roll back to a clean version, unless you have snapshots dating back further than the error.

If anything, the rise of high-density enterprise storage environments highlights the need for effective baseline backups. The immense quantities of data flowing through today's e-business and production networks are testimony to the role information plays in the entire business process. Any interruption to that data can have serious consequences.

Drafting strategy

Drafting a backup strategy can also expose possible future shortcomings of applications. If an application requires files to be locked or resources to be committed, it may require a service interruption to be backed up safely - otherwise, open files may not be correctly stored. In most cases, such an application should be hosted on a database with loggable transactions, or on a journaling filesystem, where regular snapshots can be taken without any break in operation. If it is not, that environment may hit a brick wall later, when scalability demands that it handle heavier data loads on a 24x7 basis.

Similarly, client-server applications that store data client-side, or rely on regular data replication, may also be at risk. You have complete control at the data centre; all the information is readily accessible and not too difficult to back up. Out on the network, gathering data is far more difficult. Although there are a number of software solutions from companies such as Symantec and Veritas to perform remote backups, as well as dedicated client-backup devices such as HP's SureStore AutoBackup, there is always a compromise in terms of performance and network overhead, as well as the possibility of user error. Effective departmental and workstation backup is a vital component of the backup strategy. In fact, that was exactly one of the reasons SANs became popular - to offload storage-related traffic and backup tasks from the rest of the corporate data network.

Most PC users store data locally, and few companies discourage that. In many cases that is not critical, but sometimes it could be disastrous. Because PC storage uses cheaper, lower-demand technology than a server's (no RAID support, no environmental control, etc), its MTBF is far lower. Failures are common - every company has experienced PC disk failure, and few ever have a way to recover. A sales executive with databases of clients and sales on his laptop, or a software developer with the latest build of a critical system, could be set back weeks, and do considerable damage to the company, when that drive crashes.

A tiered strategy is not difficult to deploy: desktops regularly save incremental backups of specific directories to a backup server - a NAS device is a cost-effective and easy choice - which in turn is backed up to traditional off-line media.
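The desktop tier of such a scheme can be sketched with GNU tar's incremental mode, which records file state in a snapshot file and archives only what changed since the last run. Directory names here are illustrative, with a local directory standing in for the NAS share; this assumes GNU tar, whose `--listed-incremental` option other tar implementations may lack.

```shell
set -e
work=$(mktemp -d)
docs="$work/docs"; nas="$work/nas"   # "nas" stands in for the backup server
mkdir -p "$docs" "$nas"
echo "draft" > "$docs/report.txt"

# Level-0 (full) backup; tar writes file state into backup.snar.
tar -C "$work" -cf "$nas/docs-full.tar" \
    --listed-incremental="$nas/backup.snar" docs

# A new file appears; only the change lands in the incremental archive.
echo "notes" > "$docs/todo.txt"
tar -C "$work" -cf "$nas/docs-incr1.tar" \
    --listed-incremental="$nas/backup.snar" docs

tar -tf "$nas/docs-incr1.tar"
```

The archives accumulating on the backup server can then be swept to off-line media on a slower schedule, completing the tiers.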

Backups will vary in time-sensitivity; some data can be archived and stored in offline vaults with long lead times to recovery, whereas other data may be far more sensitive. An e-commerce environment that measures revenue by the minute cannot afford the delay for a courier to fetch a tape archive from the safe-deposit box. It is important to evaluate the different types of data present, and to craft a backup strategy suited to each. For critical data, a near-line off-site mirror complemented by off-line long-term storage is an attractive, if costly, solution.

Performance and reliability

Above raw performance, the most important aspect of offline storage is reliability. Magnetic media is not perfect, and needs to be tested thoroughly. It is not uncommon for data loss to occur, only for the backup tape to be found faulty. Not only should backups be tested as soon as they are made; existing archives should also be tested periodically.
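One simple, automatable form of that testing is to record a checksum when the backup is written and re-verify it on every later pass, so a bit-rotted or truncated archive fails the comparison rather than the restore. A minimal sketch, with hypothetical file names:

```shell
set -e
work=$(mktemp -d)
echo "payroll records" > "$work/payroll.dat"
tar -C "$work" -cf "$work/backup.tar" payroll.dat

# Record a checksum at backup time...
( cd "$work" && sha256sum backup.tar > backup.tar.sha256 )

# ...and re-check it on any later verification pass.
( cd "$work" && sha256sum -c backup.tar.sha256 )
```

A checksum only proves the archive is byte-identical to what was written; periodic trial restores are still needed to prove the contents were worth writing in the first place.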

Disparate environments raise another concern; most companies do not use a single unified data environment. There is user data, departmental servers, backend enterprise software with various databases, and so on. Just as there are strategies and technologies to facilitate managing that complex data environment, there need to be backup solutions that accomplish the same thing. Inevitably, a chain breaks at its weakest link - providing bullet-proof backup for the database clusters while neglecting the Web server will not prevent your Internet services from failing when the latter goes down.

There are several areas needing attention when drafting backup strategies:

· Clients to be backed up and requiring restore services
· Servers where databases and data warehouses reside
· Mainframes and backup servers for mirroring, storage and caching
· Storage devices - either onsite or offsite
· Multi-platform network environment: NT, NetWare, UNIX, VMS

Any backup policy and management system that you select must have the ability to integrate all of these pieces and centralise control, the flexibility to accommodate new storage elements and application environments in the future, and interoperability with other vendors' equipment and software. That last point is especially important: many networks already have investments in other backup technology, or through corporate mergers may need to manage disparate backup solutions. The ability to integrate can prevent lengthy and potentially risky redrafting of backup strategies later on.

When evaluating backup solutions, several key areas should be noted:

· Centralised management and interfaces. Can it be scripted? Remotely controlled?
· Integration with management platforms
· Client platform coverage
· Data management (archiving, record keeping, etc)
· Resilience - self-healing, failover, fault prediction, etc.
· Scalability
· Speed of recovery
· Ease of deployment, configuration and maintenance

Of those points, the last two in particular deserve explanation. To the user or customer, the speed at which data is backed up is irrelevant; it is the speed (and reliability) at which it can be recovered that is important. If you can't turn around in your chair and put your hand on a backup of any specified file or database, the responsiveness of your backup solution should be questioned.

Integration is key

Because backup strategies are expected to run with a minimum of intervention and a maximum of accommodation to new needs, the ease with which a solution can be deployed, integrated into existing systems and then managed is key. Every vendor offers a management front-end to its own products, but what about others? New storage initiatives are going a long way towards addressing what has been a historic problem, but issues remain.

Unfortunately, it is common to have to compromise on the list of desirable features. Whether because of budget constraints, limitations in existing equipment or some other restriction, cutting corners is often unavoidable. This is not necessarily bad, so long as the overall strategy takes the limitations into account and has methods for dealing with them. Accepting a short-term drawback that supports the long-term plan is better than opting for a complete set of bells and whistles now, only to find that it doesn't meet your long-term goals, or that you are paying for many more features than you need.

A company that doubts the business impact of network downtime and storage constraints might not be in business five years from now. That is how critical storage systems are. Surveys show that the average revenue loss from a storage failure ranges from $125,000 for small companies to more than $1 million for large companies with 1,000 or more employees. The question really becomes: can you afford not to have a comprehensive backup solution?

The value of the data stored on file servers is usually far greater than the value of all the hardware and software components combined. Recent studies show that most companies that experience a data disaster lasting 10 days or more are either acquired by another company or file for bankruptcy within a year.

The major reason for catastrophic data loss is a lack of understanding of the requirements for preventing that loss. This is compounded by the fact that some of the available backup hardware and software are unreliable. The backup issues facing the system administrator are not always clear-cut. Many tape drive vendors would like users to concentrate on a few specific issues, such as backup time, capacity, and mean time between failures (MTBF). However, it takes a broad comprehension of the issues - from backup media options and methods to restoration procedures - to select and implement a solid backup system.

As the system administrator, don't find out too late that the backup system or procedures are inadequate or unreliable. The resulting data loss might be catastrophic. A solid backup system comprising hardware, software, and clear procedures is an absolute necessity.
