Keeping pace with the need for space

Companies frequently complain of the headaches caused by storage management, especially around compliance. However, cost-efficient solutions are readily available.

By Caroline Denslow | Published May 29, 2005

[Image: No longer solely concerned with the growth of capacity, the issue of storage involves consolidation, data management and cost-effective backup alternatives.]

The amount of data that a company creates is like an ever-expanding waistline: a weight problem, if you will, that is very difficult to resolve. If you ask system administrators which technology areas give them the most headaches, you will find, as Sage Research discovered, that storage is one of them (in a Sage Research survey of 104 executives, storage ranked second only to security). The findings should come as no surprise, given that while storage demand doubles every year, IT budgets have not changed. Most, if not all, IT managers have to grapple with the fact that they must make do with whatever storage resources they have to accommodate the growing data girth. Going back to our waistline analogy, it is like making a pair of old trousers fit when you have already outgrown them by three or four sizes.

The fact that companies are forced to keep data on file longer, under pressure to comply with government regulations, only makes the problem worse. Because of these changes, the business of storage is no longer just about being able to grow one's storage capacity, but about being able to grow it in a cost-efficient manner. In fact, today's storage mantra revolves around consolidation, data management and cheaper backup alternatives. And when you talk about these things, three storage technologies come to mind: information lifecycle management, tape storage devices and virtualisation.

ILM

[Image: Qais Gharaibeh of EMC Middle East believes the adoption of an ILM strategy will help companies manage the growth of information through every stage of its lifecycle.]

ILM, or information lifecycle management, is the latest paradigm created to make efficient use of existing storage resources through the management of data.
It is derived from an older concept called HSM (hierarchical storage management), which surmises that the value of data decreases over time, and that by migrating older data to less expensive storage devices, such as tape media, you can free up your more expensive, primary storage resources and subsequently save costs.

"It's important for customers to be able to manage data growth more cost-effectively," says David Beck, regional manager, StorageTek Middle East & Africa. "You can only keep 'throwing disk at the problem' for so long before you realise the cost implications. For some customers this may be a solution, but most just cannot afford to keep growing their disk subsystems year on year."

According to Qais Gharaibeh, partner sales manager, EMC Middle East, by deploying an ILM strategy, companies are assured of lower TCO, better data protection, faster recovery, assured compliance and the ability to manage the growth of information at every point in its lifecycle. "ILM is a strategy to align IT infrastructure with the business, based on the changing value of information," Gharaibeh adds.

However, at present the technology behind ILM is not fully developed. Storage vendors themselves agree that a comprehensive ILM solution does not exist in the market yet. There are plenty of products that cater to individual components of the ILM model, but no one, so far, has been able to deliver a fully integrated ILM solution set.

"ILM is a mature concept in the mainframe world, but ILM for open systems is more complex, as there are many different operating systems and file systems to cater for," says Beck.

"This is the challenge the industry has right now: to address ILM across different platforms. ILM technologies are still in their infancy in comparison to an overall ILM solution.
Today, products address subsets of the overall ILM strategy and are making incremental improvements in data management across the enterprise," says Nigel Foley, regional sales manager, CNT International. "We are still far from a single technology or product to manage all data across all environments and platforms. [But] while all the pieces to create a complete ILM strategy are not in place today, there are steps that companies can and should take to prepare for ILM," adds Foley.

Foley advises companies to embark on a data discovery process so that they fully understand how much data there is and where it is located. It is important that they start classifying information based on business needs (availability, performance, regulatory requirements or other related functions) and begin looking at tiered storage deployments to support the various business requirements. "ILM is an ambitious goal and steps will be incremental, but will add value along the way. These steps will allow companies to quickly take full advantage of ILM once the missing pieces are available," Foley elaborates.

Aside from cost cutting, regulatory compliance will boost ILM implementations within the year, says Gharaibeh. "Regulations can be a factor; compliance or governance generally has a clear requirement for the retention and/or destruction of information stored online," he comments.

Beck agrees that e-mail archiving and compliance archiving will drive companies to invest in ILM solutions. "I am pleasantly surprised at the number of customers I talk to in the Middle East who are concerned about compliance, and the fact that it will happen in this region sooner than they originally thought.
"Some are preparing themselves already, which is very encouraging," comments Beck.

Tape storage

[Image: Tan Kok Peng of Tandberg Data sees the cost effectiveness of tape storage in meeting government mandates for data protection and compliance.]

The debate over whether to deploy tape or disk for data storage is a long-standing one. Many, particularly disk advocates, have predicted that tape storage will soon become obsolete, especially as it is considered an old technology, having existed for more than 50 years. While it is true that the market for tape products is not considered high-growth, it has remained steady and shows no signs of abating.

"The question of whether tape will continue to survive in the long term is not new," says Tan Kok Peng, technical development manager, Tandberg Data. "Long before today, the industry predicted that tape was dead and that applications that rely on tape would be replaced by disk."

Despite the emergence of disk storage devices, Omar Dajani, regional technical consultant, Veritas Middle East & North Africa, finds that many enterprises still prefer to use tape for data backups. "There's always been a little bit of resistance to moving from tape to disk storage," says Dajani. "The idea that the final destination for your data is on a tape makes people feel warm and fuzzy."

"Many of us have experienced a hard drive crash, information getting erased or the system being hit with a virus, and it was very painful each time we suffered these failures. The fear that if we back up to disk we will lose all our data is always fresh in people's minds, so they're very reluctant to use disks as their final destination," Dajani explains.

Because tapes are removable and offline by nature, data on tape is very secure, as it is physically inaccessible to hackers or viruses, according to Tan.
"Due to tape's portable nature, it can be transported for off-site storage at a remote premises. The use of magnetic tapes is still paramount in the backup and archival of data, as they may be the last resort in recovering it," Tan explains.

While the tape storage market continues to thrive, the way companies use tape has changed. Where in the past tape was used as a medium of data transfer between different platforms or installations, today it has taken on the role of long-term offsite storage of dormant data, or data that is unlikely to be accessed often. "Tapes are primarily used for backup and recovery, archival of fixed-content data, vaulting and remote storage for data recovery or reproduction purposes," says Tan.

A key driver for today's tape adoption is compliance. The need to meet mandates from governments and various industry regulatory boards for data protection practices that ensure the integrity and privacy of stored information has significant implications for the broader data protection function. "Fuelled by the recent Sarbanes-Oxley Act, lawmakers and industry regulators are hitting hard at record-keeping practices, with specific requirements for the long-term collection and safeguarding of, and quick access to, reams of vital information of all types," comments Tan.

According to Tan, tape storage makes sense in such situations because of its inherent cost effectiveness. "[Compliance] is definitely an area where tape is much more cost effective compared to disk. It is still less expensive on a per-gigabyte basis, and the cost ratio of tape can be further reduced using automation," he says. "According to various research entities, the average cost per gigabyte for disk ranges from US$3 to US$15, while automated tape ranges from US$0.50 to US$3 per gigabyte," Tan continues.

Whichever way the argument goes, the truth of the matter is that tapes and disks are headed for peaceful coexistence, says Dajani.
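Tan's per-gigabyte figures make the trade-off easy to check with back-of-the-envelope arithmetic. A minimal sketch, using only the price ranges quoted above (the 10 TB archive size is an illustrative assumption, not from the article):

```python
# Rough cost comparison for storing an archive on disk vs automated tape,
# using the per-gigabyte ranges quoted by Tan: disk US$3-15/GB,
# automated tape US$0.50-3/GB. The archive size below is a made-up example.

DISK_RANGE = (3.00, 15.00)   # US$ per GB
TAPE_RANGE = (0.50, 3.00)    # US$ per GB

def cost_range(capacity_gb, price_range):
    """Return (low, high) total cost in US$ for a given capacity."""
    low, high = price_range
    return capacity_gb * low, capacity_gb * high

archive_gb = 10 * 1024  # a hypothetical 10 TB archive

disk_low, disk_high = cost_range(archive_gb, DISK_RANGE)
tape_low, tape_high = cost_range(archive_gb, TAPE_RANGE)

print(f"Disk: US${disk_low:,.0f} - US${disk_high:,.0f}")
print(f"Tape: US${tape_low:,.0f} - US${tape_high:,.0f}")
```

Even at the most favourable disk price and the least favourable tape price, the disk archive costs as much as the most expensive tape option, which is the economics behind Tan's automation point.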
Disks will be the preferred device for backing up data faster and accelerating the restore process for current data, while tape will continue to be the low-cost, long-term data storage and archiving method of choice. "Disk and tape are going to continue [to exist] in parallel and both are going to grow, because data is growing," he adds. Tan agrees. "In today's business storage requirements, we see more collaboration between the two storage technologies than competition against one another," he says.

FC vs. iSCSI

Storage area network (SAN) virtualisation is a key focus area for many players in the storage arena, mainly because it gives customers the capability to optimise whatever resources they currently have. What makes it attractive to an enterprise is that, with a proper virtualisation strategy in place, the company can have full control of all its existing storage resources, from its main storage systems down to individual users' hard disks, which allows it to share and properly allocate storage capacity among its users, making sure that resources are not wasted.

A company that wants to employ a network-based approach to SAN virtualisation has two options: the fibre channel way or the iSCSI (internet small computer system interface) route. While both serve the same goal of interconnecting islands of storage across a network, there is a great disparity between them in cost and performance.

iSCSI is the cheaper of the two simply because it uses a company's existing IP network to transfer data. Consequently, it is much easier to implement, because network administrators are already familiar with the architecture. However, using the same network means that different activities have to share the same bandwidth, which can make storage backup, for instance, a slow process, since it has to contend with network and e-mail traffic and other business processes that run over the same network.
Fibre channel, on the other hand, while costly, guarantees a dedicated line for your storage-related processes, offering data transfer rates ten to 20 times faster than a traditional local area network. According to Simon Gordon, solutions architect at McData Europe, Middle East & Africa, while network vendors pit the standards against each other, both have their place in most organisations.

"Fibre channel has a level of maturity as a storage protocol, particularly when you start putting the traffic under heavy load," Gordon says. "When you start doing some of the more interesting things, like booting over SAN and clustering, the maturity of fibre channel does give it a lot of advantage over iSCSI."

"For iSCSI there is a separate market that fibre channel may not be able to meet, and that is the SMB sector. The issue of cost is making iSCSI particularly attractive in bringing tier-two servers (the Intel-based servers) into the SAN," Gordon adds.

Looking ahead, the market can expect a new virtualisation standard to emerge as Cisco prepares a new protocol, fibre channel over IP (FCIP), that will bring fibre channel and iSCSI together, says Abderrafi Belfakih, manager of Cisco Systems Middle East's systems engineering group. According to Belfakih, FCIP pairs iSCSI's ability to connect separate storage systems cost-effectively with fibre channel's capability to offer higher data bandwidth.

"We do know that fibre channel has distance limitations. You cannot transport it over a wide area network because it would be too costly to implement. What we did was take fibre channel traffic and encapsulate it in an IP frame, allowing the deployment of virtualisation services over extended distances and eliminating the need for separate channel extension devices," Belfakih explains.
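Belfakih's description, taking fibre channel traffic and wrapping it inside IP, can be illustrated with a toy tunnelling routine. This is a simplified sketch of the idea only; the actual FCIP standard defines its own binary encapsulation header, which the length-prefix framing below does not reproduce:

```python
# Toy illustration of FCIP-style tunnelling: wrap an opaque fibre channel
# frame in a simple envelope so it can travel over a TCP/IP link between
# two SANs. The 4-byte length-prefix framing is an assumption made for
# illustration; the real FCIP encapsulation format is more involved.

import struct

def encapsulate(fc_frame: bytes) -> bytes:
    """Prefix an FC frame with its length so the receiver can re-split
    the TCP byte stream back into discrete frames."""
    return struct.pack("!I", len(fc_frame)) + fc_frame

def decapsulate(stream: bytes) -> list:
    """Recover the original FC frames from a received byte stream."""
    frames, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        offset += 4
        frames.append(stream[offset:offset + length])
        offset += length
    return frames

# Two FC frames tunnelled over one TCP connection between distant sites:
tunnel = encapsulate(b"FC-frame-A") + encapsulate(b"FC-frame-B")
assert decapsulate(tunnel) == [b"FC-frame-A", b"FC-frame-B"]
```

The key point matches Belfakih's: the FC frames themselves pass through untouched, so the distant SANs behave as if directly connected, while the IP network handles the long-haul transport.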
He sees FCIP becoming the dominant protocol, as it "promises to offer cost-effective collaboration between data centres".

"Resource management will become much simpler, with the standard making it possible for one IT team to oversee data centres that are geographically dispersed from any point of the network," he says.

Moreover, he believes the role of the network will continue to evolve as further consolidation of storage happens in the industry. Virtualisation will move beyond simple consolidation and integration to a stage where storage resources adapt to what the data centre is doing, dynamically provisioning resources without the need for human intervention.

"The phase beyond virtualisation is automation, where you can automatically adapt performance speed and pre-empt failure, for example, and you can provision resources automatically without having the IT manager go in at a particular odd time to provision service for each department," Belfakih says.
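The age-based tiering at the heart of the HSM and ILM ideas described at the start of this article reduces to a simple policy: classify data by how recently it was used, and place it on the cheapest tier that still meets its access needs. A minimal sketch, in which the tier names and day thresholds are illustrative assumptions rather than anything prescribed by the vendors quoted here:

```python
# Toy HSM/ILM placement policy: migrate data to cheaper tiers as it ages.
# The tiers and day thresholds below are invented for illustration.

def choose_tier(days_since_last_access: int) -> str:
    """Map data age to the cheapest storage tier that still fits its use."""
    if days_since_last_access <= 30:
        return "primary disk"      # hot data: fast, expensive storage
    if days_since_last_access <= 365:
        return "low-cost disk"     # warm data: slower, cheaper arrays
    return "automated tape"        # cold data: archive and compliance

for age in (5, 90, 1200):
    print(age, "days ->", choose_tier(age))
```

A real ILM deployment would classify on business value and regulatory retention rules as well as age, which is exactly the data-classification exercise Foley recommends as preparation.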
