Conquering space

Vendors believe that virtualisation — an abstraction of storage resources and devices — will help organisations to meet the challenge of managing their huge data mountains

By Peter Branton | Published June 18, 2006

[Image: Storage virtualisation technology has been hailed as helping organisations to cut their storage costs by reducing the amount of disk space needed to store data.]

Data management has been an ongoing challenge for many companies, and it is only going to get worse: the growth in the amount of data that needs to be stored seems unstoppable. Throw into the mix confusing government regulations, tighter internal budgets and the complexities of dealing with multi-vendor storage environments, and you have a potential disaster waiting to happen. But all is not lost. Storage vendors have taken on the responsibility of addressing the issue, and it seems they have found the solution in storage virtualisation.

The thinking behind virtualisation is quite simple: take all your available disks — and possibly your other storage devices — and merge them into one virtual pool of homogeneous storage. “As far as the technology is concerned, the primary function of storage virtualisation is to abstract the disk drives and the hardware so that end users do not have to deal with individual hard disks,” says Albert Saraie, director of marketing at US-based iQstor Networks. “What they see is a virtual pool of storage that they can very easily manage,” he adds.

There is much to be said for deploying virtual storage. Vendors see a lot of potential in the technology, especially when it comes to easing some of the headaches associated with managing a large, multi-vendor storage infrastructure. One clear benefit is improved resource management: behind the scenes, storage virtualisation maintains the appearance of one logical system, eliminating the need for IT managers to administer each of their firm’s storage devices individually. “[Storage virtualisation] enables IT administrators to manage larger amounts of storage because they do not have to manage individual disk drives,” says Saraie.
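The pooling idea Saraie describes can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation: the class name and the block-addressing scheme are invented for the example. The point is that the administrator sees one logical capacity, while the mapping to individual physical disks happens behind the scenes.

```python
# Hypothetical sketch of a virtual storage pool (illustrative only):
# several unequal physical disks are presented as one contiguous
# logical block address space.

class VirtualPool:
    """Maps logical block addresses onto (disk, offset) pairs."""

    def __init__(self, disk_sizes):
        # disk_sizes: number of blocks on each physical disk
        self.disk_sizes = list(disk_sizes)

    @property
    def capacity(self):
        # The administrator sees one pool, not individual drives.
        return sum(self.disk_sizes)

    def locate(self, logical_block):
        """Translate a logical block number into a physical (disk, offset)."""
        if not 0 <= logical_block < self.capacity:
            raise IndexError("block outside pool")
        for disk, size in enumerate(self.disk_sizes):
            if logical_block < size:
                return disk, logical_block
            logical_block -= size

pool = VirtualPool([100, 250, 150])   # three unequal disks
print(pool.capacity)                  # 500 -- one homogeneous pool
print(pool.locate(120))               # (1, 20) -- second disk, offset 20
```

Real products layer striping, redundancy and thin provisioning on top of this translation step, but the abstraction — one address space, many devices — is the same.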
On top of that, storage virtualisation allows data to be migrated from one storage device to another in the pool when subsystems fail or are replaced. With easier data migration comes the ability to implement tiered storage cost-effectively, as it lets administrators mix and match storage arrays, and move data to different sets of storage devices as it becomes less critical to the company.

Another advantage is the consolidation of data services, such as snapshots and replication. Because virtualisation removes the need for full redundancy, companies do not have to copy entire volumes of data from one storage system to another. Instead, they can copy partial data (snapshots) or just the data changes, and link them to the actual data set.

Yet despite the hype, adoption of the technology has been slow, with most implementations limited to large companies with 10,000 or more employees. Interest is picking up, though, particularly in the medium-sized segment: about one third of such firms have expressed a desire to deploy the technology by the end of the year, according to IDC.

Saraie blames the slow take-up on end users’ perception of new technologies. “Like any technology, when it is new or it comes to the market early, there is a lot of apprehension as to how the features work, and how the benefits are being realised by the customers,” he reasons.

Damion Lock, business development manager at Magirus Middle East, a value-added distributor, agrees. “In the Middle East there is a slow curve to the adoption of new technologies. There is, maybe, a five-year pause between the adoption of new technologies here and in Europe. Currently, people are starting to talk about virtualisation, but this is more about server virtualisation rather than storage virtualisation,” Lock says.

But both Saraie and Lock believe interest in storage virtualisation will pick up as companies become more familiar with the technology.
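The snapshot idea — copying only the changed data and linking it back to the live volume — is usually implemented as copy-on-write. The sketch below is a simplified illustration under that assumption; the class and function names are invented for the example and do not correspond to any product mentioned here.

```python
# Hypothetical copy-on-write snapshot sketch (illustrative only):
# the snapshot stores only blocks that change after it is taken,
# and reads fall through to the live volume for everything else.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []

    def take_snapshot(self):
        snap = {"origin": self, "saved": {}}
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # Preserve the old contents for any snapshot taken before this write.
        for snap in self.snapshots:
            snap["saved"].setdefault(index, self.blocks[index])
        self.blocks[index] = data

def snapshot_read(snap, index):
    # Saved (pre-change) copy if the block was overwritten, else live data.
    return snap["saved"].get(index, snap["origin"].blocks[index])

vol = Volume(["a", "b", "c", "d"])
snap = vol.take_snapshot()
vol.write(1, "B")
print(snapshot_read(snap, 1))   # 'b' -- the snapshot view is unchanged
print(vol.blocks[1])            # 'B' -- the live volume sees the write
print(len(snap["saved"]))       # 1 -- only the changed block was copied
```

This is why snapshots avoid the cost of full redundancy: a four-block volume with one change stores one extra block, not four.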
“Over time, as the technology really becomes mature, customers will become more and more comfortable with it,” Saraie says.

“Not everybody would want virtualisation. At this stage, they won’t,” concurs John Bentley, sales director at Hitachi Data Systems (HDS) Middle East. “But as the technology becomes more available, it will eventually spread to the smaller organisations as well. At the moment, I think it is only the medium to larger companies that will be considering virtualisation, because that is where it applies most, and that is where the technology is focused. But in time it will move downwards.”

Aside from the tendency of companies to sit and wait for emerging technologies to become fully established before deploying them, there are a number of other reasons why companies are not rushing to buy the technology. These include market confusion, interoperability issues and deployment costs. There is also a lot of confusion as to which vendors offer true virtualisation, and to what extent. Part of the misunderstanding stems from how storage virtualisation is defined, as the term means different things to different vendors. In addition, varying degrees of virtualisation are applied to existing solutions, and there are several ways in which virtualisation can be deployed.

The three types

[Image: Wolfgang Singer of IBM Technical Experts Council]

Traditionally, there have been three distinct approaches to virtualisation: host-based, array-based and network-based. Each has its own merits — particularly in the areas of performance and manageability — and its own problems.

The host-based or system-based approach handles the virtualisation intelligence at the server level. By hosting virtualisation at the server, IT managers may benefit from the integration of their system and storage management tools to manage their IT environments. Better integration means greater control of heterogeneous, distributed resources.
That also means they may need only one person to administer both environments, helping to significantly reduce staffing costs.

One problem that products offering host-based virtualisation face, though, is performance. Since storage virtualisation requires a lot of computing power, putting the capability in the host means it has to compete with other host functions. Advocates of this approach dispute the criticism, claiming that there is usually no significant performance degradation because the applications that require the data reside on the servers as well, and hence it is just one step from the server to the storage device to access virtualisation services.

The host-based approach also tends to be less scalable and is more susceptible to an erratic host, which can cause accidental access to protected data. It can also be less flexible, as the software controlling storage virtualisation may not necessarily interoperate with third-party software and hardware. On the other hand, it is the easiest and least costly approach to deploy, because it does not require any additional hardware. Vendors using this approach tend to be software vendors in the storage management area who already have mature software products.

Hosting virtualisation in the array depends on the storage system to provide the functionality. A storage array approach is usually incomplete unless third-party storage area network (SAN) virtualisation software is installed in the environment. Vendors such as EMC, HP, IBM, HDS and Sun support this method. IBM’s SAN Volume Controller (SVC) is especially significant because of its solid adoption rate and strong heterogeneous storage array support. The SVC, which was launched in 2003, has been deployed at around 2000 installations worldwide, claims Wolfgang Singer, a member of IBM’s Technical Experts Council (TEC).
The company has recently updated its storage virtualisation software. SVC version 4.1 features Global Mirror functionality, designed to provide long-distance asynchronous remote replication for business continuity and disaster recovery at greater distances. IBM has also included support for 4Gbit/s SAN fabrics in its new SVC engines to enhance infrastructure simplification capabilities.

There are several advantages to the array-based method. For one, it can be implemented within the storage system itself. Performance-wise this is good, as it allows the array to quickly locate storage resources and make them readily available to the applications and databases that need them. Additionally, this approach makes virtualisation easier to manage and makes administration transparent to the IT manager. Most storage vendors backing array-based virtualisation have very strong storage management tools, which work well in managing storage either manually or automatically, thus reducing storage management costs. Hitachi’s TagmaStore master array is especially noteworthy because it can manage the virtualisation of large pools of distributed, underlying, heterogeneous storage devices.

“Controller-based virtualisation is the simplest, most comprehensive virtualisation method currently available,” claims Bentley. “With the other approaches, you have to put additional devices between storage units. You have to put different devices in the network, which do not offer the high availability that we can offer from a central controller point of view,” he adds.

Compared with a network-based method, which has other components linked to it, Bentley says a controller-based approach is much simpler to administer and manage. “What happens if something goes wrong with the network?” he asks. “You run a greater risk if virtualisation is in the network.
There is even an opportunity with our controller to improve performance over and above a network virtualisation solution,” he claims.

But introducing a storage controller into the array, either as a separate appliance or as a built-in component, has several drawbacks. While it generally does an excellent job of working with the storage, especially in cases of errors or write failures, relying on the functionality a single storage vendor can provide can exclude ‘just a bunch of disks’ (JBOD) units and simple storage devices that do not have any storage virtualisation functionality. The customer therefore runs the risk of being locked into a single storage vendor.

Singer argues that this is not the case with IBM’s SVC. Previous versions of SVC have already provided support for third-party storage devices from EMC, Hitachi, HP and Sun. With SVC 4.1, IBM has extended support to new disk models and server operating systems, including Hitachi TagmaStore and OpenVMS. Also supported are new disk systems from IBM, HP and Network Appliance, bringing the total number of environments supported to nearly 80. “We are increasing our subsystem support. We are intending to bring out two new releases every year,” Singer reveals.

“SVC is ideal for environments that have heterogeneous storage, because you have the advantage from a total cost of ownership (TCO) point of view, in that you do not need to upgrade every time a new driver comes out for one of your storage subsystems,” Singer says. “You only have to install just one driver, which is the SVC. All the drivers of your storage subsystems are within the SVC, so you don’t have to take care of that,” he continues.

On top of that, Singer says IBM is one of the very first companies to implement the Storage Management Initiative Specification (SMI-S), which the Storage Networking Industry Association (SNIA) is pushing to standardise every interface used in the storage area.
“You don’t have vendor lock-in with IBM because with our SMI-S protocol we can handle other storage subsystems. You can decide what storage subsystem you want to add beneath the SVC,” Singer says.

“Storage vendors are trying to combat vendor lock-in by sourcing third-party vendors. All vendors are offering to move the customers’ perception of vendor lock-in away so that they can offer customers options,” says Lock.

The third and most recent approach to virtualisation puts the storage virtualisation functionality in the network. Network-based virtualisation comes in three forms: appliance, switch or router. As with the other two methods, there are advantages and disadvantages to using the network as the foundation of your storage virtualisation technology. For instance, an appliance-based approach running a standard operating system, such as Windows, Sun Solaris or Linux, has much the same capabilities as a host-based approach. It is easy to adopt and is not too costly to implement. With a switch-based approach, security is enhanced, as it does not require an agent to run on each host for the storage virtualisation to function properly. It also provides more interoperability in a multi-vendor environment. Like a switch-based approach, virtualisation via a router is also more secure, and provides high interoperability in a heterogeneous set-up.

Hosting storage virtualisation at the switch level eliminates the need to route requests for additional storage or systems services through host or array intelligence. The switch keeps a record of which resources are available and which are busy, so switch-based virtualisation can rapidly identify and utilise available storage resources. Only a few vendors, such as Cisco and Brocade, have adopted the network-based approach.
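The bookkeeping described above — the switch tracking which resources are free and which are busy — amounts to a simple allocation directory. The sketch below illustrates the idea only; the class name and the “LUN” labels are invented for the example, and real fabric switches implement this in firmware, not application code.

```python
# Hypothetical sketch of switch-level resource bookkeeping
# (illustrative only): a directory of free vs busy storage
# resources that can satisfy requests immediately.

class SwitchDirectory:
    def __init__(self, resources):
        self.free = set(resources)
        self.busy = set()

    def allocate(self):
        """Hand out any free resource, or None if all are busy."""
        if not self.free:
            return None
        res = self.free.pop()
        self.busy.add(res)
        return res

    def release(self, res):
        # Return a resource to the free set when its user is done.
        self.busy.discard(res)
        self.free.add(res)

switch = SwitchDirectory(["lun0", "lun1"])
a = switch.allocate()
b = switch.allocate()
print(switch.allocate())   # None -- everything is busy
switch.release(a)          # a is now available again
```

Because the free/busy state lives in one place, a request never has to poll individual hosts or arrays to find spare capacity — which is the speed advantage the switch-based approach claims.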
The trade-off, however, of having networking vendors provide virtualisation technology is that they do not offer systems and storage management tools and utilities as sophisticated as those of the storage vendors.

Promising signs

That lack of sophistication can result in increased systems and storage management costs. Other drawbacks of a network-based approach, especially appliance-based virtualisation, include interoperability issues. The approach, besides inheriting the advantages of a host-based method, has also acquired its disadvantages, including the need for agent software or an adapter on each host, as well as the issue of unprotected data access.

Admittedly, choosing the right approach can be difficult. Pricing for all three strategies is very similar and all three perform comparably well. While iQstor’s approach to virtualisation is through the storage array, Saraie believes that virtualisation should be implemented wherever it best serves the needs of the customer. “There may be some enterprise-level customers that may prefer virtualisation at the network, whereas there may be some small and medium-sized businesses (SMBs) that may prefer virtualisation to be implemented in the storage array due to its lower implementation costs. The ideal approach to virtualisation depends on who the customer is and where they want to achieve virtualisation,” he explains.

However, the debate over which approach to use may end soon, as a number of storage vendors start introducing the technology as a standard feature of their storage products. For example, iQstor has made virtualisation a standard feature in all of its storage solutions, which it offers free of charge. “Initially, we offered virtualisation at a cost. Over time, however, virtualisation has become an inherent component of a storage system and we now offer it for free,” Saraie says. By doing so, Saraie believes, more SMBs will become interested in the technology.
“The reason why they had not adopted virtualisation technologies was the increased costs. But with companies like iQstor offering enterprise-level features at a much more competitive price, we have made it possible for small and medium-sized companies to have access to the same features that until now have only been available to large enterprises,” he says.

At the same time, to encourage SMBs to buy into the technology, vendors have developed box-type solutions for this market segment. DataCore Software, which Magirus represents in the region, has rolled out an SMB version of its SANsymphony product, SANmelody. “Budget restrictions hinder SMBs from deploying storage virtualisation. With SANmelody, SMBs now have the ability to achieve a centrally managed storage infrastructure at a more affordable price,” says Murtaza Talawala, a technical consultant for Magirus. A home version, called SANmelody Lite, is also available for what Lock describes as “technically aware users, especially engineers and people geared towards adopting the digital home concept.”

HDS is also scaling down its enterprise solutions to a more modular version aimed at SMBs, claims Bentley. “We are bringing to the modular enterprise market exactly the same facilities and functions and higher availability that we are providing to the largest organisations with the TagmaStore Universal Storage Platform,” he notes. “We are bringing that technology downwards into smaller organisations,” he adds.

Further down the line, the vendors agree that there is a lot of promise in storage virtualisation, both from a market point of view and a technology point of view. The emergence of cheap, boxed solutions — Lock claims you can get the cheapest version of DataCore’s virtualisation product for US$199 — and the continuous enhancement of the technology are encouraging signs that virtualisation will emerge as a key component of every company’s IT infrastructure.
“On the technology front, we will see further improvements in storage management software and content-addressable storage,” Bentley says. “We will see the move to develop more and more software upwards towards the application level, so that there can be more automation introduced in the provisioning and management of storage,” he adds.

But perhaps the most important factor that could determine the success of virtualisation is also the oldest — the bottom line. “More and more customers are looking at storage virtualisation because they see what kind of savings they can have. The return on investment (ROI) is rather high in the storage virtualisation area,” Singer says. And that could be the biggest attraction of all.
