A consumer enterprise

As end-users become increasingly savvy, more consumer technology will find its way into enterprises and force IT departments to think differently about endpoints.

By Sathya Ashok | Published April 20, 2008

And part of that is that if there is a shortage of labour and you want me to come and work for your company, you had better provide an environment where I can be productive and use my tools - those are the dynamics.

What is Symantec doing to increase best practices for storage and disaster recovery among enterprises globally?

Let me start with basic storage. One of the things that we try to tell people is that whatever you decide to do, you need a storage strategy that is very adaptable.


The company may decide to pick a single vendor and make that the way to standardise, but with the growth of all of our businesses, the probability is that the firm will merge with or acquire another company, and that company will have made a different choice.

And then you are not going to have a uniform or homogeneous solution anymore. The idea of simplifying with a single vendor is naïve. Over time you can't avoid combinations, because even if it is not an acquisition, other vendors are going to come with better offers.

It is very important to try and standardise on a set of tools that allow you to manage an environment which has a lot of variety in vendors. That will be the first step.

The second thing is that if you do that it also allows you to train your staff with one set of tools - so you save on training costs and you save on staff because now they can be shared among different tasks more effectively. We believe this whole idea of standardisation at the software layer is very critical.

If you have a set of standardised tools that allow you to manage not just different vendors but different tiers of storage, you can't avoid the problem arising, but you can avoid having to live with it.

At a certain point you can say we have these silos of storage, they are not structured correctly, we should put our less important data on the less costly storage and the more critical data on the more costly, high availability, higher performance storage.

These tools allow you to do that. Standardise, because there are going to be multiple platforms in your environment; but also pick a set of tools that can meet all the needs of your storage management.
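As a rough illustration of the tiering idea described above (the tier names, descriptions, and criticality labels below are hypothetical, not taken from any Symantec product), a placement policy might be sketched like this:

```python
# Illustrative sketch: map data criticality to a storage tier.
# Tier names, descriptions and labels are invented for the example.

TIERS = {
    "tier1": "high-availability, high-performance (most costly)",
    "tier2": "standard online storage",
    "tier3": "low-cost archival storage",
}

def place_data(criticality: str) -> str:
    """Return the tier a piece of data should live on, by criticality."""
    mapping = {"critical": "tier1", "important": "tier2", "archival": "tier3"}
    return mapping.get(criticality, "tier2")  # default: standard tier

for label in ("critical", "important", "archival"):
    tier = place_data(label)
    print(f"{label:>9} -> {tier}: {TIERS[tier]}")
```

The point is that the policy lives in the management software, not in any one vendor's hardware, so the same rules can span heterogeneous silos.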

The next stage is not just active or online storage, but also offline storage - the simplest stage of which is back-up. When you look for a way to manage back-up systematically across the datacentre and the branch offices, the servers and the desktops, that vendor-agnostic approach is even more important.

What is often not understood in disaster recovery is that when you look at risk there are two factors - what is the probability of that bad event and what is the impact. Often companies do not evaluate that very correctly.

They say "well, the likelihood of a disaster is very small so we won't worry." But when you go through the assessment you say "yes, the probability is small, but that will put you out of business" - you won't just lose money, you will be out of business. With that big an impact I would do something.
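The weighing of probability against impact described here can be put in simple arithmetic. As an illustrative sketch (the probabilities and dollar figures are invented for the example), expected loss is just probability times impact:

```python
# Illustrative sketch: risk = probability of the bad event x its impact.
# The probabilities and dollar figures below are invented for the example.

def expected_loss(probability: float, impact: float) -> float:
    """Annualised expected loss for one risk scenario."""
    return probability * impact

# A "rare" disaster: 1% chance per year, but a business-ending $50M impact.
rare_disaster = expected_loss(0.01, 50_000_000)

# A "common" outage: 20% chance per year, $100K impact.
common_outage = expected_loss(0.20, 100_000)

print(f"rare disaster: ${rare_disaster:,.0f} expected loss per year")
print(f"common outage: ${common_outage:,.0f} expected loss per year")
# The "unlikely" event dominates: low probability, but huge impact.
```

This is why "the likelihood is very small so we won't worry" is the wrong conclusion: the impact term can dwarf the probability term.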

Having a systematic assessment is the first step in disaster recovery and that's an area we can help with because we have people in our consulting practice who understand that. It is not a simple IT problem, it is more a business problem. An outage can have a huge impact on reputation and damage the entire business.

Most companies do not think that way - they think of disaster recovery as an IT problem. It is a business problem and that is what they need to start thinking about.

What do you believe are the biggest trends likely to affect datacentres in the near future?

We are at the cusp of a big change. If you go back to the early 90s, the mainframe was the centre and Unix was starting to come into it. The perception was that the mainframe was the real computer and the Unix stuff was temporary. Here we are fifteen years later and Unix is the mainframe. The mainframe did not go away but it never made the transition to the new applications.

The traditional big Unix system that is in the datacentre is the legacy platform, in my opinion, and we are about to see a transition to a different model which is the next generation platform and it is more like what Google has.

It is masses of low-cost systems, and enterprises won't care if they fail because they will have built software to manage around those failures. It is a different model than the hardcore transaction systems that we have today. It is more the web transaction model. And the apps are built differently.
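The "route around failure in software" idea can be sketched in a few lines. This is a toy illustration, not how Google or any particular vendor implements it; the node names and the set of failed nodes are made up:

```python
# Illustrative sketch: tolerate failure of cheap commodity nodes by
# retrying the request against replicas, instead of relying on
# expensive fault-proof hardware. Node names are hypothetical.

FAILED = {"node-a", "node-b"}  # pretend these commodity nodes are down

def query_node(node: str) -> str:
    """Simulate querying one replica; a failed node raises."""
    if node in FAILED:
        raise ConnectionError(f"{node} is down")
    return f"result from {node}"

def resilient_query(nodes) -> str:
    """Try each replica in turn; losing one node is routine, not fatal."""
    for node in nodes:
        try:
            return query_node(node)
        except ConnectionError:
            continue  # the software routes around the failed node
    raise RuntimeError("all replicas failed")

print(resilient_query(["node-a", "node-b", "node-c"]))  # result from node-c
```

The design choice is that availability comes from replication and retry logic in the application layer, so each individual box can be cheap and expendable.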

Web 2.0 is the new model and we are going to see that transition. It is not going to happen overnight, the old one is not going to go away, but, maybe two years from now, certainly five years from now, the new apps will be built on that platform.
