Defining the data centre of the future
Technology advances today allow virtualisation of the entire technology stack: compute, network, storage, and security layers. Software-defined everything brings greater agility to organisations’ IT services, along with cost savings and improved productivity.
Software Defined Networking is slowly but surely giving way to Software Defined Everything (SDE), as organisations seek to extend the benefits of virtualisation to their entire data centres.
Organisations taking the SDE plunge do so for various reasons.
Gregg Petersen, regional director, Middle East and SAARC, Veeam Software, says organisations need to take the software-defined route simply to meet the needs of the business today. “There are incredible demands on the data centre today. These include 24/7 operations, no patience for downtime and data loss as well as growing amounts of data. The only way to meet these expectations is to put the infrastructure to work in an intelligent fashion,” he says.
Regional organisations are considering how they can virtualise their data centres to invest in future requirements, says Ayman El-Sheikh, manager, Solutions Architect, META for Red Hat. Many of these organisations have a combination of legacy, physical, and virtual applications but also need to cater for future demand, notes El-Sheikh. “This involves organisations deciding on how they can add capacity on demand, and do so rapidly. Such considerations impose by default Software Defined Everything,” he says.
There has not been widespread adoption of the Software Defined Data Centre (SDDC) in the region, says Paul Griffiths, technical director, Advanced Technology Group at Riverbed, noting that the fear of losing control remains a major hurdle. Organisations also face having to overcome the existing complexity in their IT infrastructures. “There is however an accelerated uptake in the Government and Oil & Gas (specifically around offshore and remote locations) sectors, where certain leaders in these industries are virtualising even mission-critical applications and environments,” he adds.
Cloud has emerged as a key enabler of virtualisation.
Regional CIOs today are moving towards adoption of cloud computing, Petersen observes. These CIOs realise that virtualisation is critical to cloud computing because it simplifies the delivery of services by providing a platform for optimising complex IT resources in a scalable manner. As a result, a large number of enterprises regionally are actively virtualising their entire IT infrastructure in order to be ‘cloud-ready’.
Emerging markets like the Middle East are adopting virtualisation fastest since they have less legacy infrastructure and in many cases are leaping over several generations of technology in their attempt to catch up with regions that are more established, Petersen adds.
“More and more enterprises in the region are moving their business-critical applications onto virtual platforms, so there is definitely a commitment,” Petersen says. Virtualisation has in part enabled regional companies to provide better availability. “But the highest levels of availability are a combination of a number of things in the data centre today: being highly virtualised, investing in modern storage systems and having a cloud strategy,” he adds.
Those keen to take the SDE route have support and related technology from some of the biggest IT vendors.
Petersen says there are clear opportunities in bringing a software-defined approach to data centre availability. “Veeam’s approach is to extend SDE characteristics to the mechanisms in place for when things don’t go as expected, namely data centre availability. And today, that’s more than just backup,” he adds.
In particular, Veeam is introducing the Scale-Out Backup Repository, Petersen explains. “The Scale-Out Backup Repository allows customers and partners to increase data centre availability from another perspective with a software defined approach. The Scale-Out Backup Repository provides a management level scale-out approach to the backup storage provided to the data centre,” he explains.
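The scale-out idea Petersen describes can be illustrated with a toy sketch. This is not Veeam’s actual implementation; the class and extent names are hypothetical, and it shows only the general principle of presenting several storage extents as one logical pool and placing each backup on the extent with the most free space:

```python
from dataclasses import dataclass, field

@dataclass
class Extent:
    """One backup storage extent (a single repository inside the pool)."""
    name: str
    capacity_gb: int
    used_gb: int = 0
    files: list = field(default_factory=list)

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

class ScaleOutRepository:
    """Toy scale-out repository: many extents behave as one logical pool."""
    def __init__(self, extents):
        self.extents = extents

    def place(self, filename: str, size_gb: int) -> str:
        # Pick the extent with the most free space for each new backup file.
        target = max(self.extents, key=lambda e: e.free_gb)
        if target.free_gb < size_gb:
            raise RuntimeError("backup pool exhausted")
        target.used_gb += size_gb
        target.files.append(filename)
        return target.name

repo = ScaleOutRepository([Extent("ext-a", 100), Extent("ext-b", 200)])
print(repo.place("vm01-full.vbk", 50))   # ext-b (most free space)
print(repo.place("vm02-full.vbk", 120))  # still ext-b (150 GB free vs 100 GB)
```

The management layer, rather than any single storage device, decides where data lands, which is what makes the approach software-defined: capacity can be added by registering another extent, with no change to backup jobs.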
Red Hat’s SDE strategy works in two directions, says El-Sheikh. One of them is serving legacy, proprietary workloads that were not designed for the cloud, using a scale-out architecture.
Red Hat is also promoting its Open Hybrid Cloud solution, which the company says delivers more agile and flexible solutions while protecting business assets and preparing for the future.
Open Hybrid Cloud, El-Sheikh says, is especially viable for companies delivering Infrastructure as a Service (IaaS). Red Hat is also one of the major contributors to OpenStack, the free and open-source cloud-computing platform, mostly deployed as IaaS. Through OpenStack, El-Sheikh explains, Red Hat contributes to Neutron, the platform’s software-defined networking component. “We are investing heavily in Software Defined Storage and acquired Inktank a few years ago, a company that’s a leader in the space,” El-Sheikh says. “Everything is moving towards agile infrastructure with a scale-out architecture and we are investing heavily in that. After that is the management aspect that will enable the organisation to deliver applications faster,” he adds.
El-Sheikh says Red Hat can also help organisations with custom in-house applications to use the latest technology like containers or SaaS, enabling them to move from big, slow, monolithic applications to lightweight, scalable cloud workloads.
For Riverbed, the key is SD-WAN, says Griffiths. He says Riverbed solutions are part of the infrastructure critical to the Software-Defined Data Centre and the development and management of applications being hosted in public clouds.
Having begun by virtualising their servers, organisations arguably already have a working model for virtualising the entire data centre.
Griffiths however asserts that perhaps as a result of the lessons learned from server virtualisation, organisations are today focusing on meticulous planning before embarking on any ambitious virtualisation projects.
“My advice would be to make sure you understand how your applications work today and this means understanding not just the applications but also the platforms they are running on and the users that are accessing them,” says Griffiths, adding, “So when you move this into a virtualised environment, make sure you do not lose performance, availability, visibility or control.”
El-Sheikh urges prudence as well, citing a recent study showing that organisations that take the SDE journey gradually, moving from virtualisation to adding storage and then the network, have a higher success rate because they build their people’s skills step by step.
Virtualising the data centre is the gateway to doing more with the infrastructure, Petersen asserts. With virtualisation, storage systems have come under scrutiny for a number of reasons, Petersen notes. For one, virtualising the data centre puts more demands on the storage systems. For another, prioritisation needs to come into play to ensure service levels for priority workloads are delivered. “There’s a need to have complete visibility in the storage and virtualisation infrastructure to answer key questions on what is going on and what has changed,” Petersen says.
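The prioritisation Petersen mentions can be sketched in a few lines. This is an illustrative model only, not a real storage scheduler: workload names, priority numbers and the IOPS budget are all invented, and it simply grants each workload its demand in priority order so that higher tiers meet their service levels before lower tiers share what remains:

```python
def allocate_iops(workloads, total_iops):
    """Grant demanded IOPS in priority order (lower number = higher priority);
    lower tiers receive whatever budget is left over."""
    granted = {}
    remaining = total_iops
    for w in sorted(workloads, key=lambda w: w["priority"]):
        give = min(w["demand"], remaining)
        granted[w["name"]] = give
        remaining -= give
    return granted

workloads = [
    {"name": "erp-db",   "priority": 1, "demand": 6000},
    {"name": "web-tier", "priority": 2, "demand": 4000},
    {"name": "dev-test", "priority": 3, "demand": 5000},
]
# With a 10,000 IOPS budget, the two priority tiers are fully served
# and dev-test is starved until more capacity is added.
print(allocate_iops(workloads, 10000))
```

Real hypervisor and array QoS mechanisms are far more sophisticated (shares, reservations, limits), but the principle is the same: policy decides who gets the scarce resource, not arrival order.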
There are a number of challenges that companies face today in their endeavour to virtualise their infrastructure.
Software-defined anything is potentially going to cause key challenges with visibility and control, Griffiths says. “To be able to head down this route, organisations must make sure they understand their existing environment. This involves taking a baseline of the current infrastructure behaviour and application performance within it,” Griffiths says.
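The baselining Griffiths recommends can be as simple as recording response times before a migration and flagging anything that falls outside the historical norm afterwards. A minimal sketch, with made-up latency figures and a common mean-plus-three-standard-deviations threshold:

```python
import statistics

def baseline(samples):
    """Summarise historical response times into a mean and an alert threshold."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return mean, mean + 3 * stdev  # anything above the threshold is anomalous

def flag_regressions(new_samples, threshold):
    """Return post-change measurements that exceed the baseline threshold."""
    return [s for s in new_samples if s > threshold]

history = [102, 98, 105, 99, 101, 97, 103, 100]  # response times (ms) before the move
mean, threshold = baseline(history)

after = [104, 99, 180, 101]  # response times (ms) after virtualising the workload
print(flag_regressions(after, threshold))  # → [180]
```

Without such a baseline there is nothing to compare the virtualised environment against, which is exactly the trap Griffiths describes.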
If they fail to take due care and proceed with Software-Defined, Griffiths warns, they will have nothing to compare against and also lack the visibility needed to pinpoint issues when they arise. Similarly, without the right controls in place, they will not be able to address issues that arise.
Within the storage and backup sphere, Petersen highlights challenges including application availability. According to Petersen, the data centre is ultimately driven by the data within it. Providing verified recoverability and high-speed recovery for the applications in the data centre is a challenge today, and Veeam has pioneered solutions in this space to address it, he adds.
Petersen adds that full disaster recovery continues to challenge most organisations, and many have yet to address it fully.
SDE heralds new ways of delivering processes and features, notes El-Sheikh. “You have to think about developing a new culture with DevOps as a replacement for IT services management that was too bloated. The new strategy should be to break your team into smaller teams working in collaboration,” he adds.