Defeating downtime

The growth of bandwidth is being matched by the growth of bandwidth-intensive applications. Now firms are turning to QoS (quality of service) measures to keep users connected and make network traffic run more smoothly.

By Caroline Denslow | Published October 30, 2005

[Image: Network downtime does not just leave disgruntled users; it can disrupt business and cost end users significant amounts of money.]

In his 2000 work, Telecosm: The World After Bandwidth Abundance, telecom guru George Gilder gave a vision of a world where bandwidth was so readily available that its cost would be virtually nothing. “New technologies of sand, glass, and air will form a web with a total carrying power, from household to global crossing, at least a million times larger than the network of today,” Gilder wrote then. Coming from the man who predicted in 1989 that Intel would be the name to watch in the IT industry over the next decade, that vision was given a lot of weight.

Fast forward a few years and Gilder could be said to be spot on, at least as far as the supply side goes. “The demand for bandwidth has been growing even more quickly, with more applications getting centralised and newer, bandwidth-hungry applications such as VoIP (voice over internet protocol) going mainstream,” says Khalid Khan, marketing manager, 3Com Middle East and North Africa. “The market for real-time applications, like VoIP and videoconferencing, is maturing by the day. But in a lot of cases the amount of available bandwidth is not able to support these applications, leaving users stranded,” Khan adds.

The rapid rise of IP-based applications has put a lot of pressure on corporate networks to supply the right amount of resources to all of their users. And since each of these applications and their users make different demands on the network, allocating resources becomes a complicated process. For instance, e-mail users do not need a lot of bandwidth individually, but collectively they can put a lot of pressure on enterprise bandwidth. Browsing, with streaming webcasts becoming increasingly popular, can also jam the network.
Enterprise-level applications, too, such as enterprise resource planning (ERP), can hog a lot of bandwidth, particularly when their databases are being updated. VoIP and videoconferencing programs also need reasonably large doses of bandwidth. Such erratic behaviour in bandwidth consumption makes it difficult to establish any kind of pattern in users’ requirements, which in turn makes it harder to determine whether the company’s current bandwidth supply is enough to meet users’ collective needs.

Much of the bandwidth that a company buys is shared between all of its enterprise and office applications. When no priorities have been worked out, heavy usage by one application, such as a multimedia stream, can choke the traffic of another. Once the network becomes clogged, overburdened and under-resourced IT departments face a constant barrage of complaints from users who find themselves disconnected from the network or who find bandwidth too slow for their needs.

Downtime costs

[Image: Jan Hof of Extreme Networks claims multiple services need one network.]

However, there are more serious consequences to not having enough bandwidth than disgruntled users. Nowadays, network downtime is tantamount to business disruption, something that can cost end users significant sums of money. A study of 80 large companies in the US reveals that, on average, these firms experience 501 hours of network downtime each year, resulting in annual productivity and revenue losses running into millions of dollars. When a network fails, it is not simply a matter of not being able to send e-mails or access the internet: a short supply of bandwidth means the performance of mission-critical applications deteriorates.
Degradation in network performance can have various other consequences: unacceptable response times for crucial applications, loss of data, drops in productivity, or even the abandonment of tools by their users. Not having enough bandwidth also means systems administrators cannot deploy new applications, because the existing network cannot support the additional demands placed on it.

A simple yet expensive solution to the network traffic dilemma would be to throw more bandwidth at the problem. While this adhesive-bandage approach can prove effective at first, in the longer term it makes the issue more complicated and more difficult to address. Augmenting the current bandwidth supply all too often proves a losing strategy: the use, and abuse, of resources has always grown faster than falling prices can offset. Besides, the approach is imprecise. Added bandwidth is likely to be consumed by aggressive applications anyway, such as e-mails with huge attachments, which are often the least business-critical. Nor does it address the added latency (the amount of time it takes a packet to travel from source to destination) and jitter (a measure of the variability over time of the latency across a network) that resource-intensive applications can introduce into the network.

To help network managers maintain fast and reliable connections, defining quality of service (QoS) has become increasingly essential to ensure that users are supplied the right amount of resources at the right time. QoS, which refers to the differentiation of traffic flows for unequal processing, is the most general way to divide up service performance. Service may be grouped into classes for charging back costs, blocking user access, and controlling the quality of traffic flow. None of the QoS groupings carries specific guarantees; the only certainty is that the preferred classes will get better service than the less preferred.
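The latency and jitter figures defined above are easy to compute from per-packet measurements. As a rough illustration (the latency samples here are hypothetical, not from any real network):

```python
def mean_latency(latencies_ms):
    """Average one-way delay across a set of packet measurements."""
    return sum(latencies_ms) / len(latencies_ms)

def jitter(latencies_ms):
    """Jitter as the mean absolute difference between consecutive
    latency samples (latency variation over time)."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical per-packet latencies in milliseconds
samples = [20.0, 22.0, 19.0, 35.0, 21.0]
print(mean_latency(samples))  # 23.4
print(jitter(samples))        # (2 + 3 + 16 + 14) / 4 = 8.75
```

The one 35 ms outlier barely moves the average but dominates the jitter figure, which is why real-time traffic such as VoIP cares about jitter far more than about mean latency.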
Some people refer to QoS only with respect to service-level guarantees. Others distinguish between “hard QoS”, which guarantees particular quality metrics such as uptime, speed, latency or jitter, and “soft QoS”, which provides better-than-best-effort service but no guaranteed set of service distinctions. Best-effort service means there is no differentiation among different types of data flows; it is what most of us have to put up with when using the internet.

“In the past, companies thought that providing an overkill of bandwidth could also solve the problem,” says Jan Hof, director, field marketing, Extreme Networks, EMEA-SAM. “But in today’s infrastructure, multiple services and applications need to be served by one network. Imagine what will happen to your VoIP call if you did not enable QoS policies, or when your storage applications start to simultaneously throw terabytes of data on your network,” he adds.

And bandwidth in the Middle East is not all that cheap yet. “In the Middle East we talk in terms of Kbps, whereas in the West, Mbps is the norm,” says Roger Hockaday, director of marketing, Packeteer EMEA.

Access choices

[Image: Mohamed Emam claims today’s routers are programmed to prioritise network traffic.]

And that is hardly enough for a large multi-location enterprise. But what if bandwidth does fall in price? Even then, accessing that bandwidth still won’t be cheap. The entire equipment and infrastructure might need to be upgraded, and that itself can cost a lot. The situation today is very dynamic, with a lot of access modes to choose from: there are leased lines, there are flavours of digital subscriber line (DSL), and then there is multi-protocol label switching (MPLS). Each of these approaches requires different, and expensive, equipment to be bought and installed at customer premises. Many IT managers, says one vendor, are therefore still adding bandwidth incrementally.
“There is a reluctance to take big technology bets,” says Hockaday. Hence, the need to manage bandwidth is not going to go away in a hurry. So what does ensuring QoS mean from a CIO’s or network administrator’s point of view? It all boils down to managing the bandwidth consumed by enterprise software applications, voice and internet access. The starting point is to track how much bandwidth each application needs, and at what time of day or month. For this, there is no need to spend millions on some black box with a fancy acronym. Routers, which anyone setting up a WAN will have, will do the job.

From being dumb devices that simply moved network traffic around, routers today are intelligent and can be seen as first-level bandwidth management tools. “Some routers come programmed out of the box to prioritise network traffic for certain critical applications, such as VoIP. For other kinds of network traffic management, they need to be tuned,” says Mohamed Emam, technical manager of 3Com Middle East. Rate-limiting capabilities embedded in routers can control bandwidth utilisation by capping the amount of bandwidth that applications running on different ports can draw upon. Routers today also employ sophisticated queuing and prioritisation algorithms that serve data packets based on the priority queue they are assigned to.

There are, however, other, simpler ways of prioritising processes that involve basic management practices. If, for example, updating the databases of your enterprise resource planning (ERP) application is choking the network, it might make sense to shift that activity to non-business hours. By doing so, you not only reduce loads during peak hours, but also free most of your network’s resources for mission-critical jobs when there are fewer users around.
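The queue-based prioritisation described above can be sketched in a few lines. This is a simplified strict-priority model, not any particular router’s algorithm: packets tagged with a higher-priority class are always served before lower classes, and packets within a class leave in arrival order.

```python
import heapq

class PriorityScheduler:
    """Strict-priority packet scheduler: a lower class number means
    higher priority (e.g. class 0 for VoIP, class 2 for bulk e-mail)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves FIFO order within a class

    def enqueue(self, priority_class, packet):
        heapq.heappush(self._heap, (priority_class, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        # Pop the lowest (class, sequence) pair, i.e. the most urgent packet
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue(2, "email-1")
sched.enqueue(0, "voip-1")
sched.enqueue(2, "email-2")
sched.enqueue(0, "voip-2")
# VoIP packets leave first, each class in arrival order
print([sched.dequeue() for _ in range(4)])
# ['voip-1', 'voip-2', 'email-1', 'email-2']
```

A real router would add safeguards (for instance, a bandwidth floor for low classes) so that a flood of high-priority traffic cannot starve everything else, but the ordering logic is the same.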
Tracking traffic

[Image: Roger Hockaday of Packeteer EMEA says bandwidth does not come cheap in the Middle East.]

Another thing network administrators need to consider is that tracking and containing personal, non-business usage is critical to managing bandwidth and ensuring consistent QoS. “Anywhere between 20-30% of the bandwidth that a company pays for is consumed by non-business applications,” says Hockaday. Applications like streaming video and online games can take a toll on your resources. Monitoring users’ bandwidth consumption can help an IT department understand more clearly where the traffic jams occur and find ways to resolve them.

Companies that want a higher level of control over network traffic can also buy products specifically designed to manage bandwidth. There are two kinds of bandwidth management products: rate control products, which improve the default behaviour of transmission control protocol (TCP) connection endpoints; and queuing products, which create multiple queues and allocate priorities among them according to some algorithm.

Essentially, rate control products calculate what the values of TCP window sizes and other parameters ought to be to make a traffic flow perform within the desired constraints. They then plug those values into the packets as they pass through the device. Traffic flows that would, by default, nonchalantly hog the available pipe can be throttled down so that they leave room for other flows. One of the main advantages of the rate control approach is that the bandwidth management device is made aware of the full end-to-end-and-back path of traffic (at least with TCP flows), and can, to some extent, anticipate and smooth out throughput changes. Queuing, on the other hand, pushes high-priority traffic to a queue with the least backlog, while lower-priority flows find themselves in a large, slow queue. Some companies use both rate control and queuing products.
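Rate control appliances do their throttling by rewriting TCP window sizes in flight, which is hard to show briefly; a simpler way to see the underlying idea of holding a flow to a target rate is a token bucket, the classic rate-limiting mechanism. This is an illustrative sketch, not any vendor’s implementation:

```python
class TokenBucket:
    """Admits traffic up to `rate` bytes per second, with bursts of up
    to `capacity` bytes; packets beyond that are rejected (a real
    shaper would queue them instead of dropping)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # refill rate, bytes per second
        self.capacity = capacity  # maximum burst size, bytes
        self.tokens = capacity    # bucket starts full
        self.last = 0.0           # timestamp of the previous check

    def allow(self, packet_bytes, now):
        # Refill tokens for the time elapsed since the last check
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate=1000, capacity=1500)  # ~1 KB/s, 1500-byte bursts
print(bucket.allow(1500, now=0.0))  # True: the initial burst fits
print(bucket.allow(500, now=0.0))   # False: the bucket is now empty
print(bucket.allow(500, now=1.0))   # True: one second refills 1000 tokens
```

The same shape of logic, applied per flow, is what lets a bandwidth manager stop one greedy transfer from nonchalantly hogging the pipe.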
An advantage of implementing both techniques is that flows from a high-speed local area network (LAN) to a low-speed internet access link, as well as those coming from the internet into the LAN, can be managed effectively. Most of these bandwidth management products sit on the LAN side of access routers. Prices range from a couple of thousand dollars to over US$15,000, although most fall in the US$3,000 to US$5,000 range.

While these tools seem to offer a way out of every network manager’s bandwidth allocation nightmare, they have yet to gain widespread acceptance in the Middle East, says Stephen Grey, territory manager, WebSense Middle East and Africa. “These are still perceived as expensive although they can give high ROI quickly,” he says. “However, the idea that merely putting these applications on the network will solve all bandwidth issues is a fallacy,” Grey adds. Using these tools to their full potential, and gaining maximum benefit from the company’s investment, requires commitment from IT managers, who will have to work diligently at analysing traffic patterns and altering policies. But the work need not be complex or tedious, as there are products that can help by compressing and caching data.

Caching is a technique of keeping frequently accessed information in a location closer to the user. A web cache, for example, stores web pages and content on a storage device that is physically or logically closer to the user. By reducing the amount of traffic on WAN (wide area network) links and on overburdened web servers, caching provides significant benefits to enterprises. With a local cache in operation, user web object requests go via the local cache, which retains a copy of the requested object. All subsequent requests for the same object are then fulfilled from the local cache instead of from the site of origin.
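The cache lookup just described boils down to a table keyed by URL: miss once, fetch from the origin, then serve every repeat request locally. A minimal sketch (the fetch function here is a stand-in for a real HTTP request):

```python
class WebCache:
    """Toy web cache: serves repeat requests for a URL from local
    storage instead of fetching again from the origin server."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin  # callable: url -> content
        self._store = {}                 # url -> cached copy
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self._store:
            self.hits += 1    # served locally: no WAN traffic generated
        else:
            self.misses += 1  # first request: go to the site of origin
            self._store[url] = self._fetch(url)
        return self._store[url]

cache = WebCache(lambda url: f"<html>page at {url}</html>")
cache.get("http://example.com/")  # miss: fetched from the origin
cache.get("http://example.com/")  # hit: answered from the cache
print(cache.hits, cache.misses)   # 1 1
```

A production cache would also honour expiry and validation headers so that stale copies are refreshed, but the hit/miss mechanics, and the WAN traffic they save, are exactly this.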
The process of web caching minimises the number of times identical web objects are transferred from remote web sites by retaining copies of requested URLs in a cache. It can be particularly effective when a significant number of requests go to the same popular web sites. Without a cache, all user requests go directly to the remote site, generating web traffic and using up bandwidth. With a local cache, subsequent requests for previously cached URLs are answered with the cached copy of the object, which creates little or no extra network traffic, improves efficiency and reduces waiting time.

Compression

[Image: Khalid Khan of 3Com points to a huge demand for bandwidth.]

Another simple way to make your bandwidth management tools more effective is to pair them with compression, a way of reducing the size of a file or data stream being transmitted. By condensing data into smaller, more manageable packets, transmission becomes faster and requires less bandwidth.

If all the solutions mentioned earlier fail to resolve your traffic problems, the only answer might be to upgrade your current technology. If you are currently using frame relay and asynchronous transfer mode (ATM) systems for transmitting data across the WAN, you might want to consider multi-protocol label switching (MPLS). MPLS is a widely supported method of speeding up data communication over combined IP/ATM networks, improving the speed of packet processing and enhancing network performance. Put simply, MPLS involves tagging IP traffic to speed up data transmission. Data packets are indexed with labels, so routers need only look at the label and not the entire packet. This gives network operators a great deal of flexibility to divert and route traffic around link failures, congestion and bottlenecks.
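The bandwidth saving from compression is easy to demonstrate with any standard library. Using Python’s zlib as an illustration (not any specific WAN product’s method), repetitive traffic such as protocol headers or log text shrinks dramatically before it crosses the link:

```python
import zlib

# 100 repeats of a 26-byte request line: highly repetitive traffic
payload = b"GET /index.html HTTP/1.1\r\n" * 100
compressed = zlib.compress(payload)

print(len(payload))     # 2600 bytes on the wire if sent uncompressed
print(len(compressed))  # far fewer bytes after compression

# Compression is lossless: the receiving end restores the data exactly
assert zlib.decompress(compressed) == payload
```

The ratio depends entirely on the data: text and repeated structures compress well, while already-compressed content such as JPEG images or encrypted streams gains almost nothing.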
From a QoS standpoint, internet service providers will be better able to manage different kinds of data streams based on priority and service plans, and therefore offer more consistent service.

As with any other IT issue, managing bandwidth should not be left solely to out-of-the-box products and software applications. While these tools help implement the more technical aspects of prioritising users and allocating resources, or the more mundane activity of monitoring bandwidth consumption, user participation is key to making the whole process work. Before installing any bandwidth management device, systems administrators must first conduct an audit of their network to identify what is running on it. While the IT department knows what authorised applications are in use, chances are it has no idea what unauthorised programs users have installed, hogging resources and slowing down overall network performance. At the same time, network managers should make sure that the IT department and the users agree on certain ground rules, or policies, about who is going to get what bandwidth and when.

By following these straightforward practices and coupling them with bandwidth management tools, you might find that the right answer to your network traffic problem is not to add more lanes for company data to pass through, but to implement controls that make your traffic flow more smoothly.
