Streaming media for fun and profit

Call it the "glitz overhead": the high-resolution images, Macromedia Flash animation and point-of-sale security that draw customers to B2C sites. You might think the sizzle stops at the consumer's screen, but B2B developers are finding just as many uses for streaming media, and the overhead keeps getting steeper.

By Jon Tullett | Published January 24, 2001

Introduction

Streaming traffic means more than just full-motion video. Real-time and near-real-time data feeds can be delivered as streams, as can advanced Web communications such as VoIP (voice over IP) and videoconferencing.

Streaming media servers also can support QoS (Quality of Service) and, even better, integrate that support into an SLA (service-level agreement) format. The technology is tailor-made for content providers that want to sell varying degrees of access to partners or customers, especially from a billing perspective. It should come as no surprise then that market analysts, such as the Internet Research Group, predict that multimedia-type traffic, especially streaming media, will represent 40 percent of both B2C and B2B traffic by 2004.
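What "selling varying degrees of access" can mean at the network level is easiest to see in miniature. The sketch below is a hypothetical illustration, not any vendor's product: it marks a server's outbound streaming sockets with a DSCP code point according to the customer's service tier, the hook a policy-based QoS network can use to prioritise (and bill for) traffic differently.

```python
# Hypothetical sketch: tag outbound streaming sockets with a DSCP code point so a
# QoS-aware network can prioritise traffic by service tier. The tier map is assumed;
# a real SLA class map would come from the billing system.
import socket

DSCP_BY_TIER = {
    "gold": 46,    # Expedited Forwarding: low-latency streaming
    "silver": 26,  # Assured Forwarding AF31
    "bronze": 0,   # best effort
}

def make_stream_socket(tier: str) -> socket.socket:
    """Create a TCP socket whose outgoing packets carry the tier's DSCP marking."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    dscp = DSCP_BY_TIER.get(tier, 0)
    # The IP TOS byte carries the 6-bit DSCP value shifted left by two bits.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock  # the caller then connects to its (hypothetical) media endpoint
```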

Therein lies the problem. Such growth is bringing with it a host of performance issues that QoS is supposed to help solve. Mapping the software onto an optimal serving infrastructure is time-consuming, expensive and dauntingly complex. While just two or three colocated server farms are enough for a site like barnesandnoble.com to get decent response time during the Christmas rush, B2B service provisioning is a different animal: Its serving side is typically obligated by contract to provide a minimum level of service to its customers.

Web application providers have additional headaches: advanced DHTML (Dynamic HTML) and XHTML (Extensible HTML) pages, JavaScript, ActiveX controls, Java applets, and an increasing number of Web-oriented coding languages and application snippets. Many of these can run independently on the server or the client side, but interactive-business-application serving is bringing with it a new need for constant server-to-client throughput performance. Keeping a customer interested in a product for a few minutes is one thing; making sure your customer is satisfied with daily application performance is another.

Geography lessons

If you plan to feed any form of streaming media to your customers, you'll need to map their locations to optimal NAPs (network access points) and PoPs (points of presence), taking into account both software and hardware. Business data must be protected, and security requires not only money but throughput. And the headaches don't stop at server and network performance, local redundancy or service-side disaster recovery, either. You have to worry about multiple serving sites - especially if your audience is international. High-capacity networks in the US and Europe can support heavy-demand multimedia to some extent, but with international bandwidth bottlenecks between the Middle East and the rest of the world, a better solution is needed.
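Mapping a customer to the right PoP ultimately comes down to measurement. The snippet below is a minimal sketch, assuming a handful of candidate serving sites (the hostnames are placeholders): it times a TCP handshake to each PoP from the client's vantage point and picks the fastest responder.

```python
# Minimal sketch: choose the "closest" PoP by timing a TCP handshake to each candidate.
# The candidate hostnames are placeholders, not real serving sites.
import socket
import time

CANDIDATE_POPS = [
    "pop-london.example.net",
    "pop-newyork.example.net",
    "pop-dubai.example.net",
]

def tcp_rtt(host: str, port: int = 80, timeout: float = 2.0) -> float:
    """Return the seconds taken to complete a TCP handshake, or infinity on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def best_pop(candidates=CANDIDATE_POPS) -> str:
    """Pick the candidate PoP with the lowest measured handshake time."""
    return min(candidates, key=tcp_rtt)
```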

Content-delivery networks

CDNs (content-delivery networks) sprang into existence as one solution to the streaming-media problem. Early entrants such as Akamai Technologies and epicRealm set out to provide advanced Web designers with an easily implemented solution to help move heavy Web content reliably and quickly to end users. Their solutions combined state-of-the-art Web hosting facilities and high-end network infrastructure, both of which were optimised for streaming. They established alliances with a variety of regional ISPs and outfitted their own serving centres with forward-proxy caching, in which content is pulled into servers at the "edges" of the Internet. Such a strategy allowed for a much more even content stream to any location because fewer central PoP or third-party backbone routers got in the way. This form of passive content distribution is still employed by access providers and large content providers, including America Online and MSN.
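The forward-proxy caching model itself is simple to sketch. The toy cache below (the origin URL is a placeholder) shows the pull-through behaviour: objects are fetched from the central site only on a miss, then served from the edge on every subsequent request.

```python
# Toy pull-through (forward-proxy style) cache: serve from the edge copy when present,
# fetch from the origin only on a miss. The origin URL is a placeholder.
import urllib.request

class EdgeCache:
    def __init__(self, origin_base: str):
        self.origin_base = origin_base
        self.store = {}            # path -> cached bytes (a real cache persists to disk)

    def get(self, path: str) -> bytes:
        if path in self.store:     # cache hit: no trip back to the central site
            return self.store[path]
        with urllib.request.urlopen(self.origin_base + path) as resp:  # cache miss
            body = resp.read()
        self.store[path] = body    # keep the object at the edge for the next request
        return body

# cache = EdgeCache("http://origin.example.com")   # hypothetical origin
# clip = cache.get("/media/intro.swf")             # first call pulls; later calls stay local
```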

Since then, CDN solutions have left backbone infrastructure largely to underlying-bandwidth-provider allies. The CDNs, meanwhile, have concentrated on improving master site replication, traffic redirection, and wide-area load-balancing and traffic analysis and management. While all these benefits look great on paper, the real question becomes whether they make sense from an outsourcing perspective.

Outsourcing qualms aside, CDNs offer a real advantage: They give content providers the ability to deliver content on a national or global scale and across multiple Internet backbones without sacrificing performance or security, and they provide the ability to monitor the application for traffic and usage statistics.

While these may seem like obvious considerations, they're still highly complex, and the solutions are evolving very quickly across hardware, software and even protocol horizons. Measuring performance across multiple Internet backbones is a major undertaking that requires national or even global QA (quality assurance) testing processes and constant open dialogue with bandwidth providers. Application developers may be tempted to brush off the subject of encapsulating application data in an easily transferable format simply by mentioning XML.

But determining a standard XML document-handling structure between disparate systems, and then applying a universally accepted overlying security mechanism, is again a complicated development project. More than just XML programming, this project usually requires delicate work on both the client and the server sides. CDNs can help in this situation by providing a cohesive baseline development framework that's already equipped with a working network layer.
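What "a standard XML document-handling structure plus an overlying security mechanism" might look like in the small is sketched below. The envelope layout and the shared-secret HMAC are assumptions chosen for illustration, not any consortium's format: one side wraps a payload and signs it, the other verifies the signature before trusting the data.

```python
# Illustrative only: wrap a payload in a simple XML envelope and attach an HMAC
# so the receiving system can verify it came from a trusted partner.
# The envelope schema and shared-secret scheme are assumptions for this sketch.
import hmac, hashlib
import xml.etree.ElementTree as ET

SHARED_SECRET = b"replace-with-negotiated-key"   # assumed out-of-band key exchange

def build_envelope(sender: str, payload: str) -> bytes:
    env = ET.Element("envelope")
    ET.SubElement(env, "sender").text = sender
    ET.SubElement(env, "payload").text = payload
    digest = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    ET.SubElement(env, "signature").text = digest
    return ET.tostring(env)

def verify_envelope(doc: bytes) -> bool:
    env = ET.fromstring(doc)
    payload = env.findtext("payload") or ""
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, env.findtext("signature") or "")
```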

CDNs typically offer similar features. What differs is how they are implemented and what they cost. As a baseline, nearly all CDNs address the following criteria:

- Reliable and anonymous reading capability, with equally reliable, distributed writing capability. Typically, these capabilities must transcend both geographic and system boundaries.
- Distributed storage, with the ability to migrate data among systems in different geographic locations.
- Integrated security and authentication schemes that are easily deployed on any legacy or back-end permission system already in place.
- Special emphasis on pumping advanced Web content, especially video, sound and animation. This should include support for more than just one type of Web multimedia format. (In other words, support only for Macromedia's Flash or for RealNetworks' RealPlayer is a no-no.)
- Performance metrics, meaning not only minimum performance guarantees but also the ability to verify those guarantees long-term and across multiple Web backbones. Such metrics include raw throughput, in addition to overall uptime and QoS for advanced forms of Web media (see the sketch after this list).
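That last point is one a content provider can spot-check independently. Below is a rough sketch, assuming per-region probe URLs (placeholders here): it times a download of a known object from each serving region and reports effective throughput, the raw ingredient of any SLA report, and flags unreachable regions against uptime.

```python
# Rough SLA spot-check: download a known object from each region and report
# effective throughput. The probe URLs are placeholders for real edge endpoints.
import time
import urllib.request

PROBES = {
    "us-east": "http://edge-us.example.net/probe.bin",
    "europe":  "http://edge-eu.example.net/probe.bin",
    "mideast": "http://edge-me.example.net/probe.bin",
}

def measure(url: str, timeout: float = 10.0):
    """Return (seconds, bytes) for one fetch, or None if the region is unreachable."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            size = len(resp.read())
        return time.monotonic() - start, size
    except OSError:
        return None                # counts against the uptime side of the SLA

def report():
    for region, url in PROBES.items():
        result = measure(url)
        if result is None:
            print(f"{region}: DOWN")
        else:
            secs, size = result
            print(f"{region}: {size / secs / 1024:.1f} KB/s")
```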

Reinventing the wheel

Many CDNs also provide specialised billing and personalisation solutions that often require some kind of embedded application code or even a client-side module. Akamai's EdgeScape service, for example, requires additional code in customers' applications and Web servers but can then automatically serve up country-specific content based on an IP lookup.
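The IP-lookup idea is easy to see in miniature, even though EdgeScape's actual mechanics are proprietary. The following is a conceptual sketch only - the address ranges and content catalogue are invented - mapping a requester's address to a country and returning the matching content variant.

```python
# Conceptual illustration of IP-based localisation. The address ranges and catalogue
# are invented, and this is not how EdgeScape itself is implemented.
import ipaddress

COUNTRY_BY_NET = {                         # hypothetical address ranges
    ipaddress.ip_network("198.51.100.0/24"): "AE",
    ipaddress.ip_network("203.0.113.0/24"):  "GB",
}

CONTENT_BY_COUNTRY = {                     # hypothetical localised variants
    "AE": "/promos/gulf_edition.html",
    "GB": "/promos/uk_edition.html",
}

def localised_page(client_ip: str, default: str = "/promos/global.html") -> str:
    """Return the content path matching the requester's country, or a global default."""
    addr = ipaddress.ip_address(client_ip)
    for net, country in COUNTRY_BY_NET.items():
        if addr in net:
            return CONTENT_BY_COUNTRY.get(country, default)
    return default

# localised_page("198.51.100.7")  ->  "/promos/gulf_edition.html"
```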

Sounds great, but the next question out of your mouth would probably be "How does it work?" The answer can vary, but it all boils down to protocol enhancements - usually proprietary - to everyday IP services. A prime example is the 800-pound gorilla of the CDN market, Akamai. This CDN pioneer has a number of performance enhancers and metric engines aimed mainly at Web sites with high multimedia serving needs.

In Akamai's case, the company is quite open about the fact that all customer content needs to be "Akamaized" - essentially, modified with Akamai's own code and moved to its servers. While this presents excellent benefits from a time-to-market standpoint, it means you've really crossed the outsourcing line. While you may have a central site that is still under local control, that site is really just a staging area; your customers will access only Akamai's servers. Losing control over your data can affect future software-design decisions, and it can make for real heartaches if the business relationship ever sours.

Unsustainable?

Also, many analysts predict that such a closed-technology infrastructure model cannot be sustained as the Internet continues to grow and change. This conclusion boils down to two basic tenets. First, the proprietary technology has a limit on its scalability and flexibility. After all, a solution like Akamai's is only as scalable as the number of underlying ISPs and bandwidth providers that choose to ally with it. That could be a big problem, considering the speed at which the Web is growing and the complexity of the content it carries. Second, large networks with closed technology requirements will likely place roadblocks in the way of open communication between disparate bandwidth and content providers. Since a successful content-delivery system actually requires the cooperation of multiple entities (notably content providers, access providers and Web hosters), any kind of barrier to communication is a liability.

In their defence, Akamai and companies like it are aware of this dynamic and have taken steps to improve their relationships with other vendors. But most of this effort is centred on technical alliances with specific partners, not a shift to a truly open-technology architecture. That play is being taken up by a series of vendor-founded consortia, including the Broadband Content Delivery Forum, Content Bridge and IPDR.org.

Open standards

Even with this new focus on open standards, any concrete results are many months away at best - a lifetime for burgeoning Web companies. The proprietary penalty is not nearly as troubling to Web developers as the time and expense required to implement similar solutions on their own. CDNs look simple because they're basically robust Web gateways with a raft of additional management features. However, the trick lies not only in building such a gateway, but also in maintaining it and especially in continuing its traffic-management features even after that traffic has left your primary Web backbone.

That means the first cost will be in the underlying hardware - let's say a robust cluster of Sun Enterprise 220 and 450 servers from Sun Microsystems, complete with related switching, routing and load-balancing devices. These will need to be housed in cages at a bandwidth provider with fat national pipes, and backed by a series of Web and caching servers across your customer region.

You would then need to construct a management scheme that could track traffic and perform updates across cache domains. This scheme would increase reliability and allow for accurate demographic tracking, which could then be built directly into your application. Your bandwidth provider also would need to have solid QoS capability - preferably policy-based - that would enable you to offer streaming media content at variable price rates. An internal billing system to this effect would be a boon as well.
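A bare-bones version of that cross-cache update machinery might look like the sketch below. The purge endpoint and node list are assumptions for illustration: when the origin changes an object, the management layer tells every cache domain to drop its stale copy so the next request pulls a fresh one.

```python
# Bare-bones sketch of propagating a cache purge across edge nodes so stale copies
# are dropped after an origin update. Node hostnames and the /purge endpoint are assumed.
import json
import urllib.request

EDGE_NODES = [                      # hypothetical cache domains
    "http://cache-eu.example.net",
    "http://cache-us.example.net",
]

def purge(path: str):
    """Tell every edge node to drop its copy of `path`; return nodes that need a retry."""
    notice = json.dumps({"action": "purge", "path": path}).encode()
    failed = []
    for node in EDGE_NODES:
        req = urllib.request.Request(node + "/purge", data=notice,
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5).close()
        except OSError:
            failed.append(node)     # a real scheme would queue these for retry
    return failed
```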

Think that's difficult? The really hard part comes when your traffic is forced to leave your bandwidth provider's backbone. You'll need to make deals with all the major and minor backbone providers to ensure that your cache-updating and traffic-monitoring capabilities can be continued across their networks. Companies like Akamai already have these relationships in place.

Only after you strike these deals can you begin to build the bridging technology that will allow all this cross-network management information exchange to occur. Of course, this involves serious expense in terms of infrastructure, software development and talent. However, you're also talking about an even more valuable commodity: time.

Contrast this with using an existing - if proprietary - CDN, and it's easy to see the potential cost and time savings. Although the prices of these services do seem inflated compared with typical business Web hosting costs, their added value is undeniable for media-rich content providers. Rolling your own solution in this situation makes little sense except for the most well-heeled providers.

CDNs like Akamai can leverage their deals with backbone providers to offer carrier-quality content delivery at a much more attainable cost.

But does that mean CDN technology is a must-have for all B2Bs? Not at all. In fact, our analysis shows that it really makes sense only if you're running a true-blue Web application. By that, we mean a distributed application that transfers considerable amounts of secure, non-HTML data back and forth between end users and application servers - especially any kind of streaming or near-real-time data interchange. These kinds of applications require the steady, unbroken traffic streams that CDNs excel at providing.

Many sites, though, don't run this kind of software. Sites that traffic in straight HTML information presentation or in one-time, cacheable data transactions have little need of cross-backbone performance guarantees beyond what they already get from their backbone providers. Analysing whether your site will need a CDN's services means looking not only at the results you want in the future, but also at what your customers think of the results they're getting now. A high volume of problems with timely content delivery or demographic accuracy is an indicator that a CDN might be the right way to go. Whether such a move is an immediate necessity or can wait until the various vendor consortia have brought open technology to market (probably around early to mid-2002) is a more personal decision.
