Half the story

Following on the heels of last month’s comment on a reader’s plea for guidance on who to trust when evaluating new solutions comes a Forrester Research Quick Take report discussing the half-truths around performance benchmarks.

By Colin Edwards. Published February 13, 2006.

These performance tests are a core element of both server vendor marketing and IT server selection. IT buyers, however, should not use them in isolation when selecting servers, says Forrester, which highlights the importance of the operating system (OS), rather than server benchmarks, in optimising application performance.

“Only this approach will help separate the myth and reality of a vendor’s overall sweeping claims about its new server breakthroughs,” writes the study’s author, Brad Day, who believes buyers tend to focus on the improved features, functionality and performance of new server platforms as the core selection criteria when short-listing a new systems architecture.

“One must remember, however, that the operating system is the first technology component of the systems architecture that must be scrutinised,” warns Day, stressing the importance of investigating the features, functions and benefits of the operating system first, and of establishing which OS features differentiate the application solutions on one systems architecture from another.

In other words, vendor benchmark claims tell only half the story.

Often glossed over is the requirement for the latest version of the operating system, and for the OS features and functions that contribute to those application performance results. To back his view, Day cites IBM’s benchmarks for the eServer p5 and Sun Microsystems’ CoolThreads performance claims.

In IBM’s case, its new server model takes first place in 14 industry-leading benchmarks. But to reach that peak performance, applications must be optimised and production-ready on AIX 5L Version 5.3 and its Simultaneous Multi-Threading technology, which was released in 2004 alongside the p5.

Sun Microsystems’ T1000 and T2000 CoolThreads servers achieved world-record benchmark results in Java, mail and Web-tier application workloads. However, those machines run only Solaris 10, so only applications that have been upgraded and/or optimised for Solaris 10 can take advantage of Sun’s CoolThreads server platforms.
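The workloads Forrester cites here are throughput-oriented: large numbers of small, independent requests spread across many software threads, which is the shape of work a chip-multithreading design is built to absorb (the UltraSPARC T1 processor in these machines exposes 32 hardware threads). The sketch below shows that pattern in POSIX threads; it is a generic illustration, not Sun’s benchmark code, and the worker and request counts are arbitrary.

    /*
     * A generic sketch of a throughput-style workload: a pool of
     * worker threads draining a shared count of independent requests.
     * WORKERS is set to 32 to match the T1's hardware thread count;
     * REQUESTS is an arbitrary illustrative figure.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define WORKERS  32
    #define REQUESTS 100000

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int next_request = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            /* claim the next request under the lock */
            pthread_mutex_lock(&lock);
            int req = (next_request < REQUESTS) ? next_request++ : -1;
            pthread_mutex_unlock(&lock);
            if (req < 0)
                break;
            /* stand-in for real per-request work: parse, look up, respond */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[WORKERS];

        for (int i = 0; i < WORKERS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < WORKERS; i++)
            pthread_join(threads[i], NULL);

        printf("processed %d requests across %d worker threads\n",
               REQUESTS, WORKERS);
        return 0;
    }

On most Unix systems this builds with cc -o workers workers.c -lpthread; the point is simply that the more independent workers an application keeps runnable, the more such a server has to schedule.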

Day says that while certain functionality is shared equally between the operating system and the hardware micro-architecture design, more often than not the core differentiation rests with the functions delivered through the operating system environment.

IBM, for example, has new micro-partitioning technology at the heart of its eServer p5 virtualisation strategy, yet key aspects of that technology are realised only in the systems software of AIX 5L Version 5.3, coupled with the micro-architecture of the eServer p5 systems.

Sun’s DTrace software tool, one of the most powerful systems software facilities in Solaris, is a similar case in point. It is unmatched for real-time troubleshooting of the network as well as for tuning system performance, but firms running Solaris 8 or 9 on their Sun Fire server platforms cannot take advantage of DTrace and will need to upgrade to Solaris 10, Forrester points out.
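Since DTrace ships only from Solaris 10 onwards, the first practical step for any script or support tool that wants to lean on it is a release check. Below is a minimal C sketch of that check, an illustration not drawn from the Forrester report, assuming nothing beyond the standard POSIX uname(2) interface; the DTrace one-liner quoted in the comment is the canonical example that counts system calls by process name.

    /*
     * Check that the running Solaris release is new enough to offer
     * DTrace. Solaris 10 reports itself through uname(2) as SunOS
     * release 5.10; Solaris 8 and 9 report 5.8 and 5.9. The release is
     * parsed numerically because a plain string compare would sort
     * "5.9" after "5.10".
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        int maj = 0, min = 0;

        if (uname(&u) != 0) {
            perror("uname");
            return 2;
        }
        sscanf(u.release, "%d.%d", &maj, &min);

        if (strcmp(u.sysname, "SunOS") == 0 &&
            (maj > 5 || (maj == 5 && min >= 10))) {
            /*
             * Safe to rely on DTrace from here, for example the
             * canonical one-liner that counts system calls by process:
             *   dtrace -n 'syscall:::entry { @[execname] = count(); }'
             */
            printf("%s %s: DTrace is available\n", u.sysname, u.release);
            return 0;
        }
        printf("%s %s: DTrace requires an upgrade to Solaris 10\n",
               u.sysname, u.release);
        return 1;
    }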
