The reality of grid computing
If the internet is considered as an intercontinental network of networks, then grid computing can be viewed as a network of computation and collaboration assets. Grid computing has been defined as an ambitious global effort to develop an environment in which individual users can access computers, databases and experimental facilities without having to consider where those facilities are located.
These pooled assets are known as ‘virtual organisations’ and are distributed across the globe. Although grid computing is firmly entrenched in the realm of academic and research activities, commercial enterprises are turning to it to solve complex problems such as weather prediction and 3D modelling of the human genome. Increased network bandwidth and the acceptance of the internet are driving the demand for sophisticated computing. At the same time, pressures such as research and development (R&D) costs and time-to-market, together with demands for greater throughput and improved quality, are foremost in the minds of administrators.
In addition, computational needs are outpacing the ability of organisations to deploy sufficient resources to meet growing workload demands. In today’s competitive business environment, IT budgets are tightly controlled and CIOs are forced to do more with less. For instance, the average utilisation within an enterprise is approximately 52% for storage, 40% for Linux systems, 10% for Unix systems and 5% for Windows workstations.
This suggests that enterprises do not need a more powerful infrastructure; they simply need to use their existing resources efficiently. Such companies should find a way to bring all their applications and machines together into a pool of potential labour, providing secure and reliable access to this untapped capacity.
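As a back-of-the-envelope illustration of that argument, the utilisation figures quoted above imply large amounts of idle capacity waiting to be pooled. The Python sketch below (the percentages are the figures from the text; everything else is invented for illustration) simply computes the idle fraction of each resource class:

```python
# Utilisation figures quoted in the text; resource-class names are
# paraphrased for the example.
utilisation = {
    "storage": 0.52,
    "linux_systems": 0.40,
    "unix_systems": 0.10,
    "windows_workstations": 0.05,
}

def idle_fraction(util: float) -> float:
    """Fraction of a resource class left unused."""
    return 1.0 - util

# Even the best-utilised class leaves nearly half its capacity idle.
for name, util in utilisation.items():
    print(f"{name}: {idle_fraction(util):.0%} idle")
```

Run as-is, this prints an idle share of 48% to 95% per class, which is the quantitative heart of the ‘do more with less’ case for pooling.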
Grid computing is emerging as a viable technology that businesses can use to wring more profit and productivity out of their IT resources. Over the last decade, enterprises have become increasingly concerned with the need to collaborate efficiently, share disparate data and interact across heterogeneous distributed resources. Grid technologies enable the sharing and co-ordination of these distributed resources, and compared with traditional computing platforms they are more cost-effective and flexible.
From a developer's perspective, grids are composed of ‘virtual organisations’ that use a common suite of protocols. These ‘virtual organisations’ can range from a handful of servers or desktop PCs in one room to a heterogeneous collection of disparate systems.
Work is underway at the Global Grid Forum to organise these protocols under the Open Grid Services Architecture (OGSA), which has grown out of the open, standards-based Web Services world. It is called an architecture because it is mainly about describing and building a well-defined set of interfaces from which systems can be built, all based on open standards such as the Web Services Description Language (WSDL). Furthermore, OGSA builds on the experience gained from building the Globus Toolkit, a valuable reference implementation.
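The idea of building systems from a well-defined set of interfaces can be sketched in a few lines of code. The following is not OGSA or WSDL itself, merely an illustrative Python analogy in which every conforming service exposes the same contract, so a client can drive any of them without knowing what sits behind the interface (the service names and methods are invented for the example):

```python
from abc import ABC, abstractmethod

class GridService(ABC):
    """Illustrative stand-in for a well-defined service interface.

    In OGSA the contract would be expressed in WSDL; here an abstract
    base class plays the same role for the sake of the sketch.
    """

    @abstractmethod
    def describe(self) -> str:
        """Return a human-readable description of the service."""

    @abstractmethod
    def invoke(self, payload: str) -> str:
        """Perform the service's work on the given payload."""

class EchoService(GridService):
    def describe(self) -> str:
        return "echo: returns its input unchanged"

    def invoke(self, payload: str) -> str:
        return payload

class UpperCaseService(GridService):
    def describe(self) -> str:
        return "upper: returns its input upper-cased"

    def invoke(self, payload: str) -> str:
        return payload.upper()

# A client needs only the common interface, never the implementation.
for service in (EchoService(), UpperCaseService()):
    print(service.describe(), "->", service.invoke("genome data"))
```

The point of the sketch is the one OGSA makes: agree on the interface contract and open standards, and heterogeneous implementations become interchangeable.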
Government laboratories and scientific organisations have been using grid technologies for some time, solving some of the most complex and important problems facing mankind. Today's challenging business climate requires continuous innovation to differentiate products and services, and businesses must adjust dynamically and efficiently to marketplace shifts and customer demands. When grid experts talk about an individual service, they call it a service instance; services and service instances can be lightweight and transient.
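The ‘lightweight and transient’ character of a service instance can be illustrated with a small factory sketch. This is a hypothetical Python example, not a grid toolkit API: an instance is cheap to create, carries a unique identity, and is explicitly destroyed when its work is done.

```python
import itertools

class ServiceInstance:
    """A lightweight, transient service instance (illustrative only)."""

    _ids = itertools.count(1)  # class-wide counter for unique ids

    def __init__(self, kind: str):
        self.kind = kind
        self.instance_id = next(self._ids)
        self.alive = True

    def destroy(self) -> None:
        """Tear the instance down once its work is finished."""
        self.alive = False

def create_instance(kind: str) -> ServiceInstance:
    """Factory: spin up a fresh, short-lived instance of a service."""
    return ServiceInstance(kind)

# Spin one up, use it, tear it down again.
inst = create_instance("render-job")
print(f"instance {inst.instance_id} ({inst.kind}) alive={inst.alive}")
inst.destroy()
print(f"instance {inst.instance_id} alive={inst.alive}")
```

The design choice the sketch highlights is that instances are disposable: nothing about the grid's state depends on any single one surviving.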
Proponents of grid technology say that, just as utilities made power available on demand, the same will happen with computing power. Making it happen, however, is all about how we build the protocols and services that allow the computers in a grid to interact. Dr Ian Foster uses the term ‘semantics’ to describe the power of OGSA to define service instances.
The use of the term ‘semantics’, borrowed from linguistics and psychology, is a big clue that grid computing is not just about data, processors and tasks; it is about context and meaning. Semantics in a computer-programming environment means more than simply applying computing power to process data: it is about bringing a problem to the grid and finding a solution to that problem. To understand the potential of grid computing, enterprises need to equate its benefits to something more tangible, such as utilities like water, electricity and gas.
Grid computing should be available at the push of a button. A grid must be able to quickly ascertain what resources are available on any computer and it should not be incapacitated by a slow or dated system. It should be autonomic in nature.
There are three main requirements for a successful grid environment: security, quality of service (QoS) and reliability. Security is paramount; after all, we do not want just anyone gaining access to grid resources. The QoS requirement gives the grid the capability to transfer data to another node in adverse situations. Reliability and performance also remain important.
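The QoS behaviour described above — redirecting data to another node when conditions turn adverse — amounts to a failover loop. The sketch below is a hypothetical Python illustration (node names and the simulated outage are invented), not a real grid protocol:

```python
def transfer(data: bytes, node: dict) -> str:
    """Pretend to deliver data to a node; an 'up': False entry
    simulates a node that is down or degraded."""
    if not node["up"]:
        raise ConnectionError(f"{node['name']} unreachable")
    return f"{len(data)} bytes delivered to {node['name']}"

def send_with_failover(data: bytes, candidates: list) -> str:
    """Try each candidate node in turn, falling back on failure."""
    for node in candidates:
        try:
            return transfer(data, node)
        except ConnectionError:
            continue  # adverse situation: move on to the next node
    raise RuntimeError("no node could accept the data")

result = send_with_failover(b"results", [
    {"name": "primary", "up": False},  # simulated outage
    {"name": "backup",  "up": True},
])
print(result)  # prints "7 bytes delivered to backup"
```

The reliability requirement in the same sentence is really the other face of this loop: the grid stays useful only if some candidate in the list is usually up.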
If the grid cannot perform, then the business case for it certainly diminishes. The future seems to belong to grid computing. However, it remains to be seen if businesses are willing to take a gamble with the commercialisation of grid computing.