Testing times

The late 1990s saw companies across the globe reach for testing tools as Y2K loomed. Since then, the market appears to have gone quiet as end users rely on independent software vendors to act on their behalf. However, as applications become increasingly mission critical in the Middle East, the tide could be about to turn once more.

By Maddy Reddy | Published June 3, 2004

[Image: Michael Feord, product director at Compuware Europe, Middle East & Africa.]

The largest blackout in North American history happened last year. It left 50 million customers without power, forced the shutdown of 100 power plants and cost approximately US$6 billion in just two days. The catastrophe was caused by a simple software bug in one of the utility company’s power monitoring and management applications, and was only discovered after staff had checked more than four million lines of legacy code. Had the utility company tested its software and run simulations to mimic such a disaster, it might have discovered the bug, avoided the blackout and saved US$6 billion.

Yet since the late 1990s, when companies deployed testing software from the likes of Segue, Rational, Mercury and Compuware to ensure their systems were Y2K ready, the appetite for testing software appears to have waned. No longer do teams of highly paid consultants work around the clock with load and performance testing solutions, always wondering ‘what if’. Instead, applications are released into the wild with hardly a sideways glance, and hardware is hot swappable to the extent that users rarely notice the change.

This scenario is even more apparent in the Middle East, where a smaller number of companies took the steps then considered necessary to be Y2K ready. “Customers [in the Middle East] are yet to buy into the concept of testing software tools. Some customers talk to the independent software vendors (ISVs) directly, while some use the services of custom software development houses or talk to other experienced customers,” says Ayman Abouseif, senior marketing director, Oracle Middle East & Africa, which formed a global relationship with Mercury in late 2002.
Even International Foodstuffs Company (IFFCO), which used to build a dedicated testing environment whenever it made changes to its Baan enterprise resource planning (ERP) suite, has changed its approach to testing applications before they go live. Today it no longer tests inhouse, but uses the Sun and Tech Access iForce centre at Dubai Internet City (DIC).

“Since we are running on a live environment across 10 locations in the group, it’s imperative that we test our code before we apply any patch or upgrade, before we actually deploy it in a live environment. [We used to] create a kind of server environment, and then test. That was a pretty painful process, as it’s not sensible to keep a server pending just to do testing at any point of time,” says Venkatesh Mahadevan, general manager of IT at IFFCO. “Now we use the iForce centre, where in essence we simulate the live environment and apply the service packs, run a few transactions, let users try the system, [and] see usage results, making sure the data generated tallies [with our existing systems],” he explains.

In addition to the carefree attitude of local end users and the pain associated with testing applications inhouse, another reason for the apparent lack of testing software installations in the Middle East is that vendors are taking greater responsibility for getting it right on users’ behalf. “Depending on what they [users] are deploying... they work with the software and hardware vendors for right sizing of the server and to make sure the software delivers what it promises,” says Toni Prince, business development manager, Intel, Middle East & Africa (MEA). “Customers buy all the hardware they need from hardware vendors, who offer the right sizing tools based on data volumes, scalability, requirements and the application. Although this doesn’t mean it’s right all the time, [overall] it is right most of the time,” adds Abouseif.
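The patch-verification pattern Mahadevan describes — replay the same transactions against a pre-patch snapshot and a patched test copy, then check that the data generated tallies — can be sketched in a few lines. This is a minimal illustration, not IFFCO's actual tooling: the `run_transaction` function and the dictionary-based environments are hypothetical stand-ins.

```python
# Minimal sketch of pre-deployment patch verification: run identical
# transactions in two environments and confirm the results tally.
# All names here are illustrative, not a real ERP API.

def run_transaction(env: dict, item: str, qty: int) -> None:
    """Apply one stock movement to an environment's ledger."""
    env[item] = env.get(item, 0) + qty

def tallies(live: dict, test: dict) -> bool:
    """True when both environments produced identical data."""
    return live == test

# Snapshot of the live system's data before the patch.
live_env = {"flour": 100, "oil": 40}
# The patched test environment starts from the same snapshot.
test_env = dict(live_env)

transactions = [("flour", -20), ("oil", 5), ("flour", 10)]
for item, qty in transactions:
    run_transaction(live_env, item, qty)
    run_transaction(test_env, item, qty)

# Only promote the patch when results tally with the existing system.
assert tallies(live_env, test_env), "patch changed transaction results"
print(live_env)  # {'flour': 90, 'oil': 45}
```

In a real deployment the patched environment would run the new code against a copy of production data, and the comparison would cover record counts, totals and key reports rather than a single dictionary.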
Another factor in the reduced need for inhouse testing is that ISVs are also taking greater responsibility for delivering products that slot straight into an organisation’s environment, rather than leaving their applications riddled with bugs. A growing number of them are using the competency centres being built by the likes of Sun, Tech Access and Intel. “We are a proof of concept centre, not a benchmarking centre. By testing and ironing out the wrinkles, ISVs can shorten the deployment time and save their customers [a] lot of money,” says Chris Saul, systems engineer for Tech Access.

Soft Management, for example, used Intel’s finance competency centre in Lebanon to test application scalability for Credit Libanais as it helped the bank migrate from a RISC platform to the vendor’s x86 systems. Landmark Graphics has used the vendor’s energy competency centre in Abu Dhabi to optimise its reservoir simulation suite, VIP, for the Intel platform. The chip vendor claims to have 12 more applications booked into the centre for the coming year. “Our tools are focused on software developers working for inhouse companies, developers or ISVs who develop bespoke apps. The bigger companies have their own developers… We have not come across any non-ISV customers using testing software tools. The objective of the competency centres is to [encourage] developers with a local presence to use our tools and optimise their apps on Intel,” says Prince.

Even ISVs that do not use competency centres on a regular basis are becoming more proactive about testing. According to Bashar Kilani, manager of IBM’s software business in the Middle East, Egypt & Pakistan, local ISVs have been using testing tools for some time and are way ahead of the region’s end users. “The ISVs always had the systems and they are ahead of customers. Only recently are we seeing interest from customers,” he says.
One ISV that takes its testing seriously is Sakhr Software, which has 16 software test engineers and 64 Microsoft certified developers to put its applications through their paces before they are made commercially available. Other fail-safes are also built into the process. “After they [the software engineers] approve it, it moves to the second level of testing, done by the quality control team to make sure that nothing has passed unnoticed. For the third level of testing, we are working with chip vendors [such as Intel] to stress-test our applications to make sure they scale up,” explains Dr Salah Malaeb, general manager, Sakhr Software. “Testing guarantees the software quality so it can compete globally and not just in the local market. Poor quality is not an option,” he adds.

Globitel, a Jordan-based ISV focused on the telecommunications industry, also carries out its own testing. “We try to accommodate all scenarios inhouse until it [the software] reaches an acceptable state for release, as it’s not always possible to test at a customer’s site,” explains Samer Halawa, executive manager at Globitel.

In addition to using competency centres and inhouse procedures to iron out bugs in their software, ISVs are increasingly adopting independent standards such as Six Sigma, SEI CMM and ISO 9001 to ensure the quality of their work. Not only does this help reassure customers that their applications will work when deployed, it also reduces the likelihood that ISVs will have to return to their code and rework sizeable chunks of it. The Software Engineering Institute (SEI), a research and development centre operated by Carnegie Mellon University, offers one such standard: the Capability Maturity Model (CMM), a guide for development practices with an accompanying assessment methodology. Based on five levels of process ‘maturity’, it runs from Level 1 up to the highest level of achievement, Level 5.
“More and more Middle Eastern ISVs [are] focusing on their own bespoke systems and applications, taking more responsibility for their own IT and making it meet certain standards,” says Michael Feord, product director at Compuware Europe, Middle East & Africa. “60% of the market is working to [the equivalent of] Level 2 or Level 3,” he adds.

Mindscape, for instance, is hoping to achieve Level 5 CMM certification by 2006. The project is being overseen by ten of the company’s most experienced operatives and, when complete, it will help the IT services spin-off of Mashreq Bank maximise its development capabilities and increase productivity. “The success of any modern day business depends on being able to produce better products and services at a lower cost. This is our strategy, and CMM helps us do this by making the development and implementation cycles accurate, which means we don’t get the added time that typically costs money,” explains Zubair Ahmed, quality manager at Mindscape. “The time we save can also be used to generate more business. Furthermore, avoiding [development] reworking not only reduces the effort on our part, but also ensures that the correct product is going out to the customer every time, all the time,” he adds.

Despite the efforts of a select few, only a small number of local ISVs have embarked on official standardisation drives. For instance, although Sakhr Software has carried out an internal audit showing it has processes in place that equate to Level 2 on the CMM scale, it has not signed up for the external audits that bestow official certification upon an ISV. Elsewhere, Globitel’s Halawa believes such standardisation will not be necessary until end users start to demand it before they sign up for solutions. “None of the government tenders put ‘at least Level 2 or Level 3 certification’ as a prerequisite. I don’t see it changing, at least in the short-term.
Only when customers start complaining might there be a need to create stringent quality standards,” he says.

However, if end users are not interested in testing their applications inhouse, outsourcing the job to competency centres or trusting ISVs to do the work for them, there is a strong argument for organisations to return to the paranoia that was so rampant pre-Y2K. This is increasingly important because a large number of implementations still fail, even as enterprise applications become the cornerstone of many companies’ operations. “When any ISV releases an application they test it in their environment. However, the software companies just do standard testing, as they cannot simulate every customer environment. Large enterprise apps are 90% vanilla app, but there is still 10%-15% customisation, which creates the issues or the possibility [of problems]. This is why it is important to test,” says IFFCO’s Mahadevan.

Fortunately, it appears Mahadevan’s words are being heeded. According to Kilani, an increasing number of the region’s larger enterprises are beginning to buy into the need to test their software before it goes live. “Customers are now seeing the value in improving the productivity and quality of their software,” he says. “We are seeing more interest in the highest level of testing, i.e. functional testing, which is business process level testing: adding new functionality, workflows, where the bottlenecks are and tracing errors. The need is there, and it is becoming more obvious [to users] now that they have more complex systems in place,” Kilani adds.
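The load and performance testing the article refers to boils down to timing a business transaction many times and looking at the tail of the latency distribution, where bottlenecks show up before go-live. The sketch below is purely illustrative: `process_order` is a hypothetical stand-in transaction, not any vendor's API.

```python
# Toy load test: time a stand-in "business transaction" over many runs
# and report percentile latencies. process_order is hypothetical.
import time

def process_order(order_id: int) -> int:
    """Stand-in transaction: CPU work whose cost varies with the input."""
    return sum(i * i for i in range(1000 + order_id % 50))

def load_test(fn, runs: int) -> list:
    """Run fn repeatedly and return sorted per-call latencies in seconds."""
    latencies = []
    for i in range(runs):
        start = time.perf_counter()
        fn(i)
        latencies.append(time.perf_counter() - start)
    return sorted(latencies)

lats = load_test(process_order, 500)
p50 = lats[len(lats) // 2]
p95 = lats[int(len(lats) * 0.95)]
print(f"p50={p50 * 1e6:.0f}us p95={p95 * 1e6:.0f}us")

# A long tail relative to the median is the classic sign of a bottleneck
# worth tracing before the system goes live.
if p95 > 10 * p50:
    print("warning: long latency tail - investigate before go-live")
```

Commercial tools from the vendors named in the article automate the same idea at scale, driving thousands of concurrent simulated users against a real application rather than a single in-process function.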
