Conference Presentations

STAREAST 2007: Positioning Your Test Automation Team as a Product Group

Test automation teams are often founded with high expectations from senior management: the proverbial "silver bullet" remedy for a growing testing backlog, perceived schedule problems, or low-quality applications. Unfortunately, many test automation teams fail to meet these lofty expectations and subsequently die a slow organizational death: their regression test suites are not adequately maintained and corrode, software licenses for tools are not renewed, and ultimately test engineers move on to greener pastures. In many cases, the demise of the test automation team can be traced back to the unrealistic expectations originally used to justify the business case for test automation. In other words, the team is doomed to failure from the beginning.

Steve Splaine, Nielsen Media Research
Top Ten Reasons Test Automation Projects Fail

Test automation is the perennial "hot topic" for many test managers. The promises of automation are many; however, numerous test automation initiatives fail to deliver on them. Shrini Kulkarni explores ten classic reasons why test automation fails. Starting with Number Ten ... having no clear objectives. Often people set off down different, uncoordinated paths; with no objectives, there is no defined direction. At Number Nine ... expecting immediate payback. Test automation requires a substantial investment of resources that is not recovered immediately. At Number Eight ... having no criteria for evaluating success. Without defined success criteria, no one can really say whether the effort was successful. At Number Seven ... Join Shrini for the entire Top Ten list and discover how you can avoid these problems.

  • Why so many automation efforts fail
  • A readiness assessment to begin test automation
Shrinivas Kulkarni, iGATE Global Solutions
Verification Points for Better Testing Efficiency

More than one-third of all testing time is spent verifying test results: determining whether the actual result matches the expected result within some pre-determined tolerance. Sometimes actual test results are simple, such as a value displayed on a screen. Other results are more complex: a database that has been properly updated, a state change within the application, or an electrical signal sent to an external device. Dani Almog suggests a different approach to results verification: separating the design of verification from the design of the tests. His test cases include "verification points," with each point associated with one or more verification methods, which can later be reused in different test cases and on different occasions. Some of the verification methods are simple numerical or textual comparisons; others, such as photo comparison, are complex.
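
As a rough sketch of that separation (every name below is invented for illustration, not taken from the talk), a verification point can bundle reusable verification methods apart from any single test case:

    # Sketch: verification methods defined once, reused across test cases.
    # All names here are invented for illustration.

    def within_tolerance(expected, actual, tolerance=0.01):
        """Numeric comparison within a pre-determined tolerance."""
        return abs(expected - actual) <= tolerance

    def texts_match(expected, actual):
        """Simple textual comparison."""
        return expected == actual

    class VerificationPoint:
        """Names a point in a test and the checks attached to it."""
        def __init__(self, name, checks):
            self.name = name
            self.checks = checks  # list of (method, expected_value) pairs

        def verify(self, actual):
            return all(method(expected, actual)
                       for method, expected in self.checks)

    # The same verification point can be reused by different test cases:
    balance_vp = VerificationPoint("account balance",
                                   [(within_tolerance, 100.00)])
    assert balance_vp.verify(100.004)

Because the comparison logic lives outside any single test case, the same tolerance-based or image-based check can be attached to many tests without duplication.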

Dani Almog, Amdocs Inc
Unit Testing Code Coverage: Myths, Mistakes, and Realities

You've committed to an agile process that encourages test-driven development. That decision has fostered a concerted effort to actively unit test your code. But you may be wondering about the effectiveness of those tests. Experience shows that while the collective confidence of the development team increases, defects still manage to rear their ugly heads. Are your tests really covering the code adequately, or are big chunks remaining untested? And are those areas that report coverage really covered with robust tests? Andrew Glover explains what code coverage represents, how to apply it effectively, and how to avoid its pitfalls. Code coverage metrics can give you an unprecedented understanding of how your unit tests may or may not be protecting you from sneaky defects.
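
To make the "covered but not tested" pitfall concrete, consider this invented example: both tests below produce identical line coverage for discount(), but only the second would actually catch a defect in its logic.

    # 100% line coverage does not imply robust tests. Names are invented.

    def discount(price, is_member):
        if is_member:
            return price * 0.9
        return price

    def test_discount_covered_but_weak():
        discount(100, True)     # executed, but the result is never checked
        discount(100, False)    # executed, but the result is never checked

    def test_discount_robust():
        assert discount(100, True) == 90.0
        assert discount(100, False) == 100

    test_discount_covered_but_weak()
    test_discount_robust()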

Andrew Glover, Stelligent
Keyword-Driven Test Automation Illuminated

Test automation has come a long way in the last twenty years. During that time, many of today's most popular test execution automation tools have come into use, and a variety of implementation methods have been tried and tested. Many successful organizations began their automation effort with a data-driven approach and evolved it into what is now called keyword-driven test automation. Many versions of the keyword-driven test execution concept have been implemented; some are difficult to distinguish from their data-driven predecessors. So what is keyword-driven test automation? Mark Fewster provides an objective analysis by examining the various implementations, the advantages and disadvantages of each, and the benefits and pitfalls of this automation concept.
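
One minimal way to picture the keyword-driven concept (the keywords, actions, and driver below are invented for illustration, not any particular tool's format) is a table of keyword rows interpreted by a small driver:

    # Test steps are data (keyword rows) interpreted by a generic driver.

    def login(user, password):
        print(f"logging in as {user}")

    def enter_text(field, value):
        print(f"typing {value!r} into {field}")

    def click(button):
        print(f"clicking {button}")

    KEYWORDS = {"login": login, "enter_text": enter_text, "click": click}

    # A test case is now a readable table rather than a script:
    test_case = [
        ("login", "jsmith", "secret"),
        ("enter_text", "search box", "quarterly report"),
        ("click", "Search"),
    ]

    for keyword, *args in test_case:
        KEYWORDS[keyword](*args)

The appeal is that test cases become data that non-programmers can read and extend, while the maintenance burden concentrates in a small set of keyword implementations.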

Mark Fewster, Grove Consultants
Business Rules-Based Test Automation

All business applications implement business rules. Unfortunately, the rules can be highly dynamic due to changing requirements from external organizations and internal forces. Wise application designers and developers do not embed the implementation of specific business rules within applications but define, store, and maintain them as data outside the applications that use them. Likewise, wise testers now use a similar approach, called business rules-based test automation, in which automated test scripts are written against the business rules rather than against the application. This process incorporates technical components, such as a robust testing keyword library, a business-friendly user interface, and automated script generators, to accelerate the test automation work and cover more business scenarios than the conventional approach allows.
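
A hedged sketch of the idea, with invented rule fields and a stand-in for the application under test, might look like this:

    # Business rules live as data outside the application; one generic
    # check is run per rule. All names are invented for illustration.

    RULES = [
        {"name": "senior discount", "age": 68, "expected_rate": 0.85},
        {"name": "standard rate",   "age": 40, "expected_rate": 1.00},
    ]

    def quote_rate(age):
        """Stand-in for the application under test."""
        return 0.85 if age >= 65 else 1.00

    def test_rules():
        # One loop covers every rule; adding a rule adds a test case
        # without writing a new script.
        for rule in RULES:
            assert quote_rate(rule["age"]) == rule["expected_rate"], rule["name"]

    test_rules()

Adding or changing a business rule then means editing data, not writing a new script.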

Harish Krishnankutty, Infosys Technologies Limited
Test Automation Centers of Excellence

Many organizations want to automate their testing efforts, but they aren't sure how to begin. Successful test automation requires dedicated resources and automation tool expertise: two things that overworked test teams do not have. Nationwide Insurance's solution was to create a Test Automation Center of Excellence, a group of experts in automation solution design. Members of this team partner with project test teams to determine what to automate, develop a cost-benefit analysis, and architect a solution. These automation experts stay with the test team throughout the automation project, assisting, mentoring, and cheering. Join Jennifer Seale to learn what it takes to put together a Test Automation Center of Excellence and examine test automation from a project management point of view.

Jennifer Seale, Nationwide Insurance
Behavior Patterns for Designing Automated Tests

Automated GUI tests often fail to find important bugs because testers do not understand or model intricate user behaviors. Real users are not just monkeys banging on keyboards. As they use a system, they may make dozens of instantaneous decisions, all of which result in complex paths through the software code. To create successful automated test cases, testers must learn how to model users' real behaviors. This means test cases cannot be simple, recorded, one-size-fits-all scripts. Jamie Mitchell describes several user behavior patterns that can be adopted to create robust and successful automated tests. One pattern is the 4-step dance, which describes every user GUI interaction: (1) ensure you're at the right place in the screen hierarchy; (2) provide data to the application; (3) trigger the system; and (4) wait for the system to complete its actions.
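
A hedged sketch of that 4-step dance in code (the gui helper object and its method names are hypothetical, not a real library's API) might look like this:

    # One GUI interaction expressed as the 4-step dance. The gui object
    # and its methods are hypothetical placeholders.

    def submit_search(gui, term, timeout=10):
        # 1. Ensure you're at the right place in the screen hierarchy.
        assert gui.current_screen() == "SearchPage"
        # 2. Provide data to the application.
        gui.type_into("search_field", term)
        # 3. Trigger the system.
        gui.click("search_button")
        # 4. Wait for the system to complete its actions.
        gui.wait_until(lambda: gui.current_screen() == "ResultsPage",
                       timeout=timeout)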

Jamie Mitchell, Test & Automation Consulting LLC
Be More Effective: Test Automation below the UI

To maintain optimal product quality in large-scale enterprise systems, the regression test suite usually grows over time. Whether regression testing is automated or manual, this growth brings additional maintenance and infrastructure costs that tend to get out of hand, often...
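
A minimal sketch of the below-the-UI idea (the OrderService class and its method are invented for illustration): the regression check exercises the business interface directly, with no browser session or screen maps to maintain.

    # Testing below the UI: call the service layer instead of driving screens.

    class OrderService:
        """Stand-in for the application's service layer."""
        def place_order(self, sku, quantity):
            return {"sku": sku, "quantity": quantity, "status": "ACCEPTED"}

    def test_place_order_below_ui():
        result = OrderService().place_order("A-100", 2)
        assert result["status"] == "ACCEPTED"

    test_place_order_below_ui()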

Ashish Mehta and Sohail Farooqui
Complete Your Automation with Runtime Analysis

So, you have solid automated tests to qualify your product. You have run these tests on various platforms. You have mapped the tests back to the design and requirements documents to verify full coverage. You have confidence that the results of these tests are reliable and accurate. But you are still seeing defects and customer issues. Why? Could it be that your test automation is not properly targeted? Solid automated testing can be enhanced through runtime analysis. Runtime analysis traces execution paths, evaluates code coverage, checks memory usage and memory leaks, exposes performance bottlenecks, and searches out threading problems. Adding runtime analysis to your automation efforts provides you with information about your applications that cannot be gained even from effective automated testing.

  • Learn how runtime analysis enhances automation
  • Evaluate the pros and cons of code coverage
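
As one hedged illustration, a memory-tracking pass can be wrapped around an existing automated test using Python's standard tracemalloc module (the function under test is invented):

    # Adding one form of runtime analysis (memory tracking) to a test run.

    import tracemalloc

    def build_report(rows):
        """Invented stand-in for code exercised by an automated test."""
        return [str(r) * 10 for r in range(rows)]

    tracemalloc.start()
    build_report(100_000)          # run the automated test as usual
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    # The functional result may pass while memory behavior regresses;
    # runtime analysis surfaces that second dimension.
    print(f"current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
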
Poonam Chitale, IBM Rational
