Conference Presentations

The Ten Most Important Automation Questions and Answers

As test automation becomes more complex, many important strategic issues emerge. Mukesh Mulchandani shares key questions you must answer before you begin a test automation project or an improvement program. He begins with the elementary questions: Should I automate now or wait? What specifically should I automate? What approach should I adopt? Mukesh then considers more complex questions: vertical vs. horizontal automation, handling static and dynamic data, and testing dynamic objects. The final questions relate to future automation trends: moving beyond keyword automation technology, making automation scripts extensible, introducing test-driven development, starting automation when the application is not yet stable, and offering the automation scripts to clients.

Mukesh Mulchandani and Krishna Iyer, ZenTEST Labs
Improving Testing with Quality Stubs

Many testers use stubs: simple code modules that simulate the behavior of much more complicated things. As components and their interfaces evolve, it is easy to overlook the need for the associated stubs to evolve with them. Lee Clifford explains that the stubs Virgin Mobile previously used to simulate the functionality of third-party software were basic and static, simply returning hard-coded data values. Although adequate, the stubs were difficult to maintain. So Virgin Mobile's testers decided to design, build, test, and deploy their own smart "quality stubs," not only for use by the test team but also for development and performance testing. The testers created fully configurable and programmable stubs that interface their systems to third-party products. The key advantage is that anyone in the test team can update the stubs with minimal cost and without the need to learn a programming language.
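The contrast between a hard-coded stub and a configurable "quality stub" can be sketched roughly as follows. This is an illustrative sketch only; all class, method, and field names here are hypothetical and are not Virgin Mobile's actual design:

```python
# A minimal sketch of the idea: instead of hard-coding one response,
# the stub is driven by a plain data table that testers can edit
# without touching code. All names are illustrative assumptions.

class HardCodedStub:
    """The old style: always returns the same canned value."""
    def check_balance(self, msisdn):
        return {"msisdn": msisdn, "balance": 10.00, "status": "OK"}


class ConfigurableStub:
    """The 'quality stub' style: responses come from editable data."""
    def __init__(self, response_table, default=None):
        # response_table could be loaded from a CSV or JSON file that
        # the test team maintains; here it is simply a dict keyed by
        # phone number.
        self.response_table = response_table
        self.default = default or {"status": "UNKNOWN_SUBSCRIBER"}

    def check_balance(self, msisdn):
        return self.response_table.get(msisdn, self.default)


# Testers add rows of data, not code, to cover new scenarios.
stub = ConfigurableStub({
    "447700900001": {"balance": 25.50, "status": "OK"},
    "447700900002": {"balance": 0.00, "status": "BARRED"},
})
```

The design point is that new scenarios become new data rows rather than new code, which is what lets non-programmers maintain the stub.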

Lee Clifford, Virgin Mobile UK
The Secrets of Faking a Test Project

It's never been easier to fool your manager into thinking that you're doing a great job testing! In his presentation, Jonathan Kohl covers today's most respected test fakery. These techniques include misleading test case metrics, vapid but impressive-looking test documentation, repeatedly running old tests "just in case they find something," carefully maintaining obsolete tests, methodology doublespeak, endless tinkering with expensive test automation tools, and taking credit for a great product that would have been great even if no one had tested it. Jonathan also covers best practices for blame deflection. By the time you're through, your executive management won't know whether to fire the programmers or the customers. But it won't be you. (Disclaimer: It could be you if an offshore company fakes it more cheaply than you do.)

  • Cautionary true stories of test fakery, both purposeful and accidental
Jonathan Kohl, Kohl Concepts Inc.
Load Generation Capabilities for Effective Performance Testing

To carry out performance testing of Web applications, you must ensure that sufficiently powerful hardware is available to generate the required load levels. At the same time, you need to avoid investing in unnecessarily expensive hardware "just to be sure." A valid model for estimating the load generation capabilities of performance testing tools on different hardware configurations will help you generate the load you need with the minimum hardware. Rajeev Joshi believes the models provided by most tool vendors are too simplistic for practical use. In fact, in addition to the hardware configuration, the load generation capability of any tool is a function of many factors: the number of users, the frequency and time distribution of requests, data volume, and think time. Rajeev presents a model for the open source load generation tool JMeter, which you can adapt for any performance testing tool.
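The kind of relationship such a model captures can be illustrated with a back-of-the-envelope calculation based on Little's Law. The numbers and function names below are made up for illustration and are not taken from the presenter's actual JMeter model:

```python
# Rough load-estimation sketch: Little's Law relates concurrent virtual
# users, request throughput, and the time one user spends per iteration
# (think time plus response time). A simplified illustration only.

def requests_per_second(virtual_users, think_time_s, response_time_s):
    """Steady-state throughput the load generators must sustain."""
    iteration_time = think_time_s + response_time_s  # one user's cycle
    return virtual_users / iteration_time


def generators_needed(virtual_users, think_time_s, response_time_s,
                      max_rps_per_generator):
    """Roughly how many load-generator machines a test plan requires."""
    rps = requests_per_second(virtual_users, think_time_s, response_time_s)
    # Round up: a fraction of a machine still means one more machine.
    return -(-rps // max_rps_per_generator)


# Example: 1,000 users, 9 s think time, 1 s response time -> 100 req/s.
rps = requests_per_second(1000, 9.0, 1.0)
```

A real model would also account for data volume, the time distribution of requests, and per-machine resource limits, which is exactly why vendor rules of thumb tend to be too simplistic.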

John Scarborough, Aztecsoft
STARWEST 2007: Testing AJAX Applications with Open Source Selenium

Today's rich AJAX applications are much more difficult to test than the simple Web applications of yesterday. With this rich new user interface come new challenges for software testers: not only are the platforms on which applications run rapidly evolving, but test automation tools are having trouble keeping up with new technologies. Patrick Lightbody introduces you to Selenium, an open source tool designed from the ground up to work on multiple platforms and to support all forms of AJAX testing. In addition, he discusses how to develop AJAX applications that are more easily testable using frameworks such as Dojo and Scriptaculous. Learn the importance of repeatable data fixtures with AJAX applications and how automated testing must evolve with the arrival of AJAX. Get ahead of the curve by encouraging the development of more testable AJAX software and adding new automation tools to your bag of testing tricks.
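The core synchronization problem with AJAX is that the page updates asynchronously, so a test must poll for the expected state rather than assert immediately after an action. Selenium provides built-in waits for this; the underlying pattern can be sketched tool-independently (the `FakePage` stand-in below is purely illustrative):

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.1):
    """Poll a zero-argument condition until it returns a truthy value.

    This is the pattern behind AJAX-aware test waits: updates land
    asynchronously, so assertions must be retried up to a timeout
    rather than made immediately after triggering an action.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)


# Illustrative use: a fake 'page' whose data arrives after a delay,
# standing in for a DOM element populated by an AJAX response.
class FakePage:
    def __init__(self):
        self.ready_at = time.monotonic() + 0.3

    def result_text(self):
        return "done" if time.monotonic() >= self.ready_at else ""


page = FakePage()
text = wait_until(page.result_text)  # blocks until the update lands
```

Fixed sleeps make AJAX tests slow and flaky; polling with a timeout keeps them both fast (they stop as soon as the condition holds) and robust (they tolerate variable response times).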

Patrick Lightbody, Gomez, Inc.
Bugs Bunny on Bugs! Hidden Testing Lessons from the Looney Tunes Gang

Bugs Bunny, Road Runner, Foghorn Leghorn, Porky Pig, Daffy Duck, and Michigan J. Frog provide wonderful metaphors for the challenges of testing. From Bugs Bunny we learn about personas and the risks of taking the wrong turn in Albuquerque. Michigan J. Frog teaches valuable lessons about defect isolation. "Is it duck season or rabbit season?" shows how ambiguous pronouns can dramatically change the meaning of our requirements. The Tasmanian Devil teaches us about the risks of following standard procedures and shows us practical approaches to stress and robustness testing. From Yosemite Sam we learn about boundary conditions and defying physics. And, of course, the Coyote seems to put a bit too much confidence in the latest tools and technologies from ACME. The Looney Tunes Gang teaches lessons for the young at heart, novice and experienced testers alike!

Robert Sabourin, AmiBug.com Inc
Result-Driven Testing: Adding Value to Your Organization

Software testers often have great difficulty quantifying and explaining the value of their work. One consequence is that many testing projects receive insufficient resources and, therefore, are unable to deliver the best value. Derk-Jan de Grood believes we can improve this situation, although it requires changing our mindset to "result-driven testing." Result-driven testing is based on specific principles: (1) understand, focus on, and support the goals of the organization; (2) do only those things that contribute to business goals; and (3) measure and report on testing's contribution to the organization. Keeping these principles at the forefront binds and guides the team. Join this session to find out how the test team at Collis has adopted these principles. They have developed a testing organization that generates trust and provides valuable insight into the quality of their organization's products.

Derk-Jan de Grood, Collis
Ten Indispensable Tips for Performance Testing

Whether you are inexperienced with performance testing or an experienced performance tester who is continuously researching ways to optimize your process and deliverables, this session is for you. Based on his experience with dozens of performance testing projects, Gary Coil discusses the ten indispensable tips that he believes will help ensure the success of any performance test. Find out ways to elicit and uncover the underlying performance requirements for the software-under-test. Learn the importance of a production-like test environment, and methods to create suitable environments without spending a fortune. Take back valuable tips on how to create representative workload-mix profiles that accurately simulate the expected production load. And more! Gary has developed and honed these practical and indispensable tips through many years of leading performance testing engagements.

Gary Coil, IBM Global Services
Ensuring Quality in Web Services

As Web service-based applications become more prevalent, testers must understand how the unique properties of Web services affect their testing and quality assurance efforts. Chris Hetzler explains that testers must focus beyond functional testing of the business logic implemented in the services. Quality of Service (QoS) characteristics, such as security, performance, interoperability, and asynchronous messaging, are often more important and more complicated than in classical applications. Unfortunately, these characteristics are often poorly defined and documented. In addition, Web services can be implemented using a number of technologies (object-oriented programming, XML documents, and databases) and can employ multiple communications protocols, each requiring different testing skills.

Chris Hetzler, Appolis Software
A Pair of Stories about All-Pairs Testing

What do you do when you're faced with testing a million or more possible combinations, all manually? Easy: just declare the problem so big and the time so short that testing is impossible. But what if there were an analytic method that could drastically reduce the number of combinations to test while reducing risks at the same time? All-pairs testing, the pairing up of testable elements, is one way to create a reasonable number of test cases while reducing the risk of missing important defects. Unfortunately, as Jonathan Bach demonstrates, this technique can also be used incorrectly, thus creating more risk, not less. Jonathan shares his experiences on two projects, one success and one failure, that employed all-pairs analysis and describes the reasons behind the results. Start down the path to all-pairs success for your next big testing project.

  • Learn the rationale behind pairwise data analysis
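The reduction all-pairs testing promises can be sketched with a deliberately naive greedy algorithm. This is not the speaker's method or a tuned pairwise tool (dedicated tools produce much smaller suites); it just demonstrates that covering every pair of values needs far fewer cases than covering every combination:

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Naive greedy pairwise reduction: keep a candidate test case only
    if it covers at least one not-yet-covered pair of parameter values.

    'parameters' maps a parameter name to its list of values. A sketch
    of the idea only; real pairwise tools are far more efficient.
    """
    names = sorted(parameters)
    # Every cross-parameter pair of values that must appear somewhere.
    needed = set()
    for n1, n2 in combinations(names, 2):
        for v1 in parameters[n1]:
            for v2 in parameters[n2]:
                needed.add((n1, v1, n2, v2))

    cases = []
    for values in product(*(parameters[n] for n in names)):
        case = dict(zip(names, values))
        covered = {(n1, case[n1], n2, case[n2])
                   for n1, n2 in combinations(names, 2)}
        new = covered & needed
        if new:  # this case earns its keep: it covers new pairs
            needed -= new
            cases.append(case)
    return cases


# Three parameters, three values each: 27 exhaustive combinations,
# but noticeably fewer cases still exercise every value pair.
suite = all_pairs({
    "browser": ["IE", "Firefox", "Safari"],
    "os": ["XP", "OSX", "Linux"],
    "locale": ["en", "fr", "de"],
})
```

The savings grow dramatically with scale, which is the appeal and also the risk the session describes: the technique only reduces risk when the pairs it covers are actually the interactions that matter.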
Jon Bach, Quardev, Inc.
