Conference Presentations

Behavior Patterns for Designing Automated Tests

Automated GUI tests often fail to find important bugs because testers do not understand or model intricate user behaviors. Real users are not just monkeys banging on keyboards. As they use a system, they may make dozens of instantaneous decisions, all of which result in complex paths through the software code. To create successful automated test cases, testers must learn how to model users' real behaviors. This means test cases cannot be simple, recorded, one-size-fits-all scripts. Jamie Mitchell describes several user behavior patterns that can be adopted to create robust and successful automated tests. One pattern is the 4-step dance, which describes every user GUI interaction: (1) ensure you're at the right place in the screen hierarchy; (2) provide data to the application; (3) trigger the system; and (4) wait for the system to complete its actions.
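
As a rough illustration, the four steps map naturally onto a GUI automation script. The sketch below uses Selenium WebDriver in Python; the login page, URL, and element ids are hypothetical, and Selenium stands in for whatever driver your tests use:

    # A minimal sketch of the 4-step dance (Selenium WebDriver, Python).
    # The URL and element ids are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    wait = WebDriverWait(driver, timeout=10)

    # Step 1: ensure you're at the right place in the screen hierarchy.
    driver.get("https://example.com/login")
    wait.until(EC.visibility_of_element_located((By.ID, "login-form")))

    # Step 2: provide data to the application.
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")

    # Step 3: trigger the system.
    driver.find_element(By.ID, "submit").click()

    # Step 4: wait for the system to complete its actions
    # (an explicit wait on a visible result, never a fixed sleep).
    wait.until(EC.visibility_of_element_located((By.ID, "dashboard")))
    driver.quit()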

Jamie Mitchell, Test & Automation Consulting LLC
Top Ten Tendencies That Trap Testers

A trap is an unidentified problem that limits or obstructs us in some way. We don't intentionally fall into traps, but our behavioral tendencies aim us toward them. For example, have you ever found a great bug and celebrated, only to have one of your fellow testers find a bigger bug just one more keystroke away? A tendency to celebrate too soon can make you nearsighted. Have you ever been confused about a behavior you saw during a test and shrugged it off? The tendency to dismiss your confusion as unimportant or irrelevant may make you farsighted, limiting your ability to see a bug right in front of you. Jon Bach demonstrates other limiting tendencies like Stakeholder Trust, Compartmental Thinking, Definition Faith, and more. Testers can't find every bug or run every possible test, but identifying these tendencies can help us avoid traps that might compromise our effectiveness and credibility.

Jon Bach, Quardev Laboratories
Communicating the Value of Testing

Test managers constantly lament that few outside their group understand or care much about the value they provide and consistently deliver. Unfortunately, they are often correct. The lack of visibility and understanding of the test team's contribution can lead to restricted budgets, fewer resources, tighter timelines, and ultimately, lower group productivity. Join Theresa Lanowitz as she highlights ways to move from simply being a tester of software to an advocate for your organization's customers. Learn how to effectively and concisely communicate with key stakeholders in your organization to ensure that they understand the value and role of the testing group. With effective and concise communication, the testing group will be perceived as more strategically important and integral to the success of every project.

  • Strategies for communicating complex data
Theresa Lanowitz, voke, Inc. and Dan Koloski, Empirix
STAREAST: Be More Effective - Test Automation below the UI

To maintain the quality of large-scale enterprise systems, the regression test suite usually grows over time. Whether regression testing is automated or manual, this growth brings additional maintenance and infrastructure costs that tend to get way out of hand, often...

Ashish Mehta and Sohail Farooqui
Open Source Tools for Web Application Performance Testing

OpenSTA is a solid open-source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing introduces you to the basics of OpenSTA: downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests with Commander, capturing the results with Collector, interpreting the results, and exporting captured performance data to Excel for analysis and reporting. As with many open source tools, self-training is the rule. Support is provided not by a big vendor staff but by fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.
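
As a taste of the analysis step, exported timer data can be summarized with a few lines of Python before it ever reaches Excel. The CSV file name and column names below are hypothetical; adapt them to whatever export you produce:

    # A minimal sketch of post-test analysis on exported timer data.
    # Assumes a hypothetical CSV with "timer_name" and "elapsed_ms" columns.
    import csv
    from collections import defaultdict
    from statistics import mean, quantiles

    samples = defaultdict(list)
    with open("opensta_results.csv", newline="") as f:
        for row in csv.DictReader(f):
            samples[row["timer_name"]].append(float(row["elapsed_ms"]))

    for name, times in sorted(samples.items()):
        p90 = quantiles(times, n=10)[-1]  # 90th percentile
        print(f"{name}: n={len(times)} mean={mean(times):.0f}ms p90={p90:.0f}ms")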

  • Learn the capabilities of OpenSTA
Dan Downing, Mentora Inc
Measuring the End Game of a Software Project - Part Deux

The schedule shows only a few weeks before product delivery. How do you know whether you are ready to ship? Test managers have dealt with this question for years, often without supporting data. Mike Ennis has identified six key metrics that will significantly reduce the guesswork. These metrics are percentage of tests complete, percentage of tests passed, number of open defects, defect arrival rate, code churn, and code coverage. These six metrics, taken together, provide a clear picture of your product's status. Working with the project team, the test manager determines acceptable ranges for these metrics. Displaying them on a spider chart and observing how they change from build to build enables a more accurate assessment of the product's readiness. Learn how you can use this process to quantify your project's "end game".
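
As a sketch of what the spider chart might look like in practice, the six metrics can be normalized so that 1.0 means "meets the agreed target" and plotted per build. The per-build values below are invented for illustration, and matplotlib is just one way to draw it:

    # A minimal sketch of an end-game spider (radar) chart with matplotlib.
    # The per-build values below are invented for illustration.
    import matplotlib.pyplot as plt
    import numpy as np

    metrics = ["Tests complete", "Tests passed", "Open defects",
               "Defect arrival", "Code churn", "Code coverage"]
    builds = {"Build 12": [0.80, 0.75, 0.60, 0.50, 0.70, 0.85],
              "Build 15": [0.95, 0.90, 0.85, 0.90, 0.90, 0.92]}

    angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
    angles += angles[:1]  # close the polygon

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for label, values in builds.items():
        ax.plot(angles, values + values[:1], label=label)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(metrics, fontsize=8)
    ax.set_ylim(0, 1)
    ax.legend(loc="lower right")
    plt.show()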

  • Decide what and how to measure
  • Build commitment from others on your project
Mike Ennis, Savant Technology
Practical Model-Based Testing for Interactive Applications

Model-based tests are most often created from state-transition diagrams. Marlon Vieira generates automated system tests for many Siemens systems from use cases and activity diagrams. The generated test cases are then executed using a commercial capture-replay tool. Marlon begins by describing the types of models used, the roles of those models in test generation, and the basic test generation process. He shares the weaknesses of some techniques and offers suggestions on how to strengthen them to provide the required control flow and data flow coverage. Marlon describes the cost benefits and fault detection capabilities of this testing approach. Examples from a Web-based application illustrate the modeling and testing concepts.
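
To make the generation idea concrete, here is a minimal sketch of deriving abstract test cases from a state-transition model. The two-state login model and the simple exhaustive traversal are our own illustration, not Siemens' generator:

    # A minimal sketch of test generation from a state-transition model.
    # The model and traversal below are illustrative only.
    MODEL = {
        "LoggedOut": [("login_ok", "LoggedIn"), ("login_bad", "LoggedOut")],
        "LoggedIn":  [("view_report", "LoggedIn"), ("logout", "LoggedOut")],
    }

    def generate_paths(state, path=(), depth=3):
        """Enumerate every action sequence of length `depth`; together
        these paths exercise every transition of this small model."""
        if len(path) == depth:
            yield path
            return
        for action, next_state in MODEL[state]:
            yield from generate_paths(next_state, path + (action,), depth)

    for test in generate_paths("LoggedOut"):
        print(" -> ".join(test))  # each line is one abstract test case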

  • Learn how to implement model-based testing in your organization
  • Create effective scripts for use by automation tools
Marlon Vieira, Siemens Corporate Research, Inc.
Testing for Sarbanes-Oxley Compliance

In the wake of huge accounting scandals, many organizations are now being required to conform to Sarbanes-Oxley (SOX) legal requirements regarding internal controls. Many of these controls are implemented within computer applications. As testers, we should be aware of these new requirements and ensure that those controls are tested thoroughly. Specifically, testers should identify SOX-based application requirements, design automated test cases for those requirements, create test data and test environments to support those tests, and document the test results in a way understandable by and acceptable to auditors, both internal and external. To be most efficient, SOX testing should not be separate but should be incorporated into system testing.
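
As one small illustration of automating a control check and leaving an audit trail, consider the sketch below. The segregation-of-duties control, the CSV input, and the log format are all hypothetical:

    # A minimal sketch of an automated SOX-style control test that writes
    # an audit-friendly record. Control, data, and log format are hypothetical.
    import csv
    import datetime

    def segregation_of_duties_violations(rows):
        """Flag any user who can both create and approve purchase orders."""
        return [r["user"] for r in rows
                if {"create_po", "approve_po"} <= set(r["roles"].split(";"))]

    with open("user_roles.csv", newline="") as f:
        violations = segregation_of_duties_violations(list(csv.DictReader(f)))

    with open("sox_audit_log.txt", "a") as log:
        log.write(f"{datetime.datetime.now().isoformat()} SOD-control-01 "
                  f"result={'FAIL' if violations else 'PASS'} "
                  f"violations={violations}\n")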

  • Learn the SOX testing lifecycle
  • Identify testable requirements for SOX compliance testing
  • Review SOX test automation strategies
Suresh Chandrasekaran, Cognizant
Measuring the "Good" in "Good Enough Testing"

The theory of "good enough" software requires determining the trade-off between delivery date (schedule), absence of defects (quality), and feature richness (functionality) to achieve a product that meets both the customer's needs and the organization's expectations. This may not be the best approach for pacemakers and commercial avionics software, but it is appropriate for many commercial products. But can we quantify these factors? Gregory Pope does. Using the COQUALMO model, Halstead metrics, and defect seeding to predict defect insertion and removal rates; the Musa/Everette model to predict reliability; and MATLAB for verifying functional equivalence testing, Greg evaluates both quality and functionality against schedule.
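
Defect seeding, for instance, supports a simple capture-recapture estimate: if S known defects are seeded and testing finds s of them along with n native defects, the estimated native total is roughly n * S / s. A worked sketch with invented counts:

    # A minimal sketch of the defect-seeding (capture-recapture) estimate.
    # The counts below are invented for illustration.
    def estimate_native_defects(seeded, seeded_found, native_found):
        """If testing finds the same fraction of native defects as of seeded
        ones, the native total is native_found * seeded / seeded_found."""
        return native_found * seeded / seeded_found

    total = estimate_native_defects(seeded=50, seeded_found=40, native_found=120)
    print(f"Estimated native defects: {total:.0f}")        # 150
    print(f"Estimated still latent:   {total - 120:.0f}")  # 30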

  • Review how to measure test coverage
  • Discover the use of models to predict quality
  • Learn what questions you should ask customers to determine "good enough"
Gregory Pope, Lawrence Livermore National Laboratory
STARWEST 2006: Branch Out Using Classification Trees for Test Case Design

Classification trees are a structured, visual approach to identifying and categorizing equivalence partitions for test objects, documenting test requirements so that anyone can understand them and quickly build test cases. Join Julie Gardiner to look at the fundamentals of classification trees and how they can be applied in both traditional test and development environments. Using examples, Julie shows you how to use the classification tree technique, how it complements other testing techniques, and its value at every stage of testing. She demonstrates a classification tree editor, one of several free and commercial tools now available to aid in building, maintaining, and displaying classification trees.
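
In spirit, a classification tree partitions each input dimension into equivalence classes and then combines one leaf from each partition into a concrete test case. The toy sketch below does the combination with a cartesian product; the classifications are invented, and real tree editors also let you prune invalid or redundant combinations:

    # A minimal sketch of turning classification-tree partitions into
    # test cases. The classifications below are invented for illustration.
    from itertools import product

    CLASSIFICATIONS = {
        "payment":  ["credit card", "invoice", "voucher"],
        "customer": ["new", "returning"],
        "amount":   ["zero", "typical", "above limit"],
    }

    def build_test_cases(tree):
        """Combine one leaf from each classification (full combination;
        tools usually offer pairwise or rule-based pruning instead)."""
        names = list(tree)
        for combo in product(*(tree[n] for n in names)):
            yield dict(zip(names, combo))

    for i, case in enumerate(build_test_cases(CLASSIFICATIONS), start=1):
        print(i, case)  # 3 * 2 * 3 = 18 candidate test cases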

  • Develop classification trees for test objects
  • Understand the benefits and rewards of using classification trees
  • Know when and when not to use classification trees
Julie Gardiner, QST Consultants Ltd.
