STAREAST 2008 - Software Testing Conference

PRESENTATIONS

Perils and Pitfalls of the New Agile Tester

If your background is testing on traditional projects, you are used to receiving something called "requirements" to develop test cases--and sometime later receiving an operational system to test. In an agile project, you are expected to continually test changing code based on requirements that are being uncovered in almost real time. Many perils and pitfalls await testers new to agile development.

Janet Gregory, DragonFire Inc.

Practical Pairwise Testing with PICT

Fault analysis reveals that interactions between the variables of dependent parameters are a common source of failure in complex systems. Imagine you are assigned to test a feature with twenty independent parameters and five possible states for each parameter. The total number of possible combinations is greater than ninety-five trillion. At one test executed per millisecond, it would take more than 3,000 years to test all possible combinations. So, which combinations do we test?
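The arithmetic in the abstract can be checked directly, and it also hints at why pairwise testing helps: instead of covering all 5^20 full combinations, an all-pairs approach only needs to cover every pair of parameter values. A minimal sketch (the parameter counts come from the abstract; the pairwise figures are a back-of-the-envelope illustration, not PICT's actual output):

```python
# Sanity-check the combinatorics from the abstract above.
params = 20        # independent parameters
states = 5         # possible states per parameter

total = states ** params                      # exhaustive combinations
seconds = total / 1000                        # one test per millisecond
years = seconds / (60 * 60 * 24 * 365)

print(f"{total:,} combinations")              # 95,367,431,640,625
print(f"~{years:,.0f} years at one test per millisecond")  # ~3,024

# Pairwise testing instead covers every *pair* of parameter values:
# C(20, 2) = 190 parameter pairs, each with 5 * 5 = 25 value
# combinations -- 4,750 pairs in all, which an all-pairs generator
# such as PICT can pack into a few dozen test cases.
pairs_to_cover = (params * (params - 1) // 2) * states * states
print(f"{pairs_to_cover:,} value pairs to cover")  # 4,750
```

The gap between ninety-five trillion combinations and a few dozen pairwise tests is the motivation for tools like PICT.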

Bj Rollison, Microsoft Corporation

Performance Testing in Enterprise Application Environments

As systems become more complex--serving the enterprise and implemented on the Web and across the Internet--performance testing is becoming more important and more difficult. David Chadwick suggests that the starting point is to design tests that reflect real user activity, including independent arrivals of transactions and varying input data to prevent "cache only" results. David explains how to break down the end-to-end system response time into the distributed components involved in processing the transactions.
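The two load-design ideas in the abstract can be sketched in a few lines: independent transaction arrivals are commonly modeled as a Poisson process (exponential inter-arrival times), and input data is varied so the system cannot serve every request from cache. All numbers and names below are invented for illustration:

```python
import random

random.seed(42)

mean_rate = 10.0                  # assumed: 10 transactions per second
customer_ids = range(1, 100_001)  # vary inputs to defeat "cache only" runs

t = 0.0
schedule = []
for _ in range(5):
    t += random.expovariate(mean_rate)  # independent (Poisson) arrivals
    schedule.append((round(t, 3), random.choice(customer_ids)))

for when, cust in schedule:
    print(f"t={when:7.3f}s  query customer {cust}")
```

A driver built this way avoids the classic mistake of replaying identical requests at fixed intervals, which exercises the cache rather than the full processing path.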

David Chadwick, IBM

Ready to Ship?

When developing software systems, the inevitable question is "Are we ready to ship?" Facing this question, many testers and test managers rely on their intuition and gut feeling to come up with a subjective verdict of the system under test. John Fodeh describes how to establish and use a set of Release Readiness Metrics in your organization. These metrics provide a snapshot of the system state and quality that you can use throughout the development process--especially when approaching the release date.

John Fodeh, HP Software

Root Cause Analysis: Dealing with Problems, Not Symptoms

Test managers often choose solutions to problems without sufficient analysis, resulting in a cover-up of the symptom rather than a solution to the underlying problem. Later, the problem may surface again in a different disguise, and we may mishandle it again, just as we did initially. Alon Linetzki describes a simple process you can use to identify the root causes of problems and create an appropriate solution to eliminate them.

Alon Linetzki, The Sela Group

Seven Habits of Highly Effective Automation Testers

In many organizations, test automation is becoming a specialized career path. Mukesh Mulchandani identifies seven habits of highly effective automation specialists and compares them with Stephen Covey's classic Seven Habits of Highly Effective People. Mukesh not only describes behavior patterns of effective automation testers but he also discusses how to internalize these patterns so that you use them instinctively.

Mukesh Mulchandani, ZenTEST Labs

Telling Your Exploratory Story

What do you say when your manager asks you, "How did it go today?" If you have a pile of test cases on your desk, it may be acceptable for you to say, "I ran x% of these tests today," or "I'll be finished with this stack in y days at the rate I'm going." However, if you're using exploratory testing as your approach, it may be downright terrifying to try to give a status report, especially if project stakeholders think exploratory testing is irresponsible and downright reckless compared to pre-scripted test cases.

Jon Bach, Quardev, Inc.

The Hard Truth about Offshore Testing

Jim Olsen, Dell Inc.

Understanding Test Coverage

Test coverage of application functionality is often poorly understood and always hard to measure. If they do it at all, many testers express coverage in terms of numbers, as a percentage or proportion--but a percentage of what? When we test, we develop two parallel stories. The "product story" is what we know and can infer about the product--important information about how it works and how it might fail.

Michael Bolton, DevelopSense

Systematic Test Design...All on One Page

Good test design is a key ingredient for effective and efficient testing. Although there are many different test design methods and a number of books explaining them in detail, studies have shown that the regular use of these methods is actually quite limited. What are the reasons behind our neglecting to use these methods? How can we improve our practices to design better tests?

Peter Zimmerer, Siemens AG
