Conference Presentations

Testing in Turbulent Projects

Turbulent weather such as tornadoes is characterized by chaotic, random, and often surprising and powerful pattern changes. Similarly, turbulent software projects are characterized by chaotic, seemingly random project changes that happen unexpectedly and with force. Dealing with turbulence is about dealing with change. Testing teams must contend with continuously changing project requirements, design, team members, business goals, technologies, and organizational structures. Test managers and leaders should not just react to change; instead, they need to learn how to read the warning signs of coming change and seek to discover the source of impending changes. Rob Sabourin shares his experiences organizing projects for testing in highly turbulent situations. Learn how to identify context drivers and establish active context listeners in your organization.

Robert Sabourin, AmiBug.com, Inc.
What Your QA Program Is Missing

Many software development organizations have a Quality Assurance (QA) component. Often, QA is just an impressive name for "we do some testing before rolling out our product." True QA encompasses an integrated process that guides software development from inception to delivery using approaches such as CMMI®, Six Sigma, and ISO. The software testing that occurs near the end of a software development process is a separate, standalone activity that assesses "fitness for use" before delivery. Dawn Haynes explains the differences between quality measures and software requirements with an interactive exercise. She discusses ways for you to evaluate and measure progress toward quality goals during development and explores ways to build management support and develop a skilled QA team. So, if you're not implementing a truly formal QA program, come see what you are missing.

Dawn Haynes, PerfTestPlus, Inc.
Table-Driven Requirements with the FIT Testing Tool

Eliciting and articulating customer requirements clearly and precisely is difficult, to say the least. Inaccuracies often creep in when translating requirements from business ideas into software models. Working with many clients, Ken Pugh found that creating a large number of tables of examples, however time consuming the tables are to create, adds to the clarity and precision of requirements. He found, too, that if you can use those same example tables as tests, the time is well spent. Ken presents table-driven requirements as an approach to defining both functional and test specifications. Examine business rules, user interface flows, user-observable states, and other forms of useful tables. Learn how to employ the Framework for Integrated Test (FIT) to turn table-driven requirements into table-driven tests.
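
As a rough illustration of the idea (not material from the session itself), a FIT column fixture in Java connects one requirements table to the system under test; the DiscountRule class and its 5% business rule below are invented for the example:

    // Hypothetical FIT fixture; requires the FIT library (fit.jar) on the classpath.
    import fit.ColumnFixture;

    public class DiscountRule extends ColumnFixture {
        // Input column: FIT fills this public field from each row of the table.
        public double orderTotal;

        // Calculated column: the table header names it "discount()", and FIT
        // compares the return value against the expected cell in each row.
        public double discount() {
            return orderTotal >= 1000 ? orderTotal * 0.05 : 0.0;  // assumed rule
        }
    }

In the requirements document, the same information is an ordinary table: the first row names the fixture (DiscountRule), the second row holds the column headers orderTotal and discount(), and each remaining row is a concrete example that FIT runs as a test and marks as passing or failing.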

Ken Pugh, Net Objectives
When to Ship? Choosing Quality Metrics

It's time to ship your product, and you're looking at pages of data about the testing work you've done over the last year. How well does this data prepare you for making the recommendation to ship the product or delay it, perhaps once again? Do you rely primarily on the data, or do you fall back on "gut feel" and intuition to make your decision? In this highly interactive session, Alan Page discusses how common measurements, such as code coverage, bug counts, and test pass rates, are often misused, misinterpreted, and inaccurate as predictors of software quality. Learn how to select both quantitative and qualitative metrics that evaluate your progress and help you make important decisions and recommendations for your product. Share your own ideas for test and quality metrics, and learn how to evaluate those metrics to ensure that they are accurately answering the questions you need them to answer.

Alan Page, Microsoft
Taking Control Using Virtual Test Lab Automation

Because software and its environments have grown more complex, the expectations placed on test labs have grown significantly. Even under tighter budget constraints, test labs are expected to rapidly provide the infrastructure to create varied test environments for executing test cases. Traditionally, only physical machines and bare-bones hypervisors formed the lab infrastructure, and testers spent a significant amount of time creating and re-creating pristine test environments. Jim Singh explains how virtual lab automation (VLA) leverages server virtualization and redesigns the lab to make it relevant to a broad set of stakeholders, including development, test, support, pre-sales, and training. Learn how you can create multi-machine configurations, execute test cases, and capture entire machine states quickly for later defect analysis.

Jim Singh, VMLogix, Inc.
"A" is for Abstraction - Managing Change in Successful Test Automation

Implementing a test automation project can be like a mountain climbing expedition: many find the task daunting, some attempt it, and only a few are successful. Showing real-world examples, such as the need for scripting across different platforms, Mark Meninger explains how to embrace change and use abstraction to provide creative ideas and approaches for your test automation project. You'll learn how to implement a platform abstraction layer in the automation architecture to overcome multi-platform issues and much more. Mark helps you understand how change and abstraction can impact your automation project. You can become one of the few who are truly successful by embracing abstraction in your test automation architecture. Otherwise, you may spend money, invest in tools, and build a team that never makes it to the top of the test automation mountain.
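
To make the platform abstraction layer idea concrete, here is a minimal sketch in Java (not Mark's actual architecture; the Platform interface and its implementations are invented for illustration). Test scripts code against the interface, so supporting a new platform means adding one implementation rather than rewriting the scripts:

    // Hypothetical platform abstraction layer for test automation (file: Platform.java).
    public interface Platform {
        void launchApplication(String appName);   // start the application under test
        void killProcess(String processName);     // clean up between test cases
        String getTempDirectory();                // platform-specific scratch location
    }

    // One implementation per supported platform; each hides the OS-specific details.
    class WindowsPlatform implements Platform {
        public void launchApplication(String appName) { /* e.g., cmd /c start <app> */ }
        public void killProcess(String processName)   { /* e.g., taskkill /IM <name> */ }
        public String getTempDirectory()               { return "C:\\Temp"; }
    }

    class LinuxPlatform implements Platform {
        public void launchApplication(String appName) { /* e.g., exec the binary */ }
        public void killProcess(String processName)   { /* e.g., pkill <name> */ }
        public String getTempDirectory()               { return "/tmp"; }
    }

A test script asks for the right Platform once at startup and never branches on the operating system again, which is what keeps multi-platform scripting maintainable as new targets are added.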

Mark Meninger, Research In Motion
Applying Test Design Concepts in the Real World

Have you ever read a book, taken a class, or attended a conference session on test design concepts that you never actually incorporated into your work? Have others on your team rejected new design techniques that appeared promising to you? Sometimes the path from concept to real-world application can be fraught with challenges. Marie Lipinski Was shares the path she took to bring formal test design techniques from the classroom to the workroom. Marie explains how she incorporated test design techniques, such as mind mapping, decision tables, pairwise testing, and user scenario testing, into the existing test processes at CNA Insurance. From the case studies Marie offers, you will learn how to present these new concepts to key stakeholders, quantify the cost/benefit to management, and overcome the challenges of changing the status quo.

Marie Was, CNA Insurance
The Strategic Alignment of Testing and Development

Strategic alignment between testing and development is vital for successful systems development. Missing, however, have been actionable, how-to approaches for assessing and enhancing this alignment. Jasbir Dhaliwal and Dave Miller present STREAM, the Software Testing Reassessment and Alignment Methodology, a systematic approach used to achieve this alignment at both the strategy and execution levels. STREAM incorporates a step-by-step procedure that can be used to: 1) identify symptoms of developer-tester misalignment, 2) analyze and understand the misalignment, and 3) formulate action plans for fostering stronger developer-tester alignment. In addition, Jasbir and Dave identify specific mechanisms and tools for ensuring that the execution capabilities of testing groups are aligned with their stated strategies, a natural prerequisite for successful developer-tester alignment.

Jasbir Dhaliwal, FedEx Institute of Technology at the University of Memphis
Agile Testing in the Large: Stories and Patterns of Transformation

You're part of a large test organization that has invested money, sweat, and tears in test processes, plans, cases, and automation tools that have served you well. You've built a team that excels in your development environment. In fact, everyone is depending on you to verify sound engineering practices and formally assure product quality. Now agile methods are being adopted in your organization, and they're messing everything up. Developers and testers are pushed together with the hope that quality will somehow still happen. Is this your future? Bob Galen describes patterns of testing that he's found helpful in large-scale teams as they transition from traditional to agile testing.

Robert Galen, Software Testing Consultant
STAREAST 2009: Measuring the Value of Testing

Value is based on objectives, so why do we test? We test to find defects effectively, gain confidence in the software, and assess risk. So, the value of testing should be measured in terms of test effectiveness, confidence gained, and risk reduced. For testing effectiveness, the most useful metric is defect detection percentage (DDP): the number of defects found by testing divided by the total number of defects found by testing and by users in production. Dorothy Graham explains when to use this metric and outlines the choices, problems, and benefits of using DDP. In addition, Dorothy describes confidence and risk metrics that will tell you whether or not you are going in the right direction. She explains how to take costs into account to assess the return on investment for testing and outlines a simple way to bring home the message about the value of testing in terms of what the organization can save.
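
As a quick worked illustration with invented numbers (not Dorothy's data), if testing finds 90 defects before release and users report 10 more in production, DDP = 90 / (90 + 10) = 90%. A minimal Java sketch of the calculation:

    // Defect detection percentage (DDP) with assumed, illustrative numbers.
    public class DdpExample {
        public static void main(String[] args) {
            int foundByTesting = 90;       // defects found before release (assumed)
            int foundInProduction = 10;    // defects reported by users after release (assumed)
            double ddp = 100.0 * foundByTesting / (foundByTesting + foundInProduction);
            System.out.printf("DDP = %.1f%%%n", ddp);  // prints: DDP = 90.0%
        }
    }

Note that the result depends on when the production defect count is taken, which is part of why the choices around using DDP matter.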

Dorothy Graham, Software Testing Consultant
