Conference Presentations

STAREAST 2009: The Case Against Test Cases

A test case is a kind of container. You already know that counting the containers in a supermarket would tell you little about the value of the food they contain. So, why do we count test cases executed as a measure of testing's value? The impact and value of a test case vary greatly from one to the next. In many cases, the percentage of test cases passing or failing reveals nothing about the reliability or quality of the software under test. Managers and other non-testers love test cases because they provide the illusion of both control and value for money spent. However, that doesn't mean testers have to go along with the deceit. James Bach stopped managing testing with test cases long ago, switching instead to test activities, test sessions, risk areas, and coverage areas to measure the value of his testing. Join James as he explains how you can make the switch, and why you should.

James Bach, Satisfice, Inc.
Building a Quality Dashboard for Your Project

Jason Bryant shows how you can transform readily available raw data into visual information that improves decision making, using simple measures that give both testing and development managers real leverage. A quality dashboard helps focus regression tests on areas of high code turmoil, ensures issues are resolved before beta, identifies risks in the defect pool, and provides information for monitoring the team's adherence to standard processes. Creating, measuring, and monitoring release criteria are fundamental practices for ensuring consistent delivery of software products. Schlumberger has implemented a quality dashboard that helps them continuously gauge how projects are progressing against their quality release criteria (QRC). By using dashboard data, Schlumberger makes better decisions and can then see how those decisions affect projects.

Jason Bryant, Schlumberger Information Solutions
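
As a small illustration of the quality release criteria idea (the criteria, thresholds, and field names below are assumptions for illustration, not Schlumberger's actual QRC), release criteria can be written down as explicit checks that a dashboard evaluates automatically from raw project data.

```python
# Hypothetical quality release criteria (QRC) check. The criteria and
# thresholds are illustrative assumptions, not Schlumberger's actual QRC.

def meets_release_criteria(metrics: dict) -> bool:
    """Return True only if every release criterion is satisfied."""
    criteria = {
        "open_critical_defects": lambda v: v == 0,      # no critical defects open
        "regression_pass_rate": lambda v: v >= 0.95,    # regression suite mostly green
        "lines_changed_last_week": lambda v: v <= 500,  # low code "turmoil" before release
    }
    return all(check(metrics[name]) for name, check in criteria.items())


if __name__ == "__main__":
    snapshot = {
        "open_critical_defects": 0,
        "regression_pass_rate": 0.97,
        "lines_changed_last_week": 320,
    }
    print(meets_release_criteria(snapshot))  # True: this snapshot would pass the gate
```
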
Integrating Divergent Testing Approaches at Cisco

Many large organizations have evolved their test processes project by project and department by department, leading to inefficient practices, overlapping activities, redundant test environments, shelfware test tools, and more. It is possible, however, to focus on a few key areas and bring even the most wildly different test approaches together. Bill Schongar describes the real-world testing problems at Cisco: thousands of test engineers, a never-ending variety of practices, and numerous tools, all deployed across a large spectrum of environments. Employing collaborative communication among test groups, standardized test coding practices, common test environments, best-of-breed test tooling, and consolidated results tracking, Cisco was able to integrate their diverse testing approaches successfully. Discover what worked (and what failed) and how Cisco made testing faster, more effective, and less painful, even fun at times.

Bill Schongar, Cisco Systems, Inc.
Improve Your Testing with Static Analysis

Static analysis is a technique for finding defects in code without executing it. Static analysis tools are easy to use because no test cases are required, and the technology has advanced significantly over the last few years. Although their use is increasing, many misconceptions about the capabilities of these tools still exist. Paul Anderson describes static analysis tools and how they work, and clarifies their strengths and limitations. He demystifies static analysis jargon, explaining terms such as "object-sensitive" and "context-sensitive." Paul describes how these tools can make traditional testing activities more effective and where they fit best in the software lifecycle. He presents data from real case studies to demonstrate the tools' effectiveness in practice. Gain a better understanding of the technology so that you can decide whether to apply it.

Paul Anderson, GrammaTech
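
As a hypothetical illustration (my own sketch, not material from Paul's talk), the snippet below shows the kind of latent defect a static analysis tool can report without ever executing the code: on the error path, `config` is `None` when it is dereferenced.

```python
# Hypothetical example of a defect that static analysis can flag without
# running the code: a possible None dereference on the error path.
from typing import Optional


def load_config(path: str) -> Optional[dict]:
    """Return a dict of key=value settings, or None if the file is missing."""
    try:
        with open(path) as handle:
            return {
                line.split("=", 1)[0].strip(): line.split("=", 1)[1].strip()
                for line in handle
                if "=" in line
            }
    except FileNotFoundError:
        return None


def get_timeout(path: str) -> int:
    config = load_config(path)
    # Defect: if the file is missing, config is None and the subscript below
    # raises TypeError. A path- and context-sensitive analyzer can report
    # this by reasoning about load_config's return values at this call site.
    return int(config["timeout"])
```
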
STAREAST 2009: The Marine Corps Principles of Leadership

Even if you have the best tools and processes in the world, if your staff is not motivated and productive, your testing efforts will be weak and ineffective. Retired Marine Colonel Rick Craig describes how using the Marine Corps Principles of Leadership can help you become a better leader and, as a result, a better test manager. Learn the difference between leadership and management and how they complement each other. Join in the discussion and share ideas that have helped energize your testers (and those that didn't). Rick discusses motivation, morale, training, span of control, immersion time, and how to promote the testing discipline within your organization. He also addresses the importance of influence leaders and how they can be used as agents of change.

Rick Craig, Software Quality Engineering
Improve Your Testing through Automation

Are you wondering how to make more progress with your test automation efforts? Do you understand how to measure the efficiency and effectiveness of your automation activities? Jim Sartain shares the test automation journeys of two leading software companies, Intuit and Adobe Systems, both with long histories of investing in test automation. Some of these efforts were ad hoc, while others were carefully planned and built on software architected to be testable below the user interface. Jim describes the approaches he's used for improving testing through automation, some that worked and others that didn't. He explains how to create an individual and organizational mindset that values and relies on test automation. Discover how software developers and testers can work together to build and use automated tests as an integral part of the software development process.

Jim Sartain, Adobe Systems
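
As a minimal sketch of "testing below the user interface" (my own example, not Intuit's or Adobe's code), the idea is to drive the application's logic layer directly with automated checks, so the tests stay fast and stable even when the screens above them change.

```python
# Minimal sketch of testing below the UI: the business rule is exercised
# directly, with no browser or GUI automation involved. apply_discount is
# a hypothetical function standing in for the application's logic layer.
import unittest


def apply_discount(subtotal: float, coupon: str) -> float:
    """Hypothetical business rule that the UI would normally invoke."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(subtotal * (1 - rates.get(coupon, 0.0)), 2)


class DiscountTests(unittest.TestCase):
    def test_known_coupon_is_applied(self):
        self.assertEqual(apply_discount(100.0, "SAVE10"), 90.0)

    def test_unknown_coupon_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(100.0, "BOGUS"), 100.0)


if __name__ == "__main__":
    unittest.main()
```
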
Beyond Testing: Becoming Part of the Innovation Machine

Testing, once a marginalized function at Google, is now an integral part of Google's innovation machine. Patrick Copeland describes how this vital transformation took place. As he was analyzing how to be more efficient and better align his testing team with the needs of the company, Patrick realized they had to move beyond "just testing" and become a force for change. His approach was based on five powerful principles: (1) Building feature factories rather than products, (2) Embedding testing at the grass roots level, (3) Solving problems of scale with technology, (4) Automating at the right level of abstraction, (5) Only doing what the team can do well. Learn how Google test teams used these principles to shift from a "service group" composed predominantly of exploratory testers to an "engineering group" with technical skills.

Patrick (Pat) Copeland, Google
Crossing the Chasm: Agile Transitions for Test Teams

Even if agile development has "crossed the chasm" and is becoming a mainstream set of practices, testers are often left behind when development teams "go agile." Developers learn test-driven development, continuous integration, refactoring, pair programming, and more. Project managers receive ScrumMaster training. What do testers get? Too often, just a wave to follow as the rest of the organization makes the move. Testers need some answers to their questions: If developers are writing tests, what am I supposed to do? How can I possibly keep up with two-week iterations and constantly changing requirements? Janet Gregory describes the skills that are vital for agile testers and discusses how testers can engage with agile development teams.

Janet Gregory, DragonFire Inc.
What Haven't You Noticed Lately? Building Awareness in Testers

"What haven't you noticed lately?" Marshall McLuhan is said to have asked this paradoxical question-a vital one for testers, because it prompts more questions about things that testers could and should notice. Great testing is about noticing things and asking questions about them. Have you ever found a problem in a program without using a named testing technique or found that some testers seem to be magnets for bugs, seeing things that you don't? As a test manager, do you wish that your team could look beyond the obvious and discover more defects? Have you noticed that artists, comedians, designers, and novelists notice things that the rest of us don't notice? Michael Bolton believes that many important problems in our products are not found by using formulaic testing techniques. Instead, they are discovered through a rich set of cognitive skills that can be taught and learned.

Michael Bolton, DevelopSense
The Testing Dashboard: Becoming an Information Provider

Primary concerns for test managers are keeping the testing on schedule, meeting test objectives, making sure tests are effective, and satisfying stakeholders. Like taking a trip by car, you have a destination and only so many resources to get there. By keeping an eye on your car's dashboard, you know your status anywhere along the way. Likewise in testing, a dashboard is an effective way to organize and communicate the status of testing so that you and others can easily evaluate your progress. Randy Rice describes the test metrics that can help test managers better understand and report the status of testing. He explains which metrics are right for your projects, how to gather them, how to create a testing dashboard to display them, and how to make mid-course corrections so your testing project reaches its destination safely. Randy also shares his ideas on how to avoid using metrics in incorrect and dangerous ways.

Randy Rice, Rice Consulting Services
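
As a hedged sketch of the dashboard idea (the field names and thresholds are assumptions for illustration, not Randy's material), a testing dashboard typically rolls a few raw counts up into at-a-glance status indicators per risk or coverage area.

```python
# Illustrative roll-up of raw test data into dashboard status values.
# Thresholds and field names are assumptions, not Randy Rice's metrics.
from dataclasses import dataclass


@dataclass
class AreaStatus:
    name: str           # risk or coverage area being reported
    planned: int        # tests planned for this area
    executed: int       # tests run so far
    open_defects: int   # unresolved defects found in this area

    @property
    def progress(self) -> float:
        return self.executed / self.planned if self.planned else 0.0

    @property
    def status(self) -> str:
        if self.progress >= 0.9 and self.open_defects == 0:
            return "green"
        if self.progress >= 0.5 and self.open_defects <= 3:
            return "yellow"
        return "red"


areas = [AreaStatus("Checkout", 40, 38, 0), AreaStatus("Reporting", 25, 10, 5)]
for area in areas:
    print(f"{area.name:10s} {area.progress:5.0%} complete  status={area.status}")
```
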
