Conference Presentations

The Power of Modern Testing

Testing continues to be thought of as the poor neighbor of software engineering. This appears to be due to numerous well-publicized software failures. Les Hatton takes a closer look at how effective different forms of testing have been in certain areas. He asks why some technologies appear to be more valuable than others, while some seem doomed to be relentlessly ignored. While too many companies still don't recognize the strategic role of testing, and fail to make sufficient resources available, we seem to be making progress. This provocative presentation shows you what you can do to halt the upwardly spiraling cost of failure.

Les Hatton, University of Kent
Conversations I Never Expected to Have as a Test Manager

There are times in a test manager's career when the work situation becomes surreal. If you've been in situations where you think you must be dreaming, sometimes it helps to look at things from the other person's perspective. As we mature in our jobs, we can examine these situations and see how to better answer the questions we have about unexpected communications. In this session we'll look at some typical conversations and discuss alternative ways to help everyone find the true reality, then better deal with the situation. From her years of experience as a consultant and her personal encounters, Johanna Rothman shares her insights and gets you involved in discovering what's really being said in these strange conversations.

Johanna Rothman, Rothman Consulting Group
Applying Orthogonal Defect Classification Principles to Software Testing

Test escape analysis and corrective action tracking (TEACAT) is a method used to collect and utilize information about the causes of test escapes to prevent customer-found defects and improve internal test, development, and release processes. The TEACAT approach provides testers and test managers with the primary causes of defect escapes from the organization into the field. Suzanne Garner takes you through the test escape analysis process at Cisco and shows you how test-specific ODC fields can be employed to provide customer focus to test process improvement activities and ensure that test gaps are closed.

Suzanne Garner, Cisco Systems Inc
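The escape-analysis idea above can be illustrated with a minimal sketch: tally customer-found defects by an ODC-style "trigger" field (the test activity that should have exposed each defect) and rank the triggers to see where the biggest test gaps are. The field names and escape records here are hypothetical, not Cisco's actual TEACAT schema.

```python
from collections import Counter

def rank_test_gaps(escapes):
    """Tally escaped defects by ODC-style trigger and return triggers
    ranked by escape count, largest test gap first."""
    counts = Counter(defect["trigger"] for defect in escapes)
    return counts.most_common()

# Hypothetical escape records; field names are illustrative only.
escapes = [
    {"id": 1, "trigger": "workload/stress"},
    {"id": 2, "trigger": "recovery"},
    {"id": 3, "trigger": "workload/stress"},
    {"id": 4, "trigger": "configuration"},
    {"id": 5, "trigger": "workload/stress"},
]

for trigger, count in rank_test_gaps(escapes):
    print(f"{trigger}: {count} escapes")
```

Ranking by trigger rather than by component is what gives the analysis its test-process focus: the top entry names the kind of testing that most needs strengthening.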
Keyword Testing at the Object Level

It's time to put a new spin on the technique of keyword testing using a data-driven engine. Brian Qualters shows you how to effectively place your focus not on the action or process to be completed, but rather on the object type that's to be manipulated. This redirected focus lets you avoid the pitfalls and resource requirements encountered when you move to test another application. He demonstrates how this modified approach can be integrated into manual test case creation to dramatically improve efficiency.

Brian Qualters, TurboTesting Concepts
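The object-level focus described above can be sketched as a small data-driven engine that dispatches on the *type* of object named in each test-data row, rather than on a per-action keyword. All handler and object names below are hypothetical illustrations, not the presenter's actual framework.

```python
# Minimal keyword-engine sketch: one handler per object type, so moving
# to a new application mostly means supplying new object handlers.

def handle_textbox(target, value):
    return f"typed '{value}' into textbox {target}"

def handle_button(target, value):
    return f"clicked button {target}"

def handle_checkbox(target, value):
    return f"set checkbox {target} to {value}"

# Dispatch table keyed by object type (hypothetical names).
OBJECT_HANDLERS = {
    "textbox": handle_textbox,
    "button": handle_button,
    "checkbox": handle_checkbox,
}

def run_step(object_type, target, value=None):
    """Execute one data-driven test step, selected by object type."""
    handler = OBJECT_HANDLERS[object_type]
    return handler(target, value)

# A manual test case expressed as data rows: (object type, target, value).
test_case = [
    ("textbox", "username", "qa_user"),
    ("textbox", "password", "secret"),
    ("button", "login", None),
]

for step in test_case:
    print(run_step(*step))
```

Because the test case is pure data, manual testers can author rows without scripting, which is where the efficiency gain in test case creation comes from.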
Getting Things Done: Practical Web Application/e-Commerce Stress Testing

Web and e-commerce applications are still the rising, often unreachable, stars of the testing world. Your team's ability to effectively stress test Web applications, before your customers do, is critical. This double-track session shows you the tools that support stress testing, including several that cost absolutely nothing. It also walks you through a variety of approaches to stress testing that are available during all phases of development. This journey allows you to develop a plan to automate your stress testing, as well as know how and when to implement it as part of the software development process.

Robert Sabourin, AmiBug.com Inc
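As a rough idea of what a zero-cost stress-testing harness can look like, the sketch below fires a batch of concurrent "transactions" and reports throughput and worst-case latency. The transaction here is a hypothetical stand-in (a short sleep) for a real HTTP request to the application under test; the function names and parameters are illustrative only.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(i):
    """One simulated request; returns its observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)          # stand-in for the real request/response
    return time.perf_counter() - start

def stress(total_requests=50, concurrency=10):
    """Run total_requests transactions across a pool of workers and
    summarize throughput and worst-case latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        t0 = time.perf_counter()
        latencies = list(pool.map(transaction, range(total_requests)))
        elapsed = time.perf_counter() - t0
    return {
        "requests": total_requests,
        "throughput_rps": total_requests / elapsed,
        "max_latency_s": max(latencies),
    }

print(stress())
```

Raising `concurrency` while watching `max_latency_s` degrade is the essence of a stress run: you are looking for the load level at which the application stops meeting its response-time goals.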
A Test Automation Harness for the PocketPC

The emergence of the handheld platform is an exciting opportunity to reapply quality and usability paradigms. It gives us the chance to establish new, industrywide quality benchmarks for handheld applications that may propel society beyond the traditional human-machine interface. Handheld-based computing has its potential, and its limits. When moving from desktop-centered quality assurance to handheld-centered applications, there will be changes that affect software testing techniques. We must be prepared. This session covers the basics of handheld automation, details what's needed before designing test automation, and demonstrates a repository for tests designed for the PocketPC.

Ravindra Velhal, Intel Corporation
Traps That Can Kill a Review Program (And How to Avoid Them)

Technical reviews have been around for a long time, and they're generally recognized as a "good thing" for building quality software and reducing the cost of rework. Yet many software companies start to do reviews only to have the review program falter. So the question remains: How can you succeed with a review program? Management support and good training for review leaders are a good place to start. But it's the details of implementation that truly determine whether reviews will stick or fall by the wayside. Esther Derby offers her insights based on observations from both successful and failed review programs.

Esther Derby, Esther Derby Associates Inc
Automated Testing for Programmable Logic Control Systems

Developing real-time, automated testing for mission-critical programmable logic controller (PLC)-based control systems has been a challenge for many scientists and engineers. Some have elected to use customized software and hardware as a solution, but that can be expensive and time-consuming to develop. Reginald Howard shows you a way to integrate a suite of commercially available, off-the-shelf tools and hardware to develop a scalable, Windows-based testing platform that's capable of performing an array of different tests including, but not limited to, black box, destructive, regression, and system security testing. He describes the use of the Jelinski-Moranda statistical model for determining expected results from automated tests.

Reginald Howard, Advanced Systems Integration Inc. and Jon Hawkins, Alliance Technical Solutions
Improvement is a Journey: A Software Test Improvement Roadmap

With the wide array of software testing practices out there, how do you know where to start? Karen Rosengren shows you how a group of IBM testers developed a road map for implementing practices that takes into consideration things such as the skills required to implement them and how the practices relate to one another. She also explains IBM's Software Testing Improvement Road Map (STIR), which defines the levels of testing practices from "basic" to "engineered."

Karen Rosengren, IBM
Test Lab Stability through Health Check Test Automation

New application code is installed on Sunday. Your test team arrives on Monday to run test scripts and certify the release. Unfortunately, one environmental problem leads to another and suddenly it's Friday before you run your first test script against the new code. Does this sound familiar? One way to buck this trend is to run daily health checks on the test environment. By running daily health checks, you'll minimize the time required to test new application code installs. Plus, you'll improve your test environment stability, reduce the number of variables to examine when a test fails, and reduce tension between your development and test teams.

John Rappa, Verizon Communications
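A daily health check of the kind described above can be as simple as a script that runs each environment check and reports pass/fail before the test day begins. The check names and their stubbed logic below are hypothetical placeholders; a real version would open a database connection, hit a status URL, and so on.

```python
# Sketch of a daily test-environment health check. Each check returns
# True (healthy) or False; a crashing check counts as a failure.

def check_database():
    return True   # e.g. open a connection and run a trivial query

def check_app_server():
    return True   # e.g. request a known status page

def check_disk_space():
    return True   # e.g. compare free space against a threshold

HEALTH_CHECKS = {
    "database": check_database,
    "app_server": check_app_server,
    "disk_space": check_disk_space,
}

def run_health_checks(checks=HEALTH_CHECKS):
    """Run every check and return {name: passed} results."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    return results

for name, ok in run_health_checks().items():
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

Scheduling this to run before the team arrives means an environment failure is flagged on Monday morning, not discovered mid-week through failing test scripts.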
