Conference Presentations

Keeping it Between the Ditches: A Dashboard to Guide Your Testing

As a test manager, you need to know how testing is proceeding at any point during the test. You are concerned with important factors such as test time remaining, resources expended, product quality, and test quality. When
unexpected things happen, you may need additional information. Like the dashboard in your car, a test manager's dashboard is a collection of metrics that
can help keep your testing effort on track (and out of the ditch). In this session, Randall Rice will explore what should be on your dashboard, how to obtain the data, how to track the results and use them to make informed decisions, and how to convey the results to management. Randall will present examples of various dashboard styles.

  • Build your own test management dashboard
  • Select useful metrics for your dashboard
  • Use the dashboard to successfully control the test
Randy Rice, Rice Consulting Services Inc.
STARWEST 2006: Session-Based Exploratory Testing: A Large Project Adventure

Session-based exploratory testing has been proposed as a new and improved approach to software testing. It promotes a risk-conscious culture that focuses on areas where there are likely to be defects and allows for rapid course corrections in testing plans to accommodate testing "discoveries", feature-creep, and schedule changes.
How can a test manager take a highly talented manual testing team, accustomed to running test scripts, and
introduce the agility of an exploratory approach? What can be done to communicate the risks inherent in feature-creep
and schedule changes to senior stakeholders in a meaningful way? Bliss will demonstrate how he successfully
implemented session-based exploratory testing while maintaining and even improving the code quality. Using the
tool he developed (available for free download) and metrics available with this approach, stakeholders get real-time visibility into testing progress.

George Bliss, Captaris
Say Yes - or Say No? What to Do When You're Faced with the Impossible

The ability to communicate is a tester's (and test manager's) most important skill. Imagine this scenario. You're a test manager. Your team is working as hard as they can. You're at full capacity, trying to find time to test the new system your boss just gave you. And now your boss is in your office, asking you to take on one more assignment. What do you do? Say "Yes" or say "No"? Johanna Rothman shows you how to make a compelling case and communicate effectively the work you have and the work you can accomplish, making an impossible situation possible.

Johanna Rothman, Rothman Consulting Group, Inc.
Software Security Testing: It's Not Just for Functions Anymore

What makes security testing different from classical software testing? Part of the answer lies in expertise, experience, and attitude. Security testing comes in two flavors and involves standard functional security testing (making sure that the security apparatus works as advertised), as well as risk-based testing (malicious testing that simulates attacks). Risk-based security testing should be driven by architectural risk analysis, abuse and misuse cases, and attack patterns. Unfortunately,
first-generation "application security" testing misses the mark on all fronts. That's because canned black-box probes can, at best, show you that things are broken, but say very little about the total security posture. Join Gary McGraw to learn what software security testing should look like, what kinds of knowledge testers must have to carry out such testing, and what the results may say about security.

Gary McGraw, Cigital Inc.
How to Build Your Own Robot Army

Software testing is tough: it can be exhausting, and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation to the shoulders of a test machine army. But we're not talking about the run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson will show you how to create your robot army using tools lying around on the Web.
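The "programs that create and execute tests you never thought of" that Robinson describes are commonly built as model-based random walks. The sketch below is a minimal illustration of that idea, assuming a hypothetical bounded-stack API as the system under test; the class names and step counts are invented for the example, not taken from the talk.

```python
import random

class Stack:
    """System under test: a simple bounded stack (hypothetical example)."""
    def __init__(self, capacity=3):
        self.items, self.capacity = [], capacity
    def push(self, x):
        if len(self.items) >= self.capacity:
            raise OverflowError("stack full")
        self.items.append(x)
    def pop(self):
        if not self.items:
            raise IndexError("stack empty")
        return self.items.pop()

def random_walk_test(steps=200, seed=0):
    """Generate and execute a random action sequence, checking the
    implementation against an obviously-correct model (a plain list)."""
    rng = random.Random(seed)
    sut, model = Stack(), []
    for _ in range(steps):
        if rng.choice(["push", "pop"]) == "push":
            value = rng.randint(0, 99)
            if len(model) < sut.capacity:  # only push when the model says it is legal
                sut.push(value)
                model.append(value)
        elif model:  # only pop when the model says it is legal
            assert sut.pop() == model.pop(), "implementation diverged from model"
    return steps

random_walk_test()
```

Because the walk is driven by a seeded random generator, any divergence it finds is reproducible, which is what lets a "robot army" of such walkers run unattended overnight.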

Harry Robinson, Google
Dispelling Testing's Top Ten Illusions

Join Lloyd Roden as he unveils his list of the top ten illusions that we may face as testers and test managers. One illusion that we often encounter is "quality cannot be measured." While it is difficult to measure, Lloyd believes it can and should be measured regularly; otherwise we never improve. Another illusion Lloyd often encounters is "anyone can test." Typically, when the project is behind schedule, inexperienced people are "drafted" to help with testing. While this gives us the illusion that more hands are better, we know the real impact of inexperienced people on the final product. Beyond identifying illusions when they appear, Lloyd will describe ways to reduce their impact or eliminate them entirely from your organization. Only then can we become ultra-effective test professionals who are respected within our organizations.

Lloyd Roden, Grove Consultants
STARWEST 2006: Test Estimation: Painful or Painless?

As an experienced test manager, Lloyd Roden believes that test estimation is one of the most difficult parts of test management. In estimation we must deal with destabilizing dependencies, such as poor-quality code received by testers.
Lloyd presents seven powerful ways to estimate test effort. Some are easy and quick but prone to abuse; others are more detailed and complex but may be more accurate. Specifically, Lloyd discusses FIA (Finger in the Air), Formula or Percentage, Historical, Parkinson's Law v. Pricing-to-Win estimates, Work Breakdown Structures, Estimation Models, and Assessment Estimation. Spreadsheets and utilities will be available during this session to help you, as a tester or test manager, estimate better. By the end of this session you should feel that the painful experience of test estimation could, in fact, become a painless one.

  • Uncover common destabilizing dependencies
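The Formula or Percentage approach mentioned above can be reduced to a one-line calculation. The sketch below is an illustration only: the 35% test ratio and 20% contingency are assumed figures for the example, not values from Roden's session.

```python
def percentage_estimate(dev_effort_days, test_ratio=0.35, risk_buffer=0.2):
    """Formula/Percentage estimation: test effort as a fixed ratio of
    development effort, padded with a contingency for destabilizing
    dependencies. The ratios here are illustrative assumptions."""
    base = dev_effort_days * test_ratio
    return round(base * (1 + risk_buffer), 1)

# 100 days of development -> 35 base test days + 20% contingency
print(percentage_estimate(100))  # -> 42.0
```

The method's appeal and its weakness are the same thing: the answer is only as good as the ratio, which is why it is quick but, as the abstract warns, prone to abuse.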
Lloyd Roden, Grove Consultants
STARWEST 2006: The Art of SOA Testing: Theory and Practice

SOA (Service Oriented Architecture) based on Web Services standards has ushered in a new era of how applications are being designed, developed, and deployed. But the promise of SOA to increase development productivity poses new challenges for testers: dealing with multiple Web Services standards and implementations, legacy applications (of unknown quality) now exposed as Web services, weak or non-existent security controls, and services of possibly diverse origins chained together to create applications. Learn concepts and techniques to master these challenges through powerful techniques such as WSDL chaining, schema mutation, and automated filtration. Learn how traditional techniques such as black, gray, and white box testing are applied to SOA testing to maximize test coverage, minimize effort, and release better products.

  • Learn the Four Pillars of SOA Testing
Rizwan Mallal, Crosscheck Networks
Complete Your Automation with Runtime Analysis

So, you have solid automated tests to qualify your product. You have run these tests on various platforms. You have mapped the tests back to the design and requirements documents to verify full coverage. You have confidence that
results of these tests are reliable and accurate. But you are still seeing defects and customer issues. Why? Could it be that your test automation is not properly targeted? Solid automated testing can be enhanced through runtime
analysis. Runtime analysis traces execution paths, evaluates code coverage, checks memory usage and memory leaks, exposes performance bottlenecks, and searches out threading problems. Adding runtime analysis to your
automation efforts provides you with information about your applications that cannot be gained even from effective automated testing.

  • Learn how runtime analysis enhances automation
  • Evaluate the pros and cons of code coverage
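One of the runtime checks described above, tracking memory usage alongside a test, can be sketched with Python's standard-library `tracemalloc` module. This is a minimal illustration of wrapping an automated test with a memory trace; the function under test and its name are hypothetical.

```python
import tracemalloc

def build_cache(n):
    """Function under test (hypothetical): builds an in-memory cache."""
    cache = []
    for _ in range(n):
        cache.append("x" * 100)
    return len(cache)

def run_with_memory_trace(fn, *args):
    """Wrap an automated test with a simple runtime memory check,
    returning the test result plus the peak bytes allocated."""
    tracemalloc.start()
    result = fn(*args)
    _current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, peak

result, peak_bytes = run_with_memory_trace(build_cache, 1000)
print(result, peak_bytes > 0)
```

A functional pass/fail verdict alone would miss the allocation profile; recording peak usage per test run is the kind of extra signal runtime analysis adds on top of the automation you already have.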
Poonam Chitale, IBM Rational
Building a Testing Factory

At Royal Bank Financial Group we are building a testing factory. Our vision is that code enters as raw material and exits as our finished product: thoroughly tested. As a roadmap for our work, we have used the IT Infrastructure Library (ITIL) standard. ITIL is well known throughout Europe and Canada but has yet to make inroads in the United States. It defines four disciplines: service support,
service delivery, the business perspective, and application management. These
disciplines define processes such as incident management, problem management, availability management, change management, and many others. Join Patricia Medhurst as she discusses their success and their next steps in completing their testing factory.

  • Learn how Royal Bank built their test factory
  • Understand how to integrate individual processes into a cohesive whole
  • Determine if ITIL would be useful for your test organization
Patricia Medhurst, RBC Financial Group
