Conference Presentations

How to Fake a Test Project

It has never been easier to fool your manager into thinking that you're doing a great job testing! James Bach covers all of today's most respected test fakery techniques: misleading test case metrics, vapid but impressive-looking test documentation, repeatedly running old tests "just in case they find something," carefully maintaining obsolete tests, methodology doublespeak, endless tinkering with expensive test automation tools, and taking credit for a great product that would have been great even if no one had tested it. James also covers best practices for blame deflection. By the time you're through, your executive management will not know whether to fire the programmers or the customers, but you know it will not be you. (Disclaimer: It could be you if an outsourcing company fakes it more cheaply than you do.)

  • Cautionary true stories of test fakery, both purposeful and accidental
James Bach, Satisfice, Inc.
Harnessing the Power of Randomized Unit Testing

It is a problem all testers have had. We write tests believing we know how the system should behave, what inputs will precede others, and which calls will be made first and which will be made last. Unfortunately, the system may not operate that way, and as a result our tests are inadequate. However, there is a solution to this problem: Randomized unit testing helps you find bugs in places you wouldn't even think to look by selecting call sequences and parameter values randomly. James Andrews explains the power and potential of randomized testing with demonstrations and case studies of real-world software defects found. He presents RUTE-J, a free Java package modeled after JUnit, which can help you develop code for testing Java units in a randomized way. James explains how assertion style, parameter range selection, and method weight selection can make randomized testing more effective and thorough.

James Andrews, University of Western Ontario
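The randomized approach Andrews describes can be illustrated with a small sketch. RUTE-J itself is a Java package; the Python below is only an analogue of the idea, and the `BoundedStack` unit, the reference model, and all names are hypothetical, invented for this example. The test driver picks calls and parameter values at random and checks an invariant after every step:

```python
import random

class BoundedStack:
    """Toy unit under test: a stack that refuses pushes beyond its capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = []
    def push(self, item):
        if len(self._items) < self.capacity:
            self._items.append(item)
            return True
        return False
    def pop(self):
        return self._items.pop() if self._items else None
    def size(self):
        return len(self._items)

def randomized_test(seed, steps=1000):
    """Drive the unit with randomly selected calls and parameter values,
    checking it against a trivial reference model after every call."""
    rng = random.Random(seed)          # seeded, so failures are reproducible
    stack, model = BoundedStack(capacity=5), []
    for _ in range(steps):
        if rng.choice(["push", "pop"]) == "push":
            value = rng.randint(-100, 100)        # random parameter value
            expected_ok = len(model) < 5
            assert stack.push(value) == expected_ok
            if expected_ok:
                model.append(value)
        else:
            assert stack.pop() == (model.pop() if model else None)
        assert stack.size() == len(model)          # invariant after every call
    return steps
```

Because the sequence is driven by a seeded random generator, any failing run can be replayed exactly by reusing its seed — the property that makes randomized testing practical for debugging.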
Unit Testing Code Coverage: Myths, Mistakes, and Realities

You've committed to an agile process that encourages test-driven development. That decision has fostered a concerted effort to actively unit test your code. But, you may be wondering about the effectiveness of those tests. Experience shows that while the collective confidence of the development team is increased, defects still manage to rear their ugly heads. Are your tests really covering the code adequately or are big chunks remaining untested? And, are those areas that report coverage really covered with robust tests? Andrew Glover explains what code coverage represents, how to effectively apply it, and how to avoid its pitfalls. Code coverage metrics can give you an unprecedented understanding of how your unit tests may or may not be protecting you from sneaky defects.

Andrew Glover, Stelligent
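The pitfall Glover alludes to — areas that "report coverage" without robust tests — can be shown in a few lines. This is an illustrative sketch, not from the talk; `discount` and both test functions are invented for the example:

```python
def discount(price, is_member):
    """Unit under test: members get 10% off."""
    if is_member:
        return price * 0.9
    return price

def weak_test():
    """Achieves 100% line coverage of discount() yet asserts nothing.
    A bug such as `price * 9.0` would execute 'covered' lines and pass."""
    discount(100, True)    # member branch executed: reported as covered
    discount(100, False)   # other branch executed: reported as covered

def strong_test():
    """Exercises the same lines but actually checks the results."""
    assert discount(100, True) == 90.0
    assert discount(100, False) == 100
```

Both tests produce identical coverage reports; only the second would catch a defect. This is why coverage is best read as a measure of what is *untested*, not as proof that the covered code is correct.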
Finding Success in System Testing

To achieve success in system testing (efficiently preventing important defects from reaching users), technical excellence is certainly necessary, but it is not sufficient. Even more important are the skills to influence the project and team behavior to prevent defects from ever reaching the system test. Nathan Petschenik shares his insights into the technical skills you need for a successful system test. In addition, he explains how system test leaders can and must change project attitudes and influence behavior to significantly impact the quality of the software that reaches the system test team. Among other recommendations, Nathan explains how getting developers to fulfill their testing role is one way system test team leaders can influence quality on projects.

Nathan Petschenik, Software Testing Services, Inc.
STAREAST 2007: Lightning Talks: A Potpourri of 5-minute Presentations

Lightning Talks are nine five-minute talks in a fifty-minute time period. Lightning Talks represent a much smaller investment of time than track speaking and offer the chance to try conference speaking without the heavy commitment. Lightning Talks are an opportunity to present your single, biggest bang-for-the-buck idea quickly. Use this as an opportunity to give a first-time talk or to present a new topic for the first time. Maybe you just want to ask a question, invite people to help you with your project, boast about something you did, or tell a short cautionary story. These things are all interesting and worth talking about, but there might not be enough to say about them to fill up a full track presentation. For more information on how to submit a Lightning Talk, visit www.techwell.com/lightningtalks.

Matthew Heusser, Priority-Health
Build a Model-Based Testing Framework for Dynamic Automation

The promises of faster, better, and cheaper testing through automation are rarely realized. Most test automation scripts simply repeat the same test steps every time. Join Ben Simo as he shares his answers to some thought-provoking questions: What if your automated tests were easier to create and maintain? What if your test automation could go where no manual tester had gone before? What if your test automation could actually create new tests? Ben says model-based testing can deliver all three. With model-based testing, testers describe the behavior of the application under test and let computers generate and execute the tests. Instead of writing test cases, the tester can focus more on the application's behavior. A simple test generator then creates and executes tests based on the application's modeled behavior. When an application changes, the behavioral model is updated rather than manually changing all the test cases impacted by the change.

Ben Simo, Standard & Poor's
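The workflow Simo describes — model the behavior, then let a simple generator create and execute tests — can be sketched in miniature. This is not Simo's framework; the login-screen model, the `LoginApp` stand-in, and all names here are hypothetical, invented for illustration:

```python
import random

# Behavioral model of a toy login screen: state -> {action: next_state}.
# When the application changes, only this table is updated.
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in":  {"logout": "logged_out", "view_profile": "logged_in"},
}

class LoginApp:
    """Stand-in for the application under test."""
    def __init__(self):
        self.state = "logged_out"
    def do(self, action):
        if action == "login_ok" and self.state == "logged_out":
            self.state = "logged_in"
        elif action == "logout" and self.state == "logged_in":
            self.state = "logged_out"
        # login_bad and view_profile leave the state unchanged

def generate_and_run(seed, steps=50):
    """Generate a random walk through the model, execute it against the
    application, and check that model and application agree at each step."""
    rng = random.Random(seed)
    app, state, path = LoginApp(), "logged_out", []
    for _ in range(steps):
        action = rng.choice(sorted(MODEL[state]))  # generator picks a legal action
        app.do(action)
        state = MODEL[state][action]
        assert app.state == state                  # oracle: model predicts the app
        path.append(action)
    return path
```

Each run is itself a generated test case; no individual test scripts exist to maintain, which is the maintenance advantage the abstract claims.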
Keyword-Driven Test Automation Illuminated

Test Automation has come a long way in the last twenty years. During that time many of today's most popular test execution automation tools have come into use, and a variety of implementation methods have been tried and tested. Many successful organizations began their automation effort with a data-driven approach and enhanced their efforts into what is now called keyword-driven test automation. Many versions of the keyword-driven test execution concept have been implemented. Some are difficult to distinguish from their data-driven predecessors. So what is keyword-driven test automation? Mark Fewster provides an objective analysis of keyword-driven test automation by examining the various implementations, the advantages and disadvantages of each, and the benefits and pitfalls of this automation concept.

Mark Fewster, Grove Consultants
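The core of the keyword-driven concept Fewster analyzes is a table of keywords, authored by testers, dispatched by a small interpreter onto concrete actions. The sketch below is a minimal illustration under invented names (`CalculatorFixture`, `run_table`), not any particular tool's implementation:

```python
# Keyword table: each row is (keyword, *arguments). Testers write these
# rows without needing to know the underlying automation code.
TEST_TABLE = [
    ("open", "calculator"),
    ("enter", 2),
    ("enter", 3),
    ("press", "add"),
    ("verify", 5),
]

class CalculatorFixture:
    """Maps each keyword onto a concrete action against a toy calculator."""
    def __init__(self):
        self.stack = []
    def open(self, app):
        assert app == "calculator"   # placeholder for launching the application
    def enter(self, number):
        self.stack.append(number)
    def press(self, operation):
        if operation == "add":
            b, a = self.stack.pop(), self.stack.pop()
            self.stack.append(a + b)
    def verify(self, expected):
        assert self.stack[-1] == expected

def run_table(table, fixture):
    """The keyword interpreter: dispatch each row to the matching method."""
    for keyword, *args in table:
        getattr(fixture, keyword)(*args)
    return len(table)
```

The distinction from a plain data-driven approach is visible here: the table varies the *actions* (keywords), not merely the data fed into one fixed script.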
Testing Requirements: Ensuring Quality Before the Coding Begins

Software that performs well is useless if it ultimately fails to meet user needs and requirements. Requirements errors are the number one cause of software project failures, yet many organizations continue to create requirements specifications that are unclear, ambiguous, and incomplete. What's the problem? All too often, requirements quality gets lost in translation between business people who think in words and software architects and engineers who prefer visual models. Joe Marasco discusses practical approaches for testing requirements to verify that they are as complete, accurate, and precise as possible, a process that requires new, collaborative approaches to requirements definition, communication, and validation.

Joe Marasco, Ravenflow
Crucial Test Conversations

Many test managers feel that Development or Management or The Business does not understand or support the contributions of their test teams. You know what? They're probably right! However, once we accept that fact, we should ask: Why? Bob Galen believes the cause is our ineffectiveness at 360º communication, in other words, at "selling" ourselves, our abilities, and our contributions. We believe that our work should speak for itself or that everyone should inherently understand our worth. Wrong! We need to work hard to create crucial conversations in which we communicate our impact on the product and the organization. Bob shares with you specific techniques for improving the communication skills of test managers and testers so that others in your organization will better understand your role and contributions.

Robert Galen, RGCG, LLC
Testing the Heathrow Terminal 5 Baggage Handling System (Before It Is Built)

London Heathrow Terminal 5 will open in March 2008. This new terminal will handle 30 million passengers a year, and all of these passengers will expect their baggage to accompany them on their flights. To achieve this end, a new baggage handling system is being built that will handle more than 100,000 bags a day. The challenge of testing the integrated software is related not only to its size and complexity but also to the limited time that will be available to test the software in its actual environment. Roger Derksen explains the vital role of factory integration testing using models that emulate the full system. Roger discusses the limitations of these techniques and explains what can and cannot be done in the factory environment and what issues still must be addressed on site.

  • A testing strategy for use on very large, complex systems
  • How to use models for testing when physical systems are unavailable
Roger Derksen, Transfer Solutions BV
