Conference Presentations

STARWEST 2009: The Irrational Tester

As a tester or test manager, you probably have wondered just how important reasoning and rational thinking actually are in many management decisions. It seems that many decisions are influenced by far more (or far less) than thoughtful analysis. Surprise! Testers make decisions every day that are just as irrational as those made by the managers about whom they complain. James Lyndsay presents his view of tester bias—why we so often labor under the illusion of control, how we lock onto the behaviors we're looking for, and how two testers can use the same evidence to support opposing positions. Using demonstrations and entertaining real-life stories, James helps you understand how biases can affect our everyday testing activities. Gain a new perspective on why timeboxes work and why independence really matters.

James Lyndsay, Workroom Productions, Ltd.
The Top Testing Challenges - or Opportunities - We Face Today

Some people thrive on challenges, while others struggle with how to deal with them. Handled well, challenges can make us stronger in our passion, drive, and determination. Lloyd Roden describes the challenges we face today in software testing and how we can respond in a positive, constructive manner. One of the challenges Lloyd often sees is identifying and eliminating metrics that lie. While we (hopefully) do not set out to deceive, we must endeavor to employ metrics that have significance, integrity, and operational value. Another challenge test leaders face is providing estimates that have clarity, accuracy, and meaning. Often we omit a vital ingredient when developing test estimates: the quality required in the product. A third challenge is convincing test managers to actually test regularly to attain credibility and respect with the team they are leading.

Lloyd Roden, Grove Consultants
Moving to an Agile Testing Environment: What Went Right, What Went Wrong

Two years ago, Ray Arell called his software staff together and declared, "Hey! We are going agile!" Ray read an agile project management book on a long flight to India, and, like all good reactionary development managers, he was sold! Now, two years later, their agile/Scrum process has taken shape; however, its adoption was not without strain on development, test, and other QA practices. Join Ray as he takes you on a retrospective of what went right and, more importantly, what went wrong as they evolved to a new development/test process. He introduces the software validation strategies developed and adapted for Scrum, explains what makes up a flexible validation plan, and discusses their iterative test method. Learn how they use customer personas to help test teams understand expectations for quality in each sprint and employ exploratory testing in the Scrum development flow.

Ray Arell, Intel Corporation
STARWEST 2009: The Marine Corps Principles of Leadership

Even with the best tools and processes in the world, if your staff is not focused and productive, your testing efforts will be weak and ineffective, and your finished product will reflect this. Retired Marine Colonel and long-time test consultant Rick Craig describes how using the Marine Corps Principles of Leadership will help you become a better leader and, as a result, a better test manager or tester. Learn the differences between leadership and management and how they can complement each other. Discover new approaches to energize your testers and learn to avoid some that won't. Rick explores motivation, morale, training, span of control, immersion time, and how to promote a consistent testing discipline within your organization. He addresses the role of "influence leaders" and how to use them as powerful agents of change.

Rick Craig, Software Quality Engineering
Large-scale Exploratory Testing at Microsoft: Let's Take a Tour

Manual testing is the best way to find the bugs most likely to bite users badly after a product ships. However, manual testing remains a very ad hoc, aimless process. At a number of companies across the globe, groups of test innovators gathered in think tank settings to create a better way to do manual testing—a way that is more prescriptive, repeatable, and capable of finding the highest quality bugs. The result is a new methodology for exploratory testing based on the concept of tours through the application under test. In short, tours represent a more purposeful way to plan and execute exploratory tests. James Whittaker describes the tourist metaphor for this novel approach and demonstrates tours taken by test teams from various companies including Microsoft and Google. He presents results from numerous projects where the tours were used in critical-path production environments.

James Whittaker, Google
Five Test Automation Fallacies that Will Make You Sick

Five common fallacies about test automation can leave even the most experienced test and development teams severely ill. If allowed to go unchallenged, these beliefs will almost guarantee the death of an automation effort. The five fallacies are: (1) Automated tests find many bugs (they don't). (2) Manual tests make good automated tests (they don't). (3) You know what the expected results are (often you don't). (4) Checking actual against expected is simple (it isn't). (5) More automated regression tests are always better (they aren't). Join Doug Hoffman to explore these fallacies: why we believe them, how to avoid them, and what to do now if you've based your automation efforts on them. Take back a set of antidotes to these fallacies and build a successful test automation framework or repair the sick one you are living with now.
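Fallacy 4 can bite in even the most trivial check. A minimal illustrative sketch (not from the talk): a naive equality comparison of floating-point output fails where a tolerance-aware oracle succeeds.

```python
import math

actual = 0.1 + 0.2   # what the system under test produced
expected = 0.3       # what the test oracle predicts

# Naive check: fails, because binary floating point cannot represent 0.1 exactly,
# so the sum is 0.30000000000000004.
naive_pass = (actual == expected)

# Tolerance-aware check: passes, comparing within a relative tolerance.
robust_pass = math.isclose(actual, expected, rel_tol=1e-9)
```

The same "actual vs. expected" difficulty scales up to timestamps, generated IDs, and nondeterministic output, which is why comparison logic in an automation framework deserves real design effort.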

Douglas Hoffman, Software Quality Methods, LLC.
Integrating Security Testing into the QA Process

Although organizations have vastly increased their efforts to secure operating systems and networks from attackers, most have neglected the security of their applications, making them the weakest link in their overall security chain. By some industry estimates, 75 percent of security attacks now focus on the application layer. All too often, the departmental responsibility for verifying application security is not defined, and security within the SDLC is either addressed too late or not at all. Based on his experience in a Fortune 1000 company, Mike Hryekewicz describes a stepwise strategy for extending the QA department’s role to include security as a quality attribute to verify prior to an application going into production. Learn how to deploy a security testing capability within your QA department and how to extend its coverage and activities as the process gains acceptance.

Mike Hryekewicz, Standard Insurance Company
Successful Teams are TDD Teams

Test-Driven Development (TDD) is the practice of writing a test before writing the code that implements the tested behavior, thus finding defects earlier. Rob Myers explains the two basic types of TDD: the original unit-level approach, used mostly by developers, and the agile-inspired Acceptance Test-Driven Development (ATDD), which involves the entire team. Rob has experienced various difficulties in adopting TDD: developers who don't spend a few extra moments to look for and clean up a new bit of code duplication; inexperienced coaches who confuse developer-style TDD with team-level ATDD; and waffling over the use of TDD, which limits its effectiveness. The resistance, overt or subtle, to these practices that can help developers succeed is deeply rooted in our brains and our cultures.
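The unit-level TDD cycle can be sketched in a few lines; this is a hypothetical example, not one from the talk, and the `leap_year` function is invented for illustration.

```python
# Step 1 (red): write the test first. Run it now and it fails,
# because leap_year does not exist yet.
def test_leap_year():
    assert leap_year(2000) is True   # divisible by 400
    assert leap_year(1900) is False  # divisible by 100 but not 400
    assert leap_year(2008) is True   # divisible by 4
    assert leap_year(2009) is False  # not divisible by 4

# Step 2 (green): write just enough code to make the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()  # now passes
```

Step 3 is refactoring, with the passing test acting as a safety net; skipping that cleanup step is exactly the adoption difficulty Rob describes.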

Rob Myers, Agile Institute
Getting Started with Static Analysis

Static analysis is a technique for finding defects in code without executing it. Static analysis tools are easy to use because no test cases or manual code reviews are needed. Static analysis technology has advanced significantly in the past few years. Although the use of this technique is increasing, many misconceptions still exist about the capabilities of advanced static analysis tools. Paul Anderson describes the latest breed of static analysis tools, explains how they work, and clarifies their strengths and limitations. He demystifies static analysis jargon: terms such as object-sensitive, context-sensitive, and others. Paul describes how best to use static analysis tools in the software life cycle and how these can make traditional testing activities more effective. Paul presents data from real case studies to demonstrate the usage and effectiveness of these tools in practice.
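The class of defect these tools target can be shown with a small sketch (a hypothetical example, not drawn from the talk): a code path on which a possibly-missing value is dereferenced, which a path-sensitive analyzer reports without ever running the code.

```python
def lookup(d, key):
    """Return the upper-cased value stored under key."""
    value = d.get(key)    # dict.get returns None when the key is absent
    return value.upper()  # a static analyzer flags this None path without executing it

# The defect only bites at runtime on the missing-key path:
lookup({"host": "example.com"}, "host")  # fine
# lookup({}, "host") would raise AttributeError: 'NoneType' has no attribute 'upper'
```

A dynamic test suite finds this only if it happens to exercise the missing-key path; static analysis reasons over all paths, which is both its strength and the source of false positives on paths that cannot actually occur.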

Paul Anderson, GrammaTech, Inc.
Better Software Conference 2009: A Software Quality Engineering Maturity Model

You are probably familiar with maturity models for software development. Greg Pope and Ellen Hill describe a corresponding five-stage maturity model for software quality (not just testing) that addresses the challenges faced by organizations attempting to improve the quality of their software. How do you go about transforming your organization to improve software quality in today’s better, cheaper, faster world? Greg and Ellen present the different maturity levels of software quality organizations: (1) the whiner or know-it-all phase, (2) the writing-documents phase, (3) the measure-the-process phase, (4) the measurement-based improvements phase, and (5) the tools and process automation phase. Learn how to recognize the signs of each maturity level, where and how to start the quality improvement process, how to get buy-in from developers and management, and the tools to predict and measure software quality.

Gregory Pope, Lawrence Livermore National Laboratory

