Conference Presentations

Picking the Right Test Automation Strategy for Your Project

The choice of a test automation strategy is a key factor in whether your test automation initiative repays your investment or becomes a sinkhole devouring time and money. Gerard Meszaros helps you understand which kinds of tests you should be running, which should be automated, who should prepare the tests, what tools they should use, and when the tests should be prepared and run. A well-executed test automation strategy is key to preventing defects rather than finding them after they have occurred. Gerard gives you the information you need to make an intelligent decision about how to approach test automation to minimize cost and maximize quality.

Gerard Meszaros, ClearStream Consulting
Agile Usability Testing

Agile development has become mainstream in the past few years, and for thousands of companies around the world, it has succeeded in reducing risk and delivering more value for less money. Yet, with the emphasis on pleasing the customer and the philosophy of doing the simplest thing that could possibly work, there's one area where agile development has fallen short of more traditional methodologies: creating highly usable software. Practices such as test-driven development and continuous integration show little concern for the end-user experience. John De Goes explains the importance of creating humane software and describes how he has integrated user-interface design and usability testing into the tight feedback loop that is the hallmark of agile development processes.

John De Goes, N-BRAIN, Inc.
Seven Years Later: What the Agile Manifesto Left Out

Although the Agile Manifesto has worked well to help many organizations change the way they build software, the agile movement is now suffering from some backsliding, lots of overselling, and a resulting backlash. Brian Marick believes that is partly because the Agile Manifesto is almost entirely focused outwardly—it talks to the business about how the development team will work with it. What it does not talk about is how the team must work within itself and with the code. Even though those omissions were appropriate then, now more is needed. Teams starting agile need to know that more discipline is required of them, and that discipline is fruitless without a strong emphasis on skills. Teams need to recognize that success is not just fulfilling requirements. It is also increasing productivity and decreasing the consequences of mistakes.

Brian Marick, Exampler Consulting
Going Mobile: The New Challenges for Testers

Mobile device manufacturers face many challenges bringing quality products to market. Most testing methodologies were created for data processing, client/server, and Web products. As such, they often fail to address key areas of interest for mobile applications: usability, security, and stability. Wayne Hom discusses approaches you can use to transform requirements into usability guides and use cases into test cases to ensure maximum test coverage. He discusses automation frameworks that support multiple platforms to reduce test cycle times and increase test coverage, while measuring and reporting at the different phases of the software lifecycle. Wayne presents case studies to illustrate how to reduce test cycles by up to 75 percent. He demonstrates solutions that have helped providers of third-party applications and services manage testing cycles for multiple mobile device releases.

Wayne Hom, Augmentum Inc.
A Modeling Framework for Scenario-Based Testing

Scenario-based testing is a powerful method for finding problems that really matter to users and other stakeholders. By including scenario tests representing actual sequences of transactions and events, you can uncover the hidden bugs often missed by other functional testing. Designing scenarios requires you to use your imagination to create narratives that play out through systems from various points of view. Basing scenarios on a structured analysis of the data provides a solid foundation for a scenario model. Good scenario design demands that you combine details of business process, data flows (including their frequency and variations), and clear data entry and verification points. Fiona Charles describes a framework for modeling scenario-based tests and designing structured scenarios according to these principles.

Fiona Charles, Quality Intelligence Inc.
STARWEST 2008: Quality Metrics for Testers: Evaluating Our Products, Evaluating Ourselves

As testers, we focus our efforts on measuring the quality of our organization's products. We count defects and list them by severity; we compute defect density; we examine the changes in those metrics over time for trends; and we chart customer satisfaction. While these are important, Lee Copeland suggests that to reach a higher level of testing maturity, we must apply similar measurements to ourselves. He suggests you count the number of defects in your own test cases and the length of time needed to find and fix them; compute test coverage (the measure of how much of the software you have actually exercised under test conditions); and determine Defect Removal Effectiveness (the ratio of defects you actually found to the total number you should have found). These and other metrics will help you evaluate and then improve the effectiveness and efficiency of your testing process.
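The metrics the abstract names are simple ratios. A minimal sketch of how they might be computed (the function names and sample numbers are illustrative assumptions, not from the talk):

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_removal_effectiveness(found_in_test: int, total_defects: int) -> float:
    """Ratio of defects found before release to all defects that should
    have been found, including those that escaped to customers."""
    return found_in_test / total_defects

# Example: 45 defects found in test, 5 more reported by customers later.
dre = defect_removal_effectiveness(45, 45 + 5)
print(f"DRE: {dre:.0%}")                                 # DRE: 90%
print(f"Density: {defect_density(45, 12.5):.1f}/KLOC")   # Density: 3.6/KLOC
```

Tracking these numbers release over release, rather than computing them once, is what surfaces the trends the session emphasizes.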

Lee Copeland, Software Quality Engineering
Lessons Learned in Acceptance Test-Driven Development

Acceptance Test-Driven Development (ATDD), an application of the test-first practice of XP and agile development, can add enormous value to agile teams. Moving from awareness of ATDD to being proficient at practicing it comes about only after learning some important lessons. First, no one group can "own" the process. Second, ATDD is first about helping the customer and the team understand the problem; only then is it about testing. Third, writing automated acceptance tests in ATDD is not the same as writing automated tests with typical automation tools. Antony Marcano shares his experiences with ATDD (the good, the bad, and the ugly) and the many other lessons he's learned in the process. Discover the benefits and pitfalls of ATDD, and take advantage of Antony's experiences so that you avoid the common mistakes teams make on their journey to becoming proficient practitioners of ATDD.

Antony Marcano, Testing Reflections
Truths and Myths of Static Analysis

Identifying defects with static analysis tools has advanced significantly in the last few years. Yet, there are still many misconceptions about the capabilities and limits of these innovative tools, and sales propaganda such as "100 percent path coverage" has not helped at all. Paul Anderson debunks common myths and clarifies the strengths and limitations of static-analysis technology. You'll learn about the types of defects these tools can catch and the types they miss. Paul demystifies static analysis jargon, explaining terms such as "object-sensitive" and "context-sensitive." Find out how the FDA uses static analysis today to evaluate medical device software. Paul jump-starts your understanding of static analysis so you can decide where to apply this technology and have more knowledge and confidence in your interactions with tool vendors.

Paul Anderson, GrammaTech, Inc.
Toward an Exploratory Testing Culture

Traditional testing teams often agonize over exploratory testing. How can they plan and design tests without detailed up-front documentation? Stubborn testers may want to quit because they are being asked to move out of their comfort zone. Can a team's testing culture be changed? Rob Sabourin describes how several teams have undergone dramatic shifts to embrace exploratory testing. Learn how to blend cognitive thinking skills, subject matter expertise, and hard-earned experience to help refocus your team and improve your outcomes. Learn to separate bureaucracy from thinking and paperwork from value. Explore motivations for change and resistance to it in different project contexts. Leverage Parkinson's Law (work expands to fill the time available) and Dijkstra's principle (testing can show the presence of bugs, but not their absence) to inspire and motivate you and your team to get comfortable in the world of exploratory testing.

Robert Sabourin, AmiBug.com Inc.
Adding Measurement to Reviews

Conceptually, most testers and developers agree that reviews and inspections of software designs and code can improve software and reduce development costs. However, most are unaware that measuring reviews and inspections greatly magnifies these improvements and savings. Riley Rice presents data from more than 4,000 real-world software projects in different domains: defense, commercial, and government. He compares the results of three scenarios: doing few or no reviews, doing unmeasured reviews, and doing measured reviews. For each scenario, Riley compares the resulting metrics: defects delivered to customers, total project pre-release costs, total project post-release costs, total project lifecycle costs, project duration, mean time between failures, and productivity. The results are surprising: measured reviews are substantially more effective, with benefits going far beyond what most people would expect.

Riley Rice, Booz Allen Hamilton