|
Performance Testing Throughout the Life Cycle
Even though it is easy to say that you should continuously test your application for performance during development, how do you really do it? What are the processes for testing performance early and often? What kinds of problems will you find at the different stages? Chris Patterson shares the tools and techniques he recently used during the development of a highly concurrent and highly scalable server that is shipping soon. Chris explores how developers and testers used common tools and frameworks to accelerate the start of performance testing during product development. Explore the challenges they faced while testing a version 1 product, including defining appropriate performance and scale goals, simulating concurrent user access patterns, and generating a real world data set. Learn from his team's mistakes and their successes as Chris shares both the good and the bad of the process and results.
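The abstract does not spell out Chris's actual tooling, but the core idea of simulating concurrent user access patterns can be sketched in a few lines. The following minimal Python load driver is an illustration only; the endpoint, user count, and request mix are hypothetical, not details from the talk:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

BASE_URL = "http://localhost:8080/api/items"  # hypothetical endpoint
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 20

def simulate_user(user_id: int) -> list:
    """Issue a burst of requests and record each response time."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(BASE_URL, timeout=10) as response:
            response.read()
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    # One worker thread per simulated user, all issuing requests at once.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(simulate_user, range(CONCURRENT_USERS))
    all_latencies = sorted(t for user in results for t in user)
    p95 = all_latencies[int(len(all_latencies) * 0.95)]
    print(f"requests: {len(all_latencies)}, p95 latency: {p95:.3f}s")
```

Driving many simulated users from a thread pool and reporting a percentile rather than an average is a common starting point; a real test harness would add ramp-up, think time, and a realistic data set, as the abstract notes.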
|
Chris Patterson, Microsoft
|
|
A Test Odyssey: Building a High Performance, Distributed Team
It seemed simple enough: hire the best available technical staff, who would work from home, to build some great software. Along the way, the team encountered the usual problems: time zone differences, communication headaches, and a surprising regression test monster. Matt Heusser describes how Socialtext built their high-performance development and test team, got the right people on the bus, built a culture of "assume good intent and then just do it," created the infrastructure to enable remote work, and employed a lightweight yet accountable process. Of course, the story has the impossible deadlines, conflicting expectations, unclear roles, and everything else you'd find in many development projects. Matt shares how the team cut through the noise, including building a test framework integrated into the product, to achieve their product and quality aims.
|
Matthew Heusser, Socialtext
|
|
Executable Specifications with FitNesse and Selenium
|
Dawn Cannan, DocSite LLC
|
|
Meet "Ellen": Improving Software Quality through Personas Users are the ultimate judge of the software we deliver because it is critical to their success and the success of their business. However, as a tester, do you really understand their tasks, skills, motivation, and work style? Are you delivering software that matches their needs and capabilities-or yours? Personas are a way to define user roles-imaginary characters-that represent common sets of characteristics of different users. David shares how his team at Microsoft defined and used one persona named “Ellen” to help them design, develop, and test the first version of a new product. David shares before Ellen and after Ellen examples of the product, showing how the product changed when Ellen joined the team. See examples of the robust test cases and acceptance scenarios they defined from unique insights that Ellen provided.
|
David Elizondo, Microsoft Corporation
|
|
Service-driven Test Management
Over the years, the test manager's role has evolved from "struggling to get involved early" to today's more common "indispensable partner in project success." In the past, when "us vs. them" thinking was common, it was easy to complain that the testing effort could not be carried out as planned due to insufficient specs, not enough people, late and incomplete delivery, inadequate environments, missing tools, and tremendous time pressure. Martin Pol explains how today's test managers must focus on providing a high level of performance. By using a service-driven test management approach, test managers support and enhance product development, enabling the project team to improve overall quality and find solutions for any testing problem that could negatively impact the project's success.
|
Martin Pol, POLTEQ IT Services BV
|
|
Chartering the Course: Guiding Exploratory Testing
Charters help you guide and focus exploratory testing. Well-formed charters help testers find defects that matter and provide vital information to stakeholders about the quality and state of the software under test. Rob Sabourin shares his experiences defining different exploratory testing charters for a diverse group of test projects. For example, reconnaissance charters focus on discovering application features, functions, and capabilities, while failure mode charters explore what happens to applications when something goes wrong. In addition, you can base charters on what systems do for users, what users do with systems, or simply the requirements, design, or code. Rob reviews the key elements of a well-formed testing charter: its mission, purpose, focus, understanding, and scope. Learn how to evolve a test idea into an exploratory charter using examples from systems testing, Scrum story testing, and developer unit testing.
|
Robert Sabourin, AmiBug.com
|
|
Test Automation Success: Choosing the Right People and Process
Many testing organizations mistakenly declare success when they first introduce test automation into an application or system. However, the true measure of success is sustaining and growing the automation suite over time. You need to develop and implement a flexible process and engage knowledgeable testers and automation engineers. Kiran Pyneni describes Aetna’s two-team automation structure, the functions that each group performs, and how their collaborative efforts provide for the most efficient test automation. Kiran explains how to seamlessly integrate your test automation lifecycle with your software development lifecycle. He shares specific details on how Aetna’s automation lifecycle benefits their entire IT department and organization, and the measurements they use to track and report progress.
|
Kiran Pyneni, Aetna, Inc.
|
|
Automated Test Case Generation Using Classification Trees
The basic problem in software testing is choosing a subset from the near-infinite number of possible test cases. Testers must select test cases to design, create, and then execute. Test resources are often limited, but you still want to select the best possible set of tests. Peter M. Kruse and Magdalena Luniak share their experiences designing test cases with the Classification-Tree Editor (CTE XL), the most popular tool for systematic black-box test case design based on classification trees. Peter and Magdalena show how to integrate weighting factors into classification trees and automatically obtain prioritized test suites. In addition to “classical” approaches such as minimal combination and pair-wise, they share new generation rules and demonstrate the upcoming version of CTE XL, which supports prioritization by occurrence probability, error probability, or risk.
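CTE XL's actual algorithms are not described in the abstract; as a rough sketch of the underlying idea (weighted classes plus a greedy, pairwise-covering prioritization), consider this small Python example. The classification tree and its weights are hypothetical:

```python
from itertools import combinations, product

# Hypothetical classification tree for a login form: each classification
# maps to its classes, each with an assumed weight (e.g., usage probability).
CLASSIFICATIONS = {
    "browser":  {"Firefox": 0.6, "IE8": 0.3, "Safari": 0.1},
    "account":  {"new": 0.2, "existing": 0.7, "locked": 0.1},
    "password": {"valid": 0.8, "invalid": 0.2},
}

def all_test_cases():
    """Every full combination of one class per classification."""
    names = list(CLASSIFICATIONS)
    for combo in product(*(CLASSIFICATIONS[n] for n in names)):
        yield dict(zip(names, combo))

def weight(case):
    """Combined weight: product of class weights (independence assumed)."""
    w = 1.0
    for name, cls in case.items():
        w *= CLASSIFICATIONS[name][cls]
    return w

def pairs(case):
    """All cross-classification (classification, class) pairs in one case."""
    return set(combinations(sorted(case.items()), 2))

def prioritized_pairwise():
    """Greedily pick the case covering the most uncovered pairs, breaking
    ties by weight; stop once every pair is covered."""
    remaining = list(all_test_cases())
    uncovered = set().union(*(pairs(c) for c in remaining))
    suite = []
    while uncovered:
        best = max(remaining,
                   key=lambda c: (len(pairs(c) & uncovered), weight(c)))
        suite.append(best)
        uncovered -= pairs(best)
        remaining.remove(best)
    return suite

if __name__ == "__main__":
    for i, case in enumerate(prioritized_pairwise(), 1):
        print(f"{i}. {case}  (weight={weight(case):.3f})")
```

The result is a small suite covering all class pairs, ordered so that the highest-weight (most likely, most error-prone, or riskiest) combinations run first, which is the general effect the talk describes, even if CTE XL's own generation rules differ.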
|
Peter Kruse, Berner & Mattner Systemtechnik GmbH
|
|
A Deeper Dive Into Dashboards
This session is a deeper examination of how to apply dashboards in software testing. Randy Rice spent several months on a project primarily building a software testing dashboard and shares what he learned, including:
- Resources for free examples
- Tools to help build dashboards (a minimal rollup sketch follows this list)
- The human issues
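As a hedged illustration of the kind of rollup such a dashboard might display (this is not Randy's actual dashboard; the areas and results below are invented):

```python
from collections import defaultdict

# Hypothetical raw results a dashboard might aggregate:
# (functional area, test name, outcome)
RESULTS = [
    ("login",    "valid_password", "pass"),
    ("login",    "locked_account", "fail"),
    ("checkout", "single_item",    "pass"),
    ("checkout", "empty_cart",     "pass"),
    ("checkout", "expired_card",   "fail"),
]

def rollup(results):
    """Pass rate per functional area, the kind of figure a
    one-page test dashboard typically displays."""
    totals = defaultdict(int)
    passes = defaultdict(int)
    for area, _name, outcome in results:
        totals[area] += 1
        passes[area] += outcome == "pass"
    return {area: passes[area] / totals[area] for area in totals}

if __name__ == "__main__":
    for area, rate in sorted(rollup(RESULTS).items()):
        bar = "#" * round(rate * 10)  # crude text-mode bar chart
        print(f"{area:<10} {rate:6.0%} {bar}")
```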
|
Randy Rice, Rice Consulting Services
|
|
The Many Hats of a Tester
As testers, we must wear many hats to do our jobs effectively. Quite often, it is the pith helmet of an explorer, hacking through the vines and darkness of the unknown, or the baseball cap of the crime scene investigator, determining how the failure occurred. To make things even more interesting, the hats we need often differ from project to project and organization to organization. Adam Goucher begins with a general discussion of the hats testers typically wear and when each is appropriate or inappropriate. He then leads an "Art Show" exercise, a brainstorming process that results in lots of "art" on the walls, illustrating the hats we all may wear in our daily testing activities. Through the Art Show process, you'll take away new insights into the hats you and other testers need, tips for wearing the beautiful ones with success, and ways to avoid putting on the ugly ones.
|
Adam Goucher, Zerofootprint
|