Conference Presentations

Patterns for Reusable Test Cases

You can think of Q-Patterns as a structured set of questions (tests) about the different aspects of a software application under test. They are questions about the system that are categorized, grouped, sorted, and saved for reuse. These Q-Pattern questions can be written ahead of time and stored in a repository of test case templates, developed for requirements and design reviews, or built in real time as a way to both guide and document exploratory testing sessions. See examples of Q-Patterns that Vipul Kocher has developed for error messages, combo boxes, login screens, and list handling. Learn how to associate related Q-Patterns and aggregate them into hierarchical and Web models. Take back the beginnings of Q-Patterns for your test team and organization.

  • Sharable and reusable test case designs
  • Templates to organize requirements and design reviews
Vipul Kocher, PureTesting
STAREAST 2006: Testing Dialogues - Technical Issues

Is there an important technical test issue bothering you? Or, as a test engineer, are you looking for some career advice? If so, join experienced facilitators Esther Derby and Johanna Rothman for "Testing Dialogues - Technical Issues." Practice the power of group problem solving and develop novel approaches to solving your big problem. This double-track session takes on technical issues such as automation challenges, model-based testing, testing immature technologies, open source test tools, testing Web services, and career development. You name it! Share your expertise and experiences, learn from the challenges and successes of others, and generate new topics in real time. Discussions are structured in a framework so that participants receive a summary of their work product after the conference.

Facilitated by Esther Derby and Johanna Rothman
Hallmarks of a Great Tester

As a manager, you want to select and develop people with the talents to become great testers, the ability to learn the skills of great testers, and the willingness to work hard in order to become great testers. As an individual, you aspire to become a great tester. So, what does it take? Michael Hunter reveals his twenty hallmarks of a great tester, from personality traits (curiosity, courage, and honesty) to skills (knowing where to find more bugs, writing precise bug reports, and setting appropriate test scope). Measure yourself and your team against other great testers, and find out how to achieve greatness in each area. Learn how to identify the great testers you don't know that you already know!

  • The personality traits a person needs to become a great tester
  • The talents a person needs to become a great tester
  • The skills you need to develop to become a great tester
Michael Hunter, Microsoft Corporation
Trends, Innovations and Blind Alleys in Performance Testing

Join experts Scott Barber and Ross Collard for a lively discussion/debate on leading edge performance testing tools and methods. Do you agree with Scott that performance testing is poised for a great leap forward or with Ross who believes that these "silver bullets" will not make much difference in resolving the difficulties performance testing poses? Scott and Ross will square off on topics including commercial vs. open source tools; compatibility and integration of test and live environments; design for performance testability; early performance testing during design; test case reuse; test load design; statistical methods; knowledge and skills of performance testers; predicting operational behavior and scalability limits; and much more. Deepen your understanding of the new technology in performance testing, the promises, and the limitations.

  • The latest tools and methods for performance testing
Scott Barber, PerfTestPlus, and Ross Collard, Collard & Company
Inside The Masters' Mind: Describing the Tester's Art

Exploratory testing is both a craft and a science. It requires intuition and critical thinking. Traditional scripted test cases usually require much less practice and thinking, which is perhaps why, in comparison, exploratory testing is often seen as "sloppy," "random," and "unstructured." How, then, do so many software projects routinely rely on it as an approach for finding some of their most severe bugs? If one reason is that it lets testers use their intuition and skill, then we should study not only how that intuition and skill are exercised, but also how they can be cultivated and taught to others, like a martial art. Indeed, that's what has been happening for many years, but only recently have there been major discoveries about how an exploratory tester works and a new effort by exploratory testing practitioners and enthusiasts to create a vocabulary.

Jon Bach, Quardev Laboratories
Your Development and Testing Processes Are Defective

Verification at the end of a software development cycle is a very good thing. However, if verification routinely finds important defects, then something is wrong with your process. A process that allows defects to build up, only to be found and corrected later, is a process filled with waste. Processes that create long lists of defects are . . . defective processes. A quality process builds quality into the software at every step of development, so that defect tracking systems become obsolete and verification becomes a formality. Impossible? Not at all. Lean companies have learned how wasteful defects and queues can be, and they attack them with a zero-tolerance policy that creates outstanding levels of quality, speed, and low cost, all at the same time. Join Mary Poppendieck to learn how your organization can become leaner.

Mary Poppendieck, Poppendieck LLC
Test Patterns: Nine Techniques to Help Test for a Greater Variety of Bugs

Building on his earlier columns covering James Bach's Heuristic Test Strategy Model, Michael Bolton delivers nine techniques, each of which affords a different way of modeling the product, to help you test your systems for a greater variety of bugs.

Michael Bolton
Creating Compelling Checklists

This article offers practical and proven ideas on how testers can make their own checklists. It evolved partly in response to the many queries about testing checklists on this website's Discussion Boards.

Yogita Sahoo
Support for Testing, Testing for Support

Where supportability and testability fit in the Quality Criteria dimension of the Heuristic Test Strategy Model.

Michael Bolton
The Value-added Manager: Five Pragmatic Practices

What do great managers do that others don't? Great managers focus their efforts, increase their productivity, and develop their people. In this session, Esther Derby describes five pragmatic practices that managers can apply to improve both work results and worker satisfaction: give both positive and corrective feedback weekly, consciously decide what not to do, limit multitasking, develop people, and meet with staff individually and as a group every week. Esther says these ideas are not rocket science. If you apply these five practices consistently, you will improve the value of your team to the organization and keep your sanity, too.

Esther Derby, Esther Derby Associates Inc