|
Smaller-Scale Web Sites Need Performance Testing Too! Even a smaller-scale Web site requires careful planning and execution of performance tests. Making the critical decisions in a timely manner and identifying the performance goals are still prerequisites to a successful test. However, smaller sites don't necessarily have the resources required to do large-scale testing, so compromises have to be made. This requires good test planning. The instructor explains the testing of a small site looking to grow, as well as the successes and pitfalls of achieving reasonable goals.
- Define the test objectives; what's reasonable?
- Plan the test, then make effective choices and tradeoffs among tools
- Apply and understand the results
|
Dale Perry, Software Quality Engineering
|
|
Is Your Haystack Missing a Needle? Using manual testing to determine if your application is missing any files is worse than looking for a needle in a haystack: it's like trying to determine if your haystack is missing any needles! One tester tells the story of how some clever coding saved his project a good deal of time and quite a few headaches.
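The abstract doesn't describe the speaker's actual code, but the idea of automating a "missing files" check can be sketched as follows: keep a manifest of the files the build is expected to ship, then diff it against what is actually on disk. The function name and manifest format here are illustrative assumptions.

```python
import os

def missing_files(manifest, root):
    """Return manifest entries with no corresponding file under root.

    manifest: iterable of relative paths the build is expected to ship.
    root: directory containing the deployed application.
    """
    present = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            # Normalize separators so the manifest can use "/" everywhere
            present.add(rel.replace(os.sep, "/"))
    return sorted(set(manifest) - present)
```

Run against a build output directory, this turns an open-ended manual hunt into a deterministic set difference: anything the manifest lists but the walk didn't find is reported by name.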
|
|
|
Walk into My Parlor Just as a spider spins a web to capture her prey, testers weave an intricate net of ambiguity and conflict to catch program bugs. Find out how to use complex tests to expose program weaknesses and errors.
|
|
|
Don't Just Break Software. Make Software What if, instead of using tests to try to break software, we used tests to make software? That's the vision of storytest-driven development. We spoke to people who spend each day turning wishful thinking into working products. Find out how they do it.
|
|
|
Planned Chaos: Malicious Test Day In a test and verification organization, it can be easy to fall into predictable ruts and miss finding important defects. Use the creativity of your test team, developers, users, and managers to find those hidden bugs before the software goes into production. Ted Rivera details how his organization conceived of, administers, evaluates, and benefits from periodic malicious test days. Learn ways to make your days of planned chaos productive, valuable, and, yes, even fun. Give both testers and non-testers an opportunity to find inventive ways to break your products and you'll get some surprising results.
- The danger of too much predictability and the results you can expect from a malicious test day
- Create and administer your own malicious test day
- Maximize the benefits of malicious test days
|
Ted Rivera, Tivoli/IBM Quality Assurance
|
|
Evaluating Test Plans Using Rubrics The phrase "test plan" means different things to different people. There is even more disagreement about what makes one test plan better than another one. Bernie Berger makes the case for using multi-dimensional measurements to evaluate the goodness of test plans. Walk away with a practical technique to systematically evaluate any complex structure such as a test plan. Learn how to qualitatively measure multiple dimensions of test planning and gain a context-neutral framework for ranking each dimension. You'll also find out why measurement of staff technical performance is often worse than no measurement at all and how to use this technique as an alternative approach to traditional practices. [This presentation is based on work at Software Test Managers Roundtable (STMR) #8 held in conjunction with the STAR conference.]
- Qualitatively evaluate complex structures, like test plans
- Ten dimensions of test planning
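A rubric-based evaluation of this kind can be sketched in a few lines. Note the dimension names and four-level scale below are hypothetical stand-ins, not the actual STMR #8 dimensions; the point is that each dimension is ranked separately instead of being collapsed into a single score.

```python
# Illustrative qualitative scale; a real rubric would define
# concrete criteria for each level of each dimension.
RUBRIC_LEVELS = {0: "absent", 1: "weak", 2: "adequate", 3: "strong"}

def evaluate(scores):
    """scores: mapping of dimension name -> level (0-3).

    Returns a per-dimension qualitative report plus the weakest
    dimensions, preserving the multi-dimensional picture rather
    than averaging everything into one number.
    """
    report = {dim: RUBRIC_LEVELS[level] for dim, level in scores.items()}
    low = min(scores.values())
    weakest = sorted(d for d, v in scores.items() if v == low)
    return report, weakest

# Hypothetical scoring of one test plan
report, weakest = evaluate({
    "risk coverage": 3,
    "logistics": 1,
    "communication": 2,
})
```

Keeping the result as a profile (strong on risk coverage, weak on logistics) is what makes the technique useful for improving a plan rather than merely grading it.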
|
Bernie Berger, Test Assured Inc.
|
|
Cross-Organizational Change Management
|
Federico Pacquing, Jr., Getty Images, Inc.
|
|
Test Metrics: A Practical Approach to Tracking and Interpretation You can improve the overall quality of a software project through the use of test metrics. Test metrics can be used to track and measure the efficiency, effectiveness, and the success or shortcomings of various activities of a software development project. While it is important to recognize the value of gathering test metrics data, it is the interpretation of that data which makes the metrics meaningful or not. Shaun Bradshaw describes the metrics he tracks during a test effort and explains how to interpret the metrics so they are meaningful to the project and its team members.
- What types of test metrics should be tracked
- How to track and interpret test metrics
- The two categories of test metrics: base and calculated
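The base/calculated split can be illustrated with a short sketch. Base metrics are counted directly during the test effort; calculated metrics are derived from them. The specific field names below are illustrative assumptions, since the abstract doesn't enumerate Bradshaw's actual metric set.

```python
def calculated_metrics(base):
    """Derive calculated metrics from base (directly counted) metrics.

    base: dict of raw counts gathered during the test effort, e.g.
    tests planned/executed/passed and defects found.
    """
    planned = base["tests_planned"]
    executed = base["tests_executed"]
    passed = base["tests_passed"]
    return {
        # How much of the planned effort has actually run
        "execution_rate": 100.0 * executed / planned if planned else 0.0,
        # Of what ran, how much passed
        "pass_rate": 100.0 * passed / executed if executed else 0.0,
        # Defect yield per executed test
        "defects_per_test": base["defects_found"] / executed if executed else 0.0,
    }
```

The interpretation step the abstract stresses happens on top of numbers like these: a 75% execution rate means something different two days before release than two weeks before it.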
|
Shaun Bradshaw, Questcon Technologies, Inc.
|
|
Measuring Testing Effectiveness using Defect Detection Percentage How good is your testing? Can you demonstrate the detrimental effect on testing if not enough time is allowed? Dorothy Graham discusses a simple measure that has proved very useful in a number of organizations: Defect Detection Percentage, or DDP. Learn what DDP is, how to calculate it, and how to use it in your organization to communicate the effectiveness of your testing. From case studies of organizations that are using DDP, you'll find out the problems you may encounter and ways to overcome them.
- Learn what DDP is and how to calculate it using defect data you may already have
- How best to start measuring and using DDP
- Calculate DDP for different stages of testing (integration, system, user acceptance)
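The calculation itself is simple, which is part of DDP's appeal: a test stage's DDP is the defects it found as a percentage of all the defects present at that point, i.e. those it found plus those that escaped to later stages or production. A minimal sketch:

```python
def ddp(found_by_stage, found_later):
    """Defect Detection Percentage for one test stage.

    found_by_stage: defects this stage detected.
    found_later: defects that escaped this stage and were
    found in later stages or in production.
    """
    total = found_by_stage + found_later
    if total == 0:
        raise ValueError("no defects recorded; DDP is undefined")
    return 100.0 * found_by_stage / total
```

So a system test that found 80 defects, with 20 more surfacing in acceptance testing and live use, has a DDP of 80%. Note that "found later" counts keep growing after release, so DDP for a given release is typically reported as of a stated cutoff date.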
|
Dorothy Graham, Grove Consultants UK
|
|
Testing Dialogues: Technical Issues Test professionals face a myriad of issues with immature development technologies, changing systems environments, increasingly complex applications, and 24/7 reliability demands. We must choose the right methodology and best testing techniques to meet these challenges, all with a limited set of tools and not enough time. In this double-track session, you'll be able to ask for help from your peers, share your expertise with the group, and develop some new approaches to your biggest challenges. Johanna Rothman and Esther Derby facilitate this session, focusing on topics such as model-based testing, security testing, testing without requirements, testing in the XP/Agile world, and configuration management. Discussions are structured in a framework so participants will receive a summary of their work product after the conference.
|
Johanna Rothman, Rothman Consulting Group, Inc.
|