|
Release Criteria: Defining the Rules of the Product Release Game How do you know when you're finished testing? How do you know when the product is ready to ship? Sometimes the decision to stop testing and release a product seems to be made in a smoke-filled room, as if there are rules to the game that we are unaware of. At times, these rules seem completely arbitrary. Instead of arbitrary decisions, it is possible to come to an agreement about when the product is ready to release, and even when it's time to stop testing. In this presentation, learn how to define release criteria, and then use those criteria to decide when to release the product.
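The abstract does not spell out what such criteria look like, so here is a purely illustrative sketch; the criteria, thresholds, and the evaluate_release helper are assumptions for the example, not Rothman's method. It shows how a team might encode agreed release criteria and check them against a snapshot of project data.

```python
# Illustrative only: one way a team might encode agreed release criteria.
# The specific criteria and thresholds below are assumptions for this sketch.

def evaluate_release(metrics):
    """Return (ready, unmet) given a dict of current project metrics."""
    criteria = {
        "no open severity-1 defects": metrics["open_sev1_defects"] == 0,
        "system test pass rate >= 98%": metrics["test_pass_rate"] >= 0.98,
        "all planned tests executed": metrics["tests_run"] >= metrics["tests_planned"],
        "performance goal met (p95 <= 500 ms)": metrics["p95_response_ms"] <= 500,
    }
    unmet = [name for name, ok in criteria.items() if not ok]
    return len(unmet) == 0, unmet

if __name__ == "__main__":
    snapshot = {
        "open_sev1_defects": 0,
        "test_pass_rate": 0.991,
        "tests_run": 412,
        "tests_planned": 420,
        "p95_response_ms": 430,
    }
    ready, unmet = evaluate_release(snapshot)
    print("Ready to ship" if ready else "Not ready: " + "; ".join(unmet))
```

With criteria written down this explicitly, the release decision becomes a review of which criteria are unmet rather than a negotiation behind closed doors.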
|
Johanna Rothman
|
|
Risk: The New Language of eBusiness Testing Balancing testing against risk in eBusiness and e-commerce applications is essential because we never have the time to test everything. But it's tough to "get it right" with limited resources and the pressures to release software quickly. Paul Gerrard explains how to talk to the eBusiness risk-takers in their language to get the testing budget approved and the right amount of testing planned. Find out how to identify failure modes and translate these into consequences to the sponsors of the project. Using risk factors to plan tests means that testers can concentrate on designing effective tests to find faults and not worry about doing "too little" testing.
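One common way to turn failure modes and their consequences into a test-planning input is to score each failure mode by likelihood and business consequence and rank test effort by the resulting exposure. The sketch below is a generic illustration of that idea, not Gerrard's specific technique; the failure modes and 1-5 scores are invented.

```python
# Generic risk-scoring sketch: rank candidate test areas by likelihood x consequence.
# The failure modes and scores below are invented for illustration.

failure_modes = [
    # (failure mode, likelihood 1-5, consequence to sponsors 1-5)
    ("Payment authorization fails under load", 3, 5),
    ("Catalogue search returns stale prices",  4, 3),
    ("Order confirmation email not sent",      2, 2),
]

# Exposure = likelihood x consequence; higher exposure gets test effort first.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2], reverse=True)

for mode, likelihood, consequence in ranked:
    print(f"exposure={likelihood * consequence:2d}  {mode}")
```

The point of such a ranking is that the budget conversation with sponsors happens in terms of consequences they care about, and the test plan stops where the remaining exposure is acceptable.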
|
Paul Gerrard, Systeme Evolutif Limited
|
|
When Test Drives the Development Bus Once development reaches "code complete," the testing team takes over and drives the project to an acceptable level of quality and stability. This is accomplished through weekly build cycles, or "dress rehearsals." The software is graded based on found, fixed, and outstanding errors. Development strives to raise the grade with each build, improving the quality and stability of the software. Learn how to use this "dress rehearsal" process to build team morale, develop ownership by the entire development team, and ensure success on opening night.
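The abstract does not define the grading formula, so the following is only an assumed sketch of how a weekly build grade could be derived from found, fixed, and outstanding defect counts; the weights and letter-grade bands are invented, not the MICROS "dress rehearsal" formula.

```python
# Hypothetical build-grading sketch: weights and grade bands are assumptions.

def build_grade(found, fixed, outstanding_by_severity):
    """Score a weekly build from defect counts; a higher score means a better build."""
    # Penalize outstanding defects more heavily the more severe they are.
    weights = {"critical": 10, "major": 5, "minor": 1}
    penalty = sum(weights[sev] * count for sev, count in outstanding_by_severity.items())
    fix_rate = fixed / found if found else 1.0
    score = max(0.0, 100.0 * fix_rate - penalty)
    for threshold, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= threshold:
            return score, letter
    return score, "F"

# Example: 40 defects found, 36 fixed, 2 major and 4 minor still outstanding.
print(build_grade(found=40, fixed=36, outstanding_by_severity={"critical": 0, "major": 2, "minor": 4}))
```

Whatever the exact formula, publishing the grade each week gives the whole team a shared, visible target to improve against before "opening night."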
|
Cindy Necaise, MICROS Systems, Inc.
|
|
Managing Test Automation Projects Automation has three dimensions (organizational, process, and technical), and you should adopt a three-part solution: match skills to tasks; define requirements, environment, and hand-off; and adopt an automation approach and architecture.
|
Linda Hayes, WorkSoft, Inc.
|
|
Outsourced Testing: Should You Consider It? A reliable test process and knowledgeable testers are more of a necessity than a luxury. Even if a company could afford to buy the latest testing tools and find qualified QA/testing personnel, does it have the money and time to properly train its staff on those tools? Learn why companies should consider outsourcing their test process, leaving testing to companies whose expertise is testing.
|
Kenneth Paczas, Compuware Corporation
|
|
Virtual Test Management: Rapid Testing Over Multiple Time Zones Among the ever-changing challenges of testing, here is the latest one: managing multiple test locations. More and more companies are spreading their testing organizations across the country and around the world. Based on the speakers' real-life experiences, learn the mistakes to avoid and the lessons learned in managing multiple sites. Discover how the Virtual Test Manager can run a dispersed test organization without always having to be physically present.
|
Jim Bampos, VeriTest and Eric Patel, Nokia
|
|
STAREAST 2001: Exploratory Testing in Pairs Exploratory testing involves simultaneous activities: learning about the program and the risks associated with it, planning and conducting tests, troubleshooting, and reporting results. This highly skilled work depends on the ability of the tester to stay focused and alert. Based on a successful pilot study, Cem Kaner discusses why two testers can be more effective working together than apart. Explore the advantages of testing in pairs, including an ongoing dialogue that keeps both testers alert and focused, faster and more effective troubleshooting, and an excellent opportunity for a seasoned tester to train a novice.
|
Cem Kaner, Florida Institute of Technology and James Bach, Satisfice Inc.
|
|
Software Testing at a Silicon Valley High-Tech Software Company This paper describes a methodology for allocating priority levels and resources to software testing and other quality activities to achieve "customer satisfaction." The methodology is based on an understanding of what the market and the target users require at any point in the product technology adoption life cycle. The paper also describes how a leading market-driven company deployed effective software testing processes and methods that reflect real-world customer issues.
|
Giora Ben-Yaacov and Lee Gazlay, Synopsys Inc.
|
|
System Test Measurement: What, When, How? Elaine Soat presents an easy-to-use set of measurements for system testing (the QA test cycle). Examine measurements ranging from defect tracking and application coverage to projected versus actual testing hours. Learn how this process and measurement information is evaluated and used to propose process improvements. Gain the ability to do comparison reporting to measure the success of process improvements within your QA test cycle.
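As a hedged illustration of the kind of comparison reporting described, the sketch below tabulates projected against actual testing hours per QA cycle; the cycle names and figures are made up for the example and are not Soat's actual measurement set.

```python
# Illustrative comparison report: projected vs. actual testing hours per QA cycle.
# The cycle data below is invented to show the calculation, not real figures.

cycles = [
    ("Cycle 1", 120, 158),   # (name, projected hours, actual hours)
    ("Cycle 2", 130, 141),
    ("Cycle 3", 130, 127),
]

print(f"{'Cycle':<10}{'Projected':>10}{'Actual':>10}{'Variance':>10}")
for name, projected, actual in cycles:
    variance_pct = 100.0 * (actual - projected) / projected
    print(f"{name:<10}{projected:>10}{actual:>10}{variance_pct:>9.1f}%")
```

A shrinking variance across cycles is one simple signal that estimation and process improvements are taking hold.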
|
Elaine Soat, CarteGraph Systems
|
|
Managing Concurrent Software Releases in Development and Test There is an ever-growing need to deliver complex software products to customers on short development schedules. Additionally, customers need to be able to count on release dates for planning purposes. Instead of investing in an entirely new tool set to solve the configuration management issues associated with concurrent development and support, existing tools can be used. This paper focuses on how to adapt, and in some cases enhance, an existing set of well-known tools to enable Lucent to excel in the marketplace. To this end, the project chose to implement the Fixed Interval Feature Delivery (FIFD) model of software development.
|
David Shinberg, Lucent Technologies
|