User Errors Are Not Software Bugs

Traditional user-feedback practices are inefficient because they omit vital information about user errors. Typically, users report unexpected system behavior in terms of what they intended rather than what they actually did, so software developers waste time chasing phantom bugs. Learn how to distinguish real bugs from user errors by integrating an operation logger into your software product.

Avi Harel, ErgoLight Ltd.
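The abstract doesn't show ErgoLight's logger design; as a rough illustration of the idea, here is a minimal operation-logger sketch in Python. The decorator-based approach and the `delete_record` operation are assumptions, not the product's actual mechanism. The point is to record what the user actually did, so a reported "bug" can be checked against the real action sequence.

```python
import functools
import json
import logging

logging.basicConfig(filename="operations.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")
log = logging.getLogger("operations")

def logged_operation(func):
    """Record each user-triggered operation: what was actually invoked,
    with which arguments, and whether it raised an error."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        record = {"operation": func.__name__,
                  "args": repr(args), "kwargs": repr(kwargs)}
        try:
            result = func(*args, **kwargs)
            record["outcome"] = "ok"
            return result
        except Exception as exc:
            record["outcome"] = f"error: {exc}"
            raise
        finally:
            log.info(json.dumps(record))  # the trail developers can replay
    return wrapper

@logged_operation
def delete_record(record_id):  # hypothetical application operation
    ...
```
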
Rooting Out System Bottlenecks in Web Applications

One of the toughest challenges in testing and quality assurance today is eliminating the performance "bottlenecks" in your Web system. This session highlights common problems that affect most Web systems: inefficient SQL, slow networks, improper firewall setup, bad connection pooling schemes, and failure to design for scalability can all degrade your system's performance. You'll encounter real-life examples of system performance problems and get tips on how to isolate and identify them.

Chris Nolan, Empirix, Inc.
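The session names the usual suspects rather than prescribing code, but one of them (inefficient SQL) lends itself to a cheap first diagnostic. A minimal sketch, using sqlite3 as a stand-in for your database driver; the threshold value is an assumption to tune per system:

```python
import sqlite3
import time

SLOW_QUERY_THRESHOLD = 0.5  # seconds; an assumed cutoff, tune per system

def timed_query(conn, sql, params=()):
    """Run a query and flag it when it exceeds the threshold;
    a cheap first pass at spotting inefficient SQL."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_QUERY_THRESHOLD:
        print(f"SLOW ({elapsed:.2f}s): {sql}")
    return rows

conn = sqlite3.connect(":memory:")  # stand-in connection
timed_query(conn, "SELECT 1")
```
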
Automating Reusable Test Designs

Vendors and gurus agree that a structured testing methodology is key to gaining maximum advantage from automated testing, but what this means in practice isn't always clear. One of the biggest potential paybacks comes from the ability to automate tests based on reusable test designs, a key benefit of proactive structured testing. In this interactive session, Robin Goldsmith describes how to develop reusable test designs that can be automated, so you can start testing sooner and run more tests in limited time.

Robin Goldsmith, Go Pro Management, Inc.
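Goldsmith's own notation for reusable test designs isn't given in the abstract. One common way to realize the idea is a data-driven test: the design is a table of cases, and the automation is a single generic driver, so new tests are new data rows. A minimal sketch with Python's unittest; `login` and the cases are hypothetical:

```python
import unittest

# A reusable test design: the logic is written once and driven by
# a table of cases, so extending coverage means adding data rows.
LOGIN_CASES = [
    ("valid user, valid password", "alice", "s3cret", True),
    ("valid user, wrong password", "alice", "oops", False),
    ("unknown user", "mallory", "s3cret", False),
]

def login(user, password):  # hypothetical system under test
    return (user, password) == ("alice", "s3cret")

class LoginDesign(unittest.TestCase):
    def test_cases(self):
        for name, user, password, expected in LOGIN_CASES:
            with self.subTest(name):
                self.assertEqual(login(user, password), expected)

if __name__ == "__main__":
    unittest.main()
```
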
Tuning Application Performance in Production

Even applications that have gone through rigorous testing in QA tend to have serious performance problems in production. Nearly every CIO or production manager has horror stories of applications that went live and failed. Yet with so much on the line, why are we in a constant firefighting mode? When confronted with new problems, we have to start with the basics and ask, "Is the problem in the application or in the infrastructure? How can I narrow it down fast?" Production tuning takes your good QA practices to the next level and helps you get out of firefighting mode.

David Gehringer, Mercury Interactive
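The abstract's first question, application or infrastructure, can be attacked with a crude timing split before any heavyweight tooling. A sketch (not Mercury Interactive's method) that separates time-to-first-byte, which bundles network latency and server processing, from payload transfer time:

```python
import time
import urllib.request

def response_breakdown(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        first_byte = time.perf_counter() - start  # network latency + server work
        resp.read()                               # pull the whole payload
        total = time.perf_counter() - start
    print(f"{url}: first byte {first_byte:.3f}s, total {total:.3f}s")

# A large gap between the two times points at the network or payload size;
# a slow first byte suggests server-side work (or raw network latency).
response_breakdown("http://example.com/")
```
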
Reliability Management With Continuous Automated Testing

If you're in business today, you're relying heavily on enterprise and eBusiness applications for your success. Yet despite that dependence, these applications are constantly being upgraded and customized, and on the development and testing side, their quality and reliability are often sacrificed to meet tighter deadlines. This session presents a new methodology for ensuring the reliability of your enterprise and eBusiness applications, and delivers specific suggestions on how to meet production deadlines without sacrificing quality.

Rohit Gupta, Segue Software
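The abstract announces a methodology rather than code, but its core move, re-running the regression suite continuously between upgrades, is easy to sketch. All names here are assumptions, and a real setup would use a scheduler or CI server rather than a bare loop:

```python
import subprocess
import time

SUITE = ["python", "-m", "unittest", "discover", "tests"]  # assumed suite command
INTERVAL = 60 * 60  # seconds between runs; hourly here

def alert(report):  # stub; wire to email or a pager in practice
    print("REGRESSION DETECTED:\n", report)

def continuous_testing():
    """Re-run the suite on a fixed schedule so reliability regressions
    surface soon after an upgrade or customization lands."""
    while True:
        result = subprocess.run(SUITE, capture_output=True, text=True)
        if result.returncode != 0:
            alert(result.stdout + result.stderr)
        time.sleep(INTERVAL)
```
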
Test, Observe, and Assess Embedded Applications During Development

Facing the paradox of developing better applications faster, developers of real-time, embedded, and networked applications have no choice but to use automated testing and runtime observation technologies. This session introduces you to processes and technologies designed to automate the unit, validation, and integration testing of everything from individual functions to complete distributed systems in embedded software applications.

Vincent Encontre, Rational Software Corporation
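Rational's actual tooling is not shown in the abstract. One widely used pattern for automating unit tests of embedded code is a host-based harness: the function under test is cross-compiled into a shared library for the development host and driven from a scripting language. A sketch under that assumption; the library name, function, and expected value are all hypothetical:

```python
import ctypes

# Host-based harness: the embedded function is cross-compiled into a
# shared library for the host, then exercised from Python.
# "libfirmware.so" and "checksum" are hypothetical names.
lib = ctypes.CDLL("./libfirmware.so")
lib.checksum.argtypes = [ctypes.POINTER(ctypes.c_uint8), ctypes.c_size_t]
lib.checksum.restype = ctypes.c_uint8

def test_checksum_of_known_frame():
    frame = (ctypes.c_uint8 * 4)(0x01, 0x02, 0x03, 0x04)
    # assumed behavior: checksum is the simple byte sum of the frame
    assert lib.checksum(frame, 4) == 0x0A
```
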
Software Test Automation 2002: A Case Study In Automating Web Performance Testing

Key points from this presentation:
- Define meaningful performance requirements.
- We're always searching for the maximum number of users, design bottlenecks, and performance tuning points.
- Changing your site (hardware or software) invalidates all previous predictors.
- Reduce the number of scripts through equivalence classes (see the sketch after this entry).
- Don't underestimate the hardware needed to simulate the load.
- Evaluate and improve your skills, knowledge, tools, and outsourced services.
- Document your process and results so that others may learn from your work.
- Use your new knowledge to improve your site's performance.
- Focus on progress, not perfection.

Lee Copeland, Software Quality Engineering
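As a rough illustration of the equivalence-class point above: group user journeys into a few behavioral classes, weight them by frequency, and drive virtual users from that table instead of maintaining one script per user. The scenario names and weights here are invented:

```python
import random
import threading
import time

# Equivalence classes of user behavior: one script per class,
# weighted by how common that behavior is. Names are illustrative.
SCENARIOS = {
    "browse_catalog": 0.6,
    "search_and_buy": 0.3,
    "account_update": 0.1,
}

def virtual_user(scenario):
    # placeholder for issuing the HTTP requests of one user journey
    time.sleep(random.random())

def run_load(total_users=100):
    threads = []
    for name, weight in SCENARIOS.items():
        for _ in range(int(total_users * weight)):
            t = threading.Thread(target=virtual_user, args=(name,))
            t.start()
            threads.append(t)
    for t in threads:
        t.join()

if __name__ == "__main__":
    run_load(total_users=100)
```
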
Test Tool Implementation Risks and Rewards

Did you know that an alarmingly high percentage of the test tools purchased are never successfully implemented? Many organizations could benefit tremendously from the effective use of testing tools; the problem is, they're not sure how to go about it. This session examines a dozen or more of the most common pitfalls encountered in the acquisition and implementation of testing tools and offers suggestions for avoiding them.

Rick Craig, Software Quality Engineering
The Business Case for Test Automation

In tight economic times, it's more important than ever to show a return on technology investments, including test automation. Unfortunately, management's expectations are usually unrealistic: they expect immediate results and aren't prepared for the ongoing level of investment required after the tool is purchased. Find out why the traditionally promised benefits of reduced test resources and cycle time are misleading and inaccurate.

Linda Hayes, WorkSoft, Inc.
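A quick worked example (all figures invented) of why the promised payback can mislead once ongoing script maintenance, which most pitches omit, is put back into the arithmetic:

```python
# Hypothetical figures: a simple break-even model showing how ignoring
# ongoing costs makes automation ROI look better than it is.
TOOL_LICENSE = 50_000        # one-time purchase
SCRIPT_MAINTENANCE = 2_000   # per month, often left out of the pitch
MANUAL_SAVINGS = 6_000       # manual testing effort saved per month

def months_to_break_even(include_maintenance=True):
    monthly_gain = MANUAL_SAVINGS
    if include_maintenance:
        monthly_gain -= SCRIPT_MAINTENANCE
    return TOOL_LICENSE / monthly_gain

print(months_to_break_even(False))  # ~8.3 months: the optimistic pitch
print(months_to_break_even(True))   # 12.5 months: with upkeep included
```
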
Get Control of Your Test Automation Project

Developing an automated regression test bed is no easy task. In fact, according to recent studies, more than 50 percent of test automation projects fail. To improve on this statistic, companies must establish a consistent, repeatable approach to implementing test automation projects. Jeff Tatelman uses an inventory control application to teach you the key steps, from requirements gathering through implementation, that ensure success on any test automation endeavor.

Jeff Tatelman, Spherion Technology Architects