STARWEST 2007 - Software Testing Conference
PRESENTATIONS
Load Generation Capabilities for Effective Performance Testing
To carry out performance testing of Web applications, you must ensure that sufficiently powerful hardware is available to generate the required load levels. At the same time, you need to avoid investing in unnecessarily expensive hardware "just to be sure." A valid model for estimating the load generation capabilities of performance testing tools on different hardware configurations will help you generate the load you need with the minimum hardware. Rajeev Joshi believes the models provided by most tool vendors are too simplistic for practical use.
John Scarborough, Aztecsoft
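For scale, the following Python sketch shows the kind of back-of-the-envelope estimate such a sizing model produces. It combines Little's Law with invented per-user memory and CPU budgets; all numbers, and the formula choice itself, are illustrative assumptions, not the model discussed in this presentation.

```python
import math

# Back-of-the-envelope load-generator sizing sketch (illustrative assumptions only).
# Little's Law: concurrent virtual users = request rate x (response time + think time).

TARGET_RPS = 500.0          # assumed target request rate to generate
RESPONSE_TIME_S = 0.8       # assumed average server response time
THINK_TIME_S = 5.0          # assumed pause a simulated user waits between requests

MEM_PER_VUSER_MB = 2.5      # assumed memory footprint of one virtual user
CPU_MS_PER_REQUEST = 1.2    # assumed CPU time the tool spends per request

MACHINE_MEM_MB = 8 * 1024   # memory available on one load generator
MACHINE_CORES = 4           # CPU cores available on one load generator

# How many virtual users are needed to sustain the target rate.
vusers_needed = TARGET_RPS * (RESPONSE_TIME_S + THINK_TIME_S)

# Capacity limits of a single load generator under these assumptions.
vusers_by_memory = MACHINE_MEM_MB / MEM_PER_VUSER_MB
rps_by_cpu = MACHINE_CORES * 1000.0 / CPU_MS_PER_REQUEST

print(f"Virtual users needed: {vusers_needed:.0f}")
print(f"One machine supports ~{vusers_by_memory:.0f} virtual users (memory-bound)")
print(f"One machine sustains ~{rps_by_cpu:.0f} req/s (CPU-bound)")

machines = max(vusers_needed / vusers_by_memory, TARGET_RPS / rps_by_cpu)
print(f"Estimated load generators required: {math.ceil(machines)}")
```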
Load Testing New Web Technologies
Web 2.0 applications represent a major evolution in Web development. These applications are based on new technologies such as AJAX, RIA, Web services, and SOA. Unless you, as a tester, understand the inner workings of these technologies, you cannot adequately test their functionality or prepare realistic and valid performance tests. Eran Witkon explains these new Web technologies and shows how to design and implement appropriate load tests, execute them, and interpret the results.
Eran Witkon, RadView Software
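As one illustration of what a load test against these technologies can look like, the Python sketch below drives concurrent requests at a hypothetical AJAX (XHR) JSON endpoint and reports latency percentiles. The endpoint URL, user counts, and request mix are assumptions, not material from the presentation.

```python
# Minimal sketch of a load test against an AJAX (XHR) endpoint, assuming a
# hypothetical JSON service at /api/search. Web 2.0 load tests must exercise the
# background XHR calls the browser makes, not just the initial page load.
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:8000/api/search?q=test"  # hypothetical endpoint
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10

def one_user(user_id):
    """Simulate one user issuing repeated XHR-style requests; return latencies."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
            json.loads(resp.read())          # parse the JSON like the browser would
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(one_user, range(CONCURRENT_USERS)))
    all_latencies = sorted(t for user in results for t in user)
    print(f"requests: {len(all_latencies)}")
    print(f"median latency: {all_latencies[len(all_latencies) // 2]:.3f}s")
    print(f"95th percentile: {all_latencies[int(len(all_latencies) * 0.95)]:.3f}s")
```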
Managing Keyword-Driven Testing
Keyword-driven test automation has become quite popular and has entered the mainstream. Although some hail it as a panacea, many companies using it in one form or another have been disappointed. Keyword-driven testing projects succeed only if they are managed well. This presentation is not about the keyword method itself. Instead, Hans Buwalda focuses on the management side: how to manage a keyword-driven project. What are the factors that indicate progress and success?
Hans Buwalda, LogiGear Corporation
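For readers unfamiliar with the technique being managed, here is a minimal Python sketch of a keyword-driven test: a table of keywords and arguments executed by a small interpreter. The keywords and actions are hypothetical illustrations, not taken from this presentation.

```python
# Hypothetical keyword table: each test step is a keyword plus arguments, so that
# non-programmers can author tests while an interpreter maps keywords to actions.
TEST_CASE = [
    ("open_browser", ["https://example.com/login"]),
    ("enter_text",   ["username", "alice"]),
    ("enter_text",   ["password", "secret"]),
    ("click",        ["login_button"]),
    ("verify_text",  ["Welcome, alice"]),
]

# Placeholder action functions; a real framework would drive the application here.
def open_browser(url):
    print(f"Opening {url}")

def enter_text(field, value):
    print(f"Typing '{value}' into {field}")

def click(element):
    print(f"Clicking {element}")

def verify_text(expected):
    print(f"Verifying page contains '{expected}'")

KEYWORDS = {
    "open_browser": open_browser,
    "enter_text": enter_text,
    "click": click,
    "verify_text": verify_text,
}

def run(test_case):
    """Execute each keyword row by dispatching to its action function."""
    for keyword, args in test_case:
        KEYWORDS[keyword](*args)

if __name__ == "__main__":
    run(TEST_CASE)
```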
Measures and Metrics for Your Biggest Testing Challenges
Over the course of many STAR conferences, Ed Weller has collected a list of your biggest challenges in testing: lack of time, unrealistic deadlines, lack of resources, inadequate requirements, last-minute changes, knowing when to stop testing, and poor-quality code from development. Using this list and Victor Basili's "Goal, Question, Metric" approach to measurement, Ed identifies the measurements and metrics that will help test managers and engineers objectively evaluate and analyze their biggest problems.
Edward Weller, Integrated Productivity Solutions, LLC
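To make the Goal, Question, Metric idea concrete, the following sketch applies it to one of the challenges listed above, "knowing when to stop testing." The goal, questions, and metrics shown are illustrative assumptions, not Ed Weller's actual examples.

```python
# Hypothetical Goal-Question-Metric (GQM) breakdown for one testing challenge.
GQM = {
    "goal": "Decide objectively when testing can stop for this release",
    "questions": {
        "Is the defect discovery rate declining?": [
            "defects found per week of testing",
            "cumulative defects found vs. test effort (hours)",
        ],
        "Have the riskiest areas been covered?": [
            "percentage of high-risk requirements with passing tests",
            "code coverage of high-risk modules",
        ],
    },
}

# Print the breakdown as a simple report.
if __name__ == "__main__":
    print("Goal:", GQM["goal"])
    for question, metrics in GQM["questions"].items():
        print("  Question:", question)
        for metric in metrics:
            print("    Metric:", metric)
```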
Mission Possible: An Exploratory Testing Experience
Interested in exploratory testing and its use on rich Internet applications, the new interactive side of the Web? Erik Petersen searched the Web to find some interesting and diverse systems to test using exploratory testing techniques. Watch Erik as he goes on a testing exploration in real time with volunteers from the audience. He demonstrates and discusses the testing approaches he uses every day, from the purely exploratory to more structured approaches suitable for teams.
Erik Petersen, Emprove
Perils and Pitfalls of the New "Agile" Tester
If your background is testing on traditional projects, you are used to receiving something called "requirements" to develop test cases, and sometime later receiving an operational system to test. In an agile project, you are expected to test continually changing code based on requirements that are being uncovered in almost real time. Many perils and pitfalls await testers new to agile development. For example, you might think, "I'll test the latest 'stories' on Tuesday when I get my next build." And you would be WRONG!
Janet Gregory, DragonFire Inc.
Preparing for the Madness: Load Testing the College Bracket Challenge
For the past two seasons, the Windows Live development team has run the Live.com College Bracket Challenge, which hosts brackets for scores of customers during the "March Madness" NCAA basketball tournament. March Madness is the busiest time of the year for most sports Web sites. So, how do you build your Web application and test it for scalability to potentially millions of customers?
Eric Morris, Microsoft
Result Driven Testing: Adding Value to Your Organization
Software testers often have great difficulty in quantifying and explaining the value of their work. One consequence is that many testing projects receive insufficient resources and, therefore, are unable to deliver the best value.
Derk-Jan Grood, Collis
Selecting Mischief Makers: Vital Interviewing Skills
Much of testing is tedious: the focus on details, the repetitive execution of the same code, the detailed paperwork, the seemingly endless technical discussions, and the complex data analysis. All good testers have the skills and aptitude necessary to deal with these activities. However, great testers have one other characteristic: they are mischievous. As a hiring manager, you should take on the challenge of detecting mischievous testers in order to build the best testing staff.
Andy Bozman, Orthodyne Electronics
Session-Based Exploratory Testing with a Twist
Session-based exploratory testing is an effective means to test when time is short and requirements are not clearly defined. Is it advisable to use session-based exploratory testing when the requirements are known and documented? How about when the test cases are already defined? What if half of the test team is unfamiliar with the software under test? The answers are yes, yes, yes. Brenda Lee explains how her team modified the session-based exploratory testing approach to include requirements and test cases as part of its charter.
Brenda Lee, Parallax Inc.
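As a rough illustration of such a modified charter, the sketch below models a time-boxed session that also carries assigned requirements and scripted test cases. All fields and identifiers are hypothetical, not taken from Brenda Lee's presentation.

```python
# Hypothetical session charter: a time-boxed exploratory session that also
# carries assigned requirements and scripted test cases (illustrative only).
charter = {
    "mission": "Explore the order-checkout flow for data-validation problems",
    "time_box_minutes": 90,
    "tester": "new team member paired with an experienced tester",
    "requirements_in_scope": ["REQ-112 payment validation", "REQ-118 address rules"],
    "test_cases_to_cover": ["TC-045 invalid card number", "TC-047 missing postcode"],
    "notes": [],       # filled in during the session
    "bugs_found": [],  # filled in during the session
}

def session_report(c):
    """Produce a short end-of-session summary for the debrief."""
    return (
        f"Charter: {c['mission']}\n"
        f"Time box: {c['time_box_minutes']} min\n"
        f"Requirements covered: {', '.join(c['requirements_in_scope'])}\n"
        f"Scripted cases run: {', '.join(c['test_cases_to_cover'])}\n"
        f"Bugs found: {len(c['bugs_found'])}"
    )

if __name__ == "__main__":
    print(session_report(charter))
```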