Conference Presentations

A White Box Approach to Testing an eCommerce System

This presentation describes one team's experience installing and testing a multiserver eCommerce system whose storefronts were to be created by the customer.

Andrew O. Mellinger, Critical Path Software
Enjoying the Perks of Model-Based Testing

Software testing demands the use of some model to guide such test tasks as selecting test inputs, validating the adequacy of tests, and gaining insight into test effectiveness. Most testers gradually build a mental model of the system under test, which enables them to better understand and more thoroughly test its many functions. Explicit models, as formal and precise representations of a tester's perception of a program, are shareable, reusable vehicles for communication among testers and other teams, and for automating many tasks that are otherwise tedious and labor-intensive.
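
As a rough sketch of how an explicit model can drive automation, the example below models a hypothetical storefront session as a small finite state machine and generates test steps by walking it. The states, actions, and generation strategy are illustrative assumptions, not material from the presentation.

```python
import random

# Illustrative sketch only: a hypothetical session modeled as a finite state
# machine; test steps are generated by randomly walking the model.
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"add_to_cart": "cart", "logout": "logged_out"},
    "cart": {"checkout": "logged_in", "logout": "logged_out"},
}

def generate_test_sequence(start="logged_out", length=6, seed=None):
    """Return a list of (action, expected_state) steps derived from the model."""
    rng = random.Random(seed)
    state, steps = start, []
    for _ in range(length):
        action = rng.choice(sorted(MODEL[state]))
        state = MODEL[state][action]
        steps.append((action, state))
    return steps

if __name__ == "__main__":
    for action, expected in generate_test_sequence(seed=1):
        print(f"perform {action!r}, then check the system is in state {expected!r}")
```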

Ibrahim K. El-Far, Florida Institute of Technology
Introduction to Usability Testing

What is usability? Why is it important? If these questions wake you in the middle of the night, then this presentation is for you. Cheryl Nesta discusses where usability testing fits within the broader framework of quality assurance and what expectations are appropriate given its uses and applicability. Explore methodology, process flow, and goal identification and definition. Real-world examples create a hands-on introductory experience.

Cheryl L. Nesta, Vanteon
Test Result Checking Patterns

Determining how a test case detects a product failure involves several test case design trade-offs, including the characteristics of the test data used and when comparisons are performed. This paper addresses how result checking affects test design.
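
As a rough illustration of two result-checking styles and their trade-offs, the sketch below contrasts a reference-based check (exact comparison against a stored expected value) with a property-based check (verifying invariants the output must satisfy). The function under test and the checks are assumptions for the example, not patterns taken from the paper.

```python
from collections import Counter

def sort_under_test(items):
    # Stand-in for the product code whose results are being checked.
    return sorted(items)

def check_against_reference(inputs, expected):
    """Reference-based check: exact comparison with a precomputed result."""
    actual = sort_under_test(inputs)
    assert actual == expected, f"expected {expected}, got {actual}"

def check_properties(inputs):
    """Property-based check: verify invariants without a stored answer."""
    actual = sort_under_test(inputs)
    assert all(a <= b for a, b in zip(actual, actual[1:]))  # output is ordered
    assert Counter(actual) == Counter(inputs)               # same elements kept

check_against_reference([3, 1, 2], [1, 2, 3])
check_properties([5, 4, 4, 0])
print("both checking styles passed")
```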

Keith Stobie, Microsoft
Data in Functional Testing: You Can't Live Without It

This paper illustrates some of the ways that data can influence the test process and shows that testing can be improved by a careful choice of input data. It concentrates on data-heavy applications: those that use databases or are heavily influenced by the data they hold. The focus is on input data, rather than output data or the transitional states the data passes through during processing, because input data has the greatest influence on functional testing and is the simplest to manipulate. The paper does not consider areas where data is important to non-functional testing, such as operational profiles, massive datasets, and environmental tuning.
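
As a small illustration of how carefully chosen input data sharpens functional tests, the sketch below drives a hypothetical customer-name validator with boundary values and awkward characters rather than arbitrary data. The validator and the data set are assumptions for the example, not content from the paper.

```python
# Hypothetical validator, standing in for the application logic under test:
# a name must be 1-30 printable characters.
def is_valid_customer_name(name):
    return 0 < len(name) <= 30 and name.isprintable()

# Input data chosen deliberately: boundaries, awkward characters, non-ASCII.
TEST_DATA = [
    ("", False),              # empty string: below the lower boundary
    ("A", True),              # shortest legal value
    ("A" * 30, True),         # exactly at the upper boundary
    ("A" * 31, False),        # just past the upper boundary
    ("O'Brien", True),        # apostrophe, a common awkward character
    ("名前", True),            # non-ASCII data
    ("line\nbreak", False),   # embedded control character
]

for value, expected in TEST_DATA:
    result = is_valid_customer_name(value)
    assert result == expected, f"{value!r}: expected {expected}, got {result}"
print("all data-driven checks passed")
```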

James Lyndsay, Workroom Productions
Targeted Software Fault Insertion

The completely random software fault insertion techniques suggested in much of the research literature are not practical for most software products. This paper argues that a modest, targeted software fault insertion effort aimed at a few common error conditions can have a dramatic impact on defect detection rates and quality. The paper uses the example of a software fault insertion subsystem, codenamed Faulty Towers, which was added to MangoSoft's test automation in order to target common failures and errors.
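
As a rough sketch of the targeted approach (not a reproduction of the Faulty Towers subsystem itself), the example below wraps a file-write operation with a single switchable fault that simulates one common error condition, a full disk, so the product's error-handling path can be exercised deterministically. The function names and the trigger flag are assumptions for the illustration.

```python
import errno
import os

INJECT_DISK_FULL = False   # fault trigger, normally off in production builds

def write_record(path, data):
    """Product-style code path with one targeted fault insertion point."""
    if INJECT_DISK_FULL:
        # Simulate a common failure: the disk is full.
        raise OSError(errno.ENOSPC, os.strerror(errno.ENOSPC), path)
    with open(path, "a") as f:
        f.write(data + "\n")

# Test: switch the fault on and confirm the error path behaves as expected.
INJECT_DISK_FULL = True
try:
    write_record("records.log", "order #123")
except OSError as exc:
    assert exc.errno == errno.ENOSPC
    print("disk-full handling exercised:", exc)
```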

Paul Houlihan, MangoSoft Corporation
White-Box Testing: What Your Developers Don't Want You to Know

In this presentation, John Peraza describes how to use white-box testing to discover defects that would otherwise remain undetected if you conducted only black-box testing. Learn various techniques, including test coverage, run-time memory leak detection, dynamic bounds checking, and code assessment for internationalization, that you can use to conduct white-box testing. Discover how BMC Software has benefited from including white-box testing in its quality assurance efforts.
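
As a small sketch of one of these techniques, run-time memory leak detection, the example below uses Python's standard tracemalloc module to surface allocation growth caused by a deliberately leaky cache. The leaky function is an illustrative assumption; the tooling discussed in the presentation is not shown here.

```python
import tracemalloc

_cache = []   # deliberately leaky: entries are appended and never evicted

def handle_request(payload):
    _cache.append(payload * 1000)

tracemalloc.start()
before = tracemalloc.take_snapshot()

for i in range(1000):
    handle_request(f"request-{i}")

after = tracemalloc.take_snapshot()
# The leaky allocation site shows up at the top of the growth report.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```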

John Peraza, BMC Software, Inc.
STAREAST 2001: Designing an Automated Web Test Environment

This paper offers an alternative to the typical automated test scripting method of "record and playback now and enhance the automation environment later." It explores a regression automation system design for testing Internet applications through the GUI, along with scripting techniques that enhance the scalability and flexibility of an automated test suite. The paper presents a basic structure for an automated test environment and expands on each of the items found in that structure. Web testing levels are laid out, along with a basic approach to designing test scripts based on those levels.
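
As a rough sketch of the kind of layering such a design implies, the example below separates test scripts from page-level functions and from a thin driver wrapper, so that a GUI change is absorbed in one layer rather than in every script. The class names, locators, and stubbed driver are assumptions for illustration, not the structure presented in the paper.

```python
class BrowserDriver:
    """Thin wrapper around whatever GUI/web driver the team actually uses."""
    def click(self, locator):
        print(f"click {locator}")
    def type(self, locator, text):
        print(f"type {text!r} into {locator}")
    def read(self, locator):
        return "Welcome, demo"   # stubbed response for the sketch

class LoginPage:
    """Page layer: the one place that knows the login screen's locators."""
    def __init__(self, driver):
        self.driver = driver
    def login(self, user, password):
        self.driver.type("id=username", user)
        self.driver.type("id=password", password)
        self.driver.click("id=submit")
        return self.driver.read("id=banner")

# Test layer: scripts stay short and stable because locators live elsewhere.
def test_valid_login():
    banner = LoginPage(BrowserDriver()).login("demo", "secret")
    assert "Welcome" in banner

test_valid_login()
```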

Dion Johnson, Pointe Technology Group, Inc.
Looking Under the Covers to Test Web Applications

Web applications are more difficult to test than other applications, yet their mission-critical nature and high visibility make high-quality testing essential. Oliver Cole discusses how white-box testing techniques can be used to improve the quality and reliability of Web applications. Learn about the four key types of Web testing: functionality/correctness testing, load/stress testing, performance testing, and fault injection. Examples are provided in each category.
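
As a small sketch of the load/stress category, the example below sends concurrent requests to a URL and records response times. The target URL, worker count, and timeout are placeholder assumptions and should point at a test environment; this is not the tooling discussed in the presentation.

```python
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8000/"   # placeholder: point at a test system
WORKERS = 10
timings, lock = [], threading.Lock()

def worker():
    start = time.perf_counter()
    try:
        urllib.request.urlopen(TARGET_URL, timeout=5).read()
    except OSError:
        pass   # a real harness would count and report failures separately
    elapsed = time.perf_counter() - start
    with lock:
        timings.append(elapsed)

threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(timings)} requests, slowest response {max(timings):.3f}s")
```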

Oliver Cole, OC Systems, Inc.
STAREAST 2001: Exploratory Testing in Pairs

Exploratory testing involves simultaneous activities: learning about the program and the risks associated with it, planning and conducting tests, troubleshooting, and reporting results. This highly skilled work depends on the ability of the tester to stay focused and alert. Based on a successful pilot study, Cem Kaner discusses why two testers can be more effective working together than apart. Explore the advantages of testing in pairs, including ongoing dialogue to keep both testers alert and focused, faster and more effective troubleshooting, and an excellent opportunity for a seasoned tester to train a novice.

Cem Kaner, Florida Institute of Technology and James Bach, Satisfice Inc.
