Conference Presentations

The Skill of Factoring: Identifying What to Test

When you're given a product to test, a variety of clients to satisfy, and a short deadline to meet, how do you decide specifically what to test and how to test it? One way is to identify the things that might matter to some clients and not to others. In this interactive session, Michael Bolton describes the skill of factoring (not to be confused with refactoring): ways to identify dimensions of interest relevant to testing. Through individual and group exercises, you'll practice the skills of factoring, learn guideword heuristics that can help identify important factors, and develop guidewords of your own. Experience the ways in which small changes in context can dramatically expand or contract your understanding of what's important and unimportant. Learn a framework for identifying factors that matter to your clients so you can respond rapidly, confidently, and expertly to any testing mission.

Michael Bolton, DevelopSense
Six Budget Killers for Testing Organizations

You have already taken some basic cost-cutting steps and saved your organization money. Now, you are asked to dig even deeper into your testing budget. Where should you start? You may be looking right at the areas to address and not know what you are seeing or what to do about them. Paul Trompeter explains how to take a fresh look at your existing hardware components, re-examine reliability and availability requirements, and prepare for a future scalable environment. Paul discusses how to get regulatory affairs in order, defuse the ticking storage overload bomb, and streamline testing of complex software systems. For each budget killer, you'll learn innovative ways to overcome budget challenges while maintaining an effective test organization. Discover how to slow the spending of your testing budget while increasing the return on your testing investment and, at the same time, keeping your sanity and sense of humor.

Paul Trompeter, GDI Infotech
Offshoring Test Automation: Double Benefit or Double Backlash?

Although software testing can be an exciting challenge, testers often are bogged down in voluminous, manual testing and re-testing with relatively shallow requirements-based test cases. This old approach costs projects time and money while stealing resources away from more creative testing. Organizations look at two ways to reduce these repetitive testing costs: automation and offshoring. Combining these two approaches has the promise of even more savings to the organization. However, the reality of offshored test automation can be disappointing and even lead to a "double backlash" instead of a "double benefit" because both automation and offshoring are complex operations in and of themselves. Hans Buwalda, a test automation pioneer, presents his key elements for success (clear direction, good methods, appropriate tools, and effective supervision) and explores the most common pitfalls to avoid.

Hans Buwalda, LogiGear Corporation
Maximize Your Investment in Automation Tools

Experience has shown that many organizations attempt to automate their testing processes without effective vision, planning, and follow-through. As a result, within a year or two, test automation efforts are declared worthless and the tools are moved to the shelf. By creating a centralized team with domain expertise and identifying specific test automation needs, Intuit is able to build, deploy, and test products using a common set of tools, processes, and methodologies they call Autolab. Shoba Raj describes how the Small Business Group at Intuit maximizes its return on investment by utilizing the Autolab. She explores the benefits of time savings, capital cost savings, quality improvements, product health checks, and tool license fee aggregation. Learn how to build a centralized testing team and create your own Autolab that can leverage your services with standard tools, test environments, and processes.

Shoba Raj, Intuit, Inc.
Test Environments: The Weakest Link in Your Testing Chain

Test environments are an important part of our testing portfolio, yet often we seem to spend very little time planning, creating, and maintaining them. Julie Gardiner explains the reasons we fail to build test environments that are realistic, reliable, representative, and have integrity. As a result, they become the weakest link in our testing process. Julie provides examples of environments (good, bad, and sometimes ugly) and shares why the ugly are often a symptom of the organization's disregard for testing. She offers practical advice for transforming your current test environment from the weakest into the strongest link of your testing.

Julie Gardiner, Grove Consultants
Improving Software Testing: One Organization's Journey

In the coming years, testers will be placed under ever increasing pressure. Joachim Herschmann describes key future trends including the increasing alignment of development and test with business needs, the integration of traditionally separate disciplines, a shared responsibility for quality, and the increased use of testing technology. Joachim describes the experiences of Borland's Linz development lab as a framework for a broader discussion about these kinds of changes and their cultural impact on the organization. He describes the journey from a waterfall-based methodology to an iterative, sprint-based development approach and the integration of developers and testers into a single team of engineers. They found that agile development provided new levels of productivity and value, and posed new challenges of shortened test cycles and a need for new test skills and tools.

Joachim Herschmann, Borland Software
System Integration Testing of Portable Devices

System integration testing of portable devices delivered as part of a larger system is often not recognized by project managers, developers, and even some testers as a critical component of the testing effort. Because portable devices require several embedded applications working together to meet functional expectations, much of the testing effort must include system integration tests. Often, testers do not have experience with portable devices, and, in particular, how to test the complete, integrated system with the devices. Using Windows CE as the example operating system, JeanAnn Harrison describes how to plan the testing effort to maximize test coverage and reduce time spent on regression testing. Learn when to test applications separately and when to integrate applications to uncover hidden, potentially deadly bugs.

JeanAnn Harrison, CardioNet, Inc.
Toward 21st Century Automation for Agile Testing

As more companies move to agile software delivery approaches, new challenges and dynamics are impacting their testing practices. Organizations face many issues when implementing automation, including selecting tools that are usable and flexible, encouraging non-technical and non-testing staff to contribute tests, enabling open-source integration, and promoting test-driven development. Dietmar Strasser shares his experiences tackling these challenges as many organizations shift from traditional test automation to agile. Learn about the increased importance of testing in the agile development environment; the role that process and tools play in supporting the agile team; the differences between traditional and agile test automation; how to develop fast, automated test scripts; the use of agile and traditional testing methods side-by-side; and how to deal with test automation in a distributed development environment.

Dietmar Strasser, Micro Focus
Choosing the Right Test Cases for Automation

With hopes of reducing testing cost and effort, companies often look to test automation as the cure-all for their problems. However, without clear and practical objectives, a test automation project is bound to fail. One key factor in setting automation objectives is to identify which test cases should be automated and which should remain manual processes. Pradeep Kumar describes a practical methodology to identify the best test cases as candidates for automation. His nine-point decision tree process for selecting test cases examines technical feasibility, execution frequency, component reusability, criticality, effort required for automation, total resource requirements, test case complexity, portability, and execution time. Discover how to achieve significant return on your automation investments by creating test scripts that are amenable to easy execution, reuse, and more.
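The nine selection criteria named above can be pictured as a simple scoring exercise. The sketch below is a hypothetical illustration, not the speaker's actual decision tree: the 1-5 rating scale, the weighting (equal weights), and the threshold are all assumptions added for the example.

```python
# Hypothetical scoring sketch for the nine automation-selection criteria
# named in the abstract. Scale, inversion choices, and threshold are
# illustrative assumptions, not the presenter's method.

CRITERIA = [
    "technical_feasibility",
    "execution_frequency",
    "component_reusability",
    "criticality",
    "automation_effort",      # lower effort favors automation, so inverted
    "resource_requirements",  # lower requirements favor automation, so inverted
    "complexity",             # simpler test cases favor automation, so inverted
    "portability",
    "execution_time",         # long manual runs gain the most from automation
]

INVERTED = {"automation_effort", "resource_requirements", "complexity"}


def automation_score(ratings):
    """Average a test case's 1-5 ratings across all nine criteria.

    Criteria in INVERTED are flipped (6 - value) so that, for example,
    a low automation effort raises the overall score.
    """
    total = 0
    for criterion in CRITERIA:
        value = ratings[criterion]
        if criterion in INVERTED:
            value = 6 - value  # flip a 1-5 rating
        total += value
    return total / len(CRITERIA)


def select_for_automation(test_cases, threshold=3.5):
    """Return the names of test cases whose score meets the threshold."""
    return [name for name, ratings in test_cases.items()
            if automation_score(ratings) >= threshold]
```

In practice each criterion would carry its own weight and a decision tree would short-circuit on hard blockers (e.g., a test case that is not technically feasible is rejected regardless of its other ratings); the averaging above is only the simplest possible stand-in.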

Pradeep Kumar, Cognizant Technology Solutions
Test Planning: Defining Boundaries and Setting Expectations

Is testing often the last thing considered in your projects? Does your test team always seem out of the loop? Then, Jane Fraser can help you. She describes a process in which testers focus on reaching consensus with the whole project team. With Jane's approach, you work through the requirements and design to document what you plan to test, how you plan to test, and a most important element: what you are NOT going to test. Learn how to reach agreement among developers, product owners, and testers about how the project will be tested before coding starts. In her work, Jane has found that defining the boundaries of the testing upfront has brought the development group closer to the testing group and improved communications about changes and risks. Join Jane to review sample test plans that help improve projects by setting the expectations early.

Jane Fraser, Electronic Arts