Conference Presentations

Test Strategies for the Modern Distributed World

Enterprise application development is quickly evolving with SOA and Web 2.0 taking center stage. Organizational structures are changing, with growing numbers of testing teams employing offshore resources. What do these changes mean to you, and what should you do to prepare? Most testing groups were created around traditional development processes, traditional application architectures, and traditional organizational structures. As agile enters the mainstream, more change is on the way. Outsourcing, offshore development, and acquisitions continuously reshape the organizational landscape. Dan Koloski discusses proven, practical approaches for adapting to today's new technologies, new structures, and the modern distributed world. He will discuss how to communicate effectively across virtual and physical silos as well as ways to adapt your test strategies and execution to component-based applications.

Dan Koloski, Empirix
Optimize Your Testing with Virtual Test Lab Automation

The complex nature of software development often requires testing on multiple hardware platforms, operating systems, Web and application servers, and databases. Add to that the many different builds, patches, and regionalized versions that development delivers, and you understand the immense challenge test engineers face in trying to provide adequate test coverage. Adding virtual lab automation to your testing process can help your organization overcome these challenges and may dramatically improve the way you test--at a fraction of the cost of traditional multi-system approaches. Brad Johnson explores ways to seamlessly integrate virtual environments into the software testing process and explains how virtual test labs enable a test team to test more efficiently, across a wider range of environments, and with greater coverage of critical requirements.
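
For illustration only, here is a minimal Python sketch of the kind of environment matrix a virtual test lab makes practical; provision_vm() and run_suite() are hypothetical placeholders, not any particular vendor's API:

    # Enumerate every combination in a virtual test-lab matrix.
    from itertools import product

    OPERATING_SYSTEMS = ["Windows Server 2003", "RHEL 4", "Solaris 10"]
    APP_SERVERS = ["WebSphere 6", "WebLogic 9", "JBoss 4"]
    DATABASES = ["Oracle 10g", "SQL Server 2005", "DB2 8"]

    def provision_vm(os_name, app_server, database):
        # Hypothetical: restore a VM snapshot matching this configuration.
        print(f"Provisioning {os_name} / {app_server} / {database}")

    def run_suite(build_id):
        # Hypothetical: execute the regression suite against the build.
        print(f"Running suite against build {build_id}")

    for os_name, app_server, database in product(
            OPERATING_SYSTEMS, APP_SERVERS, DATABASES):
        provision_vm(os_name, app_server, database)
        run_suite(build_id="nightly-1234")

Restoring snapshots rather than rebuilding physical machines is what makes sweeping all 27 combinations in this sketch feasible overnight.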

Brad Johnson, Borland - The Open ALM Company
Avoid Performance Testing Data Deception

Don't be fooled by your performance test results. Performance testing can easily generate an unwieldy amount of data, some relevant and some not. Testers and their tools often use statistical methods to make sense of the data, but using statistics requires sacrificing accuracy and thoroughness. The good news is that we do not need to understand all the details to make good use of test results. The challenge is to determine what information really matters and how to present it in a useful manner. Join Ben Simo as he addresses common statistical problems in performance testing, including built-in bias, agreeable averages, invisible inadequacies, gargantuan groupings, stingy sets, mountainous molehills, creative charting, alien alliances, and more. Find out how statistical reporting can deceive rather than inform, often unintentionally, and learn to recognize what the numbers do not say.
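
For illustration, a minimal Python sketch of the "agreeable averages" trap; the response-time figures are invented:

    # Two response-time samples with identical means but very different
    # user experience: the average hides the spike entirely.
    import statistics

    steady = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0]  # seconds
    spiky = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 6.6]   # seconds

    for name, sample in [("steady", steady), ("spiky", spiky)]:
        print(f"{name}: mean={statistics.mean(sample):.2f}s"
              f"  max={max(sample):.2f}s")
    # Both report mean=1.00s; only the max reveals the 6.6s outlier.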

Ben Simo, Standard & Poor's
STAREAST 2008: Seven Habits of Highly Effective Automation Testers

In many organizations, test automation is becoming a specialized career path. Mukesh Mulchandani identifies seven habits of highly effective automation specialists and compares them with Stephen Covey's classic Seven Habits of Highly Effective People. Mukesh not only describes the behavior patterns of effective automation testers but also discusses how to internalize these patterns so that you use them instinctively. Drawing on his experience managing large test automation projects for financial applications, Mukesh describes obvious habits such as saving and reusing tests. He then describes the uncommon but essential habits of strategizing, seeking, selling, and communicating. Learn how to avoid the bad habits that automation test novices, and even experts, may unconsciously adopt.

  • Keys to successful test automation
  • Leadership skills needed by test automation specialists
Mukesh Mulchandani, ZenTEST Labs
STAREAST 2008: Understanding Test Coverage

Test coverage of application functionality is often poorly understood and always hard to measure. If they measure it at all, many testers express coverage in terms of numbers, as a percentage or proportion, but a percentage of what? When we test, we develop two parallel stories. The "product story" is what we know and can infer about the product: important information about how it works and how it might fail. The "testing story" is how we modeled the testing space, the oracles that we used, and the extent to which we configured, operated, observed, and evaluated the product. To understand test coverage, we must know not only that what we did test was tested well enough but also what we did not test at all.
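
A minimal Python sketch of the "percentage of what?" problem; the modeled conditions are invented for illustration:

    # The same tests score 80% or 40% depending on how the testing
    # space is modeled, so the percentage alone tells us little.
    tested = {"login-valid", "login-invalid", "logout", "search-basic"}

    shallow_model = tested | {"search-empty"}
    richer_model = shallow_model | {"search-unicode", "session-timeout",
                                    "concurrent-login", "password-reset",
                                    "search-injection"}

    for name, model in [("shallow model", shallow_model),
                        ("richer model", richer_model)]:
        pct = 100 * len(tested & model) / len(model)
        print(f"{name}: {pct:.0f}% of {len(model)} modeled conditions covered")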

Michael Bolton, DevelopSense
Learning From the Past: Leveraging Defect Data

If test improvement activities are to be successful, we must convince management that our efforts are focused on areas with significant payback opportunities. Brian Robinson reports that in his organization a data-driven approach to improvement has led management, developers, and testers to adopt new approaches and strategies. They collect data from their existing defect tracking system, source code repository, and the document management system used in development. From this data, they analyze and classify defects that impacted schedule (late-phase test failures) or cost (customer failures). Each defect type is then mapped to the test phase responsible for finding it. This mapping helps define a test strategy for each phase of testing, and at the same time areas for test improvement become obvious.
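
As a rough illustration of the mapping step, here is a minimal Python sketch; the defect records, types, and phase assignments are all invented:

    # Map escaped defects to the test phase that should have caught them;
    # phases with the most escapes are the improvement candidates.
    from collections import Counter

    defects = [
        {"id": 101, "type": "interface", "found_in": "customer"},
        {"id": 102, "type": "logic", "found_in": "system test"},
        {"id": 103, "type": "interface", "found_in": "customer"},
        {"id": 104, "type": "performance", "found_in": "customer"},
    ]

    responsible_phase = {   # which phase is expected to find each type
        "interface": "integration test",
        "logic": "unit test",
        "performance": "system test",
    }

    escapes = Counter(responsible_phase[d["type"]] for d in defects
                      if d["found_in"] == "customer")
    for phase, count in escapes.most_common():
        print(f"{phase}: {count} escaped defect(s)")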

Brian Robinson, ABB Inc.
Performance Testing in Enterprise Application Environments

As systems become more complex--serving the enterprise and implemented on the Web and across the Internet--performance testing is becoming more important and more difficult. David Chadwick suggests that the starting point is to design tests that reflect real user activity, including independent arrivals of transactions and varying input data to prevent "cache only" results. David explains how to break down the end-to-end system response time into the distributed components involved in processing the transactions. Learn to use resource-monitoring data to discover bottlenecks on individual systems. By examining the frequency of and time spent in various processes, performance testers can determine where resources are being consumed and how to tune a system for better performance.
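
For illustration, a minimal Python sketch of independent arrivals and varied inputs; submit_transaction() is a hypothetical stand-in for a real load driver:

    # Exponential inter-arrival times model independent (Poisson) arrivals,
    # and randomized account IDs keep the server from answering from cache.
    import random
    import time

    MEAN_ARRIVAL_RATE = 5.0            # transactions per second
    ACCOUNT_IDS = range(1, 100_000)    # vary inputs to defeat caching

    def submit_transaction(account_id):
        # Hypothetical: fire one transaction at the system under test.
        print(f"lookup account {account_id}")

    for _ in range(20):
        time.sleep(random.expovariate(MEAN_ARRIVAL_RATE))
        submit_transaction(random.choice(ACCOUNT_IDS))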

David Chadwick, IBM
For Success, Build Record/Playback into Your Application

Stories about failed attempts to automate functional testing are easy to find and have given the record/playback style of test automation a black eye. Is this approach fundamentally flawed, or can the business benefits of automated testing be realized through recorded tests? The flaw in most commercial record/playback tools is that they are intended for use with existing applications that were not designed for testability. The tools can interact with the application only through the user interface, making execution slow and flaky because user interfaces make terrible machine interfaces. Gerard Meszaros introduces the concept of designing testability and test recording capabilities directly into the application. This approach allows automated tests to interact with the application through a programming API, making them much more robust.
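
A minimal Python sketch of the idea, with all class and method names invented; a real application would expose its own domain API:

    # Tests drive the application through an API; every call is recorded
    # and can be replayed against a fresh instance, bypassing the UI.
    class OrderApp:
        def __init__(self):
            self.recording = []
            self.orders = {}

        def _record(self, action, *args):
            self.recording.append((action, args))

        def create_order(self, order_id, item):
            self._record("create_order", order_id, item)
            self.orders[order_id] = item

        def cancel_order(self, order_id):
            self._record("cancel_order", order_id)
            del self.orders[order_id]

        def replay(self, recording):
            for action, args in recording:
                getattr(self, action)(*args)

    app = OrderApp()                 # record a session through the API
    app.create_order(1, "widget")
    app.cancel_order(1)

    fresh = OrderApp()               # play it back, no UI involved
    fresh.replay(app.recording)
    assert fresh.orders == app.orders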

Gerard Meszaros, ClearStream Consulting
Monty Python's Flying Test Lab

And now for something completely different ... Monty Python's Flying Circus revolutionized comedy and brought zany British humor to a worldwide audience. However, buried deep in the hilarity and camouflaged in its twisted wit lie many important testing lessons: tips and techniques you can apply to real-world problems to deal with turbulent projects, changing requirements, and stubborn project stakeholders. Rob Sabourin examines some of the most famous Python bits: "The Spanish Inquisition" telling us to expect the unexpected, "The Dead Parrot" asking if we should really deliver this product to the customer, "The Argument" teaching us about bug advocacy, "Self Defense Against Fresh Fruit" demonstrating the need to pick the right testing tool, and a host of other goofy gags, each one with a lesson for testers.

  • How to test effectively with persistence
  • Make your point with effective communication
Robert Sabourin, AmiBug.com Inc
The ROI of Testing

In today's competitive business environment, corporations need and demand a good return on investment (ROI) for everything they do, and testing is no exception. Although executive managers are requesting meaningful metrics more often than ever, many test managers struggle to justify the cost versus the benefit of their departments' work. Often these test managers are unsure how to calculate investment costs versus dollars saved when using solid QA and testing methodologies. Test managers need business tools and techniques to provide this business-critical information, not only to satisfy upper management but also to ensure their departments are indeed making a positive contribution. Shaun Bradshaw demonstrates the tangible benefits of testing across the software development lifecycle by defining ROI in the context of software testing and defect prevention.
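
The underlying arithmetic is the standard ROI formula; here is a minimal Python sketch with every figure invented for illustration:

    # ROI = (savings - cost) / cost, applied to a testing budget.
    testing_cost = 200_000            # staff, tools, environments
    defects_prevented = 50            # escapes avoided, per defect data
    cost_per_escaped_defect = 8_000   # support, rework, customer impact

    savings = defects_prevented * cost_per_escaped_defect    # 400,000
    roi = (savings - testing_cost) / testing_cost            # 1.0 -> 100%
    print(f"ROI = ({savings:,} - {testing_cost:,}) / {testing_cost:,}"
          f" = {roi:.0%}")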

Shaun Bradshaw, Questcon Technologies, A Division of Howard Systems Intl.
