Conference Presentations

Implementing Agile Testing

Once the company decides to move to an agile development methodology, questions invariably arise: How should we implement this methodology? What are the expected benefits and pitfalls? How does testing fit into this new approach? Join Robert Reff as he describes real-world experiences that helped his test team move from the design-code-test approach to a test-driven, agile development philosophy. Robert offers concrete advice on how to integrate testing, what testing activities to include or drop, and what to expect from both automation and exploratory testing. He describes possible practices, focus areas, and pitfalls, rather than the all-or-nothing approach often recommended by well-meaning experts.

Robert Reff, Thomson Reuters
Executable Specifications with FitNesse and Selenium

Dawn Cannan, DocSite LLC
Test Automation Success: Choosing the Right People and Process

Many testing organizations mistakenly declare success when they first introduce test automation into an application or system. However, the true measure of success is sustaining and growing the automation suite over time. You need to develop and implement a flexible process and engage knowledgeable testers and automation engineers. Kiran Pyneni describes Aetna's two-team automation structure, the functions each group performs, and how their collaborative efforts make test automation more efficient. Kiran explains how to seamlessly integrate your test automation lifecycle with your software development lifecycle. He shares specific details on how Aetna's automation lifecycle benefits their entire IT department and organization, and the measurements they use to track and report progress.

Kiran Pyneni, Aetna, Inc.
STAREAST 2010: Testing AJAX: What Does It Take?

Using AJAX technologies, Web 2.0 applications execute much of the application functionality directly in the browser. While creating a richer user experience, these technologies pose significant new challenges for testers. Joachim Herschmann describes the factors that are critical in testing Web 2.0 applications and what it takes to master these challenges. After presenting an overview of typical Web 2.0 application technologies, Joachim explains why object recognition, synchronization, and speed are the pillars of a truly robust and reliable AJAX test automation approach. He shows how to architect testability directly into AJAX applications, including examples of how to instrument applications to provide the data that testing tools require. Joachim shares experiences from Micro Focus's Linz development lab and describes how the team overcame the challenges of testing their modern AJAX applications.
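The synchronization pillar in particular trips up many teams: a script that clicks and immediately asserts will race the browser's asynchronous update. As a rough illustration only (not the presenter's own tooling), the sketch below uses Selenium's Python bindings with an explicit wait instead of a fixed sleep; the URL and element IDs are hypothetical.

```python
# Minimal sketch: synchronizing a test with an AJAX-driven page update.
# The URL and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/orders")               # hypothetical AJAX-heavy page
    driver.find_element(By.ID, "refresh-button").click()   # triggers an asynchronous call

    # Synchronization: wait for the updated element instead of sleeping a fixed time.
    wait = WebDriverWait(driver, timeout=10)
    status = wait.until(
        EC.visibility_of_element_located((By.ID, "order-status"))
    )
    assert "Shipped" in status.text
finally:
    driver.quit()
```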

Joachim Herschmann, Borland (a Micro Focus company)
Using Test Automation Frameworks

As you embark on implementing or improving automation within your testing process, you'll want to avoid the "Just Do It" attitude some have taken. Perhaps you've heard the term "test automation framework" and wondered what it means, what it does for testing, and if you need one. Andrew Pollner, who has developed automated testing frameworks for more than fifteen years, outlines how frameworks have grown up around test automation tools. Regardless of which automation tool you use, the concepts of a framework are similar. Andrew answers many of your questions: Why build a framework? What benefit does it provide? What does it cost to build a framework? What ROI can I expect when using a framework? Explore the different approaches to framework development and identify problems to watch out for to ensure the approach you take will provide years of productivity.
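To make the framework idea concrete, here is a deliberately tiny, hypothetical sketch of the core of a keyword-driven framework: test steps are plain data, and a dispatcher maps each keyword to an implementation function. The keywords and actions are invented for illustration and are not tied to any particular tool Andrew discusses.

```python
# Minimal sketch of a keyword-driven framework core: test steps are data
# (keyword plus arguments) that a dispatcher maps to implementation functions.
# Keyword names and actions are illustrative only.

def open_app(url):
    print(f"opening {url}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def verify_text(field, expected):
    print(f"checking that {field} shows '{expected}'")

KEYWORDS = {
    "open app": open_app,
    "enter text": enter_text,
    "verify text": verify_text,
}

def run_test(steps):
    """Execute a list of (keyword, *args) tuples against the keyword table."""
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)

run_test([
    ("open app", "https://example.com/login"),
    ("enter text", "username", "qa_user"),
    ("verify text", "welcome banner", "Hello, qa_user"),
])
```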

Andrew Pollner, ALP International Corp
Automated Test Design: Its Time Has Come

With model-based test design, you first create a high-level functional model of the system to be tested. The model is the input to an automated test generation tool that creates the test designs and associated test scripts. Recently available commercial automated test generation tools are making automated test generation a practical and powerful alternative to manual test design. Antti Huima discusses the advances in modeling methods, test generation tools, and the implications for test design productivity, quality, and overall test process efficiency. He explains the differences between “modeling for implementation” and “modeling for test” that make model-based test design applicable to both new and existing systems. Antti uses real-world examples to demonstrate automated test design tools and presents the results achieved deploying these tools.
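As a rough, hand-rolled illustration of the idea (not Conformiq's tooling), the sketch below models a login feature as a small state machine and derives test sequences that together cover every transition; the states, actions, and coverage criterion are all hypothetical.

```python
from collections import deque

# Toy model of a login feature: state -> [(action, next_state), ...].
# States, actions, and the coverage criterion are invented for illustration.
MODEL = {
    "logged_out":  [("enter_valid_credentials", "logged_in"),
                    ("enter_bad_credentials", "error_shown")],
    "error_shown": [("retry_valid_credentials", "logged_in")],
    "logged_in":   [("log_out", "logged_out")],
}

def generate_tests(start, max_depth=4):
    """Derive action sequences that together exercise every model transition."""
    tests, covered = [], set()
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if len(path) >= max_depth:
            continue
        for action, next_state in MODEL.get(state, []):
            new_path = path + [action]
            if (state, action) not in covered:    # first time this transition is reached
                covered.add((state, action))
                tests.append(new_path)            # shortest path to it becomes a test
            queue.append((next_state, new_path))
    return tests

for i, test in enumerate(generate_tests("logged_out"), 1):
    print(f"Test {i}: " + " -> ".join(test))
```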

Antti Huima, Conformiq Inc.
Maximize Your Investment in Automation Tools

Experience has shown that many organizations attempt to automate their testing processes without effective vision, planning, and follow-through. As a result, within a year or two, test automation efforts are declared worthless and the tools are moved to the shelf. By creating a centralized team with domain expertise and identifying specific test automation needs, Intuit is able to build, deploy, and test products using a common set of tools, processes, and methodologies they call Autolab. Shoba Raj describes how the Small Business Group at Intuit maximizes its return on investment by utilizing the Autolab. She explores the benefits of time savings, capital cost savings, quality improvements, product health checks, and tool license fee aggregation. Learn how to build a centralized testing team and create your own Autolab that supports your organization with standard tools, test environments, and processes.

Shoba Raj, Intuit, Inc.
Choosing the Right Test Cases for Automation

With hopes of reducing testing cost and effort, companies often look to test automation as the cure-all for their problems. However, without clear and practical objectives, a test automation project is bound to fail. One key factor in setting automation objectives is identifying which test cases should be automated and which should remain manual processes. Pradeep Kumar describes a practical methodology for identifying the best test cases as candidates for automation. His nine-point decision tree process for selecting test cases examines technical feasibility, execution frequency, component reusability, criticality, effort required for automation, total resource requirements, test case complexity, portability, and execution time. Discover how to achieve a significant return on your automation investment by creating test scripts that are amenable to easy execution and reuse.
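One simple way such criteria could be operationalized, shown here purely as a hypothetical sketch, is a weighted scoring heuristic over the nine factors rather than the presenter's actual decision tree; the weights, rating scale, and threshold below are invented for illustration.

```python
# Hypothetical sketch: scoring a test case against the nine selection criteria
# named in the abstract. Weights, the 1-5 rating scale, and the threshold are
# invented; the presenter's actual decision tree may work quite differently.

CRITERIA_WEIGHTS = {
    "technical_feasibility": 3,   # can the tool drive this scenario at all?
    "execution_frequency": 3,     # how often the test runs per release
    "component_reusability": 2,   # steps/objects shared with other tests
    "criticality": 2,             # business impact of the covered feature
    "automation_effort": -2,      # scripting effort (higher rating = worse)
    "resource_requirements": -1,  # environments, data, licenses needed
    "complexity": -1,             # branching and verification complexity
    "portability": 1,             # runs across browsers/OS/environments
    "execution_time": 1,          # manual run time saved per execution
}

def automation_score(ratings):
    """Weighted sum of 1-5 ratings, one per criterion."""
    return sum(CRITERIA_WEIGHTS[name] * ratings[name] for name in CRITERIA_WEIGHTS)

def should_automate(ratings, threshold=20):
    return automation_score(ratings) >= threshold

login_regression = {
    "technical_feasibility": 5, "execution_frequency": 5,
    "component_reusability": 4, "criticality": 5,
    "automation_effort": 2, "resource_requirements": 2,
    "complexity": 2, "portability": 4, "execution_time": 3,
}
print(automation_score(login_regression), should_automate(login_regression))
```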

Pradeep Kumar, Cognizant Technology Solutions
STARWEST 2009: Seven Habits of Highly Effective Automation Testers

In many organizations, test automation is becoming a specialized career path. Mukesh Mulchandani and Krishna Iyer identify seven habits of highly effective automation specialists and compare them with Stephen Covey's classic "Seven Habits of Highly Effective People." Mukesh and Krishna not only describe behavior patterns of effective automation testers but also discuss how to internalize these patterns so that you use them instinctively. Drawing on their experience managing large test automation projects for financial applications, they describe obvious habits such as saving and reusing tests. They then describe the uncommon but essential habits of strategizing, seeking, simplifying, selling, and communicating. Learn how to avoid the bad habits that automation test novices, and even experts, may subconsciously adopt.

Mukesh Mulchandani, ZenTEST Labs
Test Automation Objectives

Test automation efforts frequently fail because of unrealistic expectations, often the result of choosing poor objectives for automation. Dorothy Graham explains the pitfalls of a number of commonly held objectives for automation and describes the characteristics of good automation objectives. These objectives seem sensible at first and are common in organizations: find more bugs, run regression tests overnight and on weekends, reduce testing staff, reduce elapsed time for testing, and automate x% of the testing. Finding more bugs is a good objective for testing, but not for automation, especially automation of regression tests. Running tests outside working hours is only worth doing if the tests are worth running. Reducing testing staff is a management issue, not an automation objective; in the majority of cases, more staff is needed, not less!

Dorothy Graham, Consultant
