Conference Presentations

Top Ten Non-Technical Skills of the Better Tester

In the era of SOA and Web 2.0, as it becomes more and more difficult to accomplish comprehensive testing, Krishna Iyer and Mukesh Mulchandani describe ten non-technical skills that will make you a better tester. The first five are qualities we often look for in testers yet seldom practice scientifically and diligently: collaboration, creativity, experimentation, passion, and alertness. The second five are abilities that are seldom mentioned yet equally important for testers: connect the dots, challenge the orthodox, picture and predict, prioritize, and leave work at work. Drawing from their experience of building a testing team for their organization and consulting with global firms on building "testing capability," Krishna and Mukesh show how you and your test team can improve each of these ten non-technical skills. Practice these skills during the session and take back techniques you can use to hone your skills at work.

Krishna Iyer, ZenTEST Labs
Test Automation Techniques for Dynamic and Data Intensive Systems

If you think you're doing everything right with test automation but it just won't scale, join the crowd. If the amount of data you're managing and the dynamic changes in applications and workflows keep you in constant maintenance mode, this is the session for you. Encountering these problems, Chris Condron's group reviewed their existing automation successes and pain points. Based on this analysis, they created a tool-agnostic architecture and automation process that allowed them to scale up their automation to include many more tests. By aligning their test scripts with the business processes, his team developed a single test case model they use for both manual and automated tests. They developed a test data management system incorporating storage of, and a review process for, three types of test data: scenarios, screen mappings, and references.
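
The session presents the team's own architecture in detail. Purely as a rough sketch of the shape such a model can take, the following Python fragment (all names hypothetical, not Condron's implementation) separates scenario data, screen mappings, and reference data from the script logic, so any driving tool can consume the same case:

    # Illustrative sketch only: the class and field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ScreenMapping:
        """Maps logical field names to tool-specific locators."""
        screen: str
        fields: dict          # e.g. {"policy_number": "id=txtPolicyNo"}

    @dataclass
    class TestCase:
        """One business-process-aligned case, usable manually or by a tool."""
        name: str
        steps: list           # readable business steps, e.g. "Enter policy data"
        scenario: dict        # scenario data: input values for this run
        references: dict      # reference data: expected results to check

    def run(case: TestCase, mappings: dict, driver):
        """Replay the case; 'driver' wraps whatever automation tool is in use."""
        for step in case.steps:
            driver.execute(step, case.scenario, mappings)
        for name, expected in case.references.items():
            assert driver.read(name, mappings) == expected, name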

Chris Condron, The Hanover Insurance Group
Exploratory Testing: The Next Generation

Exploratory testing is sometimes associated with "ad hoc" testing, randomly navigating through an application. However, emerging exploratory techniques are anything but ad hoc. David Gorena Elizondo describes new approaches to exploratory testing that are highly effective, very efficient, and supported by automation. David describes the information testers need for exploration, explains how to gather that information, and shows you how to use it to find more bugs and find them faster. He demonstrates a faster, directed (not accidental) exploratory bug-finding methodology and compares it to more commonly used approaches. Learn how test history and prior test cases guide exploratory testers; how to use data types, value ranges, and other code summary information to populate test cases; how to optimize record and playback tools during exploratory testing; and how exploratory testing can impact churn, coverage, and other metrics.
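
The session demonstrates Microsoft's own tooling. As a hedged illustration of the underlying idea only, the sketch below (hypothetical names throughout) derives exploratory inputs from code-summary facts such as parameter types and value ranges:

    # Illustrative sketch, not the tooling shown in the session: derive
    # exploratory inputs from parameter types and value ranges.
    import random

    def candidate_values(param_type, lo=None, hi=None):
        """Boundary plus random values for one parameter, guided by its range."""
        if param_type is int:
            return [lo, lo + 1, hi - 1, hi, random.randint(lo, hi)]
        if param_type is str:
            return ["", "a", "A" * 256, "<script>", "'; --"]
        return [None]

    def explore(func, specs):
        """Feed each interesting value to the function and flag exceptions."""
        for spec in specs:
            for value in candidate_values(*spec):
                try:
                    func(value)
                except Exception as exc:
                    print(f"bug candidate: {func.__name__}({value!r}) -> {exc}")

    # Usage, assuming a hypothetical set_retry_count(n) under test:
    # explore(set_retry_count, [(int, 0, 100)])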

David Elizondo, Microsoft Corporation
STARWEST 2008: Test Estimation: Painful or Painless?

As an experienced test manager, Lloyd Roden believes that test estimation is one of the most difficult aspects of test management. You must deal with many unknowns, including dependencies on development activities and the variable quality of the software you test. Lloyd presents seven proven ways he has used to estimate test effort. Some are easy and quick but prone to abuse; others are more detailed and complex but may be more accurate. Lloyd discusses FIA (finger in the air), formula/percentage, historical reference, Parkinson's Law vs. pricing, work breakdown structures, estimation models, and assessment estimation. He shares spreadsheet templates and utilities that you can take back and use to improve your estimates. By the end of this session, you might just be thinking that the once painful experience of test estimation can, in fact, be painless.
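
As a taste of the formula/percentage method, here is a one-line worked example; the 35 percent ratio is a placeholder, since the method depends on your own historical figures:

    # Worked example of the formula/percentage method; the ratio is a
    # placeholder, to be replaced with your organization's historical data.
    dev_effort_days = 120                  # estimated development effort
    historical_test_ratio = 0.35           # testing has run ~35% of dev effort
    test_effort_days = dev_effort_days * historical_test_ratio
    print(f"Estimated test effort: {test_effort_days:.0f} days")   # 42 days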

Lloyd Roden, Grove Consultants
STARWEST 2008: The Case Against Test Cases

A test case is a kind of container. You already know that counting the containers in a supermarket would tell you little about the value of the food they contain. So, why do we count test cases executed as a measure of testing's value? The impact and value a test case actually has varies greatly from one to the next. In many cases, the percentage of test cases passing or failing reveals nothing about the reliability or quality of the software under test. Managers and other non-testers love test cases because they provide the illusion of both control and value for money spent. However, that doesn't mean testers have to go along with the deceit. James Bach stopped managing testing using test cases long ago and switched to test activities, test sessions, risk areas, and coverage areas to measure the value of his testing. Join James as he explains how you can make the switch, and why you should.
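
For readers new to session-based measurement, a hedged sketch of what one session record might capture follows; the field names are illustrative, not Bach's actual session-sheet format:

    # Sketch of a session record in the spirit of session-based test
    # management; field names are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class TestSession:
        charter: str             # the mission, e.g. "Explore bad CSV imports"
        coverage_areas: list     # product areas the session touched
        risk_areas: list         # risks the session targeted
        duration_min: int = 90   # time actually spent testing
        bugs: list = field(default_factory=list)
        notes: str = ""

    # Reporting then aggregates sessions per coverage or risk area rather
    # than counting test cases passed and failed.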

James Bach, Satisfice, Inc.
Automating API White-box Tests with Windows PowerShell

Although a myriad of testing tools have emerged over the years, only a few focus on the area of API testing for Windows-based applications. Nikhil Bhandari describes how to automate these types of software tests with Windows PowerShell, the free command line shell and scripting language. Unlike other scripting shells, PowerShell works with WMI, XML, ADO, COM, and .NET objects as well as data stores, such as the file system, registry, and certificates. With PowerShell, you can easily develop frameworks for testing (unit, functional, regression, performance, deployment, etc.) and integrate them into a single, consistent overall automation environment. With PowerShell, you can develop scripts to check logs, events, process status, the registry, the file system, and more. Use it to parse XML statements and other test files.
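
As a hedged sketch of wiring one such check into a broader automation environment, the fragment below shells out to the real Get-Process and Measure-Object cmdlets from a Python test harness; the harness language and the process name under test ("notepad") are illustrative assumptions, not part of the session:

    # Sketch only: drive a PowerShell check from a test harness.
    # Assumes Windows with powershell.exe on PATH; "notepad" is a placeholder.
    import subprocess

    def ps(command: str) -> str:
        """Run a PowerShell command and return its standard output."""
        result = subprocess.run(
            ["powershell.exe", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    def test_process_running():
        count = int(ps("(Get-Process -Name notepad | Measure-Object).Count"))
        assert count >= 1, "expected at least one notepad process"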

Nikhil Bhandari, Intuit
STARWEST 2008: Understanding Test Coverage

Test coverage of application functionality is often poorly understood and always hard to measure. If they do it at all, many testers express coverage in terms of numbers, as a percentage or proportion, but a percentage of what? When we test, we develop two parallel stories. The "product story" is what we know and can infer about the software product: important information about how it works and how it might fail. The "testing story" is how we modeled the testing space, the oracles that we used, and the extent to which we configured, operated, observed, and evaluated the product. To understand test coverage, we must know what we did not test and whether what we did test was tested well enough.

Michael Bolton, DevelopSense
Great Test Teams Don't Just Happen

Test teams are just groups of people who work on projects together. But how do test teams become great? More importantly, how can you lead your team to greatness? Jane Fraser describes the changes she made after several people on her testing staff asked to move out of testing and into other groups (production and engineering) and how helping them has improved the whole team and made Jane a much better leader. Join Jane as she shares her team's journey toward greatness. She started by getting to really know the people on the team: what makes them tick, how they react to situations, what excites them, and what makes them feel good and bad. She discovered the questions to ask and the behaviors to observe that will give you the insight you need to lead.

Jane Fraser, Electronic Arts
Fun with Regulated Testing

Does your test process need to pass regulatory audits (FDA, SOX, ISO, etc.)? Do you find that an endless queue of documentation and maintenance is choking your ability to do actual testing? Is your team losing good testers due to boredom? With the right methods and attitude, you can do interesting and valuable testing while passing a process audit with flying colors. It may be easier than you think to incorporate exploratory techniques, test automation, test management tools, and iterative test design into your regulated process. You'll be able to find better bugs more quickly and keep those pesky auditors happy at the same time. John McConda shares how he uses exploratory testing with screen recording tools to produce the objective evidence auditors crave. He explains how to optimize your test management tools to preserve and confidently present accountability and traceability data.

John McConda, Mobius Test Labs
Adventures with Test Monkeys

Most test automation focuses on regression testing: repeating the same sequence of tests to reveal unexpected behavior. Despite its many advantages, this traditional test automation approach has limitations and often misses serious defects in the software. John Fodeh describes "test monkeys," automated testing that employs random inputs to exercise the software under test. Unlike regression test suites, test monkeys explore the software in a new way each time a test executes and offer the promise of finding new and different types of defects. The good news is that test monkey automation is easy to develop and maintain and can be used early in development, before the software is stable. Join John to discover different approaches you can take to implement test monkeys, depending on the desired "intelligence" level.
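
As a flavor of the simplest ("dumb") end of that intelligence spectrum, here is a minimal monkey sketch; the app.perform interface is an assumption for illustration, not Fodeh's implementation:

    # Minimal "dumb monkey" sketch; illustrative only.
    import random

    ACTIONS = ["click", "type", "scroll", "back"]   # hypothetical UI operations

    def monkey(app, iterations=10_000, seed=None):
        """Fire random operations at the app; a seed makes crashes reproducible."""
        rng = random.Random(seed)
        for i in range(iterations):
            action = rng.choice(ACTIONS)
            try:
                app.perform(action, rng.random())   # 'app.perform' is assumed
            except Exception as exc:
                print(f"step {i}: {action!r} crashed the app: {exc}")
                break

    # A "smarter" monkey would weight ACTIONS using a model of the UI's state.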

John Fodeh, Hewlett-Packard
