Conference Presentations

Model-based Testing: The Next Generation

Spotify is a music streaming service offering high-quality, instant access to music from a range of major and independent record labels. Model-based testing (MBT) is an important technique Spotify uses to ensure that its systems deliver quality service, and the company has discovered new ways to use MBT for effective testing in support of its user base of more than ten million. Alexander Andelkovic shares the challenges of implementing and integrating new MBT solutions and of convincing company management that MBT is both efficient and effective. Explore the choice Spotify made between buying and building an advanced MBT tool, the benefits of using MBT in new ways, and the increased visibility that comes with improved quality. Whether or not your organization employs automated testing, Alexander shows you how to successfully integrate advanced MBT techniques with traditional test methods.
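
To make the technique concrete, here is a minimal sketch of the core MBT idea, assuming a hypothetical music-player state machine; it illustrates model-based testing in general, not Spotify's actual tool. The model describes legal states and transitions, and test sequences are generated from it rather than written by hand.

```python
# A minimal model-based testing sketch (hypothetical example, not
# Spotify's tool): the system under test is described as a state
# machine, and test sequences are generated by walking its transitions.

# Model: each (state, action) pair maps to the expected next state.
MODEL = {
    ("stopped", "play"):  "playing",
    ("playing", "pause"): "paused",
    ("playing", "stop"):  "stopped",
    ("paused",  "play"):  "playing",
    ("paused",  "stop"):  "stopped",
}

def generate_tests(start="stopped", depth=3):
    """Enumerate every action sequence up to `depth` steps as a test case."""
    tests = []
    frontier = [(start, [])]
    for _ in range(depth):
        next_frontier = []
        for state, path in frontier:
            for (s, action), nxt in MODEL.items():
                if s == state:
                    next_frontier.append((nxt, path + [action]))
        tests += [path for _, path in next_frontier]
        frontier = next_frontier
    return tests

# In a real MBT setup, each generated sequence would be executed against
# the system and its observed state compared with the model's prediction.
for test in generate_tests():
    print(" -> ".join(test))
```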

Alexander Andelkovic, Spotify
Structural Testing: When Quality Matters

Jamie Mitchell explores an underused and often forgotten test type: white-box testing. Also known as structural testing, white-box techniques require some programming expertise and access to the code. Using only black-box testing, you could easily ship a system having exercised 50 percent or less of the code base. Are you comfortable with that? For mission-critical systems, such low code coverage is clearly insufficient. Although you might believe that the developers have performed sufficient unit and integration testing, how do you know that they have achieved the level of coverage your project requires? Jamie describes the levels of code coverage that the business and your customers may need, from statement coverage to modified condition/decision coverage (MC/DC). He explains when you should strive for different coverage targets and leads you through pseudocode examples.
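
As a concrete illustration of how these criteria differ (a toy example, not taken from the talk), consider a function containing a single decision with two conditions:

```python
# Illustrative example of coverage criteria. The function has one
# decision made of two conditions, A (is_member) and B (total > 100).

def grant_discount(is_member: bool, total: float) -> float:
    if is_member and total > 100:   # decision with conditions A and B
        return total * 0.9
    return total

# Statement coverage: every statement runs -- one test taking the True
# branch and one taking the False branch suffice, e.g. (True, 150) and
# (False, 50).
#
# Decision (branch) coverage: the decision must evaluate both True and
# False; the same two tests happen to achieve this here.
#
# MC/DC: each condition must be shown to independently change the
# outcome, which needs three tests for this two-condition decision:
assert grant_discount(True, 150) < 150    # A=T, B=T -> discount applied
assert grant_discount(False, 150) == 150  # flipping A alone flips the decision
assert grant_discount(True, 50) == 50     # flipping B alone flips the decision
# The gap between criteria grows quickly as decisions get more complex,
# which is why MC/DC is typically reserved for mission-critical code.
```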

Jamie Mitchell, Jamie Mitchell Consulting
Managing Test Data in Large and Complex Web-based Systems

Are you testing an application or web site whose complexity has grown exponentially through the years? Is your test data efficiently and effectively supporting your test suites? Does the test data reside in systems not under your direct control? Learn how the WellsFargo.com test team integrated test data management processes and provisioning to gain control over test data in their very large and complex web system environment. Join Ron Schioldager to explore the lifecycle of data, its relationship to effective testing, and how you can develop conditioned, trusted, and comprehensive test data for your systems. Learn about the tools Wells Fargo developed and employs today to support their test data management process, enabling them to shorten the data maintenance cycle while improving test reliability.
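
As one small, hypothetical illustration of what conditioned test data can involve (Wells Fargo's actual tools are not public and are not reproduced here), sensitive fields might be deterministically masked so that production-shaped data is safe to use in test environments:

```python
# A minimal sketch of one step in a test data management pipeline:
# masking sensitive fields before data is promoted into a test
# environment. Illustrative only; not Wells Fargo's actual tooling.

import hashlib

SENSITIVE_FIELDS = {"ssn", "account_number"}

def condition_record(record: dict) -> dict:
    """Return a copy of a production record that is safe for test use."""
    conditioned = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            # Deterministic masking preserves referential integrity:
            # the same input always maps to the same masked value.
            digest = hashlib.sha256(str(value).encode()).hexdigest()
            conditioned[field] = digest[:12]
        else:
            conditioned[field] = value
    return conditioned

print(condition_record({"name": "Pat", "ssn": "123-45-6789", "balance": 42.0}))
```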

Ron Schioldager, Wells Fargo
Google's New Methodology for Risk-driven Testing

Risk mitigation and risk analysis are delicious ingredients in a recipe Google calls risk-driven testing. Most of us are familiar with how to approach risk mitigation from a test perspective: test plan development, test cases, and documentation. However, comprehensive risk analysis is still considered black magic by many in our field. In this hands-on presentation, Jason Arbon and Sebastian Schiavone introduce ACC (Attributes, Components, Capabilities), a methodology for systematically breaking down an application into coherent, logically related elements for risk analysis. ACC prescribes an easy-to-follow process that you can apply consistently and quickly to many types of projects. Jason and Sebastian break risk analysis down into seven simple steps and walk participants through the complete ACC and risk analysis process for several high-profile Google products.
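
To suggest the shape an ACC breakdown can take, here is a minimal sketch with invented attributes, components, capabilities, and risk values; the talk's actual seven steps and templates are not reproduced here.

```python
# A sketch of the shape of an ACC breakdown (invented values, not an
# actual Google analysis). Attributes are adjectives the product should
# embody, components are its major parts, and capabilities sit at their
# intersections. Risk = estimated failure frequency x impact.

ATTRIBUTES = ["Secure", "Fast", "Accurate"]
COMPONENTS = ["Login", "Search", "Sync"]

# (attribute, component) -> (capability, frequency 1-5, impact 1-5)
CAPABILITIES = {
    ("Secure",   "Login"):  ("Credentials are never sent in plain text", 2, 5),
    ("Fast",     "Search"): ("Results return in under a second",         4, 3),
    ("Accurate", "Sync"):   ("No edits are lost when devices reconnect", 3, 5),
}

# Rank capabilities by risk so testing effort goes to the riskiest first.
ranked = sorted(CAPABILITIES.items(), key=lambda kv: -(kv[1][1] * kv[1][2]))
for (attr, comp), (capability, freq, impact) in ranked:
    print(f"risk={freq * impact:2d}  {attr}/{comp}: {capability}")
```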

Sebastian Schiavone, Google, Inc.
Fuzzing and Fault Modeling for Product Evaluation

Test environments are often maintained under favorable conditions, providing a stable, reliable configuration for testing. However, once the product is released to production, it is subject to the stresses of low resources; noisy, even malicious, data; unusual users; and much more. Will real-world use destroy the reliability of your software and embarrass your organization? Shmuel Gershon describes fuzzing and fault modeling, techniques he uses to simulate worst-case run-time scenarios in his testing. Fuzzing explores the limits of a system interface with high volumes of random input data. Fault models examine and highlight system resource usage, pointing out potential problems. These techniques help raise important questions about product quality, even for conditions that aren't explicit in the requirements.
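
Here is a minimal sketch of the fuzzing side of the approach, using a hypothetical parse_config function as the system under test:

```python
# A minimal fuzzing sketch: hammer a parser with random byte strings
# and record any input that triggers an unhandled error. The
# parse_config function is a hypothetical system under test.

import random

def parse_config(data: bytes) -> dict:
    """Hypothetical system under test: parses key=value lines."""
    text = data.decode("utf-8")  # may raise on malformed bytes
    return dict(line.split("=", 1) for line in text.splitlines() if line)

random.seed(0)  # reproducible runs make failures easy to replay
failures = []
for _ in range(10_000):
    length = random.randrange(1, 64)
    fuzz = bytes(random.randrange(256) for _ in range(length))
    try:
        parse_config(fuzz)
    except Exception as exc:  # any unhandled error is a finding
        failures.append((fuzz, exc))

print(f"{len(failures)} of 10000 random inputs raised unhandled errors")
```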

Shmuel Gershon, Intel Corporation
Test Execution and Results Analysis: Time for a "Legal" Separation

Generally, testers mix test execution and test analysis: each test case performs its own bit of analysis, comparing actual to expected results for the feature under test. Jacques Durand explains that by declaring and enforcing a "legal" separation between execution and analysis tasks, testers' perspectives automatically change. Rather than focusing only on the actual vs. expected result of a single output, every output in every test becomes fair game for more comprehensive analysis, leading to finding more bugs sooner. With this separation approach, each test suite is split into a set of test scenarios plus a set of logical test assertions. Join Jacques to learn how to leverage XML to format scenario outputs and other analyzer inputs, and how to write executable, declarative test assertions independent of the test scenarios.
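
Here is a minimal sketch of the separation, with assumed XML formats rather than Fujitsu's actual framework: the execution phase only records outputs, and a separate analysis phase runs declarative assertions over the recorded log.

```python
# Sketch of separating execution from analysis (assumed formats, not
# Fujitsu's framework): execution records every output as XML and
# judges nothing; analysis runs declarative assertions over the log.

import xml.etree.ElementTree as ET

# --- Execution phase: run the scenario and record each step's output.
log = ET.Element("testlog")
for step, output in [("login", "ok"), ("add_item", "cart=1"), ("checkout", "error")]:
    entry = ET.SubElement(log, "output", step=step)
    entry.text = output

# --- Analysis phase: assertions written and run independently.
# Each assertion: (description, XPath into the log, predicate over text).
assertions = [
    ("login succeeds",        ".//output[@step='login']",    lambda t: t == "ok"),
    ("checkout has no error", ".//output[@step='checkout']", lambda t: "error" not in t),
]
for desc, xpath, predicate in assertions:
    node = log.find(xpath)
    verdict = "PASS" if node is not None and predicate(node.text) else "FAIL"
    print(f"{verdict}: {desc}")
```

Because the analysis phase sees the whole log rather than one expected value, new assertions can be added later and run against old scenario outputs without re-executing the tests.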

Jacques Durand, Fujitsu Software Corporation
Real-time Test Design for Exploratory Testing

Exploratory testing is a form of unscripted testing that mixes concurrent learning with rapid, iterative test design and test execution. Done well, exploratory testing helps you intentionally and quickly discover the important problems in your software. So, how do you actually design tests on the fly, taking into account the current known risks, input from stakeholders, and the limits on time and resources? Paul Carvalho shares practical models, tips, and guidelines he uses to design exploratory tests. Paul shows you a model that breaks down any system into its basic, testable components, and a model to help you visualize your testing strategy across five important dimensions. He also shares proven tips on how to narrow your test ideas down to the most important ones: those most likely to find significant bugs.

Paul Carvalho, STAQS
Keys to a Successful Beta Testing Program

Your company is ready to launch its new product. How will it perform under real-world conditions? Will it meet the needs and expectations of the users? Will it operate on all the platforms and configurations for which it was designed? With the future of the product, your company, and perhaps your job depending on the answers, beta testing is a great way to maximize your chances of success. Beta testing yields empirical evidence of whether your product will meet clients’ expectations, giving you the input needed for course corrections in the product. Rob Swoboda explains the process of beta testing as well as the key concepts needed to plan, execute, and evaluate a successful beta testing effort. Rob shares his insights into the practices he employs to design and manage high-priority beta test efforts and offers you the keys to success in your own beta test program.

Rob Swoboda, HomeShore Solutions
Focusing Test Efforts with System Usage Patterns

Faced with the reality of tight deadlines and limited resources, many software delivery teams turn to risk-based test planning to ensure that the most critical components of the software are production ready. Although this strategy can prove effective, it is only as good as the underlying risk analysis. Unfortunately, understanding where risk lies within a product is difficult, and the analysis often results in little more than an “educated guess.” Such risk-based testing exercises can lead to uneven test coverage and the uneasy feeling that the team has neglected to test what really matters. Dan Craig describes how to employ system usage patterns and production defect reports to identify the real risks in a system.
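
As a rough sketch of how such an analysis might look (hypothetical numbers, not data from the talk), usage counts and production defect counts can be combined into a simple risk ranking:

```python
# Sketch: combine production usage data with production defect reports
# to rank features by real-world risk instead of gut feel.
# All feature names and counts below are invented for illustration.

usage = {"search": 90_000, "checkout": 20_000, "wishlist": 1_500}  # hits/day
defects = {"search": 2, "checkout": 9, "wishlist": 1}              # prod bugs

# Simple risk score: how often a feature is used x how often it has broken.
risk = {feature: usage[feature] * defects.get(feature, 0) for feature in usage}

# Features at the top of this list get test attention first.
for feature, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{feature:10s} risk={score:>8,}")
```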

Dan Craig, Coveros, Inc.
Testing Lessons Learned from the Great Detectives

Robert Sabourin shares what the great detectives have taught him about testing.

Robert Sabourin, AmiBug.com
