Conference Presentations

Total Quality Assurance: Stepping Out of the Testing Box

In the QA/testing world we tend to focus on improving quality by altering or creating new software development models and processes, and implementing tools to better manage them. In this way, we often put ourselves into a testing box where quality becomes quantified by "Were the requirements met?" and "Does the solution work as expected?" No matter what model is used (waterfall, agile, spiral, or iterative), the box still exists. Bryan Sebring examines four specific areas to focus on to help you get out of the testing box and increase total quality in software: your business, your staff, your customers, and, ultimately, yourself. Learn to identify relationships among these areas and how they can positively or negatively impact quality. Explore how a decision in one area impacts the other three as you discover opportunities for improvement in all four areas to increase your product's total quality.

Bryan Sebring, Georgia Department of Transportation
Artful Testing: Learning from the Arts

At first glance, art and testing may seem like an odd couple. However, Glenford Myers combined both in his book, The Art of Software Testing, though his "art" referred only to skill and mastery. More recently, Robert Austin and Lee Devin published Artful Making, which relates software development to the creation of a work of art. These authors inspired Zeger Van Hese to consider the idea of artful testing. Zeger investigates what happens when we combine and infuse testing with aesthetics. With some surprising examples, Zeger shows how the fine arts can support and complement our testing efforts. For instance, the tools art critics use for their critiques are valuable additions to the tester's toolbox, enabling testers to become more professional software critics.

Zeger Van Hese, CTG
Testing in the Mobile Environment: A Functional View

Everyone is talking about mobile testing these days. Testers are focused on the vast and growing number of devices and how to do testing right in this fast-changing technology. Take a deep dive into functional testing for mobile devices with Karen Johnson, who has worked with multiple clients on mobile testing for the past two years. Karen reviews test considerations for search, currency, multilingual settings, and more. In addition, she explores special user interface considerations for mobile devices (custom controls, links, video, and site maps) and discusses testing passwords, download settings, app permissions, and other security concerns that are often overlooked. Learn what you need to know to start testing in the mobile environment as Karen shares her experiences and gets you on the road to mobile testing.

Karen Johnson, Software Test Management, Inc.
Automated Software Testing for Embedded Systems

Test automation for an embedded system presents a unique set of challenges. For starters, it requires a specialized set of automation tools that may be expensive or hard to come by. In addition, because embedded systems involve an amalgamation of hardware and software, you'll need a specialized tester-to-controller interface to drive the tests. Join David Palm to learn about the "gotchas" posed by test automation on embedded systems. He'll discuss race conditions and how you can craft your automation to detect them. David will explain why embedded systems are so vulnerable to initialization problems and what to do about them. Find out why testing extreme values and endpoints presents a real obstacle in many embedded systems.
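
The abstract itself contains no code, but as a rough illustration of crafting automation to flush out race conditions, the Python sketch below repeats the same stimuli while sweeping the delay between commands and flags any variation in the observed outcome. The DeviceController class and its command names are hypothetical stand-ins for whatever tester-to-controller interface your rig exposes.

    import time

    class DeviceController:
        """Hypothetical tester-to-controller interface; replace with your rig's API."""
        def reset(self): ...
        def send(self, command): ...
        def read_status(self): ...

    def probe_for_race(controller, iterations=200):
        """Repeat the same two stimuli while sweeping the inter-command delay.

        A race-free system should report the same status every time; more than
        one distinct status across runs points to a timing-dependent defect.
        """
        observed = {}
        for i in range(iterations):
            controller.reset()
            controller.send("START_SAMPLING")
            time.sleep((i % 20) / 1000.0)   # vary the gap from 0 to ~19 ms
            controller.send("CHANGE_MODE")
            status = controller.read_status()
            observed[status] = observed.get(status, 0) + 1
        return observed   # multiple keys suggest a race worth investigating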

David Palm, Trane
Enhancing Collaboration through Acceptance Tests

Even though acceptance testing principles and tools are common today, teams often stumble during implementation. In the worst cases, acceptance tests start to feel like a burden rather than a boon. Paul Nelson guides you through common acceptance testing pitfalls and provides practical, "tested" solutions to keep your acceptance testing efforts on track. Starting with a typical example, Paul walks you through important principles that focus on collaboration with the business: getting the words right, managing the level of detail, isolating dependencies, and refactoring in safe steps. Paul explores common abstraction patterns and demonstrates examples using Cucumber, though the principles apply equally to other tools. Leave with renewed confidence in your ability to maintain control of your acceptance tests and make them the collaboration tool they should be.
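
The session demonstrates these principles with Cucumber; as a tool-neutral, purely illustrative sketch of "getting the words right" while isolating detail, the hypothetical Python example below keeps the test phrased in business language and pushes the mechanics into a thin application driver.

    class OrderingDriver:
        """Hypothetical application driver: keeps UI/API mechanics out of the test wording."""
        def __init__(self):
            self._orders = {}

        def place_order(self, customer, item):
            # A real driver would go through the UI or a service endpoint here.
            self._orders[(customer, item)] = "confirmed"

        def status_of(self, customer, item):
            return self._orders.get((customer, item), "not placed")

    def test_customer_can_order_a_book():
        # The test reads in the business's words; the "how" lives in the driver,
        # so refactoring the UI or API doesn't ripple through every scenario.
        app = OrderingDriver()
        app.place_order("Dana", "Lessons Learned in Software Testing")
        assert app.status_of("Dana", "Lessons Learned in Software Testing") == "confirmed"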

Paul Nelson, ThoughtWorks, Inc.
Test Process Improvement with the TMMi® Model

The Test Maturity Model integration (TMMi®) model, developed to complement the CMMI® framework, is rapidly becoming the test process improvement model of choice in Europe, Asia, and the US. Erik van Veenendaal, one of the developers of TMMi, describes the model's five maturity levels (Initial, Managed, Defined, Management and Measurement, and Optimization) and the key testing practices required at each level. The model's definition of maturity levels provides the basis for standardized TMMi assessments and certification, enabling companies to consistently deploy testing practices and collect industry metrics. The benefits of using the TMMi model include improved testing methods, reduced costs, and higher product quality.

Erik van Veenendaal, Improve Quality Services BV
Best Practices for Implementing Crowdsourced Testing

Global markets, quick time to market, and feature-rich design are major drivers of many products' success. Product companies and businesses with customer-facing systems are constantly on the lookout for innovative development and testing techniques to keep up with these driving forces. One such software testing technique gaining popularity is crowdsourced testing. With its scale, flexibility, cost effectiveness, and fast turnaround, crowdsourcing brings new solutions to many testing problems. Is it a perfect solution for every product company to leverage? Not necessarily. Rajini Padmanaban describes best practices for implementing a crowdsourced test effort. She discusses whether crowdsourcing makes sense for a given product; what, when, and how to crowdsource; what risks exist; and how to mitigate them.

Rajini Padmanaban, QA InfoTech
Performance Testing Earlier in the Software Development Lifecycle

Historically, performance testing has been relegated to simply adding a few weeks at the back end of a project to run a series of pre-scripted tests. The problem with this approach is that the issues performance test engineers uncover late in the project are often too costly to remediate, placing the entire effort at risk. Agile development methodologies can further complicate matters with their ever-changing landscape and frequent lack of focus on performance testing. Eric Gee shares innovative ideas and techniques for how testers can engage as meaningful partners earlier in the software development lifecycle. Eric explores the benefits of partnering with software engineers on unit testing under load, testing at the component level, and other novel approaches you can use for early performance testing. If you are concerned about finding performance problems late in development, this session is for you.
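
As one hedged illustration of what "unit testing under load" can look like, the Python sketch below exercises a single component with many concurrent callers and asserts on a latency budget. The component, worker count, and budget are invented placeholders, not anything prescribed by the session.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def lookup_price(sku):
        """Hypothetical component under test; substitute the real unit."""
        time.sleep(0.002)   # stand-in for the unit's actual work
        return 9.99

    def test_lookup_price_meets_latency_budget_under_load():
        # Hit the single unit with 50 concurrent callers and check that the
        # slowest observed call stays within an agreed budget.
        def timed_call(_):
            start = time.perf_counter()
            lookup_price("SKU-123")
            return time.perf_counter() - start

        with ThreadPoolExecutor(max_workers=50) as pool:
            durations = list(pool.map(timed_call, range(500)))

        assert max(durations) < 0.1, f"slowest call took {max(durations):.3f}s"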

Eric Gee, Raymond James & Associates
Agile Testing: What Would Deming Do?

Through his quality practices, W. Edwards Deming transformed Japanese industry in the 1950s and later American industry, proving that building quality intrinsically into a product dramatically lowers costs. Although agile development brings the software industry into closer alignment with Deming, we often continue to rely on end-of-process inspection and rework to ensure quality. A fresh look at Deming's principles will help testers make a powerful impact on the success of their software projects and organizations. Mark Strange reviews how agile testing differs from traditional software testing, explores what test teams should be measuring (more than defect counts), and shares statistical techniques to help identify problems and bottlenecks in your testing process.
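
The abstract doesn't specify which statistical techniques the talk covers; one classic statistical process control tool that Deming championed is the Shewhart control chart. As a small worked example, the Python sketch below computes c-chart limits (center plus or minus three times the square root of the center) for per-iteration defect counts; the counts themselves are made-up illustrative data.

    import math

    def c_chart_limits(defect_counts):
        """Center line and 3-sigma control limits for a c-chart of defect counts.

        Assumes roughly equal-sized inspection units per point, e.g. escaped
        defects per iteration of similar scope.
        """
        c_bar = sum(defect_counts) / len(defect_counts)
        sigma = math.sqrt(c_bar)
        return c_bar, max(0.0, c_bar - 3 * sigma), c_bar + 3 * sigma

    # Illustrative (made-up) escaped-defect counts for eight iterations.
    counts = [4, 6, 3, 5, 7, 4, 14, 5]
    center, lcl, ucl = c_chart_limits(counts)
    signals = [c for c in counts if c < lcl or c > ucl]
    print(f"center={center:.1f}, limits=({lcl:.1f}, {ucl:.1f}), signals={signals}")

Points outside the limits (here the iteration with 14 defects) signal special-cause variation worth investigating, rather than routine noise in the process.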

Mark Strange, Wood Cliff Consulting
End-to-End Test Automation of ERP Software: A Case Study

Enterprise Resource Planning (ERP) packages, which have become a mainstay in many businesses, increase in complexity with each release. Most testers are turning to automation to ease the burden of testing these packages. Although end-to-end automation tests ERP systems more thoroughly by emulating real-world use and business process flows, creating a proper end-to-end automation framework can be a daunting, complex task. Join test automation experts David Dang and Dave Satterlee for an in-depth look at automation for ERP as they share a case study of SAP at a major manufacturing company. David and Dave discuss the benefits and objectives of end-to-end ERP test automation as well as the challenges and solutions involved in designing an automation framework, devising a test data strategy, and handling integration issues.

David Dang, Zenergy Technologies
