|
User Acceptance Testing in the Testing Center of Excellence
Centralization of testing services into a testing center of excellence (TCoE) for system testing is common in IT shops today. To mature this transformation, the next logical step is to incorporate the user acceptance testing (UAT) function into the TCoE. This poses unique challenges...
|
Deepika Mamnani, Capgemini
|
|
User Acceptance Testing: Make the User a Part of the Team
Adding user acceptance testing (UAT) to your testing lifecycle can increase the probability of finding defects before software is released. The challenge is to fully engage users and assist them in becoming effective testers. Help achieve this goal by involving users early and setting...
|
Susan Bradley, Grange Mutual Insurance
|
|
Specification-by-Example: A Cucumber Implementation
We've all been there. You work incredibly hard to develop a feature and design tests based on written requirements. You build a detailed test plan that aligns the tests with the software and the documented business needs. When you run the tests against the software, it all falls apart because the requirements were updated without informing everyone. But help is at hand. Enter behavior-driven development and Cucumber, a tool for running automated acceptance tests. Join Mary Thorn as she explores the nuances of Cucumber and shows you how to implement specification-by-example, behavior-driven development, and agile acceptance testing. By fostering collaboration on executable requirements expressed in a common language and format, Cucumber bridges the communication gap between business stakeholders and implementation teams.
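For readers new to the tool, a minimal sketch of specification-by-example in Cucumber: a scenario written in business language, bound to Java step definitions via Cucumber-JVM. The feature text, balances, and step wording are invented for illustration, not taken from the talk.

    // Feature file (src/test/resources/withdrawal.feature), shown as a comment:
    //   Feature: Cash withdrawal
    //     Scenario: Successful withdrawal within the balance
    //       Given an account with a balance of 100.00
    //       When the user withdraws 40.00
    //       Then the remaining balance is 60.00

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class WithdrawalSteps {
        private double balance;

        @Given("an account with a balance of {double}")
        public void anAccountWithBalance(double opening) {
            balance = opening;                // arrange: seed the account state
        }

        @When("the user withdraws {double}")
        public void theUserWithdraws(double amount) {
            balance -= amount;                // act: stand-in for the real banking logic
        }

        @Then("the remaining balance is {double}")
        public void theRemainingBalanceIs(double expected) {
            assertEquals(expected, balance, 0.001);  // the example is the requirement
        }
    }

The same scenario text is readable by business stakeholders and executable by the build, which is exactly the communication bridge the abstract describes.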
|
Mary Thorn, Deutsche Bank
|
|
The Why and How of Usability and User Experience (UX) Testing
Although usability and user experience may seem synonymous, they are separate and quite different concepts. While usability is well defined in standards, UX has no agreed-upon definition because it relates to a more nebulous attribute: user satisfaction. Both are, however, key ingredients for successful system deployment. Because they don't know how to measure and evaluate UX, many teams ignore this important attribute until the end of development. Philip Lew discusses how to model both usability and UX by breaking each attribute down into measurable characteristics: learnability, user effectiveness, user efficiency, content quality, user errors, and more. Phil shows you how to derive measurements and metrics that your development and testing teams can employ to benchmark, analyze, and improve both usability and UX.
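As one hedged illustration of how such characteristics become numbers, the sketch below computes effectiveness, efficiency, and error rate from recorded task attempts; the record fields, units, and formulas are plausible stand-ins rather than Phil's actual model.

    import java.util.List;

    /** One recorded attempt at a task during a usability session. */
    record TaskAttempt(boolean completed, double seconds, int errors) {}

    public class UxMetrics {
        /** User effectiveness: fraction of attempts that completed the task. */
        static double effectiveness(List<TaskAttempt> attempts) {
            return attempts.stream().filter(TaskAttempt::completed).count()
                    / (double) attempts.size();
        }

        /** User efficiency: completed tasks per minute of time on task. */
        static double efficiency(List<TaskAttempt> attempts) {
            double minutes = attempts.stream().mapToDouble(TaskAttempt::seconds).sum() / 60.0;
            long completed = attempts.stream().filter(TaskAttempt::completed).count();
            return completed / minutes;
        }

        /** User errors: mean errors per attempt. */
        static double errorRate(List<TaskAttempt> attempts) {
            return attempts.stream().mapToInt(TaskAttempt::errors).average().orElse(0);
        }

        public static void main(String[] args) {
            var session = List.of(new TaskAttempt(true, 95, 0),
                                  new TaskAttempt(false, 240, 3),
                                  new TaskAttempt(true, 130, 1));
            System.out.printf("effectiveness=%.2f efficiency=%.2f/min errors=%.2f%n",
                    effectiveness(session), efficiency(session), errorRate(session));
        }
    }

Benchmarking then means tracking these numbers across releases rather than arguing about whether the product "feels" better.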
|
Philip Lew, XBOSoft
|
|
Ready, Really Ready, and Really Really Ready Stories
Product owners create stories they believe are ready for development. Developers accept and then estimate stories that are not really ready to be started. This disconnect between being “ready” and “really ready” results in miscommunication and frustration. For example, story development can take much longer than the original estimates because of the details and “sad paths” that were not expressed in the story. Ken Pugh describes how to turn vague acceptance criteria into specific acceptance tests. He explains how levels of detail in acceptance tests can help to more closely estimate the effort required by stories and shows how acceptance tests determine when stories are complete. With Ken, you’ll go through creating a “really really ready” story and examine when it should be created and who should participate.
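To make the distinction concrete, here is a hedged sketch of a criterion moved from vague to "really really ready": "discounts apply to large orders" pinned down to exact boundary examples, including a sad path, as a JUnit parameterized test. The thresholds and percentages are invented for illustration.

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DiscountAcceptanceTest {
        // Vague criterion: "discounts apply to large orders."
        // Really-ready version: agreed boundary examples, sad path included.
        @ParameterizedTest
        @CsvSource({
            "99.99, 0.00",    // just under the threshold: no discount
            "100.00, 5.00",   // at the threshold: the 5% tier applies
            "500.00, 50.00"   // large order: the 10% tier applies
        })
        void discountMatchesAgreedExamples(double orderTotal, double expectedDiscount) {
            assertEquals(expectedDiscount, discountFor(orderTotal), 0.001);
        }

        // Stand-in for the production pricing rule under test.
        private double discountFor(double total) {
            if (total >= 500.00) return total * 0.10;
            if (total >= 100.00) return total * 0.05;
            return 0.0;
        }
    }

Estimating against examples like these is easier because every row is visible work the team can size.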
|
Ken Pugh, Net Objectives
|
|
Specification by Example: Building Executable Requirements
Specification by Example is a collaborative approach for constructing executable requirements. Examples demonstrate how the system should operate through the eyes of its users and show understanding of the application’s functions. Michael Connolly demonstrates the practical, easy-to-implement Specification by Example method he uses to write user stories and acceptance criteria. This direct approach, in which requirements are elaborated via executable code, creates a solid communication bridge between non-technical and technical staff and managers within the organization. Eventually, these executable requirements become the basis for the system’s acceptance test suite. As a takeaway, Michael provides participants with a lightweight requirements document format and an acceptance criteria framework to help you translate written specifications into automation.
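As a hedged sketch of what requirements elaborated via executable code can look like (the story, rule, and harness below are invented, not Michael's actual framework), the examples and the check live together, so running the requirement performs the acceptance test:

    import java.util.Map;

    public class ShippingRequirement {
        // Story: as a shopper, I get free shipping on orders of $50 or more.
        // Acceptance criteria as examples: order total -> expected shipping fee.
        static final Map<Double, Double> EXAMPLES = Map.of(
            25.00, 4.99,   // below the threshold pays the flat fee
            49.99, 4.99,   // boundary: one cent short still pays
            50.00, 0.00    // at the threshold shipping is free
        );

        // Stand-in for the production rule the examples exercise.
        static double shippingFee(double orderTotal) {
            return orderTotal >= 50.00 ? 0.00 : 4.99;
        }

        // The requirement is executable: running it is the acceptance check.
        public static void main(String[] args) {
            EXAMPLES.forEach((total, expected) -> {
                double actual = shippingFee(total);
                System.out.printf("total=%.2f expected=%.2f actual=%.2f %s%n",
                        total, expected, actual,
                        Math.abs(actual - expected) < 0.001 ? "PASS" : "FAIL");
            });
        }
    }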
|
Michael Connolly, OPOWER
|
|
Acceptance Test-driven Development: Tests with the Future in Mind
Acceptance Test-driven Development (ATDD) is a popular topic these days: everyone’s excited about the idea of writing tests prior to development. Yet many teams run into difficulties as they attempt to implement this practice. It’s all too easy to fall into the trap of writing acceptance tests that mostly specify keystrokes and button clicks. Join "Cheezy" Morgan as he offers an overview of ATDD while sharing his experiences and insights gained working with numerous teams implementing ATDD. "Cheezy" will take you on a journey of discovery, demonstrating practical techniques for writing ATDD tests that describe the essence of what they are specifying while hiding unnecessary details that obfuscate their meaning. Because ease of maintenance is key to ATDD’s long-term ROI, "Cheezy" shows how to structure and layer test code to reduce brittleness and fragility so your ATDD test suite will retain its usefulness well into the future.
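One widely used way to hide those details is to layer acceptance tests over page objects, so the test states intent while the UI mechanics live below it. The classes and method names in this sketch are invented for illustration, not Morgan's own tooling.

    // Intent layer: the acceptance test reads like the requirement,
    // with no keystrokes or button clicks in sight.
    class CheckoutAcceptanceTest {
        void registeredUserCanPurchaseABook() {
            CheckoutPage checkout = new StoreFront()
                    .signInAs("reader@example.com")
                    .addToCart("Agile Testing")
                    .beginCheckout();
            assert checkout.confirmPurchase().contains("Order confirmed");
        }
    }

    // Mechanics layer: page objects absorb UI churn, so when a screen
    // changes, only this layer changes and the tests above stay stable.
    class StoreFront {
        StoreFront signInAs(String email) { /* locate fields, type, click */ return this; }
        StoreFront addToCart(String title) { /* UI plumbing lives here */ return this; }
        CheckoutPage beginCheckout() { return new CheckoutPage(); }
    }

    class CheckoutPage {
        String confirmPurchase() { /* submit, read the banner */ return "Order confirmed"; }
    }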
|
Jeff Morgan, LeanDog
|
|
How to Rework Poorly Defined Requirements
Poorly defined requirements are even more dangerous than no requirements because they offer the illusion that all is well during development. However, when user acceptance testing begins, requirements problems surface and the users rightly say, “I don’t care that the system test has passed; this isn’t what we need, and we won’t be signing off.” Steve Caseley reviews the actions he took to rework the requirements on two failed projects and the changes he made to get new projects off to the right start. Steve explores how statements such as “new reports must be balanced with the old reports” were rewritten to identify quantifiable variances. He shows how other loosely defined requirements were reworked to provide a clear mapping of measurable requirements to expected test results.
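For example, "new reports must be balanced with the old reports" reworked into a quantifiable variance might reduce to a check like this sketch; the 0.5% tolerance and line-item names are invented to show the shape of the rewrite.

    import java.util.Map;

    public class ReportBalanceCheck {
        // Reworked requirement: every line item on the new report must be
        // within 0.5% of the old report's value (tolerance is illustrative).
        static final double MAX_VARIANCE = 0.005;

        static boolean balanced(Map<String, Double> oldTotals, Map<String, Double> newTotals) {
            return oldTotals.entrySet().stream().allMatch(e -> {
                double oldVal = e.getValue();
                // A missing line item yields NaN, which fails the comparison.
                double newVal = newTotals.getOrDefault(e.getKey(), Double.NaN);
                return Math.abs(newVal - oldVal) / Math.abs(oldVal) <= MAX_VARIANCE;
            });
        }

        public static void main(String[] args) {
            var oldReport = Map.of("revenue", 120_000.00, "refunds", 3_500.00);
            var newReport = Map.of("revenue", 120_400.00, "refunds", 3_505.00);
            System.out.println(balanced(oldReport, newReport)
                    ? "PASS: reports balance within tolerance"
                    : "FAIL: variance exceeds tolerance");
        }
    }

A requirement in this form maps directly to an expected test result: the variance either is or is not within the agreed tolerance.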
|
Steve Caseley, CBTNuggets
|
|
Avoid Failure with Acceptance Test-Driven Development
One of the major challenges confronting traditional testers in agile environments is that requirements are incrementally defined rather than specified at the start. Testers must adapt to this new reality to survive and excel in agile development. C.V. Narayanan explains the Acceptance Test-Driven Development (ATDD) process that helps testers tackle this challenge. He describes how to create acceptance test checkpoints, develop regression tests for these checkpoints, and identify ways to mitigate risks with ATDD. Learn to map acceptance test cases against requirements in an incremental fashion and validate releases against acceptance checkpoints. See how to handle risks such as requirements churn and requirements that overflow into the next iteration. Using ATDD as the basis, learn new collaboration techniques that help unite testing and development toward the common goal of delivering high-quality systems.
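One lightweight way to map acceptance tests to requirements and validate releases against checkpoints is tagging; the JUnit 5 sketch below, with invented requirement IDs and an assumed Maven filter, is offered as an illustration rather than the process the talk prescribes.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class IterationAcceptanceTests {
        // Each test carries the requirement it validates, so a release gate
        // can run exactly the checkpoints a given iteration owes.
        @Test @Tag("REQ-101") @Tag("iteration-3")
        void customerCanResetPassword() {
            assertTrue(resetPassword("user@example.com"));
        }

        @Test @Tag("REQ-102") @Tag("iteration-3")
        void lockedAccountRejectsLogin() {
            assertTrue(loginRejected("locked@example.com"));
        }

        // Stubs standing in for calls into the system under test.
        private boolean resetPassword(String email) { return true; }
        private boolean loginRejected(String email) { return true; }
    }

    // Run only this iteration's checkpoints, for example with Maven:
    //   mvn test -Dgroups="iteration-3"

A requirement that overflows into the next iteration simply keeps its tag and moves with it.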
|
C.V. Narayanan, Sonata Software Ltd.
|
|
High Speed Testing Cycles: An Approach to Accelerated Delivery of Bug-Free Software
Large companies often have multiple software development projects running at the same time. Getting enough infrastructure in place to test these projects concurrently, however, can be very difficult. A High Speed Testing Methodology (called "Testing Trains") has been developed to perform system/acceptance testing for large-scale projects in two-week periods. Learn how Testing Trains can be successful in delivering bug-free software on schedule for your organization.
|
Daniel Navarro, Banco Nacional de Mexico
|