Conference Presentations

Stop Finding Bugs, Start Building Quality

Many testers believe that their job is to find bugs. While finding bugs is indeed an important aspect of testing, detecting bugs earlier or preventing them from ever occurring has a far greater impact on improving software quality. You have probably seen charts showing the exponential increase in the cost of fixing bugs late in the product development cycle; yet despite calls to "move quality upstream," the end of the product cycle is where many software projects focus their testing efforts. Longtime Microsoft tester Alan Page will discuss how common functionality, security, and performance bugs can be prevented or detected much earlier on software projects of any size using simple scripts or tools such as a source code compiler or FxCop.

  • Causes of common bugs
  • Techniques for analyzing source code
  • Making detection techniques automatic
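The kind of "simple script" the abstract alludes to can be sketched as a small pattern scanner. This is a hypothetical illustration, not Alan Page's actual tooling: it flags one common mistake that static analyzers such as FxCop also catch, an empty catch block that silently swallows exceptions.

```python
import re

# Hypothetical sketch of a "detect bugs early" script: scan source text for
# an empty catch block, a common bug pattern that silently hides failures.
EMPTY_CATCH = re.compile(r"catch\s*(\([^)]*\))?\s*\{\s*\}")

def find_empty_catches(source: str) -> list[int]:
    """Return 1-based line numbers where an empty catch block starts."""
    hits = []
    for match in EMPTY_CATCH.finditer(source):
        # Count newlines before the match to recover the line number.
        hits.append(source.count("\n", 0, match.start()) + 1)
    return hits

sample = """try { Load(); }
catch (Exception) { }
"""
print(find_empty_catches(sample))  # the empty catch is on line 2
```

A check like this can run on every commit, which is the essence of making detection automatic rather than waiting for end-of-cycle testing.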
Alan Page, Microsoft Corporation

Gain Control over Chaotic Development Projects

Testers are frequently assigned to projects in which applications are undergoing major modifications, yet documentation may be incomplete, wrong, or non-existent. With limited time, testers must rely on developers, business partners, and others to tell them what to test. The result is often an incomplete grasp of the application and, consequently, inadequate testing. Dennis Tagliabue shares a real-world approach that allows you to gain control over a chaotic application development environment. By employing a simplified use case and scenario-based testing approach, you can develop a high-level view of the application and drive this view down to low-level reusable test cases. The emerging picture of the application reduces the future learning curve, improves communication among stakeholders, and provides a basis for test planning and estimating. All this can be accomplished without sacrificing short-term testing objectives.

Dennis Tagliabue, Dell

Recruiting, Hiring, and Retaining Great Testers

Hiring great testers is the single biggest challenge that test managers face. Unfortunately, the number of experienced testers is dwindling while the number of testers with weak skill sets is proliferating. Drawing on his experience of building an independent testing company, Krishna Iyer shares unconventional yet quite effective methods to find, hire, and retain great testers. He looks for testers outside the software world and has had success, for example, with auditors, who have the same inquisitiveness that makes testers great. Krishna describes good interviewing techniques such as "vague questioning" that probe candidates' thinking skills rather than their ability to recall facts. Krishna concludes with suggestions on how to retain great testers, including supporting social responsibility projects and balancing testers' personal needs with the demands of work.

  • New pools of talent for recruiting testers
Krishna Iyer, ZenTEST Labs

Will Your SOA Systems Work in the Real World?

The fundamental promise of Service Oriented Architectures (SOA) and Web services demands consistent and reliable interoperability. Despite this promise, existing Web services standards and emerging specifications present an array of challenges for developers and testers alike. Because these standards and specifications often permit multiple acceptable implementation alternatives or usage options, interoperability issues frequently arise. The Web Services Interoperability Organization (WS-I) has focused on providing guidance, tools, and other resources to developers and testers to help ensure consistent and reliable Web services. Jacques Durand focuses on the WS-I testing tools that are used to determine whether the messages exchanged with a Web service conform to WS-I guidelines.

Jacques Durand, Fujitsu Software Corporation

Top Ten Reasons Test Automation Projects Fail

Test automation is the perennial "hot topic" for many test managers. The promises of automation are many; however, a large share of test automation initiatives fail to achieve those promises. Shrini Kulkarni explores ten classic reasons why test automation fails. Starting with Number Ten ... having no clear objectives. Often people set off down different, uncoordinated paths. With no objectives, there is no defined direction. At Number Nine ... expecting immediate payback. Test automation requires a substantial investment of resources, which is not recovered immediately. At Number Eight ... having no criteria to evaluate success. Without defined success criteria, no one can really say whether the efforts were successful. At Number Seven ... Join Shrini for the entire Top Ten list and discover how you can avoid these problems.

  • Why so many automation efforts fail
  • A readiness assessment to begin test automation
Shrinivas Kulkarni, iGATE Global Solutions

Essential Regression Testing

You are responsible for testing application releases, and the demand for quality is high. You must ensure that new functionality is adequately tested and that existing functionality is not negatively impacted when applications are modified. If you plan to conduct formal regression testing, you must answer a multitude of questions: What exactly is regression testing? What resources do I need? How can I justify the cost of regression testing? How can I quantify the benefits? Learn the "who, what, when, where, why, and how" of regression testing as Deakon Provost describes how to organize a regression test team, how to obtain funding for that team and their work, what methods you can use to save the organization money while regression testing, and how to quantify the value that regression testing provides.

  • How to implement regression testing in your organization
Deakon Provost, State Farm Insurance

From Start Up to World Class Testing

So you have been asked to start or improve a testing group within your organization. Where do you start? What services should you provide? Who are the right people for the job? Iris Trout presents a framework of best practices needed to implement or rapidly improve your testing organization. Hear how Bloomberg LP, a large financial reporting institution, tackled the issue of implementing a new testing organization. Iris describes how she built a strong testing process in minimal time and achieved exceptional results. She shares her interviewing techniques, automation how-tos, and many other ways to implement quick successes. Learn to create Service Level Agreements, and explore the value of peer reviews and how to evaluate their results. Iris shares handouts full of user-friendly ideas to help you get started.

  • The essential components of a strong testing organization
Iris Trout, Bloomberg LP

A Unique Testing Approach for SOA Systems

Service Oriented Architecture (SOA) systems most often use services that are shared across different applications. Some services may even be supplied by third parties, outside the direct control of a project, system, or organization. As these services evolve, organizations face the issue of ensuring the continuing proper functionality and performance of their ever-changing SOA systems. The implications of even a minor change to a service are often not fully understood until the systems dependent on that service operate in production and then fail. Creating an environment in which all SOA systems dependent on a particular service can be tested is virtually impossible. However, Ed Horst presents a unique approach to testing services that does not require a detailed knowledge of the systems that use that service. Ed shares real-world examples of organizations that have successfully managed service changes.

Ed Horst, Amberpoint

Verification Points for Better Testing Efficiency

More than one-third of all testing time is spent verifying test results: determining whether the actual result matches the expected result within some predetermined tolerance. Sometimes actual test results are simple, such as a value displayed on a screen. Other results are more complex: a database that has been properly updated, a state change within the application, or an electrical signal sent to an external device. Dani Almog suggests a different approach to results verification: separating the design of verification from the design of the tests. His test cases include "verification points," with each point associated with one or more verification methods that can later be reused across different test cases and occasions. Some verification methods are simple numerical or textual comparisons; others, such as photo comparison, are complex.
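The separation the abstract describes can be sketched as a registry of named verification methods that any test case can invoke by point name. This is an illustrative sketch of the idea, not Dani Almog's actual framework; all names here are hypothetical.

```python
from typing import Callable

# Hypothetical sketch of "verification points": verification logic is designed
# separately from the tests and registered under a name, so the same method is
# reusable across many test cases.
verification_methods: dict[str, Callable] = {}

def register(name: str):
    """Attach a reusable verification method to a named verification point."""
    def wrap(fn):
        verification_methods[name] = fn
        return fn
    return wrap

@register("numeric_tolerance")
def within_tolerance(actual, expected, tol=0.01):
    # Simple numerical comparison within a predetermined tolerance.
    return abs(actual - expected) <= tol

@register("text_equal")
def text_equal(actual, expected):
    # Simple textual comparison, ignoring surrounding whitespace.
    return actual.strip() == expected.strip()

def verify(point: str, actual, expected, **kwargs) -> bool:
    """A test case names its verification point; the method does the work."""
    return verification_methods[point](actual, expected, **kwargs)

print(verify("numeric_tolerance", 3.141, 3.14))  # True
print(verify("text_equal", "OK ", "OK"))         # True
```

Because the test case only names the point, a complex method (say, photo comparison) can later replace a simple one without touching the tests themselves.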

Dani Almog, Amdocs Inc

When There Is Too Much to Test: Ask Pareto for Help

Preventing defects has been our goal for years, but the changing technology landscape (architectures, languages, operating systems, databases, Web standards, software releases, service packs, and patches) makes perfection impossible to reach. The Pareto Principle, which states that for many phenomena 80% of the consequences stem from 20% of the causes, often applies to defects in software. Employing this principle, Claire Caudry describes ways to collect and analyze potential risks and causes of defects through technology analysis, customer surveys, T-Matrix charting, Web trends reports, and more. Then, Claire discusses ways to provide adequate testing without a huge financial investment: use of virtual machines, hardware evaluation programs, vendor labs, and pre-release beta programs.
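The 80/20 analysis the abstract invokes can be sketched in a few lines: rank defect causes by frequency and keep the smallest set of causes that covers 80% of the defects. The data and cause labels below are made up for illustration; this is not Claire Caudry's method, just the Pareto Principle applied to a defect log.

```python
from collections import Counter

# Made-up defect log: each entry is the root-cause label of one defect.
defects = ["config", "config", "ui", "config", "db", "ui", "config",
           "config", "db", "config"]

def pareto_causes(defects, threshold=0.8):
    """Return the 'vital few' causes covering `threshold` of all defects."""
    total = len(defects)
    covered, vital_few = 0, []
    # most_common() ranks causes by defect count, highest first.
    for cause, n in Counter(defects).most_common():
        vital_few.append(cause)
        covered += n
        if covered / total >= threshold:
            break
    return vital_few

print(pareto_causes(defects))  # a short list dominated by "config"
```

In this toy log, "config" alone accounts for 60% of the defects, so testing effort aimed at those few causes buys the most quality per dollar.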

Claire Caudry, Perceptive Software
