|
Deal Me In: Playing the Manage Your Manager Game We all have managers above us with whom we must deal, and how we deal with them requires skill and practice. To be successful and help a team be its best, you, as a test manager, need daily practice at managing your manager(s). Using an "arm's-length" viewpoint of gaming, Jon Hagar examines seven situations in which you may need to win in order to get what you want and what your team needs. But not all games can be won, or at least not in exactly the way we might want to win them. The test management game is about positive intent, taking the high road (you do not have to cheat), and knowing when to bet and when to fold your cards.
- You want a promotion or even your manager's job. What can you do?
- Your manager sets impossible schedule or budget goals. What can you do?
- A manager is not listening to the test information. What can you do?
|
Jon Hagar, Lockheed Martin
|
|
SOA and Web Services Testing Involve the Whole Team Serious enterprise application development is moving to Service-Oriented Architectures as companies try to leverage existing applications while meeting new customer demands. Even as the ability to connect Web sites dynamically adds significant new levels of business functionality, it opens up a new point of failure with each connection. Code coverage is becoming far less important than the ability to test every component of your J2EE stack in the same environment in which it will be deployed in production. John Michelsen shares the current trends in SOA testing, including unit testing with JUnit (illustrated in the sketch after the list below), test-driven development (XP and TDD methods), test script automation, load testing, continuous testing, and much more. Learn about the pitfalls in testing SOA systems and why some companies wrongly give up on even trying.
- Trends in testing SOA and Web service-enabled applications
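A minimal JUnit sketch of the unit testing mentioned above, assuming a hypothetical QuoteService endpoint and a PricingComponent that consumes it; it shows how a remote service dependency can be stubbed so the component is exercised in isolation before it is wired into the deployed stack.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical service interface standing in for a real SOA endpoint.
    interface QuoteService {
        double getQuote(String symbol);
    }

    // Component under test: applies a markup to quotes fetched from the service.
    class PricingComponent {
        private final QuoteService quotes;
        PricingComponent(QuoteService quotes) { this.quotes = quotes; }
        double priceWithMarkup(String symbol, double markup) {
            return quotes.getQuote(symbol) * (1.0 + markup);
        }
    }

    public class PricingComponentTest {
        @Test
        public void appliesMarkupToServiceQuote() {
            // Stub the remote service so the unit test runs without the network.
            QuoteService stub = symbol -> 100.0;
            PricingComponent pricing = new PricingComponent(stub);
            assertEquals(105.0, pricing.priceWithMarkup("ACME", 0.05), 0.0001);
        }
    }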
|
John Michelsen, iTKO, Inc.
|
|
The Art of Exploration In order for exploratory testing to be perceived as a valuable process by all stakeholders in the organization, we need to make sure the result of that testing, our documentation, is presented with the same professionalism and attention to detail that distinguishes an artistic masterpiece from a paint-by-number kit. David Gilbert discusses the practical steps testers can take to improve the perceived value of exploratory testing in their organizations. He explains how we can apply a consistent, professional, and structured methodology to our exploratory testing and employ processes that will consistently create the level of detailed output that is considered the hallmark of any investigative analysis. Finally, David tells us how better to communicate the value of exploratory tests and document both the process and results of exploration in a way that stakeholders will understand.
|
David Gilbert, Sirius Software Quality Associates, Inc.
|
|
She Said, He Heard: Challenges and Triumphs in Global Outsourcing You are asked to put together a QA group in India that will work in tandem with your US team to provide twenty-four-hour support for a global financial company. What did Judy Hallstrom, Manager of Testing Services, and Indian project manager Ravi Sekhar Reddy and their group accomplish? The successful implementation of a fully integrated QA function, from scratch, in less than one year with minimal infrastructure. Walk through the challenges and triumphs as they built their unit from the ground up with no outsourcing service company support. With obstacles ranging from leased equipment, inadequate infrastructure, and shared office space to training issues, visas, Indian Customs, and much more, Judy and Ravi have seen and overcome them all. Now, two years later, they have a global QA team with processes that meet industry-recognized quality standards.
- Working with a sourcing partner vs. going it alone
|
Judy Hallstrom, Franklin Templeton Investments
|
|
You'll Be Surprised by the Things Testers Miss Why do some bugs lie undetected until live operation of the software and then almost immediately bite us? Drawing on instances of problems that were obvious in production but missed, or nearly missed, in testing, James Lyndsay can help you catch more bugs starting the day you return to work. James first describes bugs not found because too little time is spent on testing. Then, looking at testers' knowledge, he discusses bugs missed because of requirements issues or because testers did not understand the underlying technology's potential for failure. In the most substantial part of the session, James looks at bugs missed because they could not be observed or because testers skimmed over the issue. Learn to recognize each type of testing problem and discover ways to mitigate or eliminate it.
- Coding errors that are hard to spot with typical tests (see the sketch after this list)
- Working with emergent behaviors and unexpected risks
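Not taken from James's material, but a hypothetical illustration of the kind of coding error that typical tests miss: the midpoint calculation below looks correct for everyday values and only fails at a boundary.

    public class Midpoint {
        // Buggy: (a + b) can overflow int before the division happens.
        static int midpointBuggy(int a, int b) {
            return (a + b) / 2;
        }

        // Safer: rewritten so the intermediate value cannot overflow.
        static int midpointFixed(int a, int b) {
            return a + (b - a) / 2;
        }

        public static void main(String[] args) {
            // A "typical" test with small values passes for both versions...
            System.out.println(midpointBuggy(2, 4));                          // 3
            System.out.println(midpointFixed(2, 4));                          // 3
            // ...but a boundary value exposes the latent defect.
            System.out.println(midpointBuggy(2_000_000_000, 2_100_000_000));  // negative nonsense
            System.out.println(midpointFixed(2_000_000_000, 2_100_000_000));  // 2050000000
        }
    }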
|
James Lyndsay, Workroom Productions
|
|
The Last Presentation on Test Estimation You Will Ever Need to Attend Estimating the test effort for a project has always been a thorn in the test manager's side. How do you get close to something reasonable when there are so many variables to consider? Sometimes, estimating test effort seems to be no more accurate than a finger in the wind. The "testimation" process, as Geoff Horne likes to call it, can work for you if you do it right. Learn where to start, the steps involved, how to refine estimates, ways to sell the process and the result to management, and how to use the process to develop a test plan that resembles reality. Geoff demonstrates a spreadsheet-based tool that he uses to formulate his "testimations" and shows you how to use it at each step of the process.
- The different variables that need to be considered
- How to convert the "testimation" into a workable test schedule
- A spreadsheet template to help you estimate test effort
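Geoff's spreadsheet is not reproduced here; the sketch below is only a hypothetical illustration of the kind of arithmetic such a "testimation" tool typically encodes, with every factor and figure assumed for the example.

    public class Testimation {
        public static void main(String[] args) {
            int testCases = 400;              // estimated number of test cases
            double hoursPerCase = 0.5;        // average design + execution time per case
            double retestFactor = 1.3;        // allowance for defect fixes and regression
            double environmentOverhead = 40;  // setup, test data, tooling (hours)

            double effortHours = testCases * hoursPerCase * retestFactor + environmentOverhead;
            double effortDays = effortHours / 8.0;

            System.out.printf("Estimated effort: %.0f hours (about %.1f person-days)%n",
                    effortHours, effortDays);
        }
    }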
|
Geoff Horne, Geoff Horne Testing
|
|
Risk-Based Testing in Practice The testing community has been talking about risk-based testing for quite a while, and now most projects apply some sort of implicit risk-based testing approach. However, risk-based testing should be more than just brainstorming within the test team; it should be based on business drivers and business value. The test team is not the risk owner; the product's stakeholders are. It is our job to inform the stakeholders about risk-based decisions and provide visibility into product risk status. Erik discusses a real-world method for applying structured risk-based testing that is applicable in most software projects. He describes how risk identification and analysis can be carried out in close cooperation with stakeholders. Join Erik to learn how the outcome of the risk analysis can, and should, be used in test projects in terms of differentiated test approaches.
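One way to picture the structured approach described above (the scales and thresholds here are illustrative assumptions, not Erik's specific method): a product risk is scored from stakeholder-assessed likelihood and impact, and the resulting exposure is mapped to a differentiated test approach.

    import java.util.List;

    public class RiskBasedTestingSketch {
        // Likelihood and impact on a 1 (low) to 5 (high) scale, agreed with stakeholders.
        record RiskItem(String name, int likelihood, int impact) {
            int exposure() { return likelihood * impact; }
            String testApproach() {
                if (exposure() >= 15) return "thorough: exploratory + scripted + automated regression";
                if (exposure() >= 8)  return "standard: scripted functional tests";
                return "light: smoke checks only";
            }
        }

        public static void main(String[] args) {
            List<RiskItem> risks = List.of(
                    new RiskItem("Payment processing", 4, 5),
                    new RiskItem("Report layout", 2, 2),
                    new RiskItem("User login", 3, 4));
            for (RiskItem r : risks) {
                System.out.printf("%-20s exposure=%2d -> %s%n",
                        r.name(), r.exposure(), r.testApproach());
            }
        }
    }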
|
Erik van Veenendaal, Improve Quality Services BV
|
|
Testing and the Flow of Value in Software Development High-quality software should be measured by the value it delivers to customers, and a high-quality software process should be measured by the continual flow of customer value. Modern processes have taught us that managing flow is all about the constraints restricting that flow. Testing, rather than being thought of as a conduit in that flow, is often perceived as an obstacle. It doesn't help that most testers struggle to answer the questions their managers ask: What has and hasn't been tested? What do we need to test next? Where do we need to shift resources? If it works in the lab, why isn't it working on those production machines? Where do we need to fix the performance or security? The ability, or inability, to answer these questions can determine the success and budget of a test team as well as how it is valued by its organization.
|
Sam Guckenheimer, Microsoft
|
|
Model-Based Security Testing Preventing the release of exploitable software defects is critical for all applications. Traditional software testing approaches are insufficient, and generic tools are incapable of properly targeting your code. We need to detect these defects before going live, and we need a methodology for detection that is cost-efficient and practical. A model-based testing strategy can be applied directly to the security testing problem. Starting with very simple models, you can generate millions of relevant tests that can be executed in a matter of hours. Learn how to build and refine models to focus quickly on the defects that matter. Kyle Larsen shows you how to create a test oracle that can detect application-specific security defects: buffer overflows, uninitialized memory references, denial of service attacks, assertion failures, and memory leaks.
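A toy sketch of the model-based idea, assuming a hypothetical message format and parser; it shows how even a very simple input model can generate large numbers of tests, with an oracle that flags any unexpected runtime failure as a potential security defect.

    import java.util.Random;

    public class ModelBasedSecuritySketch {
        // A very simple input model: message = command | declared length | payload.
        static String generateMessage(Random rnd) {
            String[] commands = {"GET", "PUT", "DEL"};
            int declaredLength = rnd.nextInt(300);   // may disagree with the real payload size
            int actualLength = rnd.nextInt(300);
            return commands[rnd.nextInt(commands.length)] + "|" + declaredLength + "|"
                    + "A".repeat(actualLength);
        }

        // Hypothetical system under test: trusts the declared length field.
        static String parse(String message) {
            String[] parts = message.split("\\|", 3);
            int declared = Integer.parseInt(parts[1]);
            return parts[2].substring(0, declared);  // fails when declared > actual payload
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            for (int i = 0; i < 100_000; i++) {      // generate many model-derived tests cheaply
                String msg = generateMessage(rnd);
                try {
                    parse(msg);
                } catch (RuntimeException oracleViolation) {  // oracle: no unexpected exception
                    System.out.println("Defect exposed by: "
                            + msg.substring(0, Math.min(40, msg.length())));
                    break;
                }
            }
        }
    }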
|
Kyle Larsen, Microsoft Corporation
|
|
Don't Whine - Build Your Own Test Tools The highly customized hardware-software system making up the new flight operations system for the world's largest airline did not lend itself to off-the-shelf tools for test automation. With a convergence of on-demand, highly available technologies and the requirement to make the new system compatible with hundreds of legacy applications, the test team was forced to build its own test software. Written in Java, these tools have helped increase test coverage and improve the efficiency of the test team. One tool compares the thirty-one-year-old legacy system with its new equivalent for undocumented differences. Clay Bailey demonstrates these tools, including one that implements predictive randomization methods and another that decodes and manipulates hexadecimal bit-string representations (a small sketch of that kind of decoding follows the list below).
- Custom test tools for a unique systems environment
- Innovative ways to develop and use Java for writing test tools
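A small sketch of the kind of hexadecimal bit-string decoding such a tool might perform; the field layout below is a hypothetical example, not the airline system's actual record format.

    public class HexBitStringDecoder {
        public static void main(String[] args) {
            String hex = "4A3F";                      // example 16-bit record
            int bits = Integer.parseInt(hex, 16);

            // Pull individual fields out of the bit string with shifts and masks.
            int flag     = (bits >> 15) & 0x1;        // top bit
            int recordId = (bits >> 8)  & 0x7F;       // next 7 bits
            int payload  =  bits        & 0xFF;       // low byte

            System.out.printf("flag=%d recordId=%d payload=0x%02X%n", flag, recordId, payload);

            // Manipulate and re-encode: set the flag and emit the new hex string.
            int updated = bits | 0x8000;
            System.out.println("updated=" + Integer.toHexString(updated).toUpperCase());
        }
    }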
|
Clay Bailey, IBM
|