|
The Software Vulnerability Guide: Uncut and Uncensored Warning: This talk contains graphic examples of software failure . . . not suitable for the faint of heart. This "no holds barred" session arms testers with what they really need to know about finding serious security vulnerabilities. Herbert Thompson takes you on an illustrated tour of the top twelve security vulnerabilities in software and shows you how to find these flaws efficiently. Each vulnerability is brought to life through a live exploit followed by a look at the testing technique that would have exposed the bug. Testers and test managers will leave with a keen awareness of the major vulnerability types and the knowledge and insight to fundamentally improve the security of the applications they support and test.
|
Herbert Thompson, Security Innovation LLC
|
|
STAREAST 2006: Testing Dialogues - Management Issues As a test manager, are you struggling at work with a BIG test management issue or a personnel issue? If so, this session is for you. "Testing Dialogues - Management Issues" is a unique platform for you to share with and learn from test managers who have come to STAREAST from around the world. Facilitated by Esther Derby and Johanna Rothman, this double-track session takes on management issues: career paths for test managers, hiring, firing, executive buy-in, organization structures, and process improvement. You name it! Share your expertise and experiences, learn from others' challenges and successes, and generate new topics in real time. Discussions are structured in a framework so that participants receive a summary of their work product after the conference.
|
Facilitated by Esther Derby and Johanna Rothman
|
|
Testing: The Big Picture If all testers put their many skills in a pot, surely everyone would come away with something new to try out. Every tester can learn something from other testers. But can a tester learn something from a ski instructor? There is much to gain by examining and sharing industry best practices, but often much more can be gained by looking at problem-solving techniques from beyond the boundaries of the Testing/QA department. In a series of analogies, Brian Bryson covers the critical success factors for organizations challenged with the development and deployment of quality software applications. He draws strategies and lessons from within and beyond the QA industry to provide you with a new perspective on addressing the challenges of quality assurance.
|
Brian Bryson, IBM Rational Software
|
|
Build Rules: A Management System for Complex Test Environments The interaction of many software components makes testing today's software solutions increasingly complex. The problem becomes especially difficult when the solution includes combinations of hardware, software, and multiple operating systems. To manage this complexity, Steve Hagerott's company developed "Build Rules," a Web-based application with inputs from their build management and test execution systems. Using logical rules about the builds, test engineers define the characteristics of the build solution points. To deliver the latest and greatest builds that meet the characteristics defined for each solution point, the system dynamically translates these rules into server-side nested SQL queries. Learn how their efficiency and accuracy have improved significantly, allowing test engineers to stay on track with many different build combinations and to communicate results to outside departments and customers.
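The abstract does not spell out how the rule-to-query translation works, so the sketch below is only a rough illustration of the idea: one simple build rule (a platform plus a minimum test pass rate) is turned into a nested SQL query. Every table and column name is invented; the real system's schema and rule language are not described in the session.

```python
# A rough sketch of the "Build Rules" idea: a declarative rule about
# acceptable builds is translated into a nested SQL query. The tables
# builds and test_runs and all columns are hypothetical.

def rule_to_sql(platform: str, min_pass_rate: float) -> str:
    """Translate one simple build rule into a nested SQL query that picks
    the newest build for a platform whose test pass rate meets a threshold.
    (Real code should use parameterized queries, not string interpolation.)"""
    return f"""
        SELECT b.build_id, b.version
        FROM builds AS b
        WHERE b.platform = '{platform}'
          AND b.build_id IN (
              SELECT t.build_id
              FROM test_runs AS t
              GROUP BY t.build_id
              HAVING AVG(CASE WHEN t.passed THEN 1.0 ELSE 0.0 END) >= {min_pass_rate}
          )
        ORDER BY b.built_at DESC
        LIMIT 1;
    """

print(rule_to_sql("linux-x86_64", 0.95))
```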
|
Steve Hagerott, Engenio Storage Group, LSI Logic Corporation
|
|
Progressive Performance Testing: Adapting to Changing Conditions An inflexible approach to performance testing is a prelude to disaster. "What you see at the start isn't always what you get in the end," says Jeff Jewell. Based on his experience performance testing applications on numerous consulting projects, Jeff demonstrates the challenges you may face testing your applications and how to overcome these obstacles. Examples from these projects show how changing project conditions and information discovered in early tests caused the testing approach to change dramatically. Find out how hardware configuration, hardware performance, script variations, bandwidth, monitoring, and randomness can all affect the measurement of performance.
|
Jeff Jewell, ProtoTest LLC
|
|
Test Metrics in a CMMI Level 5 Organization As a CMMI® Level 5 company, Motorola Global Software Group is heavily involved in software verification and validation activities. Shalini Aiyaroo, senior software engineer at Motorola, shows how specific testing metrics can serve as key indicators of the health of testing and how these metrics can be used to improve your testing practices. Find out how to track and measure phase screening effectiveness, fault density, and test execution productivity. Shalini describes the use of Software Reliability Engineering (SRE) and fault prediction models to measure test effectiveness and take corrective actions. By performing orthogonal defect classification (ODC) and escaped defect analysis, the group has found ways to improve test coverage.
CMMI® is a registered trademark of Carnegie Mellon University.
- Structured approach to outsourced testing
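The session names these metrics but not their formulas. The sketch below uses common textbook definitions, which may differ in detail from the ones Motorola GSG actually tracks; all the numbers are invented purely for illustration.

```python
# Illustrative calculations for the metrics named above, using common
# textbook definitions (assumed, not taken from the session itself).

def fault_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def phase_screening_effectiveness(found_in_phase: int, escaped: int) -> float:
    """Fraction of a phase's defects caught before the phase exits."""
    return found_in_phase / (found_in_phase + escaped)

def test_execution_productivity(tests_executed: int, person_hours: float) -> float:
    """Test cases executed per person-hour."""
    return tests_executed / person_hours

print(f"Fault density: {fault_density(42, 12.5):.2f} defects/KLOC")
print(f"Screening effectiveness: {phase_screening_effectiveness(30, 12):.0%}")
print(f"Execution productivity: {test_execution_productivity(480, 160):.1f} tests/person-hour")
```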
|
Shalini Aiyaroo, Motorola Malaysia Sdn. Bhd.
|
|
Do Your CM/ALM Tools Help Secure Your Development Assets? You're part of a very successful, growing software company. As you approach the office one morning, fire trucks out front indicate that this is not business as usual. Fortunately, you have nightly off-site backups. Unfortunately, you'll need new equipment, software, and backup recovery operations before things can be back to normal, perhaps in a few days and with limited data loss. Or maybe you've noticed data problems creeping into your development repository ever since the recent round of layoffs, or a hacker's visit, or a critical disk crash. Or maybe a new software release has introduced data inconsistency. There are many ways your development assets can be compromised, so you need many avenues to secure them. Your CM and/or ALM suites are part of your development backbone; they must be up to the task of getting you back on your feet the same day.
|
|
|
From Peer Review to Pair Programming There is always talk about improving application quality. In many instances, a large quality program gets initiated that either takes a lot of resources and time or introduces change that is too challenging for the organization (or project team) to handle. It is usually better to start on a smaller scale. To improve application quality in the programming phase, two suggestions are: 1) initiate peer reviews (e.g., code reviews) and/or 2) initiate pair programming. While peer review is more widely known and used in the software development industry, pair programming offers more problem-solving possibilities. Both are known to reduce defects and improve quality. The key is to introduce a small initiative, like peer review or pair programming, while ensuring you are building the practice for success.
|
|
|
Five Core Metrics to Guide the Testing Endgame By its very nature, the endgame of software projects is a hostile environment. Typical dynamics include release pressure, continuous bug discovery, additional requirements, exhausted development teams, frenzied project managers, and "crunch mode," a politically correct term for unpaid overtime. Although testing teams are usually in the thick of this battle, they often do not do enough to help guide the project in this critical stage. To improve the overall endgame experience, testers can help the entire team focus with a few key defect metrics. Robert Galen discusses ways to track five key defect metrics: found vs. fixed; high-priority defects found; project keywords; defect transition progress; and functional distribution of errors. Join Robert to increase the likelihood of delivering your projects on time and surviving yet another endgame.
- Help triage the incoming defect stream during the endgame
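As a rough illustration of the first metric above, found vs. fixed, here is a minimal sketch; the daily counts are invented, and the convergence rule of thumb is a common practice rather than something taken from the session.

```python
# A minimal sketch of the "found vs. fixed" metric: track defects found
# and fixed per day and watch the gap (open defects). A steadily
# converging gap is one common release-readiness signal.

daily_found = [12, 9, 11, 7, 5, 4, 2]   # invented example data
daily_fixed = [5, 8, 10, 9, 8, 6, 3]

open_defects = 0
for day, (found, fixed) in enumerate(zip(daily_found, daily_fixed), start=1):
    open_defects += found - fixed
    print(f"Day {day}: found={found:2d} fixed={fixed:2d} open={open_defects:2d}")
```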
|
Robert Galen, RGCG, LLC
|
|
Pairwise Testing: A Best Practice that Isn't
|
James Bach, Satisfice Inc
|
|
Static Analysis for Code Security and Reliability By evaluating software based on its form, structure, content, and documentation, you can use static analysis to test the code within a program without actually running it. Static analysis helps stop defects from entering the code stream in the first place rather than waiting for the costly, time-consuming manual intervention of testing to find them. With real-world examples, Djenana Campara describes the mechanics of static analysis: when it should be used, where it can be executed most beneficially within your testing process, and how it works in different development scenarios. Find out how you can begin using code analysis to improve code security and reliability.
- The mechanics of automated static analysis
- Static analysis for security and reliability testing
- Integrating static analysis into the testing process
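As a toy illustration of the technique the abstract describes, and not the presenter's actual tooling, the sketch below inspects Python source without executing it, flagging calls to eval() and exec() as a well-known security smell. Real static analyzers add type, data-flow, and control-flow analysis on top of this kind of syntactic check.

```python
# Toy static analysis: parse source into an AST and walk it, never
# executing the program under inspection.

import ast

SUSPICIOUS = {"eval", "exec"}

def find_suspicious_calls(source: str, filename: str = "<string>"):
    """Return (line, name) pairs for suspicious calls, found statically."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
for line, name in find_suspicious_calls(sample):
    print(f"line {line}: call to {name}() on possibly untrusted input")
```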
|
Djenana Campara
|