Conference Presentations

Quantitative and Statistical Management Applications

There is no longer any question that, when appropriately used, quantitative measurement and management of software projects works. As with any tool, the phrase "appropriately used" tells the tale. Drawing on his experiences using quantitative and statistical measurement, Ed Weller provides insights into the key phrase "appropriate use." Ed offers cases of useful (and not so useful) attempts to use the "high maturity" concepts in the Capability Maturity Model Integration® (CMMI®) to illustrate how you can either achieve a high return on your investment in these methods or fail miserably. After an introduction to the theory of statistical measurement, Ed presents examples of the successful use of statistical measures and discusses the traps and pitfalls of their incorrect implementation.
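One building block behind statistical management of this kind is the control chart. As a rough, hypothetical sketch (not drawn from Ed's material), the Java fragment below computes the natural process limits of an individuals (XmR) chart from a short series of defect densities; observations falling outside those limits signal assignable-cause variation worth investigating.

    import java.util.List;

    /** Rough sketch: natural process limits for an XmR (individuals) control chart. */
    public class XmrChart {

        /** Average of the absolute differences between consecutive observations. */
        static double averageMovingRange(List<Double> values) {
            double sum = 0.0;
            for (int i = 1; i < values.size(); i++) {
                sum += Math.abs(values.get(i) - values.get(i - 1));
            }
            return sum / (values.size() - 1);
        }

        public static void main(String[] args) {
            // Hypothetical defect densities (defects per KLOC) from successive inspections.
            List<Double> samples = List.of(3.1, 2.7, 3.4, 2.9, 3.8, 2.5, 3.0);

            double mean = samples.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
            double mrBar = averageMovingRange(samples);

            // 2.66 is the standard XmR constant that relates the average moving range to 3 sigma.
            double upperLimit = mean + 2.66 * mrBar;
            double lowerLimit = Math.max(0.0, mean - 2.66 * mrBar);

            System.out.printf("UCL=%.2f  mean=%.2f  LCL=%.2f%n", upperLimit, mean, lowerLimit);
        }
    }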

Edward Weller, Integrated Productivity Solutions, LLC

Static Analysis and Secure Code Reviews

Security threats are becoming increasingly dangerous to consumers and to your organization. Paco Hope provides the latest on static analysis techniques for finding vulnerabilities and the tools you need for performing white-box secure code reviews. He provides guidance on selecting and using source code static analysis and navigation tools. Learn why secure code reviews are imperative and how to implement a secure code review process in terms of tasks, tools, and artifacts. In addition to describing the steps in the static analysis process, Paco explains methods for examining threat boundaries, error handling, and other "hot spots" in software. Find out about the analysis techniques of Attack Resistance Analysis, Ambiguity Analysis, and Underlying Framework Analysis as ways to expose risk and prioritize remediation of insecure code.
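For a hypothetical flavor of the "hot spots" a static analysis tool flags (the class and query below are invented, not taken from Paco's material), the first method concatenates untrusted input into a SQL statement, exactly the data-flow pattern such tools report; the second shows the usual parameterized remediation.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AccountDao {

        // Flagged by most static analysis tools: untrusted input flows into a SQL string (injection risk).
        ResultSet findAccountUnsafe(Connection conn, String userName) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery("SELECT * FROM accounts WHERE owner = '" + userName + "'");
        }

        // Typical remediation: bind the untrusted value as a parameter instead.
        ResultSet findAccountSafe(Connection conn, String userName) throws SQLException {
            PreparedStatement stmt = conn.prepareStatement("SELECT * FROM accounts WHERE owner = ?");
            stmt.setString(1, userName);
            return stmt.executeQuery();
        }
    }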

  • Why secure code reviews are the right approach for finding security defects
Paco Hope, Cigital

Improving Code Quality with Eclipse and its Java Plug-ins

One of the features that makes Eclipse so popular within the Java community is the abundance of easy-to-use plug-ins. Many of these are freely available open-source tools. Plug-ins are available for virtually anything, from database connectivity to instant messaging. Because code quality is a critical aspect of production software applications, Eclipse has built-in tools that help developers write and deliver high-quality code. Levent Gurses has employed a number of external plug-ins, including PMD, CheckStyle, JDepend, FindBugs, Cobertura, CPD, Metrics, and others to transform Eclipse into a powerhouse for writing, testing, and releasing high-quality Java code. Levent shows you how to use Eclipse to improve your team's coding habits, enforce organizational standards, and zap bugs before they reach the client.
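To make the kind of defect these plug-ins surface concrete, here is a small invented example: analyzers such as FindBugs and PMD typically report the reference-equality string comparison in the first method, and a coverage tool such as Cobertura would show whether any test ever exercises the broken branch.

    public class DiscountService {

        // A classic finding from tools like FindBugs or PMD: comparing strings with ==
        // checks reference identity, not contents, so this branch may never trigger.
        public double discountForBuggy(String customerType) {
            if (customerType == "GOLD") {
                return 0.20;
            }
            return 0.0;
        }

        // The corrected version compares values and handles null safely.
        public double discountFor(String customerType) {
            if ("GOLD".equals(customerType)) {
                return 0.20;
            }
            return 0.0;
        }
    }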

  • The standard quality check tools available in Eclipse
Levent Gurses, Stelligent

Open Source Tools for Web Application Performance Testing

OpenSTA is a solid open-source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing introduces you to the basics of OpenSTA: downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests with Commander, capturing the results with Collector, interpreting the results, and exporting captured performance data into Excel for analysis and reporting. As with many open source tools, self-training is the rule. Support is provided not by a big vendor staff but by fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.
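As one hedged illustration of the analysis step (the file name and column layout below are invented, not part of OpenSTA), a short program can summarize exported response times before the data ever reaches a spreadsheet.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    /** Rough sketch: summarize response times exported from a load test run.
     *  Assumes a headerless CSV with hypothetical columns: timestamp,transaction,responseMillis. */
    public class ResponseTimeSummary {

        public static void main(String[] args) throws IOException {
            List<Double> times = new ArrayList<>();
            for (String line : Files.readAllLines(Path.of("exported_results.csv"))) {
                if (line.isBlank()) {
                    continue;
                }
                String[] fields = line.split(",");
                times.add(Double.parseDouble(fields[2]));   // response time in milliseconds
            }
            Collections.sort(times);

            double mean = times.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
            double p90 = times.get((int) Math.ceil(times.size() * 0.90) - 1);

            System.out.printf("samples=%d  mean=%.1f ms  90th percentile=%.1f ms%n",
                    times.size(), mean, p90);
        }
    }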

  • Learn the capabilities of OpenSTA
Dan Downing, Mentora Inc

How to Build Your Own Robot Army

Software testing is tough: it can be exhausting, and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation onto the shoulders of a test machine army. But we're not talking about the run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson shows you how to create your robot army using tools lying around on the Web.
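For a toy flavor of the idea (this is not Harry's tooling, just an invented illustration), a generator that drives the system under test with random operations and re-checks an invariant after every step will wander into sequences no hand-written script would have tried.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Random;

    /** Toy "robot" tester: generates random operation sequences and checks an invariant.
     *  The system under test is just java.util.ArrayDeque, purely for illustration. */
    public class RandomWalkTester {

        public static void main(String[] args) {
            Random random = new Random(42);          // fixed seed so any failure is reproducible
            Deque<Integer> stack = new ArrayDeque<>();
            int expectedSize = 0;                    // a simple model of what the size should be

            for (int step = 0; step < 100_000; step++) {
                if (random.nextBoolean()) {
                    stack.push(random.nextInt());
                    expectedSize++;
                } else if (!stack.isEmpty()) {
                    stack.pop();
                    expectedSize--;
                }
                // The invariant a human tester might forget to re-check after every operation.
                if (stack.size() != expectedSize) {
                    throw new AssertionError("Model and implementation diverged at step " + step);
                }
            }
            System.out.println("100,000 random steps survived the invariant check.");
        }
    }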

Harry Robinson, Google

Software Disasters and Lessons Learned

Software defects come in many forms, from those that cause a brief inconvenience to those that cause fatalities. Patricia McQuaid believes it is important to study software disasters, to alert developers and testers to be ever vigilant, and to understand that huge catastrophes can arise from what seem like small problems. Examining such failures as the Therac-25, the Denver airport baggage-handling system, the Mars Polar Lander, and the Patriot missile, Pat focuses on the factors that led to these problems, analyzes the problems, and then explains the lessons to be learned that relate to software engineering, safety engineering, government and corporate regulations, and oversight by users of the systems.

  • Learn from our mistakes, not in generalities but in specifics
  • Understand the synergistic effects of errors
  • Distinguish between technical failures and management failures
Patricia McQuaid, Cal Poly State University

Industry Benchmarks: Insights and Pitfalls

Software and technology managers often quote industry benchmarks such as The Standish Group's CHAOS report on software project failures; other organizations use this data to judge their internal operations. Although these external benchmarks can provide insights into your company's software development performance, you need to balance the picture with internal information to make an objective evaluation. Jim Brosseau takes a deeper look at common benchmarks, including the CHAOS report, published SEI benchmark data, and more. He describes the pros and cons of these commonly used industry benchmarks with key insights into often-quoted statistics. Take away an approach that Jim has used successfully with companies to help them understand the relationship between demographics, practices, and performance in their groups and how these relate to external benchmarks.

Jim Brosseau, Clarrus Consulting Group, Inc.

Beat the Odds in Vega$: Measurement Theory Applied to Development and Testing

James McCaffrey describes in detail how to use measurement theory to create a simple software system that predicts the results of NFL professional football games with 87 percent accuracy. So, what does this have to do with a conference about developing better software? You can apply the same measurement theory principles embedded in this program to more accurately predict or compare the results of software development, testing, and management. Using the information James presents, you can extend the system to predict scores in other sports and apply the principles to a wide range of software engineering problems, such as predicting Web site usage in a new system, evaluating the overall quality of similar systems, and much more.
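The abstract does not spell out the model, so the sketch below is only an invented illustration of rating-style prediction, not the system James describes: each team carries a numeric rating that drifts toward its observed score margins, and the rating difference becomes the predicted margin for a future game.

    import java.util.HashMap;
    import java.util.Map;

    /** Invented illustration of a rating-based predictor (not James McCaffrey's system). */
    public class SimpleRatings {

        private final Map<String, Double> ratings = new HashMap<>();
        private static final double LEARNING_RATE = 0.1;   // arbitrary constant for this sketch

        double rating(String team) {
            return ratings.getOrDefault(team, 0.0);
        }

        /** Predicted margin of victory for the home team; positive favors home. */
        double predictMargin(String home, String away) {
            return rating(home) - rating(away);
        }

        /** After a game, nudge both ratings toward the margin the model failed to predict. */
        void update(String home, String away, int homeScore, int awayScore) {
            double error = (homeScore - awayScore) - predictMargin(home, away);
            ratings.put(home, rating(home) + LEARNING_RATE * error);
            ratings.put(away, rating(away) - LEARNING_RATE * error);
        }

        public static void main(String[] args) {
            SimpleRatings model = new SimpleRatings();
            model.update("Seattle", "Denver", 27, 17);      // hypothetical past results
            model.update("Denver", "Oakland", 31, 14);
            System.out.printf("Predicted margin, Seattle vs. Oakland: %+.1f%n",
                    model.predictMargin("Seattle", "Oakland"));
        }
    }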

  • Why the statistical approach does not work for making some accurate predictions
  • Measurement theory to predict and compare
James McCaffrey, Volt Information Sciences, Inc.

Code Coverage Myths and Realities

You've made a commitment to automate unit testing as part of your development process, or you are spending precious resources on automated functional testing at higher levels. You may be asking yourself: How good are those tests anyway? Are many tests checking the same thing while large parts of the code go completely untested? Are your tests triggering the exceptions that normally show up only in production? Are your automated tests adequately covering the code, the requirements, both, or neither? Andrew Glover discusses the truths and untruths about code coverage and looks at the tools available to gather and report coverage metrics in both the open source and commercial worlds. He describes the different types of code coverage, their advantages and disadvantages, and how to interpret the results of coverage reports.
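As a small invented example of why line coverage alone can mislead: the first test below executes every line of the method and so earns full coverage from a tool such as Cobertura, yet it asserts nothing; that silent gap is exactly what mutation testing is designed to expose.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        static class PriceCalculator {
            double totalWithTax(double subtotal) {
                return subtotal * 1.08;   // a mutation tool would flip this operator or constant
            }
        }

        // Covers 100% of the lines above but would pass even if the method returned garbage.
        @Test
        public void executesButVerifiesNothing() {
            new PriceCalculator().totalWithTax(100.0);
        }

        // A mutation-resistant test: if the multiplier is mutated, this assertion fails.
        @Test
        public void verifiesTheResult() {
            assertEquals(108.0, new PriceCalculator().totalWithTax(100.0), 0.0001);
        }
    }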

  • The concept of mutation testing and how it fits into a code coverage strategy
Andrew Glover, Vanward Technologies

User Stories for Better Software Requirements

The technique of expressing requirements as user stories is one of the most broadly applicable techniques introduced by Extreme Programming. In fact, user stories are an effective approach on all time-constrained projects, not just those using Agile methods. Mike Cohn explains how to identify the functionality for a user story and how to write it well, typically in the form "As a <type of user>, I want <some goal> so that <some reason>." He describes the attributes all good user stories must exhibit and presents guidelines for writing them. Learn to employ user role modeling when gathering a project’s initial stories. Whether you are a developer, tester, manager, or analyst, you can learn to write user stories that will speed up development and help you deliver the systems that users really need.

  • Defining a user story and learning how to write one
  • Six attributes of all user stories
  • Thirteen guidelines for writing better user stories
Mike Cohn, Mountain Goat Software
