Conference Presentations

Web 2.0: The Fall and Rise of the User Experience

The Web has enabled pervasive global information sharing, commerce, and communications on a scale thought impossible only ten years ago. At the same time, the Web dealt a setback to the user experience of networked applications. Only now are Web standards and technologies emerging that can bring us back to the rich and robust user experiences developed in the desktop client/server era before the Web came along. Wayne Hom presents examples of great, rich client Web user interfaces and discusses the enabling tools, technologies, and methodologies behind today's popular Web 2.0 approaches. Wayne discusses the not-so-obvious pitfalls of the new technologies and concludes with a look at user interface opportunities beyond the current Web 2.0 state of the art to see what may be possible in the future.

  • User experiences on the Web versus older technologies
Wayne Hom, Augmentum Inc.
A CMMI® Success Story: What Happens in Guadalajara Doesn't Stay in Guadalajara

Can a group of software developers located in Mexico achieve CMMI® certification and set the standard for their larger U.S. parent company to follow? A software branch of Freescale Semiconductors Inc., located in Guadalajara, did exactly that. Developing the CMMI® processes and procedures that made business sense for a remote software group was tricky, but not as tricky as ensuring that their practices aligned with the parent company's processes and requirements. The months of work that led to this achievement were filled with high points and big challenges. Jeff Fiebrich discusses the planning, budgeting, and implementation that contributed to their ultimately successful CMMI® certification. He addresses the collaboration between their parent company and the local government that was an essential part of this effort. And, most importantly, Jeff reveals the immediate impact of their certification on the entire company.

Jeff Fiebrich and Diego Garay, Freescale Semiconductors, Inc.
Practical Software Sizing with Testable Requirements

A new strategic project is in the design stages: how much will it cost? Your application requirements are constantly growing: what is the impact? System testing is scheduled soon: how much time and what resources will we need? And how do you get the answers? Measurement. Although software developers often collect measures of defects, earned value, variances, and the like, the most fundamental measure, the size of the system, is usually lacking. Lines of code and function points are established sizing measures, but both have limitations that have prevented their widespread acceptance. Karen Johns presents testable requirements, an alternative sizing measure that can help you meet these challenges and more. Learn from Karen what testable requirements are and how to use them to size your software systems.
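As a rough illustration of the sizing idea described above (the feature names, counts, and productivity rate below are invented for illustration, not taken from the presentation), a system's size can be expressed as the number of discrete, testable statements its requirements decompose into:

```python
# Illustrative sketch, not Mosaic's actual method: size a system by
# counting testable requirements, then project test effort from an
# assumed historical hours-per-testable-requirement rate.

# Hypothetical data: each feature decomposed into testable statements.
testable_requirements = {
    "login": ["rejects bad password", "locks after 3 failures", "logs each attempt"],
    "search": ["returns matches", "handles empty query"],
}

HOURS_PER_TR = 2.5  # assumed historical rate, not from the talk

# Size is simply the count of testable requirements across all features.
size = sum(len(trs) for trs in testable_requirements.values())
estimated_test_hours = size * HOURS_PER_TR

print(size)                  # 5
print(estimated_test_hours)  # 12.5
```

Unlike lines of code, this count is available before any code exists, which is what makes it usable for early estimates.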

Karen Johns, Mosaic, Inc.
Real World SOA: From Concept to Application

Service Oriented Architecture (SOA) is becoming a widely adopted approach for enterprises to attain agility and economy while meeting their Information Technology (IT) needs. Unlike previous component-based software development efforts, development in SOA environments stresses code reuse through development, deployment, and orchestration standards. Frank Cohen explains the major trends currently in play that affect the use of XML in SOA, and how a new base of computing technology, using composite applications, Registry/Repository, Enterprise Service Bus (ESB), Master Data Management (MDM), databases, and SOA protocols and practices, delivers a solution. Frank begins with the SOA basics, explains their applications, and shares specific metrics that you can use to govern your SOA efforts.

  • Identify where SOA is applicable, and where it is not
  • The importance of performance and scalability
Frank Cohen, PushToTest
Better Software Conference & EXPO 2007: The Art of SOA Testing: Theory and Practice

Service Oriented Architecture (SOA) based on Web Services standards has ushered in a new era of how applications are designed, developed, and deployed. SOA's promise to enable the development of applications built by combining loosely coupled, interoperable services poses new challenges for testers and everyone involved with software reliability and security. Among the challenges are dealing with multiple Web Services standards and implementations, legacy applications (of unknown quality) now exposed as Web services, weak or nonexistent security controls, and services of possibly diverse origins chained together to create dynamic application implementations. Join Rizwan Mallal to learn the concepts, skills, and powerful techniques, including WSDL chaining, schema mutation, and automated filtration, needed to meet these challenges.

Rizwan Mallal, Crosscheck Networks
Static Analysis and Secure Code Reviews

Security threats are becoming increasingly dangerous to consumers and to your organization. Paco Hope provides the latest on static analysis techniques for finding vulnerabilities and the tools you need for performing white-box secure code reviews. He provides guidance on selecting and using source code static analysis and navigation tools. Learn why secure code reviews are imperative and how to implement a secure code review process in terms of tasks, tools, and artifacts. In addition to describing the steps in the static analysis process, Paco explains methods for examining threat boundaries, error handling, and other "hot spots" in software. Find out about the analysis techniques of Attack Resistance Analysis, Ambiguity Analysis, and Underlying Framework Analysis as ways to expose risk and prioritize remediation of insecure code.

  • Why secure code reviews are the right approach for finding security defects
Paco Hope, Cigital
Analyze Customer-Found Defects to Improve System Testing

How do we know if we have made the right choices regarding the way we tested a product? Did we focus our efforts in the right areas? Only a careful and orchestrated analysis of customer-found bugs will give us the answers. You can obtain a wealth of information from post-release bugs: the need for more code coverage in our tests, the value of our regression testing, the validity of our load generating scripts, our choices of target environments, tests we do not need to run, and more. Evelyn Moritz describes how to gather, analyze, categorize, and measure customer-found bugs in ways that will help testers and test departments become more efficient and effective at finding the types of bugs that impact their customers the most.

  • Information you should collect about customer-found bugs
  • Techniques for bug analysis and reporting
  • How customer-found bugs can be used to improve system testing
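A minimal sketch of the gather-and-categorize step described above (the defect data and escape-reason categories are invented examples; the talk's actual taxonomy isn't given here) is to tally post-release bugs by the reason each one escaped system testing:

```python
from collections import Counter

# Hypothetical customer-found defects, each tagged with why it escaped
# system testing (categories here are invented for illustration).
escaped_defects = [
    {"id": 101, "escape_reason": "no test covered this code path"},
    {"id": 102, "escape_reason": "environment not in test matrix"},
    {"id": 103, "escape_reason": "no test covered this code path"},
    {"id": 104, "escape_reason": "regression suite out of date"},
]

# Tally escape reasons; the most frequent ones point at test gaps.
tally = Counter(d["escape_reason"] for d in escaped_defects)
for reason, count in tally.most_common():
    print(f"{count}  {reason}")
```

Even this crude tally turns anecdotes into a ranked list of where the test effort should change first.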
Evelyn Moritz, AVAYA
The Testing Center of Excellence

When it comes to system and acceptance testing, project teams often end up scrambling for resources late in the project schedule. The test team must be assembled or expanded, learn the application, and improve their skills before testing begins. When the project ends, the team is downsized or disbanded, and its knowledge, skills, and experience are diminished or lost. David Wong thinks there is a better way: organize skilled individuals into a Testing Center of Excellence (TCOE) to leverage their built-up expertise and application knowledge. A TCOE increases operational efficiencies and provides your organization with one-stop shopping for all testing services. The TCOE is responsible for scheduling test cycles, recruiting and training new staff, and retaining a pool of talented test professionals.

David Wong and Dalim Khandaker, CGI
When Will the Product Be Ready to Ship? A "Hurricane Tracking System"

Most test execution tracking systems are backward looking and do not attempt to quantify what remains to be done. Management, on the other hand, is forward looking, asking, "When will testing be done?" And that question itself is fundamentally flawed, implying that testing is either "done" or "not done." What management should be asking is, "When will the risks be acceptable to release the product?" David Gilbert presents a unique approach to tracking and predicting the progress of testing efforts. Using the metaphor of hurricane tracking, he shows how "what if" scenarios can be created to demonstrate the costs and benefits of various test execution scenarios. Take back novel techniques to provide your team and senior management the key information they need to relate the testing effort to the "bottom line" impact of product release.

  • Hurricane tracking as a model for test progress tracking
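A toy projection along these lines (the test counts and daily rates are invented, and the talk's actual model is surely richer) compares "what if" scenarios by projecting the calendar days needed to finish the remaining test executions:

```python
import math

def days_to_finish(remaining_tests: int, tests_per_day: float) -> int:
    """Project the calendar days needed to execute the remaining tests
    at a given execution rate, rounding up to whole days."""
    return math.ceil(remaining_tests / tests_per_day)

# Hypothetical baseline: 240 tests left, 12 executed per day so far.
print(days_to_finish(240, 12))  # current staffing: 20 days
print(days_to_finish(240, 18))  # "what if" we add a tester: 14 days
print(days_to_finish(180, 12))  # "what if" we cut low-risk tests: 15 days
```

Like a hurricane forecast cone, each scenario is a projection from current observations, not a promise; the value is in comparing the scenarios, not in any single number.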
David Gilbert, Sirius Software Quality Associates
How to Design Frustrating Products

In the software business, poor product design leads to frustration and wasted time for our customers. Although we can ignore "usability" and "good design" without hurting the initial success of a product, sales and customer satisfaction will suffer in the long run. Usability is a topic that has been discussed at great length, but many of the accepted design conventions either lack explanations of where and how to apply them or are entirely untrue. Sanjeev Verma explains how to skip usability work during the design phase of a product and spend that time where it really counts: on new feature development. Kendra Yourtee offers proven practices that she has used in her daily routines to improve the usability of products as they are updated. She discusses simple ways to "test" designs against real data before the software is complete.

Sanjeev Verma, Microsoft
