In this interview, technology and organizational consultant Dan North discusses deliberate testing in an agile world. He talks about how testing was perceived before agile became such a big part of the industry, and whether or not we've lulled ourselves into a false sense of testing security.
Josiah Renaudin: Today I'm joined by Dan North, who's a keynote speaker at our upcoming STAREAST conference held in Orlando. First, could you tell us a bit about your experience in the industry?
Dan North: I’ve been working in IT as a developer, coach, consultant, and various other roles for about twenty-five years, in a varied mix of organizations and industries. In terms of agile experience, I first came across Extreme Programming (XP) around 2000 and joined agile pioneers ThoughtWorks in 2002 as their first UK technical hire. I spent eight years there, helping to grow the London office to around 250 people, and along the way I developed behavior-driven development, which I describe as a second-generation agile method, inspired by the work of Kent Beck, Ward Cunningham, Martin Fowler, and other Agile Manifesto signatories.
Since leaving ThoughtWorks at the end of 2009, I’ve been exploring other software delivery methods, as an employee of an electronic trading firm and then as an independent, which has led to my current “Software, Faster” body of work.
Josiah Renaudin: Before agile became a mainstream methodology, how was testing treated or perceived within a standard organization?
Dan North: Well, “agile” is really a blanket term for a whole family of methodologies. The industry seems to have adopted agile as a synonym for Scrum, but that’s a historical accident. In any case, traditional plan-driven software delivery methods tend to view testing as a separate stage near the end of development, and, informally, testers were often seen as second-rate programmers. Testing was viewed as something you did to learn your technical chops so that one day you would graduate to programming.
Josiah Renaudin: Was it difficult to maintain project cohesion within an integrated development team with testing often being outsourced?
Dan North: I think it’s difficult to maintain cohesion with anything being outsourced, unless the outsourcing partner is genuinely a partner and is treated as a first-class player. Usually we outsource things we think are commodity activities, as a cost-saving strategy. Outsourcing something as critical as testing has never made sense to me.
Josiah Renaudin: Why do you think we’ve lulled ourselves into a false sense of security with agile testing?
Dan North: Most of the teams I work with who would describe themselves as agile tend to have two types of testing: automated feature and unit testing, and manual exploratory testing. When you look at the rich and varied landscape of software testing, it’s almost embarrassing how many types of testing we aren’t even aware of, never mind whether we are choosing to do them.
Josiah Renaudin: Do you think we automate too much or too little in our current testing climate?
Dan North: Yes! I believe we automate both too much and too little; or rather, we tend to automate indiscriminately, which leads to both. This is the result of pursuing an arbitrary goal of “automation,” driven either by a test coverage metric or simply the received wisdom that “Automation Is Good.” Automation is just a technique, and like any other technique it can be used well or poorly, and it can help or hinder.
Josiah Renaudin: Can you talk about some of the classes of tests that aren’t being considered today?
Dan North: To give you a frame of reference, a technical leader I know was putting together a talk about testing and compiled a list of all the types of testing he could find, asking numerous testers and researching various testing resources. His final list contained well over 100 distinct types of testing. Most teams I know can name only ten or twenty types of testing, even those with dedicated testers. It’s not surprising that we have so many blind spots.
Josiah Renaudin: Are we purposely ignoring these tests, or are testing teams just not knowledgeable enough about these classes of tests to take notice?
Dan North: I think it’s down to perspective. Terms like “test-driven development” or “automated acceptance testing” imply that driving behavior using automated examples is a substitute for proper testing. That was one of the reasons I started using the term “behavior-driven development,” taking the testing vocabulary right out of it. An unexpected side-effect of that was how much the tester role became central to BDD. I believe the idea of testing teams itself is flawed. Testing is a set of capabilities that should be intrinsic to any software delivery team, rather than something handed off to a dedicated testing team.
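To make that vocabulary shift concrete, here is a minimal sketch, in Python, of what a behavior-first specification can look like. The Account class and the behavior names are hypothetical, invented purely for illustration; the point is that each check reads as a statement about what the system should do, rather than as a “test case” to be executed.

```python
class Account:
    """Toy domain object used only for this example."""

    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


# Behavior-first naming: each function describes an expected behavior,
# structured as Given (context), When (event), Then (outcome).
def should_increase_the_balance_when_a_deposit_is_made():
    # Given an account with a starting balance
    account = Account(balance=100)
    # When a deposit is made
    account.deposit(50)
    # Then the balance reflects the deposit
    assert account.balance == 150


def should_reject_a_non_positive_deposit():
    # Given an account
    account = Account(balance=100)
    # When a zero deposit is attempted, Then it is refused
    try:
        account.deposit(0)
        raise AssertionError("expected a ValueError")
    except ValueError:
        pass


if __name__ == "__main__":
    should_increase_the_balance_when_a_deposit_is_made()
    should_reject_a_non_positive_deposit()
    print("all behaviors hold")
```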
Josiah Renaudin: More than anything else, what message would you like to leave with your audience at STAREAST?
Dan North: Mostly to reaffirm that testing is a first-class discipline in itself and is a necessary and vital part of successful software delivery. And that the role of a tester in an agile team is about raising the team’s awareness and capabilities in the rich domain of testing.
With more than twenty years of IT experience, Dan North uses his deep technical and organizational knowledge to help CIOs, businesses, and software teams deliver quickly and successfully. Putting people first, Dan finds simple, pragmatic solutions to business and technical problems, often using lean and agile techniques. He originated Behaviour-Driven Development (BDD) and Deliberate Discovery, published feature articles, and contributed to The RSpec Book: Behaviour Driven Development with RSpec, Cucumber, and Friends and 97 Things Every Programmer Should Know: Collective Wisdom from the Experts. Dan is a frequent speaker at technology conferences worldwide and occasionally blogs.
User Comments
Testing is essential, as is automating the right tests, and exploratory testing. What I find missing from the interview as a key testing type is a systematic approach. Just yesterday I tested an application that moves data from system A to system B. That app was supposedly already tested, yet there were no test cases and no test results. By the looks of it, all worked well until I conceived of a way to compare all the records on both sides. As it turns out, a key element of data was not always moved over properly. While it worked in 99% of the cases in my test data, the 1% where it failed is unacceptable. In production the failure rate might have been significantly higher (or lower) depending on the nature of the data. That showed that the exploratory, undocumented testing was not sufficient. I am sure that defect would have been found eventually... by a customer. The other aspect of the problem is that management thought that all was well and fully tested, which is why the app was already deployed in production, fortunately only on early adopter/pilot sites.
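A minimal sketch of the kind of systematic comparison the commenter describes: reconcile every record moved from system A to system B, keyed by a record ID. The fetch functions, record IDs, and field names here are hypothetical stand-ins for real extracts from the two systems.

```python
def fetch_system_a():
    # Stand-in for a real extract from system A (the source).
    return {
        "r1": {"name": "Alice", "region": "EU"},
        "r2": {"name": "Bob", "region": "US"},
    }


def fetch_system_b():
    # Stand-in for a real extract from system B; note the corrupted field.
    return {
        "r1": {"name": "Alice", "region": "EU"},
        "r2": {"name": "Bob", "region": None},
    }


def reconcile(source, target):
    """Compare every record on both sides and report discrepancies."""
    problems = []
    for key, src_record in source.items():
        tgt_record = target.get(key)
        if tgt_record is None:
            problems.append((key, "missing in target"))
        elif tgt_record != src_record:
            problems.append((key, f"mismatch: {src_record!r} != {tgt_record!r}"))
    # Records that appeared in the target without a source counterpart.
    for key in target.keys() - source.keys():
        problems.append((key, "unexpected extra record in target"))
    return problems


if __name__ == "__main__":
    for key, issue in reconcile(fetch_system_a(), fetch_system_b()):
        print(f"{key}: {issue}")
```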
Another important aspect of testing is not just finding the defects and documenting them, but also getting the defects fixed. That is becoming more and more of a struggle, as features always trump quality. Adding features, as bad as they may be, makes it possible for companies to deliver a product by the promised delivery date. Making it work right becomes more and more of an afterthought. To me, THAT is the new reality where testing/QA is again a second-class citizen, even with Agile methods in place. In fact, Agile makes this even worse, because it invites decision makers to dismiss issues with the argument that the fix will be “backlogged and hit the next iteration.” Often the story gets pushed to the next iteration, and then the next, and is eventually put on the postponed list until a customer complains; then, all of a sudden, it is front and center and we all have to drop everything to fix it.
One solution might be to determine quality metrics, and acceptable levels for those metrics, that define “ready to ship.” Quality in the software world is a very subjective thing and difficult to measure, aside from counting open bug reports or other obvious metrics that do not carry much information.
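For illustration, a minimal sketch of such a “ready to ship” gate, with hypothetical metric names and thresholds standing in for whatever a team actually agrees to measure:

```python
# Illustrative criteria only: each metric has a comparison operator and an
# acceptable limit that the team would negotiate for itself.
SHIP_CRITERIA = {
    "open_blocker_bugs": ("<=", 0),
    "open_major_bugs": ("<=", 3),
    "regression_pass_rate": (">=", 0.98),
}


def ready_to_ship(metrics):
    """Return (ok, failures) for the current snapshot of quality metrics."""
    failures = []
    for name, (op, limit) in SHIP_CRITERIA.items():
        value = metrics[name]
        ok = value <= limit if op == "<=" else value >= limit
        if not ok:
            failures.append(f"{name}={value} violates {op} {limit}")
    return (not failures, failures)


if __name__ == "__main__":
    snapshot = {
        "open_blocker_bugs": 0,
        "open_major_bugs": 5,
        "regression_pass_rate": 0.99,
    }
    ok, failures = ready_to_ship(snapshot)
    print("SHIP" if ok else f"HOLD: {failures}")
```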