Over the last decade, I've had the privilege of teaching thousands of professional software developers how to be effective with test-driven development (TDD). From these experiences, I have learned that there are three key ingredients for learning test-driven development: understanding what it really is, making code reliably testable, and getting hands-on experience. Let’s look at each of these factors to see what it takes to use TDD effectively on your projects.
1. Understand What TDD Is
The first key ingredient for being effective with test-driven development is understanding what it truly is. I find that there are a lot of misconceptions around how to do TDD properly, and TDD is one of those practices where doing it wrong often exacts a pretty high price.
There's more to TDD than could possibly be said in this short article, but one of the things I’ve noticed is most challenging for people is that they think of TDD as a form of testing or quality assurance. I believe that's the wrong mindset to be in when doing TDD.
The QA mindset is thinking about what could go wrong and finding ways of assuring that it doesn't happen. The developer mindset is hopefully more optimistic, focusing on what has to happen in order for things to go right.
Rather than thinking about doing TDD as a way of testing code, I like to think of doing TDD as a way of specifying behaviors in the system. This leads me to create very different kinds of tests that tend to be more resilient to change in the future, because I'm verifying behaviors rather than testing pieces of code.
The term “unit test” can also be somewhat misleading. If we write the test before we write the code, then it’s really not a test, because when we write it, there’s nothing yet to test. It’s a bit strange to call it a test at this point. Instead, I like to think of it as a hypothesis. When we write the test before we write the code, we’re hypothesizing about how the code will behave, what we need to pass in, and what it will return.
This is similar to how we approach science. We don’t randomly run experiments. We always start with a hypothesis: something we’re trying to prove or disprove. We can then devise an experiment to either prove or disprove our hypothesis. Think of your test as the hypothesis and the code you write to make the test pass as the experiment that proves the hypothesis.
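The hypothesis-then-experiment idea can be sketched in a few lines. This is a hypothetical example, not from the article: the function name `apply_discount` and its behavior are illustrative assumptions. The test is written first, stating the hypothesis; the function below it is the experiment written to prove it.

```python
# The hypothesis, written before the code exists: given a price of
# 100.00 and a 20 percent discount, the function should return 80.00.
def test_twenty_percent_off():
    assert apply_discount(100.00, 20) == 80.00

# The experiment: the simplest code that proves the hypothesis.
# (apply_discount is a hypothetical name chosen for this sketch.)
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

test_twenty_percent_off()  # the hypothesis holds
```

Running the test before writing `apply_discount` would fail, just as an experiment can disprove a hypothesis; making it pass is the proof.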
But the larger misconception, the one I find people really get hung up on when attempting to do TDD, is what they think a “unit” means. For most developers, when they see the term “unit” in “unit test,” they think of a unit of code, like a method or a block of statements, or even a single line of code. This is not what a “unit” means in this context. As I understand it, the term “unit” was adopted to emphasize a functionally independent unit of behavior.
Ideally, the behavior we’re after is in direct support of the acceptance criteria we’re trying to achieve. When unit tests are also acceptance tests, we get requirements traceability and verifiability for free.
A “unit of behavior” might involve several objects working in collaboration. For example, to test the rules around bidding on an auction might require a seller object to create an auction object and a bidder object to bid on that auction. Some people would call that an integration test because it involves the interaction of several objects. I call it a unit test because I’m testing one unit of behavior, the bid.
I often find that when we focus on building features that fulfill acceptance criteria, we write code that is significantly less expensive to maintain because the design is more straightforward to understand and extend.
2. Make Untestable Code Testable
The second key ingredient for learning TDD involves mastering a whole range of techniques that make untestable code testable. A lot of existing code is very difficult to test, and when we have to interact with that code, it can be hard to get it under test.
One main problem I see in code across the many companies I visit is that in order to use a service, a client will instantiate and then directly call that service. From the outside, the service and the client of the service appear to be the same thing and cannot be split apart. But when this is done over and over again across a system, it makes the system one tangled rat’s nest of code that’s impossible to separate and test independently.
One solution to this problem is a technique called dependency injection. You may be familiar with dependency injection frameworks, like Spring. But you can inject dependencies manually, without using a framework. Instead of making an object instantiate a service and then use it, we delegate instantiation to a different object and then pass the reference to the client who uses it.
Allowing the reference to a service to be passed into the consumer of the service lets us pass in a fake when we’re testing. It’s a simple concept that is vitally important for making small, testable units of behavior and breaking up monolithic code.
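Manual dependency injection needs no framework at all. In this hypothetical sketch (the `SmtpMailer` and `OrderProcessor` names are invented for illustration), the client receives its service through the constructor instead of instantiating it:

```python
# Hypothetical sketch of constructor injection: the client does not
# create its service; the service is passed in from outside.

class SmtpMailer:  # the real service
    def send(self, to, body):
        ...  # would talk to a real SMTP server in production

class OrderProcessor:  # the client
    def __init__(self, mailer):  # dependency injected, not instantiated here
        self.mailer = mailer

    def confirm(self, order_id, email):
        self.mailer.send(email, f"Order {order_id} confirmed")

# Production code wires the real service in at the edge of the system:
processor = OrderProcessor(SmtpMailer())
```

Because `OrderProcessor` only knows it was handed something with a `send` method, a test can hand it a fake instead of the real mailer.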
There are several kinds of fakes I can pass in to replace a dependency. One approach is to create a handcrafted mock by simply subclassing the dependency and overriding the methods your code interacts with. Instead of calling the real dependency, you can call the overridden method of your mock, which can return anything that makes sense. Remember, our goal here is to test our code’s interaction with the external dependency, not the dependency itself.
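A handcrafted mock of this kind can be just a few lines. In this hypothetical sketch (the exchange-rate service and all names are invented for illustration), the mock subclasses the dependency and overrides the one method our code calls:

```python
# Hypothetical sketch of a handcrafted mock: subclass the dependency
# and override the method the code under test interacts with.

class ExchangeRateService:
    def rate(self, currency):
        raise RuntimeError("would call an external rate API")

class FixedRateService(ExchangeRateService):
    def rate(self, currency):  # override: canned answer, no network
        return 2.0

class PriceConverter:
    def __init__(self, rates):  # dependency injected
        self.rates = rates

    def to_local(self, amount_usd, currency):
        return amount_usd * self.rates.rate(currency)

# The test verifies our code's interaction with the dependency,
# not the dependency itself.
converter = PriceConverter(FixedRateService())
assert converter.to_local(10, "EUR") == 20.0
```

The real `ExchangeRateService` is never touched; the mock returns whatever makes sense for the behavior being specified.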
3. Get Experience Doing TDD
Having the skills to write good tests of behaviors and being able to write good, testable code is only part of what's needed to master TDD. The third and most important ingredient for mastering TDD is hands-on experience doing it. When developers have done test-driven development and see how their tests catch problems immediately—and how much better their code is as a result—they start getting on board with doing TDD on projects.
It's helpful to learn TDD on a greenfield project because there are a lot more complications around doing test-driven development on legacy code. That in itself is a whole field of study, and there have been a few excellent books on the subject. I think every professional software developer should read Martin Fowler's Refactoring: Improving the Design of Existing Code. And if you are working in legacy code, then you also should read Michael C. Feathers’s Working Effectively with Legacy Code—and don’t forget to check out my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
When I first started teaching software developers about test-driven development, I gave them the lecture on what TDD really is and how to make untestable code testable. They had the knowledge of what to do, but we didn't practice TDD together on a project, so it didn't stick. I'd go back six months later and no one on the team was doing TDD any longer.
But when I started to include twelve hours of hands-on practice using TDD as part of this training, I saw people make a radical shift as they saw for themselves the benefits of doing TDD. This is really the only way we learn and acquire new behaviors: by doing them and proving to ourselves that they're valuable. You don’t get that experience listening to someone else talk about a subject.
Understanding what TDD really is, knowing how to make untestable code testable, and getting hands-on experience of the benefits of doing TDD on a project are the three key ingredients for mastering TDD. I find that when developers have these three ingredients, they get excited about TDD and continue to do it on their projects.
User Comments
Well said. TDD is really a way of designing: it is a bottom-up (inductive) approach. It also presumes that you start with a hypothesis about granular desired behavior, rather than a system architecture concept that you have vetted against desired overall system behavior.
Indeed, dependency injection is the main challenge with TDD. It is often a lot of work to create mocks to the extent that one can get high coverage with the test suite. The question is, is all that work worth it?
There is great division about this, as evidenced by the debates about it, e.g., https://martinfowler.com/articles/is-tdd-dead/
I think the reason for the division is that people work differently: some people prefer a mostly inductive approach, which TDD supports, while others prefer a mostly reductive (top down) approach, which TDD greatly inhibits.
There is also the issue of whether TDD increases or decreases agility: the ability to rapidly and reliably make changes to a codebase. I would suggest that for type-safe languages like Java, C++, and Go, the TDD test suite greatly reduces agility, because every change requires so many changes to so many tests that one tends to say, "ah, forget it - we'll just keep that technical debt". However, for a non-type-safe language like Ruby, Python, or JavaScript, a comprehensive unit test suite is essential, as a safety net for when changes are made. I would say that is a cost of those languages, however: type-safe languages have lower initial productivity but are more maintainable, while non-type-safe languages have higher initial productivity but are less maintainable.
Hi Clifford,
Thank you for your comment and I’m glad you find value in doing TDD as well.
It’s too bad you can’t be with us here in Las Vegas this week at #BetterSoftwareCon where I’m presenting Overcoming Test-Driven Damage. It addresses some of the issues raised in the TDD is Dead movement.
The first point in my post is to test behaviors, not implementation. If, when refactoring code, your tests break, then you’re testing implementation, by definition, because refactoring means to change the implementation without changing the behavior. These tests, as you point out, are costly to maintain.
Doing TDD well is far more involved than can be described in a blog post or comment. I’m not saying that it solves all problems or should be used in all cases but good unit test coverage of the behaviors in an object-oriented system can drop the cost of maintaining and extending code, which is important to a lot of businesses.
I hope this helps.
David.