Part 2 of the debate is much more interesting than Part 1 for those of you interested in software architecture. It covers the concept of “Test Induced Design Damage” and discusses whether TDD leads you towards good designs or to bad designs.
This post summarises the conversation for those of you without time to watch the whole video:
Martin Fowler opens by saying the three questions intended for discussion are:
1. Is TDD the cause of the damage observed by David?
2. Is it actually damage?
3. How can we tell whether something is damage or not?
David shows his latest Gist on Test Induced Design Damage, which contrasts an original Rails controller and action class with a much more complex architecture. He explains that the idea behind the architecture was to use mockist-style testing to separate the controller from the model and the database. It uses the Repository and Command patterns.
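The shape of that contrast can be sketched in plain Ruby (all class names here are hypothetical illustrations, not David's actual Gist):

```ruby
# Direct style: the "controller" talks straight to the model.
Post = Struct.new(:title)

class PostsController
  def create(title)
    Post.new(title) # in real Rails this would be something like Post.create(...)
  end
end

# Layered style: a Command object and a Repository sit between the
# controller and the model, so the command can be unit tested against
# a fake repository without touching a database.
class PostRepository
  def initialize
    @posts = []
  end

  def add(post)
    @posts << post
  end

  def all
    @posts
  end
end

class CreatePostCommand
  def initialize(repository)
    @repository = repository
  end

  def call(title)
    post = Post.new(title)
    @repository.add(post)
    post
  end
end
```

Here `CreatePostCommand.new(PostRepository.new).call("Hello")` takes three classes to do what the direct style does in one; that extra indirection is exactly what the debate is about.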
David’s complaint with this type of architecture is that it adds layers of indirection and extra complexity purely for the sake of unit testability. He dislikes overuse of the Repository and Command patterns. David reminds us that the previous debate agreed three levels of mocking was bad, but he feels more strongly than that: he feels even one level of mocking is bad.
David says externals, such as a payment gateway, should be mocked out, but if he’s testing a controller he doesn’t want to mock out the model, and if he’s testing the model he doesn’t want to mock out the database.
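A minimal sketch of the boundary David is describing (the gateway and class names are hypothetical): the external payment gateway is replaced by a fake, but the model object underneath the checkout logic stays real.

```ruby
# A fake standing in for an external payment gateway: the kind of
# boundary David agrees should be mocked.
class FakeGateway
  attr_reader :charged

  def initialize
    @charged = []
  end

  def charge(amount)
    @charged << amount
    true
  end
end

# A real model object; per David, not mocked when testing the layer above.
class Order
  attr_reader :total

  def initialize(total)
    @total = total
  end
end

class Checkout
  def initialize(gateway)
    @gateway = gateway
  end

  def process(order)
    @gateway.charge(order.total)
  end
end

gateway = FakeGateway.new
Checkout.new(gateway).process(Order.new(100))
gateway.charged # => [100]
```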
Kent Beck says, “If you get out of a car and find you’re in some place you don’t want to be, getting a new car isn’t going to fix that.” The analogy here is that TDD is the car and the programmer is the driver, who has full control over which direction to go in. “To say that it’s TDD that causes that seems like a conflation of cause and effect. You could design the same code without any tests at all.”
“All of the tricks displayed in the Gist are good tricks under certain circumstances. It’s a question of when each of those design moves is worth the cost and when it isn’t.”
David says that’s fair, but TDD wants to lead you in a certain direction. When you start unit testing controllers and whole applications, TDD becomes less useful. He asserts that TDD leads people to a place where, if they had known that’s where they would end up, they wouldn’t have set off in that direction in the first place: developers end up with a monstrosity they never intended to create, and they got there one test at a time.
Kent says it’s not one test at a time but one design decision at a time. TDD, and testability, puts an evolutionary pressure on a design. Kent says there are a lot of binary positions taken along analog dimensions. Granularity is one of those, and neither extreme position is good. He can’t understand the argument that TDD automatically leads to a poor design.
David says there’s a direct correlation between the size of a code base and the ease with which you can change it: the larger it is, the harder it is to change. Multiple layers of indirection that need to be kept in sync slow down developers. The Repository pattern needs to be updated every time there’s a new interaction with your model. It’s not just more costly to change but more costly to understand. 10 clear lines of code are endlessly better than 60 lines with three levels of indirection that need to be comprehended. The oversimplified argument is that more code to do the same thing is bad.
David sees a lot of people getting addicted to TDD and says red green refactor is an addictive flow. Kent jokes that he’s the poorest drug dealer on the planet.
Martin disagrees completely, saying the effect doesn’t have anything to do with TDD per se: although a desire to make the code testable is a driver, it’s really the desire for isolation that’s driving you to that spot.
David agrees but says people see isolation as a goal because of TDD. He says justifications for isolation other than testability are ludicrous. David thinks arguments for the Repository pattern as allowing you to easily swap out your database for an in-memory store or web service are a nonsense pipe dream.
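The swap David is dismissing looks roughly like this (a sketch with hypothetical names; his point is that the promised alternative backends rarely materialise in practice):

```ruby
# One concrete repository hiding the storage mechanism behind save/find.
class InMemoryUserRepository
  def initialize
    @users = {}
  end

  def save(user)
    @users[user[:id]] = user
  end

  def find(id)
    @users[id]
  end
end

# The pitch is that a SQL-backed or web-service-backed repository would
# expose the same save/find interface and could be swapped in freely;
# David's "pipe dream" objection is that you rarely ever write one.
repo = InMemoryUserRepository.new
repo.save(id: 1, name: "Ada")
repo.find(1)
```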
Kent likes the example because although you might think you’re decoupled, you’re really not: the difference in reliability between an in-memory store and a web service is huge. The question is how much effort we are willing to spend to get how much decoupling. 10 lines vs 60 lines is a cohesion argument, as the 10 lines have much higher cohesion.
David says cohesion and coupling often act against each other and striving for low coupling also reduces cohesion. There are many cases where David is willing to trade high coupling for high cohesion. The drive towards testability harms cohesion.
Kent says creating intermediate results can be done without mocking and some of these techniques have been used very successfully in compilers for many years. One of the options is there may be a missing piece of the design that allows isolation without using mocks. He alleges that difficulty testing is a symptom of a poor design. The gating factor preventing good architectures isn’t the coding workflow but the amount of design insight that the developer has.
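Kent’s compiler point can be sketched as follows (a toy pipeline invented for illustration, not from the debate): each phase returns a plain intermediate value that can be asserted on directly, so the phases are isolated from each other without any mocks.

```ruby
# Phase 1: turn a source string into tokens (the intermediate result).
def tokenize(expr)
  expr.scan(/\d+|[+]/)
end

# Phase 2: consume the tokens. Because tokenize returns plain data,
# each phase can be tested on its own, no mock objects required,
# in the same way compilers expose tokens and ASTs between passes.
def evaluate(tokens)
  tokens.grep(/\d/).sum(&:to_i)
end

tokens = tokenize("1 + 2 + 3")
evaluate(tokens) # => 6
```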
David agrees that easily maintainable and testable code is the ideal, and he has seen examples of testing leading you towards a better design. But he has also seen the opposite. He warns against the oversimplified fallacy that more testable equals better: a design that is both highly cohesive and highly testable is not always attainable.
Kent thinks David doesn’t have enough self-confidence, and that there are design insights that lead to cohesive and testable designs. He believes there is a good design out there.
David thinks that Kent’s view crystallises “faith based TDD” i.e. the belief that TDD will eventually somehow lead you to the right design.
Kent says TDD doesn’t take you anywhere because you make the decisions.