We have been running the Coder's Dojo Sweden during the autumn, and yesterday we started the spring sessions. One of the interesting observations that we have made is a follow-up on a comment that I made on Robert C. Martin's page about the Bowling Kata.
The observation was that if you follow the rules “refactor only in the green” (you may only refactor while you have working, passing tests as your life vest) and “only develop new functionality in the red” (only while you have a failing test for that functionality), you will inevitably end up in a situation where the test driving your new functionality forces a redesign, right?
It may be obvious to most of you out there, but we couldn’t find a single mention of how you are supposed to solve that predicament. We surely do not want to do a redesign with a failing test; the green bar is our safety net. Of course you could keep an eye on which tests are causing the red, but that is tedious and error-prone.
What seems logical is to temporarily get rid of the failing test, which you can easily do by commenting it out. Simple, right? And possibly second nature to seasoned TDDers. But if so, why has there been no development of tool support for this? I could easily envision a “disable test” feature in the Eclipse/JUnit GUI…
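To make the idea concrete, here is a minimal sketch of what such a “disable test” feature amounts to: a toy runner that skips any test whose name is in a disabled set, instead of the test being commented out. Everything here (the class names, the `test*` naming convention, the `disabled` set) is illustrative, not from any real framework.

```java
import java.lang.reflect.Method;
import java.util.*;

public class DisableTestSketch {

    // Hypothetical tests for the Bowling kata; names are made up.
    public static class BowlingTests {
        public void testGutterGame() { /* passes trivially */ }
        public void testSpare()      { /* the test that forces the redesign */ }
    }

    // Run every public test* method, skipping names in the disabled set.
    static List<String> run(Object tests, Set<String> disabled) throws Exception {
        List<String> executed = new ArrayList<>();
        for (Method m : tests.getClass().getMethods()) {
            if (!m.getName().startsWith("test")) continue;
            if (disabled.contains(m.getName())) continue; // the "disable test" switch
            m.invoke(tests);
            executed.add(m.getName());
        }
        return executed;
    }

    public static void main(String[] args) throws Exception {
        // Disable the redesign-forcing test, refactor in the green,
        // then re-enable it later by removing its name from the set.
        Set<String> disabled = new HashSet<>(Collections.singleton("testSpare"));
        System.out.println(run(new BowlingTests(), disabled)); // prints [testGutterGame]
    }
}
```

For what it’s worth, JUnit 4’s `@Ignore` annotation bakes essentially this idea into the framework itself.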
I think the idea behind the rule is to implement the functionality first, although you know that you will have to refactor it later. Then, when you have the green bar, go ahead and refactor it.
I don’t agree. One essential function of refactoring is to improve the design. The issue I was trying to address in the article was the fact that I often want to redesign to be able to slot in the new functionality better, or even to make the growth possible. I strongly believe in refactoring to allow functional growth, as opposed to forcing new functionality in and then trying to clean up using refactoring.
I don’t disagree with you. I am a pragmatic person. I just wanted to explain the idea behind the rule. One of the fundamental ideas behind agile development is to move in small steps.
So, the “by-the-book” way of doing it would be:
You create one test, implement just a little, then realize the need to refactor a little before you move on to the next test.
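That first step can be sketched with the Bowling kata itself (the names and the embedded “test” in `main` are my own, not from any particular writeup): write one failing test, then implement just enough to turn the bar green.

```java
// A toy Game class for the Bowling kata. roll() just sums pins,
// which is "just a little" implementation -- enough for a gutter game,
// but it will need refactoring once spares and strikes show up.
public class Game {
    private int score = 0;

    public void roll(int pins) { score += pins; }

    public int score() { return score; }

    public static void main(String[] args) {
        // The one test: twenty gutter balls score zero.
        Game g = new Game();
        for (int i = 0; i < 20; i++) g.roll(0);
        if (g.score() != 0) throw new AssertionError("gutter game should score 0");
        System.out.println("green");
    }
}
```

The point of the sketch is the deliberate under-design: summing pins is obviously wrong for a spare, and that is exactly the moment where the refactoring question in this post arises.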
Another way of doing it is to realize the need to refactor before you even start to create the first test. But be careful, big speculative refactoring must be avoided, since you may have to refactor it again if you were wrong.
The third situation is the one you described: you realize the need for refactoring in the middle of things, when you are sitting there with a failing test. Well, shit happens, I say. The situation will happen sometimes, and I would do as you do: “disable” the test, refactor a little, re-enable the test, and implement until the bar turns green.
The difference between the three cases above is when the developer realizes the need to refactor. Pair programming is a tool for realizing it early in the process. But no rule in any book can force you to realize it at one time or another. It just happens.