Chris Oldwood from The OldWood Thing
I recently worked on a codebase where I had a new feature to implement but found myself struggling to understand the existing structure. Despite pairing a considerable amount I realised that without other people to guide me I still got lost trying to find where I needed to make the change. I felt like I was walking through a familiar wood, but the exact route eluded me without my usual guides.
I reverted the changes I had made and proposed that now might be a good point to do a little reorganisation. The proposal was met with a brief and light-hearted game of “Kent Beck Quote Tennis” – some suggested we do the refactoring before the feature, whilst others preferred after. I felt there was a somewhat superficial conflict here that I hadn’t really noticed before and wondered what the drivers might be for taking one approach over the other.
If you’re into Test Driven Development (TDD) then you’ll have the mantra “Red, Green, Refactor” firmly lodged in your psyche. When practising TDD you first write the test, then make it pass, and finally finish up by refactoring the code to remove duplication or otherwise simplify it. Kent Beck’s Test-Driven Development: By Example is probably the de facto read for adopting this practice.
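As a minimal sketch of one turn of that loop (the discount example and its names are invented here for illustration, not taken from Beck’s book), the three phases might look like this:

```python
import unittest

# Red: write a failing test first for the behaviour we want.
class TestDiscount(unittest.TestCase):
    def test_bulk_orders_get_ten_percent_off(self):
        self.assertEqual(apply_discount(price=100, quantity=10), 90)

# Green: the simplest thing that makes the test pass, magic numbers and all.
def apply_discount(price, quantity):
    if quantity >= 10:
        return price - price * 10 / 100
    return price

# Refactor: with the test as a safety net, name the concepts the code has
# revealed; behaviour is unchanged, the magic numbers gain a meaning.
BULK_THRESHOLD = 10
BULK_DISCOUNT = 0.10

def apply_discount(price, quantity):
    discount = BULK_DISCOUNT if quantity >= BULK_THRESHOLD else 0
    return price * (1 - discount)
```

The two definitions of `apply_discount` sit side by side here only to show the before and after of the refactoring step; in practice the second simply replaces the first.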
The approach here can be seen as one where the refactoring comes after you have the functionality working. From a value perspective most of it comes from having the functionality itself – the refactoring step is an investment in the codebase to allow future value to be added more easily later.
Just after adding a feature is the point where youâ€™ve probably learned the most about the problem at hand and so ensuring the design best represents your current understanding is a worthwhile aid to future comprehension.
Another saying from Kent Beck that I’m particularly fond of is “make the change easy, then make the easy change”. Here he is alluding to a dose of refactoring up front to mould the codebase into a shape that is more amenable to allowing you to add the feature you really want.
At this point we are not adding anything new but are leaning on all the existing tests, and maybe improving them too, to ensure that we make no functional changes. The value here is about reducing the risk of the new feature by showing that the codebase can safely evolve towards supporting it. More importantly, it also gives others the earliest visibility of the new direction the code will take.
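To make that concrete, here is a hypothetical sketch of “make the change easy, then make the easy change” (the exporter module and its registry design are my own illustration, not from the original post):

```python
import json

# Imagine the existing code was a long if/else chain over output formats,
# so adding a new format meant growing the conditional in place.

# "Make the change easy": refactor the existing behaviour into a registry
# first, leaning on the existing tests to prove nothing has changed.
EXPORTERS = {}

def exporter(fmt):
    """Register a function as the exporter for a given format name."""
    def register(fn):
        EXPORTERS[fmt] = fn
        return fn
    return register

@exporter("csv")
def export_csv(data):
    return ",".join(str(item) for item in data)

def export(data, fmt):
    return EXPORTERS[fmt](data)

# "Then make the easy change": the new feature is now one isolated
# function, rather than another branch woven into existing logic.
@exporter("json")
def export_json(data):
    return json.dumps(data)
```

The preparatory step carries all the design risk but makes no functional change, which is exactly why it can be reviewed and landed early.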
This is also the point at which we know the least about what it will take to implement the new feature, but we have a working product that we can leverage to see how it’s likely to be impacted.
Refactor Before, During & After
Taken at face value these two quotes might appear to contradict each other about when the best time to refactor is. Of course this is really a straw man argument, as the best time is in fact “all the time” – we should continually keep the code in good shape.
That said, the act of refactoring should not occur in a vacuum; it should be driven by a need to make a more valuable change. If the code never needed to change we wouldn’t be doing it in the first place, and this should be borne in mind when working on a large codebase where there might be a temptation to refactor purely for the sake of it. Seeing stories or tasks go on the backlog which solely amount to a refactoring is a smell and should be heavily scrutinised.
Of course, there are no absolutes, and whilst I would view any isolated refactoring task with suspicion, that is effectively what I was proposing back at the beginning of this post. One of the side-effects of emergent design is that you can get yourself into quite a state before a cohesive design finally emerges.
Whilst on paper we had a number of potential designs all vying for a place in the architecture, we had gone with the simplest possible thing for as long as possible, in the hope that more complex features would arrive on the backlog and we would then have the forces we needed to evaluate one design over another.
Hence the refactoring decision became one between digging ourselves into an even deeper hole first, and then refactoring heavily once we had made the functional change, or doing some up-front preparation to solidify some of the emerging concepts first. There is the potential for waste if you go too far down the up-front route, but if you’ve been watching how the design and feature list have been emerging over time it’s likely you already know where you are heading when the time comes to put the design into action.
I tend to elide the warning from the original quote about the first part potentially being hard when saying it out loud because the audience is usually well aware of that :o).
See “The Cost of Long-Lived Feature Branches” for a cautionary tale about storing up changes.
See “Relentless Refactoring” for the changes in attitude towards this practice.