Mark Needham

Thoughts on Software Development

The Refactoring Dilemma

On several of the projects that I’ve worked on over the last couple of years we’ve seen the following situation evolve:

  • The team starts coding the application.
  • At some stage there is a breakthrough in understanding and a chance to really improve the code.
  • However, the deadline is tight and we wouldn’t see a return within the time left if we refactored the code now.
  • The team keeps going with the old approach.
  • The project ends up running longer than the original deadline.
  • It’s now much more work to move towards the new solution.

In the situations I describe, the refactorings could have been done incrementally, but doing so would have taken longer than continuing with the original approach and would also have left the code in an inconsistent state along the way.

I think the reason this situation consistently evolves in this manner is that although we talk about writing maintainable code, delivery is often considered more important. Pushing out a delivery date in order to refactor code so that it will be easier to work with in the future isn’t something that I’ve seen happen.

Pushing a delivery date out is a cost that we can see straight away.

On the other hand, it’s quite difficult to estimate how much of a gain you’ll get by refactoring to a more maintainable/easier-to-test solution, and that gain will not be immediate.

We therefore end up in a situation where we tend to make major refactorings only if we’ll see a benefit from them before the project ends.

In one sense that seems reasonable because we’re trying to ensure that we’re adding as much value as possible while the client is paying for our time.

On the other hand, we’re making life harder for future maintainers of the code base, who may in fact be us!

I’d be keen to hear how others handle these types of situations, because it feels like this trade-off is quite a common one and the way we’ve dealt with it doesn’t seem optimal.

Written by Mark Needham

June 13th, 2010 at 1:37 pm

Posted in Coding

  • http://blog.punchbarrel.com/ Frank Carver

    You’re right on both counts – it is a common situation, and it is tough.

    The way I try to deal with this situation is to remember that (a) “refactor” has a precise definition, and (b) changes can occur at different levels in the code.

    Taken together, these realisations help to show that while at the largest level (the whole system or application) a proposed change may indeed be a refactoring (an internal change which preserves existing external behaviour), at more detailed levels within the code the changes are not behaviour-preserving, and thus not strictly refactorings.

    The options you offer (either leave the suboptimal implementation alone, or refactor the whole system in one go) seem based on a view of the system only at the largest level. Even if a whole-system refactoring is likely to be too costly to afford in one go, splitting that goal into linear sub-steps is not the only option.

    Instead, step down a level in your view of the application. Rather than one big chunk, consider the system as several smaller chunks collaborating. What these chunks are (classes, subsystems, packages, layers, phases, blocks of code, etc.) depends on the scale and structure of your system.

    Now that you have this view, your choices may well be more practicable. If any of these sub-chunks can be refactored independently (while maintaining their interaction with other chunks), then you have some pieces of work which can be done. Of course that work may also be too big to stomach in one go, but this process is nicely recursive: consider that chunk as the system and re-apply these rules.

    If your sub-chunks seem too tangled, either reconsider how you divided your system to see if you can find any separable work, or look for any one (ideally small) change you can make to reduce that coupling.

    For a complex, intertwined code base, it’s most likely that only the second option (work to de-couple components and isolate decisions) will be affordable initially. Although such refactorings do not _directly_ contribute to the overall change, they help to get you into a position where you can split, and thus achieve the main goal (a small sketch of that de-coupling step follows below).
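
    As a minimal sketch of that de-coupling step (all of the names here are invented for illustration), one small, behaviour-preserving change is to extract an interface over the tangled dependency, so that whatever sits behind it can later be refactored – or replaced – without rippling through its callers:

        // Extracted interface: isolates the pricing decision so that
        // callers no longer depend on the tangled concrete class.
        interface PriceCalculator {
            double priceFor(String productCode);
        }

        // The existing, suboptimal implementation, unchanged for now but
        // hidden behind the interface; it can be refactored (or replaced)
        // later without touching the callers again.
        class LegacyPriceCalculator implements PriceCalculator {
            public double priceFor(String productCode) {
                // ... the existing tangled logic stays here for now ...
                return "WIDGET".equals(productCode) ? 9.99 : 0.0;
            }
        }

        // A caller, rewired to depend on the interface. Each rewiring is
        // a small, behaviour-preserving change that can ship on its own.
        class OrderService {
            private final PriceCalculator calculator;

            OrderService(PriceCalculator calculator) {
                this.calculator = calculator;
            }

            double totalFor(String productCode, int quantity) {
                return calculator.priceFor(productCode) * quantity;
            }
        }

    Once every caller goes through the interface, the chunk behind it can be treated as a system in its own right and the same rules re-applied recursively.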

  • http://blog.andrei.rinea.ro Andrei Rinea

    This is called technical debt (http://martinfowler.com/bliki/TechnicalDebt.html).

    You just need the courage to convince the people who make budget decisions to allow the refactoring. I’ve seen projects fail for lack of this.

  • Drew

    Communicating clearly about the benefits in business terms is also helpful. Talking about “Technical Debt” in the abstract can be useful. However, putting it in terms the business understands, such as increased time to market, decreased accuracy in estimates, decreased morale in technical teams and increased bug counts, can help qualify your arguments.

  • Vinod

    In my experience, I’ve noticed that the deadline always trumps, but then I work in a services company. I reckon that companies which build products will probably be interested in spending more time upfront on maintainable and simpler code.

  • http://www.markhneedham.com Mark Needham

    @Drew that sounds like quite an interesting approach; I haven’t yet been on a team where I was in that position.

    My colleague Julio Maia suggests not calling it technical debt but instead productivity improvers because that’s the reason that you would choose to address those problems in the first place.

  • INTPnerd

    If you do constant, aggressive refactoring at every level, you are less likely to run into a situation where it will take you too long to refactor a large chunk of the system. It is easier to refactor well-designed code, no matter how much of it you need to change.

    It was actually not all that long ago that I wrote my first truly well-designed code. If someone had told me before then that I had never done this, I would have thought they were crazy. For those who have not done this before: if you keep working at refactoring the same code long after it seems “good”, it will become so much better than you ever thought it could be that it will be a real eye opener. If you aggressively refactor all your code in this way, it will result in a system that is easier to do large refactorings on. I am not saying this is the solution to the problem you expressed, just something that might help.

  • http://davesquared.net David Tchepak

    I find it useful to distinguish between refactoring and redesign. Refactorings are small improvements that can be quickly and continually done throughout a project. Redesigns are the large scale changes that can take days or weeks, with the promise of making the current code/design much clearer and easier to change.

    The problem I’ve found with redesign is that it is optimising for the current code and requirements, both of which can change quite dramatically over the course of a project. Sometimes these changes mean the team has just wasted a large amount of time making optimisations that now need to be undone.

    I’ve had more success by sticking to refactoring in these cases — slowly nudging the design toward the direction that seems to be right for the current requirements (i.e. toward the redesign that was identified), and as things change the direction can be tweaked accordingly. I appreciate your concerns about consistency, but ideally the team can identify refactorings that are small enough to be done quickly and not ripple through too much of the code, while still giving immediate benefits in terms of changeability (a small example of this kind of nudge is sketched after this comment).

    (More rambling on this here: http://www.davesquared.net/2010/02/refactor-or-redesign.html)

    Cheers,
    David
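
    To make the kind of nudge David describes concrete, here is a minimal sketch (the class and method names are invented for illustration): a single collaborator is extracted from an existing class in one small, quickly reversible step, moving the code toward an identified target design without committing to the full redesign up front.

        // Before this nudge, the validation rule lived inline in signUp().
        // Extracting it into its own small class is quick and low-risk,
        // and easy to reverse if the requirements move another way.
        class EmailValidator {
            boolean isValid(String email) {
                return email != null && email.contains("@");
            }
        }

        class SignupService {
            private final EmailValidator validator = new EmailValidator();

            String signUp(String email) {
                if (!validator.isValid(email)) {
                    return "invalid email";
                }
                // ... the rest of the existing signup logic is unchanged ...
                return "ok";
            }
        }

    Each nudge of this size pays for itself almost immediately in changeability, which is what makes it affordable under the kind of deadline pressure described in the post.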