Large scale refactoring: Why would you ever do something like that?

If it ain’t broke, don’t fix it.

It’s a well known phrase, but as we know, most human technological progress was made by people who decided to fix what wasn’t broken. In the software industry especially, one could argue that most of what we do is fixing what isn’t broken.

Fixing functionality, improving the UI, improving speed and memory efficiency, adding features: these are all activities for which it is easy to see whether they are worth doing, and we can then argue for or against spending our time on them. However, there is one activity that for the most part falls into a gray area: refactoring, and especially large scale refactoring.

The term “large scale refactoring” merits explanation. Exactly what counts as “large scale” will vary from case to case, as the term is a bit vague, but I consider anything that significantly affects more than just a few classes, or more than one subsystem and its interface, to be “large.” At the other end, any refactoring that stays hidden behind a single class’s interface is definitely “small.” Of course, there’s a lot of gray area in between. Finally, trust your gut: if you dread doing it, it’s probably “large.”

Refactoring, by definition, doesn’t produce any visible functionality: nothing you can show to the client, no deliverables. At best it might produce small speed and memory usage improvements, but that is not the primary goal. One might say that the primary goal is code you are happy with. But because you are rearranging code in a way that has far-reaching consequences throughout the codebase, there is a chance that all hell will break loose. That, of course, is where the dread we mentioned comes from. Have you ever introduced someone new to your codebase and, after they asked about a peculiarly organized piece of code, responded with something along the lines of:

Yeeeaahh, this is legacy code that made sense at the time, but the specifications changed and it’s now too expensive to fix?

Maybe you even gave them a very serious look and told them to leave it be and not touch it.

The question, “Why would we even want to do it?” is a natural one and may be as important as the refactoring itself, because quite often there are other people you have to convince before you can spend your expensive time on it. So let’s consider the cases in which you would want to do it, and the benefits to be gained:

Performance improvements

You are happy with the current organization of your code from a maintainability standpoint, but it is still causing performance problems. It is just too hard to optimize the way it is currently set up, and the changes would be very brittle.

There is only one thing to do here: profile it extensively. Run benchmarks, estimate how much you will gain, and then try to estimate how that will translate into concrete gains. Sometimes you might even realize that the refactoring is not worth it. Other times you’ll have cold, hard data to back your case.
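As a sketch of what that cold, hard data can look like, here is a minimal Ruby benchmark comparing a naive linear lookup against an indexed one. The records, sizes, and the indexed alternative are all invented for illustration; the point is measuring both paths before committing to the refactoring:

```ruby
require "benchmark"

# Hypothetical data set: measure the current approach against the proposed
# one, so the estimated gain is backed by numbers rather than intuition.
records = (1..10_000).map { |i| { id: i, name: "record-#{i}" } }
index   = records.each_with_object({}) { |r, h| h[r[:id]] = r }

# Current code path: a linear scan on every lookup.
naive_time = Benchmark.realtime { 500.times { records.find { |r| r[:id] == 9_999 } } }

# Proposed code path after refactoring: a hash lookup.
indexed_time = Benchmark.realtime { 500.times { index[9_999] } }

puts format("naive:   %.5fs", naive_time)
puts format("indexed: %.5fs", indexed_time)
puts format("speedup: %.0fx", naive_time / indexed_time)
```

Numbers like these are what turn “it should be faster” into the concrete-gain estimate mentioned above.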

Architectural improvements

Maybe the architecture is okay but somewhat outdated, or maybe it’s so bad you cringe every time you touch that part of the codebase. It works fine and fast, but it is a pain to add new features. In that pain lies the business value of refactoring. The “pain” also means it will take longer to add new features, maybe much longer.

And there’s a benefit to be gained. Make cost/benefit estimates for a few sample features, with and without your proposed big refactoring. Explain that similar differences will apply to most upcoming features touching that part of the system, now and for as long as the system is being developed. Your estimates might be wrong, as they often are in software development, but their ratios will probably be in the ballpark.

Bringing it up to date

Sometimes the code is well written initially. You are extremely happy with it. It is fast, memory efficient, maintainable, and well aligned with the specifications. Initially. But then the specifications change, business goals shift, or you learn something new about your end users that invalidates your initial assumptions. The code still works well, and you are still pretty happy with it, but something is just awkward when you look at it in the context of the end product. Things are placed in slightly the wrong subsystem, properties sit in the wrong class, or maybe some names no longer make sense: they now fulfill a role that, in business terms, has a completely different name. It is still very hard to justify any kind of refactoring here, since the work involved is on a scale with the other examples but the benefits are much less tangible. When you think about it, it is not even that hard to maintain. You just have to remember that some things are actually something else: that A actually means B, and property Y on A actually relates to C.
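A tiny, hypothetical Ruby sketch of that kind of drift; the `Subscription`/`Membership` names are invented for illustration:

```ruby
# Before: written when the business sold "subscriptions" that renewed.
# The business now calls them "memberships" billed on a "billing date",
# so every conversation about this class requires a mental translation.
class Subscription
  attr_accessor :renewal_date
end

# After the rename refactoring: code vocabulary matches business vocabulary,
# and no translation is needed when discussing it.
class Membership
  attr_accessor :billing_date
end
```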

And here lies the real benefit. In the field of neuropsychology, many experiments suggest that our short-term or working memory can hold just 7±2 elements, the Sternberg experiment being one of them. When we study a subject we start with basic elements and, initially, when we think about higher-level concepts we have to think through their definitions. For example, consider the simple term “salted SHA256 password.” At first we have to hold in our working memory the definitions of “salted” and “SHA256,” and maybe even the definition of “hash function.” But once we fully understand the term, it occupies only one memory slot, because we understand it intuitively. That is one of the reasons why we need to fully understand lower-level concepts to be able to reason about higher-level ones. The same is true for the terms and definitions specific to our project. If we have to remember the translation to the real meaning every time we discuss our code, that translation occupies another one of those precious working memory slots. It produces cognitive load and makes it harder to reason through the logic in our code. And if it’s harder to reason, there is a greater chance that we will overlook an important point and introduce a bug.
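To make the example concrete, the term unpacks roughly into the following Ruby sketch (the storage format is an invented illustration of the concept, not a recommendation over dedicated password-hashing libraries):

```ruby
require "digest"
require "securerandom"

# "salted": a random, per-user value mixed into the input,
# so identical passwords don't produce identical digests.
salt     = SecureRandom.hex(16)
password = "correct horse battery staple"

# "SHA256": a hash function applied to the salted input.
digest = Digest::SHA256.hexdigest(salt + password)

# What gets stored is the salt plus the digest, never the raw password.
stored = "#{salt}$#{digest}"
```

Once “salted,” “SHA256,” and “hash function” are internalized, everything above compresses into a single working-memory slot: “a salted SHA256 password.”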

And let’s not forget the more obvious side effects. There is a good chance of confusion when discussing changes with a client, or with anyone familiar only with the correct business terms. New people joining the team have to become familiar with both the business terminology and its counterparts in the code.

I think these reasons are very compelling and justify the cost of refactoring in many cases. Still, be careful: there are plenty of edge cases where you will have to use your best judgment.

Ultimately, large scale refactoring is good for the same reasons many of us enjoy starting a new project. You look at that blank source file and a brave new world starts swirling through your mind. This time you’ll do it right: the code will be elegant, it will be beautifully laid out as well as fast, robust, and easily extensible, and most importantly it will be a joy to work with every single day. Refactoring, small and large scale, allows you to recapture that feeling, breathe new life into an old codebase, and repay that technical debt.

Finally, it is best if the refactoring is driven by plans to make a certain new feature easier to implement. In that case the refactoring will be more focused, and a lot of the time spent on it will be gained back immediately through the quicker implementation of the feature itself.

Preparation

Make sure that your test coverage is very good in all the areas of the codebase you are likely to touch. If certain parts are not well covered, first spend some time bringing the coverage up. If you don’t have tests at all, you should first spend time creating them. If you cannot create a proper test suite, concentrate on acceptance tests, write as many as you can, and make sure to write unit tests while you refactor. Theoretically you can do the refactoring without good test coverage, but it will require a lot of frequent manual testing; it will take much longer and be more error prone. Ultimately, if your test coverage is not good enough, the cost of performing a large scale refactoring might be so high that you should, regrettably, consider not doing it at all. In my opinion, this is a benefit of automated tests that is not emphasized often enough: they allow you to refactor often and, more importantly, to be bolder about it.
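As a minimal sketch of the kind of test worth having in place first, here is a Minitest characterization test for a hypothetical `PriceCalculator` class; it pins down current behavior so the refactoring can be verified against it:

```ruby
require "minitest/autorun"

# Hypothetical class standing in for code that is about to be refactored.
class PriceCalculator
  def total(items)
    items.sum { |item| item[:price] * item[:qty] }
  end
end

# A characterization test: it records what the code does today, so any
# behavior change introduced during the refactoring shows up as a failure.
class PriceCalculatorTest < Minitest::Test
  def test_total_sums_price_times_quantity
    items = [{ price: 10, qty: 2 }, { price: 5, qty: 3 }]
    assert_equal 35, PriceCalculator.new.total(items)
  end
end
```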

Once you’ve made sure your test coverage is good, it’s time to start mapping out your changes. At first you shouldn’t be doing any coding. You need to roughly map out all of the changes involved, trace all of their consequences through the codebase, and load all of that knowledge into your mind. Your goal is to understand exactly why you are changing something and the role it plays in the codebase. If you stumble through it, changing things just because they look like they need to be changed, or because something broke and this seems to fix it, you’ll likely end up in a dead end: the new code seems to work, but incorrectly, and now you can’t even remember all of the changes you’ve made. At that point you might need to abandon the refactoring, and you’ve essentially wasted your time. So take your time and explore the code to understand the ramifications of each change you are about to make. It will pay off handsomely in the end.

You’ll need an aid for the process. You might prefer something else, but I like a simple blank piece of paper and a pen. I start by writing the initial change I want to make in the top left of the paper. Then I start looking for all of the places affected by the change and write them down under the initial change. It is important to use your judgment here. Ultimately, the notes and diagrams on the paper are there for you, so pick a style that best suits your memory. I write out short code snippets with bullet points below them and a lot of arrows leading to other such notes, indicating things that depend on them directly (solid arrows) or indirectly (dashed arrows). I also annotate the arrows with shorthand marks as reminders of specific things I noticed in the codebase.

Remember, you will only be coming back to those notes over the next few days, while you perform the changes planned out in them, and it is perfectly OK to use very short, cryptic reminders so they take up less space and are easier to lay out on the paper. A few times, while cleaning my desk months after a refactoring, I have found one of those papers. It was complete gibberish; I had absolutely no idea what anything on it meant, except that it might have been written by someone gone insane. But I know that piece of paper was indispensable while I was working on the problem.

Also, don’t think you need to write out every single change. You can group them and track the details in a different way. For example, on your main paper you can note that you need to “rename all occurrences of A.b to C.d” and then track the specifics in one of several ways: write them all out on a separate piece of paper, plan to perform a global search for all occurrences once again, or simply leave all the source files where the change needs to be made open in your editor of choice and make a mental note to go back through them once you finish mapping out the changes.

When you map out the consequences of your initial change, by the nature of it being large scale, you will most probably identify additional changes that have further consequences themselves. Repeat the analysis for them as well, noting all the dependent changes. Depending on their size, you can write them out on the same piece of paper or pick a new blank one. A very important thing to do while mapping out the changes is to identify boundaries where you can actually halt the branching changes. You want to limit the refactoring to the smallest sensible, rounded set of changes. If you see a point at which you can just stop and leave the rest as it is, do so, even if you can see it should be refactored, even if it is conceptually related to your other changes. Finish off this round of refactoring, test thoroughly, deploy, and come back for more. You should be actively looking for those points to keep the size of the changes manageable. Of course, as always, make a judgment call. Quite often I have come to a point where I could cut off the refactoring by adding some proxy classes to do a bit of interface translation, and even started implementing them, only to realize they would be as much work as pushing the refactoring a bit further, to a point with a “natural stop” (i.e., where almost no proxy code is needed). I then backtracked, reverted my last changes, and kept refactoring. If this all sounds a bit like mapping out uncharted territory, it’s because I feel it is, except that territory maps are only two-dimensional.
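The proxy-class trick can be sketched like this in Ruby; `LegacyReport` and its interface are invented for illustration:

```ruby
# Old interface, still used by the unrefactored half of the codebase.
class LegacyReport
  def fetch_rows
    [["2014-01-01", 100], ["2014-01-02", 150]]
  end
end

# A thin proxy translating the old interface into the new one, letting the
# refactoring stop at this boundary instead of branching further.
class ReportAdapter
  def initialize(legacy = LegacyReport.new)
    @legacy = legacy
  end

  # New interface: structured rows instead of bare arrays.
  def rows
    @legacy.fetch_rows.map { |date, total| { date: date, total: total } }
  end
end
```

When the translation layer starts to rival the remaining refactoring in size, that is the signal to push on to a natural stop instead.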

Execution

Once you’ve done your preparation, it’s time to execute on the plan. Make sure your concentration is up and secure a distraction-free environment. I sometimes go as far as completely turning off my internet connection at this point. The thing is, if you’ve prepared well, have a good set of notes on the paper next to you, and your concentration is up, you can often move very fast through the changes. In theory, most of the work was done beforehand, during preparation.

Once you’re refactoring actual code, pay attention to strange bits of code that do something very specific and may seem like bad code. Maybe they are bad code, but quite often they are actually handling a strange corner case that was discovered while investigating a bug in production. Over time, most code grows “hairs” or “warts” that handle weird corner case bugs: for example, a strange response code here that is needed for IE6, or a condition there that handles a strange timing bug. They are not important for the big picture but are still significant details. Ideally, they are explicitly covered by unit tests; if not, try to cover them first.

I was once tasked with porting a mid-sized application from Rails 2 to Rails 3. I was very familiar with the code, but it was a bit messy and there were a lot of changes to take into account, so I opted for reimplementation. Actually, it wasn’t a real reimplementation, as that is hardly ever a smart move; rather, I started with a blank Rails 3 app and refactored vertical slices of the old app into the new one, roughly using the process described here. Each time I finished a vertical slice, I went through the old code, looking at each line and double-checking that it had its counterpart in the new code. I was essentially picking all the old code’s “hairs” and replicating them in the new codebase. In the end, the new codebase had all the corner cases addressed.
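A hypothetical sketch of pinning one such “hair” down with a unit test before touching it; the IE6 workaround below is invented for illustration:

```ruby
require "minitest/autorun"

# A "hair": a workaround discovered in production, where an old client
# chokes on 204 No Content, so it gets a plain 200 instead.
def response_status(user_agent)
  return 200 if user_agent.include?("MSIE 6.0")
  204
end

# Pinning the corner case down so a refactoring can't silently drop it.
class ResponseStatusTest < Minitest::Test
  def test_ie6_gets_200_instead_of_204
    assert_equal 200, response_status("Mozilla/4.0 (compatible; MSIE 6.0)")
  end

  def test_everyone_else_gets_204
    assert_equal 204, response_status("Mozilla/5.0")
  end
end
```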

Make sure to perform manual testing often enough. It will force you to look for natural “breaks” in the refactoring that allow you to test a part of the system, and it will give you confidence that you didn’t break anything you didn’t expect to break in the process.

Wrap it up

Once you’re done, make sure to review all of your changes one final time. Look at the entire diff and go over it. Quite often you’ll notice subtle things you missed at the start of the refactoring, because you didn’t yet have the knowledge you have now. It’s a nice benefit of large scale refactoring: you get a clearer mental image of the code’s organization, especially if you didn’t originally write it.

If at all possible, get a colleague to review it as well. They don’t even have to be particularly familiar with that exact part of the codebase, but they should have a general familiarity with the project and its code. Having a fresh set of eyes on the changes can help a lot. If you absolutely can’t get another developer to look at them, you’ll have to pretend to be one yourself: get a good night’s sleep and review the changes with a fresh mind.

If you lack QA, you’ll have to wear that hat as well. Again, take a break and distance yourself from the code, then come back to perform manual testing. You’ve just done the equivalent of going into a cluttered electrical wiring cabinet with a bunch of tools and sorting it all out, possibly cutting and rewiring things, so a bit more care needs to be taken than usual.

Finally, enjoy the fruits of your labor: all the planned changes will now be much cleaner and easier to implement.

When would you not do it?

While there are many benefits to performing large scale refactoring regularly, to keep the project code fresh and of high quality, it is still a very costly operation. There are cases where it is not advisable:

Your test coverage is poor

As mentioned, very poor test coverage might be a big problem. Use your own judgment, but it might be better in the short term to focus on bringing the coverage up while working on new features, performing as many localized, small-scale refactorings as possible. That will help you a lot once you decide to take the plunge and sort out larger parts of the codebase.

Refactoring is not driven by a new feature and the codebase hasn’t changed in a long time

I used the past tense, instead of saying “the codebase will not change,” on purpose. Judging from experience (and by experience I mean being wrong many times), you can almost never rely on your predictions of when a certain part of the codebase will need to change. So do the next best thing: look to the past and assume it will repeat itself. If something hasn’t been changed in a long time, you probably don’t need to change it now. Wait for that change to come along and work on something else.

You are pressed for time

Maintenance is the most expensive part of the project lifecycle, and refactoring makes it less expensive. It is absolutely necessary for any business to use refactoring to reduce technical debt and make future maintenance cheaper. Otherwise it is in danger of entering a vicious cycle in which it becomes more and more expensive to add new features. I hope it is self-evident why that is bad.

That said, large scale refactoring is very, very unpredictable when it comes to how long it will take, and you shouldn’t do it halfway. If, for whatever internal or external reasons, you are pressed for time and unsure you will be able to finish within that timeframe, you might need to abandon the refactoring. Pressure and stress, especially the time-induced kind, lead to a lower level of concentration, which is absolutely necessary for large scale refactoring. Work on getting more buy-in from your team to set aside time for it, and look in your calendar for a period when you will have that time. It doesn’t have to be a continuous stretch: of course you will have other issues to solve, but the breaks should not be longer than a day or two. Otherwise you will have to keep reminding yourself of your own plan, because you will start forgetting what you’ve learned about the codebase and exactly where you stopped.

Conclusion

I hope I have given you some useful guidelines and convinced you of the benefits, and dare I say necessity, of performing large scale refactoring on certain occasions. The topic is vague, and of course nothing said here is a definite truth; particulars will vary from project to project. I have tried to give advice that is, in my opinion, generally applicable, but as always, consider your particular case and use your own experience to adapt to its specific challenges. Good luck refactoring!


Comments

Dmitry Pavlov
Nice read. Thanks! I agree: never touch code not covered with tests. I’d also recommend avoiding refactoring if you can’t keep your code working at the end of the day; split the refactoring task if a day is not enough. Yes, it’s usually faster to refactor many things in the code at once, but it’s very dangerous.
Radan Skoric
Hey Dmitry, thanks! I agree. Of course, sometimes you won’t be able to end the day with the code in a working state (maybe even because of an unplanned interruption), but that should definitely be the aim for the day.
Tomas Agrimbau
Nice Blog Post!
brianm101
Nice article and rationale! I find it useful to refactor upwards: do the simple stuff first, rename classes and variables, and that way you get to see the bigger picture more clearly. You may even conclude that the big refactoring is no longer required, as things look more sensible! Sometimes simply utilising new language features can make the old code look clear again, and even throw up undiscovered bugs, especially in C++.