
Design for today 🧑‍🎨

“You should build software that is extensible and future-proof.”

A painter drawing general things.

That sounds like a good idea, doesn’t it? Well, that depends on how good you are at predicting the future.

Open-Closed Principle #

Many developers are familiar with the SOLID principles. They are intended to make designs more flexible and maintainable. With regard to future-proofing, the “O” in SOLID represents the Open-Closed Principle (OCP for short), coined by Bertrand Meyer, which says that:1

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.

Its purpose is to encourage proper abstraction between concepts, to avoid cases where a change cascades through the whole system. Robert “Uncle Bob” Martin explains how the OCP prevents this in a 1996 article:2

The open-closed principle attacks this in a very straightforward way. It says that you should design modules that never change. When requirements change, you extend the behavior of such modules by adding new code, not by changing old code that already works.

The goal is to design modules which are “complete” but which expose extension points that allow you to adjust or extend their behavior at a later point.
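As a rough sketch of what an extension point can look like (the names here are hypothetical, not from the article): a report generator that delegates formatting to an interface never needs to be edited to support a new output format, because new formats are added beside it.

```typescript
// Hypothetical sketch: generateReport is closed for modification,
// but open for extension through the Formatter interface.
interface Formatter {
  format(rows: Record<string, string>[]): string;
}

class CsvFormatter implements Formatter {
  format(rows: Record<string, string>[]): string {
    if (rows.length === 0) return "";
    const headers = Object.keys(rows[0]);
    const body = rows.map((row) => headers.map((h) => row[h] ?? "").join(","));
    return [headers.join(","), ...body].join("\n");
  }
}

// Supporting JSON later means adding a class, not editing existing code.
class JsonFormatter implements Formatter {
  format(rows: Record<string, string>[]): string {
    return JSON.stringify(rows, null, 2);
  }
}

function generateReport(rows: Record<string, string>[], formatter: Formatter): string {
  return formatter.format(rows);
}
```

The extension point is the `Formatter` interface: new behavior arrives as new code, and `generateReport` stays untouched.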

Speculative closure #

While the OCP can lead to well-designed code, it has a big drawback. You cannot create a meaningful module which can be extended in every possible way, so you will have to choose what kinds of changes to close against. Designing a module to be open for extension but closed for modification therefore requires the designer to speculatively determine what extensions will be desirable in the future. Basically, to predict the future.
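To make that concrete with the hypothetical formatter sketch from earlier: that design is closed against exactly one kind of change, new output formats, and nothing else. The moment a requirement cuts across that axis, the closure does not help:

```typescript
// The Formatter interface was closed against "new output formats".
// A new requirement -- locale-aware formatting -- cuts across that axis:
// the interface itself must change, and every implementation with it.
interface Formatter {
  format(rows: Record<string, string>[], locale: string): string;
}
// CsvFormatter, JsonFormatter, and any third-party formatter now all
// have to be modified. The closure we speculated on did not cover this.
```

Which kinds of change will actually arrive is exactly the part you have to predict.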

Unfortunately, people are not very good at that. And speculatively designing for the future is a fast lane to over-engineering and unnecessary complexity.

Now, of course, I’m not the first person to realize this. In fact, even Martin’s article from 1996 highlights this. It talks about the need for “strategic closure” (which I guess sounds more trustworthy than “speculative closure”).

In general, no matter how “closed” a module is, there will always be some kind of change against which it is not closed.

Since closure cannot be complete, it must be strategic. That is, the designer must choose the kinds of changes against which to close his design. This takes a certain amount of prescience derived from experience. The experienced designer knows the users and the industry well enough to judge the probability of different kinds of changes. He then makes sure that the open-closed principle is invoked for the most probable changes.

In my experience, the cases where you can accurately “judge the probability of different kinds of changes” are few and far between. For myself, I can barely predict what code I will write at the end of the day. And that’s on a good day! 😉

The cost of complexity #

So what if building a more general and extensible solution takes a little longer? It does not hurt to have a more general solution, right?

I would say it does hurt. It is easy to overlook the ongoing cost of working in a solution that is more complex than it needs to be. Making something extensible means building more than you need. That is, by definition, unnecessary.

And if you later realize that you are in a situation where a more generalized solution would be helpful, there is nothing preventing you from generalizing at that point.

You may say that if we don’t design it “well” now (meaning a generalized solution), it will never be generalized. People will not realize it needs to be done, and continue building in the wrong direction. But then, to be frank, if you don’t think your team is capable of identifying a current need for generalization, what makes you think you will be able to correctly identify a future one?

Don’t ask yourself: what if I don’t build this abstraction now and then need it later? Ask instead: what if I build this abstraction now and then never use it?

Modifying code is often cheap #

Another dimension that I find missing when talking about the OCP and future-proofing in general is the degree to which you control the source code.

For a publicly published library, following the OCP makes a lot of sense. You want to keep your API as stable as possible without preventing extensions, especially if the extensions will be made by other people. The advantages likely outweigh the drawbacks.

However, for code that is fully under your control, the balance is not as favorable. If you have control over the whole codebase, the cost of modifying existing code is relatively low. And if the cost of change is low, what is the purpose of attempting to predict future needs?3

And in the rare case where you managed to design exactly the extensibility you needed? Great! Now, how much time did you save by adding that extensibility from the beginning? Would it really have cost you more to do it later instead?

Make the easy change #

While future-proofing your code by making it extensible may sound like a good idea, it comes with a cost. Making your code more general than needed increases the complexity. Especially when the code is under your control, consider focusing on the current needs, and know that you can always change it as you learn more.

This may look like an argument for not designing. That is not the case. You still need to do the hard design work. During planning, it’s perfectly ok to go ahead and anticipate future needs. Then take a step back and focus the design on your current needs rather than the needs of tomorrow.

In the words of Sandi Metz in “Practical Object-Oriented Design”4:

Do not feel compelled to make design decisions prematurely. Resist, even if you fear your code would dismay the design gurus. […] When the future cost of doing nothing is the same as the current cost, postpone the decision. Make the decision only when you must with the information you have at that time.

So what can you do instead? If your design is not generalized and extensible, what should you then do when the feature you need to add is not supported by the architecture?

You modify it. You refactor. But you do it just in time, when you truly know what you need.

As Kent Beck expressed it:5

For each desired change, make the change easy (warning: this may be hard), then make the easy change.
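A small sketch of what that can look like (hypothetical code, not Beck’s own example): if discount rules are hard-coded inline and a new one is needed, first refactor so the rules live in one obvious place (the part that may be hard), and then the new rule becomes a one-line addition.

```typescript
// Before: adding a discount means editing the pricing logic itself.
function priceBefore(total: number, isStudent: boolean, isSenior: boolean): number {
  let price = total;
  if (isStudent) price *= 0.9;
  if (isSenior) price *= 0.85;
  return price;
}

// Step 1 -- make the change easy: gather the rules in one place.
interface Customer {
  isStudent: boolean;
  isSenior: boolean;
  isEmployee: boolean;
}

type DiscountRule = (customer: Customer) => number; // price multiplier, 1 = no discount

const rules: DiscountRule[] = [
  (c) => (c.isStudent ? 0.9 : 1),
  (c) => (c.isSenior ? 0.85 : 1),
];

// Step 2 -- make the easy change: the new rule is a single line.
rules.push((c) => (c.isEmployee ? 0.8 : 1));

function price(total: number, customer: Customer): number {
  return rules.reduce((acc, rule) => acc * rule(customer), total);
}
```

The refactoring happens just in time, driven by a change you actually need to make rather than one you predicted.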

glyn: Very helpful. I generally agree and I've definitely been guilty of over-engineering in the past. The only exception that springs to mind is that when it becomes necessary to generalise some code, it's sometimes worth going further than immediately necessary.

The criterion in this case should be to come up with a well-rounded abstraction (a good concept, if you like). The trick is to consider what general extension the current extension might be part of and then ask the question: is the general extension simpler than the specific extension needed right now? How can we judge simplicity? Two clues are if it's easier to document or easier to test.

  1. The Open-Closed Principle was first described in Object-Oriented Software Construction by Bertrand Meyer. ↩︎

  2. The Open-Closed Principle, by Robert Martin, popularized the idea. ↩︎

  3. When feeling the urge to generalize, ask yourself: will it be harder tomorrow? ↩︎

  4. As quoted by Jochen Lillich. ↩︎

  5. Tweet by Kent Beck. ↩︎