Continuous Deployment - Not Without Modularity

Ask what a module is and no two people will give the same answer. The interesting thing is, that doesn't keep modularity from being one of the most frequently cited attributes a software system should ideally have. It's somewhere just next to scalable and robust, in no particular order.

The first thing one should understand about modularity is that it's by no means boolean. It's not like the system at hand is either modular or not (I guess in a way you could say every system is modular because it has at least one module - the system itself - but that's philosophy). Modularity is not discrete, it's... continuous. Right, just like Continuous Deployment, but that's not important right now. To get a grasp of this mysterious modularity thingy, one can ask a simple question - which parts of the system can and cannot be unit tested (which is actually two questions, but you get the drift)? If your system is a monolithic piece of... well, mind the kids, software, then the answer will probably be "Hmm, I can only sort of test the whole thing, in a black-box kind of way". If your system is modular, you can pretty much test most of it, down to the class level. Now, that doesn't necessarily mean you should, but you could if you wanted to.
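To make "testable down to the class level" concrete, here's a minimal sketch of a single class exercised on its own, with no rest-of-system attached (the class and its names are invented purely for illustration):

```java
// A small, self-contained class with no dependencies on the rest of the
// system - so it can be placed in a "test bed" and exercised in isolation.
class SlugGenerator {
    String slugify(String title) {
        // trim, lowercase, collapse non-alphanumeric runs into dashes
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }
}

public class SlugGeneratorTest {
    public static void main(String[] args) {
        SlugGenerator gen = new SlugGenerator();
        String slug = gen.slugify("  Continuous Deployment  ");
        if (!slug.equals("continuous-deployment")) {
            throw new AssertionError("unexpected slug: " + slug);
        }
        System.out.println("ok: " + slug); // prints ok: continuous-deployment
    }
}
```

In a monolithic system there is no equivalent of this: you can't instantiate one class without dragging half the system in after it.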

For a system to be truly testable, it should be modular, which I remember as a true "Aha" moment for me - "testable is modular, and modular is testable". It may sound trivial that in order to test a certain part it should first be detached from the rest of the system and placed in a test bed, but making parts detachable is far from trivial and plays a key role here. Unless you planned ahead for "detachable parts", there's little chance they will be, and you'll end up with a system that is most probably the opposite of modular - monolithic. Monolithic means one big chunk, tied up by never-to-be-broken internal dependencies that prevent one from taking parts out, say for testing. In a monolithic system, any attempt to detach a particular part requires that one also bring in that part's dependencies, and then those dependencies' dependencies, and so on, which basically boils down to an "all or nothing" kind of system. The keen reader might have noticed I've been (somewhat irresponsibly) using the term "part", but that's only because I've already bashed the term "module" and it would be uncool of me to keep using it throughout the post. The truth of the matter is that "part" is no better than "module", but it is a bit less abused. If you replace the word "part" with "a group of classes" (which can be of size = 1), the text should still make sense, grammar issues aside.
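One common way to plan ahead for "detachable parts" is to depend on interfaces rather than concrete classes, so a part's dependencies can be swapped for stubs in a test bed. A minimal sketch, with entirely hypothetical names:

```java
// The dependency is an interface, not a concrete class - that's what
// makes PriceCalculator detachable from whatever TaxPolicy really is.
interface TaxPolicy {
    double rateFor(String region);
}

class PriceCalculator {
    private final TaxPolicy taxPolicy;

    PriceCalculator(TaxPolicy taxPolicy) {
        this.taxPolicy = taxPolicy;
    }

    double priceWithTax(double net, String region) {
        return net * (1.0 + taxPolicy.rateFor(region));
    }
}

public class Demo {
    public static void main(String[] args) {
        // In a test bed, a stub stands in for the real tax implementation,
        // so PriceCalculator is tested without its dependencies' dependencies.
        PriceCalculator calc = new PriceCalculator(region -> 0.25);
        System.out.println(calc.priceWithTax(100.0, "EU")); // prints 125.0
    }
}
```

Had `PriceCalculator` instantiated its tax policy directly (say, one that reads rates from a database), detaching it would have meant dragging in the database, its configuration, and so on - the "all or nothing" trap.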

So what does all this have to do with Continuous Deployment? Right, getting there.
With Continuous Deployment our code hits production pretty much right after we commit it, and unless you don't mind bringing production down, you should be very careful about what you're doing. That's not to say you should be afraid of releasing to production - fear is not exactly an engineering discipline - you should rather be careful, as in have things under control and be able to turn off/disable the effect of your changes in case of trouble. One could say this is the very purpose of using source control, so that offending changes can be reverted, and while this is very true, reverting to a working revision is often far from being an easy task**, and will most probably require a subsequent release of binary files. If only you had a way to turn features on and off using configuration... wait, you do have a way - feature flags. Modularity is the single most important enabler for effective and maintainable feature flags. While in modular systems feature flags may be implemented by employing object-oriented techniques such as interfaces and polymorphism, in monolithic systems feature flags are simply a battalion of if statements wrapping the sections in your flow you're planning on disabling in case things go sideways.
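Here's a sketch of what a polymorphic feature flag can look like - a single configuration-driven branch point choosing between implementations of one interface, instead of if statements scattered through the flow (all names here are hypothetical; in production the flag value would come from a config file or flag service):

```java
// One interface, two implementations: the flag decides which one runs.
interface RecommendationEngine {
    String recommend(String userId);
}

class LegacyEngine implements RecommendationEngine {
    public String recommend(String userId) { return "bestsellers"; }
}

class PersonalizedEngine implements RecommendationEngine {
    public String recommend(String userId) { return "picks-for-" + userId; }
}

class EngineFactory {
    // The only place the flag is checked - the rest of the code just
    // talks to the interface and never knows which engine it got.
    static RecommendationEngine fromFlag(boolean personalizedEnabled) {
        return personalizedEnabled ? new PersonalizedEngine() : new LegacyEngine();
    }
}

public class FlagDemo {
    public static void main(String[] args) {
        // Flip the flag to false in configuration and the new feature is
        // gone from production - no revert, no binary release.
        RecommendationEngine engine = EngineFactory.fromFlag(false);
        System.out.println(engine.recommend("alice")); // prints bestsellers
    }
}
```

The point is that the flag check lives in exactly one place; turning the feature off doesn't require hunting down every if statement that mentions it.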

** In a constantly evolving system, reverting to the last known good version can be non-trivial for various reasons, including but not limited to:

  • losing bug fixes made after the last known good revision
  • losing must-have features implemented after the last known good revision
  • running the risk of breaking APIs and being incompatible with other components
  • reverting versions may involve merging, which is a known black hole in the realm of source control

