Chris Oldwood from The OldWood Thing
In the world of IT support there is the universal solution to every problem – turn it off and on again. You would hope that this kind of drastic action is only drawn upon when all other options have been explored, or that the problem is known a priori to require such a response and it is the least disruptive course of action. Sadly, what was once just a joke has become everyday life for many systems as rebooting or restarting is woven into the daily support routine.
In my professional career I have worked on a number of systems where there are daily scheduled jobs to reboot machines or restart services. I'm not talking about the modern, proactive kind like the Chaos Monkey which is used to probe for weaknesses, or when you force cluster failovers to check everything's healthy; I'm talking about jobs where the restart is required to ensure the correct functioning of the system – disabling them would cripple it.
The need for the restart is usually to overcome some kind of corrupt or immutable internal state, or to compensate for a resource leak, such as memory, which degrades the service to an unacceptable level. An example of the former I've seen is to flush a poisoned cache or pick up the change in date, while the latter might be unclosed database connections or file handles. The notion of "recycling" processes to overcome residual effects has become so prominent that services like IIS provide explicit support for it.
Depending on where the service sits in the stack the restart could be somewhat disruptive, if it's located on the edge, or largely benign if it's purely internal, say, a background message queue listener. For example, I once worked on a compute farm where one of the front-end services was restarted every night and that caused all clients to drop their connection, resulting in a support email being sent due to the "unhandled exception". Needless to say, everyone just ignored all the emails as they only added to the background noise, making genuine failures harder to spot.
These kinds of draconian measures to try and attain some system stability actually make matters worse, as the restarts begin to hide genuine stability issues which eventually start happening during business hours as well, causing grief for customers as unplanned downtime occurs. The impetus for one of my first ACCU articles, "Utilising More Than 4GB of Memory in a 32-bit Windows Process", came from just such an issue, where a service suddenly started failing with out-of-memory errors even after a restart if the load was awkwardly skewed. It took almost four weeks to diagnose and properly fix the issue, during which there were no acceptable workarounds – just constant manual intervention from the support team.
I also lost quite a few hours on the system I mentioned earlier debugging a problem in the caching mechanism which was masked by a restart and only surfaced because the restart failed to occur. No one had remembered this failure mode because everyone was so used to the restart hiding it. Having additional complexity in the code for a feature that will never be exercised in practice is essentially waste.
Although it's not true in all cases (the memory problem just described being a good example), the restart option may be avoidable if the process exposes additional behaviours that allow for a more surgical approach to recovery. Do you really need to restart the entire process just to flush some internal cache, or maybe just a small part of it? Similarly, if you need to bump the business date via an external stimulus, can that not be done through a "discoverable" API instead of hidden as part of a service restart?
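As a sketch of what that more surgical approach might look like (the registry and cache names here are hypothetical, not from any particular system), a service could keep a registry of its internal caches so that a single one can be flushed on demand rather than bouncing the whole process:

```python
import threading

class CacheRegistry:
    """Registry of named in-process caches so an admin hook can flush
    one cache surgically instead of restarting the whole service."""

    def __init__(self):
        self._lock = threading.Lock()
        self._caches = {}

    def register(self, name, cache):
        with self._lock:
            self._caches[name] = cache

    def flush(self, name):
        """Flush a single named cache; returns False if unknown."""
        with self._lock:
            cache = self._caches.get(name)
            if cache is None:
                return False
            cache.clear()
            return True

# Hypothetical usage: flush just the poisoned cache, nothing else.
registry = CacheRegistry()
registry.register("instrument-prices", {"VOD.L": 123.4})
registry.flush("instrument-prices")
```

The registry itself is trivial; the value is that flushing becomes an addressable behaviour that an admin endpoint or tool can invoke.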
In some of my previous posts and articles, e.g. "From Test Harness To Support Tool", "Home-Grown Tools" and more recently "Libraries, Console Apps, and GUIs", I have described how useful I have found writing simple custom tools to be for development and deployment but also, more importantly, for support. What I think was missing from those posts, and what I have tried to capture in this one, most notably through its title, is the focus on resolving system problems with the minimum of disruption. Assuming that you cannot actually fix the underlying design issue without a serious amount of time and effort, can you at least alleviate the problem in the shorter term by adding simple endpoints and tools that can be used to make surgical-like changes inside the critical parts of the system?
For example, imagine that you're working on a grid computing system where tasks are dished out to different processes and the results are aggregated. Ideally you would assume that failures are going to happen and that some kind of timeout and retry mechanism would be in place so that when a process dies the system recovers automatically. However, if for some reason that automatic mechanism does not exist, how are you going to recover? Given that someone (or something) somewhere is waiting for their results, how are you going to "unblock the system" and allow them to make progress without also disturbing all your other users who are unaffected?
You can either try and re-submit the failed task and allow the entire job to complete, or kill the entire job and get the client to re-submit it. As we've seen, one way to achieve the latter would be to restart parts of the system, thereby flushing the job from it. When this is a once-in-a-million event it might make sense, but once the failures start racking up, throwing away every in-flight request just to fix the odd broken one becomes more and more disruptive. Instead you need a way to identify the failed task (perhaps using the logs) and then instruct the system to recover, such as by killing just that job or asking it to resubmit the task to another node.
Hence, ideally you'd just like to hit one admin endpoint and say something like this:
> admin-cli kill-job --server prod --job-id 12345
If that's not easily achievable and there is distributed state to clear up, you might need a few separate little tools instead that can target each part of the system, for example:
> admin-cli remove-node -s broker-prod --node NODE99
> admin-cli remove-results -s results-prod --job 12345
> admin-cli remove-tasks -s taskq-prod --job 12345
> admin-cli reset-client -s ui-prod --client CLT42
> admin-cli add-node -s broker-prod --node NODE99
This probably seems like a lot of work to write all these little tools, but what I've found in practice is that usually most of the tricky logic already exists in the services – you just need to find a way to invoke it externally with the correct arguments.
These days it's far easier to graft a simple administration endpoint onto an existing service. There are plenty of HTTP libraries available that will allow you to expose a very basic API which you can even poke with curl. If you're already using something more meaty like WCF then adding another interface should be trivial too.
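As a minimal sketch of such an endpoint (using Python's standard library purely for illustration; the route and the in-memory job table are invented, not any real service's API), it can be done in a handful of lines:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory job table standing in for real service state.
jobs = {"12345": "running"}

class AdminHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # e.g. curl -X POST http://localhost:8081/kill-job/12345
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "kill-job" and parts[1] in jobs:
            del jobs[parts[1]]
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def serve(port=8081):
    """Run the admin endpoint; in a real service this would live
    alongside the main workload in the same process."""
    HTTPServer(("localhost", port), AdminHandler).serve_forever()
```

The `kill-job` admin command from earlier then becomes little more than a curl wrapper around this route.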
Modern systems are becoming more and more decoupled through the use of message queues, which also adds a natural extension point: they typically already do all the heavy lifting, so you just need to add new message handlers for the extra behaviours. One of the earliest distributed systems I worked on used pub/sub on a system-wide message bus for both functional and administrative use. Instead of using lots of different tools we had a single admin command-line tool that the runbook generally used (even for some classic sysadmin stuff like service restarts), as it made the whole support experience simpler.
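Sketching that extension point (the message shapes and handler names here are invented for illustration), an admin command becomes just another message type wired into the existing dispatch loop:

```python
# Map of message type -> handler; the existing dispatch loop stays
# untouched, admin behaviours are just extra entries in this table.
handlers = {}

def handles(msg_type):
    """Decorator registering a handler for one message type."""
    def register(fn):
        handlers[msg_type] = fn
        return fn
    return register

jobs = {"12345": "running", "12346": "running"}

@handles("admin.kill-job")
def kill_job(msg):
    jobs.pop(msg["job_id"], None)

def dispatch(msg):
    handler = handlers.get(msg["type"])
    if handler is not None:
        handler(msg)

# An admin tool publishes this onto the same bus the system already uses.
dispatch({"type": "admin.kill-job", "job_id": "12345"})
```

The queue infrastructure already handles delivery, retries and security, which is why this route tends to be so cheap to add.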
Once you have these basic tools it then becomes easy to compose and automate them. For example, on a similar system I started by adding a simple support stored procedure to find failed tasks in a batch processing system. This was soon augmented by another one to resubmit a failed task, which was then automated by a simple script. Eventually it got "productionised" and became a formal part of the system, providing the "slow retry" path for automatic recovery from less transient outages.
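That progression from query, to tool, to automated recovery can be sketched like this (the task table and status values are made up; the real thing was a pair of stored procedures glued together by a script):

```python
def find_failed_tasks(tasks):
    """Stand-in for the original support query/stored procedure."""
    return [t for t in tasks if t["status"] == "failed"]

def resubmit(task):
    """Stand-in for the manual resubmit tool that came next."""
    task["status"] = "queued"
    task["attempts"] += 1

def slow_retry_pass(tasks, max_attempts=3):
    """The 'productionised' composition: sweep up failures and
    resubmit any that haven't yet exhausted their retries."""
    for task in find_failed_tasks(tasks):
        if task["attempts"] < max_attempts:
            resubmit(task)

tasks = [
    {"id": 1, "status": "failed", "attempts": 0},
    {"id": 2, "status": "done", "attempts": 1},
    {"id": 3, "status": "failed", "attempts": 3},
]
slow_retry_pass(tasks)
```

Run on a schedule, the composed pass is exactly the kind of slow retry path that turns a manual support chore into automatic recovery.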
Design for Supportability
One of the design concepts I've personally found really helpful is Design for Testability; something which came out of the hardware world and pushes us to design our systems in a way that makes them much easier to test. A knock-on effect of this is that you can reduce your end-to-end testing burden and deliver quicker.
A by-product of Design for Testability is that it causes you to design your system in a way that allows internal behaviours to be observed and controlled in isolation. These are often the very behaviours that supporting a system in a more fine-grained manner requires. Hence, by thinking about how you test your system components you are almost certainly thinking about how they would need to be supported too.
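The clock is a classic example of this overlap. Injecting it (as in the sketch below; the shape of the service is hypothetical) gives you deterministic tests and, almost for free, a seam through which a support tool could bump the business date without a restart:

```python
import datetime

class BatchService:
    """Takes the clock as a dependency instead of calling
    datetime.date.today() directly deep inside the logic."""

    def __init__(self, clock=datetime.date.today):
        self._clock = clock

    def business_date(self):
        return self._clock()

# In a test (or via an admin hook) the date is now controllable.
service = BatchService(clock=lambda: datetime.date(2024, 1, 2))
```

The same injection point that makes date-rollover tests deterministic is the one an admin endpoint would use to pick up a date change surgically.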
Ultimately, of course, those same thoughts should also make you think about how the system is likely to fail and therefore what needs to be put in place beforehand to help it recover automatically. In the unfortunate event that you can't recover automatically in the short term, you should still have some tools handy to facilitate a swift and far less disruptive manual recovery.
Note that this is different from a process restarting itself because it has detected that it might have become unstable, perhaps through the use of the Circuit Breaker pattern.
Aside from the benefits of comprehension this makes the system far more testable, as it means you can control the date and therefore write deterministic tests.
See "When Does a Transient Failure Stop Being Transient" for a tangent about fast and slow retries.
Designing any distributed system that does not tolerate network failures is asking for trouble in my book, but the enterprise has a habit of believing the network is "reliable enough".