Chris Oldwood from The OldWood Thing
The meme tells us to "automate all the things" and it's a noble cause which has sprung up as a backlash against the ridiculous amount of manual work we've often had to do in the past. However, in our endeavour to embrace the meme we should not go overboard and lose sight of what we're automating and why.
The main reason we tend to automate things is to save ourselves time (and by extension, money) by leveraging tools that can perform tasks quicker than we can, but also with more determinism and reliability, thereby saving even more time. For example, pasting a complex set of steps off a wiki page into a command prompt to perform a task is slower than an interpreter running a script and is fraught with danger, as we might screw up at various points along the way and so end up not doing exactly what we'd intended. Ultimately computers are good at boring, repetitive tasks whilst we humans are not.
However, if we only do this operation once every six months and there are too many potential points of failure, we might spend far longer trying to automate it than it actually takes to do carefully, manually. It's a classic trade-off and, like most things in IT, there are some XKCDs for that: "Automation" and "Is It Worth the Time". They make sobering reading when you're trying to work out how much time you might save by automating something, and therefore also give a good indication of the maximum amount of time you should spend on achieving that.
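To make that arithmetic concrete (the numbers here are purely illustrative, not from either comic): shaving five minutes off a task performed once a month saves 5 × 12 × 5 = 300 minutes over five years, so spending much more than five hours building the automation is a net loss, before you even account for maintaining it.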
Orchestration First, Actor Later
Where I think the meme starts to break down is when we get this balance wrong and begin to lose sight of where the real value is, thereby wasting time trying not only to automate all the steps but also to wire them into some job scheduling system (e.g. a CI server) so that once in a blue moon we can push a button and the whole task from start to finish is executed for us without further intervention.
The dream suggests that, at that point, we can go off and do something else more valuable instead. Whilst this notion of autonomy is idyllic, it can also come with a considerable extra up-front cost and any shortcuts are likely to buy us false security (i.e. it silently fails and we lose time investigating downstream failures instead).
For example, there are many crude command prompt one-liners I've written in the past to pick up common mistakes that are trivial for me to run because they've been written to automate the expensive bit, not the entire problem. I often rely on my own visual system to filter out the noise and compensate for the impurities within the process. Removing these wrinkles is often where the proverbial "last 10% that takes 90% of the time" goes.[1]
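As a hedged illustration of the idea (the pattern and path are made up, not one of my actual one-liners): the machine does the expensive scan and the human eye filters out the false positives.

    # Flag stray debug output across the codebase; eyeball the hits
    # and mentally discard the handful of legitimate ones.
    grep -rn "Console.WriteLine" src/ | grep -v "Tests"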
It's all too easy to get seduced by the meme and believe that no automation task is truly complete until it's fully automated.
In .NET, when you publish shared libraries as NuGet packages you have a .nuspec file which lists the package dependencies. The library .csproj build file also has project dependencies for use with compilation. However, these two sets of dependencies should be kept in sync.[2]
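To make the duplication concrete, here is a sketch of the two overlapping sections (the package name is illustrative, and depending on the tooling era the .csproj side might instead live in a packages.config file):

    <!-- MyLib.nuspec: what the published package declares it needs -->
    <dependencies>
      <dependency id="Newtonsoft.Json" version="9.0.1" />
    </dependencies>

    <!-- MyLib.csproj: what the compiler actually builds against -->
    <ItemGroup>
      <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
    </ItemGroup>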
Initially, with only a couple of NuGet packages, it was easy to do manually as I knew it was unlikely to change. However, once the monolithic library got split up the dependencies started to grow and manually comparing the relevant sections got harder and more laborious.
Given the text-based nature of the two files (XML) it was pretty easy to write a simple shell one-liner to grep the values from the two sets of relevant XML tags, dump them in a file, and then use diff to show a side-by-side comparison. Then it just needed wrapping in a for loop to traverse the solution workspace.
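A minimal sketch of that kind of script, assuming the tag names shown earlier; the paths, file names and project layout are illustrative rather than lifted from the original:

    for proj in src/*/; do
      # Pull the package names out of each file and sort them for diffing.
      grep -o '<dependency id="[^"]*"' "$proj"*.nuspec | cut -d'"' -f2 | sort > nuspec.txt
      grep -o '<PackageReference Include="[^"]*"' "$proj"*.csproj | cut -d'"' -f2 | sort > csproj.txt
      diff -y nuspec.txt csproj.txt   # side-by-side view of any mismatches
    done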
Because the one-liner was mine I got to take various shortcuts, like hardcoding the input path and temporary files, along with "knowing" that a certain project was always misreported. At this point a previously manual process has largely been automated and, as long as I run it regularly, it will catch any mistakes.
Of course it's nice to share things like this so that others can take advantage after I'm gone, and it might be even better if the process can be added as a build step so that it's caught the moment the problem surfaces rather than later in response to a more obscure issue. Now things begin to get tricky and we start to see diminishing returns.
First, the Gnu on Windows (GoW) toolset I used isn't standard on Windows, so now I need to make the one-liner portable or make everyone else match my tooling choice.[3] I also need to fix the hard-coded paths and start adding a bit of error handling, and I need to find a way to remove the noise caused by the one "awkward" project.
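A hedged sketch of what that hardening might look like (the script name, arguments and the awkward project's name are all invented) shows how much ceremony creeps in compared to the original one-liner:

    #!/usr/bin/env bash
    set -euo pipefail                               # fail loudly rather than silently
    root=${1:?usage: check-deps.sh <solution-dir>}  # no more hard-coded input path
    tmp=$(mktemp -d)                                # no more fixed temporary files
    trap 'rm -rf "$tmp"' EXIT
    for proj in "$root"/src/*/; do
      case $proj in
        */AwkwardProject/) continue ;;              # silence the known misreporter
      esac
      # ... the grep/diff comparison from before goes here ...
    done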
None of this is onerous, but it all takes time, and whilst I'm doing it I'm not doing something (probably) more valuable. The majority of the value was in being able to scale out this safety check; there is (probably) far less value in making it portable and making it run reliably as part of an automated build. This is because it essentially only needs to be run whenever the project dependencies change, and that was incredibly rare once the initial split was done. Additionally, the risk of not finding an impedance mismatch was small and should be caught by other automated aspects of the development process, i.e. the deployment and test suite.
Knowing When to Automate More
This scenario of cobbling something together and then finding you need to do it more often is the bread and butter of build & deployment pipelines. You often start out with a bunch of hacked-together scripts which do just enough to allow the team to bootstrap itself into an initial fluid state of delivery. This is commonly referred to as a walking skeleton because it forms the basis for the entire development process.
The point of starting with the walking skeleton rather than just diving headlong into features is to try and tackle some of the problems that historically got left until it was too late, such as packaging and deployment. In the modern era of continuous delivery we look to deliver a thin slice of functionality quickly and then build upon it piecemeal.
However, it's all too easy to get bogged down early on in a project and spend lots of time just getting the build pipeline up and running, and have nothing functional to show for it. This has always made me feel a little uncomfortable as it feels as though we should be able to get away with far less than perhaps we think we need to.
In "Building the Pipeline - Process Led or Automation Led" and my even earlier post "Layered Builds" I've tried to promote a more organic approach that focuses on what I think really matters most: a consistent and extensible approach. In essence we focus first on producing a simple, repeatable process that can be used locally to enable the application skeleton to safely evolve, and then balance the need for automating this further along with the other features. If quality or speed of delivery drops and more automation looks to be the answer, then it can be added with the knowledge that it's being done for deliberate reasons, rather than because we've got carried away gold-plating the build system based on what other people think it should do (i.e. a cargo cult mentality).
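As a minimal sketch of what such a simple, repeatable, locally runnable process might look like (the script and stage names are illustrative, not taken from those posts), a single entry point keeps local and automated builds from drifting apart:

    #!/usr/bin/env bash
    # build.sh - the one entry point; the CI server runs exactly this
    # same script, so the local and automated builds stay consistent.
    set -e
    ./compile.sh    # layer 1: compile the code
    ./test.sh       # layer 2: run the test suite
    ./package.sh    # layer 3: produce the deployable artefacts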
The one caveat to being leaner about your automation is that you may (accidentally) put off addressing one or more technical risks because you don't perceive them as risks. This leads us back to why the meme exists in the first place: failing to address certain aspects of software delivery until it's too late. If there is a technical concern, address it, but only to the extent that the risk is understood; you may not need to do anything about it now.
With a team of juniors there are likely to be far more unknowns[4] than with a team of experienced programmers, therefore the set of perceived risks will be higher. Whilst you might not know the most elegant approach to solving a problem, knowing an approach already reduces the risk, because you know that you can trade technical debt in the short term for something else more valuable if necessary.
Everything is Negotiable
The thing I like most about an agile development process is that every trade-off gets put front-and-centre; everything is now negotiable.[5] Every task now comes with an implicit question: is this the most valuable thing we could be doing?
Whilst manually building a private cloud for your production system using a UI is almost certainly not the most scalable approach, neither is starting day one of a project by diving into, say, Terraform when you don't even know what you're supposed to be building. There is nothing wrong with starting off manually; you just need to be diligent and ensure that your decision to only automate "enough of the things" is always working in your favour.
[1] See "The Curse of NTLM Based HTTP Proxies".
[2] I'm not aware of Visual Studio doing this yet, although there may now be extensions and tools written by others that I'm not aware of.
[3] Yes, the Unix command line tools should be ubiquitous, and maybe finally they will be with Bash on Windows.
[4] See "Turning Unconscious Incompetence to Conscious Incompetence".
[5] See "Estimating is Liberating".