Piping Software for Less: Why, What & How (Part 1)

Paul Grenyer from Paul Grenyer

Developing software is hard and all good developers are lazy. This is one of the reasons we have tools which automate practices like continuous integration, static analysis and measuring test coverage. The practices help us to measure quality and find problems with code early. When you measure something you can make it better. Automation makes it easy to perform the practices and means that lazy developers are likely to perform them more often, especially if they’re automatically performed every time the developer checks code in.

This is old news. These practices have been around for more than twenty years. They have become industry standards and not using them is, quite rightly, frowned upon. What is relatively new is the introduction of cloud-based services such as BitBucket Pipelines, CircleCI and SonarCloud, which allow you to set up these practices in minutes; however, with this flexibility and efficiency comes a cost.

Why

While BitBucket Pipelines, CircleCI and SonarCloud have free tiers, there are limits.

With BitBucket Pipelines you only get 50 build minutes a month on the free tier. The next step up is $15/month and then you get 2500 build minutes.

On the free CircleCI tier you get 2500 free credits per week, but you can only use public repositories, which means anyone and everyone can see your code. The use of private repositories starts at $15 per month.

With SonarCloud you can analyse as many lines of code as you like, but again you have to have your code in a public repository or pay $10 per month for the first 100,000 lines of code.

If you want continuous integration and a static analysis repository which includes test coverage, and you need to keep your source code private, you’re looking at a minimum of $15 per month for these cloud-based solutions, and that’s only if you can manage with 50 build minutes per month. If you can’t, it’s more likely to be $30 per month, which is $360 per year.

That’s not a lot of money for a large software company or even a well-funded startup or SME, though as the number of users goes up, so does the price. For a personal project, however, it’s a lot of money.

Cost isn’t the only drawback: with these approaches you can lose some flexibility as well.

The alternative is to build your own development pipelines. 

I bet you’re thinking that setting up these tools from scratch is a royal pain in the arse and will take hours, when the cloud solutions can be set up in minutes. Not to mention running and managing your own pipeline on your personal machine: don’t they suck up resources when they’re running in the background all the time? And shouldn’t they be set up on isolated machines? What if I told you that you could set all of this up in about an hour and turn it all on and off as necessary with a single command? And, if you wanted to, you could run it all on a DigitalOcean Droplet for around $20 per month.

Interested? Read on.

What

When you know how, setting up a continuous integration server such as Jenkins and a static analysis repository such as SonarQube in Docker containers is relatively straightforward, as is starting and stopping them together using Docker Compose. As I said, the key is knowing how, and what I explain in the rest of this article is the product of around twenty hours of development, much of which was spent banging my head against individual issues which turned out to have really simple solutions.

Docker

Docker is a way of encapsulating software in a container: anything from an entire operating system such as Ubuntu to a simple tool such as the scanner for SonarQube. The configuration of a container is described in a Dockerfile, and Docker uses Dockerfiles to build, start and stop containers. Jenkins and SonarQube both have publicly available Docker images, which we’ll use, with a few relatively minor modifications, to build a development pipeline.

Docker Compose

Docker Compose is a tool which orchestrates Docker containers. Via a simple YAML file it is possible to start and stop multiple Docker containers with a single command. This means that, once configured, we can start and stop the entire development pipeline so that it is only running when we need it or, via a tool such as Terraform, construct and provision a DigitalOcean droplet (or AWS service, etc.) with a few simple commands and tear it down again just as easily, so that it only incurs cost when we’re actually developing. Terraform and DigitalOcean are beyond the scope of this article, but I plan to cover them in the near future.

See the Docker and Docker Compose websites for instructions on how to install them for your operating system.
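Once both are installed you can check that they are available from the command line; the exact version numbers will vary:

docker --version
docker-compose --version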

How

In order to focus on the development pipeline configuration, over this and the following posts I’ll describe how to create an extremely simple Dotnet Core class library with a very basic test, describe in more detail how to configure and run Jenkins and SonarQube Docker containers, and set up simple projects in both to demonstrate the pipeline. I’ll also describe how to orchestrate the containers with Docker Compose.

I’m using Dotnet Core because that’s what I’m working with on a daily basis. The development pipeline can also be used with Java, Node, TypeScript or any of the other supported languages. Dotnet Core is also free to install and use on Windows, Linux and Mac, which means that anyone can follow along.

A Simple Dotnet Core Class Library Project

I’ve chosen to use a class library project as an example for two reasons. It means that I can easily use a separate project for the tests, which allows me to describe the development pipeline more iteratively. It also means that I can use it as the groundwork for a future article which introduces the NuGet server Baget to the development pipeline.

Open a command prompt and start off by creating an empty directory and moving into it.

mkdir messagelib
cd messagelib

Then open the directory in your favourite IDE (I like VSCode for this sort of project). Add a Dotnet Core appropriate .gitignore file and then create a solution and a class library project and add it to the solution:

dotnet new sln
dotnet new classlib --name Messagelib
dotnet sln add Messagelib/Messagelib.csproj
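
For the .gitignore, recent versions of the Dotnet Core SDK can generate a suitable one for you (if your SDK doesn’t have this template, copying one from an existing Dotnet Core project works just as well):

dotnet new gitignore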

Delete Messagelib/Class1.cs and create a new class file and class called Message:

using System;

namespace Messagelib
{
    public class Message
    {
        public string Deliver()
        {
            return "Hello, World!";
        }
    }
}

Make sure it builds with:

dotnet build
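
If you want a head start on the test project that will be used later in the series, a minimal sketch, assuming xUnit (the test framework and project name are my choice here, nothing in the pipeline depends on them), looks like this:

dotnet new xunit --name Messagelib.Tests
dotnet sln add Messagelib.Tests/Messagelib.Tests.csproj
dotnet add Messagelib.Tests/Messagelib.Tests.csproj reference Messagelib/Messagelib.csproj
dotnet test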

Commit the solution to a public Git repository, or you can use the existing one in my Bitbucket account here: https://bitbucket.org/findmytea/messagelib

A public repository keeps this example simple and, although I won’t cover it here, it’s quite straightforward to add a key to a private BitBucket or GitHub repository and to Jenkins so that it can access them.

Remember that one of the main driving forces for setting up the development pipeline is to allow the use of private repositories without having to incur unnecessary cost.


Read the next parts here:

Part 2: Piping Software for Less: Jenkins
Part 3: Piping Software for Less: SonarQube

Sidebar 1


Continuous Integration 

Continuous Integration (CI) is a development practice where developers integrate code into a shared repository frequently, preferably several times a day. Each integration can then be verified by an automated build and automated tests. While automated testing is not strictly part of CI it is typically implied.


Static Analysis

Static (code) analysis is a method of debugging by examining source code before a program is run. It’s done by analyzing a set of code against a set (or multiple sets) of coding rules.


Measuring Code Coverage

Code coverage is a metric that can help you understand how much of your source is tested. It's a very useful metric that can help you assess the quality of your test suite.



Sidebar 2: CircleCI Credits

Credits are used to pay for your team’s usage based on machine type and size, and premium features like Docker layer caching.



Sidebar 3: What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.
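
Terraform is driven from a small set of commands; a typical cycle, once a configuration file exists, looks something like this (provisioning the development pipeline this way is left for a future article):

terraform init      # download the providers referenced by the configuration
terraform plan      # show what Terraform would create, change or destroy
terraform apply     # build the described infrastructure
terraform destroy   # tear it down again when it’s no longer needed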


Piping Software for Less: Jenkins (Part 2)

Paul Grenyer from Paul Grenyer



Run Jenkins in Docker with Docker Compose

Why use Jenkins, I hear you ask? Well, for me the answers are simple: familiarity, and the availability of an existing, tested and officially supported Docker image. I have been using Jenkins for as long as I can remember.

The official image is here: https://hub.docker.com/r/jenkins/jenkins

After getting Jenkins up and running in the container we’ll look at creating a ‘Pipeline’ with the Docker Pipeline plugin. Jenkins supports lots of different ‘Items’ (which used to be called ‘Jobs’), and with the Docker Pipeline plugin Docker can be used to encapsulate the build and test environments as well. In fact, this is what BitBucket Pipelines and CircleCI also do.

To run a Jenkins Pipeline we need a Jenkins installation with Docker installed. The easiest way to do this is to use the existing Jenkins Docker image from Docker Hub. Open a new command prompt, create a new directory for the development pipeline configuration and a subdirectory called jenkins with the following Dockerfile in it:

FROM jenkins/jenkins:lts

USER root
# install the prerequisites for adding Docker's apt repository
RUN apt-get update
RUN apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# add Docker's official GPG key and apt repository
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

RUN add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

# install Docker itself so that Jenkins can run pipeline steps in containers
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
RUN service docker start

# drop back to the regular jenkins user - good practice
USER jenkins

You can see that our Dockerfile extends the existing Jenkins Docker image and then installs Docker for Linux. The Jenkins image, like most Docker images, is based on a Linux base image.

To get Docker Compose to build and run the image, we need a simple docker-compose.yml file in the root of the development pipeline directory with the details of the Jenkins service:

version: '3'
services:
  jenkins:
    container_name: jenkins
    build: ./jenkins/
    ports:
      - "8080:8080"
      - "5000:5000"
    volumes:
      - ~/.jenkins:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock

Note the build parameter, which references the subdirectory where the Jenkins Dockerfile is located. Also note the volumes: we want the builds to persist even if the container does not, so create a .jenkins directory in your home directory:

mkdir ~/.jenkins

Specifying it as a volume in docker-compose.yml tells Docker to write anything which Jenkins writes to /var/jenkins_home in the container to ~/.jenkins on the host - your local machine. If the development pipeline is running on a DigitalOcean droplet, DigitalOcean Volumes can be used to persist the data even after the droplet is torn down.

As well as running Jenkins in a Docker container, we’ll also be doing our builds and running our tests in Docker containers. Docker doesn’t generally like being run inside a Docker container itself, so by specifying /var/run/docker.sock as a volume, the Jenkins container and the build and test containers can all be run on the same Docker instance.

To run Jenkins, simply bring it up with Docker Compose:

docker-compose up

(To stop it again, just use Ctrl+C.)
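
If you would rather not keep a terminal attached, Compose can also run the containers in the background:

docker-compose up -d    # start the containers detached, in the background
docker-compose down     # stop and remove the containers again

Because the Jenkins home directory is mapped to ~/.jenkins on the host, bringing the containers down does not lose any configuration or build history.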

The first time you bring Jenkins up, make sure you note down the default admin password. It will appear in the log like this:

Jenkins initial setup is required. An admin user has been created and a password generated.

Please use the following password to proceed to installation:

<password>

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

To configure Jenkins for the first time open a browser and navigate to:

http://localhost:8080

Then:

  1. Paste in the default password and click continue.
  2. Install the recommended plugins. This will take a few minutes. There is another plugin we need too, which can be installed afterwards.
  3. Create the first admin user and click Save & Continue.
  4. Confirm the Jenkins URL and click Save & Finish.
  5. Click Start Jenkins to start Jenkins.

You now have Jenkins up and running locally in a Docker container! 

To use Docker pipelines in Jenkins we need to install the Docker Pipeline plugin. To do this:

  1. Select Manage Jenkins from the left hand menu, followed by Manage Plugins.
  2. Select the ‘Available’ tab, search for ‘Docker Pipeline’ and select it.
  3. Click ‘Download now and install after restart’.
  4. On the next page put a tick in the ‘restart after download’ check box and wait for the installation and for Jenkins to restart. Then log in again.

Next we need to create the Docker Pipeline for the Messagelib solution. 

  1. Select ‘New Item’ from the left hand menu, enter ‘Messagelib’ as the name, select ‘Pipeline’ and click OK.
  2. Scroll to the ‘Pipeline’ section and select ‘Pipeline script from SCM’ from the ‘Definition’ dropdown. This is because we’re going to define our pipeline in a file in the Messagelib solution. 
  3. From the ‘SCM’ dropdown, select ‘Git’ and enter the repository URL of the Messagelib solution. 
  4. Then click Save.


Jenkins is now configured to run the Messagelib pipeline, but we need to tell it what to do by adding a text file called Jenkinsfile to the root of the Messagelib solution:

/* groovylint-disable CompileStatic, GStringExpressionWithinString, LineLength */

pipeline
{
    agent
    {
        docker { image 'pjgrenyer/dotnet-build-sonarscanner:latest' }
    }
    stages
    {
        stage('Build & Test')
        {
            steps
            {
                sh 'dotnet clean'
                sh 'dotnet restore'
                sh 'dotnet build'
            }
        }
    }
}

This very simple Groovy script tells the Jenkins pipeline to get the latest ‘dotnet-build-sonarscanner’ Docker image and then use it to clean, restore and build the dotnet project. ‘dotnet-build-sonarscanner’ is a Docker image I built and pushed to Docker Hub using the following Dockerfile:

# start from the Dotnet Core SDK image so the build tools are available
FROM mcr.microsoft.com/dotnet/core/sdk:latest AS build-env
WORKDIR /

# Sonar Scanner requires a Java runtime
RUN apt update
RUN apt install -y default-jre

# install the dotnet-sonarscanner global tool and put it on the PATH
ARG dotnet_cli_home_arg=/tmp
ENV DOTNET_CLI_HOME=$dotnet_cli_home_arg
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1
ENV PATH="${DOTNET_CLI_HOME}/.dotnet/tools:${PATH}"
ENV HOME=${DOTNET_CLI_HOME}
RUN dotnet tool install --global dotnet-sonarscanner
RUN chmod 777 -R ${dotnet_cli_home_arg}

This creates and configures a development environment for Dotnet Core and Sonar Scanner, which requires Java. 

There is a way to use the Dockerfile directly, rather than getting it from Docker Hub, described here: https://www.jenkins.io/doc/book/pipeline/docker/

Once the Jenkinsfile is added to the project and committed, set the build off by clicking ‘Build Now’ from the left hand menu of the Messagelib item. The first run will take a little while as the Docker image is pulled (or built). Future runs won’t have to do that and will be quicker. You should find that once the image is downloaded, the project is built quickly and Jenkins shows success.
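
Once the solution contains a test project, running the tests from the pipeline is a one-line addition. As a sketch (this isn’t necessarily what the repository above contains), an extra step can be appended to the ‘Build & Test’ stage in the Jenkinsfile:

sh 'dotnet test'

The ‘dotnet-build-sonarscanner’ image is based on the Dotnet Core SDK, so dotnet test runs inside the build container just like the other steps.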


Read the next parts here:


Part 2: Piping Software for Less: Jenkins (Part 2)
Part 3: Piping Software for Less: SonarQube (Part 3)

Learning useful stuff from the Projects chapter of my book

Derek Jones from The Shape of Code

What useful, practical things might professional software developers learn from the Projects chapter in my evidence-based software engineering book?

This week I checked the projects chapter; what useful things did I learn (combined with everything I learned during all the other weeks spent working on this chapter)?

There turned out to be around three to four times more data publicly available than I had first thought. This is good, but there is a trap for the unwary. For many topics there is one data set, and that one data set may not be representative. What is needed is a selection of data from various sources, all relating to a given topic.

Some data is better than no data, provided small data sets are treated with caution.

Estimation is a popular research topic: how long will a project take and how much will it cost?

After reading all the papers I learned that existing estimation models are even more unreliable than I had thought, and what is more, there are plenty of published benchmarks showing how unreliable the models really are (these papers never seem to get cited).

Models that include lines of code in the estimation process (i.e., the majority of models) need a good estimate of the likely number of lines in the final software system. One issue that nobody had considered was the impact of developer variability on the number of lines written to implement the same functionality, which turns out to be large. Oops.

Machine learning has infested effort estimation research. What the machine learning models actually do is estimate adjustment, i.e., they do not create their own estimate but adjust one passed in as input to the model. Most estimation data sets are tiny, and only contain a few different variables; unless the estimate is included in the training phase, the generated model produces laughable results. Oops.

The good news is that there appear to be lots of recurring patterns in the project data. This is good news because recurring patterns are something to be explained by a theory of software project development (apparent randomness is bad news, from the perspective of coming up with a model of what is going on). I think we are still a long way from having workable theories, but seeing patterns is a good sign that one or more theories will be possible.

I think that the main takeaway from this chapter is that software often has a short lifetime. People in industry probably have a vague feeling that this is true, from experience with short-lived projects. It is not cost effective to approach commercial software development from the perspective that the code will live a long time; some code does live a long time, but most dies young. I see the implications of this reality being a major source of contention with those in academia who have spent too long babbling away in front of teenagers (teaching the creation of idealized software that lives on forever), and little or no time building software systems.

A lot of software is written by teams of people; however, there is not a lot of data available on teams (software or otherwise). Given the difficulty of hiring developers, companies have to make do with what they have, so a theory of software teams might not be that useful in practice.

Readers might have a completely different learning experience from reading the projects chapter. What useful things did you learn from the projects chapter?

Further Still On A Very Cellular Process – student

student from thus spake a.k.

My fellow students and I have lately been spending our spare time experimenting with cellular automata, which are simple mathematical models of single celled creatures such as amoebas, governing their survival and reproduction from one generation to the next according to the population of their neighbourhoods. In particular, we have been considering an infinite line of boxes, some of which contain living cells, together with rules that specify whether or not a box will be populated in the next generation according to its, its left hand neighbour's and its right hand neighbour's contents in the current generation.
We have found that for many such automata we can figure the contents of the boxes in any generation that evolved from a single cell directly, in a few cases from the oddness or evenness of elements in the rows of Pascal's triangle and the related trinomial triangle, and in several others from the digits in terms of sequences of binary fractions.
We have since turned our attention to the evolution of generations from multiple cells rather than one; specifically, from an initial generation in which each box has an even chance of containing a cell or not.

Visual Lint 7.0.9.324 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 7.0. The following changes are included:

  • When a custom report folder is defined in the Options Dialog "Reports" page, generated reports will now be written into subfolders identifying the solution/workspace, analysis tool and analysed solution/workspace configuration rather than just the solution/workspace name. This allows analysis reports for the same project but using different analysis tools or configurations to co-exist without overwriting each other.

  • Fixed a bug in the persistence of the "Generate reports in..." report options in the Options Dialog "Reports" page.

  • Updated the PC-lint Plus message database to reflect changes in PC-lint Plus 1.3.5.

Download Visual Lint 7.0.9.324

Learning useful stuff from the Reliability chapter of my book

Derek Jones from The Shape of Code

What useful, practical things might professional software developers learn from my evidence-based software engineering book?

Once the book is officially released I need to have good answers to this question (saying: “Well, I decided to collect all the publicly available software engineering data and say something about it”, is not going to motivate people to read the book).

This week I checked the reliability chapter; what useful things did I learn (combined with everything I learned during all the other weeks spent working on this chapter)?

A casual reader skimming the chapter would conclude that little was known about software reliability, and they would be right (I already knew this, but I learned that we know even less than I thought was known), and many researchers continue to dig in unproductive holes.

A reader with some familiarity with reliability research would be surprised to see that some ‘major’ topics are not discussed.

The train wreck that is machine learning has been avoided (not forgetting that the data used is mostly worthless), mutation testing gets mentioned because of some interesting data (the underlying problem is that mutation testing assumes that coding mistakes are local to one line, but in practice coding mistakes often involve multiple lines), and the theory discussions don’t mention non-homogeneous Poisson process as the basis for software fault models (because this process is not capable of solving the questions asked).

What did I learn? My highlights include:

  • Anne Chao’s work on population estimation. The takeaway from this work is that if people want to estimate the number of remaining fault experiences, based on previously experienced faults, then every occurrence (i.e., not just the first) of a fault needs to be counted,
  • Janet Dunham’s top read work on software testing,
  • the variability in the numeric percentage that people assign to probability terms (e.g., almost all, likely, unlikely) is much wider than I would have thought,
  • the impact of the distribution of input values on fault experiences may be detectable,
  • really a lowlight, but there is a lot less publicly available data than I had expected (for the other chapters there was more data than I had expected).

The last decade has seen fuzzing grow to dominate the headlines around software reliability and testing, and provide data for people who write evidence-based books. I don’t have much of a feel for how widely used it is in industry, but it is a very useful tool for reliability researchers.

Readers might have a completely different learning experience from reading the reliability chapter. What useful things did you learn from the reliability chapter?

short – command line tool to truncate lines to fit in the terminal

Andy Balaam from Andy Balaam's Blog

Sometimes I run grep commands that search files with hugely-long lines. If those lines match, they are printed out and spam my terminal with huge amounts of information that I probably don’t need.

I couldn’t find a tool that limits the line-length of its output, so I wrote a tiny one.

It’s called short.

You use it like this (my typical usage):

grep foo myfile.txt | short

Or specify the column width like this:

short -w 5 myfile.txt

It’s written in Rust. Feel free to add features, fix bugs and package it for your operating system/distribution!

But our users don’t want to change

AllanAdmin from Allan Kelly Associates

A good question came into my mailbox:

“Much of the writing I’ve seen assumes that software can be shipped directly into the hands of customers to create value (hence the “smaller packages, more often” approach). My experience has been that especially with new launches or major releases, there needs to be a threshold of minimum functionality that needs to be in place.”

Check your phone. Is it set to auto-update apps? Is your desktop OS set to auto-update? Or do you manually choose when to update?

Look at the update notes on phone apps from the likes of Uber, Slack, SkyScanner, the BBC and others. They say little more than “we update our apps regularly.”

Today people are used to technology auto-changing on them. They may not like it but do they like a big change any more?

My guess is that most people don’t even notice those updates. When you batch up software releases users see lots of changes at once, when you release them as a regular stream of small updates then most go unnoticed.

Still, users will see some updates change things, and they will not like some of these. But how long do you want to hide these updates from your users?

The question that needs asking is: what is the cost of an update? The vast majority of updates are quick, easy, cheap and painless.

Of course people don’t like updates which introduce a new UI, a new payment model or which demand you uninstall an earlier app but when updates are easy and bring benefits – even benefits you don’t see – they happily accept them.

And remember, the alternative to 100 small updates is one big update where people are more likely to see changes.

If your updates are generally good why hold them back? And if your updates are going in the wrong direction shouldn’t you change something? If you run scared of giving your users changes then something is wrong.

Nor is it just apps. Most people (in Europe at least) use telco supplied handsets and when the telco calls up and says “Would you like a new handset at no additional cost?” people usually say Yes. That is how telcos keep their customers.

The question continues,

“there needs to be coordination across the company (e.g. training people from marketing, sales, channel partners, customer/ internal support, and so on). There is also the human element – the capacity to absorb these changes. As a user of tech, I’m not sure I could work (well) with a product where features were changing, new ones being added frequently (weekly or even monthly), etc.”

If every software update was introducing a big change then these would be problems. But most updates don’t. Most introduce teeny-tiny changes.

Of course sometimes things need to change. The companies which do this best invest time and energy in making these painless. For example, Google often offers a “try our new beta version” for months before an update. And for months afterwards they provide a “use the old interface option.”

The best companies invest in user experience design too. This can go a long way to removing the need for training.

Just because a new feature is released doesn’t mean people have to use it. For starters new changes can be released but disabled. Feature toggles are not only a way of managing source code branches but they also allow new features to be released silently and switched on when everyone is ready. This allows for releases to be de-risked without the customer seeing.

And when they are switched on they can be switched on for a few users at a time. Feedback can be gathered and improvements made before the next release.

That can be co-ordinated with training: make the feature toggle user switchable, everyone gets the new software and as they complete the training they can choose to switch it on.

Now marketing… yes, marketeers do like the big bang release – “look at us, we have something shiny and new!”

You could leave lots of features switched off until your marketeers are ready to make a big bang. That also reduces the problem of marketers needing to know what will be ready when, so they know when to launch a campaign.

Or you could release updates without any fuss and market when you have the critical mass.

Or you could change your marketing approach: market a stream of constant improvements rather than occasional big changes.

Best of all market the capabilities of your thing without mentioning features: market what the app or system allows you to do.

For years I’ve been hearing “business people” bemoan developers who “talk technical” but I see exactly the same thing with marketeers. Look at Sony televisions: what is the “picture processor X1”? And why should I care? I can’t remember when I last changed the contrast on my television, so the “Backlight master drive” (whatever that is) means nothing to me.

Or, look at Samsung mobile phones, 5G, 5G, 5G – what do I care about 5G? What does 5G allow me to do that I can’t with my current phone?

Drill down, look at the Samsung Galaxy lineup: CPU speed, CPU type, screen size, resolution, RAM, ROM – what do I care? How does any of that help me? – Stop throwing technical details at me!

Don’t market features, market solutions. Tell me what “job to be done” the product addresses; tell me how my life will be improved. Marketing a solution rather than features decouples marketing from the upgrade cycle.

So sure, people don’t like technology change – I’ll tell you a story in my next blog. But when technology change brings benefits are they still resistant?

Now, with modern technology, with agile and continuous delivery, technology can change faster than business functions like training and marketing. We can either choose to slow technology down or we can change those functions to work differently – not necessarily faster but differently in a way that is compatible with agile technology change.

These kinds of tensions are common in businesses which move across to agile style working. A lot of companies think agile applies to the “software engine room” and the rest of the business can carry on as before. Unfortunately they have released the Agile Virus – agile working has introduced a set of tensions into the organization which must either be embraced or killed.

Once again technology is disruptive.

Perhaps, if the marketing or training department are insisting on big-bang releases, maybe it is them who should be changing. Maybe, just maybe, they need to rethink their approach; maybe they could learn a thing or two about agile and work differently with technology teams.

Subscribe to my blog newsletter and download Project Myopia for Free

The post But our users don’t want to change appeared first on Allan Kelly Associates.

Regular releases reduce risk, increase value

AllanAdmin from Allan Kelly Associates

“If you’re not embarrassed by the product when you launch, you’ve launched too late.” Reid Hoffman, founder LinkedIn

Years ago I worked for a software company supplying Vodafone, Verizon, Nokia, etc. The last thing those companies wanted was to update the software on their engineers’ PCs every month, let alone every week!

I was remembering this episode when I was drafting what will be my next post (“But our users don’t want to change”) and thought it was worth saying something about how regular releases change the risk-reward equation.

When you only release occasionally there is a big incentive to “get it right” – to do everything that might be needed and to remove every defect whether you think those changes are needed or not. When you release occasionally second chances don’t happen for weeks or months. So you err on the side of caution and that caution costs.

Regular releases change that equation. Now second chances come around often, and additions and fixes are easy. Now you can err on the side of less, and that allows you to save time and money.

The ability to deliver regularly – every two weeks as a baseline, every day for high performing teams – is more important than the actual deliveries. Releasable is more important than released. The actual question of whether to release or not is ultimately a question for business representatives to decide.

But, being releasable on a very regular basis is an indicator of the team’s technical ability and the innate quality of the thing being built. Teams which are always asking for “more time” may well have a low quality product (lots of bugs to fix) or have something to hide.

The fact that a team can, and hopefully does, release (to live) massively reduces the risk involved. When software is only released at the end – and usually only tested before that end – then risk is tail loaded. Having releasable – and especially released – software reduces risk. The risk is spread across the work.

Actually releasing early further reduces risk because every step in the process is exercised. There are no hidden deployment problems.

That offsets sunk costs and combats commitment escalation. Because the business stakeholders can at any time say “game over” and walk away with a working product, they are no longer held captive by the fear of sunk costs, suppliers and career-threatening failures.

It is also a nice side effect that releasing new functionality early – or just fixing bugs – increases the return on investment because benefits are delivered earlier and therefore start earning a return sooner.

Just because new functionality is completed, and even released early, does not mean users need to see it. Feature toggles allow features and changes to be hidden from users – or only enabled for specified users. Releasing changed software with no apparent change may look pointless but it actually reduces risk, because the changes are out there.

That also means testing is simplified. Rather than running tests against software with many changes, tests are run against software with few changes, which makes testing more efficient even if the users don’t see it. And it removes the “we can’t roll back one fix” problem when one of 10 changes doesn’t pass.

Back with the Vodafone engineers who didn’t want their laptops updated: that was then, in the days of CD installs. Today the cloud changes that: there is only one install to do, so it isn’t such an inconvenience. So they could have the updates, but with disruptive changes hidden. At the same time they could have non-disruptive changes, e.g. bug fixes.

In a few cases regular deliveries may not be the right answer. The key thing though is to change the default answer from “we only deliver occasionally (or at the end)” to “we deliver regularly (unless otherwise requested).”


Subscribe to my blog newsletter and download Project Myopia for Free

The post Regular releases reduce risk, increase value appeared first on Allan Kelly Associates.