Review: Bone Silence Delivers

Paul Grenyer from Paul Grenyer

Bone Silence (Revenger, #3) by Alastair Reynolds
My rating: 5 of 5 stars

Alastair Reynolds is still by far my favorite author and he has continued the form, rebooted with Revenger, in Bone Silence. A friend once said to me that Alastair Reynolds struggles to know how to write an ending and, disappointingly, I think this is still true of Bone Silence. This trilogy has the constant unanswered questions which drew me into, and kept me hooked on, the Revelation Space stories. There is a kind of answer at the end and a bit of a twist which, if I'm honest, left me underwhelmed. The answer lacks detail and explanation of the reasons, and then the Ness sisters ride off into the sunset and Reynolds announces that he's done with the pair.

That aside, I loved Bone Silence and the entire trilogy. Each book is different and describes different aspects of the universe in which it is set. The characters are diverse and interesting and the story is wide, far-reaching and mostly unpredictable. The sisters' respect for each other is clear throughout, but like real siblings there are tensions and fights. There is a lot more to this universe and so much scope for expansion. I'd love Alastair Reynolds to return to it some day and expand this story, fill in some glossed-over gaps and detail and tell some of the other stories, even if the clichéd seafaring language did irritate me throughout.



Piping Software for Less: Why, What & How (Part 1)

Paul Grenyer from Paul Grenyer

Developing software is hard and all good developers are lazy. This is one of the reasons we have tools which automate practices like continuous integration, static analysis and measuring test coverage. The practices help us to measure quality and find problems with code early. When you measure something you can make it better. Automation makes it easy to perform the practices and means that lazy developers are likely to perform them more often, especially if they’re automatically performed every time the developer checks code in.

This is old news. These practices have been around for more than twenty years. They have become industry standards and not using them is, quite rightly, frowned upon. What is relatively new is the introduction of cloud-based services such as BitBucket Pipelines, CircleCI and SonarCloud which allow you to set up these practices in minutes. However, with this flexibility and efficiency comes a cost.

Why

While BitBucket Pipelines, CircleCI and SonarCloud have free tiers, there are limits.

With BitBucket Pipelines you only get 50 build minutes a month on the free tier. The next step up is $15/month and then you get 2500 build minutes.

On the free CircleCI tier you get 2500 free credits per week, but you can only use public repositories, which means anyone and everyone can see your code. The use of private repositories starts at $15 per month.

With SonarCloud you can analyse as many lines of code as you like, but again you have to have your code in a public repository or pay $10 per month for the first 100,000 lines of code.

If you want continuous integration and a static analysis repository which includes test coverage, and you need to keep your source code private, you're looking at a minimum of $15 per month for these cloud-based solutions, and that's if you can manage with only 50 build minutes per month. If you can't, it's more likely to be $30 per month, which is $360 per year.

That's not a lot of money for a large software company or even a well-funded startup or SME, though as the number of users goes up so does the price. For a personal project it's a lot of money.

Cost isn't the only drawback: with these approaches you can lose some flexibility as well.

The alternative is to build your own development pipelines. 

I bet you're thinking that setting up these tools from scratch is a royal pain in the arse and will take hours, when the cloud solutions can be set up in minutes. Not to mention running and managing your own pipeline on your personal machine: don't they suck resources when they're running in the background all the time, and shouldn't they be set up on isolated machines? What if I told you that you could set all of this up in about an hour and turn it all on and off as necessary with a single command? And if you wanted to, you could run it all on a DigitalOcean Droplet for around $20 per month.

Interested? Read on.

What

When you know how, setting up a continuous integration server such as Jenkins and a static analysis repository such as SonarQube in Docker containers is relatively straightforward, as is starting and stopping them together using Docker Compose. As I said, the key is knowing how, and what I explain in the rest of this article is the product of around twenty development hours, a lot of which was spent banging my head against a number of individual issues which turned out to have really simple solutions.

Docker

Docker is a way of encapsulating software in a container: anything from an entire operating system such as Ubuntu to a simple tool such as the scanner for SonarQube. The configuration of a container is detailed in a Dockerfile, and Docker uses Dockerfiles to build, start and stop containers. Jenkins and SonarQube both have publicly available Docker images, which we'll use, with a few relatively minor modifications, to build a development pipeline.

Docker Compose

Docker Compose is a tool which orchestrates Docker containers. Via a simple YAML file it is possible to start and stop multiple Docker containers with a single command. This means that once configured we can start and stop the entire development pipeline so that it is only running when we need it or, via a tool such as Terraform, construct and provision a DigitalOcean Droplet (or AWS service, etc.) with a few simple commands and tear it down again just as easily so that it only incurs cost when we're actually developing. Terraform and DigitalOcean are beyond the scope of this article, but I plan to cover them in the near future.

See the Docker and Docker Compose websites for instructions on how to install them for your operating system.
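
Once they're installed, it's worth confirming that both tools are on your path and that the Docker daemon is running before going any further (version numbers will vary):

docker --version
docker-compose --version
docker run hello-world   # confirms the daemon can pull and run a container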

How

To focus on the development pipeline configuration, over this and a few other posts I'll describe how to create an extremely simple Dotnet Core class library with a very basic test, explain in more detail how to configure and run Jenkins and SonarQube Docker containers, and set up simple projects in both to demonstrate the pipeline. I'll also describe how to orchestrate the containers with Docker Compose.

I'm using Dotnet Core because that's what I'm working with on a daily basis. The development pipeline can also be used with Java, Node, TypeScript or any of the other supported languages. Dotnet Core is also free to install and use on Windows, Linux and Mac, which means that anyone can follow along.

A Simple Dotnet Core Class Library Project

I’ve chosen to use a class library project as an example for two reasons. It means that I can easily use a separate project for the tests, which allows me to describe the development pipeline more iteratively. It also means that I can use it as the groundwork for a future article which introduces the NuGet server Baget to the development pipeline.

Open a command prompt and start off by creating an empty directory and moving into it.

mkdir messagelib
cd messagelib

Then open the directory in your favorite IDE; I like VSCode for this sort of project. Add an appropriate Dotnet Core .gitignore file (one way to generate it is shown below), then create a solution and a class library project and add it to the solution:

dotnet new sln
dotnet new classlib --name Messagelib
dotnet sln add Messagelib/Messagelib.csproj
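
One way to create the .gitignore mentioned above, assuming your Dotnet Core SDK is recent enough to include the built-in template (otherwise copy a standard Dotnet Core .gitignore by hand), is:

dotnet new gitignore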

Delete Messagelib/Class1.cs and create a new class file and class called Message:

using System;

namespace Messagelib
{
    public class Message
    {
        public string Deliver()
        {
            return "Hello, World!";
        }
    }
}

Make sure it builds with:

dotnet build

Commit the solution to a public git repository, or you can use the existing one in my BitBucket account here: https://bitbucket.org/findmytea/messagelib
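
If you're creating your own repository, the sequence is the usual one; a minimal sketch, with the remote URL as a placeholder for your own repository:

git init
git add .
git commit -m "Initial Messagelib solution"
git remote add origin <your-repository-url>
git push -u origin master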

A public repository keeps this example simple and although I won’t cover it here, it’s quite straightforward to add a key to a BitBucket or GitHub private repository and to Jenkins so that it can access them.

Remember that one of the main driving forces for setting up the development pipeline is to allow the use of private repositories without having to incur unnecessary cost.


Read the next parts here:

Part 2: Piping Software for Less: Jenkins (Part 2)
Part 3: Piping Software for Less: SonarQube (Part 3)


Sidebar 1


Continuous Integration 

Continuous Integration (CI) is a development practice where developers integrate code into a shared repository frequently, preferably several times a day. Each integration can then be verified by an automated build and automated tests. While automated testing is not strictly part of CI it is typically implied.


Static Analysis

Static (code) analysis is a method of debugging by examining source code before a program is run. It’s done by analyzing a set of code against a set (or multiple sets) of coding rules.


Measuring Code Coverage

Code coverage is a metric that can help you understand how much of your source is tested. It's a very useful metric that can help you assess the quality of your test suite.



Sidebar 2: CircleCI Credits

Credits are used to pay for your team’s usage based on machine type and size, and premium features like Docker layer caching.



Sidebar 3: What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.


Piping Software for Less: Jenkins (Part 2)

Paul Grenyer from Paul Grenyer



Run Jenkins in Docker with Docker Compose

Why use Jenkins I hear you ask? Well, for me the answers are simple: familiarity and the availability of an existing tested and officially supported Docker image. I have been using Jenkins for as long as I can remember. 

The official image is here: https://hub.docker.com/r/jenkins/jenkins

After getting Jenkins up and running in the container we'll look at creating a 'Pipeline' with the Docker Pipeline plugin. Jenkins supports lots of different 'Items' (which used to be called 'Jobs'), and Docker can be used to encapsulate the build and test environments as well. In fact this is what BitBucket Pipelines and CircleCI do too.

To run a Jenkins Pipeline we need a Jenkins installation with Docker installed. The easiest way to do this is to use the existing Jenkins Docker image from Docker Hub. Open a new command prompt, create a new directory for the development pipeline configuration and a subdirectory called jenkins with the following Dockerfile in it:

FROM jenkins/jenkins:lts

USER root
RUN apt-get update
RUN apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

RUN add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
RUN service docker start

# drop back to the regular jenkins user - good practice
USER jenkins

You can see that our Dockerfile imports the existing Jenkins Docker image and then installs Docker for Linux. The Jenkins image, like most Docker images, is based on a Linux base image.
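
If you want to check that the image builds before wiring up Docker Compose, you can build it directly from the development pipeline directory (the tag here is arbitrary):

docker build -t jenkins-with-docker ./jenkins/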

To get Docker Compose to build and run the image, we need a simple docker-compose.yml file in the root of the development pipeline directory with the details of the Jenkins service:

version: '3'
services:
  jenkins:
    container_name: jenkins
    build: ./jenkins/
    ports:
      - "8080:8080"
      - "5000:5000"
    volumes:
        - ~/.jenkins:/var/jenkins_home
        - /var/run/docker.sock:/var/run/docker.sock

Note the build parameter, which references the subdirectory where the Jenkins Dockerfile should be located. Also note the volumes. We want the builds to persist even if the container does not, so create a .jenkins directory in your home directory:

mkdir ~/.jenkins

Specifying it as a volume in docker-compose.yml tells the Docker image to write anything which Jenkins writes to /var/jenkins_home in the container to ~/.jenkins on the host - your local machine. If the development pipeline is running on a DigitalOcean Droplet, DigitalOcean Volumes can be used to persist the volumes even after the Droplet is torn down.

As well as running Jenkins in a Docker container we’ll also be doing our build and running our tests in a Docker container. Docker doesn’t generally like being run in a Docker container itself, so by specifying /var/run/docker.sock as a volume, the Jenkins container and the test container can be run on the same Docker instance.

To run Jenkins, simply bring it up with Docker Compose:

docker-compose up

(To stop it again just use ctrl+c)
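
If you'd rather not keep a terminal attached, Docker Compose can also run it in the background and tear it down again when you're finished; the build data survives in ~/.jenkins either way:

docker-compose up -d      # start Jenkins in the background
docker-compose down       # stop and remove the container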

The first time you run it, make sure you note down the default admin password. It will appear in the log like this:

Jenkins initial setup is required. An admin user has been created and a password generated.

Please use the following password to proceed to installation:

<password>

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
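
If the password scrolls past, it can also be read from the mounted volume on the host, or from inside the running container using the container name from docker-compose.yml:

cat ~/.jenkins/secrets/initialAdminPassword
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword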

To configure Jenkins for the first time open a browser and navigate to:

http://localhost:8080

Then:

  1. Paste in the default password and click continue.
  2. Install the recommended plugins. This will take a few minutes. There is another plugin we need too which can be installed afterwards.
  3. Create the first admin user and click Save & Continue.
  4. Confirm the Jenkins url and click Save & Finish.
  5. Click Start Jenkins to start Jenkins.

You now have Jenkins up and running locally in a Docker container! 

To use Docker pipelines in Jenkins we need to install the plugin. To do this:

  1. Select Manage Jenkins from the left hand menu, followed by Manage Plugins.
  2. Select the ‘Available’ tab, search for ‘Docker Pipeline’ and select it.
  3. Click ‘Download now and install after restart’.
  4. On the next page put a tick in the ‘restart after download’ check box and wait for the installation and for Jenkins to restart. Then log in again.

Next we need to create the Docker Pipeline for the Messagelib solution. 

  1. Select ‘New Item’ from the left hand menu, enter ‘Messagelib’ as the name, select ‘Pipeline’ and click ok.
  2. Scroll to the ‘Pipeline’ section and select ‘Pipeline script from SCM’ from the ‘Definition’ dropdown. This is because we’re going to define our pipeline in a file in the Messagelib solution. 
  3. From the ‘SCM’ dropdown, select ‘Git’ and enter the repository URL of the Messagelib solution. 
  4. Then click Save.


Jenkins is now configured to run the Messagelib pipeline, but we need to tell it what to do by adding a text file called Jenkinsfile to the root of the Messagelib solution.

/* groovylint-disable CompileStatic, GStringExpressionWithinString, LineLength */

pipeline
{
    agent
    {
        docker { image 'pjgrenyer/dotnet-build-sonarscanner:latest' }
    }
    stages
    {
        stage('Build & Test')
        {
            steps
            {
                sh 'dotnet clean'
                sh 'dotnet restore'
                sh 'dotnet build'
            }
        }
    }
}

This very simple Groovy script tells the Jenkins pipeline to get the latest ‘dotnet-build-sonarscanner’ Docker image and then use it to clean, restore and build the dotnet project. ‘dotnet-build-sonarscanner’ is a Docker image I built and pushed to Docker Hub using the following Dockerfile:

FROM mcr.microsoft.com/dotnet/core/sdk:latest AS build-env
WORKDIR /
RUN apt update
RUN apt install -y default-jre
ARG dotnet_cli_home_arg=/tmp
ENV DOTNET_CLI_HOME=$dotnet_cli_home_arg
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1
ENV PATH="${DOTNET_CLI_HOME}/.dotnet/tools:${PATH}"
ENV HOME=${DOTNET_CLI_HOME}
RUN dotnet tool install --global dotnet-sonarscanner
RUN chmod 777 -R ${dotnet_cli_home_arg}

This creates and configures a development environment for Dotnet Core and Sonar Scanner, which requires Java. 

There is a way to use the Dockerfile directly, rather than getting it from Docker Hub, described here: https://www.jenkins.io/doc/book/pipeline/docker/

Once the Jenkinsfile is added to the project and committed, set the build off by clicking ‘Build now’ from the left hand menu of the Messagelib item. The first run will take a little while as the Docker image is pulled (or built). Future runs won’t have to do that and will be quicker. You should find that once the image is downloaded, the project is built quickly and Jenkins shows success.


Read the next parts here:


Part 2: Piping Software for Less: Jenkins (Part 2)
Part 3: Piping Software for Less: SonarQube (Part 3)

Digital Ocean’s PaaS Goes BETA

Paul Grenyer from Paul Grenyer

Make no mistake, I LOVE DigitalOcean! It’s simple to use and reasonably priced, especially compared to some of its better known competitors. They even respond quickly to queries on Twitter!

A couple of days ago I received an email from DigitalOcean inviting me to try out their new Beta 2 for App Platform (DigitalOcean’s PaaS product) which they described as follows:

“It handles common infrastructure-related tasks like provisioning and managing servers, databases, operating systems, application runtimes, and other dependencies. This means you can go from code to production in just minutes. We support Node.js, Python, Ruby, Go, PHP, and static sites right out of the box. If you have apps in other languages, simply create a Dockerfile and App Platform will do the rest. You can deploy apps directly from your Github repos and scale them (vertically and horizontally) if needed…”

I’m also a fan of Heroku for its ease of application deployment and, with the exception of a few AWS services, Heroku is the only platform other than DigitalOcean which I regularly use for deploying my projects. I use Heroku because I don’t have to provision a Droplet (a Linux server on DigitalOcean) to run a simple web application. Now that DigitalOcean has a similar service there’s a good chance I won’t need Heroku.

The DigitalOcean App Platform (which I'll refer to as 'Apps' from here on) doesn't yet have as many features as Heroku, but the corresponding features which Apps does support are much simpler to work with. There are basically three types of applications you can run: a static website (static), a web application (service) or a worker. A worker is basically a service without any routing and can be used for background tasks. As with Heroku you can add databases as well.

Apps is currently in Beta, which means it's an initial release of a potential future product. Customers who participate in DigitalOcean's beta programs have the opportunity to test, validate, and provide feedback on future functionality, which helps DigitalOcean to focus their efforts on what provides the most value to their customers.

  • Customer availability: Participation in beta releases is by invitation, and customers may choose not to participate. Beta invitations may be public or private. (How exciting, they picked me!).
  • Support: Beta releases are unsupported.
  • Production readiness: Beta releases may not be appropriate for production-level workloads.
  • Regions: Beta offerings may only be available in select regions.
  • Charges: Beta offerings may be charged or free. However, free use of beta offerings may be discontinued at any point in time.
  • Retirement: At the end of a beta release, DigitalOcean will determine whether to continue an offering through its lifecycle. We reserve the right to change the scope of or discontinue a Beta product or feature at any point in time without notice, as outlined in our terms of service.

I was (am!) very excited and decided to try a few things out on Apps. Below you’ll find what I tried, how I did it and some of what I learnt.

Static Asset App

The simplest type of 'app' which you can deploy to Apps is a static website, and it really is straightforward. Remember the days when you would develop a website by creating files in a directory and opening them locally in a browser? Well, once you've done that you can just push them to GitHub and they're on the web!

1. Create a new GitHub repository - it can be private or public.

2. Add a single index.html file, e.g:

<!doctype html>
<html lang="en">
  <head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous">
    <title>Hello, App Platform!</title>
  </head>
  <body>
    <h1>Hello, App Platform!</h1>
    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script>
  </body>
</html>

I’ve used the Bootstrap Hello, World! example as it brings in CSS and JavaScript, but any HTML example will do.

3. Log into DigitalOcean and select Apps from the left-hand menu.

4. If it’s your first App click ‘Launch App’. Otherwise click ‘Create App’.

5. Then click ‘GitHub’. If this is your first App, select ‘Configure your GitHub permissions’ and follow the instructions to link your GitHub account.

6. Back in Apps, select your new repository from the dropdown list and click Next.

On the next page you’ll be asked to choose a name for the app, select the branch to use from your repository and configure ‘Autodeploy on Push’.

7. Update the name of the app if you want to, leave the rest of the defaults as they are and click Next.

On the next page you have the option to add build and run commands. You don’t need any for a simple HTML app.

8. On the ‘Choose Build Configuration’ page click ‘Launch App’ to deploy the app and wait while Apps configures the app.

9. After receiving the ‘Deployed successfully!’ message, click the ‘Live App’ link to launch the app in a new tab.

That’s it! Your HTML page is now live on DigitalOcean’s App Platform. You can treat your repository just like the root directory of a website and add pages, images and JavaScript as you need. Just add them to the repository, commit, push and wait for them to be deployed.
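
Updating the site is then just the normal Git cycle; for example, adding a hypothetical about.html page:

git add about.html
git commit -m "Add about page"
git push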

Apps will generate a URL with a subdomain which is a combination of the name of your app and a unique sequence of characters, on the domain .ondigitalocean.app. You can configure a custom domain from the app’s settings tab and Apps provides a CNAME for redirection.


Node App

The next step up from a static asset app is a simple node webapp. Apps will install Node.js and your app’s dependencies for you and then fire up the app.

I was hoping to be able to deploy a very simple node webapp such as:

var http = require('http');

http.createServer(function (req, res) {
  res.write('Hello, App Platform!');
  res.end();
}).listen(process.env.PORT || '3000');

But this seemed to confuse Apps. It requires a package-lock.json file, which is generated by running npm install, to be checked into the repository, and it didn't deploy successfully until I added the express package.

1. Create a new directory for a simple node project and move into it.

2. Run npm init at the command line. Enter a name for the app and accept the other defaults.

3. Add a .gitignore file containing:

node_modules

so that dependencies are not checked into the repository.

4. Add the Express package:

npm install express --save

This will also generate the package-lock.json which Apps needs; it must be checked into the repository with the other files.

5. Create an index.js file at the root of the project:

const express = require('express')

const app = express()
const port = process.env.PORT || '3000';

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})

Apps injects the port the webapp should run on as an environment variable called PORT. This is easily read by Node.js as shown.

6. Add a start command to the scripts section in package.json:

"scripts": {

    "start": "node index.js",

    "test": "echo \"Error: no test specified\" && exit 1"

  },

7. Create a new GitHub repository, it can be private or public, and check your node project into it.

Then follow from step 3 of the Static Asset app above. Note that at step 8, Apps has automatically configured npm start as the run command, having detected a Node application, and you can select the pricing plan on the same screen.
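
Before pushing, it's worth checking the app runs locally; 3000 is the fallback port used in index.js above:

npm start
# then, in another terminal or a browser:
curl http://localhost:3000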

WARNING: Node applications are NOT free on DigitalOcean App Platform. Make sure you delete unwanted applications from the Settings tab.


Docker App

As well as Node.js, Apps appears to support Ruby, Go and Python natively, as well as others. What about .Net or other languages and platforms? For those, Apps supports Docker. Let's see if we can get a simple dotnet core application running in Apps.

1. Create a new directory for a dotnet core project (e.g. dotnetcore) and move into it.

2. Create a dotnet core web application:

dotnet new webapp

3. Add a Dockerfile to the project:

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app

# Copy everything else and build
COPY . ./
RUN dotnet publish dockercore.csproj -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE $PORT
ENTRYPOINT [ "dotnet", "dockercore.dll" ]

Apps injects the port the webapp should run on as an environment variable called PORT. Make sure the Docker image will expose it as shown.

4. To make sure the application runs on the injected port, add the following UseUrls method call in Program.cs:

public static IHostBuilder CreateHostBuilder(string[] args)
{
    var port = Environment.GetEnvironmentVariable("PORT");

    return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>().UseUrls($"http://+:{port}");
        });
}

5. To prevent the application trying to redirect to a non-existent SSL port, remove or comment out the following line from Startup.cs:

// app.UseHttpsRedirection();

6. Building a dotnet core application generates a lot of intermediate files that you don’t want to check in, so add an appropriate .gitignore file to the root of the project.

7. Create a new GitHub repository, it can be private or public, and check your dotnet core project into it.

Then follow from step 3 of the Static Asset app above. Note that at step 8, Apps has detected the Dockerfile and does not give the option to add build commands. You don't need to specify any run commands and you can select the pricing plan on the same screen.

WARNING: Docker-based applications are NOT free on DigitalOcean App Platform. Make sure you delete unwanted applications from the Settings tab.


Finally

There was one big disadvantage for me and that's the lack of a free tier for anything more advanced than a static web application. The cost isn't extortionate (https://www.digitalocean.com/docs/app-platform/#plans-and-pricing), but it's quite a bit for hobby programmers. If you want to use a database on top there's a further cost, whereas on Heroku this is free to begin with.

Apps currently only supports GitHub. You can use private repositories, which is great, but I’d like to see BitBucket support as well. Heroku has its own git repositories as well as supporting external repositories. 

I'd also like there to be Terraform support for Apps as there is for the rest of the DigitalOcean services. However, given that Apps is in Beta, I can see why it isn't supported yet.

Overall, Apps was very easy to use, with a much shallower learning curve than Heroku, and was generally simpler to work with. DigitalOcean, do you think we could have AWS-style Lambdas next, please?


Metal Commando – Primal Fear

Paul Grenyer from Paul Grenyer

I'd been looking forward to this release and I wasn't disappointed. While it lacks the epic nature of Seven Seals and New Religion until the final (13 minute!) track, it's packed full of solid power metal songs. Unlike most albums, I found it instantly enjoyable on first listen. Other than the lack of epicness, my only complaint would be that it's rather short. I'm fully expecting this to become one of my favourite albums of 2020.

Metal Commando

Greenback backup

Paul Grenyer from Paul Grenyer


Why

When Naked Element was still a thing, we used DigitalOcean almost exclusively for our clients' hosting. For the sorts of projects we were doing it was the most straightforward and cost-effective solution. DigitalOcean provided managed databases, but there was no facility to back them up automatically. This led us to develop a Python-based program which was triggered once a day to perform the backup, push it to AWS S3 and send a confirmation or failure email.

We used Python due to familiarity, ease of use and low installation dependencies. I’ll demonstrate this later on in the Dockerfile. S3 was used for storage as DigitalOcean did not have their equivalent, ‘Spaces’, available in their UK data centre. The closest is in Amsterdam, but our clients preferred to have their data in the UK. 

Fast forward to May 2020 and I’m working on a personal project which uses a PostgreSQL database. I tried to use a combination of AWS and Terraform for the project’s infrastructure (as this is what I am using for my day job) but it just became too much effort to bend AWS to my will and it’s also quite expensive. I decided to move back to DigitalOcean and got the equivalent setup sorted in a day. I could have taken advantage of AWS’ free tier for the database for 12 months, but AWS backup storage is not free and I wanted as much as possible with one provider and within the same virtual private network (VPC).

I was back to needing my own backup solution. The new project I am working on uses Docker to run the main service. My Droplet (that's what DigitalOcean calls its Linux server instances) setup is minimal: non-root user setup, firewall configuration and Docker install. The DigitalOcean Marketplace includes a Docker image, so most of that is done for me with a few clicks. I could have also installed Python and configured a backup program to run each evening. I'd also have to install the right version of the PostgreSQL client, which isn't currently in the default Ubuntu repositories, so it is a little involved. As I was already using Docker it made sense to create a new Docker image to install everything and run a Python program to schedule and perform the backups. Of course some might argue that a whole Ubuntu install and configure in a Docker image is a bit much for one backup scheduler, but once it's done it's done, and it can easily be installed and run elsewhere as many times as needed.

There are two more decisions to note. My new backup solution will use DigitalOcean Spaces, as I'm not bothered about my data being in Amsterdam, and I haven't implemented an email server yet so there are no notification emails. This resulted in me jumping out of bed as soon as I woke each morning to check Spaces to see if the backup had worked, rather than just checking for an email. It took two days to get it all working correctly!

What

I reached for Naked Element's trusty Python backup program, affectionately named Greenback after the arch-enemy of Danger Mouse (Green-back up, get it? No, me neither…), but discovered it was too specific and would need some work. It would, however, serve as a great template to start with.

It's worth noting that I am a long way from a Python expert. I'm in the 'reasonable working knowledge with lots of help from Google' category. The first thing I needed the program to do was create the backup. At this point I was working locally, where I had the correct PostgreSQL client installed. db_backup.py:

import os
import subprocess
from datetime import datetime

db_connection_string=os.environ['DATABASE_URL']

class GreenBack:
    def backup(self):    
        datestr = datetime.now().strftime("%d_%m_%Y_%H_%M_%S")
        backup_suffix = ".sql"
        backup_prefix = "backup_"

        destination = backup_prefix + datestr + backup_suffix
        backup_command = 'sh backup_command.sh ' + db_connection_string + ' ' + destination
        subprocess.check_output(backup_command.split(' '))
        return destination

I want to keep anything sensitive out of the code and out of source control, so I’ve brought in the connection string from an environment variable. The method constructs a filename based on the current date and time, calls an external bash script to perform the backup:

# connection string
# destination
pg_dump $1 > $2

and returns the backup file name. Of course for Ubuntu I had to make the bash script executable. Next I needed to push the backup file to Spaces, which means more environment variables:

region=''
access_key=os.environ['SPACES_KEY']
secret_access_key=os.environ['SPACES_SECRET']
bucket_url=os.environ['SPACES_URL']
backup_folder='dbbackups'
bucket_name='findmytea'

These are needed so that the program can access Spaces. Pushing the backup file also needs another method:

class GreenBack:
    ...
    def archive(self, destination):
        session = boto3.session.Session()
        client = session.client('s3',
                                region_name=region,
                                endpoint_url=bucket_url,
                                aws_access_key_id=access_key,
                                aws_secret_access_key=secret_access_key)

        client.upload_file(destination, bucket_name, backup_folder + '/' + destination)
        os.remove(destination) 

It’s worth noting that DigitalOcean implemented the Spaces API to match the AWS S3 API so that the same tools can be used. The archive method creates a session and pushes the backup file to Spaces and then deletes it from the local file system. This is for reasons of disk space and security. A future enhancement to Greenback would be to automatically remove old backups from Spaces after a period of time.

The last thing the Python program needs to do is schedule the backups. A bit of Googling revealed an event loop which can be used to do this:

class GreenBack:
    last_backup_date = ""

    def callback(self, n, loop):
        today = datetime.now().strftime("%Y-%m-%d")
        if self.last_backup_date != today:
            logging.info('Backup started')
            destination = self.backup()
            self.archive(destination)
            
            self.last_backup_date = today
            logging.info('Backup finished')
        loop.call_at(loop.time() + n, self.callback, n, loop)
...

event_loop = asyncio.get_event_loop()
try:
    bk = GreenBack()
    bk.callback(60, event_loop)
    event_loop.run_forever()
finally:
    logging.info('closing event loop')
    event_loop.close()

On startup, callback is executed. It checks last_backup_date against the current date and, if they don't match, it runs the backup and updates last_backup_date. Whether or not a backup was run, the callback method is then re-added to the event loop with a one minute delay. Calling event_loop.run_forever after the initial callback call means the program will wait forever and the process continues.

Now that I had a Python backup program, I needed to create a Dockerfile that would be used to create a Docker image to set up the environment and start the program:

FROM ubuntu:xenial as ubuntu-env
WORKDIR /greenback

RUN apt update
RUN apt -y install python3 wget gnupg sysstat python3-pip

RUN pip3 install --upgrade pip
RUN pip3 install boto3 --upgrade
RUN pip3 install asyncio --upgrade

RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main' > /etc/apt/sources.list.d/pgdg.list
RUN wget https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key add ACCC4CF8.asc

RUN apt update
RUN apt -y install postgresql-client-12

COPY db_backup.py ./
COPY backup_command.sh ./

ENTRYPOINT ["python3", "db_backup.py"]

The Dockerfile starts with an Ubuntu image. This is a bare bones, but fully functioning Ubuntu operating system. The Dockerfile then installs Python, its dependencies and the Greenback dependencies. Then it installs the PostgreSQL client, including adding the necessary repositories. Following that it copies the required Greenback files into the image and tells it how to run Greenback.
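
Built and pushed manually, that looks something like this (a sketch which assumes you're already logged in to Docker Hub with docker login and are in the directory containing the Dockerfile):

docker build -t findmytea/greenback:latest .
docker push findmytea/greenback:latest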

I like to automate as much as possible so while I did plenty of manual Docker image building, tagging and pushing to the repository during development, I also created a BitBucket Pipeline, which would do the same on every check in:

image: python:3.7.3

pipelines:
  default:
    - step:
          services:
            - docker
          script:
            - IMAGE="findmytea/greenback"
            - TAG=latest
            - docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
            - docker build -t $IMAGE:$TAG .
            - docker push $IMAGE:$TAG

Pipelines, BitBucket's cloud-based Continuous Integration and Continuous Deployment feature, is familiar with Python and Docker, so it was quite simple to make it log in to Docker Hub, build, tag and push the image. To enable the pipeline all I had to do was add the bitbucket-pipelines.yml file to the root of the repository, check it in, follow the BitBucket Pipelines process in the UI to enable it and then add the build environment variables so the pipeline could log into Docker Hub. I'd already created the image repository in Docker Hub.

The Greenback image shouldn’t change very often and there isn’t a straightforward way of automating the updating of Docker images from Docker Hub, so I wrote a bash script to do it, deploy_greenback:

sudo docker pull findmytea/greenback
sudo docker kill greenback
sudo docker rm greenback
sudo docker run -d --name greenback --restart always --env-file=.env findmytea/greenback:latest
sudo docker ps
sudo docker logs -f greenback

Now, with a single command I can fetch the latest Greenback image, stop and remove the currently running image instance, install the new image, list the running images to reassure myself the new instance is running and follow the Greenback logs. When the latest image is run, it is named for easy identification, configured to restart when the Docker service is restarted and told where to read the environment variables from. The environment variables are in a local file called .env:

DATABASE_URL=...
SPACES_KEY=...
SPACES_SECRET=...
SPACES_URL=https://ams3.digitaloceanspaces.com
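
With the script saved as deploy_greenback alongside the .env file, deploying or updating is then just (making it executable only needs doing once):

chmod +x deploy_greenback
./deploy_greenback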

And that’s it! Greenback is now running in a Docker image instance on the application server and backs up the database to Spaces just after midnight every night.

Finally

While Greenback isn't a perfect solution, it works, it's configurable, it's a good platform for future enhancements and it should require minimal configuration to be used with other projects in the future.

Greenback is checked into a public BitBucket repository and the full code can be found here:

https://bitbucket.org/findmytea/greenback/

The Greenback Docker image is in a public repository on Docker Hub and can be pulled with Docker:

docker pull findmytea/greenback

Test Driven Terraform [Online – Video Conf] – 7pm, 2 April 2020.

Paul Grenyer from Paul Grenyer


We'll use TDD to create a Terraform module which builds a Heroku app and deploys a simple React application.

If you'd like to follow along, you'll need the following prerequisites:

  • Terraform installed
  • Go 1.14 installed
  • Heroku account - HEROKU_API_KEY added to environment variables.
  • Git installed
  • BitBucket account

This meetup will be via Zoom:- https://zoom.us/j/902141920

Please RSVP here: https://www.meetup.com/Norfolk-Developers-NorDev/events/269640463/

Insomnium – Norwich

Paul Grenyer from Paul Grenyer

Something my first proper girlfriend said to me has stuck with me my entire life, as I disagree with it (mostly). She said that the best way to discover a new band was to see them live first. The reason I disagree is that I get most pleasure from knowing the music I am listening to live - most of the time.

I’m a member of the Bloodstock Rock Society and their Facebook page is often a place of band discussion. Lots of people there were saying how good Insomnium are, but they didn’t do a great deal for me when I listened to them on Spotify. Then it was early 2020, I hadn’t been to a gig since Shakespears Sister in November, I fancied a night out and Insomnium were playing in Norwich. So I took a chance….

From the off they were great live and I really enjoyed it. I came to the conclusion that I must like some of their older stuff, as it was the new album which hadn't done much for me. There were lots of things I like - widdly guitars, metal riffs and blast beats - but what really lets Insomnium down is the vocals. Death metal vocals, to a certain extent, are death metal vocals, but this guy sounded like he was singing a different song in a different band - it's the same on the album I tried. If the vocals were more suited to the music, as they are with Wintersun, it would be even better. I also learned that Norwich City's current star player is from the same town in Finland as the band.

The first thing I did this morning was look up which albums the setlist was from and make a list:

  • One for Sorrow
  • Across the Dark
  • Above the Weeping World
  • Shadows of the Dying Sun
  • Heart like a Grave

And then I died a little inside at the prices on Amazon and eBay. I think I'll be playing a lot of Insomnium on Spotify for the time being, so I'm ready to enjoy them to the full next time.



A review: .Net Core in Action

Paul Grenyer from Paul Grenyer

.Net Core in Action
by Dustin Metzgar
ISBN-13: 978-1617294273

I still get a fair amount of flak for buying and reading technical books in the 21st Century - almost as much as I get for still buying and listening to CDs. If I were a vinyl-loving hipster, it would be different of course…. However, books like .Net Core in Action are a perfect example of why I do it. I needed to learn what .Net Core was and get a feel for it very quickly, and that is what this book allowed me to do.

I’ve been very sceptical of .Net development for a number of years, mostly due to how large I perceived the total cost of ownership and the startup cost to be and the fact that you have to use Windows.  While this was previously true, .Net Core is different and .Net Core in Action made me understand that within the first few pages of the first chapter. It also got me over my prejudice towards Docker by the end of the second chapter.

The first two chapters are as you would expect: an introduction followed by various Hello World examples. Then it gets a bit weird as the book dives into the build system next, then unit testing (actually, this is good so early), and then two chapters on connecting to relational databases, writing data access layers and ORMs. There's a sensible chapter on microservices before the weirdness returns with chapters on debugging, performance profiling and internationalisation. I can kind of see how the author is trying to show the reader the way different parts of .Net Core work on different platforms (Windows, Linux, Mac), but this relatively small volume could have been more concise.




DevelopHER Overall Award 2019

Paul Grenyer from Paul Grenyer

I was honoured and delighted to be asked to judge and present the overall DevelopHER award once again this year. Everyone says choosing a winner is difficult. It may be a cliche, but that doesn’t change the fact that it is.

When the 13 category winners came across my desk I read through them all and reluctantly got it down to seven. Usually on a first pass I like to have it down to three or four and then all I need to agonise over is the order. Luckily on the second pass I was able to be ruthless and get it down to four.

To make it even more difficult, three of my four fell into three categories I am passionate about:

  • Technical excellence and diversity
  • Automated Testing
  • Practical, visual Agile

And the fourth achieved results for her organisation which just couldn’t be ignored.

So I read and reread and ordered and re-ordered. Made more tea, changed the CD and re-read and re-ordered some more. Eventually it became clear.

Technical excellence and the ability of a software engineer to turn their hand to new technologies are vital. When I started my career there were basically two main programming languages, C++ and Java. C# came along soon after, but most people fell into one camp or another and a few of us crossed over. Now there are many, many more to choose from, and lots of young engineers decide to specialise in one and are reluctant to learn and use others. This diminishes us all as an industry. So someone who likes to learn new and different technologies is a jewel in any company's crown.

The implementation of Agile methodologies in software development is extremely important. Software, by its very nature, is complex. Only on the most trivial projects does the solution the users need look anything like what they thought they wanted at the beginning. Traditional waterfall approaches to software development do not allow for this. The client requires flexibility and we as software engineers need the flexibility to deliver what they need. Software development is a learning process for both the client and the software engineer. Agile gives us a framework for this. Unlike many of the traditional methods, Agile has the flexibility to be agile itself, giving continuous improvement.

When implementing Agile processes, the practices are often forgotten or neglected, and in many ways they are more important. Not least of these is automated testing: the practice of writing code which tests your code and running it at least on every check-in. This gives you a safety net so that code you've already written isn't broken by new code you write. And when it is, the tests tell you; they tell you what's wrong and where it's wrong. We need more of this as an industry, and that is why I chose Rita Cristina Leitao, an automated software tester from Switch Studios, as the overall DevelopHER winner.