Continuous Deployment of a platform and its variants using githook and cron

Tim Pizey from Tim Pizey


We have a prototypical webapp platform which has four variants: commodities, energy, minerals and wood. We use the Maven WAR overlay feature. Don't blame me; it was before my time.

This architecture, with a core platform and four variants, means that one commit to platform can result in five staging sites needing to be redeployed.

Continuous Integration

We have a fairly mature Continuous Integration setup, using Jenkins to build five projects on commit. The team is small enough that we also build each developer's fork. Broken builds on trunk are not common.

NB This setup does deploy broken builds. Use a pipeline if broken staging builds are a problem in themselves.

Of Martin Fowler's Continuous Integration checklist we score in every category but one:

  • Maintain a Single Source Repository
  • Automate the Build
    We build using Maven.
  • Make Your Build Self-Testing
    Maven runs our Spring tests and unit tests (coverage could be higher).
  • Everyone Commits To the Mainline Every Day
    I do; some with better memories keep longer-running branches.
  • Every Commit Should Build the Mainline on an Integration Machine
    Jenkins as a service from Cloudbees.
  • Fix Broken Builds Immediately
    We have a prominently displayed build wall. The approach here deploys broken builds to staging, so we rely upon having none.
  • Keep the Build Fast
    We have reduced the local build to eight minutes; it is twenty-five minutes or so on Jenkins. This is not acceptable and does cause problems, such as tests not being run locally, but increasing the coverage while reducing the run time will not be easy.
  • Test in a Clone of the Production Environment
    There is no difference in kind between the production and development environments. Developers use the same operating system and deployment mechanism as is used on the servers.
  • Make it Easy for Anyone to Get the Latest Executable
    Jenkins deploys to a Maven snapshot repository.
  • Everyone can see what's happening
    Our Jenkins build wall is on display in the coding room.
  • Automate Deployment
    The missing piece, covered in this post.

Continuous, Unchecked, Deployment

Each project has an executable build file, redo:

#!/bin/sh
mvn clean install -DskipTests=true
sudo ./reload

which calls the Tomcat redeployment script, reload:

#!/bin/sh
service tomcat7 stop
rm -rf /var/lib/tomcat7/webapps/ROOT
cp target/ROOT.war /var/lib/tomcat7/webapps/
service tomcat7 start
We can use a githook to call redo when the project is updated:
cd .git/hooks
ln -s ../../redo post-merge
Add a line to the crontab using
crontab -e

*/5 * * * * cd checkout/commodities && git pull -q origin master
This polls for changes to the project code every five minutes; when a change is pulled in, the post-merge hook fires and calls redo.
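The hook-plus-pull mechanism can be seen end to end in a pair of throwaway repositories; a sketch (the repository names and the trivial redo script are invented for the demonstration, and a bare `git pull -q` is used so the default branch name does not matter):

```shell
#!/bin/sh
# Demonstrate that a post-merge hook fires when 'git pull' brings in changes.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# An 'origin' with one commit.
git init -q origin_repo
(cd origin_repo &&
  git config user.email tim@example.com &&
  git config user.name tim &&
  echo one > file && git add file && git commit -qm one)

git clone -q origin_repo work

# A second commit arrives upstream after the clone.
(cd origin_repo && echo two >> file && git commit -qam two)

cd work
git config pull.ff only
# 'redo' stands in for the real build-and-deploy script.
printf '#!/bin/sh\ntouch redeployed\n' > redo
chmod +x redo
ln -s ../../redo .git/hooks/post-merge

git pull -q                # fast-forwards and fires post-merge, which runs redo
test -f redeployed && echo "redo was fired by post-merge"
```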

However we also need to redeploy if a change is made to the prototype, platform.

We can achieve this with a script, continuousDeployment:


#!/bin/sh
# Assumed to be invoked from a derivative project which contains the script redo

cd ../platform
git fetch > change_log.txt 2>&1
if [ -s change_log.txt ]
then
  # Installs by firing the githook post-merge
  git pull origin master
fi
rm change_log.txt
cd -
This is also invoked from the crontab:

*/5 * * * * cd checkout/commodities && ../platform/continuousDeployment
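The change detection in continuousDeployment hinges on `[ -s change_log.txt ]`: a quiet `git fetch` writes nothing when there is nothing new, so a non-empty log file means the remote has moved. A standalone illustration of the `-s` test:

```shell
#!/bin/sh
# [ -s FILE ] succeeds only when FILE exists and has a size greater than zero.
log=$(mktemp)

if [ -s "$log" ]; then echo "changes fetched"; else echo "up to date"; fi

# Simulate fetch output appearing in the log.
echo "From origin: master -> origin/master" >> "$log"

if [ -s "$log" ]; then echo "changes fetched"; else echo "up to date"; fi
rm "$log"
```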

Using multiple SSH keys with git


The problem I have is that I have multiple accounts with Git hosting suppliers (GitHub and Bitbucket), but they both want to keep a one-to-one relationship between users and SSH keys.

For both accounts I am separating work and personal repositories.



In the past I have authorised all my identities on all my repositories; this has resulted in multiple identities being used within one repository, which makes the statistics look a mess.

The Solution

Generate an SSH key for your identity and store it in a named file, for example ~/.ssh/id_rsa_timp.

Add the key to your GitHub or Bitbucket account.

Use an SSH config file, ~/.ssh/config:

Host bitbucket.timp
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/id_rsa_timp

You should now be good to go:

git clone git@bitbucket.timp:timp/project.git

Update .git/config

[remote "bitbucket"]
url = git@bitbucket.timp:timp/wiki.git
fetch = +refs/heads/*:refs/remotes/bitbucket/*
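The same pattern extends to any number of identities, with one Host alias per account. A sketch, where the work key file name, the aliases, and the organisation name are invented examples:

```
# ~/.ssh/config
Host bitbucket.timp
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/id_rsa_timp

Host github.work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_work
```

Then `git clone git@github.work:myorg/project.git` uses the work key, because ssh resolves the alias through the matching Host entry.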

Read Maven Surefire Test Result files using Perl


When you want something quick and dirty it doesn't get dirtier, or quicker, than Perl.

We have four thousand tests and they are taking way too long. To discover why, we need to sort the tests by how long they take to run and see if a pattern emerges. The test runtimes are written to the target/surefire-reports directory. Each file is named for the class of the test it reports on and contains information in the following format:

Test set: com.mycorp.MyTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec

#!/usr/bin/perl -w

use strict;

my %tests;
open(RESULTS, "grep 'Tests run' target/surefire-reports/*.txt |") or die $!;
while (<RESULTS>) {
  # eg target/surefire-reports/com.mycorp.MyTest.txt:Tests run: 3, ..., Time elapsed: 1.03 sec
  next unless m{([^/]+)\.txt:.*Time elapsed:\s+([\d.]+)};
  $tests{$1} = $2;
}
close RESULTS;

my $cumulative = 0.0;
foreach my $key (sort { $tests{$a} <=> $tests{$b} } keys %tests) {
  $cumulative += $tests{$key};
  printf "%s,%02d:%02d,%02d:%02d\n", $key,
    ($cumulative / 60) % 60, $cumulative % 60,
    ($tests{$key} / 60) % 60, $tests{$key} % 60;
}

The resultant CSV can be viewed using a Google chart.

Tomcat7 User Config


Wouldn't it be nice if Tomcat came with the following, commented out, in /etc/tomcat7/tomcat-users.xml?

<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
  <role rolename="manager-gui" />
  <role rolename="manager-status" />
  <role rolename="manager-script" />
  <role rolename="manager-jmx" />

  <role rolename="admin-gui" />
  <role rolename="admin-script" />

  <!-- choose your own username and password -->
  <user username="admin" password="s3cret"
        roles="manager-gui, manager-status, manager-script, manager-jmx, admin-gui, admin-script"/>
</tomcat-users>


Debian Release Code Names – Aide Mémoire


The name series for Debian releases is taken from characters in the Pixar/Disney film Toy Story.


The unstable release is always called Sid as the character in the film took delight in breaking his toys.

A backronym: Still In Development.


The current pending release is always called testing and will have been christened. At the time of writing the testing release is Stretch.


Release 8.0 – Jessie

Jessie is the current stable release.

After a considerable while a release will migrate from testing to stable; it will then become the current stable release, and the previous version will join the head of the list of Obsolete Stable releases.


Release 7.0 – Wheezy

The current head of the list of Obsolete Stable releases.


Release 6.0 – Squeeze

Obsolete Stable release.


Release 5.0 – Lenny

Obsolete Stable release.


Release 4.0 – Etch

Obsolete Stable release.


Release 3.1 – Sarge

Obsolete Stable release.


Release 3.0 – Woody

Obsolete Stable release.


Release 2.2 – Potato

Obsolete Stable release.


Release 2.1 – Slink

Obsolete Stable release.


Release 2.0 – Hamm

Obsolete Stable release.


Release 1.3 – Bo

Obsolete Stable release.


Release 1.2 – Rex

Obsolete Stable release.


Release 1.1 – Buzz

Obsolete Stable release.

Releases before 1.1 did not have code names.
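As an executable form of the aide mémoire, the release-to-name mapping can be scripted; a sketch (the function name is mine):

```shell
#!/bin/sh
# Map a Debian release number to its Toy Story code name.
codename() {
  case "$1" in
    1.1) echo Buzz ;;
    1.2) echo Rex ;;
    1.3) echo Bo ;;
    2.0) echo Hamm ;;
    2.1) echo Slink ;;
    2.2) echo Potato ;;
    3.0) echo Woody ;;
    3.1) echo Sarge ;;
    4.0) echo Etch ;;
    5.0) echo Lenny ;;
    6.0) echo Squeeze ;;
    7.0) echo Wheezy ;;
    8.0) echo Jessie ;;
    *) echo unknown ;;
  esac
}

codename 8.0   # prints Jessie
```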

Groovy: Baby Steps


Posting a form in Groovy, baby steps. Derived from:

@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')
@Grab(group='org.codehaus.groovyfx', module='groovyfx', version='0.3.1')

import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import groovyx.net.http.ContentType

public class Post {

  public static void main(String[] args) {

    def baseUrl = ""

    def ret = null
    def http = new HTTPBuilder(baseUrl)

    http.request(Method.POST, ContentType.TEXT) {
      //uri.path = path
      uri.query = [db: "paneris"]
      headers.'User-Agent' = 'Mozilla/5.0 Ubuntu/8.10 Firefox/3.0.4'

      response.success = { resp, reader ->
        println "response status: ${resp.statusLine}"
        println 'Headers: -----------'
        resp.headers.each { h ->
          println " ${h.name} : ${h.value}"
        }
        ret = reader.getText()

        println '--------------------'
        println ret
        println '--------------------'
      }
    }
  }
}

Delete Jenkins job workspaces left after renaming


When renaming a job, or moving one from slave to slave, Jenkins copies the workspace and does not delete the original. This script can fix that.

#!/usr/bin/env python
import urllib, os, sys
from shutil import rmtree

url = 'http://jenkins/api/python?pretty=true'

# The python API returns a python literal, so eval is the quick and dirty parse.
data = eval(urllib.urlopen(url).read())
jobnames = []
for job in data['jobs']:
    jobnames.append(job['name'])

def clean(path):
    builds = os.listdir(path)
    for build in builds:
        if build not in jobnames:
            build_path = os.path.join(path, build)
            print "removing dir: %s " % build_path
            rmtree(build_path)

# Pass the workspace root, eg /var/lib/jenkins/workspace
clean(sys.argv[1])

Merging two sets of Jacoco Integration Test Coverage Reports for Sonar using Gradle


In a Jenkins build:

./gradlew cukeTest
./gradlew integTest
./gradlew sonarRunner

This leaves three .exec files behind, but SonarQube can use only two: one unit test report and one integration test report. So we merge the two integration test results.

task integCukeMerge(type: JacocoMerge) {
  description = 'Merge test code coverage results from feign and cucumber'
  // This assumes cuke tests have already been run in a separate gradle session

  doFirst {
    delete destinationFile
    // Wait until integration tests have actually finished
    // ('start' is the tomcat start task defined elsewhere in the build)
    println start.process != null ? start.process.waitFor() : "In integCukeMerge tomcat is null"
  }

  executionData fileTree("${buildDir}/jacoco/it/")
}

sonarRunner {
  tasks.sonarRunner.dependsOn integCukeMerge
  sonarProperties {
    property "sonar.projectDescription", "A legacy codebase."
    property "sonar.exclusions", "**/fakes/**/*.java, **/domain/**.java, **/*"

    properties["sonar.tests"] += sourceSets.integTest.allSource.srcDirs

    property "sonar.jdbc.url", "jdbc:postgresql://localhost/sonar"
    property "sonar.jdbc.driverClassName", "org.postgresql.Driver"
    property "sonar.jdbc.username", "sonar"
    property "sonar.host.url", "http://sonar.we7.local:9000"

    def jenkinsBranchName = System.getenv("GIT_BRANCH")
    if (jenkinsBranchName != null) {
      jenkinsBranchName = jenkinsBranchName.substring(jenkinsBranchName.lastIndexOf('/') + 1)
    }
    def branch = jenkinsBranchName ?: ('git rev-parse --abbrev-ref HEAD'.execute().text ?: 'unknown').trim()

    def buildName = System.getenv("JOB_NAME")
    if (buildName == null) {
      property "sonar.projectKey", "${name}"
      def username = System.getProperty('user.name')
      property "sonar.projectName", "~${name.capitalize()} (${username})"
      property "sonar.branch", "developer"
    } else {
      property "sonar.projectKey", "$buildName"
      property "sonar.projectName", name.capitalize()
      property "sonar.branch", "${branch}"
      property "sonar.links.ci", "http://jenkins/job/${buildName}/"
    }

    property "sonar.projectVersion", "git describe --abbrev=0".execute().text.trim()

    property "sonar.java.coveragePlugin", "jacoco"
    property "sonar.jacoco.reportPath", "$project.buildDir/jacoco/test.exec"
    // feign results
    //property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/it/integTest.exec"
    // Cucumber results
    //property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/it/cukeTest.exec"
    // Merged results
    property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/integCukeMerge.exec"

    property "sonar.links.homepage", "${org}/${name}"
  }
}

Remember to use Overall Coverage in any Sonar Quality Gates!

An SSL Truster


Should you wish, when testing say, to trust all SSL certificates:



import java.net.URL;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

import com.mediagraft.shared.utils.UtilsHandbag;

/**
 * Modify the JVM wide SSL trusting so that locally signed https urls, and others, are
 * no longer rejected.
 *
 * Use with care as the JVM should not be used in production after activation.
 */
public class JvmSslTruster {

  private static boolean activated_;

  private static X509TrustManager allTrustingManager_ = new X509TrustManager() {

    public X509Certificate[] getAcceptedIssuers() {
      return null;
    }

    public void checkClientTrusted(X509Certificate[] certs, String authType) {
    }

    public void checkServerTrusted(X509Certificate[] certs, String authType) {
    }
  };

  public static SSLSocketFactory trustingSSLFactory() {
    SSLContext sc = null;
    try {
      sc = SSLContext.getInstance("SSL");
      sc.init(null, new TrustManager[] { allTrustingManager_ }, new SecureRandom());
      new URL(UtilsHandbag.getSecureApplicationURL()); // Force loading of installed trust manager
    }
    catch (Exception e) {
      throw new RuntimeException("Unhandled exception", e);
    }
    return sc.getSocketFactory();
  }

  public static void startTrusting() {
    if (!activated_) {
      HttpsURLConnection.setDefaultSSLSocketFactory(trustingSSLFactory());
      activated_ = true;
    }
  }

  private JvmSslTruster() {
  }
}

Restoring user groups once no longer in sudoers


Ubuntu thinks it is neat not to have a password on root. Hmm.

It is easy to remove yourself from all groups in Linux; I did it like this:

$ useradd -G docker timp

I thought that might add timp to the group docker, which indeed it does, but it also removes you from adm, cdrom, lpadmin, sudo, dip, plugdev, video and audio, which you were previously in.

As you are no longer in sudoers you cannot add yourself back.

Getting a root shell using rEFIt

What we need to get to now is a root shell. Googling did not help.

I have Ubuntu 14.04 LTS installed as a dual boot on my MacBookPro (2011). I installed it using rEFIt.

My normal boot is: power up, select rEFIt, select the most recent Ubuntu, press Enter.

To change the grub boot command, instead of Enter press F2 (or +). You now have three options, one of which is single user; this hangs for me. Move to 'Standard Boot' and again press F2 rather than Enter. This enables you to edit the kernel string. I tried adding Single, single, s, init=/bin/bash, init=/bin/sh, 5, 3 and finally 1.

By adding 1 to the grub kernel line you are telling the machine to boot up only to run level one.

This will boot to a root prompt! Now to add yourself back to your groups:

$ usermod -G docker,adm,cdrom,lpadmin,sudo,dip,plugdev,video,audio timp

Whilst we are here:

$ passwd root

Hope this helps, I hope I don't need it again!

Never use useradd; always use adduser. (To add an existing user to a group without removing them from their other groups, use usermod -aG docker timp: the -a flag appends to the supplementary group list rather than replacing it.)