DigitalOcean’s PaaS Goes BETA

Paul Grenyer from Paul Grenyer

Make no mistake, I LOVE DigitalOcean! It’s simple to use and reasonably priced, especially compared to some of its better known competitors. They even respond quickly to queries on Twitter!

A couple of days ago I received an email from DigitalOcean inviting me to try out their new Beta 2 for App Platform (DigitalOcean’s PaaS product) which they described as follows:

“It handles common infrastructure-related tasks like provisioning and managing servers, databases, operating systems, application runtimes, and other dependencies. This means you can go from code to production in just minutes. We support Node.js, Python, Ruby, Go, PHP, and static sites right out of the box. If you have apps in other languages, simply create a Dockerfile and App Platform will do the rest. You can deploy apps directly from your Github repos and scale them (vertically and horizontally) if needed.”

I’m also a fan of Heroku for its ease of application deployment and, with the exception of a few AWS services, Heroku is the only platform other than DigitalOcean which I regularly use for deploying my projects. I use Heroku because I don’t have to provision a Droplet (a Linux server on DigitalOcean) to run a simple web application. Now that DigitalOcean has a similar service there’s a good chance I won’t need Heroku.

The DigitalOcean App Platform (which I’ll refer to as ‘Apps’ from here on) doesn’t yet have as many features as Heroku, but the corresponding features which Apps does support are much simpler to work with. There are basically three types of applications you can run: a static website (static), a web application (service) or a worker. A worker is basically a service without any routing and can be used for background tasks. As with Heroku, you can add databases as well.

Apps is currently in Beta, which means it’s an initial release of a potential future product. Customers who participate in DigitalOcean’s beta programs have the opportunity to test, validate, and provide feedback on future functionality, which helps DigitalOcean to focus their efforts on what provides the most value to their customers.

  • Customer availability: Participation in beta releases is by invitation, and customers may choose not to participate. Beta invitations may be public or private. (How exciting, they picked me!).
  • Support: Beta releases are unsupported.
  • Production readiness: Beta releases may not be appropriate for production-level workloads.
  • Regions: Beta offerings may only be available in select regions.
  • Charges: Beta offerings may be charged or free. However, free use of beta offerings may be discontinued at any point in time.
  • Retirement: At the end of a beta release, DigitalOcean will determine whether to continue an offering through its lifecycle. We reserve the right to change the scope of or discontinue a Beta product or feature at any point in time without notice, as outlined in our terms of service.

I was (am!) very excited and decided to try a few things out on Apps. Below you’ll find what I tried, how I did it and some of what I learnt.

Static Asset App

The simplest type of ‘app’ which you can deploy to Apps is a static website, and it really is straightforward. Remember the days when you would develop a website by creating files in a directory and opening them locally in a browser? Well, once you’ve done that you can just push them to GitHub and they’re on the web!

1. Create a new GitHub repository - it can be private or public.

2. Add a single index.html file, e.g:

<!doctype html>
<html lang="en">
  <head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous">
    <title>Hello, App Platform!</title>
  </head>
  <body>
    <h1>Hello, App Platform!</h1>
    <!-- Optional JavaScript -->
    <!-- jQuery first, then Popper.js, then Bootstrap JS -->
    <script src="" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
    <script src="" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" crossorigin="anonymous"></script>
    <script src="" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script>
  </body>
</html>

I’ve used the Bootstrap Hello, World! example as it brings in CSS and JavaScript, but any HTML example will do.

3. Log into DigitalOcean and select Apps from the left-hand menu.

4. If it’s your first App click ‘Launch App’. Otherwise click ‘Create App’.

5. Then click ‘GitHub’. If this is your first App, select ‘Configure your GitHub permissions’ and follow the instructions to link your GitHub account.

6. Back in Apps, select your new repository from the dropdown list and click Next.

On the next page you’ll be asked to choose a name for the app, select the branch to use from your repository and configure ‘Autodeploy on Push’.

7. Update the name of the app if you want to, leave the rest of the defaults as they are and click Next.

On the next page you have the option to add build and run commands. You don’t need any for a simple HTML app.

8. On the ‘Choose Build Configuration’ page click ‘Launch App’ to deploy the app and wait while Apps configures the app.

9. After receiving the ‘Deployed successfully!’ message, click the ‘Live App’ link to launch the app in a new tab.

That’s it! Your HTML page is now live on DigitalOcean’s App Platform. You can treat your repository just like the root directory of a website and add pages, images and JavaScript as you need. Just add them to the repository, commit, push and wait for them to be deployed.

Apps will generate a URL with a subdomain which is a combination of the name of your app and a unique sequence of characters, on DigitalOcean’s shared app domain. You can configure a custom domain from the app’s settings tab and Apps provides a CNAME for redirection.

Node App

The next step up from a static asset app is a simple node webapp. Apps will install Node.js and your app’s dependencies for you and then fire up the app.

I was hoping to be able to deploy a very simple node webapp such as:

var http = require('http');

http.createServer(function (req, res) {
  res.write('Hello, App Platform!');
  res.end();
}).listen(process.env.PORT || '3000');

But this seemed to confuse Apps. It requires a package-lock.json file, which is generated by running npm install, to be checked into the repository, and the app didn’t deploy successfully until I added the express package.

1. Create a new directory for a simple node project and move into it.

2. Run npm init at the command line. Enter a name for the app and accept the other defaults.

3. Add a .gitignore file containing:

node_modules/

so that dependencies are not checked into the repository.

4. Add the Express package:

npm install express --save

This will also generate the package-lock.json file which Apps needs, and which must be checked into the repository with the other files.

5. Create an index.js file at the root of the project:

const express = require('express')
const app = express()
const port = process.env.PORT || '3000';

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})

Apps injects the port the webapp should run on as an environment variable called PORT. This is easily read by Node.js as shown.

6. Add a start command to the scripts section in package.json:

"scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
}

7. Create a new GitHub repository, it can be private or public, and check your node project into it.

Then follow from step 3 of the Static Asset app above. Note that at step 8, Apps has automatically configured npm start as the run command, having detected a Node application, and you can select the pricing plan on the same screen.

WARNING: Node applications are NOT free on DigitalOcean App Platform. Make sure you delete unwanted applications from the Settings tab.

Docker App

As well as Node.js, Apps appears to support Ruby, Go and Python natively, among others. What about .Net or other languages and platforms? For those, Apps supports Docker. Let’s see if we can get a simple dotnet core application running in Apps.

1. Create a new directory for a dotnet core project (e.g. dotnetcore) and move into it.

2. Create a dotnet core web application:

dotnet new webapp

3. Add a Dockerfile to the project:

FROM AS build-env
WORKDIR /app

# Copy everything else and build
COPY . ./
RUN dotnet publish dockercore.csproj -c Release -o out

# Build runtime image
FROM
WORKDIR /app
COPY --from=build-env /app/out .

ENTRYPOINT [ "dotnet", "dockercore.dll" ]

Apps injects the port the webapp should run on as an environment variable called PORT, so make sure your application reads it and listens on it.

4. To make sure the application runs on the port injected add the following UseUrls method call in program.cs:

public static IHostBuilder CreateHostBuilder(string[] args)
{
    var port = Environment.GetEnvironmentVariable("PORT") ?? "3000"; // fallback for local runs

    return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
            webBuilder.UseUrls($"http://+:{port}");
        });
}

5. To prevent the application trying to redirect to a non-existent SSL port, remove or comment out the following line from startup.cs:

// app.UseHttpsRedirection();

6. Building a dotnet core application generates a lot of intermediate files that you don’t want to check in, so add an appropriate .gitignore file to the root of the project.

7. Create a new GitHub repository, it can be private or public, and check your dotnet core project into it.

Then follow from step 3 of the Static Asset app above. Note that at step 8, Apps has detected the Dockerfile and doesn’t offer the option of build commands. You don’t need to specify any run commands and you can select the pricing plan on the same screen.

WARNING: Docker-based applications are NOT free on DigitalOcean App Platform. Make sure you delete unwanted applications from the Settings tab.


There was one big disadvantage for me and that’s the lack of a free tier for anything more advanced than a static web application. The cost isn’t extortionate, but it’s quite a bit for hobby programmers. If you want to use a database on top there’s a further cost, whereas this is free to begin with on Heroku.

Apps currently only supports GitHub. You can use private repositories, which is great, but I’d like to see BitBucket support as well. Heroku has its own git repositories as well as supporting external repositories. 

I’d also like there to be Terraform support for Apps as there is for the rest of the DigitalOcean services. However, given that Apps is in Beta, I can see why it isn’t supported yet.

Overall, Apps was very easy to use, with a much shallower learning curve than Heroku. DigitalOcean, do you think we could have AWS-style Lambdas next, please?

Metal Commando – Primal Fear

Paul Grenyer from Paul Grenyer

I’d been eagerly anticipating this release and I wasn’t disappointed. While it lacks the epic nature of Seven Seals and New Religion until the final (13 minute!) track, it’s packed full of solid power metal songs. Unlike most albums, I found it instantly enjoyable on first listen. Other than the lack of epicness, my only complaint is that it’s rather short. I’m fully expecting this to become one of my favourite albums of 2020.

Metal Commando

Greenback backup

Paul Grenyer from Paul Grenyer


When Naked Element was still a thing, we used DigitalOcean almost exclusively for our clients’ hosting. For the sorts of projects we were doing it was the most straightforward and cost-effective solution. DigitalOcean provided managed databases, but there was no facility to automatically back them up. This led us to develop a Python-based program which was triggered once a day to perform the backup, push it to AWS S3 and send a confirmation or failure email.

We used Python due to familiarity, ease of use and low installation dependencies. I’ll demonstrate this later on in the Dockerfile. S3 was used for storage as DigitalOcean did not have their equivalent, ‘Spaces’, available in their UK data centre. The closest is in Amsterdam, but our clients preferred to have their data in the UK. 

Fast forward to May 2020 and I’m working on a personal project which uses a PostgreSQL database. I tried to use a combination of AWS and Terraform for the project’s infrastructure (as this is what I am using for my day job) but it just became too much effort to bend AWS to my will and it’s also quite expensive. I decided to move back to DigitalOcean and got the equivalent setup sorted in a day. I could have taken advantage of AWS’ free tier for the database for 12 months, but AWS backup storage is not free and I wanted as much as possible with one provider and within the same virtual private network (VPC).

I was back to needing my own backup solution. The new project I am working on uses Docker to run the main service. My Droplet (that’s what DigitalOcean calls its Linux server instances) setup is minimal: non-root user setup, firewall configuration and a Docker install. The DigitalOcean Marketplace includes a Docker image, so most of that is done for me with a few clicks. I could have also installed Python and configured a backup program to run each evening, but I’d also have to install the right version of the PostgreSQL client, which isn’t currently in the default Ubuntu repositories, so that is a little involved. As I was already using Docker, it made sense to create a new Docker image to install everything and run a Python program to schedule and perform the backups. Of course some might argue that a whole Ubuntu install and configure in a Docker image is a bit much for one backup scheduler, but once it’s done it’s done, and it can easily be installed and run elsewhere as many times as needed.

There are two more decisions to note. My new backup solution uses DigitalOcean Spaces, as I’m not bothered about my data being in Amsterdam, and as I haven’t implemented an email server yet there are no notification emails. This resulted in me jumping out of bed as soon as I woke each morning to check Spaces to see if the backup had worked, rather than just checking for an email. It took two days to get it all working correctly!


I reached for Naked Element’s trusty Python backup program, affectionately named Greenback after the arch enemy of Danger Mouse (Green-back up, get it? No, me neither…), but discovered it was too specific and would need some work. It would, however, serve as a great template to start with.

It’s worth noting that I am a long way from a Python expert; I’m in the ‘reasonable working knowledge with lots of help from Google’ category. The first thing I needed the program to do was create the backup. At this point I was working locally, where I had the correct PostgreSQL client installed:


import os
import subprocess
from datetime import datetime

# The connection string is read from the environment (variable name assumed)
db_connection_string = os.environ['DB_CONNECTION_STRING']

class GreenBack:
    def backup(self):
        datestr ="%d_%m_%Y_%H_%M_%S")
        backup_suffix = ".sql"
        backup_prefix = "backup_"

        destination = backup_prefix + datestr + backup_suffix
        backup_command = 'sh ' + db_connection_string + ' ' + destination
        subprocess.check_output(backup_command.split(' '))
        return destination

I want to keep anything sensitive out of the code and out of source control, so I’ve brought in the connection string from an environment variable. The method constructs a filename based on the current date and time and calls an external bash script to perform the backup:

# connection string
# destination
pg_dump $1 > $2

and returns the backup file name. Of course for Ubuntu I had to make the bash script executable. Next I needed to push the backup file to Spaces, which means more environment variables:


so that the program can access Spaces, and another method:

import os
import boto3

class GreenBack:
    def archive(self, destination):
        session = boto3.session.Session()
        # Spaces endpoint, region and keys come from environment
        # variables (these names are assumed)
        client = session.client('s3',
                                region_name=spaces_region,
                                endpoint_url=spaces_endpoint,
                                aws_access_key_id=spaces_key,
                                aws_secret_access_key=spaces_secret)
        client.upload_file(destination, bucket_name, backup_folder + '/' + destination)
        os.remove(destination)

It’s worth noting that DigitalOcean implemented the Spaces API to match the AWS S3 API, so that the same tools can be used. The archive method creates a session, pushes the backup file to Spaces and then deletes it from the local file system, for reasons of disk space and security. A future enhancement to Greenback would be to automatically remove old backups from Spaces after a period of time.

The last thing the Python program needs to do is schedule the backups. A bit of Googling revealed an event loop which can be used to do this:

import asyncio
import logging
from datetime import datetime

log = logging.getLogger('greenback')  # logger setup assumed

class GreenBack:
    last_backup_date = ""

    def callback(self, n, loop):
        today ="%Y-%m-%d")
        if self.last_backup_date != today:
  'Backup started')
            destination = self.backup()
            self.archive(destination)
            self.last_backup_date = today
  'Backup finished')
        loop.call_at(loop.time() + n, self.callback, n, loop)

event_loop = asyncio.get_event_loop()
    bk = GreenBack()
    bk.callback(60, event_loop)
    event_loop.run_forever()
finally:'closing event loop')
    event_loop.close()

On startup, callback is executed. It checks last_backup_date against the current date and, if they don’t match, runs the backup and updates last_backup_date. In either case, the callback method is then added back to the event loop with a one minute delay. Calling event_loop.run_forever after the initial callback call means the program will wait forever and the process continues.
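The reschedule-yourself pattern is easier to see in miniature. This toy version (my sketch, not Greenback itself) replaces the backup work with a counter, ticks three times on a very short interval and then stops the loop:

```python
import asyncio

def tick(interval, loop, state):
    # Do the periodic work (here, just count), then reschedule ourselves,
    # exactly as Greenback's callback re-adds itself with call_at.
    state['ticks'] += 1
    if state['ticks'] >= 3:
        loop.stop()
        return
    loop.call_at(loop.time() + interval, tick, interval, loop, state)

loop = asyncio.new_event_loop()
state = {'ticks': 0}
tick(0.01, loop, state)   # first call happens immediately, like Greenback's
loop.run_forever()        # keeps the process alive between callbacks
loop.close()
print(state['ticks'])
```

The first call does its work synchronously; run_forever then only wakes up when a scheduled callback is due, which is why the real program can sit idle between daily backups at almost no cost.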

Now that I had a Python backup program I needed to create a Dockerfile that would be used to create a Docker image to setup the environment and start the program:

FROM ubuntu:xenial as ubuntu-env
WORKDIR /greenback

RUN apt update
RUN apt -y install python3 wget gnupg sysstat python3-pip

RUN pip3 install --upgrade pip
RUN pip3 install boto3 --upgrade
RUN pip3 install asyncio --upgrade

RUN echo 'deb xenial-pgdg main' > /etc/apt/sources.list.d/pgdg.list
RUN wget
RUN apt-key add ACCC4CF8.asc

RUN apt update
RUN apt -y install postgresql-client-12

COPY . .

ENTRYPOINT ["python3", ""]

The Dockerfile starts with an Ubuntu image. This is a bare bones, but fully functioning Ubuntu operating system. The Dockerfile then installs Python, its dependencies and the Greenback dependencies. Then it installs the PostgreSQL client, including adding the necessary repositories. Following that it copies the required Greenback files into the image and tells it how to run Greenback.

I like to automate as much as possible, so while I did plenty of manual Docker image building, tagging and pushing to the repository during development, I also created a BitBucket Pipeline which would do the same on every check-in:

image: python:3.7.3

  default:
    - step:
        services:
          - docker
          - IMAGE="findmytea/greenback"
          - TAG=latest
          - docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
          - docker build -t $IMAGE:$TAG .
          - docker push $IMAGE:$TAG

Pipelines, BitBucket’s cloud-based Continuous Integration and Continuous Deployment feature, is familiar with Python and Docker, so it was quite simple to make it log in to Docker Hub, build, tag and push the image. To enable the pipeline all I had to do was add the bitbucket-pipelines.yml file to the root of the repository, check in, follow the BitBucket pipeline process in the UI to enable it and then add the build environment variables so the pipeline could log into Docker Hub. I’d already created the image repository in Docker Hub.

The Greenback image shouldn’t change very often and there isn’t a straightforward way of automating the updating of Docker images from Docker Hub, so I wrote a bash script to do it, deploy_greenback:

sudo docker pull findmytea/greenback
sudo docker kill greenback
sudo docker rm greenback
sudo docker run -d --name greenback --restart always --env-file=.env findmytea/greenback
sudo docker ps
sudo docker logs -f greenback

Now, with a single command I can fetch the latest Greenback image, stop and remove the currently running image instance, install the new image, list the running images to reassure myself the new instance is running and follow the Greenback logs. When the latest image is run, it is named for easy identification, configured to restart when the Docker service is restarted and told where to read the environment variables from. The environment variables are in a local file called .env:


And that’s it! Greenback is now running in a Docker image instance on the application server and backs up the database to Spaces just after midnight every night.


While Greenback isn’t a perfect solution, it works, it’s configurable, it’s a good platform for future enhancements and it should require minimal configuration to be used with other projects in the future.

Greenback is checked into a public BitBucket repository and the full code can be found here:

The Greenback Docker image is in a public repository on Docker Hub and can be pulled with Docker:

docker pull findmytea/greenback

Test Driven Terraform [Online – Video Conf] – 7pm, 2 April 2020.

Paul Grenyer from Paul Grenyer

We'll use TDD to create a Terraform module which builds a Heroku app and deploys a simple React application.

If you'd like to follow along, you'll need the following prerequisites:

  • Terraform installed
  • Go 1.14 installed
  • Heroku account - HEROKU_API_KEY added to environment variables.
  • Git installed
  • BitBucket account

This meetup will be via Zoom.

Please RSVP here:

Insomnium – Norwich

Paul Grenyer from Paul Grenyer

Something my first proper girlfriend said to me has stuck with me my entire life as I disagree with it (mostly).  She said that the best way to discover a new band was to see them live first. The reason I disagree is because I get most pleasure from knowing the music I am listening to live - most of the time.

I’m a member of the Bloodstock Rock Society and their Facebook page is often a place of band discussion. Lots of people there were saying how good Insomnium are, but they didn’t do a great deal for me when I listened to them on Spotify. Then it was early 2020, I hadn’t been to a gig since Shakespears Sister in November, I fancied a night out and Insomnium were playing in Norwich. So I took a chance….

From the off they were great live and I really enjoyed it. I came to the conclusion that I must like some of their older stuff, as it was the new album which hadn’t done much for me. There were lots of things I like: widdly guitars, metal riffs and blast beats. But what really lets Insomnium down is the vocals. Death metal vocals, to a certain extent, are death metal vocals, but this guy sounded like he was singing a different song in a different band - it’s the same on the album I tried. If the vocals were more suited to the music, as they are with Wintersun, it would be even better. I also learned that Norwich City’s current star player is from the same town in Finland as the band.

The first thing I did this morning was look up which albums the setlist was from and make a list:

  • One for Sorrow
  • Across the Dark
  • Above the Weeping World
  • Shadows of the Dying Sun
  • Heart like a Grave

And then die a little inside at the prices on Amazon and eBay. I think I’ll be playing a lot of Insomnium on Spotify for the time being so I’m ready to enjoy them to the full next time.

A review: .Net Core in Action

Paul Grenyer from Paul Grenyer

.Net Core in Action
by Dustin Metzgar
ISBN-13: 978-1617294273

I still get a fair amount of flak for buying and reading technical books in the 21st century - almost as much as I get for still buying and listening to CDs. If I was a vinyl-loving hipster, it would be different of course… However, books like .Net Core in Action are a perfect example of why I do it. I needed to learn what .Net Core was and get a feel for it very quickly, and that is what this book allowed me to do.

I’ve been very sceptical of .Net development for a number of years, mostly due to how large I perceived the total cost of ownership and the startup cost to be, and the fact that you had to use Windows. While this was previously true, .Net Core is different, and .Net Core in Action made me understand that within the first few pages of the first chapter. It also got me over my prejudice towards Docker by the end of the second chapter.

The first two chapters are as you would expect: an introduction followed by various Hello World examples. Then it gets a bit weird as the book dives into the build system next, then unit testing (actually, it’s good to see this so early), and then two chapters on connecting to relational databases, writing data access layers and ORMs. There’s a sensible chapter on microservices before the weirdness returns with chapters on debugging, performance profiling and internationalisation. I can kind of see how the author is trying to show the reader the way different parts of .Net Core work on different platforms (Windows, Linux, Mac), but this relatively small volume could have been more concise.

DevelopHER Overall Award 2019

Paul Grenyer from Paul Grenyer

I was honoured and delighted to be asked to judge and present the overall DevelopHER award once again this year. Everyone says choosing a winner is difficult. It may be a cliche, but that doesn’t change the fact that it is.

When the 13 category winners came across my desk I read through them all and reluctantly got it down to seven. Usually on a first pass I like to have it down to three or four and then all I need to agonise over is the order. Luckily on the second pass I was able to be ruthless and get it down to four.

To make it even more difficult, three of my four fell into three categories I am passionate about:

  • Technical excellence and diversity
  • Automated Testing
  • Practical, visual Agile

And the fourth achieved results for her organisation which just couldn’t be ignored.

So I read and reread and ordered and re-ordered. Made more tea, changed the CD and re-read and re-ordered some more. Eventually it became clear.

Technical excellence and the ability of a software engineer to turn their hand to new technologies are vital. When I started my career there were basically two main programming languages, C++ and Java. C# came along soon after, but most people fell into one camp or another and a few of us crossed over. Now there are many, many more to choose from, and lots of young engineers decide to specialise in one and are reluctant to learn and use others. This diminishes us all as an industry. So someone who likes to learn new and different technologies is a jewel in any company’s crown.

The implementation of Agile methodologies in Software Development is extremely important. Software, by its very nature is complex. Only on the most trivial projects does the solution the users need look anything like what they thought they wanted at the beginning. Traditional waterfall approaches to software development do not allow for this. The client requires flexibility and we as software engineers need the flexibility to deliver what they need. Software development is a learning process for both the client and the software engineer. Agile gives us a framework for this. Unlike many of the traditional methods, Agile has the flexibility to be agile itself, giving continuous improvement.

When implementing Agile processes, the practices are often forgotten or neglected, and in many ways they are more important. Not least of these is automated testing: the practice of writing code which tests your code and running it at least on every check-in. This gives you a safety net so that code you’ve already written isn’t broken by new code you write. And when it is, the tests tell you; they tell you what’s wrong and where it’s wrong. We need more of this as an industry, and that is why I chose Rita Cristina Leitao, an automated software tester from Switch Studios, as the overall DevelopHER winner.

Shakespears Sister, Ipswich, November 2019

Paul Grenyer from Paul Grenyer

I was very surprised and excited, and then immediately disappointed, to see Shakespears Sister on the Graham Norton show. They performed Stay, their big hit (the longest-running number one single in the UK by a female act, at 8 weeks), but Marcella wasn’t even trying to hit the high notes and it was awful. We decided to go and see them on tour anyway, as it was potentially a once in a lifetime experience before they fell out again.

The Ipswich Regent was half empty in the stalls and the circle was closed, and oddly there were quite a few security guards - apparently at the request of the band. Encouragingly, Shakespears Sister came on on time and they sounded good! As they ploughed through many of their well known songs, new songs and a few older, more obscure songs, the vocals were strong from both Marcella and Siobhan.

The rhythm section was incredible. The drumming was tight, varied and interesting, but what really stood out was the bass. I think part of this was that the player had fantastic bass lines to play, but she also oozed talent. It’s really uncommon for a bass player to need to change bass guitars between songs, but Clare Kenny swapped frequently. It’s just a shame that the lead guitar player was totally unremarkable and I’ve no idea what the keyboard player was for.

The highlight that I, and I imagine many others, had been looking forward to was Stay. It was better than on Graham Norton, but it’s clear that Marcella cannot reach the highest notes and, live, she doesn’t try. It was still a good performance of a fantastic song.

Would I go and see them again? Probably not, unless I was dragged.


True North – Borknagar

Paul Grenyer from Paul Grenyer

I was pretty sure I had seen Borknagar support Cradle of Filth at the Astoria 2 in the ‘90s. It turns out that was Opeth and Extreme Noise Terror, so I don’t really remember how I got into them now.

Whatever the reason was, I really got into their 2000 album Quintessence. At the time I didn’t really enjoy their previous album, The Archaic Course, much, so with the exception of the occasional relisten to Quintessence, Borknagar went by the wayside for me. That was until ICS Vortex got himself kicked out of Dimmu Borgir for allegedly poor performances, produced a really rather bland and unlistenable solo album called Storm Seeker, and then rejoined Borknagar properly. That’s when things got interesting.

ICS Vortex has an incredible voice. When he joined Dimmu Borgir as bassist and second vocalist in time for Spiritual Black Dimensions, he brought a new dimension (pun intended) to an already amazing band. I’ve played Spiritual Black Dimensions to death since it came out and I think only Death Cult Armageddon is better.

ICS Vortex’s first album back with Borknagar is called Winter Thrice. Loving his voice and being bitterly disappointed with Storm Seeker I bought it desperately hoping for something more and I wasn’t disappointed. It’s an album with a cold feel and lyrical content about winter and the north. I loved it and played it constantly after release and regularly since. It’s progressive black metal which is the musical equivalent to walking through the snow early on a cold crisp morning.

This year Borknagar released a new album called True North. When I’ve loved an album intensely and the band brings out something new I always feel trepidation. Machine Head never bettered Burn My Eyes; WASP never bettered The Crimson Idol. I could go on, but you get the picture. True North is another album about winter and the north, so I ought to have been on safe ground. But then Arch Enemy have pretty much recorded the same album since Doomsday Machine and never bettered it. They’re all good though.

My first listen to True North was tense, but it didn’t take long for that to dissipate. I had it on daily play for a few weeks, together with the new albums from Winterfylleth and Opeth. True North was so brilliant I thought it might be even better than Winter Thrice. So cautiously I tried Winter Thrice again, but I wasn’t disappointed to find it was still the slightly better album. The brilliant thing is that I now have two similar, but different enough, albums I can enjoy again and again, and other than Enslaved’s In Times, I haven’t found anything else like it.

I hope they do what Evergrey did with Hymns for the Broken, The Storm Within and The Atlantic and make it a set of three. Cross your fingers for me.


The Siege of Mercia: Live at Bloodstock 2017 – Winterfylleth

Paul Grenyer from Paul Grenyer

At school I had this friend, Jamie, and he once said to me that he always preferred having a band’s live album to their studio album of the same songs. His example was Queen’s Live Magic. To me, then, this was madness. It didn’t have all the same songs as A Kind of Magic and the production isn’t as good. Let’s face it, unless it’s Pink Floyd, the production of a live album is never as good as in the studio. I avoided live albums for years.

Pink Floyd’s Pulse was the first live album I really got into and then there was nothing until the teenies when live albums from Emperor, Immortal, Arch Enemy, Dimmu Borgir and Blind Guardian got me completely hooked.

The latest live album I’ve bought is ‘The Siege Of Mercia: Live At Bloodstock 2017’ by Winterfylleth. It’s amazing for a number of reasons. I was at Bloodstock, watching the band, when it was recorded. It’s a fantastic performance of some brilliant songs. It’s got an atmospheric synth version of an old track at the end. And it’s encouraged me to relisten to their studio albums and enjoy them so much more.

Winterfylleth are a black metal band from Manchester. Their lyrics are based around England’s rich culture and heritage. Some of their album covers and songs depict the Peak District. You’d have thought a song about Mam Tor, a hill in the Peak District, might be a bit boring, but it’s not! Maybe because, being black metal, you can’t really hear the words, but as always the vocal style, heavy guitars and fast drums make for the perfect mix.

I currently have The Siege of Mercia, along with some similar new albums from the likes of Borknagar, on my regular daily playlist and it just gets better and better.