Signing off 2021 and Not rolling over

Allan Kelly from Allan Kelly

“The decision about what to abandon is by far the most important and most neglected. … No organization which purposefully and systematically abandons the unproductive and obsolete ever wants for opportunities.”

Peter Drucker, Age of Discontinuity

I’m writing this on the last day of 2021 and for once I am confident in predicting that next year will be better than the one that is ending. Like most people I have a few routines I follow at this time of year, and one of these relates specifically to this blog.

I often get asked: how do you get ideas for your blog?

The truth is I have too many ideas for this blog. Right now I have 18 ideas for blog posts that are part written: that might be as little as a title, or a completely written post I haven’t edited yet – plus there is one entry I wrote and then decided was for me alone. When I have an idea for a post I just add an entry to my blog list, normally a title and a few bullet points. Sometimes it is just a title; occasionally I type the whole thing as a stream.

These entries will not be rolled over for 2022. In fact, I just deleted 7 after writing that last paragraph. The remaining 11 will be folded up and put to one side in the MacJournal software I use for blogging. I have already created a 2022 folder for next year.

It is a bit like buying a daily newspaper: once the day is gone there is rarely any point in going back to read the bits you missed. Tomorrow is a new day with a new newspaper, new priorities and new things to read.

In order to have new ideas one must make space for new ideas, and that means throwing away ideas which have not been able to win the competition for attention to date. This idea is embedded in my thinking when I recommend teams using OKRs throw away their backlog. It is why I like Jeff Bezos’ “Every day is Day-1” thinking.

Don’t get weighed down by yesterday’s good ideas; if they are really good ideas they will appear again. If not, then removing them will make space for better ideas. Plus, by practicing having new ideas you will get better at having new ideas.

More importantly: throwing away those ideas and work-in-progress forces one to think about what is really important, what really will make a difference and move you forward. Dispensing with the day-to-day trivia allows one to think big.

In truth, while I just irrevocably deleted seven potential blog entries, the other 11 will not be lost forever. They will just be hidden. If I am stuck in a few months’ time I know where they are – but for that matter, I could equally fish in the unused entries from 2020, 2019, 2018… And in complete honesty, there are two early drafts already in the 2022 folder that I’m keen to write.

So while I say, and advise you: “Throw away the backlog”, I have no objection to someone keeping (some of) those good ideas in a bottom drawer, as long as a) nobody pretends they might be done one day, and b) they do not distract from your focus on the important things.

Finally, if I am to think big for the next year what would I like this blog to carry? – in other words, what might you expect?

Top of the list is to focus on OKRs: I’ve been blown away by the interest since I published “Succeeding with OKRs in Agile” and I know a lot of people are wrestling with OKRs in agile.

I’d also like to focus myself on “small practical bits”. I know I have a tendency to be “philosophical” – in part that is because I believe that our “philosophy” informs our daily actions and decisions, and the pattern of those decisions, far more, and for far longer, than any given list of “10 things you should …” But I also know readers – and book buyers! – like “small practical bits”, so commercially I should do more of those.

Subscribe to my blog newsletter and download Continuous Digital for free

The post Signing off 2021 and Not rolling over appeared first on Allan Kelly.

Fishing for software data

Derek Jones from The Shape of Code

During 2021 I sent around 100 emails whose first line started something like: “I have been reading your interesting blog post…”, followed by some background information, and then a request for software engineering data. Sometimes the request for data was specific (e.g., the data associated with the blog post), and sometimes it was a general request for any data they might have.

So far, these 100 email requests have produced one dataset. Around 80% failed to elicit a reply, compared to a 32% no-reply rate when I contact authors of published papers. Perhaps they don’t have any data, and don’t think a reply is worth the trouble. Perhaps they have some data, but it would be a hassle to get it into a shippable state (I like this idea because it means that at least some people have data). Or perhaps they don’t understand why anybody would be interested in data, conclude that I must be an odd-ball, and decide I am not somebody they want to engage with (I may well be odd, but I don’t bite :-).

Some of those who reply that they don’t have any data also tell me that they don’t understand why I might be interested in data. Over my entire professional career, in many other contexts, I have often encountered surprise that data-driven problem-solving increases the likelihood of reaching a workable solution. The seat-of-the-pants approach to problem-solving is endemic within software engineering.

Others ask what kind of data I am interested in. My reply is that I am interested in human software engineering data; I point out that lots of Open Source code is readily available, but that data relating to the human factors underpinning software development is much harder to find. I point them at my evidence-based book for examples of human-centric software data.

In business, my experience is that people sometimes get in touch years after hearing me speak, or reading something I wrote, to talk about possible work. I am optimistic that the same will happen through my requests for data, i.e., somebody I emailed will encounter some data and think of me 🙂

What is different about 2021 is that I have been more willing to fail, not just asking for data when I encounter somebody who obviously has data. That is to say, my expectation threshold for asking is lower than in previous years, i.e., I am more willing to spend a few minutes crafting a targeted email for what appear to be tenuous cases (based on past experience).

In 2022 I plan to be even more active, in particular, by giving talks and attending lots of meetups (London based). If your company is looking for somebody to give an in-person lunchtime talk, feel free to contact me about possible topics (I’m always after feedback on my analysis of existing data, and will take a 10-second appeal for more data).

Software data is not commonly available because most people don’t collect data, and when data is collected, no thought is given to hanging onto it. At the moment, I don’t think it is possible to incentivize people to collect data (i.e., no saleable benefit to offset the cost of collecting it), but once collected the cost of hanging onto data is peanuts. So as well as asking for data, I also plan to sell the idea of hanging onto any data that is collected.

Fishing tips for software data welcome.

Christmas books for 2021

Derek Jones from The Shape of Code

This year, my list of Christmas books is very late because there is only one entry (first published in 1950), and I was not sure whether a 2021 Christmas book post was worthwhile.

The book is “Planning in Practice: Essays in Aircraft planning in war-time” by Ely Devons. A very readable, practical discussion, with data, of the issues involved in large-scale planning; the discussion is timeless. Check out second-hand book sites for low-cost editions.

Why isn’t my list longer?

Part of the reason is me. I have not been motivated to find new topics to explore via books rather than blog posts. Things are starting to change, and perhaps the list will be longer in 2022.

Another reason is the changing nature of book publishing. There is rarely much money to be made from the sale of non-fiction books, and the desire to write down their thoughts and ideas seems to be the force that drives people to write a book. Sites like Substack appear to be doing a good job of diverting those with a desire to write something (though perhaps some authors will still feel the need to create a book-length tome).

Why does an author need a publisher? The nitty-gritty technical details of putting together a book to self-publish are slowly being simplified by automation, e.g., document formatting and proofreading. It’s a win-win situation to make newly written books freely available, at least if they are any good. The author reaches the largest readership (which helps maximize the impact of their ideas), and readers get a free electronic book. Authors of not very good books want to limit the number of people who find this out for themselves, and so charge money for the electronic copy.

Another reason for the small number of good new, non-introductory books is having something new to say. Scientific revolutions, or even minor resets, are rare (i.e., measured in multi-decades). Once several good books are available, and nothing much new has happened, why write a new book on the subject?

The market for introductory books is much larger than that for books covering advanced material. While publishers obviously want to target the largest market, these are not the kind of books I tend to read.

Finally On A Clockwork Contagion – student

student from thus spake a.k.

Over the course of the year my fellow students and I have spent our free time building mathematical models of the spread of disease, initially assuming that upon contracting the infection a person would immediately and forever be infectious, then adding periods of incubation and recovery before finally introducing the concept of location whereby the proximate are significantly more likely to interact than the distant and examining the consequences for a population distributed between several disparate villages.
Whilst it is most certainly the case that this was more reasonable than assuming entirely random encounters, it failed to take into account the fact that folk should have a much greater proclivity to meet with their friends, family and colleagues than with their neighbours, and it is upon this deficiency that we have concentrated our most recent efforts.

Visual Lint and log4j (TL;DR: we don’t use it)

Products, the Universe and Everything from Products, the Universe and Everything

A good question from a customer given a bunch of headlines about security holes in the log4j logging library:

Triggered by the recent log4j vulnerability our organisation is asking all our software vendors if their software is affected by it - and if so by when a patch will be provided. May I ask for your confirmation that Visual Lint is not affected by this exploit?

I suppose that Visual Lint is Java free and thus has no problem with it. Thanks a lot for your answer!

Hopefully our answer will prove reassuring:

Visual Lint is written almost entirely in native C++ (more specifically, C++14). There is only one Java project in the entire codebase - the project which implements the Eclipse plugin (to our knowledge, Eclipse plugins can only be implemented in Java).

However, that project is just a thin Java wrapper around a native DLL - and it doesn't use log4j at all.

So, you're correct. Visual Lint (and indeed all our products and infrastructure) is 100% log4j free.

So your organisation can rest easy in this case.

Top agile books which aren’t about agile or software

Allan Kelly from Allan Kelly

Christmas is almost here and the end of the year is nigh. That means it is time for newspapers and journals to start publishing “Top books of the year” lists and “Christmas recommendations.” So, prompted by a recent thread on LinkedIn, I thought I’d offer up my own book list: top books for agile folk from outside of agile (and software).

That is: books which are not explicitly about agile (or software development) but which contain a valuable message, and possibly techniques for those wanting to expand their knowledge of, well, agile.

Most of the books I’m about to list address the philosophical, or mindset, underpinnings of agile: how to think in an agile way, rather than “how to do agile.” That might disappoint, but think about it: how could a book from outside agile tell you how to do agile?

Well, actually, there are three which do.

First, The Goal: written in the style of a novel, this book explores the theory of constraints and elementary queuing theory without mentioning either by name. Since it is written as a novel – with characters and back story! – it is an easy read. But please, don’t judge it as a novel; judge it for the message inside.

Next up is The 7 Habits of Highly Effective People: I blogged a few months ago about how these habits could also underpin a team working style. Whether you read this for yourself or with an eye on your team, this book does contain actionable advice – and some ideas on how to think. It can be a bit of a cringe in places, and I’m not sure I agree with all the ideas, but it is still a worthwhile read.

Finally in this section is a book at the opposite end of readability: Factory Physics.

Make no mistake, this is a text book, so it sets out to teach and can be hard going in places – there are plenty of equations. But if it is a solid grounding in queuing theory, variability, lead times and the like you want, then this is the book to go to. In fact, it might be the only book.

That is it for hands-on books which will tell you how to do things on Monday morning. The remaining books are ones I consider “philosophy” – how to think. That’s my way of putting it; a more popular way of putting it is: mindset. These are books which have shaped my thinking, my mindset, and as such underpin my approach to all things agile – and more!

First up is The Fifth Discipline. This may be the founding text in the field of organizational learning; ultimately, all agile is about learning and applying that learning. The “learning view” underpinned my own first book and still fundamentally shapes my approach to working with individuals, teams and companies.

My next choice continues the organizational learning theme and is the source of perhaps the most famous quote from that field:

“We understand that the only competitive advantage the company of the future will have is its managers’ ability to learn faster than their competitors.”

The Living Company presents an alternative view of companies and organizations: rather than seeing them as rational profit maximising entities, this book encourages you to see companies as living organisms. As such, the organization’s true goal is to live and continue living. Trade, and even profit, is simply a means to an end. And like all successful living things, companies must learn and adapt; those that don’t will die.

Living Company is not alone in presenting an alternative narrative of how companies work. My penultimate book presents an alternative view of that most sacred of management practices: strategy.

The Rise and Fall of Strategic Planning is a major work that not only charts the historical rise of strategic planning, and its subsequent fall, but also presents an alternative view of what strategy is and how companies come by strategies: strategy is a consistent pattern of behaviour; strategy is part plan, but is also emergent and changes in response to what happens. Strategy claims to be forward looking but is equally retrospective, offering a story to link past events together.

Along the way Rise and Fall accidentally explains where the waterfall comes from (Robert McNamara), how planning is controlling and why even with almost unlimited resources (GE and Gosplan) the best attempts at planning have failed. If you harbour any ambition to implement Scrum at the corporate level make sure you read this book.

All the books above are over 10 years old, and had I written this list 10 years ago it would probably have been the same. But two years ago I read Grow the Pie, which advances the discussion of why companies exist and how to be a successful company – the secret is to have purpose and benefit society. Written before the pandemic, it is now more relevant than ever. Again, it isn’t an easy read, but it pays back in thoroughness of argument and reasoning. And if for no other reason, read Grow the Pie to really understand what constitutes value.


The post Top agile books which aren’t about agile or software appeared first on Allan Kelly.

Providing MapLibre-compatible style JSON from openstreetmap-tile-server

Andy Balaam from Andy Balaam's Blog

[Previous: Self-hosting maps on my laptop]

In the previous post I showed how to run the OSM tile server stack locally.

Now I’ve managed to connect a MapLibre GL JS front end to my local tile server and it’s showing maps!

(It’s running inside Element Web, the awesome Matrix messenger I am working on. NOTE: this is a very, very early prototype!)

In the previous post I ran a docker run command to launch the tile server.

This time, I had to create a file style.json:

{
  "version": 8,
  "sources": {
    "localsource": {
      "type": "raster",
      "tiles": [
        "http://127.0.0.1:8080/tile/{z}/{x}/{y}.png"
      ]
    }
  },
  "layers": [
    {
      "id": "locallayer",
      "source": "localsource",
      "type": "raster"
    }
  ]
}
and then I launched the tile server with that file available in the document root:

docker run \
    -p 8080:80 \
    -v $PWD/style.json:/var/www/html/style.json \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    -v openstreetmap-rendered-tiles:/var/lib/mod_tile \
    -e THREADS=24 \
    -e ALLOW_CORS=enabled \
    -d overv/openstreetmap-tile-server:1.3.10 \
    run

Now I can point my MapLibre GL JS at that style file with code something like this:

const map = new maplibregl.Map({
    container: my_container,
    style: "http://127.0.0.1:8080/style.json",
    center: [0, 0],
    zoom: 13,
});

Very excited to be drawing maps without any requests leaving my machine!

Self-hosting maps on my laptop

Andy Balaam from Andy Balaam's Blog

As part of my research for working on location sharing for Element Web, the Matrix-based instant messenger, I have been learning about tile servers.

I managed to get OSM tile server stack working on my laptop:

Here are a couple of useful pages I read during my research:

Today I managed to run a real tile server on my laptop, using data downloaded from OpenStreetMap in a way that I think complies with their terms of use.

To run these commands you will need Docker, and hopefully nothing much else.

Download the data

I downloaded the UK data like this:

wget ''

You can find downloads for other regions at

Import it

Then I ran an import, which converts the PBF data into tiles that can be shown in a UI:

docker volume create openstreetmap-data
docker volume create openstreetmap-rendered-tiles
docker run \
    -v $PWD/great-britain-latest.osm.pbf:/data.osm.pbf \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    -v openstreetmap-rendered-tiles:/var/lib/mod_tile \
    -e THREADS=24 \
    overv/openstreetmap-tile-server:1.3.10 \
    import

(Change “great-britain” to match what you downloaded.)

On my quite powerful laptop this took 39 minutes to run.

Run the tile server

Finally, I launched the server:

(Make sure you’ve done the “Import it” step first.)

docker run \
    -p 8080:80 \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    -v openstreetmap-rendered-tiles:/var/lib/mod_tile \
    -e THREADS=24 \
    -e ALLOW_CORS=enabled \
    -d overv/openstreetmap-tile-server:1.3.10 \
    run

This should launch the docker container in the background, which you can check with docker ps.

Test it

You can now grab a single file by going to, or interact with the map properly at

It was quite unresponsive at first, but once it had cached the tiles I was looking at, it was very smooth.

Parkinson’s law, striving to meet a deadline, or happenstance?

Derek Jones from The Shape of Code

How many minutes past the hour was it, when you stopped working on some software related task?

There are sixty minutes in an hour, so if stop times are random, the probability of finishing at any given minute is 1-in-60. In practice (based on the 200k+ time records in the CESAW dataset), the probability of stopping on the hour is 1-in-40, and of stopping on the half-hour 1-in-48.

Why are developers more likely to stop working on a task, on the hour or half-hour?

Is this a case of Parkinson’s law, or are developers striving to complete a task within a specified time, or are they stopping because a scheduled activity takes priority?

The plot below shows the number of times (y-axis) work on a task stopped on a given minute past the hour (x-axis), for 16 different software projects (project number in blue, with top 10 numbers in red, code+data):

Number of times work on a task stopped at a given minute of the hour, for 16 projects.

Some projects have peaks at 50, 55, or thereabouts. Perhaps people are stopping because they have a meeting to attend; a peak is visible when a project had lots of meetings, and no peak when it had few. Some projects have a peak at 28 or 29, which might be some kind of time synchronization issue.

Is it possible to analyze the distribution of end minutes to reasonably infer developer project behavior, e.g., Parkinson’s law, striving to finish by a given time, or just not watching the clock?

An expected distribution pattern for both Parkinson’s law and striving to complete is a sharp decline in work stops after a reference time, e.g., the end of an hour (this pattern is present in around ten of the projects plotted). A sharp increase in work stops prior to a reference time could also apply to both behaviors; stopping to switch to other work adds ‘noise’ to the distribution.

The CESAW data is organized by project, not developer, i.e., it does not list everything a developer did during the day. It is possible that end-of-hour work stops are driven by the need to synchronize with non-project activities, i.e., no Parkinson’s law or striving to complete.

In practice, some developers may sometimes follow Parkinson’s law, other times strive to complete, and other times not watch the clock. If models capable of separating out the behaviors were available, they might only be viable at the individual level.

Stop time equals start time plus work duration. If people choose a round number for the amount of work time, there is likely to be some correlation between start/end minutes past the hour. The plot below shows heat maps for start fraction of hour (y-axis) against end fraction of hour (x-axis) for four projects (code+data):

Heat map of start/end minute for tasks, for four projects.

Work durations that are exact multiples of an hour appear along the main diagonal, with zero/zero being the most common start/end pair (at 4% over all projects, with 0.02% expected for random start/end times). Other diagonal lines come from work durations that include a fraction of an hour, e.g., 30-minutes and 20-minutes.

For most work periods, the start minute occurs before the end minute, i.e., the work period does not cross an hour boundary.

What can be learned from this analysis?

The main takeaway is that there is a small bias for work start/end times to occur on the hour or half-hour, and other activities (e.g., meetings) cause ongoing work to be interrupted. Not exactly news.

More interesting ideas and suggestions welcome.

First understand the structure of a standard, then read it

Derek Jones from The Shape of Code

Extracting useful information from the text in an ISO programming language standard first requires an understanding of the stylized English in which it is written.

I regularly encounter people who cite wording from the C Standard to back up their interpretation of a particular language construct. My first thought when this happens is: Do I want to spend the time explaining how the standard ‘works’ to get to the point of dealing with the topic being discussed?

I am not aware of any “How to read the C Standard” guide, or such a guide for any language.

Explaining enough detail to have a sensible conversation about the text takes, maybe, 10-30 minutes. The interpretation of text in any standard can depend on the section in which it occurs, and particular phrases can be specified to have different interpretations in different contexts. For instance, in the C Standard, a source code construct that violates a “shall” requirement specified in a “Constraints” section is about as serious as things get (i.e., it’s a mandatory compile time error), while violating a “shall” requirement specified outside a “Constraints” section is undefined behavior (i.e., the compiler can do what it likes, including nothing).

New readers also get hung up on footnotes, which are a great source of confusion. Footnotes have no normative meaning; technically, they are classified as informative (their real use is providing the committee a means to include wording in the document to satisfy some interested party, without the risk of breaking the standard {because this text has no normative status}).

Sometimes a person familiar with the C++ Standard applies the interpretation rules they have learned to the C Standard. This can work in limited cases, but the fundamental differences between how the two documents are structured requires a reorientation of thinking. For instance, the C Standard specifies the behavior of source code (from which the behavior of implementations has to be inferred), while the C++ Standard specifies the behavior of implementations (from which the behavior of source code constructs has to be inferred), and the C++ Standard does not contain “Constraints” sections.

The general committee response, at least in WG14, to complaints that the language standard requires effort to understand, is that the standard is not intended as a tutorial. At least there is a prose document to read; there are forms of language specification that don’t provide this luxury. At a minimum, a language standard needs to be read two or three times before trying to answer detailed questions.

In general, once somebody has learned to interpret one ISO Standard, the know-how does not transfer to other ISO language standards, but they do gain an appreciation of the need for such an understanding.

In theory, know-how is supposed to be transferable; part 2 of the ISO directives, Principles and rules for the structure and drafting of ISO and IEC documents, “… stipulates certain rules to be applied in order to ensure that they are clear, precise and unambiguous.” There are also the technical reports: Guidelines for the Preparation of Conformity Clauses in Programming Language Standards (published in 1990), which I suspect few people have read, even within the programming language standards community, and Guidelines for the preparation of programming language standards (unchanged since the fourth edition in 2003).

In practice: the Fortran and Cobol standards were written before people had any idea which rules might be appropriate; I think the Pascal standard appeared just before the rules were formalised. Also, all three standards were created by national bodies (US, US, and UK respectively) as national standards, and then ‘promoted’ as-is to be ISO standards. Ada was a DoD standard that got ‘promoted’, and very much did its own thing with regard to stylized English.

The post-1990 language standards visually look as if they follow the ISO rules in force at the time they were first written (Directives, part 2 is on its ninth edition), i.e., the titles of clauses match the clause numbering scheme specified by ISO rules, e.g., clause 3 specifies “Terms and definitions”. However, readers are going to need some cultural background on the use of the language by its community to figure out the intent of the text. For instance, the 1990 revision of the Pascal Standard contains extensive use of “shall”, but it is not clear how this is to be interpreted; I used Pascal extensively for 10 years, but never studied its ISO standard, and reading it now with my C Standard expertise is a strange experience, e.g., familiar language “constraints” do not appear to be specified in the text, and the compiler does not appear to be required to flag anything.

Two of the pre-1990 language standards, Fortran and Cobol, were initially written in the 1960s, and read like they are from another age (probably because of the way they are laid out, and the fonts used). The differences are so obvious that any reader with prior experience is likely to understand that they are going to have to figure out the structure from scratch. The formatting of post-1990 Fortran Standards lacks the 1960s vibe.