Regression line fitted to noisy data? Ask to see confidence intervals

Derek Jones from The Shape of Code

A little knowledge can be a dangerous thing. For instance, knowing how to fit a regression line to a set of points, but not knowing how to figure out whether the fitted line makes any sense. Fitting a regression line is trivial with most modern data analysis packages; it’s difficult to find data that any of them fail to fit to a straight line (even randomly selected points usually contain enough bias in one direction to enable the fitting algorithm to converge).

Two techniques for checking the goodness-of-fit of a regression line are plotting confidence intervals and listing the p-value. The confidence interval approach is a great way to visualize the goodness-of-fit, with the added advantage of not needing any technical knowledge. The p-value approach is great for blinding people with science, and a necessary technicality when dealing with multidimensional data (unless you happen to have a Tardis).
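The analysis behind the post is in the code+data linked below; as a rough, stand-alone illustration (using made-up noisy data, not the extracted Figure 6 points), a minimal Python sketch of fitting a line, reading off the slope's p-value, and computing a 95% confidence band might look like this:

import numpy as np
import statsmodels.api as sm

# Made-up noisy data standing in for the extracted Figure 6 points.
rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 0.05 * x + rng.normal(scale=1.0, size=20)

X = sm.add_constant(x)            # intercept + slope
fit = sm.OLS(y, X).fit()
print(fit.pvalues[1])             # p-value for the fitted slope

# 95% confidence band for the fitted line at each x value.
band = fit.get_prediction(X).summary_frame(alpha=0.05)
print(band[["mean", "mean_ci_lower", "mean_ci_upper"]])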

In 2016, the Nationwide Mutual Insurance Company won the IEEE Computer Society/Software Engineering Institute Watts S. Humphrey Software Process Achievement (SPA) Award, and there is a technical report, which reads like an infomercial, on the benefits Nationwide achieved from using SEI’s software improvement process. Thanks to Edward Weller for the link.

Figure 6 of the infomercial technical report caught my eye. The fitted regression line shows delivered productivity going up over time, but the data looks very noisy. How good a fit is that regression line?

Thanks to WebPlotDigitizer, I quickly extracted the data (I’m a regular user, and WebPlotDigitizer just keeps getting better).

Below is the data plotted to look like Figure 6, with the fitted regression line in pink (code+data). The original did not include tick marks on the axis. For the x-axis I assumed each point was at a fixed 2-month interval (matching the axis labels), and for the y-axis I picked the point just below the zero to measure length (so my measurements may be off by a constant multiplier close to one; multiplying values by a constant will not have any influence on calculating goodness-of-fit).

Nationwide: delivery productivity over time; extracted data and fitted regression line.

The p-value for the fitted line is 0.15; gee-wiz, you say. Plotting with confidence intervals (in red; the usual 95%) makes the situation clear:

Nationwide: delivery productivity over time; extracted data and fitted regression line with 95% confidence intervals.

Ok, so the fitted model is fairly meaningless from a technical perspective; the line might actually go down, rather than up (there is too much noise in the data to tell). Think of the actual line likely appearing somewhere in the curved red tube.

Do Nationwide, IEEE or SEI care? The IEEE need a company to award the prize to, SEI want to promote their services, and Nationwide want to convince the rest of the world that their IT services are getting better.

Is there a company out there who feels hard done-by, because they did not receive the award? Perhaps there is, but are their numbers any better than Nationwide’s?

How much influence did the numbers in Figure 6 have on the award decision? Perhaps not a lot; the other plots look like they would tell a similar tale of wide confidence intervals on any fitted lines (readers might like to try their hand at drawing confidence intervals for Figure 9). Perhaps Nationwide was the only company considered.

Who are the losers here? Other companies who decide to spend lots of money adopting the SEI software process? If evidence was available, perhaps something concrete could be figured out.

Convert a video to a GIF with reasonable colours

Andy Balaam from Andy Balaam's Blog

Here’s a little script I wrote to avoid copy-pasting the ffmpeg command from superuser every time I needed it.

It converts a video to a GIF file by pre-calculating a good palette, then using that palette.

Usage:

./to_gif input.mp4 output.gif

The file to_gif (which should be executable):

#!/bin/bash

set -e
set -u

# Credit: https://superuser.com/questions/556029/how-do-i-convert-a-video-to-gif-using-ffmpeg-with-reasonable-quality

INPUT="$1"
OUTPUT="$2"

PALETTE=$(tempfile --suffix=.png)

ffmpeg -y -i "${INPUT}" -vf palettegen "${PALETTE}"
ffmpeg -y -i "${INPUT}" -i "${PALETTE}" \
    -filter_complex "fps=15,paletteuse" "${OUTPUT}"

rm -f "${PALETTE}"

Note: you might want to modify the number after fps= to adjust how fast the video plays.

A Not So Minor Hardware Revision

Chris Oldwood from The OldWood Thing

[These events took place two decades ago, so consider it food for thought rather than a modern tale of misfortune. Naturally some details are hazy and possibly misremembered but the basic premise is still sound.]

Back in the late ‘90s I was working on a Travelling Salesman style problem (TSP) for a large oil company which had performance improvements as a key element. Essentially we were taking a new rewrite of their existing scheduling product and trying to solve some huge performance problems with it, such as taking many minutes to load, let alone perform any scheduling computations.

We had made a number of serious improvements, such as reducing the load time from minutes to mere seconds, and, given our successes so far, were tasked with continuing to implement the rest of the features that were needed to make it usable in practice. One feature was to import the set of orders from the various customer sites which were scheduled by the underlying TSP engine.

The Catalyst

The importing of orders required reading some reasonably large text files, parsing them (which was implemented using the classic Lex & YACC toolset) and pushing them into the database, whereupon the engine would find them and work out a schedule for their delivery.

Initially this importer was packaged as an ActiveX control, written in C and C++, and hosted inside the PowerBuilder (PB) based GUI. Working on the engine side (written entirely in C) we had created a number of native test harnesses (in C++/MFC) to avoid needing to use the PB front-end unless absolutely necessary due to its generally poor performance. Up until this point the importer appeared to work fine on our dev workstations, but when it was passed to the QA a performance problem started showing up.

The entire team (developers and tester) had all been given identical Compaq machines. Given that we needed to run Oracle locally, as well as use it for development and testing, we had a whopping 256 MB of RAM to play with along with a couple of cores. The workstations were running Windows NT 4.0 and we were using Visual C++ 2 to develop with. As far as we could see they looked and behaved identically too.

The Problem

The initial bug report from the QA was that after importing a fresh set of orders the scheduling engine run took orders of magnitude longer (no pun intended) to find a solution. However, after restarting the product the engine run took the normal amount of time. Hence the conclusion was that the importer ActiveX control, being in-process with the engine, was somehow causing the slowdown. (This was in the days before the low-fragmentation heap in Windows and heap fragmentation was known to be a problem for our kind of application.)

Weirdly, though, the developer of the importer could not reproduce this issue on their machine, or on another developer’s machine that they tried, but it was pretty consistently reproducible on the QA’s machine. As a workaround the logic was hoisted into a separate command-line based tool instead, which was then passed along to the QA to see if matters improved, but it didn’t. Restarting the product was the only way to get the engine to perform well after importing new orders, and naturally this wasn’t a flyer with the client as this would happen in real life throughout the day.

In the meantime I had started to read up on Windows heaps and found some info that allowed me to write some code which could help analyse the state of the heaps and see if fragmentation was likely to be an issue anyway, even with the importer running out-of-process now. This didn’t turn up anything useful at the time but the knowledge did come in handy some years later.

Tests on various other machines were now beginning to show that the problem was most likely with the QA’s machine or configuration rather than with the product itself. After checking some basic Windows settings it was posited that it might be a hardware problem, such as a faulty RAM chip. The Compaq machines we had been given weren’t cheap and weren’t using cheap RAM chips either; the POST was doing a memory check too, but it was worth checking out further. Despite swapping over the RAM (and possibly CPUs) with another machine the problem still persisted on the QA’s machine.

Whilst putting the machines back the way they were I somehow noticed that the motherboard revision was slightly different. We double-checked the version numbers and the QA’s machine was one minor revision lower. We checked a few other machines we knew worked and, lo and behold, they were all on the newer revision too.

Fortunately, inside the case of one machine was the manual for the motherboard, which gave a rundown of the different revisions. According to the manual, the slightly lower revision motherboard only supported caching of the first 64 MB of RAM! Due to the way the application’s memory footprint changed during the order import and subsequent cache reloading it was entirely plausible that the new data could reside outside the cached region [1].

This was enough evidence to get the QA’s machine replaced and the problem never surfaced again.

Retrospective

Two decades of experience later and I find the way this issue was handled rather peculiar by today’s standards.

Mostly I find the amount of time we devoted to identifying this problem inappropriate. Granted, this problem was weird, and one of the most enjoyable things about software development is dealing with “interesting” puzzles. I for one was no doubt guilty of wanting to solve the mystery at any cost. We should have chalked the issue up to something environmental much sooner and moved on. Perhaps if a replacement machine had shown similar issues later, there would have been cause to investigate further [2].

I, along with most of the other devs, only had a handful of years of experience which probably meant we were young enough not to be bored by such issues, but also were likely too immature to escalate the problem and get a “grown-up” to make a more rational decision. While I suspect we had experienced some hardware failures in our time we hadn’t experienced enough weird ones (i.e. non-terminal) to suspect a hardware issue sooner.

Given the focus on performance and the fact that the project was acquired from a competing consultancy after they appeared to “drop the ball” I guess there were some political aspects that I would have been entirely unaware of. At the time I was solely interested in finding the cause [3] whereas now I might be far more aware of any ongoing “costs” in this kind of investigation and would no doubt have more clout to short-circuit it even if that means we never get to the bottom of it.

As more of the infrastructure we deal with moves into the cloud there is less need, or even ability, to deal with problems in this way. That’s great from a business point of view but I’m left wondering if that takes just a little bit more fun out of the job sometimes.

 

[1] This suggests to me that the OS was dishing out physical pages from a free-list where address ordering was somehow involved. I have no idea how realistic that is or was at the time.

[2] It’s entirely possible that I’ve forgotten some details here and maybe more than one machine was acting weirdly but we focused on the QA’s machine for some reason.

[3] I’m going to avoid using the term “root cause” because we know from How Complex Systems Fail that we still haven’t gotten to the bottom of it. For example, where does the responsibility for verifying the hardware was identical lie, etc.?

Polished human cognitive characteristics chapter

Derek Jones from The Shape of Code

It has been just over two years since I released the first draft of the Human cognitive characteristics chapter of my evidence-based software engineering book. As new material was discovered, it got added where it seemed to belong (at the time); no effort was invested in maintaining any degree of coherence.

The plan was to find enough material to paint a coherent picture of the impact of human cognitive characteristics on software engineering. In practice, finishing the book in a reasonable time-frame requires that I stop looking for new material (assuming it exists), and go with what is currently available. There are a few datasets that have been promised, and having these would help fill some holes in the later sections.

The material has been reorganized into what is essentially a pass over what I think are the major issues, discussed via studies for which I have data (the rule of requiring data for a topic to be discussed gets bent out of shape the most in this chapter), presented in an almost bullet-point-like style. At least there are plenty of figures for people to look at, and they are in color.

I think the material will convince readers that human cognition is a crucial topic in software development; download the draft pdf.

Model building by cognitive psychologists is starting to become popular, with probabilistic languages, such as JAGS and Stan, becoming widely used. I was hoping to build models like this for software engineering tasks, but it would have taken too much time, and will have to wait until the book is done.

As always, if you know of any interesting software engineering data, please let me know.

Next, the cognitive capitalism chapter.

Feature Branches and Package Dependencies

Chris Oldwood from The OldWood Thing

For most of my programming career I have worked directly on the main integration branch (aka trunk / master) for day-to-day development. Release branches have featured occasionally at various clients, mostly to compensate for bureaucracy, and I once had the misfortune to work with project-level branches (see “The Cost of Long-Lived Feature Branches”) which was a merge nightmare. More recently I got to work in a team that used much shorter lived feature branches [1] and I got a reminder of the kinds of problems even they cause.

When the branch is confined to a single repository, and that repo is for the delivered product, then the only people who are affected by the changes are those working on the branch. (We’re not talking about the Cost of Delay here for the customer, this is about intra-team delays.) However, once we start making changes outside the main repository, such as to our library repos (nay, packages), things get more complicated.

Changing Packages

Although in theory we could create a similar branch in the library repo, the point of integration between the two codebases (product and library) is usually at a binary level, i.e. the package repository. The package manager almost certainly doesn’t deal in branches per se, only published versions of a package [2].

Where things get even more complicated is when you have a few packages that all make use of some 3rd party library, e.g. a message queuing product, and you discover you need to upgrade that as part of your feature work too. If you go via the normal channels you’ll end up upgrading the dependency in the packages, publish them, and then pull those upgraded packages into your feature branch and carry on where you left off [3].

Dependent Changes

However, anyone working on another feature branch or even the trunk can no longer make an orthogonal change to those packages because pulling them in would likely create an impedance mismatch, unless they also duplicate the integration work done on the feature branch or cherry pick it. Ideally that would be a trivial merge but the very nature of feature branches is to work in isolation and therefore changes tend to get intertwined because the focus is on the feature itself, not the integration steps in-between.

Essentially until the new package versions are fully integrated any changes to them will be delayed, which if you’ve already started work means a blocker goes up on the board. In this instance, being used to trunk based development, I didn’t want to wait so instead reverted the upgrade to the 3rd party library, published it (with my changes), and then integrated it into the main product directly on the trunk.

Unfortunately this creates an extra burden for those on the feature branch as they will need to re-upgrade the package again before integrating their changes back into the trunk. Such is the price for working in isolation.

Small is Beautiful

One of the benefits of working in an Agile way using trunk based development is that it teaches you to focus on really small, but nonetheless valuable, changes. A user story may be split across a number of commits and so we have to think about the way we’ll deliver the changes. Feature toggles help us to hide our work in progress but, as just described, occasionally we may need to make a more sweeping change.

In these scenarios we should be able to push the feature work temporarily onto the “stack”, make the package changes, and then pop the feature work and carry on. With trunk based development the work is woven in just like any other, but when using a feature branch you need to switch back to the trunk, perform the upgrade, then merge trunk into the feature branch before carrying on.

I believe that if you are able to make this work smoothly using a feature branch then you are almost certainly capable of making the changes directly on the trunk in the first place. Planning doesn’t stop at the release and sprint level, we also need to plan how we evolve the codebase at story and task level too to minimise disruption to others whilst also making progress on our own work.

 

[1] They only lasted a few days and were trying to move away from that where possible.

[2] In theory this is where semantic versioning comes into play as the breaking change demands a new major version. My change would then have been made for both major versions. I say “in theory” because in my experience this is not an approach commonly taken by enterprise development teams for internal libraries – the path of least resistance usually wins.

[3] Alternatively you may be able to hide the library’s dependency, assuming it’s backwardly compatible, by binding to it directly such as via a static library or merging assemblies. However, as with [2], it’s not the usual approach.

A Heap Of Stuff – a.k.

a.k. from thus spake a.k.

A little over a year ago we added a bunch of basic computer science algorithms to the ak library, taking inspiration from the C++ standard library. Now, JavaScript isn't just short of algorithms in its standard library, it's also lacking all but a few data structures. In fact, from the programmer's perspective, it only has hash maps which use a function to map strings to non-negative integers that are then used as indices into an array of sets of key-value pairs. Technically speaking, even arrays are hash maps of the string representation of their indices, although in practice the interpreter will use more efficient data structures whenever it can.
As clever as its implementation might be, it's unlikely that it will always figure out exactly what we're trying to do and pick the most appropriate data structure and so it will occasionally be worth explicitly implementing a data structure so that we can be certain of its performance.
The first such data structure that we're going to need is a min-heap which will allow us to efficiently add elements in any order and remove them in ascending order, according to some comparison function.
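The full post goes on to build the heap in JavaScript for the ak library; the snippet below is not that implementation, just a minimal Python sketch of the semantics described above: elements pushed in any order come back out in ascending order, according to a caller-supplied comparison function.

class MinHeap:
    def __init__(self, compare=lambda a, b: a - b):
        self._items = []
        self._compare = compare   # negative result means a orders before b

    def push(self, value):
        # Append at the end, then sift up until the heap property holds.
        items = self._items
        items.append(value)
        i = len(items) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self._compare(items[i], items[parent]) >= 0:
                break
            items[i], items[parent] = items[parent], items[i]
            i = parent

    def pop(self):
        # Swap the root (smallest element) with the last element, remove it,
        # then sift down from the root to restore the heap property.
        items = self._items
        items[0], items[-1] = items[-1], items[0]
        smallest = items.pop()
        i = 0
        while True:
            left, right, child = 2 * i + 1, 2 * i + 2, i
            if left < len(items) and self._compare(items[left], items[child]) < 0:
                child = left
            if right < len(items) and self._compare(items[right], items[child]) < 0:
                child = right
            if child == i:
                break
            items[i], items[child] = items[child], items[i]
            i = child
        return smallest

heap = MinHeap()
for value in [5, 1, 4, 2, 3]:
    heap.push(value)
print([heap.pop() for _ in range(5)])   # prints [1, 2, 3, 4, 5]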

Modular vs. monolithic programs: a big performance difference

Derek Jones from The Shape of Code

For a long time now I have been telling people that no experiment has found a situation where the treatment (e.g., use of a technique or tool) produces a performance difference that is larger than the performance difference between the subjects.

The usual results are that differences between people are the source of the largest performance difference, successive runs are the next largest (i.e., people get better with practice), and the smallest performance difference occurs between using/not using the technique or tool.

This is rather disheartening news.

While rummaging through a pile of books I had not looked at in many years, I (re)discovered the paper “An empirical study of the effects of modularity on program modifiability” by Korson and Vaishnavi, in “Empirical Studies of Programmers” (the first one in the series). It’s based on Korson’s 1988 PhD thesis, with the same title.

There were four experiments, involving seven people from industry and nine students, each involving modifying a 900(ish)-line program in some way. There were two versions of each program; they differed in that one was written in a modular form, while the other was monolithic. Subjects were permuted between various combinations of program version/problem, but all problems were solved in the same order.

The performance data was published in the paper, so I fitted various regression models to it (code+data). There is enough information in the data to separate out the effects of modular/monolithic, kind of problem and subject differences. Because all subjects solved problems in the same order, it is not possible to extract the impact of learning on performance.
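The fitting for the post is in the linked code+data; as a rough stand-alone sketch of how those three effects can be separated, the snippet below fits a linear model with the modular/monolithic treatment, the problem and the subject all included as factors, using invented numbers rather than the Korson data:

import pandas as pd
import statsmodels.formula.api as smf

# Invented performance times (minutes), standing in for the Korson data:
# each subject solves two problems, one on each program version.
df = pd.DataFrame({
    "time":    [30, 70, 45, 90, 25, 65, 50, 85],
    "version": ["modular", "monolithic"] * 4,
    "problem": ["p1", "p2", "p2", "p1", "p1", "p2", "p2", "p1"],
    "subject": ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s4"],
})

# Treating problem and subject as factors alongside the treatment lets the
# model attribute variation to each source separately.
fit = smf.ols("time ~ C(version) + C(problem) + C(subject)", data=df).fit()
print(fit.params)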

The modular/monolithic performance difference was around twice as large as the difference between subjects (removing two very poorly performing subjects reduces the difference to 1.5). I’m going to have to change my slides.

Would the performance difference have been so large if all the subjects had been experienced developers? There is not a lot of well written modular code out there, and so experienced developers get lots of practice with spaghetti code. But, even if the performance difference is of the same order as the difference between developers, that is still a very worthwhile difference.

Now there are lots of ways to write a program in modular form, and we don’t know what kind of job Korson did in creating, or locating, his modular programs.

There are also lots of ways of writing a monolithic program, some of them might be easy to modify, others a tangled mess. Were these programs intentionally written as spaghetti code, or was some effort put into making them easy to modify?

The good news from the Korson study is that there appears to be a technique that delivers larger performance improvements than the difference between people (replication needed). We can quibble over how modular a modular program needs to be, and how spaghetti-like a monolithic program has to be.

Gradle: what is a task, and how can I make a task depend on another task?

Andy Balaam from Andy Balaam's Blog

In an insane world, Gradle sometimes seems like the sanest choice for building a Java or Kotlin project. But what on Earth does all the stuff inside build.gradle actually mean? And when does my code run? And how do you make a task? And how do you persuade a task to depend on another task?

Setting up

To use Gradle, get hold of any version of it for long enough to create a local gradlew file, and then use that.
$ mkdir gradle-experiments
$ cd gradle-experiments
$ sudo apt install gradle  # Briefly install the system version of gradle
...
$ gradle wrapper --gradle-version=5.2.1
$ sudo apt remove gradle   # Optional - uninstalls the system version
$ ./gradlew tasks
... If all is good, this should ...
... print a list of available tasks. ...
It is normal for gradlew and the whole gradle directory it creates to be checked into source control. This means everyone who fetches the code from source control will have a predictable Gradle version.

What is build.gradle?

build.gradle is a Groovy program that Gradle runs within a context that it has set up for you. That context means that you are actually calling methods of a Project object, and modifying its properties. The fact that Groovy lets you miss out a lot of punctuation makes that harder to see, but it's true. The first thing to get your head around is that Gradle actually runs your code immediately, so if your build.gradle looks like this (and only this):
println("Hello")
when you run Gradle your code runs:
$ ./gradlew -q 
Hello
... more guff ...
So that code runs even if you don't ask Gradle to run a task containing that code. It runs at "configuration time" - i.e. when Gradle is understanding your build.gradle file. Actually, "understanding" it means executing it. Remember when I said this code runs in the context of a Project? What that means is that if you have something like this in your build.gradle:
repositories {
    jcenter()
}
what it really means is something like this:
project.repositories(
    {
        it.jcenter()
    }
)
You are calling the repositories method on the project object. The argument to the repositories method is a Groovy closure, which is a blob of code that will get run later. I've used the magic it name above to demonstrate that jcenter is just a method being called on the object that is the context for the closure when it is run. When does it run? Let's find out:
println("before")
project.repositories( {
    println("within")
    jcenter()
})
println("after")
$ ./gradlew -q
before
within
after
... more guff ...
This surprised me - it means the closure you pass in to repositories is actually run immediately, as part of running repositories, before execution gets to the line after that call. As we'll see later, some closures you create do not run immediately like this one. Once you know that build.gradle is actually modifying a Project object, you have a starting point for understanding the Gradle reference documentation.

How do you make a task?

You probably shouldn't do it very often, but it was instructive for me to understand how to make my own custom task. Here's an example:
tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}
This creates a new task by calling the register method on the tasks property of the Project object. Register takes two arguments: a name for the task ("mytask" here), and a closure with some code in it to run when we decide we need this task. That closure gets run in a context that can't see the Project object, but instead can see a Task object which it is helping to make. That Task object has a doLast method that we call, passing it a closure that will be run when the task is actually executed (not immediately). If we remove some of the syntactic sugar the above build.gradle looks like this:
tasks.register(
    "mytask",
    {
        it.doLast {
            println("running mytask")
        }
    }
)
Above we can see that register really does take two arguments as I said above - the first version uses a Groovy feature where, if you miss out the last argument and write a closure immediately afterwards, the closure is passed as the last argument. Confusing, eh? Again, notice that doLast is a method on the Task object that is implicitly available when the closure is run. So we have created a task that we can run:
 ./gradlew -q mytask
running mytask

How do you make a task depend on another task?

If I want to run my code formatting before my compile (for example) I sometimes need to modify a task to make it depend on another one. This can be done for tasks you create or for pre-existing ones. Here's an example:
plugins {
    id "java"
}
tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}
compileJava {
    dependsOn tasks.named("mytask")
}
So, calling the plugins method on the Project at the top, with a closure that ran the id method on something, modified the Project so that it had a new method called compileJava, which we called at the bottom, passing it a closure to run. That closure ran in the context of a Task object (similar to when we created a task, but now allowing us to modify a pre-existing one). We called the dependsOn method of the Task object, passing in another Task object which we had got by calling the named method on the tasks object. [Side note: the register method actually returns a Task object that we could have passed to dependsOn without looking it up again using named, but Groovy doesn't provide a very convenient way of holding on to that reference, so we looked it up again instead. The Kotlin example below shows that this is quite simple in Kotlin.]

How do I do all this in Kotlin?

Because one DSL that hides what's really going on wasn't enough for you, Gradle now provides a second DSL that hides what's going on in subtly different ways, which is a program written in Kotlin instead of Groovy. This is marginally better, because Kotlin doesn't let you do quite so many stupid tricks as Groovy does. Below are all our examples in Kotlin. You get started exactly the same way, by following "Setting up" above. Remember to name your build file build.gradle.kts.

Say hello in Gradle Kotlin

println("Hello")
This is identical to the Groovy version.

Use jcenter repo in Gradle Kotlin

repositories {
    jcenter()
}
This is identical to the Groovy version, and with the same meaning: repositories is a method on the implicitly-available Project object. The "unsugared" version looks like this in Kotlin:
this.repositories(
    {
        this.jcenter()
    }
)
[Note that the word this is used to access the implicit context. The word it has a different meaning in Kotlin from in Groovy. In Groovy it means the implicit context, but in Kotlin it means the first argument. We didn't pass any arguments to jcenter when we called it, so we can't use it, but we were being run in a context, which we can refer to using this. Simple, huh?]

Execution order in Gradle Kotlin

With this build.gradle.kts:
println("before")
project.repositories( {
    println("within")
    jcenter()
})
println("after")
We see this behaviour:
$ ./gradlew -q
before
within
after
which is all identical to the Groovy version.

Making a new task in Gradle Kotlin

tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}
This is identical to the Groovy version, but slightly different when unsugared:
tasks.register(
    "mytask",
    {
        this.doLast(
            {
                println("running mytask")
            }
        )
    }
)
Notice that Kotlin lets you do the same trick as Groovy: providing an extra argument to a function that is a closure by writing it immediately after it looks like you've finished calling it. It's good for people who dislike closing brackets hanging around longer than they're welcome. As someone who likes Lisp, I'm OK with it, but what do I know?

One task depending on another in Gradle Kotlin

plugins {
    java
}
val mytask = tasks.register("mytask") {
    doLast {
        println("running mytask")
    }
}
tasks.compileJava {
    dependsOn(mytask)
}
This differs slightly from the Groovy version, even though the meaning is the same: we start off in the context of a Project object that we call methods on. The code to make one task depend on another gets hold of the Task object called compileJava from inside the tasks property of the Project, and calls it (because it's a callable object). We pass in a closure that runs in the context of this Task object, calling its dependsOn method, and passing in a reference to the mytask object, which is a Task and was created in the code above.

Corrections and clarifications welcome

The above is what I have worked out by experimentation and trying to read the Gradle documentation. Please add comments that clear up confusions and correct mistakes.