Fitting discontinuous data from disparate sources

Derek Jones from The Shape of Code

Sorting and searching are probably the most widely performed operations in computing; they are extensively covered in volume 3 of The Art of Computer Programming. Algorithm performance is influenced by the characteristics of the processor on which it runs, and the size of the processor cache(s) has a significant impact on performance.

A study by Khuong and Morin investigated the performance of various search algorithms on 46 different processors. Khuong kindly sent me a copy of the raw data; the study webpage includes lots of plots.

The performance comparison involved 46 processors (mostly Intel x86 compatible cpus, plus a few ARM cpus) times 3 array datatypes times 81 array sizes times 28 search algorithms. First, an array of N unsigned integers (32, 64, or 128 bits wide) was initialized with known values. The benchmark then iterated 2 million times, on each iteration randomly selecting one of the known values and searching for it using the algorithm under test; the time taken for the 2 million iterations was recorded. This was repeated for the 81 values of N, up to 63,095,734, on each of the 46 processors.

The plot below shows the results of running each benchmarked algorithm (colored lines) on an Intel Atom D2700 @ 2.13GHz, for 32-bit array elements; the kink in the lines occurs roughly at the point where the size of the array exceeds the cache size (all code+data):

Benchmark runtime at various array sizes, for each algorithm using a 32-bit datatype.

What is the most effective way of analyzing the measurements to produce consistent results?

One approach is to build two regression models, one for the measurements before the cache ‘kink’ and one for the measurements after it. By adding a dummy variable at the kink-point, it is possible to merge these two models into one. The problem with this approach is that the kink-point has to be chosen in advance. The plot shows that the performance kink occurs before the array size exceeds the cache size; other variables are using up some of the cache storage.
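
For what it’s worth, a minimal sketch of this merged, piecewise-linear model in R; the data frame and column names (bench, log_size, log_time), the log scale, and the hand-picked kink value are illustrative assumptions on my part, not anything taken from the benchmark data:

kink <- 18   # assumed kink: log2 of the array size at which the cache boundary roughly falls
# A single model covering both regimes: one slope below the kink, plus a change of slope above it.
pw_fit <- lm(log_time ~ log_size + I(pmax(log_size - kink, 0)), data = bench)
summary(pw_fit)   # coefficients: intercept, slope below the kink, change in slope above it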

This approach requires fitting 46*3=138 models (I think the algorithm used can be integrated into the model).

If data from lots of processors is to be fitted, or the three datatypes handled, an automatic way is needed of picking where the first regression model should end and where the second should start.

Regression discontinuity design looks like it might be applicable, treating the point where the array size exceeds the cache size as the discontinuity. Traditionally discontinuity designs assume a sharp discontinuity, which is not the case for these benchmarks (R’s rdd package worked for one algorithm, one datatype, running on one processor); the more recent continuity-based approach supports a transition interval before/after the discontinuity. The R package rdrobust supports a continuity-based approach, but seems to expect the discontinuity to be a change of intercept, rather than a change of slope (or rather, I could not figure out how to get it to model just a change of slope; suggestions welcome).
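
As a rough illustration (and not a claim that this is the right model), a sharp-discontinuity fit with rdrobust might look like the sketch below, using the same illustrative data frame and an assumed cutoff; rdrobust estimates a local change in level at the cutoff, which is why a pure change of slope is awkward to express with it:

library(rdrobust)

cutoff <- 18   # assumed cache boundary on the log2 array-size scale (illustrative)
rd_fit <- rdrobust(y = bench$log_time, x = bench$log_size, c = cutoff)
summary(rd_fit)   # reports the estimated jump (change of intercept) at the cutoff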

Another approach is to use segmented regression, i.e., fitting one or more distinct lines. The R package segmented supports fitting this kind of model, and estimates what it calls the breakpoint (the user has to provide a first estimate).
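
A sketch of the segmented fit, again using the illustrative data frame from above; psi is the user-supplied first estimate of the breakpoint, which the package then refines:

library(segmented)

base_fit <- lm(log_time ~ log_size, data = bench)             # ordinary straight-line fit
seg_fit <- segmented(base_fit, seg.Z = ~log_size, psi = 18)   # psi: initial breakpoint guess
summary(seg_fit)   # coefficients, including the estimated breakpoint
slope(seg_fit)     # the fitted slope on each side of the breakpoint
confint(seg_fit)   # confidence interval for the breakpoint itself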

I managed to fit a segmented model that included all the algorithms for 32-bit data, running on one processor (code+data). Looking at the fitted model I am not hopeful that adding data from more than one processor would produce something that contained useful information. I suspect that there are enough irregular behaviors in the benchmark runs to throw off fitting quality.

I’m always asking for more data, and now I have more data than I know how to analyze in a way that does not require me to build 100+ models :-(

Suggestions welcome.

Visual Studio crashes when docking windows (TL;DR: it wasn’t us)

Products, the Universe and Everything from Products, the Universe and Everything

We've all done it. You prepare a new build, install it, start testing before releasing it and then...it crashes. The immediate thought is always "What have we done...?".

Exactly that happened to us recently when testing a Visual Lint build - all we did was dock a window and then Visual Studio crashed and restarted.

Oops.

So, what had we done? As it turns out, nothing - the crash was actually caused by a Windows 10 update, and happened irrespective of whether any third party extensions were installed. KB4598301 is reportedly the cause, though others (e.g. KB4598299) also seem to cause the same effects.

The following community post has more details of the bug and possible workarounds:

https://developercommunity.visualstudio.com/content/problem/1323017/unexpected-vs-crash-when-docking-or-splitting-wind.html

We have no doubt that Microsoft will fix this very soon (in Visual Studio 2017 and 2019, at least) so for now we're not too worried, but in the meantime - or if you're using an older version of Visual Studio - the workarounds are to uninstall the update, or edit devenv.exe.config.

If you decide to do the latter, check to see if the configuration/runtime/AppContextSwitchOverrides element exists. If it does, append the following to its value:

;Switch.System.Windows.Interop.MouseInput.OptOutOfMoveToChromedWindowFix=true;
Switch.System.Windows.Interop.MouseInput.DoNotOptOutOfMoveToChromedWindowFix=true

If, however, the element does not exist (it may not for older versions of Visual Studio), you can just add it:

<AppContextSwitchOverrides
value="Switch.System.Windows.Interop.MouseInput.OptOutOfMoveToChromedWindowFix=true;
Switch.System.Windows.Interop.MouseInput.DoNotOptOutOfMoveToChromedWindowFix=true" />

Once you restart Visual Studio you should then be good to go.

Research software code is likely to remain a tangled mess

Derek Jones from The Shape of Code

Research software (i.e., software written to support research in engineering or the sciences) is usually a tangled mess of spaghetti code that only the author knows how to use. Very occasionally I encounter well organized research software that can be used without having an email conversation with the author (who has invariably spent years iterating through many versions).

Spaghetti code is not unique to academia; there is plenty to be found in industry.

Structural differences between academia and industry make it likely that research software will always be a tangled mess, only usable by the person who wrote it. These structural differences include:

  • writing software is a low status academic activity; it is a low status activity in some companies, but those involved don’t commonly have other higher status tasks available to work on. Why would a researcher want to invest in becoming proficient in a low status activity? Why would the principal investigator spend lots of their grant money hiring a proficient developer to work on a low status activity?

    I think the lack of status is rooted in researchers’ lack of appreciation of the effort and skill needed to become a proficient developer of software. Software differs from that other essential tool, mathematics, in that most researchers have spent many years studying mathematics and understand that effort/skill is needed to be able to use it.

    Academic performance is often measured using citations, and there is a growing move towards citing software,

  • many of those writing software know very little about how to do it, and don’t have daily contact with people who do. Recent graduates are the pool from which many new researchers are drawn. People in industry are intimately familiar with the software development skills of recent graduates, i.e., the majority are essentially beginners; most developers in industry were once recent graduates, and the stream of new employees reminds them of the skill level of such people. Academics see a constant stream of people new to software development; this group forms the norm they have to work within, and many don’t appreciate the skill gulf that exists between a recent graduate and an experienced software developer,
  • those writing research software are paid a lot less than their industry counterparts. The handful of very competent software developers I know working in engineering/scientific research are doing it for their love of the engineering/scientific field in which they are active. Take this love away, and they will find that not only does industry pay better, but it also provides lots of interesting projects for them to work on (academics often have the idea that all work in industry is dull).

    I have met people who have taken jobs writing research software to learn about software development, to make themselves more employable outside academia.

Does it matter that the source code of research software is a tangled mess?

The author of a published paper is supposed to provide enough information to enable their work to be reproduced. It is very unlikely that I would be able to reproduce the results in a chemistry or genetics paper, because I don’t know enough about the subject, i.e., I am not skilled in the art. Given a tangled mess of source code, I think I could reproduce the results in the associated paper (assuming the author was shipping the code associated with the paper; I have encountered cases where this was not true). If the code failed to build correctly, I could figure out (eventually) what needed to be fixed. I think people have an unrealistic expectation that research code should just build out of the box. It takes a lot of work by a skilled person to create portable software that just builds.

Is it really cost-effective to insist on even a medium degree of buildability for research software?

I suspect that the lifetime of source code used in research is just as short and lonely as it is in other domains. One study of 214 packages associated with papers published between 2001 and 2015 found that 73% had not been updated since publication.

I would argue that a more useful investment would be in testing that the software behaves as expected. Many researchers I have spoken to have not appreciated the importance of testing. A common misconception is that because the mathematics is correct, the software must be correct (completely ignoring the possibility of silly coding mistakes, which everybody makes). Commercial software has the benefit of user feedback for detecting some incorrect behavior. Research software may only ever have one user.

Research software engineer is the fancy title now being applied to people who write the software used in research. Originally this struck me as an example of what companies do when they cannot pay people more: they give them a fancy title. Recently the Society of Research Software Engineering was set up. This society could certainly help with training, but I don’t see it making much difference with regard to status and salary.

Twenty-Niner – baron m.

baron m. from thus spake a.k.

Sir R----- my fine fellow! Come in from the cold and join me at my table for a tumbler of restorative spirits!

Might I also tempt you with a wager?

Good man!

I propose a game that was popular amongst the notoriously unsuccessful lunar prospectors of '29. Spurred on by rumours of gold nuggets scattered upon the ground simply for the taking, they arrived en masse during winter, woefully unprepared for the inclement weather. By the time that I arrived on a diplomatic mission to the king of the moon people, they were in a frightful state, desperately short of provisions and futilely trying to work the frost-bitten land to grow more.

Static site migration – we have working comments with isso!

Timo Geusch from The Lone C++ Coder's Blog

One “biggie” that was holding up this blog’s migration to a static site was getting a comments system up and running, followed by importing the existing comments. I had picked Isso a while back as it allows for easy import of existing comments from WordPress. I really didn’t want to depend on a third party comment hosting service like Disqus. I also didn’t want to use Staticman, mainly because it has dependencies on other services like Github or Gitlab.

Purpose over Backlog

Allan Kelly from Allan Kelly Associates

Backlogs are a good idea. Backlogs ease the transition from the old “requirements up front” world to the new, more dynamic agile world. Backlogs provide a compatibility layer for agile teams to interface to more traditional project management and governance. Backlogs even allow you to take a stab at a done date!

Backlogs allow you to even out work between the quiet periods and the busy times. Backlogs give you a place to store good ideas which you can’t do just now. And because stakeholders can see their request is not forgotten they don’t need to shout for it today.

Yes backlogs are good. I’ve seen them work well myself and I’ve taught many teams to effectively use backlogs.

But – you knew there was a but coming didn’t you? – but…

Backlogs have problems: too many teams are labouring under the Tyranny of the Backlog; they have become backlog-slaves, practising something we might call BLDD – Back Log Driven Development.

(To be clear, when I say “backlog” I am primarily thinking of the product backlog – the long list of all the things the team (might) do in the future. This is different to the sprint backlog (iteration backlog). The sprint backlog is a shorter list of things the team aims to do this iteration. I am using Scrum terminology but the ideas are pretty much “generic agile” and I’m thinking more broadly than Scrum. Many implementations of Kanban feature a product backlog of sorts, so while Kanban is less prone to these problems it is not immune.)

1) Lump of Work Fallacy

There is usually an assumption that the backlog represents all the work to be done – an impression reinforced by early implementations of Scrum. In the short term that leads to agile teams being seen as inflexible and prioritising process over need because new work is not allowed in.

In some cases teams even struggle to get started on work because a big up-front requirements gathering and analysis activity is required to create a backlog. In the worst cases that work is even estimated and end-dates forecast before a line of code is cut or developers hired.

In the longer term it is simply unrealistic to assume the backlog is fixed. Even with more and better analysis it is impossible to foresee all future requests. The agile adage “it is in doing the work that we understand the work” cuts both ways: coders understand what they need to build, and customers/stakeholders/analysts understand what they want.

Work will arrive after you begin; any system that does not incorporate that truth will fail one way or another.

2) Bigger than you think

Not only does the backlog grow with completely new work, the work already in it changes – and grows. There are many reasons this happens: new opportunities appear, hidden ones become clear, requests require more work than expected, and so on.

Humans are very bad at estimating, especially about the future, and, it turns out, they are also very bad at estimating time spent in the past. If you want accurate forecasts you need to invest in them, you need to make structural changes and you need to use statistics.

However, because of the lump of work fallacy and the belief that humans can make estimates, poor end-date projections get made and when they are missed (because they were wrong to start with) everyone gets upset.

3) Fallacy of Done

Backlogs come with burn-down charts and an assumption that there is an end; and that end is when everything is “done.” The team will be done when the backlog is empty. That assumption is baked into BLDD, traditional project management and even governance.

I have long argued that software is never done. I’ll accept that I might be wrong, but in the digital age, when business runs on digital technology (i.e. software), your products are only done when your business is done. The technology is the business, and the business is the technology. Stop the backlog growing, stop growing your technology, and you kill the business.

4) Backlog Bottomless pit

Put all those reasons together and the backlog becomes a bottomless pit. In the early days of agile, when I managed teams myself, the backlog would often sit on my desk, written out on index cards and held together with rubber bands. I could get a sense of how big the backlog was just by looking at it.

Today everyone uses electronic tracking systems. Not only do these allow an infinite number of items, they also rob us of perspective. To paraphrase Comrade Stalin: “2 outstanding backlog items is a tragedy, 200 is a statistic.”

5) Backlogs obscure strategy & purpose

With so many backlog items it is easy to get lost – you can’t see the wood for the trees. Arguments over what will be done next start to resemble deciding who should get a lifeboat place on a sinking ship; add in the demands of “when will you be done?” (plus explaining why the date has changed) and “the bigger picture” gets lost.

In Back Log Driven Development the sense of purpose and strategic goals is lost as teams struggle with the day-to-day demands of just doing stuff.

6) Powerless product owners (i.e. backlog administrators)

Tyranny of the backlog seems worst where product owners lack real authority and skills. They are little more than backlog administrators. They spend most of the week adding requests to the backlog, then passing a few chosen items to developers in planning meetings. A vicious circle develops: the product owner can’t win, so people trust them less, their authority wanes, and the backlog spirals.

Few organisations give product owners the power needed to get a grip on this situation. Indeed, many product owners are plucked from the ranks of development or support and given a battlefield promotion to product owner, but lack the skills required. (See The problem with Product Owners.)

A solution?

For years I’ve been suggesting teams throw away the backlog – you will not forget the important things. But then how do you know what to do?

Take a step back; start with your purpose, your mission, the reason your team, your company, your organisation exists. What should you be doing? How can you fulfil that purpose and serve your customers?

This is where I see a role for OKRs and jobs to be done. Both these techniques – together, or separately – can be used as story generators. Every time you need more work, more stories, you return to your OKRs and ask “what can we do now to move us towards our objective?”

When writing Succeeding with OKRs in Agile I became more and more convinced this is the path to take. Increasingly I sum this up as Purpose over Backlog.

Step 1: Clarify your purpose – what is your overarching reason for existing?
Step 2: Clarify how your existing strategy builds towards that purpose, and if you don’t have a strategy, create one.

Repeat steps 1 & 2 annually.

Step 3: Think broadly, set your OKRs as a team so you build towards your purpose by following your strategy.
Step 4: Spend the next 12 weeks executing against those OKRs.

Repeat steps 3 & 4 every 3 months.

Step 5: In each planning meeting take stock of what you have done and progress against OKRs.
Step 6: Ask “what do we need to do next to move towards the OKRs?”

Repeat steps 5 & 6 every 2 weeks.

And if you are Kanban’ing then keep steps 1, 2, 3 and 4, adjust 5 and 6 as appropriate.

Having finished, completed, published Succeeding with OKRs I really wish I had been clearer in the book. The ideas are there but with time they have become so much clearer… maybe I need another book.

Buy Succeeding with OKRs in Agile at Amazon today.


Performance impact of comments on tasks taking a few minutes

Derek Jones from The Shape of Code

How cost-effective is an investment in commenting code?

Answering this question requires knowing the time needed to write comments and the time they save for later readers of the code.

A recent study investigated the impact of comments in small programming tasks on developer performance, and Sebastian Nielebock, the first author, kindly sent me a copy of the data.

How might the performance impact of comments be measured?

The obvious answer is to ask subjects to solve a coding problem, with half the subjects working with code containing comments and the other half the same code without the comments. This study used three kinds of commenting: No comments, Implementation comments and Documentation comments; the source was in Java.

Were the comments in the experiment useful, in the sense of providing information that was likely to save readers some time? A preliminary run was used to check that the comments provided some benefit.

The experiment was designed to be short enough that most subjects could complete it in less than an hour (average time to complete all tasks was 31 minutes). My own experience with running experiments is that it is possible to get professional developers to donate an hour of their time.

What is a realistic and experimentally useful amount of work to ask developers to do in an hour?

The authors asked subjects to complete nine tasks: three each of applying the code (i.e., using the code’s API), fixing a bug in the code, and extending the code. Would a longer version of one of each, rather than a shorter three of each, have been better? I think the only way to find out is to try it both ways (I hope the authors plan to do another version).

What were the results (code+data)?

Regular readers will know, from other posts discussing experiments, that the biggest factor is likely to be subject (professional developers+students) differences, and this is true here.

Based on a fitted regression model, Documentation comments slowed performance on a task by 30 seconds, compared to No comments and Implementation comments (which both had the same performance impact). Given that average task completion time was 205 seconds, this is a 15% slowdown for Documentation comments.
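
A minimal sketch of the kind of model that could produce such an estimate (not necessarily the model the authors fitted); the data frame and column names are my assumptions, with a per-subject random intercept to soak up the large between-subject differences:

library(lme4)

# Task time as a function of comment kind, with a random intercept for each subject.
fit <- lmer(seconds ~ comment_kind + (1 | subject), data = comment_exp)
summary(fit)   # fixed-effect coefficients estimate the slowdown associated with each comment kind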

This study set out to measure the performance impact of comments on small programming tasks. The answer, at least for tasks designed to take a few minutes, is to write no comments or, if comments are required, to write Implementation comments.

This experiment measured the performance impact of comments on developers who did not write the code containing them. These developers have to first read and understand the comments (which takes time). However, the evidence suggests that code is mostly modified by the developer who wrote it (just reading the code does not leave a record that can be analysed). In this case, reading a comment (that the developer previously wrote) can trigger existing memories, i.e., it has a greater information content for the original author.

Will comments have a bigger impact when read by the person who wrote them (and the code), or on tasks taking more than a few minutes? I await the results of more experiments…