Adding Value to User Stories

Allan Kelly from Allan Kelly Associates

My next online workshop is now available for booking: Adding Value to User Stories.

As with my previous online workshops, this is a series of four 90-minute online (Zoom) sessions delivered in consecutive weeks. And, as before, a few tickets are available free to those who are furloughed or unemployed.

This workshop is for Product Owners (including business analysts and product managers), Scrum Masters, and Project and Development Managers.

Learning Objectives

  • Appreciate the influence of value on effort estimates and technical architecture at the story and project level
  • Know how to estimate value for user stories and epics
  • Recognise how cost-of-delay changes value over time, why deadlines are elastic by value and how to use Best Before and Use By dates when prioritising work
  • Appreciate how values define value, and how this differs between organisations

Details and tickets are on the Eventbrite booking page. Alternatively, you can book directly with me.

Setting up rdiff-backup on FreeBSD 12.1

Timo Geusch from The Lone C++ Coder's Blog

My main PC workstation (as opposed to my Mac Pro) is a dual-boot Windows and Linux machine. While backing up the Windows portion is relatively easy via some cheap-ish commercial backup software, I ended up backing up my Linux home directories only very occasionally. Clearly, Something Had To Be Done™.

I had a look around for Linux backup software. The one I was already familiar with was Timeshift, but at least the Manjaro port can’t back up to a remote machine, which made it useless for my purposes. I eventually settled on rdiff-backup as it seemed simple, has been around for a while and also looks very cron-friendly. So far, so good.

Clang-Tidying up the house

Products, the Universe and Everything from Products, the Universe and Everything

If there is any single consolation amidst the circumstances we are all having to cope with at the moment, it is that many of us have lots of time to fill - not only with unproductive things like bingeing Netflix (I really should get around to watching Dirk Gently's Holistic Detective Agency...) but also with tasks which might have a more lasting benefit.

A Clang-Tidy analysis of a Visual Studio 2019 skeleton MFC project within VisualLintGui.

Those could be tasks like learning a language (programming or human), joining online yoga classes, writing a book, designing a website, blogging, reading (I can recommend Francis Buontempo's Genetic Algorithms and Machine Learning for Programmers if you fancy learning a little about ML) or generally just getting on with stuff with perhaps fewer distractions than usual.

On the latter note, for the past few months we've been working on the codebase of Visual Lint 7.5 (the next version) in our development branch, and it is coming along quite nicely.

One of the things we have been planning to do for some time is to add direct support for the Clang-Tidy analysis tool to Visual Lint. When the UK lockdown started, focusing on this task in particular proved to be a very useful distraction from all the fear and uncertainty we found around us.

Sometimes being in the zone helps in more ways than usual.

The screenshot above should give you an idea of where we are at the moment. Whilst there is still a great deal to do before we can consider this production-ready, the foundation is in place and it is definitely usable. For example, selected issues can already be suppressed from the Analysis Results Display by inserting inline suppression directives ("// NOLINT") using the same context menu command used to suppress (for example) PC-lint, PC-lint Plus and CppCheck analysis issues.
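
By way of illustration, here is a minimal sketch of ours (not code from Visual Lint; the function name is invented) showing what those inline suppressions look like in practice:

  // A bare "// NOLINT" comment silences all Clang-Tidy checks on that line;
  // "// NOLINT(check-name)" silences only the named check.
  void example()
  {
      int* p = 0;  // flagged by (for example) modernize-use-nullptr
      int* q = 0;  // NOLINT(modernize-use-nullptr) - suppressed inline
      (void)p;
      (void)q;
  }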

With Microsoft Visual Studio being one of the major development environments we support, one of the most important things to address is configuring Clang-Tidy to be tolerant of non-standard Visual C++ projects. The errors shown for some files in the Analysis Status Display in the screenshot above are exactly because of this - a standards-compliant C++ compiler is likely to generate at least some errors while compiling most Visual C++ projects.

The only error we saw in the Visual C++ project mentioned above was: clang-diagnostic-error: call to non-static member function without an object argument.

The StackOverflow topic Refactor MFC message maps to include fully qualified member function pointers discusses exactly the same issue.
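
For the curious, here is a minimal, MFC-free sketch of ours (the names CMyFrame, OnAbout and BuildMessageMap are invented) of the kind of construct that triggers this diagnostic, together with the standard-conforming fully qualified form that topic recommends:

  // MFC's message map tables are built inside a static member function.
  // In that context, naming a non-static member function without full
  // qualification only compiles under Microsoft's language extension;
  // a conforming compiler requires the fully qualified member pointer.
  struct CMyFrame
  {
      void OnAbout() {}

      static void BuildMessageMap()
      {
          // auto p = OnAbout;           // clang-diagnostic-error: call to
          //                             // non-static member function without
          //                             // an object argument
          auto q = &CMyFrame::OnAbout;   // fully qualified member function
          (void)q;                       // pointer: accepted everywhere
      }
  };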

This is mitigated by the fact that recent versions of Clang-Tidy do a pretty good job of handling non-standard code, so the things that Clang-Tidy warns about seem to be exactly the sort of things you would expect it to flag, and a fair degree of tuning is possible. For example, the clang-diagnostic-error issues in the case above could be disabled entirely if that was what you wanted (though as it's a trivial fix you could alternatively choose to correct the code).

Furthermore, with the modernize checks enabled (e.g. using the --checks=modernize-* switch) Clang-Tidy can tell you exactly what you could do to your C++ code to bring it up to modern standards.
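
To give a flavour, here is a hypothetical example of ours (print_all and process are invented names; this is not output from the tool) of the kind of change the modernize-loop-convert check suggests:

  #include <iostream>
  #include <vector>

  void process(int value) { std::cout << value << '\n'; }

  void print_all(const std::vector<int>& values)
  {
      // Before: modernize-loop-convert flags this index-based loop...
      for (std::vector<int>::size_type i = 0; i != values.size(); ++i)
          process(values[i]);

      // ...and suggests replacing it with the equivalent range-based form:
      for (const int value : values)
          process(value);
  }

  int main()
  {
      print_all({ 1, 2, 3 });
  }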

We will obviously be talking more about Clang-Tidy in due course - particularly once we start looking at using it to automatically correct the issues it finds (if you are not aware, this is one of the most compelling features of Clang-Tidy). The good news is that autofixing selected issues should be fairly straightforward using the output of the Clang-Tidy --export-fixes switch.

Lockdowns or eased lockdowns, more interesting times lie ahead.

Happy 60th birthday: Algol 60

Derek Jones from The Shape of Code

Report on the Algorithmic Language ALGOL 60 is the title of a 16-page paper appearing in the May 1960 issue of the Communications of the ACM. It describes probably one of the most influential programming languages ever, and yet a language that many readers may never have heard of.

During the 1960s there were three well-known, widely used programming languages: Algol 60, Cobol, and Fortran.

When somebody created a new programming language, Algol 60 tended to be their role model. A few of the authors of the Algol 60 report cited beauty as one of their aims, a romantic notion that captured some users' imaginations. Also, the language was full of quirky, out-there features; plenty of scope for pin-head discussions.

Cobol appears visually clunky, is used by business people and focuses on data formatting (a deadly dull, but very important issue).

Fortran spent 20 years catching up with features supported by Algol 60.

Cobol and Fortran are still with us because they never had any serious competition within their target markets.

Algol 60 had lots of competition, and its successor language, Algol 68, was groundbreaking within its academic niche, i.e., not in a way that was useful to developers.

Language family trees ought to have Algol 60 at, or close to, their root. But the Algol 60 descendants have been so successful that the creators of these family trees have rarely heard of it.

In the US the ‘military’ language was Jovial, and in the UK it was Coral 66, both derived from Algol 60 (Coral 66 was the first language I used in industry after graduating). I used to hear people saying that Jovial was derived from Fortran; another example of people citing the popular language they knew.

Algol compiler implementers documented their techniques (probably because they were often academics); ALGOL 60 Implementation is a real gem of a book, and still worth a read today (as an introduction to compiling).

Algol 60 was ahead of its time in supporting undefined behaviors 😉 Such as: “The effect, of a go to statement, outside a for statement, which refers to a label within the for statement, is undefined.”

One feature of Algol 60 rarely adopted by other languages is its parameter passing mechanism, call-by-name (now that lambda expressions are appearing in widely used languages, call-by-name is enjoying a kind of comeback). Call-by-name essentially has the same effect as textual substitution. Given the following procedure (it’s not a function because it does not return a value):

procedure swap (a, b);
   integer a, b;
begin
   integer temp;
   temp := a;
   a := b;
   b := temp
end;

the effect of the call: swap(i, x[i]) is:

  temp := i;
  i := x[i];
  x[i] := temp

which might come as a surprise to some.

Needless to say, programmers came up with ‘clever’ ways of exploiting this behavior; the most famous being Jensen’s device.
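
Now that lambdas are widely available, here is a rough sketch of ours in C++ (the names sum, x and i are just for illustration) showing how call-by-name can be emulated with lambdas acting as thunks, including a Jensen's device style summation:

  #include <iostream>

  // Call-by-name emulated with lambdas: each by-name parameter is a
  // callable ("thunk") that re-evaluates its argument expression on
  // every use, mimicking Algol 60's textual substitution.
  template <typename Index, typename Term>
  double sum(Index i, int lo, int hi, Term term)
  {
      double s = 0.0;
      for (i() = lo; i() <= hi; ++i())  // i() yields a reference, so it can be assigned to
          s += term();                  // term() re-evaluates x[i] with the current i
      return s;
  }

  int main()
  {
      double x[] = { 1.0, 2.0, 3.0, 4.0 };
      int i;
      // Jensen's device: the term expression depends on the index parameter.
      double total = sum([&]() -> int& { return i; }, 0, 3,
                         [&]() { return x[i]; });
      std::cout << total << '\n';  // prints 10
  }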

The following example of go to usage appears in: International Standard 1538 Programming languages – ALGOL 60 (the first and only edition appeared in 1984, after most people had stopped using the language):

go to if Ab < c then L17
  else g[if w < 0 then 2 else n]

Orthogonality of language use won out over the goto FUD.

The Software Preservation Group is a great resource for Algol 60 books and papers.

May The Fours Be With You – baron m.

baron m. from thus spake a.k.

Sir R-----! Come join me for a glass of chilled wine! I have a notion that you're in the mood for a wager. What say you?

I knew it!

I have in mind a game of dice that reminds me of my time as the Russian military attaché to the city state of Coruscant and its territories during the traitorous popular uprising fomented by the blasphemous teachings of a fundamentalist religious sect known as the Jedi.

Having all the source code in one file

Derek Jones from The Shape of Code

An early, and supposedly influential, analysis of the Coronavirus outbreak was based on results from a model whose 15,000-line C implementation was contained in a single file. There has been lots of tut-tutting from the peanut gallery about the code all being in one file rather than distributed over many files. The source on Github has since been heavily reworked.

Why do programmers work with all the code in one file, rather than split across multiple files? What are the costs and benefits of having the 15K of source in one file, compared to distributing it across multiple files?

There are two kinds of people who work with all the code in one file: novices and really capable developers. Richard Stallman is an example of a very capable developer who worked using files containing huge amounts of code, as anybody who has looked at the early sources of gcc will know all too well.

The benefit of having all the code in one file is that it is easy to find stuff and make global changes. If the source is scattered over multiple files, then working on the code entails knowing which file to look in to find whatever is of interest; there is a learning curve (these days screens have lots of pixels, and editors support multiple windows with a different file in each window; I’m sure lots of readers work like this).

Many years ago, when 64K was a lot of memory, I sometimes had to do developer support: people would come to me complaining that the computer was preventing them from writing a larger program. What had happened was that they had hit the capacity limit of the editor. The source now had to be spread over multiple files to get around this ‘limitation’. In practice people experienced the benefits of using multiple files, e.g., the editor loading files faster (because they were a lot smaller) and reduced program build time (because only the code that changed needed to be recompiled).

These days, 15K of source can be loaded or compiled in the blink of an eye (unless a really cheap laptop is being used). Growth in computing power has significantly reduced these benefits.

What costs might be associated with keeping all the source in one file?

Monolithic code makes sharing difficult. I don’t know anything about the development environment within which these researchers worked. If there were lots of different programs using the same algorithms, or reading/writing the same file formats, then code reuse often provides a benefit that makes it worthwhile splitting off the common functionality. But then the researchers have to learn how to build a program from multiple source files, which a surprising number are unwilling to do (at least it has always been surprising to me).

Within a research group, sharing across researchers might be possible (assuming they are making some use of the same algorithms and file formats). Involving multiple people in the ongoing evolution of software creates a need for some coordination. At the individual level it may be more cost-efficient for people to have their own private copies of the source, with savings only occurring at the group level. With software development having a low status in academia, I don’t see any of the senior researchers willingly taking on a management role for this code. Perhaps one of the people working on the code is much better than the others (it often happens), but are they going to volunteer themselves as chief dogsbody for the code?

In the world of Open Source, where source code is available, cut-and-paste is rampant (along with wholesale copying of files). Working with a copy of somebody else’s source removes a dependency, and if their code works well enough, then go for it.

A cost often claimed by the peanut gallery is that having all the code in a single file is a signal of buggy code. Given that most of the programmers who do this are novices, rather than really capable developers, such code is likely to contain many mistakes. But splitting the code up into multiple files will not reduce the number of mistakes it contains, just distribute them among the files. Correlation is not causation.

For an individual developer, the main benefit of splitting code across multiple files is that it makes developers think about the structure of their code.

For multi-person projects there are the added potential benefits of reusing code, and reducing the time spent reading other people’s code (it’s no fun having to deal with 10K lines when only a few functions are of interest).

I’m not saying that the original code is good, bad, or indifferent. What I am saying is that having all the source in one file may, or may not, be the most effective way of working. It’s complicated, and I have no problem going with the flow (and limiting the size of the source files I write), but let’s not criticise others for doing what works for them.

Victory in Europe (VE) Day in Churchill’s Toyshop

Tim Pizey from Tim Pizey

My grandfather, Norman Angier, worked at Churchill’s Toyshop (M.D.1) as the head civilian engineer during WWII.

On VE day “Norman Angier felt it was an occasion for fireworks. He therefore acquired a large batch of quite big rockets and proceeded to poop them off, selecting as his firing site a point at the summit of a concrete road which led down to the ranges and the CMP’s camp. Unlike the Guy Fawkes day rockets, these were not provided with sticks for poking into the ground to keep the bodies upright; but Norman had fixed up some sort of stand for doing this. All went well for a while and the show was most spectacular. Then Norman got careless. A rocket he had just initiated was not properly secured. It fell over, and instead of going up vertically proceeded at speed in the near horizontal plane. A weary CMP was walking along this road on his way back to the camp. The rocket struck him right on target. Luckily, he was not seriously hurt and we soon whipped him off to hospital. The trouble was to make him believe that the attack was not intentional. He had been the victim of a 1000 to 1 chance.”

Norman prepares to detonate DeGaul