Visual Lint 6.5.2.295 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 6.5. The following changes are included:

  • Added basic support for Qt Creator projects (.pro/.pro.user files). Note that the implementation does not yet support subprojects, nor does it read preprocessor and include folder properties. As such, if the analysis tool you are using requires preprocessor symbols or include folders to be defined (as PC-lint and PC-lint Plus do), for the time being they must be defined manually, e.g. written as -D and -i directives within a PC-lint/PC-lint Plus std.lnt indirect file (see the sketch after this list).
  • The "Analysis Tool" Options page now recognises PC-lint Plus installations containing only a 64 bit executable if the "Use a 64 bit version of PC-lint if available" option is set.
  • When the PC-lint Plus installation folder is selected in the "Analysis Tool" Options page, the PC-lint Plus manual (<installation folder>/doc/manual.pdf) is now correctly configured.
  • Added a workaround to the Eclipse plug-in for an issue identified with some Code Composer Studio installations which source plug-in startup and shutdown events in different threads.
  • Fixed a crash which affected some machines when the "Analysis Tool" Options page was activated while PC-lint was the active analysis tool.
  • Fixed a bug which caused the Visual Studio plug-in to be incorrectly configured in Visual Studio 2017 v15.7.
  • Fixed a bug which could cause the PC-lint/PC-lint Plus environment file to reset to "Defined in std.lnt".
  • Updated the "Example PC-lint/PC-lint Plus project.lnt file" help topic and those relating to supported project types.
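
By way of illustration, a minimal std.lnt indirect file defining preprocessor symbols and include folders might look like the following sketch (the paths and symbols shown are hypothetical – substitute whatever your own project needs):

// std.lnt -- hypothetical example of manually defined project properties
-DQT_CORE_LIB                                 // preprocessor symbol (-D directive)
-DQT_WIDGETS_LIB
-i"C:\Qt\5.10.1\msvc2017_64\include"          // include folder (-i directive)
-i"C:\Qt\5.10.1\msvc2017_64\include\QtCore"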

Download Visual Lint 6.5.2.295

Dialogue sheets update – translation & Amazon

Allan Kelly from Allan Kelly Associates

[Image: Sprint Retrospective dialogue sheet, version 5]

It is six years now since I introduced Retrospective Dialogue sheets to the world and I continue to get great feedback about the sheets. Now I’m running a little MVP with the sheets via Amazon, but first…

In the last few months Alan Baldo has translated the planning sheet to Portuguese and Sun Yuan-Yuan, with help from David Tanzer, has translated two of the retrospective sheets to German.

Thank you very much Alan, Sun and David!

I also updated the Sprint Retrospective sheet (above): version 5 removes all references to software development. While it can still be used by software teams it is now more general. Actually, the sheet was largely domain neutral already, which explains why it has been used for retrospectives in a Swedish kindergarten.

In the meantime I’ve been busy with an MVP experiment of my own – which has taken a surprising amount of work to get up and running – and which you can help with.

I have made printed versions of the latest Sprint Retrospective sheet available to buy on Amazon. The sheets are still available as a free download to print yourself but I want to see if I can reach a broader audience by offering the sheets on Amazon. Plus I know some teams have trouble getting the sheets printed.

Right now this is a market test: the printed sheets are only available in the UK and I only have a few sheets in stock, so this is a “Buy now while stocks last” offer.

If you are outside the UK (sorry) and want a printed sheet, or find stocks have run out, or want a different sheet printed, please contact me and I’ll do my best.

Assuming this is a success, I’ll get more sheets printed, arrange to sell outside the UK, add more of the sheets to Amazon and make a renewed effort on translations. Pheww!

So now I need to ask for your help.

If you have used the sheets and find them good, please write a review on Amazon – there are a few already but there cannot be too many.

Conversely, if you have never tried a Dialogue Sheet retrospective please do so and let me know how it goes: I am always seeking feedback. Download and print for yourself or go over to Amazon and buy today – you could be the first buyer!

The post Dialogue sheets update – translation & Amazon appeared first on Allan Kelly Associates.

Event: Find out what makes Python so appealing!

Paul Grenyer from Paul Grenyer

A Tour of Python
Burkhard Kloss

Wednesday, 6th June
The Priory Centre, Priory Plain, Great Yarmouth

Find out what makes Python so appealing!

Burkhard will offer a brief tour of the Python language, and some of the features that make it so expressive, easy to use, and appealing in a wide range of fields. After that, he'll look at examples of Python usage in practice, from really small computers (micro:bits) to clouds, from databases to web development, and from data science to machine learning.

Burkhard Kloss

I only came to England to walk the Pennine Way… 25 years later I still haven’t done it. I did, though, get round to starting an AI company (spectacularly unsuccessful), joining another startup long before it was cool, learning C++, and spending a lot of time on trading floors building systems for complex derivatives. Sometimes hands on, sometimes managing people. Somewhere along the way I realised you can do cool stuff quickly in Python, and I’ve never lost my fascination with making machines smarter.

RSVP: https://www.meetup.com/Norfolk-Developers-NorDev/events/249290648/

Premium mediocrity is software engineering’s demographic

Derek Jones from The Shape of Code

Software engineering is one of the skills needed to write software, but outside of student coursework is rarely an end in itself. Software is written to do something and the person writing the code needs to know about the something.

If enough people are involved in something, a job title gets created by inserting the appropriate application domain name before ‘software engineer’, e.g., the something software engineer; ‘systems software engineering’ was one of the first recorded uses of ‘software engineering’, ‘embedded software engineer’ is a common usage and more recently ‘research software engineer’ has been trending.

Customers want the software systems they use to fulfill their needs. Implementing a software system involves figuring out what the needs are, how best to implement them using the available resources and producing usable software; all within a given amount of time and money.

How much software engineering knowledge and skill does a something software engineer need? The obvious answer is: enough to get the something done. Ok, how much is needed to get the something done?

There are only so many hours in a day: what percentage of available time is best spent learning about software engineering, what percentage learning about the something and what percentage doing rather than learning?

The only data I have for answering this question is my own experience of talking to people, from a wide range of business and application areas, whose job includes writing software. My background is compilers (from C to Cobol) and static analysis, my knowledge of end-user application domains is derived from talking to the developers who were using the compilers or static analysis tools I was working on at the time.

I have always been struck by the minimalist knowledge of most developers when it comes to the programming language they are using. It took a while, but eventually I accepted the obvious: most developers don’t need to know much about the language they are using to get their job done.

By a process that resembles incentivized trial and error, people learn how to write code that does what they want; the compiler does not complain and the output looks ok. For some languages, I used to be able to work out which books a relatively new developer had used to guide their learning, by matching a book’s example code snippets with the code they had written.

This minimalist knowledge approach to programming languages is cost effective because most code is simple and has a short lifetime; the cost of learning lots of language details does not provide enough benefit to be worthwhile.

I am a minimalist language Python developer. Why would I spend time learning more about the semantics of Python than I need to?

What are the benefits of being a language expert? Compiler writers get paid to learn the ins and outs of a language and I know a few people who became language experts without being compiler writers (they got hooked on knowing the language). I have found it useful for keeping my code simple (I am not tempted to write complicated code, or use obscure constructs, in the mistaken belief that they are better than the simple stuff); it is also useful for figuring out other people’s complicated or obscure usage (created intentionally or accidentally).

These benefits are not enough to convince me to learn more about Python, the language. I am content to wait until I need to learn more.

I have occasionally taught advanced programming courses, aimed at developers with a few years’ experience working in industry. These courses had to include the word ‘advanced’ in their title, otherwise developers with a few years’ experience would never have signed up; ‘advanced’ is a necessary marketing signal (others who have run such courses report the same behavior). The course contents were essentially a review of basic material, with lots of examples; most of those attending did not know enough to follow real advanced material. The courses were really about uncovering and correcting bad habits that attendees had picked up over time (often, a technique was discovered to fix a problem and then subsequently adopted for more general use).

What about general software engineering skills? A minimalist knowledge approach to software engineering is cost effective because most code does not exist long enough to make it worthwhile investing in reducing future maintenance costs. Yes, it is more expensive for those that survive to become commonly used, but think of all the savings from not investing in those that did not survive. Software engineering decisions should not be driven by survivorship bias.

The first requirement of any commercial software system is to attract paying customers. In a rapidly changing market, being first with a saleable product can be the difference between life and death. Minimizing software engineering effort saves time and money (in the short term). If the product is a success, there will be money to pay for what needs to be done, if the product fails nobody cares. I have seen a lot of software systems that are a commercial success and a complete software engineering mess; successful, well engineered software is less common (or perhaps they just don’t need me to help them out).

Software engineering mediocrity is not only viable; for most people it is the outcome of making a cost/benefit decision to invest their learning time in the application domain, not software engineering (or the computer language).

Of course, nobody wants to be seen as being mediocre (for some people, mediocre overstates their skill level); their behavior is premium mediocre.

There are a few application areas where software engineering skills are needed, e.g., safety critical software and warehouse scale computing. A few high profile cases are hiding the reality that whatever works is cost effective for most software solutions.

Blockade – baron m.

baron m. from thus spake a.k.

Good heavens Sir R----- you look quite pallid! Come take a seat and let me fetch you a measure of rum to restore your humors.
To further improve your sanguinity might I suggest a small wager?

Splendid fellow!

I have in mind a game invented to commemorate my successfully quashing the Caribbean zombie uprising some few several years ago. Now, as I'm sure you well know, zombies have ever been a persistent, if sporadic, scourge of those islands. On that occasion, however, there arose a formidable leader from amongst their number; the zombie Lord J------ the Insensate.

It Compiles, Ship It!

Chris Oldwood from The OldWood Thing

The method was pretty simple and a fairly bog-standard affair: it just attempted to look something up in a map and return the associated result, e.g.

public string LookupName(string key)
{
  string name;

  if (!customers.TryGetValue(key, out name))
    throw new Exception("Customer not found");

  return name;
}

The use of an exception here to signal failure implied to me that this really shouldn’t happen in practice unless the data structure is screwed up or some input validation was missed further upstream. Either way you know (from looking at the implementation) that the outcome of calling the method is either the value you’re after or an exception will be thrown.

So I was more than a little surprised when I saw the implementation of the method suddenly change to this:

public string LookupName(string key)
{
  string name;

  if (!customers.TryGetValue(key, out name))
    return null;

  return name;
}

The method no longer threw an exception on failure; it now returned a null string reference.

This wouldn’t be quite so surprising if all the call sites that used this method had also been fixed-up to account for this change in behaviour. In fact what initially piqued my interest wasn’t that this method had changed (although we’ll see in a moment that it could have been expressed better) but how the calling logic would have changed.

Wishful Thinking

I always approach a change from a position of uncertainty. I’m invariably wrong or have something to learn, either from a patterns perspective or a business logic one. Hence my initial assumption was that I now needed to think differently about what happens when I need to “lookup a name” and that lookup fails. Where before it was truly exceptional and should never occur in practice (perhaps indicating a bug somewhere else) it’s now more likely and something to be formally considered, and resolving the failure needs to be handled on a case-by-case basis.

Of course that wasn’t the case at all. The method had been changed to return a null reference because it was now an implementation detail of another new method which didn’t want to use exception catching for flow control. Instead they now simply check for null and act accordingly.

As none of the original call sites had been changed to handle the new semantics, a rich exception thrown early had now been traded for (at best) a NullReferenceException later or (worst case) no error at all and an incorrect result calculated from bad input data [1].

The TryXxx Pattern

Coming back to reality, it’s easy to see that what the author really wanted here was another method that allowed them to attempt a lookup on a name, knowing that in their scenario it could possibly fail, but that’s okay because they have a back-up plan. In C# this is a very common pattern that looks like this:

public bool TryLookupName(string key, out string name)

Success or failure is indicated by the return value and the result of the lookup returned via the final argument. (Personally I’ve tended to favour using ref over out for the return value [2].)
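
As a sketch (reusing the customers map from the earlier snippets; the fallback value is made up purely for illustration), the pattern in use looks something like this:

public bool TryLookupName(string key, out string name)
{
  // Success or failure is conveyed by the return value,
  // the looked-up name by the out parameter.
  return customers.TryGetValue(key, out name);
}

// The caller is now forced to acknowledge the failure path:
string name;
if (!TryLookupName(key, out name))
  name = "(unknown)"; // the caller's back-up plan, whatever that may be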

The Optional Approach

While statically typed languages are great at catching all sorts of type-related errors at compile time, they cannot catch problems when you smuggle optional values around as null references, as languages like C# and Java allow. Any reference-type value in C# can inherently be null and therefore the compiler is at a loss to help you.

JetBrains’ ReSharper has some useful annotations which you can use to help their static analyser point out mistakes or elide unnecessary checks, though you have to add noisy attributes everywhere. Even so, expressing your intent in code is the goal, and it is one valid and very useful approach.

Winding the clock into the future we have the new “optional reference” feature to look forward to in C# (currently in preview). Rather than bury their heads in the sand the C# designers have worked hard to try and right an old wrong and reduce the impact of Sir Tony Hoare’s billion dollar mistake by making null references type unsafe.
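
As a sketch of what that enables (the annotation syntax shown here is from the C# design work and, with the feature only in preview, the exact details may differ by compiler version; the customers map is the same one as above), the possibility of null becomes part of the method’s signature:

#nullable enable  // opt in to nullable reference type checking

public string? LookupName(string key)  // '?' declares the result may be null
{
  string name;
  return customers.TryGetValue(key, out name) ? name : null;
}

// Dereferencing the result without a null check now draws a compiler warning:
string? name = LookupName(key);
Console.WriteLine(name.Length);        // warning: 'name' may be null here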

In the meantime, and for those of us working with older C# compilers, we still have the ability to invent our own generic Optional<> type that we can use instead. This is something I’ve been dragging into C# codebases for many years (whilst standing on my soapbox [3]) in an effort to tame at least one aspect of complexity. Using one of these would have changed the signature of the method in question to:

public Optional<string> LookupName(string key)

Now all the call sites would have failed to compile and the author would have been forced to address the effects of their change. (If there had been any tests you would have hoped they would have triggered the alarm too.)
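
For reference, a minimal sketch of such an Optional<> type (not my exact implementation, just enough to show the idea) might look like this:

public struct Optional<T>
{
  private readonly T value;
  private readonly bool hasValue;

  public Optional(T value)
  {
    this.value = value;
    this.hasValue = true;
  }

  // True when the wrapped value is present, i.e. the lookup succeeded.
  public bool HasValue
  {
    get { return hasValue; }
  }

  // Throws rather than silently handing back a null reference.
  public T Value
  {
    get
    {
      if (!hasValue)
        throw new InvalidOperationException("no value present");
      return value;
    }
  }
}

// LookupName then wraps its result instead of returning null:
public Optional<string> LookupName(string key)
{
  string name;
  return customers.TryGetValue(key, out name)
       ? new Optional<string>(name)
       : new Optional<string>(); // default-constructed means "empty"
}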

Fix the Design, Not the Compiler

Either of these two approaches allows you to “lean on the compiler” and leverage the power of a statically typed language. This is a useful feature to have but only if it’s put to good use and you know where the limitations are in the language.

While I would like to think that people listen to the compiler, I often don’t think they hear it [4]. Too often the compiler is treated as something to be placated, or negotiated with. For example, if the Optional<string> approach had been taken, the call sites would all have failed to compile. However this calling code:

var name = LookupName(key);

...could easily be “fixed” by simply doing this to silence the compiler:

var name = LookupName(key).Value;

For my own Optional<> type we’d just have switched from a possible NullReferenceException on lookup failure to an InvalidOperationException. Granted this is better as we have at least avoided the chance of the null reference silently making its way further down the path but it doesn’t feel like we’ve addressed the underlying problem (if indeed there has even been a change in the way we should treat lookup failures).

Embracing Change

While the Optional<> approach is perhaps more composable, the TryXxx pattern is more invasive, and that probably has value in itself. Changing the signature and breaking compilation is supposed to put a speed bump in your way so that you consider the effects of your potential actions. In this sense the more invasive the workaround the more you are challenged to solve the underlying tension with the design.

At least that’s the way I like to think about it but I’m afraid I’m probably just being naïve. The reality, I suspect, is that anyone who could make such a change as switching an exception for a null reference is more concerned with getting their change completed rather than stopping to ponder the wider effects of what any compiler might be trying to tell them.

 

[1] See Postel’s Law and consider how well that worked out for HTML.

[2] See “Out vs Ref For TryXxx Style Methods”.

[3] C# already has a “Nullable” type for optional values so I find it odd that C# developers find the equivalent type for reference-type values so peculiar. Yes it’s not integrated into the language but I find it’s usually a disconnect at the conceptual level, not a syntactic one.

[4] A passing nod to the conversation between Woody Harrelson and Wesley Snipes discussing Jimi Hendrix in White Men Can’t Jump.

Estimation, planning, teams and money, some data

Allan Kelly from Allan Kelly Associates

[Graph: how long teams took to get ready (estimate and plan)]

When I deliver Agile training for teams I run an exercise called “The Extended XP Game”. It is based on the old “XP Game” but over the years I’ve enhanced it and added to it. We have a lot of fun, people are laughing and they still talk about it years later. The game illustrates a lot of agile concepts: iteration, business value, velocity, learning by doing, specification by example, quality is free, risk, the role of probability and some more.

When I run the exercise I divide the trainees into several teams, usually three or four people to a team. I show them I have some tasks written on cards which they will do in a two minute iteration. They do two minutes of work, review, retrospect, then do another two minutes of work – and possibly repeat a third time.

The first thing is for teams to Get Ready: I hand out the tasks and ask them to estimate, in seconds, how long it will take to do each task: fold a paper airplane that will fly, inflate a balloon, deflate a balloon, roll a single six on a dice, roll a double six on two dice, find a two in a pack of cards and find all the twos in the pack of cards. Strictly speaking, this estimate is a prospective estimate: “how long will it take to do this in future?”

Once they have estimated how long each task will take someone is appointed product owner and they have to plan the tasks to be done (with the team).

What I do not tell the teams is that I’m timing them at this stage. I let the teams take as long as they like to get ready: estimate and plan. But I time how long the estimation takes and how long the following planning takes.

Once all the teams are “ready” I ask the teams: “how long did that take?”

At this point I am asking for a retrospective estimate: “how long did it take?” The teams have perfect estimation conditions: they have just done it, no time has elapsed and no events have intervened.

Typically the answers are 5 or 6 minutes, maybe less, maybe more. Occasionally someone gets the right number and they are then frequently dismissed by their colleagues.

Although I’ve been running this exercise for nearly 10 years, and have been timing teams for about half that time, I’ve only been recording the data for the last couple of years. Still, it comes from over 65 teams and is consistent.

The average total time to get ready to do 2 minutes of work is close to 13 minutes – the fastest team took just 5.75 minutes but the slowest took a whopping 21.25 minutes.

The average time spent estimating the tasks is 7 minutes. The fastest team took 2.75 minutes and the slowest 14 minutes.

The average time spent planning once all tasks are estimated is just short of 6 minutes. One team took a further 13.5 minutes to plan after estimating while another took just 16 seconds. While I assume the latter team had pretty much planned while estimating, it is also interesting to note that that team contained several people who had done the exercise a few years before.

(For statistics nuts the mean and median are pretty close together and I don’t think the mode makes much sense in this scenario.)

So what conclusions can we draw from this data?

1) Teams take longer to estimate than do

Everyone taking part in the exercise has been told – several times – that they are preparing to do a 2 minute iteration. Yet on average teams spend 12.75 minutes preparing – estimating and planning – to do 2 minutes of work!

Or to put it another way: teams typically spend six times longer to plan work than to do work.

The slowest team ever took over 10 times longer to plan than to do.

In the years I’ve been running this exercise no team has ever done a complete dry run. They sometimes do little exercises and time themselves but even teams which do this spend a lot of time planning.

This has parallels in real life too: many participants tell me their organizations spend a long time debating what should be done, planning, and only belatedly executing. One company I met had a project that had been in planning for five years.

[Graph: team size vs. time spent estimating and planning]

2) Larger teams take longer to estimate than small teams

My second graph shows there is a clear correlation between team size and the time it takes to estimate and plan. I think this is no surprise; one would expect this. In fact, this is another piece of evidence supporting Diseconomies of Scale: the bigger the team the longer it will take to get ready.

This is one reason why some people prefer to have an “expert” make the estimate – it saves the time of other people. However this itself is a problem for several reasons.

Anyone who has read my notes on estimation research (and the later more notes on estimation research) may remember that research shows that those with expert knowledge or in a position of authority underestimate by more than those who do the work. So having an expert estimate isn’t a cure.

But, those same notes include research that shows that people are better at estimating time for other people than they are at estimating time for themselves, so maybe this isn’t all bad.

However, this approach just isn’t fair, especially when someone is expected to work within an estimate. One might also argue that it is not an effective use of time because the first person – the estimator – has to understand the task in sufficient detail to estimate it, but rather than reuse this learning the task is then given to someone else who has to learn it all over again.

[Graph: planning time after estimates were complete (the planning delta)]

3) Post estimation planning is pretty constant

This graph shows the planning delta, that is: after the estimates are finished how long does it take teams to plan the work?

It turns out that the amount of time it takes to estimate the tasks has little bearing on how long the subsequent planning takes. So whether you estimate fast or slow, on average it will take six more minutes to plan the work.

Perhaps this isn’t that surprising.

(If I’ve told you about this data in person I might have said something different here. In preparing the data for this blog I found an error in my Excel graphs which I can only attribute to a bug in Excel’s scatter chart algorithm.)

4) Vierordt’s Law holds

People underestimate longer periods of time (typically anything over 10 minutes), and overestimate short periods of time (typically anything less than two minutes).

Not only do trainees consistently underestimate how long it has taken them to get ready – which is over 10 minutes – but teams which record how long it takes to actually do each task find that their estimates are much higher than the actual time it takes. Even when teams don’t time themselves observation shows that they do the work far faster than they thought they would.

[Graph: time spent planning vs. money made]

5) Less planning makes more money

One of my extensions to the original game is to introduce money: teams have to deliver value, measured in money. This graph shows teams which spend less time planning go on to make more money.

I can’t be as sure about this last finding as the earlier ones because I’ve not been recording this data for so long. To complicate matters, a lot happens between the initial planning and the final money making: I introduce some money and teams get to plan for subsequent iterations.

Still, there are lessons here.

The first lesson is simply this: more planning does not lead to more money.

That is pretty significant in its own right but there is still the question: why do teams which spend less time planning make more money?

I have two possible explanations.

I normally play three rounds of the game. When time is tight I sometimes stop the game after two rounds. In general teams usually score more money in each successive round. Therefore, teams who spend longer in planning are less likely to get to the third round so their score comes from the second round. If they had time to play a third round they would probably score higher than in round two.

This has a parallel in real life: if extra planning time delays the date a product enters the market it is likely to make less money. Delivering something smaller sooner is worth more.

This perfectly demonstrates that doing creates more learning than planning: teams learn more (and therefore increase their score) from spending 2 minutes doing than spending an extra 2 minutes planning.

The second possible explanation is that the more planning a team does the more difficult they might find it to rethink and change the way they are working.

The $1,600 shown was recorded by a Dutch team this year but the record is held by a team in Australia who scored over $2,000: to break into these high scores teams need to reinterpret the rules of the game.

One of the points of the game is to learn by doing. I suspect that teams who spend longer in planning find it harder to break away from their original interpretation of the rules. How can you think outside the box when you’ve spent a lot of time thinking about the box?

In one training session in Brisbane last year the teams weren’t making the breakthrough to the big money. Although I’d dropped hints of how to do this, nobody had made the connection, so I said: “You know, a team in Perth once scored over $2,000.” That caused one of the players to rethink his approach and score $1,141.

I’ve since repeated the quote and discovered that simply telling people that such high scores are possible causes them to discover how to score higher.

* * *

I’m sure there is more I could read into all this data and I will carry on collecting the data. Although now I have two problems…

First, having shared this data I might find people coming on my agile software training who change their behaviour because they have read this far.

Second: I need more teams to do this to gather data! If you would like to do this exercise – either as part of a full agile training course or as a stand alone exercise – please call (+44 20 3286 4292) or mail me, contact@allankelly.net, my rates are quite reasonable!

Want to receive these posts by e-mail? – join the newsletter today and receive a free eBook: Xanpan: Team Centric Agile Software Development

The post Estimation, planning, teams and money, some data appeared first on Allan Kelly Associates.

Product Ownership book – a work in progress

Allan Kelly from Allan Kelly Associates

[Image: Product Ownership book cover]

A quick update: most of my recent blogs about the product owner role, together with some new material, are now available in book form from LeanPub – https://leanpub.com/productownership.

I’m surprised to find I’ve written over 60 pages so far! Still, this is very much a work in progress; there are a few more chapters to add to part 1: The Product Owner role.

But it is part 2 which I’m itching to start writing: the tools of the trade.

For those who don’t know, the beauty of LeanPub is that you can buy my unfinished book now and you will receive updates – to your iPad, Kindle, PC, whatever – as they are produced.

That means three things to me.

Firstly I can receive your feedback – what do you like? What did I get wrong? What else should be in there?

Second, money is feedback: the more of you who buy the book the more motivated I am to write it – I like seeing sales, it tells me people want this book. And if you don’t buy… well, maybe I should pivot and abandon it.

Third, it gives me a little beer money.

The bad news is: you also get my dyslexic spelling and grammar.

The post Product Ownership book – a work in progress appeared first on Allan Kelly Associates.