Visual Lint 7.0.9.324 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 7.0. The following changes are included:

  • When a custom report folder is defined in the Options Dialog "Reports" page, generated reports will now be written into subfolders identifying the solution/workspace, analysis tool and analysed solution/workspace configuration rather than just the solution/workspace name. This allows analysis reports for the same project but using different analysis tools or configurations to co-exist without overwriting each other.

  • Fixed a bug in the persistence of the "Generate reports in..." report options in the Options Dialog "Reports" page.

  • Updated the PC-lint Plus message database to reflect changes in PC-lint Plus 1.3.5.

Download Visual Lint 7.0.9.324

Learning useful stuff from the Reliability chapter of my book

Derek Jones from The Shape of Code

What useful, practical things might professional software developers learn from my evidence-based software engineering book?

Once the book is officially released I need to have good answers to this question (saying: “Well, I decided to collect all the publicly available software engineering data and say something about it” is not going to motivate people to read the book).

This week I checked the reliability chapter; what useful things did I learn (combined with everything I learned during all the other weeks spent working on this chapter)?

A casual reader skimming the chapter would conclude that little was known about software reliability, and they would be right (I already knew this, but I learned that we know even less than I thought); meanwhile, many researchers continue to dig in unproductive holes.

A reader with some familiarity with reliability research would be surprised to see that some ‘major’ topics are not discussed.

The train wreck that is machine learning has been avoided (not forgetting that the data used is mostly worthless). Mutation testing gets mentioned because of some interesting data (the underlying problem is that mutation testing assumes coding mistakes are local to one line, but in practice coding mistakes often involve multiple lines). And the theory discussions don’t mention non-homogeneous Poisson processes as the basis for software fault models (because such processes are not capable of answering the questions being asked).

What did I learn? My highlights include:

  • Anne Chao’s work on population estimation. The takeaway from this work is that if people want to estimate the number of remaining fault experiences, based on previously experienced faults, then every occurrence of a fault (i.e., not just the first) needs to be counted,
  • Janet Dunham’s top-read work on software testing,
  • the variability in the numeric percentages that people assign to probability terms (e.g., almost all, likely, unlikely) is much greater than I would have thought,
  • the impact of the distribution of input values on fault experiences may be detectable,
  • really a lowlight, but there is a lot less publicly available data than I had expected (for the other chapters there was more data than I had expected).

The last decade has seen fuzzing grow to dominate the headlines around software reliability and testing, and provide data for people who write evidence-based books. I don’t have much of a feel for how widely used it is in industry, but it is a very useful tool for reliability researchers.

Readers might have a completely different learning experience from reading the reliability chapter. What useful things did you learn from the reliability chapter?

short – command line tool to truncate lines to fit in the terminal

Andy Balaam from Andy Balaam's Blog

Sometimes I run grep commands that search files with hugely long lines. If those lines match, they are printed out and spam my terminal with huge amounts of information that I probably don’t need.

I couldn’t find a tool that limits the line-length of its output, so I wrote a tiny one.

It’s called short.

You use it like this (my typical usage):

grep foo myfile.txt | short

Or specify the column width like this:

short -w 5 myfile.txt

It’s written in Rust. Feel free to add features, fix bugs and package it for your operating system/distribution!
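For the curious, the core logic of such a tool is tiny. Here is a minimal sketch in Rust of roughly what short does – illustrative only, not the actual source (it takes the width as a bare argument and truncates by character count, whereas the real tool detects the terminal width and has a -w flag):

    use std::io::{self, BufRead, Write};

    fn main() {
        // Illustrative: take the width as an optional first argument,
        // defaulting to 80 columns.
        let width: usize = std::env::args()
            .nth(1)
            .and_then(|s| s.parse().ok())
            .unwrap_or(80);

        let stdin = io::stdin();
        let stdout = io::stdout();
        let mut out = stdout.lock();

        // Read stdin line by line, truncating each line to `width` characters.
        // A real tool would also handle tabs and wide characters.
        for line in stdin.lock().lines() {
            let line = line.expect("failed to read line");
            let truncated: String = line.chars().take(width).collect();
            writeln!(out, "{}", truncated).expect("failed to write line");
        }
    }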

But our users don’t want to change

AllanAdmin from Allan Kelly Associates

A good question came into my mailbox:

“Much of the writing I’ve seen assumes that software can be shipped directly into the hands of customers to create value (hence the “smaller packages, more often” approach). My experience has been that especially with new launches or major releases, there needs to be a threshold of minimum functionality that needs to be in place.”

Check your phone. Is it set to auto-update apps? Is your desktop OS set to auto-update? Or do you manually choose when to update?

Look at the update notes on phone apps from the likes of Uber, Slack, SkyScanner, the BBC and others. They say little more than “we update our apps regularly.”

Today people are used to technology auto-changing on them. They may not like it, but do they like a big change any more?

My guess is that most people don’t even notice those updates. When you batch up software releases, users see lots of changes at once; when you release them as a regular stream of small updates, most go unnoticed.

Still, users will see some updates change things, and they will not like some of these. But how long do you want to hide these updates from your users?

The question that needs asking is: what is the cost of an update? The vast majority of updates are quick, easy, cheap and painless.

Of course people don’t like updates which introduce a new UI, a new payment model or which demand you uninstall an earlier app but when updates are easy and bring benefits – even benefits you don’t see – they happily accept them.

And remember, the alternative to 100 small updates is one big update where people are more likely to see changes.

If your updates are generally good why hold them back? And if your updates are going in the wrong direction shouldn’t you change something? If you run scared of giving your users changes then something is wrong.

Nor is it just apps. Most people (in Europe at least) use telco-supplied handsets, and when the telco calls up and says “Would you like a new handset at no additional cost?” people usually say yes. That is how telcos keep their customers.

The question continues,

“there needs to be coordination across the company (e.g. training people from marketing, sales, channel partners, customer/ internal support, and so on). There is also the human element – the capacity to absorb these changes. As a user of tech, I’m not sure I could work (well) with a product where features were changing, new ones being added frequently (weekly or even monthly), etc.”

If every software update introduced a big change then these would be problems. But most updates don’t. Most introduce teeny-tiny changes.

Of course sometimes things need to change. The companies which do this best invest time and energy in making changes painless. For example, Google often offers a “try our new beta version” option for months before an update, and for months afterwards provides a “use the old interface” option.

The best companies invest in user experience design too. This can go a long way to removing the need for training.

Just because a new feature is released doesn’t mean people have to use it. For starters, new changes can be released but disabled. Feature toggles are not only a way of managing source code branches; they also allow new features to be released silently and switched on when everyone is ready. This allows releases to be de-risked without the customer seeing anything.

And when they are switched on they can be switched on for a few users at a time. Feedback can be gathered and improvements made before the next release.

That can be co-ordinated with training: make the feature toggle user-switchable; everyone gets the new software, and as they complete the training they can choose to switch it on.
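To make the idea concrete, here is a minimal sketch of a per-user feature toggle – illustrative only, with hypothetical names (real systems usually read flags from configuration or a flag service rather than hard-coding them):

    use std::collections::HashSet;

    // Hypothetical toggle store: the new feature ships in the release,
    // but stays dark until it is switched on for a given user.
    struct FeatureToggles {
        new_checkout_users: HashSet<String>,
    }

    impl FeatureToggles {
        fn is_enabled(&self, user: &str) -> bool {
            self.new_checkout_users.contains(user)
        }
    }

    fn checkout(toggles: &FeatureToggles, user: &str) {
        if toggles.is_enabled(user) {
            println!("{}: new checkout flow", user); // released and switched on
        } else {
            println!("{}: old checkout flow", user); // released, still dark
        }
    }

    fn main() {
        let mut users = HashSet::new();
        users.insert("trained_user".to_string()); // switched on after training
        let toggles = FeatureToggles { new_checkout_users: users };

        checkout(&toggles, "trained_user"); // sees the new feature
        checkout(&toggles, "other_user");   // sees no change at all
    }

Shipping with the flag off is what lets the deployment itself be de-risked while users see no change.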

Now marketing… yes, marketeers do like the big bang release – “look at us, we have something shiny and new!”

You could leave lots of features switched off until your marketeers are ready to make a big bang. That also reduces the problem of marketeers needing to know what will be ready when, so that they know when to launch a campaign.

Or you could release updates without any fuss and market when you have the critical mass.

Or you could change your marketing approach: market a stream of constant improvements rather than occasional big changes.

Best of all market the capabilities of your thing without mentioning features: market what the app or system allows you to do.

For years I’ve been hearing “business people” bemoan developers who “talk technical”, but I see exactly the same thing with marketeers. Look at Sony televisions: what is the “picture processor X1”? And why should I care? I can’t remember when I last changed the contrast on my television, so the “Backlight master drive” (whatever that is) means nothing to me.

Or look at Samsung mobile phones: 5G, 5G, 5G – what do I care about 5G? What does 5G allow me to do that I can’t do with my current phone?

Drill down, look at the Samsung Galaxy lineup: CPU speed, CPU type, screen size, resolution, RAM, ROM – what do I care? How does any of that help me? – Stop throwing technical details at me!

Don’t market features, market solutions. Tell me what “job to be done” the product addresses; tell me how my life will be improved. Marketing a solution rather than features decouples marketing from the upgrade cycle.

So sure, people don’t like technology change – I’ll tell you a story in my next blog. But when technology change brings benefits are they still resistant?

Now, with modern technology, with agile and continuous delivery, technology can change faster than business functions like training and marketing. We can either choose to slow technology down or we can change those functions to work differently – not necessarily faster but differently in a way that is compatible with agile technology change.

These kinds of tensions are common in businesses which move across to agile-style working. A lot of companies think agile applies to the “software engine room” and the rest of the business can carry on as before. Unfortunately they have released the Agile Virus – agile working has introduced a set of tensions into the organization which must either be embraced or killed.

Once again technology is disruptive.

Perhaps, if the marketing or training department is insisting on big-bang releases, it is they who should be changing. Maybe, just maybe, they need to rethink their approach; maybe they could learn a thing or two about agile and work differently with technology teams.

Regular releases reduce risk, increase value

AllanAdmin from Allan Kelly Associates

“If you’re not embarrassed by the product when you launch, you’ve launched too late.” Reid Hoffman, founder of LinkedIn

Years ago I worked for a software company supplying Vodafone, Verizon, Nokia, etc. The last thing those companies wanted was to update the software on their engineers’ PCs every month, let alone every week!

I was remembering this episode when I was drafting what will be my next post (“But our users don’t want to change”) and thought it was worth saying something about how regular releases change the risk-reward equation.

When you only release occasionally there is a big incentive to “get it right” – to do everything that might be needed and to remove every defect whether you think those changes are needed or not. When you release occasionally second chances don’t happen for weeks or months. So you err on the side of caution and that caution costs.

Regular releases change that equation. Now second chances come around often, and additions and fixes are easy. Now you can err on the side of less, and that allows you to save time and money.

The ability to deliver regularly – every two weeks as a baseline, every day for high-performing teams – is more important than the actual deliveries. Releasable is more important than released. Whether to actually release or not is ultimately a question for business representatives to decide.

But being releasable on a very regular basis is an indicator of the team’s technical ability and the innate quality of the thing being built. Teams which are always asking for “more time” may well have a low-quality product (lots of bugs to fix) or something to hide.

The fact that a team can, and hopefully does, release (to live) massively reduces the risk involved. When software is only released at the end – and usually only tested before that end – the risk is tail-loaded. Having releasable – and especially released – software reduces risk: the risk is spread across the work.

Actually releasing early further reduces risk because every step in the process is exercised. There are no hidden deployment problems.

That offsets sunk costs and combats commitment escalation. Because business stakeholders can say “game over” at any time and walk away with a working product, they are no longer held captive by the fear of sunk costs, suppliers and career-threatening failures.

It is also a nice side effect that releasing new functionality early – or just fixing bugs – increases the return on investment because benefits are delivered earlier and therefore start earning a return sooner.

Just because new functionality is completed, and even released, early does not mean users need to see it. Feature toggles allow features and changes to be hidden from users – or only enabled for specified users. Releasing changed software with no apparent change may look pointless, but it actually reduces risk because the changes are out there.

That also means testing is simplified. Rather than running tests against software with many changes, tests are run against software with few changes, which makes testing more efficient even if users don’t see the difference. And it removes the “we can’t roll back one fix” problem that arises when one of 10 changes doesn’t pass.

Back to those Vodafone engineers who didn’t want their laptops updated: that was then, in the days of CD installs. Today the cloud changes things; there is only one install to do, so an update isn’t such an inconvenience. They could have the updates with disruptive changes hidden, while still receiving non-disruptive changes, e.g. bug fixes.

In a few cases regular deliveries may not be the right answer. The key thing though is to change the default answer from “we only deliver occasionally (or at the end)” to “we deliver regularly (unless otherwise requested).”

Impact of function size on number of reported faults

Derek Jones from The Shape of Code

Are longer functions more likely to contain more coding mistakes than shorter functions?

Well, yes. Longer functions contain more code, and the more code developers write the more mistakes they are likely to make.

But wait, the evidence shows that most reported faults occur in short functions.

This is true, at least in Java. It is also true that most of a Java program’s code appears in short methods (in C 50% of the code is contained in functions containing 114 or fewer lines, while in Java 50% of code is contained in methods containing 4 or fewer lines). It is to be expected that most reported faults appear in short functions. The plot below shows, left: the percentage of code contained in functions/methods containing a given number of lines, and right: the cumulative percentage of lines contained in functions/methods containing less than a given number of lines (code+data):

Does percentage of program source really explain all those reported faults in short methods/functions? Or are shorter functions more likely to contain more coding mistakes per line of code than longer functions?

Reported faults per line of code is often referred to as defect density.

If defect density were independent of function length, the plot of reported faults against function length (in lines of code) would be horizontal; the red line below. If every function contained the same number of reported faults, the plotted line would have the form of the blue line below.

Two things need to occur for a fault to be experienced. A mistake has to appear in the code, and the code has to be executed with the ‘right’ input values.

Code that is never executed will never result in any fault reports.

In a function containing 100 lines of executable source code, if, say, 30 lines are rarely executed, those 30 lines will not contribute as much to the final total of reported faults as the other 70 lines.

How does the average percentage of executed LOC, in a function, vary with its length? I have been rummaging around looking for data to help answer this question, but so far without any luck (the llvm code coverage report is over all tests, rather than per test case). Pointers to such data very welcome.

Statement execution is controlled by if-statements, and around 17% of C source statements are if-statements. For functions containing between 1 and 10 executable statements, the percentage that don’t contain an if-statement is expected to be, respectively: 83, 69, 57, 47, 39, 33, 27, 23, 19, 16. Statements contained in shorter functions are more likely to be executed, providing more opportunities for any mistakes they contain to be triggered, generating a fault experience.
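Those percentages are just 100 × (1 − 0.17)^n, rounded to the nearest whole number. A quick check (the 17% if-statement figure is from the text above; the code is merely a verification of the arithmetic):

    fn main() {
        // Probability that a C source statement is an if-statement
        // (the 17% figure quoted in the text).
        let p_if = 0.17_f64;
        // A function with n executable statements is expected to contain
        // no if-statement with probability (1 - p_if)^n, treating
        // statements as independent.
        for n in 1..=10 {
            let pct = 100.0 * (1.0 - p_if).powi(n);
            println!("{:2} statements: {:.0}% contain no if-statement", n, pct);
        }
    }

Running it reproduces the sequence 83, 69, 57, … quoted above.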

Longer functions contain more dependencies between the statements within their body than shorter functions (I don’t have any data showing how much more). Dependencies create opportunities for making mistakes (there is data showing that dependencies between files and classes are a source of mistakes).

The previous analysis makes a large assumption: that the mistake generating a fault experience is contained within a single function. This is true for 70% of reported faults (in AspectJ).

What is the distribution of reported faults against function/method size? I don’t have this data (pointers to such data very welcome).

The plot below shows number of reported faults in C++ classes (not methods) containing a given number of lines (from a paper by Koru, Eman and Mathew; code+data):

Number of reported faults in C++ classes (not methods) containing a given number of lines.

It’s tempting to think that those three curved lines are each classes containing the same number of methods.

What is the conclusion? There is one good reason why shorter functions should have more reported faults, and another good-ish reason why longer functions should have more reported faults. Perhaps length is not important. We need more data before an answer is possible.

If At First You Don’t Succeed – a.k.

a.k. from thus spake a.k.

Last time we took a first look at Bernoulli processes, which are formed from a sequence of independent experiments, known as Bernoulli trials, each of which is governed by the Bernoulli distribution with a probability p of success. Since the outcome of one trial has no effect upon the next, such processes are memoryless, meaning that the number of trials that we need to perform before getting a success is independent of how many we have already performed whilst waiting for one.
We have already seen that if waiting times for memoryless events with fixed average arrival rates are continuous then they must be exponentially distributed, and in this post we shall be looking at the discrete analogue.
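For readers who want the formula ahead of the full derivation (my notation, a standard result rather than anything specific to this post): if each trial succeeds with probability p, the number of trials N up to and including the first success follows the geometric distribution

    \Pr(N = n) = (1 - p)^{n-1} \, p, \qquad n = 1, 2, 3, \dots

and its memorylessness shows up as

    \Pr(N > m + n \mid N > m) = (1 - p)^n = \Pr(N > n)

which is the discrete counterpart of the exponential distribution’s defining property.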