Complex software makes economic sense

Derek Jones from The Shape of Code

Economic incentives make complexity the common case for software systems.

When adding to or maintaining existing software, the quickest/cheapest approach is often to focus on the features/functionality being added, ignoring the existing code as much as possible. The new code may have some impact on the behavior of the existing code, and as more features/functionality are added it becomes harder and harder to predict that impact; in particular, whether the existing behavior remains unchanged.

Software is said to have an attribute known as complexity; what is complexity? Many definitions have been proposed, and it’s not unusual for people to use multiple definitions in a discussion. The widely used measures of software complexity all involve counting various attributes of the source code contained within individual functions/methods (e.g., McCabe cyclomatic complexity and Halstead); they are all highly correlated with lines of code. For the purpose of this post, the technical details of a definition are glossed over.
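To illustrate the counting flavour of these measures, here is a minimal Python sketch of a crude cyclomatic-complexity proxy; the set of branching node types and the helper name are illustrative assumptions, not McCabe's exact graph-based definition:

    import ast
    import inspect

    def approx_cyclomatic_complexity(func):
        """Crude proxy: 1 plus the number of branching constructs in the function."""
        tree = ast.parse(inspect.getsource(func))
        # Assumed set of decision points; McCabe's definition is control-flow based.
        branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)
        return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

    def example(x):
        if x > 0:
            for i in range(x):
                if i % 2 == 0:
                    x += i
        return x

    print(approx_cyclomatic_complexity(example))  # 4: three branch points plus one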

Complexity is often given as the reason that software is difficult to understand; difficult in the sense that lots of effort is required to figure out what is going on. Other causes of complexity, such as the domain problem being solved, or the design of the system, usually go unmentioned.

Rarely mentioned is the fact that complexity, as a cause of requiring more effort to understand, has economic benefits, e.g., the effort needed to work effectively in a codebase is a barrier to entry that allows those already familiar with the code to charge higher prices, or increases the demand for training courses.

One technique for reducing the complexity of a system is to redesign/rework its implementation at the system or major-component level; this is known as refactoring in the software world.

What benefit is expected from investing in refactoring? The expectation is that reducing the complexity of a system will reduce the subsequent costs incurred when adding new features/functionality.

What conditions need to be met to make it worthwhile making an investment, I, to reduce the complexity, C, of a software system?

Let’s assume that complexity increases the cost of adding a feature by some multiple (greater than one). The total cost of adding n features is:

K = C_1 F_1 + C_2 F_2 + \dots + C_n F_n

where: C_i is the system complexity when feature i is added, and F_i is the cost of adding this feature if no complexity is present.

C_2 = C_B + C_1, \quad C_3 = C_B + C_1 + C_2, \quad \dots \quad C_n = C_B + \sum_{i=1}^{n-1} C_i

where: C_B is the base complexity before adding any new features.

Let’s assume that an investment, I, is made to reduce the complexity from C_B + C_N (with C_N = \sum_{i=1}^{n} C_i) to C_B + C_N - C_R, where C_R is the reduction in the complexity achieved. The minimum condition for this investment to be worthwhile is that:

I + K_{r2} < K_{r1}, or equivalently I < K_{r1} - K_{r2}

where: K_{r2} is the total cost of adding new features to the source code after the investment, and K_{r1} is the total cost of adding the same new features to the source code as it existed immediately prior to the investment.

Resetting the feature count back to 1, we have:

K_{r1} = (C_B + C_N + C_1) F_1 + (C_B + C_N + C_2) F_2 + \dots + (C_B + C_N + C_m) F_m
and
K_{r2} = (C_B + C_N - C_R + C_1) F_1 + (C_B + C_N - C_R + C_2) F_2 + \dots + (C_B + C_N - C_R + C_m) F_m

and the above condition becomes:

I < ((C_B + C_N + C_1) - (C_B + C_N - C_R + C_1)) F_1 + \dots + ((C_B + C_N + C_m) - (C_B + C_N - C_R + C_m)) F_m

I < C_R F_1 + \dots + C_R F_m

I < C_R \sum_{i=1}^{m} F_i

The decision on whether to invest in refactoring boils down to estimating the reduction in complexity likely to be achieved (as measured by effort), and the expected cost of future additions to the system.
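As a sanity check on the final inequality, here is a minimal Python sketch; the function name and all the numbers are illustrative assumptions, not data:

    def refactoring_pays_off(investment, complexity_reduction, future_feature_costs):
        """Break-even test: I < C_R * sum(F_i)."""
        return investment < complexity_reduction * sum(future_feature_costs)

    # F_i: complexity-free cost of each feature planned after the refactoring.
    future_costs = [5, 8, 3, 6]
    print(refactoring_pays_off(investment=10, complexity_reduction=0.6,
                               future_feature_costs=future_costs))  # True: 10 < 0.6 * 22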

Software systems eventually stop being used. If it looks like the software will continue to be used for years to come (software that is actively used will have users who want new features), it may be cost-effective to refactor the code to return it to a less complex state; rinse and repeat for as long as it appears cost-effective.

Investing in software that is unlikely to be modified again is a waste of money (unless the code is intended to be admired in a book or course notes).

Chasm City

Paul Grenyer from Paul Grenyer

Chasm City

Alastair Reynolds
ISBN-13: 978-0575083158

Following the announcement of the release of Inhibitor Phase and then Elysium Fire I’ve been rereading some of the previous Revelation Space novels to pick up the thread. First time around I found Chasm City a dark story and it was no different the second time, but I got so much more out of it. I also remember losing the thread towards the end the first time, but not this time!

As with most of the series, the thread of the main story is inconsequential to the main Revelation Space arc. It’s the other aspects of the story, the ones which tie up with other Revelation Space events, which make this such a fantastic book. By the time I read the last page I knew that Sky's Edge was named after the edge Sky Haussmann had over the other ships in the flotilla which settled the planet. I knew how the war between the ships of the flotilla had started and what they were fighting about. I knew how the Melding Plague had got to Chasm City and how it was discovered and spread. I knew that Sky had met Khouri, who is an important character in the main trilogy. And more, much more! I wish I knew how the Melding Plague came to be though.

I read the second half of the book in about two weeks. I just couldn’t put it down!

Pathfinder – baron m.

baron m. from thus spake a.k.

Welcome Sir R-----! Pray shed your overcoat and come dry yourself by the fire. I am told that these spring showers are of inestimable benefit to farming folk, but I fail to grasp why they can’t show the good manners to desist until noblemen have made their way indoors.

Will you join me in a warming measure and perchance a small wager?

I had no doubt sir!

I’ve a mind for a game oft played by the tribesmen of Borneo upon the cobbled floors of their homes as a means of practice for their legendary talent in forging paths through the dense forests in which they dwell.

The Excess Strategic WIP problem

Allan Kelly from Allan Kelly

Try pouring a bottle of milk into a glass with milk already in it. You have a choice: stop or tidy up the mess afterwards. That is my work-in-progress (WIP) analogy: if you try to do too much – no matter how much you want to do it – you will end up with a mess.

Agile folk are well versed in the problems created by too much WIP and how to deal with it – check out the Stockless Production video if you want to see it demonstrated. In the last six months I’ve been seeing a particular variant on this problem which I’ve come to label the Excess Strategic WIP problem.

In the latest report, a manager told me how the team completed a great quarter with 3 priorities – set via OKRs. The senior management team were so impressed they asked for 19 priorities in the next period.

Right now I don’t have a “paint by numbers” solution; fixing this problem is more involved. I’m starting to understand it and I’ll make some suggestions later. Ultimately this is a failure of leadership to say “No”. That failure is itself rooted in a failure of leadership to appreciate what is happening on the ground and what those doing the work are up against: the number of strategic initiatives or the amount of business as usual.

In a team it is easy to spot excess WIP using a visual system like a whiteboard or card system. When you track work like this you see an awful lot of work sitting in the “in progress” column. Typically there is more work than people on the team and the work isn’t moving. Some of the work may be actively marked as blocked but more likely most of it is nominally being worked on.

It can’t all be worked on at the same time because, despite having two eyes, two hands, and two sides of a brain, humans can only really do one thing at a time – even new parents. I label this work “WHIP” – work hopefully in progress. While it is, in theory, being worked on, most of it is just sitting there waiting for a multi-tasking (i.e. time slicing) person to come back to it.

The good news is that when you can see the WHIP you are halfway to solving the problem: there are well known solutions. Accept less work, impose work-in-progress limits, sequence work by adding queues to the board, educate people to work on one thing until it is done, etc. Once you can see the work you have a feedback mechanism; you can take action and, thanks to the feedback mechanism, watch it reduce.

But Strategic WIP is more difficult. Strategic WIP is the stuff the organization decides is really important, the stuff the most senior leadership decides should be happening. The Excess Strategic WIP Problem occurs when those strategic priorities are greater than the organisation’s ability to deliver on them.

While some of this work may be transactional (“Build a Mega Widget”) much of it is transformational (“Adopt Agile working”, “Increase diversity”, “Tighten security”). Such things involve changing the way other work is done: changing the processes, changing the criteria, increasing awareness. The feedback cycle is long but to get started people need to devote time: attend training, arrange kick-off meetings, discuss approaches with consultants/coaches, etc.

In one case I saw this year, the organization has, for years, been asked to do more than it is resourced to do. While a few months of such a mismatch might have been manageable, the cumulative effect has been to create organizational debt and demoralised staff.

The second case I saw isn’t a result of under-resourcing; if anything that organization has more money, people, and perhaps equipment than they know what to do with. But because their management model resembles a sponge it is impossible to know when the organization has taken on too much. While some pockets were overworked I’m sure other pockets were idle.

In both these cases “lean” was a dirty word. Both organizations had been subject to lean programmes that had stripped out “waste” but, from what I could see, removing that waste had created both organizational debt and demoralised staff. Having removed “spare” staff, those left were juggling. Staff didn’t have time for “agile” and their diaries had no space for daily essentials, let alone another change programme.

The third case came to light when discussing OKRs. The company had a successful quarter where they focused on a few objectives; that success seems to have bred the next failure when the organization requested many more objectives.

Actually, this case brought it home: taking on too many, or being given too many, OKRs. I’ve heard of teams tackling too many OKRs so many times in the last year. (In fact, I should name this the “Excess OKRs Problem”. It occurs whenever you have more OKRs than you can count on one hand. Don’t tell me your team is big enough to do so; check out Focus is not divisible so limit your OKRs.)

In a way, the Excess OKRs Problem is better than the Excess Strategic WIP Problem because you can see it. One can say “OK company, you have asked us to deliver 15 OKRs here, we need to sit down and talk about this.” Like visualising your work in progress, excess OKRs are something you can identify and address; they are the reason to talk.

(Note: unlike User Stories which should always be small, OKRs should never be small. OKRs should be big, meaningful and preferably strategic. No team can take on 16 meaningful OKRs, even 5 is too many.)

One of the problems with excess Strategic WIP is that it can be difficult to see: there are different teams involved, in different places. Conflicting priorities are hidden. People on the ground may see problems but the senior people – the people creating the WIP – are too far removed. Those people may not want to hear people saying “You are asking too much”. They may have too much riding on getting multiple work streams done. They may be deaf to the cry of pain when people say “too much.” They may have too big an ego to accept that what they are asking for is a problem. And it may be politically unacceptable.

“Political” is an important word here: several of the cases I’ve heard of, and some of those examples above, are Government agencies.

Because excess strategic WIP is difficult to see, it is difficult to build a feedback loop and difficult to take action. How do you know when the problem is too much WHIP and when people are “crying wolf”? That question itself implies a problem of trust, and maybe a belief that “everyone is lazy, we need to push harder.”

By its nature “strategy” is big, which means that the feedback cycles are long and the problems of excess strategic WIP take time to play out. What is actually a WIP problem looks like just another failed strategy.

While I would like to think OKRs can help with this situation – because they force teams and organizations to take stock of what they are working on – they may be making things worse because OKRs include an ambition agenda. Teams are encouraged to “shoot for the moon” and build “10x solutions”. There is good logic here: if one aims for a “10x solution” (i.e. a solution 10 times better than the status quo) and falls short the “failure” may still be “better” (e.g. “5x”) than if one had aimed for a “2x solution” and succeeded.

Ambition with OKRs should not be about doing more OKRs; rather, the ambition lives within each OKR, a challenge that makes you approach the work differently. One can draw a line here between having one OKR which aims for “10x” and having 10 OKRs, but I suspect the subtlety will be lost on someone asking for “more than you think you can do.”

So what is the solution?

I’m not sure there is a silver bullet but I would want to … make the problem visible, perhaps with a portfolio-level kanban board. I would want to build a feedback loop so I could measure change. I would want to show the strategic people the visualisation.

Management education has a part to play too. I can believe many senior managers would benefit from understanding what WIP is, the problems excess WIP can cause, the way excess WIP plays out on a day-to-day basis and how it affects people’s working lives. And perhaps most of all, address trust and the belief that “we just need to push harder.”

That might also mean some of the Lean waste lessons and OKR ambition lessons need to be revisited.


A new career in software development: advice for non-youngsters

Derek Jones from The Shape of Code

Lately I have been encountering non-young people looking to switch careers into software development. My suggestions have centered around the culture of ageism and how they can take advantage of fashions in software ecosystems to improve their job prospects.

I start by telling them the good news: the demand for software developers outstrips supply, followed by the bad news that software development culture is ageist.

One consequence of the preponderance of the young is that people are heavily influenced by fads and fashions, which come and go over less than a decade.

The perception of technology progresses through the stages of fashionable, established and legacy (management-speak for unfashionable).

Non-youngsters can leverage fashion’s influence on job applicants by focusing on what is unfashionable: the more unfashionable, the less likely that youngsters will apply, e.g., maintaining Cobol and Fortran code (both seriously unfashionable).

The benefits of applying to work with unfashionable technology include more than a smaller job applicant pool:

  • new technology (fashion is about the new) often experiences a period of rapid change, and keeping up with change requires time and effort. Does somebody with a family, or outside interests, really want to spend time keeping up with constant change at work? I suspect not,
  • systems depending on unfashionable technology have been around long enough to prove their worth, the sunk cost has been paid, and they will continue to be used until something a lot more cost-effective turns up, i.e., there is more job security compared to systems based on fashionable technology that have yet to prove their worth.

There is lots of unfashionable software technology out there. Software can be considered unfashionable simply because of the language in which it is written; some of the more well known of such languages include Fortran, Cobol, Pascal, and Basic (in a multitude of forms), with less well known languages including MUMPS and almost any mainframe-related language.

Unless you want to be competing for a job with hordes of keen/cheaper youngsters, don’t touch Rust, Go, or anything being touted as the latest language.

Databases also have a fashion status. The unfashionable include: dBase, Clarion, and a whole host of 4GL systems.

Be careful with any database that is NoSQL related; it may be fashionable, or an established product being marketed using the latest buzzwords.

Testing and QA have always been very unsexy areas to work in. These areas provide the opportunity for the mature applicants to shine by highlighting their stability and reliability; what company would want to entrust some young kid with deciding whether the software is ready to be released to paying customers?

More suggestions for non-young people looking to get into software development welcome.

Return of the sprint goal? (Infographic)

Allan Kelly from Allan Kelly

Most of the sprint goals I’ve ever seen are rubbish. Pretty much “do the random collection of stuff we’ve decided already.” Such goals are meaningless – save time by skipping them.

If a team adopts OKRs then I really hope they move towards more goal based working, in which case the sprint goal starts to have meaning again – although maybe the OKR replaces the sprint goal?

Either way, I see an opportunity to move away from backlog driven development (BLDD) and towards a more purposeful style of working. So it was interesting a few weeks ago when Gareth Davies from Parabol (online meeting exercises and such) sent me over this infographic and a link to a blog on Sprint goals. Food for thought.


How I write books (my new book)

Allan Kelly from Allan Kelly

Regular readers will know I write books – quite a few by now, it gets embarrassing.

Being an author is a great conversation starter, when people hear you’ve written a book or two they want to know more – everyone seems to have a dream of writing their own book. It also means that people seek me out to ask my advice about a book they are writing, or thinking about writing.

So, I’ve started to write down all the advice I give to people in a new book – How I write books.

I’m following my usual pattern so you can buy early versions on LeanPub now – I released it last week and it immediately sold a few copies. As usual at this stage everything is in a state of flux; spelling, punctuation, grammar and all that jazz will be fixed later. Of course, anyone buying now will get free updates as they become available all the way to the final version.

If you do buy, then please let me know what you think.


Evaluating estimation performance

Derek Jones from The Shape of Code

What is the best way to evaluate the accuracy of an estimation technique, given that the actual values are known?

Estimates are often given as point values, and accuracy scoring functions (for a sequence of estimates) have the form S = \frac{1}{n} \sum_{i=1}^{n} S(E_i, A_i), where n is the number of estimated values, E_i the estimates, and A_i the actual values; smaller S is better.

Commonly used scoring functions include:

  • S(E, A) = (E - A)^2, known as squared error (SE)
  • S(E, A) = |E - A|, known as absolute error (AE)
  • S(E, A) = |E - A| / A, known as absolute percentage error (APE)
  • S(E, A) = |E - A| / E, known as relative error (RE)

APE and RE are special cases of: S(E, A) = |1 - (A/E)^{\beta}|, with \beta = -1 and \beta = 1 respectively.
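For concreteness, here is a minimal Python sketch of these scoring functions and of the averaging that produces S; the function names are my own shorthand, not from the original analysis:

    import numpy as np

    def se(e, a):  return (e - a) ** 2        # squared error
    def ae(e, a):  return abs(e - a)          # absolute error
    def ape(e, a): return abs(e - a) / a      # absolute percentage error
    def re(e, a):  return abs(e - a) / e      # relative error

    def general(e, a, beta):
        """General form: APE when beta == -1, RE when beta == 1."""
        return abs(1 - (a / e) ** beta)

    def mean_score(score, estimates, actuals):
        """S = (1/n) * sum of per-estimate scores; smaller is better."""
        return float(np.mean([score(e, a) for e, a in zip(estimates, actuals)]))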

Let’s compare three techniques for estimating the time needed to implement some tasks, using these four functions.

Assume that the mean time taken to implement previous project tasks is known, E_m. When asked to implement a new task, an optimist might estimate 20% lower than the mean, E_o=E_m*0.8, while a pessimist might estimate 20% higher than the mean, E_p=E_m*1.2. Data shows that the distribution of the number of tasks taking a given amount of time to implement is skewed, looking something like one of the lines in the plot below (code):

[Figure: two example distributions of the number of tasks taking a given amount of time to implement.]

We can simulate task implementation time by randomly drawing values from a distribution having this shape, e.g., zero-truncated Negative binomial or zero-truncated Weibull. The values of E_o and E_p are calculated from the mean, E_m, of the distribution used (see code for details). Below is each estimator’s score for each of the scoring functions (the best performing estimator for each scoring function in bold; 10,000 values were used to reduce small sample effects):

    SE   AE   APE   RE
E_o 2.73 1.29 0.51 0.56
E_m 2.31 1.23 0.39 0.68
E_p 2.70 1.37 0.36 0.86
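Below is a minimal Python sketch of this simulation; the distribution parameters and sample size are assumptions, so the numbers will not exactly reproduce the table above (the original analysis is in the linked R code):

    import numpy as np

    rng = np.random.default_rng(42)

    # Zero-truncated Negative binomial sample of actual implementation times
    # (the n/p parameters are assumed, purely for illustration).
    sample = rng.negative_binomial(n=2, p=0.3, size=50_000).astype(float)
    actuals = sample[sample > 0][:10_000]

    e_m = actuals.mean()                          # mean-based estimate
    estimators = {"E_o": 0.8 * e_m, "E_m": e_m, "E_p": 1.2 * e_m}
    scores = {"SE":  lambda e, a: (e - a) ** 2,
              "AE":  lambda e, a: np.abs(e - a),
              "APE": lambda e, a: np.abs(e - a) / a,
              "RE":  lambda e, a: np.abs(e - a) / e}

    for name, est in estimators.items():
        print(name, {s: round(float(f(est, actuals).mean()), 2)
                     for s, f in scores.items()})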

Surprisingly, the identity of the best performing estimator (i.e., optimist, mean, or pessimist) depends on the scoring function used. What is going on?

The analysis of scoring functions is very new. A 2010 paper by Gneiting showed that it does not make sense to select the scoring function after the estimates have been made (he uses the term forecasts). The scoring function needs to be known in advance, to allow an estimator to tune their responses to minimise the value that will be calculated to evaluate performance.

The mathematics involves Bregman functions (new to me), which provide a measure of distance between two points, where the points are interpreted as probability distributions.

Which, if any, of these scoring functions should be used to evaluate the accuracy of software estimates?

In software estimation, perhaps the two most commonly used scoring functions are APE and RE. If management selects one or the other as the scoring function to rate developer estimation performance, what estimation technique should employees use to deliver the best performance?

Assuming that information is available on the actual time taken to implement previous project tasks, we can work out the distribution of actual times. Assuming this distribution does not change, we can calculate APE and RE for various estimation techniques, picking the technique that produces the lowest score.

Let’s assume that the distribution of actual times is zero-truncated Negative binomial in one project and zero-truncated Weibull in another (purely for convenience of analysis, reality is likely to be more complicated). Management has chosen either APE or RE as the scoring function, and it is now up to team members to decide the estimation technique they are going to use, with the aim of optimising their estimation performance evaluation.

A developer seeking to minimise the effort invested in estimating could specify the same value for every estimate. Knowing the scoring function (top row in the table below) and the distribution of actual implementation times (first column), the minimum-effort developer would always give an estimate equal to the known mean actual time multiplied by the value listed:

                   APE   RE
Negative binomial  1.4   0.5
Weibull            1.2   0.6

For instance, if management specifies APE and previous task actuals have a Weibull distribution, then always estimate 1.2*E_m.
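A sketch of how such multipliers can be found, by grid-searching the constant multiple of E_m that minimises the chosen score over simulated actuals; again, the distribution parameters are assumptions, so the values found will differ from the table:

    import numpy as np

    rng = np.random.default_rng(1)
    sample = rng.negative_binomial(n=2, p=0.3, size=50_000).astype(float)
    actuals = sample[sample > 0]                  # zero-truncated sample
    e_m = actuals.mean()

    def ape(e, a): return np.abs(e - a) / a
    def re(e, a):  return np.abs(e - a) / e

    multipliers = np.linspace(0.1, 3.0, 291)      # candidate multiples of E_m
    for name, score in (("APE", ape), ("RE", re)):
        means = [score(m * e_m, actuals).mean() for m in multipliers]
        best = multipliers[int(np.argmin(means))]
        print(f"{name}: best constant estimate is about {best:.2f} * E_m")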

What mean multiplier should Esta Pert, an expert estimator, aim for? Esta’s estimates can be modelled by the equation Act*U(0.5, 2.0), i.e., the actual implementation time multiplied by a random value uniformly distributed between 0.5 and 2.0; in other words, Esta is an unbiased estimator. Esta’s table of multipliers is:

                   APE   RE
Negative binomial  1.0   0.7
Weibull            1.0   0.7

A company wanting to win contracts by underbidding the competition could evaluate Esta’s performance using the RE scoring function (to motivate her to estimate low), or they could use APE and multiply her answers by some fraction.

In many cases, developers are biased estimators, i.e., individuals consistently either under or over estimate. How does an implicit bias (i.e., something a person does unconsciously) change the multiplier they should consciously aim for (having analysed their own performance to learn their personal percentage bias)?

The following table shows the impact of particular under and over estimate factors on multipliers:

                 0.8 underestimate bias   1.2 overestimate bias
Score function          APE   RE            APE   RE
Negative binomial       1.3   0.9           0.8   0.6
Weibull                 1.3   0.9           0.8   0.6

Let’s say that one-third of those on a team underestimate, one-third overestimate, and the rest show no bias. What scoring function should a company use to motivate the best overall team performance?

The following table shows that neither of the scoring functions motivates team members to aim for the actual value when the distribution is Negative binomial:

                    APE   RE
Negative binomial   1.1   0.7
Weibull             1.0   0.7

One solution is to create a bespoke scoring function for this case. Both APE and RE are special cases of a more general scoring function (see top). Setting beta=-0.7 in this general form creates a scoring function that produces a multiplication factor of 1 for the Negative binomial case.
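A simplified sketch of finding such a bespoke value: search for the β whose best constant multiplier is closest to 1. This ignores the mixture of biased estimators described above, and the distribution parameters are again assumptions:

    import numpy as np

    rng = np.random.default_rng(7)
    sample = rng.negative_binomial(n=2, p=0.3, size=20_000).astype(float)
    actuals = sample[sample > 0]                  # zero-truncated sample
    e_m = actuals.mean()

    def general_score(e, a, beta):
        """General form: |1 - (A/E)^beta|; APE at beta = -1, RE at beta = 1."""
        return np.abs(1 - (a / e) ** beta)

    def best_multiplier(beta, candidates=np.linspace(0.3, 2.0, 171)):
        means = [general_score(m * e_m, actuals, beta).mean() for m in candidates]
        return candidates[int(np.argmin(means))]

    betas = np.linspace(-1.0, 1.0, 41)
    honest = min(betas, key=lambda b: abs(best_multiplier(b) - 1.0))
    print(f"beta of about {honest:.2f} makes estimating the mean the best strategy")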

A Review: Incineration Fest 2022 – Metal is back!

Paul Grenyer from Paul Grenyer

Overall I really enjoyed Incineration Fest and would go again if the line up is right for me. What was really great was seeing metallers back at a gig with no restrictions and doing what we do best!

Winterfylleth

I completely fell in love with Winterfylleth when they played Bloodstock on the mainstage and even more so when they released the set as a live album. They are incredible and totally deserved to be opening proceedings at the Roundhouse for Incineration Fest. Actually, they deserved to be much higher up the bill. They’re a solid outfit, played what I wanted to hear and ended, as I always think of them ending from the live album, with Chris saying this is the last song “as time is short and our songs are long!” I need to see them do a headline set in a venue with a great PA soon.

Tsjuder

Tsjuder was the wildcard for me. I didn’t really know them and had heard only a few things on Spotify before, although what I heard was really good. I had no idea I was going to be blown away. They sounded incredible from the first note, which was even more impressive given that they are only a three piece and the PA in the Roundhouse wasn’t turning out to be great for definition.

Bloodbath

Bloodbath was really the reason I was at Incineration Fest. I’d missed them at Bloodstock ten years before as one of my sons was being born and I hadn’t had a chance to see them until now. Of course now Nick Holmes (Paradise Lost) rather than Mikael Åkerfeldt (Opeth) was on lead vocals.

I was very, very excited and from the moment I heard that trademark crunching guitar sound I was even more excited. They played for a full hour. Unlike the Black Metal bands on the bill there was more riffing and solos, and a slightly different drum sound.

They’re an odd band to watch. For reasons I don’t understand, the bass player and two guitarists would often turn their backs to the audience to face the drummer. The band didn’t seem to interact much with each other on stage and even less so with Nick.

Nick’s deadpan humour was present when he did speak to the audience. He introduced the band as being from Sweden, then added from Halifax almost as an afterthought! During the set he admitted he couldn’t see and dispensed with his sunglasses, as they’d apparently been a good idea backstage. After breaking the microphone he enquired if it would be added to his bill at the end of the night.

Emperor

Emperor hasn't released any new material (that I know of) since 2001 and, if I’m honest, I barely listen to them beyond the live album these days. I’ve seen them at least three times before, the first time being in 1999 in a small club in Bradford on my birthday - it doesn’t get much better than that. I’m more of a fan of Ihsahn’s solo stuff these days and I still really enjoy Samoth’s Zyklon whenever I play it. Emperor, not so much anymore.

They played for the full ninety and for the most part were solid, as you might expect. Whether or not Faust plays with them is of no consequence to me, and I certainly didn’t need the covers they played with him towards the end of the set. There was lots I knew and lots I enjoyed, but I wouldn't make an effort to see Emperor again.