Code Like a Girl T-shirts

Andy Balaam from Andy Balaam's Blog

There are lots of people missing from the programming world: lots of the programmers I meet look and sound a lot like me. I’d really like it if this amazing job were open to a lot more people.

One of the weird things that has happened is that somehow we seem to have the idea that programming is only for boys, and I’d like to fight against that idea by wearing a t-shirt demonstrating how cool I think it is to be a woman coder.

So, I commissioned a design from an amazing artist called Ellie Mars, who I found through her Mastodon.art page @elliemars@mastodon.art. She did an amazing job, sending sketches and ideas back and forth, and finally she came up with this awesome design:

I’ve printed a t-shirt for myself that I will give myself for Christmas, and I’ve made a page on Street Shirts so you can get one too!

I’ve uploaded 2 designs, but if you’d like me to set something different up for you, let me know.  Also, these links will expire, but I can re-set them up for you if you contact me.  They are reasonably cheap.

To ask for different designs or an unexpired link, you are welcome to contact me (via DM or publicly) on twitter @andybalaam or Mastodon @andybalaam@mastodon.social, or by email, via a short test.

Ellie and I agreed to set up these t-shirt sales with no profit for us because we’d like to get the word out.  If they are popular we might add a little, so get in fast for a good deal!

10 points for anyone who can recognise the code in the background.  It’s from one of my favourite programs.

Personally, I think we all spend too much of our time walking around advertising faceless corporations when we could be saying something a bit more useful on our clothes.  What do you think of this idea?  Maybe you could design a similar t-shirt?  Let me know your thoughts in the comments below or on Twitter or Mastodon.

Growth of conditional complexity with file size

Derek Jones from The Shape of Code

Conditional statements are a fundamental constituent of programs. Conditions are driven by the requirements of the problem being solved, e.g., if the water level is below the minimum, then add more water. As the problem being solved gets more complicated, dependencies between subproblems grow, requiring an increasing number of situations to be checked.

A condition contains one or more clauses, e.g., a single clause in: if (a==1), and two clauses in: if ((x==y) && (z==3)); a condition also appears as the termination test in a for-loop.
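
As a minimal sketch (the variable names and values below are assumptions, purely for illustration), here is what conditions with different clause counts look like, including a for-loop termination test:

// Illustrative only: clause counts in conditions (all names/values assumed).
void clause_examples(int a, int x, int y, int z, int n, bool done)
{
    if (a == 1) { }                          // condition containing one clause
    if ((x == y) && (z == 3)) { }            // condition containing two clauses
    for (int i = 0; i < n && !done; i++) { } // termination test contains two clauses
}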

How many conditions containing one clause will a 10,000 line program contain? What will be the distribution of the number of clauses in conditions?

A while back I read a paper studying this problem (“What to expect of predicates: An empirical analysis of predicates in real world programs”; Google is currently not finding a copy online, grrr, so you will have to hassle the first author: durelli@icmc.usp.br, or perhaps it will get added to a list of favorite publications {be nice, they did publish some very interesting data}). It contained a table of numbers, and yesterday my analysis of the data revealed a surprising pattern.

The data consists of SLOC, number of files and number of conditions containing a given number of clauses, for 63 Java programs. The following plot shows percentage of conditionals containing a given number of clauses (code+data):

Percentage of conditions containing a given number of clauses in 63 large Java programs.

The fitted equation, for the number of conditionals containing a given number of clauses, is:

conditions = 3*slen^{pred} * e^{10 - 10*pred - 1.8*10^{-5}*avlen^2}

where: slen={SLOC}/{sqrt{Number of Files}} (the coefficient for the fitted regression model is 0.56, but square-root is easier to remember), avlen={SLOC}/{Number of Files}, and pred is the number of clauses.
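
As a rough sketch (not from the paper; the 10,000 SLOC and 25 files below are made-up inputs), the fitted equation can be evaluated directly:

#include <cmath>
#include <cstdio>

// Sketch only: plugs assumed values into the fitted equation above.
double predicted_conditions(double sloc, double files, double pred)
{
    double slen  = sloc / std::sqrt(files); // SLOC / sqrt(number of files)
    double avlen = sloc / files;            // SLOC / number of files
    return 3 * std::pow(slen, pred)
             * std::exp(10 - 10*pred - 1.8e-5 * avlen * avlen);
}

int main()
{
    for (int pred = 1; pred <= 3; pred++) // conditions with 1, 2 and 3 clauses
        std::printf("clauses=%d predicted=%.0f\n",
                    pred, predicted_conditions(10000, 25, pred));
}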

The fit of the regression model is not as good when slen alone, or avlen alone, is used throughout.

This equation is an emergent property of the code; simply merging files to increase the average length will not change the distribution of clauses in conditionals.

When slen = e^{10} = 22,026, the predicted number of conditionals is the same for every number of clauses, all the way out to infinity. For the 63 Java programs, the mean slen was 2,625, the maximum 11,710, and the minimum 172.

I was expecting SLOC to have an impact, but was not expecting number of files to be involved.

What grows with SLOC? Number of global variables and number of dependencies. There are more things available to be checked in larger programs, and an increase in dependencies creates the need to perform more checks. Also, larger programs are likely to contain more special cases, which are likely to involve checking both general and specific values (i.e., more clauses in conditionals); ok, this second sentence is a bit more arm-wavy than the first. The prediction here is that the percentage of global variables appearing in conditions increases with SLOC.

Chopping stuff up into separate files has a moderating effect. Since I did not expect this, I don’t have much else to say.

This model explains 74% of the variance in the data (impressive, if I say so myself). What other factors might be involved? Depth of nesting would be my top candidate.

Removing non-if-statement related conditionals from the count would help clarify things (I don’t expect loop-controlling conditions to be related to amount of code).

Two interesting data-sets in one week, with 10-days still to go until Christmas :-)

The power of small declines

Allan Kelly from Allan Kelly Associates

Graph: value-add declining by 1% each week over a year.

Last year I wrote about the power of 1% improvement, and how powerful this can be when that improvement occurs frequently. For example, if a team improves 1% a week then over the course of 50 weeks (a year) they would improve by over 62%.

A few days ago I had a revelation: the opposite is also true.

If a team enters a downward spiral then a 1% decline in productivity each week has similar effects but in the opposite direction. In fact, as I think about this I see more and more occasions where a team can lose small amounts of performance which sap their productive capacity. Like the frog in hot water, they don’t realise they are cooked until it is too late.

The graph above shows what happens to value-add over a year when a team is 1% less productive each week. The blue bars show how value-add falls each week. The red line shows how each week the team declines slightly compared to the week before.

That the red line gets higher seems odd but it makes sense: each week the team is 1% less productive than the week before. So at the end of the second week the team is 1% less productive than they were in the first week. At the end of week 50 they are 1% less productive than week 49, but that drop amounts to only about 0.62% of week-1 productivity, because the 1% decline is taken from a lower total. Getting worse slows down because the team is already worse!
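
To put rough numbers on it (a quick check, assuming productivity starts at 100 and compounds down by 1% a week):

week n productivity = 100 * 0.99^{n-1}
week 50 productivity = 100 * 0.99^{49} ≈ 61
week 49 to week 50 drop = 100 * 0.99^{48} * 0.01 ≈ 0.62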

At some point the value-add ceases to justify the cost of the team. But as these changes are very gradual that is going to be hard to see.

Why might this happen? – lots of reasons

First off there are the corporate drains on productivity. Consider corporate security processes: think about passwords alone, the need to change passwords regularly, have longer and longer passwords, have different passwords on different systems, and so on. Sure cyber security is important but it can also be a drain on productivity.

Then there are the other hassles of working almost anywhere: finding meeting rooms, booking meeting rooms, setting up webex conference calls, “cake in the kitchen”, restrictions on internet use – whether it is limited access, site blacklists, or authorised “white list” sites only.

It is easy to see how a large corporate can gradually drain a team. But there are other reasons.

There are personal drains on productivity too. Consider internet use during work time: the likes of Facebook, LinkedIn and Twitter aim to keep you on their sites as long as possible. Using LinkedIn is almost a necessity in modern work – got a meeting coming up? someone applied for a job? looking for a lead? – but once you’re in, Microsoft wants to keep you there.

Then think about your code base: is the code getting better or worse?

  • Easier to work with or harder to work with?
  • Do you write an automated test for every change? Or save time today at the cost of time tomorrow, and the next day, and the day after, and …
  • Do you take time to refactor every time you make a change? or are you constantly kludging it and making the next change slightly harder?

Notice here I’m not talking about those big time-consuming changes that happen occasionally: new employees, reorganisations, mergers – things that take a chunk of time but finish.

So, is your work environment getting a little bit better every week? or a little bit harder?

If we think at the “very little” level it is unlikely that things are the same as last week. Staying the same will be hard. Things are probably a little bit better or a little bit harder. Extend that over a year and – as the theory of 1% change shows – things are a lot worse, or maybe a lot better.

What is important is the trend, and the trend is going to be set by the culture. Do you have a culture of small improvements? Or an acceptance of small degradation?

Finally, because there are so many minor factors that can sap your productive capacity, it is quite likely that if you aren’t getting more productive then you are getting less productive. In other words, you need to be working to improve just to stand still.

The post The power of small declines appeared first on Allan Kelly Associates.

“Hello World” Stories

Chris Oldwood from The OldWood Thing

I’ve always tried really hard to fight against “technical stories”. These are supposedly user stories, but they are really framed as a solution to a problem and are just technical tasks. In “Turning Technical Tasks Into User Stories” I looked at how it’s often possible to elevate these from an obvious solution back up to the problem which needs to be solved. At this point you may discover there are other, hopefully cheaper, solutions to the problem which have been missed in the original analysis, either because things have changed or because different people are doing the thinking.

On the flip-side there are occasionally times where, after having looked at a few related stories, it’s apparent that they all require the same underlying mechanism to work. One common solution is to bulk up the first story with the technical work and let the rest flow through as normal. This way you have no technical work on your backlog per se, as it’s all hidden in the stories.

Transparency

What I don’t like about this approach is that one story arbitrarily gets hit with a load of extra work, which, if you’re using historical data to stick a finger in the air for estimation of similar work later, skews the average somewhat. It also means that from a visibility perspective one story takes longer while the mechanism is being built.

One way I’ve found to address this has been to pull out the bare bones of the technical work into a “Hello, World!” story [1]. This story is framed around building the skeleton of the mechanism that will be used to drive the implementation of the subsequent features. The aim is to keep the scope minimal enough that we avoid speculating while still delivering something which stands on its own two feet and remains clearly visible on the board.

Value Proposition

While the value to the end-user is in the eventual feature, the value in the mechanism is proving to the development team that the basic approach seems sound. With the skeleton built, the idiosyncrasies around each individual feature can then be dealt with appropriately at the right time and accounted for in the usual way.

To be clear this is not about doing a spike or building a prototype, although that may have happened earlier to gain the knowledge needed to undertake this piece of work. No, here we’re talking about building the bare bones of a real mechanism along with the most basic feature possible.

The reason I’ve called these “Hello World” Stories is probably self-evident: it alludes to the classic program many have chosen as their first – writing “Hello, World!” to the console. In this context the name is intended to conjure up simplicity and remind us that what we’re doing is delivering the minimum required to make the platform viable. We probably won’t literally write “Hello, World!” to the console, but it may be a log message instead that we can then observe and monitor, or a message on a queue that we can see discarded. Essentially, it is whatever we can do to make its effects observable without wasting any real effort or leaving it partially complete.

Based on the classic INVEST acronym we should strive to make every unit of work: Independent, Negotiable, Valuable, Estimable, Small and Testable. By splitting it out from one of the arbitrary features it becomes more independent, negotiable, estimable and small which can be useful should short-term priorities change. And by extending the scope from a pure mechanism just a little bit further to the most trivial feature possible we make it more testable from a technical perspective, even if not from a product viewpoint. Most importantly, however, is it valuable in its own right? I think sometimes splitting the mechanism out gives value by making the I,N,E,S and T more tangible. In particular breaking work down into smaller deliverable units is often the most valuable practice even if occasionally the end-user has nothing initially to show for it.

Ultimately, I guess, I can’t ever remember anyone complaining they had broken their work down into pieces that were so small they were too visible.

 

[1] I’m sure there is an argument about this not being a “story” per se but just a “task”. However I prefer to call it a story because our “Hello, World!” realization should have a grounding in the real world, even if it is more abstract than what the end-user will eventually receive.

[2] There is an assumption here that we’ve already decided we cannot or do not want to solve the dependent features in different ways, probably because it would be far more costly (in the long run) than briefly delaying them by building a common pillar.

NoNo workshop in London

Allan Kelly from Allan Kelly Associates

Smaller cartons of software are cheaper and less risky

 

Vasco Duarte and I are running our #NoProjects/#NoEstimates workshop again in London in February. This is a one day class; it is very interactive, with lots of exercises and lots of chances to ask us your questions.

 

“A one day introduction to #NoEstimates and #NoProjects. Learn to apply the digital-first tools that help you deliver more value in less time. From breaking down 9 month projects to 30 minutes, to learning to reduce investment risks. In this workshop you will learn how to transform your product development.”

More details and booking at the Learning Connexions website.

The post NoNo workshop in London appeared first on Allan Kelly Associates.

Guaranteed Copy Elision Does Not Elide Copies

Simon Brand from Simon Brand

This post is also available at the Microsoft Visual C++ Team Blog

C++17 merged in a paper called Guaranteed copy elision through simplified value categories. The changes mandate that no copies or moves take place in some situations where they were previously allowed, e.g.:

struct non_moveable {
    non_moveable() = default;
    non_moveable(non_moveable&&) = delete;
};
non_moveable make() { return {}; }
non_moveable x = make(); //compiles in C++17, error in C++11/14

You can see this behavior in compiler versions Visual Studio 2017 15.6, Clang 4, GCC 7, and above.

Despite the name of the paper and what you might read on the Internet, the new rules do not guarantee copy elision. Instead, the new value category rules are defined such that no copy exists in the first place. Understanding this nuance gives a deeper understanding of the current C++ object model, so I will explain the pre-C++17 rules, what changes were made, and how they solve real-world problems.

Value Categories

To understand the before-and-after, we first need to understand what value categories are (I’ll explain copy elision in the next section). Continuing the theme of C++ misnomers, value categories are not categories of values. They are characteristics of expressions. Every expression in C++ has one of three value categories: lvalue, prvalue (pure rvalue), or xvalue (eXpiring value). There are then two parent categories: all lvalues and xvalues are glvalues, and all prvalues and xvalues are rvalues.

diagram expressing the taxonomy described above

For an explanation of what these are, we can look at the standard (C++17 [basic.lval]/1):

  • A glvalue ([generalised lvalue]) is an expression whose evaluation determines the identity of an object, bit-field, or function.
  • A prvalue is an expression whose evaluation initializes an object or a bit-field, or computes the value of an operand of an operator, as specified by the context in which it appears.
  • An xvalue is a glvalue that denotes an object or bit-field whose resources can be reused (usually because it is near the end of its lifetime).
  • An lvalue is a glvalue that is not an xvalue.
  • An rvalue is a prvalue or an xvalue.

Some examples:

std::string s; 
s //lvalue: identity of an object 
s + " cake" //prvalue: could perform initialization/compute a value 

std::string f(); 
std::string& g(); 
std::string&& h(); 

f() //prvalue: could perform initialization/compute a value 
g() //lvalue: identity of an object 
h() //xvalue: denotes an object whose resources can be reused 

struct foo { 
    std::string s; 
}; 

foo{}.s //xvalue: denotes an object whose resources can be reused

C++11

What are the properties of the expression std::string{"a pony"}?

It’s a prvalue. Its type is std::string. It has the value "a pony". It names a temporary.

That last one is the key point I want to talk about, and it’s the real difference between the C++11 rules and C++17. In C++11, std::string{"a pony"} does indeed name a temporary. From C++11 [class.temporary]/1:

Temporaries of class type are created in various contexts: binding a reference to a prvalue, returning a prvalue, a conversion that creates a prvalue, throwing an exception, entering a handler, and in some initializations.

Let’s look at how this interacts with this code:

struct copyable {
    copyable() = default;
    copyable(copyable const&) { /*...*/ }
};
copyable make() { return {}; }
copyable x = make();

make() results in a temporary. This temporary will be moved into x. Since copyable has no move constructor, this calls the copy constructor. However, this copy is unnecessary since the object constructed on the way out of make will never be used for anything else. The standard allows this copy to be elided by constructing the return value at the call-site rather than in make (C++11 [class.copy]). This is called copy elision.

The unfortunate part is this: even if all copies of the type are elided, the constructor still must exist.

This means that if we instead have:

struct non_moveable {
    non_moveable() = default;
    non_moveable(non_moveable&&) = delete;
};
non_moveable make() { return {}; }
auto x = make();

then we get a compiler error:

<source>(7): error C2280: 'non_moveable::non_moveable(non_moveable &&)': attempting to reference a deleted function
<source>(3): note: see declaration of 'non_moveable::non_moveable'
<source>(3): note: 'non_moveable::non_moveable(non_moveable &&)': function was explicitly deleted

Aside from returning non-moveable types by value, this presents other issues:

  • auto x = non_moveable{}; //compiler error
  • The language makes no guarantees that the constructors won’t be called (in practice this isn’t too much of a worry, but guarantees are more convincing than optional optimizations).
  • If we want to support some of these use-cases, we need to write copy/move constructors for types which they don’t make sense for (and do what? Throw? Abort? Linker error?)
  • You can’t pass non-moveable types to functions by value, in case you have some use-case which that would help with.

What’s the solution? Should the standard just say “oh, if you elide all copies, you don’t need those constructors”? Maybe, but then all this language about constructing temporaries is really a lie and building an intuition about the object model becomes even harder.

C++17

C++17 takes a different approach. Instead of guaranteeing that copies will be elided in these cases, it changes the rules such that the copies were never there in the first place. This is achieved through redefining when temporaries are created.

As noted in the value category descriptions earlier, prvalues exist for purposes of initialization. C++11 creates temporaries eagerly, eventually using them in an initialization and cleaning up copies after the fact. In C++17, the materialization of temporaries is deferred until the initialization is performed.

That’s a better name for this feature. Not guaranteed copy elision. Deferred temporary materialization.

Temporary materialization creates a temporary object from a prvalue, resulting in an xvalue. The most common places it occurs are when binding a reference to or performing member access on a prvalue. If a reference is bound to the prvalue, the materialized temporary’s lifetime is extended to that of the reference (this is unchanged from C++11, but worth repeating). If a prvalue initializes an object of the same type as the prvalue, then the destination object is initialized directly; no temporary is required.

Some examples:

struct foo {
    int i;
};

foo make();
auto const& a = make();  //temporary materialized and lifetime-extended
auto&& b = make(); //ditto

foo{}.i //temporary materialized

auto c = make(); //no temporary materialized

That covers the most important points of the new rules. Now on to why this is actually useful past terminology bikeshedding and trivia to impress your friends.

Who cares?

I said at the start that understanding the new rules would grant a deeper understanding of the C++17 object model. I’d like to expand on that a bit.

The key point is that in C++11, prvalues are not “pure” in a sense. That is, the expression std::string{"a pony"} names some temporary std::string object with the contents "a pony". It’s not the pure notion of the list of characters “a pony”. It’s not the Platonic ideal of “a pony”.

In C++17, however, std::string{"a pony"} is the Platonic ideal of “a pony”. It’s not a real object in C++’s object model, it’s some elusive, amorphous idea which can be passed around your program, only being given form when initializing some result object, or materializing a temporary. C++17’s prvalues are purer prvalues.

If this all sounds a bit abstract, that’s okay, but internalising this idea will make it easier to reason about aspects of your program. Consider a simple example:

struct foo {};
auto x = foo{};

In the C++11 model, the prvalue foo{} creates a temporary which is used to move-construct x, but the move is likely elided by the compiler.

In the C++17 model, the prvalue foo{} initializes x.

A more complex example:

std::string a() {
    return "a pony";
}

std::string b() {
    return a();
}

int main() {
    auto x = b();
}

In the C++11 model, return "a pony"; initializes the temporary return object of a(), which move-constructs the temporary return object of b(), which move-constructs x. All the moves are likely elided by the compiler.

In the C++17 model, return "a pony"; initializes the result object of a(), which is the result object of b(), which is x.

In essence, rather than an initializer creating a series of temporaries which in theory move-construct a chain of return objects, the initializer is teleported to the eventual result object. In C++17, the code:

T a() { return /* expression */ ; }
auto x = a();

is identical to auto x = /* expression */;. For any T.

Closing

The “guaranteed copy elision” rules do not guarantee copy elision; instead they purify prvalues such that the copy doesn’t exist in the first place. Next time you hear or read about “guaranteed copy elision”, think instead about deferred temporary materialization.

Impact of group size and practice on manual performance

Derek Jones from The Shape of Code

How performance varies with group size is an interesting question, and one that is still unresearched in software engineering. The impact of learning is also an interesting question, and there has been some software engineering research in this area.

I recently read a very interesting study involving both group size and learning, and Jaakko Peltokorpi kindly sent me a copy of the data.

That is the good news; the not so good news is that the experiment was not about software engineering, but about the manual assembly of a contraption of the experimenters’ devising. Still, this experiment is an example of the impact of group size and learning (through repeating the task).

Subjects worked in groups of one to four people and repeated the task four times. The time taken to assemble a bespoke, floor-standing rack with some odd-looking connections between components was measured (the image in the paper shows something that might function as a floor-standing bookcase, if shelves were added, apart from some component connections getting in the way).

The following equation is a very good fit to the data (code+data). There is theory explaining why log(repetitions) applies, but the division by group-size was found by suck-it-and-see (in another post I found that time spent planning increased with team size).

There is a strong repetition/group-size interaction. As the group size increases, repetition has less of an impact on improving performance.

time = 0.16 + 0.53/{group size} - log(repetitions)*[0.1 + 0.22/{group size}]
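
As an illustration only (a quick sketch; the group sizes and repetition counts are just example inputs, and a natural log is assumed), the fitted equation can be evaluated directly:

#include <cmath>
#include <cstdio>

// Sketch only: evaluates the fitted equation above (time in hours).
double assembly_time(double group_size, double repetitions)
{
    return 0.16 + 0.53/group_size
         - std::log(repetitions) * (0.1 + 0.22/group_size);
}

int main()
{
    for (int g = 1; g <= 4; g++)     // group sizes 1..4
        for (int r = 1; r <= 4; r++) // repetitions 1..4
            std::printf("group=%d repetition=%d predicted hours=%.2f\n",
                        g, r, assembly_time(g, r));
}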

The following plot shows one way of looking at the data (larger groups take less time, but the difference declines with practice):

Time taken (hours) for various group sizes, by repetition.

and here is another (a group of two is not twice as fast as a group of one; with practice smaller groups are converging on the performance of larger groups):

Time taken (hours) for various repetitions, by group size.

Would the same kind of equation fit the results from solving a software engineering task? Hopefully somebody will run an experiment to find out :-)

Poor performance in Chrome (especially on mobile) – caused by SVG background images

Andy Balaam from Andy Balaam's Blog

I have spent the last few hours investigating abysmal performance in my latest little game project Cross The Road. Firefox was fine, but Chromium and Chrome, especially on mobile, were rendering at about three frames per second.

When I stopped using SVGs as background-images for my elements, and used PNGs instead, it improved to about 20-30 FPS.

It seems fine to use SVGs as normal images, but for background-image, it really hurt performance.

nor(DEV):con 2019 schedule live now!

Paul Grenyer from Paul Grenyer


nor(DEV):con 2019
Thursday 21st to Saturday 23rd of February 2019
The Kings Centre, Norwich, NR1 1PH


Friday opening keynote: The Failure of Focus
Liz Keogh

We know that in our landscape of people and technology, aiming for a particular outcome doesn’t always lead to us getting what we want. Sometimes the best results come from approaching a problem obliquely. But in Agile our highest priority is to satisfy the customer through the early and continuous delivery of valuable software. We like to start with the outcome, meet the needs of our users, delivering high-quality working software with happy teams and true agility… but how might that focus be holding us back, and what are the alternatives?

In this talk we look at some different strategies for approaching complex ecosystems, starting from where we are right now, and allowing innovation to emerge through obliquity, naivety, and serendipity.


Friday closing keynote: Software doesn't always work out. 
Kevlin Henney

Looking at the number of software failure screens in public places, it can sometimes seem that software developers are the greatest producers of installation art around the planet. Software failures can be entertaining or disastrous. They can also be instructive — there's a lot we can learn.


Saturday keynote: Plain Wrong?
Heydon Pickering

I love writing JavaScript. The trouble is, so does everyone else. When people aren’t writing JavaScript, they’re usually writing frameworks for writing JavaScript in JavaScript. In fact, most of the JavaScript that’s around these days seems to either be written for, or within, a JavaScript flavor like React, Vue, or Angular. Frameworks make writing your own code faster and more ergonomic, but they do not come without problems. Code written with Framework A depends on the environment Framework A provides in order to work — and this dependency often represents a lot of code to transmit, decompress, parse, and compile. What about ‘plain’ JavaScript? Is it always naïve to think anything worthwhile can still be achieved just writing some straight-up code? It turns out this is a tricky question to answer, because the line between plain and flavored JavaScript is kind of blurry. It’s also not clear who should be the ones to get to write JavaScript, for what reasons, or when. But there’s no doubt the little we do as web developers is often done with much more than we 

See the full schedule here: nordevcon.com