Borknagar

Paul Grenyer from Paul Grenyer

I was pretty sure I had seen Borknagar support Cradle of Filth at the Astoria 2 in the ‘90s. It turns out that was Opeth and Extreme Noise Terror, so I don’t really remember how I got into them now.

Whatever the reason was, I really got into their 2000 album Quintessence. At the time I didn’t much enjoy their previous album, The Archaic Course, so with the exception of the occasional relisten to Quintessence, Borknagar went by the wayside for me. That was until ICS Vortex got himself kicked out of Dimmu Borgir for allegedly poor performances, produced a really rather bland and unlistenable solo album called Storm Seeker, and then rejoined Borknagar properly. That’s when things got interesting.

ICS Vortex has an incredible voice. When he joined Dimmu Borgir as bassist and second vocalist in time for Spiritual Black Dimensions, he brought a new dimension (pun intended) to an already amazing band. I’ve played Spiritual Black Dimensions to death since it came out and I think only Death Cult Armageddon is better.

ICS Vortex’s first album back with Borknagar is called Winter Thrice. Loving his voice, and bitterly disappointed with Storm Seeker, I bought it desperately hoping for something more, and I wasn’t disappointed. It’s an album with a cold feel and lyrical content about winter and the north. I loved it, played it constantly after release, and have played it regularly since. It’s progressive black metal: the musical equivalent of walking through snow early on a cold, crisp morning.

This year Borknagar released a new album called True North. When I’ve loved an album intensely and the band brings out something new, I always feel trepidation. Machine Head never bettered Burn My Eyes, WASP never bettered The Crimson Idol; I could go on, but you get the picture. True North is another album about winter and the north, so I ought to have been on safe ground. Then again, Arch Enemy have pretty much recorded the same album since Doomsday Machine, and never bettered it. They’re all good though.

My first listen to True North was tense, but it didn’t take long for that to dissipate. I had it on daily play for a few weeks, together with the new albums from Winterfylleth and Opeth. True North was so brilliant I thought it might be even better than Winter Thrice. So, cautiously, I tried Winter Thrice again, and wasn’t disappointed to find it was still the slightly better album. The brilliant thing is that I now have two similar, but different enough, albums I can enjoy again and again, and other than Enslaved’s In Times, I haven’t found anything else like it.

I hope they do what Evergrey did with Hymns for the Broken, The Storm Within and The Atlantic and make it a set of three. Cross your fingers for me.

Student projects for 2019/2020

Derek Jones from The Shape of Code

It’s that time of year when students are looking for an interesting idea for a project (it might be a bit late for this year’s students, but I have been mulling over these ideas for a while, and might forget them by next year). A few years ago I listed some suggestions for student projects, as far as I know none got used, so let’s try again…

Checking the correctness of Python compilers/interpreters. Lots of work has been done checking C compilers (e.g., Csmith), but I cannot find any serious work that has done the same for Python. There are multiple Python implementations, so it would be possible to do differential testing; another possibility is to fuzz test one or more compilers/interpreters and see how many crashes occur (the likely number of remaining fault-producing crashes can be estimated from this data).

Talking to the Python people at the Open Source hackathon yesterday, I learned that testing of the compiler/interpreter was not something they spent much time thinking about (yes, they run regression tests, but that seemed to be it).
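
By way of illustration, here is a minimal differential-testing sketch (the interpreter names and the hard-coded test programs are my assumptions; a real harness would generate programs with a fuzzer, Csmith-style):

    # Minimal differential-testing sketch: run the same program under several
    # Python implementations and flag any difference in observable behaviour.
    # Assumes CPython and PyPy are both installed and on the PATH.
    import subprocess

    IMPLEMENTATIONS = ["python3", "pypy3"]

    def run(interpreter, program):
        """Run a program source string under one interpreter, capturing everything."""
        result = subprocess.run([interpreter, "-c", program],
                                capture_output=True, text=True, timeout=10)
        return result.returncode, result.stdout, result.stderr

    def differential_test(program):
        """Report a divergence if implementations disagree on exit code or output."""
        outcomes = {impl: run(impl, program) for impl in IMPLEMENTATIONS}
        if len(set(outcomes.values())) > 1:
            print("DIVERGENCE for program:", program)
            for impl, (code, out, err) in outcomes.items():
                print(f"  {impl}: exit={code} stdout={out!r} stderr={err!r}")

    if __name__ == "__main__":
        # In a real harness these would come from a program generator.
        differential_test("print(round(2.675, 2))")
        differential_test("import sys; print(sys.maxsize > 2**32)")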

Finding faults in published papers. There are tools that scan source code for use of suspect constructs, and there are various ways in which the contents of a published paper could be checked.

Possible checks include (apart from grammar checking):

Number extraction. Numbers are some of the most easily checked quantities, and anybody interested in fact checking needs a quick way of extracting numeric values from a document. Sometimes numeric values appear as numeric words, and dates can appear as a mixture of words and numbers. Extracting numeric values, and their possible types (e.g., date, time, miles, kilograms, lines of code), is the core task; something way more sophisticated than pattern matching on sequences of digit characters is needed.

spaCy is my tool of choice for this sort of text processing task.
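
As a flavour of what this might look like, here is a minimal extraction sketch using spaCy (the model name, sample text, and label filter are my choices; this only scratches the surface of the problem described above):

    # Minimal number-extraction sketch using spaCy; assumes the small English
    # model has been installed with: python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    text = ("The project took three years, cost $2.5 million, "
            "and produced 120,000 lines of code by March 2004.")
    doc = nlp(text)

    # Named-entity labels that carry numeric information.
    NUMERIC_LABELS = {"CARDINAL", "QUANTITY", "MONEY", "PERCENT", "DATE", "TIME"}

    for ent in doc.ents:
        if ent.label_ in NUMERIC_LABELS:
            print(f"{ent.text!r} -> {ent.label_}")

    # Token-level fallback: spaCy also flags number-like words, e.g., "three".
    print([tok.text for tok in doc if tok.like_num])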

FAO The Householder – a.k.

a.k. from thus spake a.k.

Some years ago we saw how we could use the Jacobi algorithm to find the eigensystem of a real-valued symmetric matrix M, which is defined as the set of pairs of non-zero vectors v_i and scalars λ_i that satisfy

    $$M \times v_i = \lambda_i \times v_i$$

known as the eigenvectors and the eigenvalues respectively, with the vectors typically restricted to those of unit length, in which case we can define its spectral decomposition as the product

    $$M = V \times \Lambda \times V^\mathrm{T}$$

where the columns of V are the unit eigenvectors, Λ is a diagonal matrix whose i-th diagonal element is the eigenvalue associated with the i-th column of V, and the T superscript denotes the transpose, in which the rows and columns of the matrix are swapped.
You may recall that this is a particularly convenient representation of the matrix since we can use it to generalise any scalar function to it with

    $$f(M) = V \times f(\Lambda) \times V^\mathrm{T}$$

where f(Λ) is the diagonal matrix whose i-th diagonal element is the result of applying f to the i-th diagonal element of Λ.
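
As a quick illustration of why this representation is so convenient, here is a minimal Python sketch (my own, using NumPy's symmetric eigensolver rather than the Jacobi algorithm of the earlier post):

    # Sketch: lifting a scalar function f to a symmetric matrix M via its
    # spectral decomposition M = V * diag(lambda_i) * V^T.
    import numpy as np

    def matrix_function(M, f):
        """Apply a scalar function f to a real-valued symmetric matrix M."""
        eigenvalues, V = np.linalg.eigh(M)        # eigh handles symmetric matrices
        return V @ np.diag(f(eigenvalues)) @ V.T  # V * f(Lambda) * V^T

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Sanity check: squaring via the decomposition matches ordinary M @ M.
    print(np.allclose(matrix_function(M, np.square), M @ M))  # True

    # A matrix square root: R @ R recovers M (valid as M is positive definite).
    R = matrix_function(M, np.sqrt)
    print(np.allclose(R @ R, M))                              # True
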
You may also recall that I suggested that there's a more efficient way to find eigensystems and I think that it's high time that we took a look at it.

Projects chapter of ‘evidence-based software engineering’ reworked

Derek Jones from The Shape of Code

The Projects chapter of my evidence-based software engineering book has been reworked; draft pdf available here.

A lot of developers spend their time working on projects, and there ought to be loads of data available. But, as we all know, few companies measure anything, and fewer hang on to the data.

Every now and again I actively contact companies asking for data, but work on the book prevents me spending more time doing this. Data is out there; it’s a matter of asking the right people.

There is enough evidence in this chapter to slice-and-dice much of the nonsense that passes for software project wisdom. The problem is, there is no evidence suggesting which theories of software development might be useful and effective. My experience is that there is no point in debunking folktales unless there is something available to replace them. Nature abhors a vacuum; a debunked theory has to be replaced by something else, otherwise people continue with their existing beliefs.

There is still some polishing to be done, and a few promises of data need to be chased up.

As always, if you know of any interesting software engineering data, please tell me.

Next, the Reliability chapter.

Mission Impossible: the Product Owner

Allan Kelly from Allan Kelly Associates


Is the product owner role impossible to fill well?

Do we set product owners up to fail?

Have you ever worked with a really excellent product owner? Someone you would be eager to work with again?

The lack of really outstanding product owners isn’t the fault of the individuals. I think product owners are asked to do a difficult job and are not supported the way they should be. Worse still, in many organizations the role of the product owner is misunderstood: they are seen as a type of delivery manager when in fact they are a type of product manager.

These questions have been on my mind for a while. Next month I’m giving a new presentation at Oredev in Malmo, which coincides perfectly with the publication of my new book The Art of Agile Product Ownership (funny that). So by way of preview…

I’ve long argued that product owners need four things in order to do the job well: skills, authority, legitimacy and time. Let’s look at each in turn:

1. Skills: the kind of thing a product owner learns on a Certified Scrum Product Owner course is table stakes. Yes, POs need to be able to write user stories, split stories, write acceptance criteria, understand agile and Scrum, work with teams, plan a little and so on. While necessary, such skills are not sufficient.

The bigger questions are:

How does a product owner know what they need to know in order to do these things?
How do they know what customers want?
How do they know what will make a difference?

Product owners need more skills. Some POs deliver products which must sell in the market to customers who have a choice. Such POs need to be able to identify customers, segment customers and markets, interview customers, analyse data, understand markets, monitor competitors and much more. In short they need the skills of a product manager.

Other POs work with internal customers who don’t have a choice over what product they use, here the PO needs other skills: stakeholder identification and management, business and process analysis, user observation and interviewing, they need to be aware of company politics and able to manage up. In other words, they need the skills of a business analyst.

And all POs need knowledge of their product domain. Many POs are POs because they are in fact subject matter experts.

That is a lot of skills for any one person. How many product owners have the right skills mix? And if they don’t, how many of them get the training they need?

2. Authority: Product owners need at least the authority to walk into a planning meeting and state the work they would like done in the next two weeks. They need the authority to set this work without being contradicted by some other person, and they need the authority to visit customers and get their expenses paid without having to provide a lengthy explanation every time.

3. Legitimacy: Product owners need to be seen as the right person to set the priorities. The right person to visit customers, the right person to agree plans and write roadmaps. They need to be seen as the right person by the organisation, by peers and, most importantly, by the development team.

Authority and legitimacy are closely related, but they are not the same thing. While the product owner needs both, the lack of either results in the same problem: people don’t take their work seriously and other people try to set the agenda on what to build.

Unfortunately Scrum contains a seldom noticed problem here: product owners are team members, they are peers; the team are self-organising and are responsible for delivering the product. (There is an egalitarian ethos, even if this is only implicit.)

But Scrum sets the PO up as the one, and only one, who can tell the team what to do.

There is a contradiction.

4. Time: Product owners need time to do their work – which is a lot; just read that skills list and think about what the PO should be doing. And don’t forget the PO is a human being who needs to sleep for seven or eight hours a night, and may well have a family and a home to go to.

When does the product owner get to do all of this?

Leave aside the question of where you find such people, or whether our companies pay them enough, and ask yourself: do product owners get the support they need from their companies and teams?

So often the PO ends up in conflict with the company about what will be built and when it will be delivered, and they end up in conflict with their team about… well much the same issues every planning meeting.

Think about it: do we ask too much from our product owners?

Do we set up product owners to fail?

I’d love to hear your opinions: comment on this post or drop me a note.

I’m going to leave you hanging here today. In the Oredev presentation I’ll try and suggest some solutions – and there are some in the Art of Product Ownership. (Last year I described one in The Product Owner refactored: the SPO/TPO model.)



Winterfylleth

Paul Grenyer from Paul Grenyer

At school I had this friend, Jamie, who once said to me that he always preferred having a band’s live album to their studio album of the same songs. His example was Queen’s Live Magic. To me, then, this was madness: it didn’t have all the same songs as A Kind of Magic and the production wasn’t as good. Let’s face it, unless it’s Pink Floyd, the production of a live album is never as good as in the studio. I avoided live albums for years.

Pink Floyd’s Pulse was the first live album I really got into and then there was nothing until the teenies when live albums from Emperor, Immortal, Arch Enemy, Dimmu Borgir and Blind Guardian got me completely hooked.

The latest live album I’ve bought is ‘The Siege Of Mercia: Live At Bloodstock 2017’ by Winterfylleth. It’s amazing for a number of reasons. I was at Bloodstock, watching the band, when it was recorded. It’s a fantastic performance of some brilliant songs. It’s got an atmospheric, synth version of an old track at the end. And it’s encouraged me to relisten to their studio albums and enjoy them so much more.

Winterfylleth are a black metal band from Manchester. Their lyrics are based around England’s rich culture and heritage. Some of their album covers and songs depict the Peak District. You’d have thought a song about Mam Tor, a hill in the Peak District, might be a bit boring, but it’s not! Maybe because, being black metal, you can’t really hear the words, but as always the vocal style, heavy guitars and fast drums make for the perfect mix.

I currently have The Siege of Mercia, along with some similar new albums from the likes of Borknagar, on my regular daily playlist, and it just gets better and better.

Three books discuss three small data sets

Derek Jones from The Shape of Code

During the early years of a new field, experimental data relating to important topics can be very thin on the ground. Ever since the first computer was built, there has been a lot of data on the characteristics of the hardware. Data on the characteristics of software, and of the people who write it, has been (and often continues to be) very thin on the ground.

Books are sometimes written by the researchers who produce the first data associated with an important topic, even if the data set is tiny; being first often generates enough interest for a book length treatment to be considered worthwhile.

As a field progresses, lots more data becomes available, and the discussion in subsequent books can be based on findings from more experiments and lots more data.

Software engineering is a field where a few ‘first’ data books have been published, followed by silence, or rather lots of arm waving and little new data. The fall of Rome has been followed by a 40-year dark-age, from which we are slowly emerging.

Three of these ‘first’ data books are:

  • “Man-Computer Problem Solving” by Harold Sackman, published in 1970, relating to experimental data from 1966. The experiments investigated the impact of two different approaches to developing software on programmer performance (i.e., batch processing vs. on-line development; code+data). The first paper on this work appeared in an obscure journal in 1967, and was followed in the same issue by a critique pointing out the wide margin of uncertainty in the measurements (the critique agreed that running such experiments was a laudable goal).

    Failing to deal with experimental uncertainty is nothing compared to what happened next. A 1968 paper in a widely read journal, the Communications of the ACM, contained the following table (extracted from a higher quality scan of a 1966 report by the same authors, and available online).

    Developer performance ratios.

    The tale of the 1:28 ratio of programmer performance, found in an experiment by Grant/Sackman, took off (the technical detail that a lot of the difference was down to the techniques subjects used, and not the people themselves, got lost). The Grant/Sackman ‘finding’ used to be frequently quoted in some circles (or at least it was when I moved in them; I don’t know how often it is cited today). In 1999, Lutz Prechelt wrote an exposé on the sorry tale.

    Sackman’s book is very readable, and contains lots of details and data not present in the papers, including survey data and a discussion of the intrinsic uncertainties associated with the experiment; it also contains the table above.

  • “Software Engineering Economics” by Barry W. Boehm, published in 1981. I wrote about the poor analysis of the data contained in this book a few years ago.

    The rest of this book contains plenty of interesting material, and even sounds modern (because books moving the topic forward have not been written).

  • “Program Evolution: Process of Software Change” edited by M. M. Lehman and L. A. Belady, published in 1985, relating to experimental data from 1977 and before. Lehman and Belady managed to obtain data relating to 19 releases of an IBM software product (yes, 19, not nineteen-thousand); the data was primarily the date and number of modules contained in each release, plus less specific information about number of statements. This data was sliced and diced every which way, and the book contains many papers with the same data appearing in the same plot with different captions (had the book not been a collection of papers it would have been considerably shorter).

    With a lot less data than Isaac Newton had available to formulate his three laws, Lehman and Belady came up with five, six, seven… “laws of software evolution” (which themselves evolved with the publication of successive papers).

    The availability of open source repositories means there is now a lot more software system evolution data available. Lehman’s laws have not stood the test of more data, although people still cite them every now and again.

On The Octogram Of Seth LaPod – student

student from thus spake a.k.

The latest wager that the Baron put to Sir R----- had them competing to first chalk a triangle between three of eight coins, with Sir R----- having the prize if neither of them managed to do so. I immediately recognised this as the game known as Clique and consequently that Sir R-----'s chances could be reckoned by applying the pigeonhole principle and the tactic of strategy stealing. Indeed, I said as much to the Baron but I got the distinct impression that he wasn't really listening.

Comparing expression usage in mathematics and C source

Derek Jones from The Shape of Code

Why does a particular expression appear in source code?

One reason is that the expression is the coded form of a formula from the application domain, e.g., E=mc^2.

Another reason is that the expression calculates an algorithm/housekeeping related address, or offset, to where a value of interest is held.

Most people (including me, many years ago) think that the majority of source code expressions relate to the application domain, in one way or another.

Work on a compiler’s optimizer, and you will soon learn the truth: most expressions are simple and calculate addresses/offsets. Optimizing compilers would not have much to do if they only relied on expressions from the application domain (my numbers tool throws something up every now and again).

What are the characteristics of application domain expressions?

I like to think of them as being complicated, but that’s because it used to be in my interest for them to be complicated (I used to work on optimizers, which have the potential to make big savings if things are complicated).

Measurements of expressions in scientific papers are needed, but who is going to be interested in measuring the characteristics of mathematical expressions appearing in papers? I’m interested, but not enough to do the work. Then, a few weeks ago, I discovered An Analysis of Mathematical Expressions Used in Practice, by Clare So; an analysis of 20,000 mathematical papers submitted to arXiv between 2000 and 2004.

The following discussion uses the measurements made for my C book as the representative source code (I keep suggesting that detailed measurements of other languages are needed, but nobody has jumped in and made them, yet).

The table below shows percentage occurrence of operators in expressions. Minus is much more common than plus in mathematical expressions, the opposite of C source; the ‘popularity’ of the relational operators is also reversed.

Operator  Mathematics   C source
=         0.39          3.08
-         0.35          0.19 
+         0.24          0.38
<=        0.06          0.04
>         0.041         0.11
<         0.037         0.22

The most common single binary operator expression in mathematics is n-1 (the data counts expressions using different variable names as different expressions; yes, n is the most popular variable name, and adding up other uses does not change the relative frequency by much). In C source, var+int_constant is around twice as common as var-int_constant.
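
For a flavour of how such counts can be produced, here is a crude sketch (my illustration, not the tooling behind the book’s measurements); it tokenises C source with a regular expression and tallies the operators of interest, deliberately ignoring details a real tool must handle, such as comments, string literals, and unary versus binary minus:

    # Crude operator-frequency sketch for C source; a real measurement tool
    # would parse properly rather than pattern match raw text.
    import re
    from collections import Counter

    OPERATORS = ["<=", ">=", "==", "!=", "=", "+", "-", "<", ">"]
    # Longest-first alternation so "<=" is not tokenised as "<" then "=".
    op_pattern = re.compile("|".join(re.escape(op) for op in OPERATORS))

    def operator_percentages(source):
        """Return each operator's share (in %) of all operator occurrences."""
        counts = Counter(op_pattern.findall(source))
        total = sum(counts.values())
        return {op: 100.0 * counts[op] / total for op in OPERATORS if counts[op]}

    c_source = """
    for (i = 0; i < n - 1; i++) {
        if (a[i] > a[i + 1])
            swap(&a[i], &a[i + 1]);
    }
    """
    print(operator_percentages(c_source))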

The plot below shows the percentage of expressions containing a given number of operators (I've made a big assumption about exactly what Clare So is counting; code+data). The operator count starts at two because that is where the count starts for the mathematics data. In C source, around 99% of expressions have fewer than two operators, so the simple case completely dominates.

Percentage of expressions containing a given number of operators.

For expressions containing between two and five operators, the frequency of occurrence is roughly the same in mathematics and C, with the C frequency decreasing more rapidly. The data disagrees with me again...