Product Owner or Backlog Administrator?

Allan Kelly from Allan Kelly Associates


In the official guides all Product Owners are equal. One size fits all.

In the world I live in some Product Owners are more equal than others and one size does not fit all.

The key variable here is the amount of Authority a Product Owner has. In my last post I said that Authority is one of the four things every product owner needs – the others being legitimacy, skills and time. However there is a class of Product Owner who largely lack authority and who I have taken to calling Backlog Administrators.

About the only thing a Backlog Administrator owns is their Jira login. They are at the beck and call of one or more people who tell them what should be in the backlog. Prioritisation is little more than an exercise in decibel management – he who shouts loudest gets what they want.

A Backlog Administrator rarely throws anything out of the backlog, they don’t feel they have the authority to do so. As a result their backlogs are constipated – lots of stories, many of little value. Fortunately Jira knows no limits, it is a bottomless pit – just don’t draw a CfD or Burn-Up chart!

If the team are lucky the Backlog Administrator can operate as a Tester, they can review work which is in progress or possibly “done.” They may be able to add acceptance criteria. If the team are unlucky the Backlog Administrator doesn’t know enough about the domain to do testing.

I would be the first to say that the Product Owner role can vary a great deal: different individuals working with different teams, in different domains, for different types of company mean that, beyond backlog administration, there is inherently a lot of variability in the role.

Whoever fills the Product Owner role should be capable of deciding what to build and/or change.

So Product Owners need to know what the most valuable thing to do is. Part of the job is finding out what is valuable. While backlog administration is part of the job, the question one should ask is:

How does the Product Owner know what they need to know to do that?

Backlog Administrators are little more than gophers for more senior people.

True Product Owners take after full Product Managers and Senior Business Analysts – or a special version of Business Analysts sometimes called Business Partners.

Product Owners should be out meeting customers and observing users. They should be talking about technology options with the technical team and interface design options with UXD.

Product Owners should understand commercial pressures – how the product makes (or saves) money for the company. Product Owners are responsible for product strategy, so they should both understand company strategy and feed into it: product strategy supports company strategy and informs it in turn.

Product Owners may need to survey the competitive landscape, keep an eye on competitors and understand relevant technology trends. That probably means attending trade shows, and even supporting sales people when asked.

Frequently Product Owners will require knowledge of the domain, i.e. the field in which the product is used. Sometimes – as in telecoms or surveying – that may require actual hands-on experience.

And apart from backlog administration there is a lot of work to do to deliver the things they want delivered: working with the technical team to explain stories, having the conversations behind the stories, writing acceptance criteria, attending planning meetings, perhaps helping to interview new staff, and sharing all the things they learn from meeting customers, analysing competitors, debating strategy, attending shows, etc. etc.

I’m sure there are many who would rush to call the Backlog Administrator an “anti-pattern” but since I don’t believe in anti-patterns I won’t. I just think a Product Owner should be more than a Backlog Administrator.

The post Product Owner or Backlog Administrator? appeared first on Allan Kelly Associates.

Estimating the number of distinct faults in a program

Derek Jones from The Shape of Code

In an earlier post I gave two reasons why most fault prediction research is a waste of time: 1) it ignores usage (e.g., more heavily used software is likely to have more reported faults than rarely used software), and 2) the data in public bug repositories contains lots of noise (i.e., a lot of cleaning needs to be done before any reliable analysis can be done).

Around a year ago I found out about a third reason why most estimates of the number of faults remaining are nonsense: not enough signal in the data. The date/time of first discovery of a distinct fault does not contain enough information to distinguish between possible exponential order models (technical details: practically all models are derived from the exponential family of probability distributions); controlling for usage and cleaning the data is not enough. Having spent a lot of time, over the years, collecting exactly this kind of information, I was very annoyed.

The information required, to have any chance of making a reliable prediction about the likely total number of distinct faults, is a count of all fault experiences, i.e., multiple instances of the same fault need to be recorded.

The correct techniques to use are based on work that dates back to Turing’s work breaking the Enigma codes; people have probably heard of Good-Turing smoothing, but the slightly later work of Good and Toulmin is applicable here. The person whose name appears on nearly all the major (and many minor) papers on population estimation theory (in ecology) is Anne Chao.

The Chao1 model (as it is generally known) is based on a count of the number of distinct faults that occur once and twice (the Chao2 model applies when presence/absence information is available from independent sites, e.g., individuals reporting problems during a code review). The estimated lower bound on the number of distinct items in a closed population is:

$S_{est} \ge S_{obs} + \frac{n-1}{n}\,\frac{f_1^2}{2f_2}$

and its standard deviation is:

$S_{sd\text{-}est} = \sqrt{f_2 \left[ 0.25k^2 \left(\frac{f_1}{f_2}\right)^4 + k^2 \left(\frac{f_1}{f_2}\right)^3 + 0.5k \left(\frac{f_1}{f_2}\right)^2 \right]}$

where: $S_{est}$ is the estimated number of distinct faults, $S_{obs}$ the observed number of distinct faults, $n$ the total number of fault experiences, $f_1$ the number of distinct faults that occurred once, $f_2$ the number of distinct faults that occurred twice, and $k=\frac{n-1}{n}$.

A later improved model, known as iChao1, includes counts of distinct faults occurring three and four times.
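As a concrete illustration, here is a small Python sketch of the Chao1 point estimate and its standard deviation as given above. The function name is mine, and the fallback branch when no fault occurs exactly twice (the standard bias-corrected form of Chao1) is an addition not discussed in the post:

```python
from collections import Counter
from math import sqrt

def chao1(fault_ids):
    """Chao1 lower-bound estimate of the total number of distinct faults,
    plus its standard deviation, from a list of fault experiences
    (one entry per observed failure, labelled by the distinct fault
    that caused it)."""
    counts = Counter(fault_ids)                      # occurrences per distinct fault
    s_obs = len(counts)                              # S_obs: distinct faults seen
    n = sum(counts.values())                         # n: total fault experiences
    f1 = sum(1 for c in counts.values() if c == 1)   # faults seen exactly once
    f2 = sum(1 for c in counts.values() if c == 2)   # faults seen exactly twice
    k = (n - 1) / n
    if f2 == 0:
        # bias-corrected variant (assumption: standard fallback, not in the post)
        return s_obs + k * f1 * (f1 - 1) / 2, None
    s_est = s_obs + k * f1 ** 2 / (2 * f2)
    r = f1 / f2
    sd = sqrt(f2 * (0.25 * k**2 * r**4 + k**2 * r**3 + 0.5 * k * r**2))
    return s_est, sd
```

Note that the estimator needs the full multiset of fault experiences, not just first-discovery dates – exactly the point the post makes about why most public bug data is unusable here.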

Where can clean fault experience data, where the number of inputs has been controlled, be obtained? Fuzzing has become very popular during the last few years and many of the people doing this work have kept detailed data that is sometimes available for download (other times an email is required).

Kaminsky, Cecchetti and Eddington ran a very interesting fuzzing study, where they fuzzed three versions of Microsoft Office (plus various Open Source tools) and made their data available.

The faults of interest in this study were those that caused the program to crash. The plot below (code+data) shows the expected growth in the number of previously unseen faults in Microsoft Office 2003, 2007 and 2010, along with 95% confidence intervals; the x-axis is the number of faults experienced, the y-axis the number of distinct faults.

Predicted growth of unique faults experienced in Microsoft Office

The take-away point: if you are analyzing reported faults, the information needed to build models is contained in the number of times each distinct fault occurred.

April nor(DEV): A.I. and Cognitive Computing with Watson & Keep Secure and Under the Radar

Paul Grenyer from Paul Grenyer

What:  A.I. and Cognitive Computing with Watson & Keep Secure and Under the Radar

When: Wednesday 4th April, 6.30pm to 9pm.

Where: Whitespace, 2nd Floor, St James' Mill, Whitefriars, NR3 1TN

RSVP: https://www.meetup.com/Norfolk-Developers-NorDev/events/242231165/

A.I. and Cognitive Computing with Watson
Colin Mower

Artificial Intelligence and Cognitive Computing have become the latest buzzwords in the industry, with companies big and small rushing to work out how they can take advantage of this emerging technology.

In this discussion, we’ll look at the myths behind the hype, how mature the technology is and how IBM’s Watson has evolved from game show winner to one of the market leaders.

Colin works for IBM as a Technical Leader, crossing all the IBM technologies and services. Prior to Big Blue, he worked in Aviva for over 14 years and has contributed to nor(DEV):con and Norfolk Developer Meetups.

He still lives in Norfolk and apart from plenty of travel working for some of the big blue chip companies, he tries to get out in South Norfolk running and cycling in a vain attempt to lose weight and keep fit.


Keep Secure and Under the Radar
David Higgins

Some basic and some not so basic steps to keep you and your business safe in the on-line business arena.

David is an ex-UK Government contractor who discusses simple steps you need to take to stay ahead of current data security legislation and keep yourself and your business secured.

On Natural Analogarithms – student

student from thus spake a.k.

Last year my fellow students and I spent a goodly portion of our free time considering the similarities of the relationships between sequences and series and those between derivatives and integrals. During the course of our investigations we deduced a sequence form of the exponential function $e^x$, which stands alone in satisfying the equations

    D f = f
  f(0) = 1

where D is the differential operator, producing the derivative of the function to which it is applied.
This set us to wondering whether or not we might endeavour to find a discrete analogue of its inverse, the natural logarithm ln x, albeit in the sense of being expressed in terms of integers rather than being defined by equations involving sequences and series.
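The post leaves its construction for later, but the flavour of a discrete analogue can be sketched under one assumption of mine: let the forward difference operator $(\Delta a)_n = a_{n+1} - a_n$ stand in for $D$. Then $\Delta a = a$ with $a_0 = 1$ forces $a_{n+1} = 2a_n$, i.e. the sequence $2^n$:

```python
def discrete_exponential(terms):
    """First `terms` values of the sequence satisfying Delta a = a, a_0 = 1,
    where Delta is the forward difference operator."""
    a = [1]
    while len(a) < terms:
        a.append(2 * a[-1])   # a[n+1] - a[n] = a[n]  =>  a[n+1] = 2*a[n]
    return a

def forward_difference(a):
    """(Delta a)_n = a_{n+1} - a_n for the available terms."""
    return [a[i + 1] - a[i] for i in range(len(a) - 1)]

seq = discrete_exponential(6)                     # [1, 2, 4, 8, 16, 32]
assert forward_difference(seq) == seq[:-1]        # Delta a agrees with a
```

This is only an illustrative analogue; the series the students actually derived may take a different form.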

Linux & SQL Server at MigSolv a Review

Paul Grenyer from Paul Grenyer

We love the MigSolv data centre out at Bowthorpe in Norwich. This was nor(DEV):’s second visit and they always make us very welcome. Walking into what feels like a massive Blake’s 7 set and getting the tour, including the retina scanner and massive server hall, is incredible and seriously interesting (even though it’s my third time!).

The intimacy of the board room with the table down the centre and nor(DEV): members arranged each side is great for generating conversation! And when you have a humorous and huge personality like Mark Pryce-Maher it encourages the banter and the discussion even more! It’s safe to say this was one of the most interactive nor(DEV): evening presentations for some time.

Mark was there to tell us about how you can run Microsoft SQL Server on Linux (or is that “Lynux”?). Anyone would think Mark had been on the WINE, but no, you really can run SQL Server natively on Linux now. The first question though, has to be “why?”. The answer is simple. Microsoft are going after geeks, Oracle users and Linux houses who only run Windows to run SQL server.

The second question is “how?”. Developers at Microsoft discovered that, despite the vast number of methods available from the Win32 API, there are only a small number of methods which actually talk to the operating system. These are for allocating memory, disc storage, etc. A project called Drawbridge was developed to identify these methods and port them to Linux. SQL Server can then make use of those methods to run on Linux. Simples!

Mark did a live demo of installing and connecting to SQL Server. Unfortunately he hadn’t made sufficient sacrifices to the demo gods and things didn’t go precisely to plan. SQL Server can be run on an Ubuntu instance on Microsoft’s Azure from about £1/day (I’m intending to try it on a Digital Ocean droplet which is slightly cheaper). It’s incredibly easy to install. You just add the necessary repositories to Ubuntu’s package manager and tell it to install SQL Server. There’s also a pre-made Docker image (if Docker is your thing) which is even quicker.

Microsoft have developed an open source version of the client tools called Microsoft Operations Studio. It is also very easy to install (I did it on my Linux Mint laptop over 4G while Mark was speaking), but for some reason during the demo it just wouldn’t connect to SQL Server. However, Mark talks a great talk and I’m sure with a little bit more playing it would have!

We enjoyed being at MigSolv and hearing from Mark! MigSolv would like us to go back and we’re keen to do so in the future.

The next nor(DEV): is on 4th April and features “A.I. and Cognitive Computing with Watson” from Colin Mower of IBM and “Keep Secure and Under the Radar” from David Higgins. RSVP here: https://www.meetup.com/Norfolk-Developers-NorDev/events/242231165/

emBO++ 2018 Trip Report

Simon Brand from Simon Brand

emBO++ is a conference focused on C++ on embedded systems in Bochum, Germany. This was its second year of operation, but the first that I’ve been along to. It was a great conference, so I’m writing a short report to hopefully convince more of you to attend next year!

Format

The conference took place over four days: an evening of lightning talks and burgers, a workshop day, a day of talks (what you might call the conference proper), and finally an unofficial standards meeting for those interested in SG14. This made for a lot of variety, and each day was valuable.

Venue

One thing I really enjoyed about emBO++ was that the different tech and social events were dotted around the city. This meant that I actually got to see some of Bochum, get lost navigating the train system, walk around town at night, etc., which made a nice change from being cooped up in a hotel for a few days.

The main conference venue was the Zentrum für IT-Sicherheit (Centre for IT Security). It was a spacious building with a lot of light and large social areas, so it suited the conference environment well. The only problem was that it used to be a military building and was lined with copper, making the building one huge Faraday cage. This meant that WiFi was an issue for the first few hours of the day, but it got sorted eventually.


Food and Drink

The catering at the main conference location was really excellent: a variety of tasty food with healthy options and large quantities. Even better was the selection of drinks available, which mostly consisted of interesting soft drinks I’d never seen before, such as bottled matcha with lime and a number of varieties of mate. All the locations we went to for food and drinks were great – especially the speakers’ dinner. A lot of thought was obviously put into this area of the conference, and it showed.

Workshops

There were four workshops on the first day of the conference with two running in parallel. The two which I attended were very interesting and instructive, but I wish that they had been more hands-on.

Jörn Seger – Getting Started with Yocto

I was in two minds about attending this workshop. We need to use Yocto a little bit in my current project, so I could attend the workshop in order to gain more knowledge about it. On the other hand, I’d then be the most experienced member of my team in Yocto and would be forced to fix all the issues!

In the end I decided to go along, and it was definitely worthwhile. Previously I’d mostly muddled along without an understanding of the fundamentals of the system; this workshop provided those.

Kris Jusiak – Embedding a Compile-Time-State-Machine

Kris gave a workshop on Boost.SML, which is an embedded domain specific language (EDSL) for encoding expressive, high-performance state machines in C++. The library is very impressive, and it was valuable to see all the different use-cases it supports and how it supports switching out the frontend and backend of the system. I was particularly interested in this session as my talk the next day was on EDSLs, so it was an opportunity to steal some things to mention in my talk.

You can find Boost.SML here.

Talks

There were two tracks for most of the day, with the first and final ones being plenary sessions. There was a strong variety of talks, and I felt that my time was well-spent at all of them.

Simon Brand – Embedded DSLs for Embedded Programming

My talk! I think it went down well. I got some good engagement and questions from the audience, although not much feedback from the attendees later on in the day. I guess I’ll need to wait for it to get torn apart on YouTube.


Klemens Morgenstern – Developing high-performance Coroutines for ARMs

Klemens gave an excellent talk about an ARM coroutine library which he implemented. This talk has nothing to do with the C++ Coroutines TS, instead focusing on how coroutines can be implemented in a very low-overhead manner. In Klemens’ library, the user provides some memory to be used as the stack for the coroutine, then there are small pieces of ARM assembly which perform the context switch when you suspend or resume that coroutine. The talk went into the performance considerations, implementation, and use in just the right amount of detail, so I would definitely recommend watching if you want an overview of the ideas.

The library and presentation can be found here.

Emil Fresk – The compile-time, reactive scheduler: CRECT

CRECT is a task scheduler which carries out its work at compile time, therefore almost entirely disappearing from the generated assembly. Emil’s lofty goal for the talk was to present all of the necessary concepts such that those viewing the talk would feel like they could go off and implement something similar afterwards. I think he mostly succeeded in this – although a fair amount of metaprogramming skills would be required! He showed how to use the library to specify the jobs which need to be executed, the resources which they require, and when they should be scheduled to run. After we understood the fundamentals of the library, we learned how this actually gets executed at compile-time in order to produce the final scheduled output. Highly recommended for those who work with embedded systems and want a better way of scheduling their work.

You can find CRECT here.

Ben Craig – Standardizing an OS-less subset of C++

If you watch one talk from the conference it should be this one. C++ has had a “freestanding” variant for a long time, and it’s been neglected for the same amount of time. Ben talked about all the things which should not be available in freestanding mode but are, and those which should be but are not. He presented his vision for what should be standards-mandated facilities available in freestanding C++ implementations, and a tentative path to making this a reality. Particularly of interest were the odd edge cases which I hadn’t considered. For example, it turns out that std::array has to #include <string> somewhere down the line, because my_array.at(n) can throw an exception (std::out_of_range), and that exception has a constructor which takes std::string as an argument. These tiny issues will make getting a solid standard for freestanding difficult to pin down and agree on, but I think it’s a worthy cause.

Ben’s ISO C++ paper on a freestanding standard library can be found here.

Jacek Galowicz — Scalable test infrastructure for advanced bare-metal software development

Jacek’s team has many different hardware versions to support, which creates the risk of changes introducing regressions in some versions and not others. This talk showed how they developed testing infrastructure to exercise all the hardware versions they need on each merge request, ensuring that bad commits aren’t merged into the master branch. They wrote a simple testing framework in Haskell, fine-tuned to their use case, rather than using an existing solution like Jenkins (which is what we use at Codeplay for solving the same problem). Jacek spoke about issues they faced and the creative solutions they put in place, such as putting a light detector over the CAPS LOCK LED of a keyboard and making it blink in Morse code in order to communicate out from machines with no usable ports.

Odin Holmes – Bare-Metal-Roadmap

Odin’s talk summed up some current major problems that are facing the embedded community, and roped together all of the talks which had come before. It was cool to see the overlap in all the talks in areas of abstraction, EDSLs, making choices at compile time, etc.

Closing

I had a great time at emBO++ and would whole-heartedly recommend attending next year. The talks should be online in the next few months, so I look forward to watching those which I didn’t attend. The conference is mostly directed at embedded C++ developers, but I think it would be valuable to anyone doing low-latency programming on non-embedded systems, or those writing C/Rust/whatever for embedded platforms.

Thank you to Marie, Odin, Paul, Stephan, and Tabea for inviting me to talk and organising these great few days!


ACME DNS Validation

Christof Meerwald from cmeerw.org blog

I was looking at modifying acme tiny to support DNS-01 validation with a custom PowerDNS backend just a few days ago (in my case to get certificates for an XMPP server where there isn't a corresponding HTTP server or the HTTP server is hosted on a different machine). This work is available from Subversion: pdns-acme-backend.

Interestingly, I am just reading that Let's Encrypt is now supporting wildcard certificates that need to be validated using the DNS-01 challenge type.