CppQuiz.org now uses C++17

Anders Schau Knatten from C++ on a Friday

CppQuiz.org is an open source C++ quiz site run by me, with contributions from the C++ community. If you’re unfamiliar with it, you can read more in its “About” section.

Thanks to great help from Nicolas Lesser and Simon Brand, all the questions and explanations on CppQuiz.org have now been updated from C++11 to C++17. This task would have taken me a very long time to do on my own, but thanks to Nicolas and Simon, this was all done in no more than four days!

If you want to test your knowledge of C++, try out the quiz now!

If you enjoyed this post, you can subscribe to my blog, or follow me on Twitter.

#NoProjects: Project Myopia is published

Allan Kelly from Allan Kelly Associates

Project Myopia – the original case for #NoProjects – has been a long time in the works but it is now done. Published. For sale on Amazon.

Projects fail. Some say 40% of all IT projects fail, some say 70%. And it has been that way for years. Each project fails for its own reasons but they all share one thing in common: the Project Model. Could it be the project model itself which creates failure?

Projects end. Successful software continues. Twenty-first century digital businesses want to continue and grow.

Project Myopia is available to buy on Amazon today – the physical version should join the eBook in a few days.

Project Myopia gives the case against projects – the hard-core #NoProjects arguments. A second book, Continuous Digital, will join Project Myopia on Amazon in a few weeks. Right now copyediting isn’t finished on Continuous Digital, and the physical copy needs to be worked out. In the meantime, late drafts of Continuous Digital are available on LeanPub.

The post #NoProjects: Project Myopia is published appeared first on Allan Kelly Associates.

Reidolized

Paul Grenyer from Paul Grenyer

So Blackie Lawless decided to re-record my favorite album of all time, and my initial thought was: why did he bother? I’d avoided buying it for months, until last Friday when I was trying out some new headphones with the re-recorded guitar solo from The Idol and quite enjoyed the layers and mix of the new version of the song. Although I’m not a musician, it feels like I know every note, and I could really hear the difference. It’s not the original guitarist, and consequently the solo isn’t the same, or as good.


Having now listened to the first disc of the re-record, I can only say there is one improvement: the extra songs, at least one of which was taken from WASP’s latest studio album and is magnificent. The removal of the swearing from the songs and intros feels wrong, a cop-out, and the lead guitar work just isn’t up to Bruce Kulik’s incredible standard.

The second disc is starting now…

Cuboid Space Division – a.k.

a.k. from thus spake a.k.

Over the last few months we have been taking a look at algorithms for interpolating over a set of points (xi,yi) in order to approximate values of y between the nodes xi. We began with linear interpolation which connects the points with straight lines and is perhaps the simplest interpolation algorithm. Then we moved on to cubic spline interpolation which yields a smooth curve by specifying gradients at the nodes and fitting cubic polynomials between them that match both their values and their gradients. Next we saw how this could result in curves that change from increasing to decreasing, or vice versa, between the nodes and how we could fix this problem by adjusting those gradients.
I concluded by noting that, even with this improvement, the shape of a cubic spline interpolation is governed by choices that are not uniquely determined by the points themselves and that linear interpolation is consequently a more mathematically appropriate scheme, which is why I chose to generalise it to other arithmetic types for y, like complex numbers or matrices, but not to similarly generalise cubic spline interpolation.

The obvious next question is whether or not we can also generalise the nodes to other arithmetic types; in particular to vectors so that we can interpolate between nodes in more than one dimension.
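As a rough illustration of what generalising the nodes to vectors means (this is a toy sketch, not the scheme developed in the series), the simplest multi-dimensional analogue of linear interpolation is bilinear interpolation over a rectangle of two-dimensional nodes:

```python
def bilinear(x0, x1, y0, y1, f00, f10, f01, f11, x, y):
    """Bilinearly interpolate a function over the rectangle [x0,x1] x [y0,y1].

    fij is the function's value at corner (xi, yj); (x, y) is the query point.
    """
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    # Linearly interpolate along x on each horizontal edge...
    lower = f00 * (1.0 - tx) + f10 * tx  # edge at y0
    upper = f01 * (1.0 - tx) + f11 * tx  # edge at y1
    # ...then linearly interpolate between the edges along y.
    return lower * (1.0 - ty) + upper * ty

# Midpoint of the unit square with corner values 0, 1, 1, 2:
print(bilinear(0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 2.0, 0.5, 0.5))  # 1.0
```

Note that this composes one-dimensional linear interpolations, which is why generalising the one-dimensional scheme first is a natural stepping stone.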

Help save CppQuiz.org! We’re porting to C++17.

Anders Schau Knatten from C++ on a Friday

[start helping now]

CppQuiz.org is an open source C++ quiz site run by me, with contributions from the C++ community. If you’re unfamiliar with it, you can read more in its “About” section.

All the CppQuiz questions currently target C++11. We need to update them for C++17. Most questions will still have the same answers; we just need to update the explanations and references to the standard. A few questions will also have different answers.

Doing this all by myself is going to take months, so I would very much appreciate some help from the community. Everyone who helps will be credited in a blog post when the porting is done.

To make porting as simple as possible, I’ve created a repository containing all the questions. There is a directory for each question, named after the question number. That directory contains the source code of the question in a .cpp file, the hint and explanation in .md files, as well as a README.md explaining everything you need to do to port the question. There’s also an issue for each question, making it easier to track progress. The issue has the same information as the README.md file.

As soon as we’ve updated all the questions in this repository, I’ll import them back into CppQuiz, and from then on CppQuiz will always ask about C++17.

How to help porting questions

There are two ways to contribute, listed below. I prefer the first alternative, but if that prevents you from contributing, the second is also ok.

Contributing using a fork and pull requests

  1. Fork the repo by clicking the “Fork” button at the top right.
  2. Pick the issue for a question you want to port. Add a comment that you’ll be handling that issue.
  3. Follow the instructions in the issue to port the question.
  4. Make a pull request. Porting several questions in one PR is fine.

Contributing without a fork

If you think forking and PRs are too cumbersome, or you are not able to do this for other reasons, I’d still appreciate your help:

  1. Pick the issue for a question you want to port. Add a comment that you’ll be handling that issue.
  2. Follow the instructions in the issue to port the question.
  3. Paste the updated files as comments to the issue.

Other ways to help

  • Look for questions labeled “help wanted”. It means the person responsible needs a second pair of eyes.
  • Look at pull requests, review them and comment “LGTM” if they should be merged.
  • Other ideas for help are also welcome, please get in touch (see “Questions” below).

Questions

If you have any questions, either file an issue in the repo, contact @knatten on Twitter, or email me at anders@knatten.org.

[start helping now]

Bulk adding items to Wunderlist using wunderline on Ubuntu MATE

Andy Balaam from Andy Balaam's Blog

If you use Wunderlist and want to be able to bulk-add tasks from a text file, first install and set up wunderline.

Now, to be able to right-click a text file containing one task per line on Ubuntu MATE, create a file called “wunderlist-bulk-add” in ~/.config/caja/scripts/ and make it executable. Paste the code below into that file.

(Note: it’s possible that this would work in GNOME if you replaced “caja” with “nautilus” in that path – let me know in the comments.)

#!/usr/bin/env bash

set -e
set -u
set -o pipefail

function nonblank()
{
    grep -v -e '^[[:space:]]*$' "$@"
}

for F in "$@"; do
{
    if [ ! -f "$F" ]; then
    {
        # grep would fail (and "set -e" would abort the script) on a
        # missing file, so check for existence up front
        zenity --info --no-wrap --text="File $F does not exist"
    }
    else
    {
        COUNT=$(nonblank "$F" | wc -l | awk '{print $1}')
        set +e
        zenity --question --no-wrap \
            --text="Add $COUNT items from $F to Wunderlist?"
        ANSWER=$?
        set -e

        if [ "$ANSWER" = "0" ]; then
        {
            set +e
            nonblank "$F" | \
                wunderline add --stdin | \
                zenity --progress --text "Adding $COUNT items to Wunderlist"
            SUCCEEDED=$?
            set -e
            if [ "$SUCCEEDED" = "0" ]; then
            {
                zenity --info --no-wrap --text "Added $COUNT items."
            }
            else
            {
                zenity --error --no-wrap \
                    --text "Failed to add some or all of these items."
            }; fi
        }; fi
    }; fi
}; done

Release or be damned

Allan Kelly from Allan Kelly Associates

Back when I was still paid to code I had a simple question I posed to troubled development efforts:

“Why can’t we release tomorrow?”

This short, simple question turns out to be amazingly powerful. I remember one effort I was involved with in California where a new CEO took over and started cutting jobs. I posed this question to the team and in a week or two we did a “beta release” – we did those sorts of things back then. Asking this question was the key that allowed us to question everything, to cut the feature list – or rather push work back; it stayed on the to-do list but we didn’t let it stop us from pushing to release.

We rethought what we were trying to achieve: we didn’t need the whole product, we just needed enough of the product to work to show to one specific target customer. Even if they signed there and then, we had weeks before they used it in anger. But until we released something, until we had something “done”, our team and our product looked like just another “maybe.” We had to draw a line under it so the new CEO wouldn’t draw a line under us.

Saying “only do the essential” is easy and comes up again and again – whether as Minimal Viable Product, Minimal Subset, or the Must haves in MoSCoW rules – but it is far easier said than done. One person’s “essential” is so often another person’s “optional extra.” In this context, when I say “essential” I mean “the parts needed to make the system work end to end” – I’m far closer to the old walking skeleton idea.

I was reminded of this question by a couple of endeavours that came to my attention during the summer. Well, I say came to my attention; I feel a bit responsible. Both endeavours are happening at clients, clients I had fallen out of touch with. My style of working is to help clients who want help; I don’t like selling myself. These clients didn’t ask for more help, so I didn’t jam my foot in the door – in retrospect maybe I should have.

In one case the team were doing very well. They were iterating, they were TDD/BDD’ing, they were demoing, they were working with the client, they were doing everything … except releasing. Then one day the client asked “when will it be done?”

Now think for a moment: What if you could release your product tomorrow?

The thing is, without actual products those around the team look for signs that the team can be trusted, that the team will deliver, that the team are thinking about what is to be done. People ask for proxy-products: plans, schedules, risk-logs, budget forecasts and so on. When stakeholders can’t see progress they look for things to assure them that there is (or will be) progress (soon).

Who needs plans and predictions about the future when the future is here tomorrow?

Actual releases are the key to reaching the new world; they change everything.

So I feel guilty: I should have inflicted myself on these teams, I should have been there again and again bugging them “Go to release”, “Remove that barrier”, “Force it through”.

Being able to ship an update of your product has a transformative effect.

It demonstrates the team have the ability to do the job in hand.
It demonstrates you have quality. It obliterates the need for a test-fix-test-fix aka stabilisation aka hardening phase.
It blows away sunk costs because something has been delivered.
It removes “maybe” and “ready but…”
It is probably the greatest risk mitigation strategy possible.
It creates trust and provides a platform for solid conversations.

Most of all, a released product is a far better statement of progress than any number of plans or forecasts.

This does not mean everything is done. Sure there are things left undone but there will be things left undone when I’m on my deathbed, that is the nature of life. As much as we (especially men) love to collect entire sets there are few prizes in life for completing everything on your bucket list.

Having a released product utterly changes the nature of the conversation. Conversations are no longer full of “ifs” “maybes” “shoulds” “how long will it take?” “what are the quick wins?”. Those questions can go away. In its place you can have serious conversations about prioritisation and “what do you want tomorrow?”

This is all part of the reason I love continuous delivery. Teams can focus on real priorities and stop wasting time on conjecture.

In my book if you don’t have a releasable product at least every two weeks – say every second Thursday – you are not Agile. And if you haven’t released a product to live in the last two weeks you are probably not Agile.

I don’t care how close you get to a releasable product: it isn’t a release if it isn’t released to a live environment – close but no cigar, as they say. (OK, I’ll accept the live environment may not be publicly known, or might be called a beta, but it has to be the real thing.)

Nor should you rest on your laurels once you have regular releases (to live) every second week. That is but first base. You have opened the door, now go further. There are at least 13 opportunities to improve.

If you cannot do that now then ask yourself: Why can’t we release tomorrow?

And start working to remove those obstacles:

  • Reduce the number of work items you are aiming to put in the release.
  • Fix show-stopper defects now.
  • Run tests now.
  • Get those people who need to sign-off to sign-off.

Software development has diseconomies of scale: many small is cheaper than few large.

And once you have your release you can turn your attention to making sure these things don’t happen again:

  • Reduce the amount of work you accept into development at one time.
  • Fix every defect as soon as it is found.
  • Automate tests so they can run more often. (Automate anything that moves, and if it doesn’t move, automate it in case it does.)
  • Find a way to reduce the time it takes to get sign-offs: remove the sign-off, make sure the signer prioritises signing, or delegate someone else to sign (or automate the signature).

If there are essential processes, activities or third parties (or anything else) with limited bandwidth which must be involved before release but inject delay, then re-orientate your process around that bottleneck. For example, if your code needs to pass a security audit before release (an audit you can’t automate, that is), then downsize all the other activities so that the audit process is 100% utilised. (OK, 100% is wrong, 76% might be better, but that’s a long conversation about queuing theory.)
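The queuing-theory point can be sketched with the standard M/M/1 formula (a toy model, not a claim about any particular audit process): the mean time a job spends in the system is W = 1/(μ − λ), which explodes as utilisation approaches 100%.

```python
def mm1_time_in_system(service_rate, utilisation):
    """Mean time a job spends in an M/M/1 queue (waiting plus service).

    W = 1 / (mu - lambda), where the arrival rate lambda = utilisation * mu.
    """
    arrival_rate = utilisation * service_rate
    return 1.0 / (service_rate - arrival_rate)

# Delay grows non-linearly as the bottleneck fills up:
for rho in (0.5, 0.76, 0.9, 0.99):
    print(rho, mm1_time_in_system(1.0, rho))
```

At 50% utilisation a job takes twice its service time; at 99% it takes a hundred times – which is why running a bottleneck flat out is a false economy.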

Again and again I seem condemned to learn the lesson: nothing counts but working software which is used.

As for my team, and my job in California, it didn’t save me. I regret not asking the question sooner.

The post Release or be damned appeared first on Allan Kelly Associates.

Writing a new Flarum extension on Ubuntu

Andy Balaam from Andy Balaam's Blog

In a previous post I described how to install Flarum locally on Ubuntu.

Here is how I set up my development environment on top of that setup so I was able to write a new Flarum extension and test it on my local machine.

Recap: I installed Apache and PHP and enabled mod_rewrite, installed MariaDB and made a database, installed Composer, and used Composer to install Flarum at /var/www/html/flarum.

More info: Flarum Contribution Guide, extension development guide, extension development quick start, workflow discussion.

I decided to call my extension “rabbitescape/flarum-ext-rabbitescape-leveleditor”, so that is the name you will see below.

Here’s what I did:

cd /var/www/html/flarum

Edit /var/www/html/flarum/composer.json (see the workbench explanation for examples of the full file).

At the end of the “require” section I added a line like this:

"rabbitescape/flarum-ext-rabbitescape-leveleditor": "*@dev"

(Remembering to add a comma at the end of the previous line so it remained valid JSON.)

After the “require” section, I added this:

    "repositories": [
        {
            "type": "path",
            "url": "workbench/*/"
        }
    ],

Next, I made the place where my extension would live:

mkdir workbench
cd workbench
mkdir flarum-ext-rabbitescape-leveleditor
cd flarum-ext-rabbitescape-leveleditor

Now I created a file inside flarum-ext-rabbitescape-leveleditor called composer.json like this:

{
    "name": "rabbitescape/flarum-ext-rabbitescape-leveleditor",
    "description": "Allow viewing and editing Rabbit Escape levels in posts",
    "type": "flarum-extension",
    "keywords": ["rabbitescape"],
    "license": "MIT",
    "authors": [
        {
            "name": "Andy Balaam",
            "email": "andybalaam@artificialworlds.net"
        }
    ],
    "support": {
        "issues": "https://github.com/andybalaam/flarum-ext-rabbitescape-leveleditor/issues",
        "source": "https://github.com/andybalaam/flarum-ext-rabbitescape-leveleditor"
    },
    "require": {
        "flarum/core": "^0.1.0-beta.5"
    },
    "extra": {
        "flarum-extension": {
            "title": "Rabbit Escape Level Editor"
        }
    }
}

In the same directory I created a file bootstrap.php like this:

<?php

return function () {
    echo 'Hello, world!';
};

Then I told Composer to refresh the project like this:

cd ../..   # Back up to the main directory
composer update

Among the output I saw a line like this which convinced me it had worked:

  - Installing rabbitescape/flarum-ext-rabbitescape-leveleditor (dev-master): Symlinking from workbench/flarum-ext-rabbitescape-leveleditor

Now I went to http://localhost/flarum, signed in as admin, adminpassword (as I set up in the previous post), clicked my username in the top right and chose Administration, then clicked Extensions on the left.

I saw my extension in the list, and turned it on by clicking the check box below it. This made the whole web site disappear, and instead just the text “Hello, world!” appeared. This showed my extension was being loaded.

Finally, I edited bootstrap.php and commented out the line starting with “echo”. I refreshed the page in my browser and saw the Flarum site reappear.

Now, my extension is installed and ready to be developed! See flarum.org/docs/extend/start for how to get started making it do cool stuff.

Moving to the 12th circle in fault prediction modeling

Derek Jones from The Shape of Code

Most software fault prediction papers are based on a false assumption, i.e., a list of dates when a fault was first experienced, by a program, contains enough information to build a model that has a connection to reality. A count of faults that have been experienced twice is required to fit a basic model that has some mathematical connection to reality.
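One way to see why twice-experienced faults matter (a toy analogy, not the model under review): the Lincoln–Petersen capture–recapture estimator can only estimate a total population by using the count of items seen in both of two samples. With first-experience dates alone, that overlap count does not exist.

```python
def lincoln_petersen(first_sample, second_sample, seen_in_both):
    """Estimate a total population from two sample sizes and their overlap.

    Without 'seen_in_both' (i.e., faults experienced more than once),
    the estimate is simply undefined.
    """
    return first_sample * second_sample / seen_in_both

# 30 faults seen in one period, 40 in another, 10 in both -> ~120 in total.
print(lincoln_petersen(30, 40, 10))  # 120.0
```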

I had thought that people had moved on from writing papers that fitted yet more complicated equations to one of the regularly used data sets. No, it seems they have just switched to publishing someplace they have not been seen before.

Table 1 lists the ever increasing number of circles within circles; the new model is proposed as the 12th refinement (the table is a summary; lots of forks have been proposed over the years). I have this sinking feeling there is another paper in the works, one that ‘benchmarks’ the new equation using a collection of the other data sets that regularly appear in papers of this kind.

Fitting an equation to data of first experience of a fault is little better than fitting noise.

As Planck famously said, science advances one funeral at a time.