Graft Animation Language on Raspberry Pi

Andy Balaam from Andy Balaam's Blog

Because the Raspberry Pi uses a slightly older Python version, there is a special version of Graft for it.

Here’s how to get it:

  • Open a terminal window by clicking the black icon with a “>” symbol on it, near the top left of the screen.
  • First we need to install a couple of things Graft needs, so type this, then press Enter:
    sudo apt install python3-attr at-spi2-core
  • If you want to be able to make animated GIFs, install one more thing:
    sudo apt install imagemagick
  • To download Graft and switch to the Raspberry Pi version, type in these commands, pressing Enter after each line.
    git clone https://github.com/andybalaam/graft.git
    cd graft
    git checkout raspberry-pi
  • Now you should be able to run Graft just as you would on any other computer, for example:
    ./graft 'd+=10 S()'
  • If you’re looking for a fun way to start, why not try the worksheet “Tell a story by making animations with code”?

    For more info, see Graft Raspberry Pi Setup.

Continuous Digital published – done?

Allan Kelly from Allan Kelly Associates

Continuous Digital is done.

Probably. Maybe. Definitely maybe.

Continuous Digital is the second of my two #NoProjects books. Many people ask: “Why two?” “What is the difference between them?” “Do I need to read both?”

Short answer: Project Myopia explains why the project model is bad for software development. Continuous Digital describes what to do instead.

Long answer: as the #NoProjects hypothesis grew, as I thought about it more, and as I talked to others about the ideas – specifically Steve Smith, Joshua Arnold and Evan Leybourn – the ideas grew. My thinking on both “what to do instead of project management” and “why do something different” grew.

Specifically I saw that the combination of Continuous Delivery and Digital Business meant there was a stand-alone case for moving beyond the project model. Whether or not you agree with the problems I discuss in Project Myopia, there is a case for changing the way businesses are managed.

That is why I split the two books. Project Myopia is a companion book: it is not a prequel, a sequel, a book one or a book two. It is a book some people will read in its own right.

Continuous Digital argues that since businesses are increasingly digital, and as businesses strive to survive and grow, technology development is not a separate “project”; it is inherent to the business. Technology and innovation are business as usual.

Stopping, even pausing, work – as in the project model – surrenders competitive advantage and introduces extra costs (time, money, risk). What is needed is a new model. A continuous model.

Continuous Digital is now published on Amazon in digital form and will soon be there – and in other booksellers – in physical form. (If you can’t wait for a print copy you can buy one from Lulu where they are slightly cheaper too.)

So I’d like to say Continuous Digital is done. But…

Even before I saw the final print version I had requests for an audio version of both Project Myopia and Continuous Digital. I’m debating whether to do these: if you would buy an audio version please let me know; if enough people want it, I’ll do it.

Second, once I saw and held the final, done, version in print, new ideas came to me. I don’t want to revisit the text – although I might fix a couple of typos – but Continuous Digital is a big book, 350 pages, and I know many people will be put off by the size.

So I’m thinking of turning it into four smaller books, each around 100 pages in length and each corresponding to one part of Continuous Digital. Maybe.

It is never done. It is continual.

The post Continuous Digital published – done? appeared first on Allan Kelly Associates.

Win an Echo Dot with Naked Element at the Norfolk Chamber B2B

Paul Grenyer from Paul Grenyer

On Thursday 11th October Naked Element will be making its yearly pilgrimage to the Norfolk Chamber of Commerce and Industry’s annual B2B event at Norwich City Football Club. We’re looking forward to seeing clients old and new, meeting new people and most of all YOU!

This year we have stand number 85 on Top of the Terrace (Level 2). All of the Naked Element team will be on the stand at various points throughout the day, and we’re looking forward to Rain Crowson helping us out at lunchtime and during the afternoon.

We’ll be raffling an Echo Dot in exchange for your business card and offering new potential clients £500 off their projects over £10,000.

Come and see us, we’re looking forward to seeing you!

Talk – Getting started with geospatial data in MongoDB (MDBW 2017)

Timo Geusch from The Lone C++ Coder's Blog

I’ve been meaning to post this link for quite a while now but keep forgetting to do so. If you are planning to store geospatial data in MongoDB, the database offers you a variety of ways to deal with geospatial-specific data storage and queries. I gave an introductory talk on this subject at MongoDB World 2017 and you can find a recording of the talk here. Disclaimer: I work for MongoDB as a Consulting Engineer and this is my personal blog.
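
To give a flavour of what those queries look like, here is a minimal sketch using the mongocxx driver (the database, collection, field names, coordinates and the 1km radius are all invented for illustration; the talk itself is the proper introduction). It stores a location as a GeoJSON point, builds a 2dsphere index and runs a $near query:

    #include <bsoncxx/builder/basic/array.hpp>
    #include <bsoncxx/builder/basic/document.hpp>
    #include <mongocxx/client.hpp>
    #include <mongocxx/instance.hpp>
    #include <mongocxx/uri.hpp>

    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_array;
    using bsoncxx::builder::basic::make_document;

    int main() {
        mongocxx::instance inst{};                // driver bootstrap, once per process
        mongocxx::client client{mongocxx::uri{}}; // defaults to localhost:27017
        auto places = client["test"]["places"];   // hypothetical database/collection

        // Store a location as a GeoJSON point and build a 2dsphere index,
        // MongoDB's index type for geometry on a spherical earth.
        places.insert_one(make_document(
            kvp("name", "example"),
            kvp("loc", make_document(
                kvp("type", "Point"),
                kvp("coordinates", make_array(-0.1276, 51.5072))))));
        places.create_index(make_document(kvp("loc", "2dsphere")));

        // Find documents within 1km of a point, nearest first, with $near.
        auto cursor = places.find(make_document(kvp("loc", make_document(
            kvp("$near", make_document(
                kvp("$geometry", make_document(
                    kvp("type", "Point"),
                    kvp("coordinates", make_array(-0.1276, 51.5072)))),
                kvp("$maxDistance", 1000)))))));
        for (auto&& doc : cursor) {
            // process each nearby document
        }
    }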

The post Talk – Getting started with geospatial data in MongoDB (MDBW 2017) appeared first on The Lone C++ Coder's Blog.

New Directions Of Interpolation – a.k.

a.k. from thus spake a.k.

We have spent a few months looking at how we might interpolate between sets of points (x_i, y_i), where the x_i are known as nodes and the y_i as values, to approximate values of y for values of x between the nodes, either by connecting them with straight lines or with cubic curves.
Last time, in preparation for interpolating between multidimensional vector nodes, we implemented the ak.grid type to store ticks on a set of axes and map their intersections to ak.vector objects to represent such nodes arranged at the corners of hyperdimensional rectangular cuboids.
With this in place we're ready to take a look at one of the simplest multidimensional interpolation schemes; multilinear interpolation.
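
To give a flavour of the scheme before reading the full post, here is a sketch of the two-dimensional case, bilinear interpolation (the general idea only, not the ak library’s implementation): interpolate linearly in x along the bottom and top edges of a rectangle, then linearly in y between those two results; multilinear interpolation repeats this, one dimension at a time.

    #include <iostream>

    // Bilinear interpolation: the two-dimensional case of multilinear
    // interpolation. Given values f00, f10, f01, f11 at the corners of
    // the rectangle [x0,x1] x [y0,y1], interpolate linearly in x along
    // the bottom and top edges, then linearly in y between the results.
    double bilinear(double x0, double x1, double y0, double y1,
                    double f00, double f10, double f01, double f11,
                    double x, double y) {
        const double tx = (x - x0) / (x1 - x0); // fraction of the way along x
        const double ty = (y - y0) / (y1 - y0); // fraction of the way along y
        const double bottom = (1 - tx) * f00 + tx * f10;
        const double top    = (1 - tx) * f01 + tx * f11;
        return (1 - ty) * bottom + ty * top;
    }

    int main() {
        // At the centre of a unit square with corner values 0, 1, 2, 3
        // the interpolated value is their average, 1.5.
        std::cout << bilinear(0, 1, 0, 1, 0, 1, 2, 3, 0.5, 0.5) << '\n';
    }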

Disaster Recovery: A Dynamic Redundancy Approach

Tim Pizey from Tim Pizey

The problem with disaster planning is that it is not rehearsed. When you need to retrieve a file from backup is when you discover that your backup has been broken for three months.

Modern cloud systems, based upon software-defined infrastructure and redundant, auto-scaling fleets of micro-services, come with disaster recovery built in. They are designed to be resilient against DDoS attacks, unexpected peaks in usage and continent-wide unavailability.

Some systems have yet to migrate to outsourced infrastructure; some never will. For these systems we need a Disaster Recovery Strategy which can be implemented at reasonable cost and ideally does not suffer from the “fails when needed” feature of many backup systems. One answer is to hold regular fire drills. No one would dispute the importance of fire drills in saving lives and ensuring that people know what to do in the case of a real fire. However, we all know there is a big difference between a rehearsal and the real thing.

The key insight in modern cloud architectures is that every instance of a system is identical (at any particular time).

We can reduce this to a minimal redundant system: a pair of identical systems with one designated Primary and the other Secondary, with a standard data mirroring link from Primary to Secondary.

To ensure that both elements of the pair really can function as the Primary you could rehearse a cutover one weekend.

But if the two systems really are identical then there is no reason to reverse the cutover at the end of the rehearsal. The old Secondary is the new Primary, the old Primary is the new Secondary. The Primary can be swapped at a periodicity the business is comfortable with, say twice a year.
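
To make the role swap concrete, here is a minimal sketch (the types and names are illustrative, not from any particular product). Because the two systems are identical, a cutover amounts to swapping roles; the mirroring link simply runs in the other direction afterwards.

    #include <iostream>
    #include <string>
    #include <utility>

    // Illustrative only: two identical systems, one acting as Primary,
    // the other as Secondary, receiving mirrored data from the Primary.
    struct System {
        std::string name;
    };

    struct RedundantPair {
        System primary;
        System secondary;

        // A cutover is just a role swap: the old Secondary becomes the
        // new Primary, and mirroring reverses direction. There is
        // nothing to "reverse" afterwards.
        void cutover() {
            std::swap(primary, secondary);
            std::cout << "primary is now " << primary.name << '\n';
        }
    };

    int main() {
        RedundantPair pair{{"site-a"}, {"site-b"}};
        pair.cutover(); // the rehearsal that is also the real thing
        pair.cutover(); // swap again at the next scheduled drill
    }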

This Dynamic Redundancy strategy ensures that your Disaster Recovery works when you need it to and can be adjusted according to the business' appetite for risk.

Adding a new scalar type to C

Derek Jones from The Shape of Code

I think the time has arrived for a new scalar type in C, which for want of a better name I shall call the compendium type.

On today’s processors a compendium type behaves a lot like an integer type, except that nobody really wants to include it in the list of supported integer types, e.g., 128-bit scalars.

Why is a new scalar type needed? The Standard supports extended integer types, so why not treat a scalar object that supports integer arithmetic as an integer type?

The C Standard says (section 6.2.5 Types):
“There are five standard signed integer types, designated as signed char, short int, int, long int, and long long int. (These and other types may be designated in several additional ways, as described in 6.7.2.) There may also be implementation-defined extended signed integer types.38) The standard and extended signed integer types are collectively called signed integer types.39)”

There is corresponding wording for unsigned integer types.

The standard header <stdint.h> allows implementations to define a whole menagerie of integer types: section 7.20.1.1 Exact-width integer types
“The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.”

This all sounds very feasible, but there is a catch. The Standard defines a greatest-width integer type, section 7.20.1.5 Greatest-width integer types:
“The following type designates a signed integer type capable of representing any value of any signed integer type:
intmax_t”

and various library functions have an argument type intmax_t (there is also an uintmax_t).

An ‘extra-large’ integer type is not something that can just sit there, in the list of available integer types, waiting to be used. Preprocessor arithmetic and a variety of library functions are based around the type intmax_t. An extra-large integer type would have a very visible impact on all developers, many of whom would want to ignore it.

GCC supports 128-bit integers, e.g., __int128. But some magic pixie dust is involved: this type has no connection with intmax_t.
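
As a concrete illustration, here is a minimal sketch (it needs GCC or Clang, since __int128 is an extension, and the values are invented for illustration). Even printing such a value means working around the standard library, which has no printf conversion specifier for it:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* __int128 is wider than intmax_t on mainstream 64-bit targets,
           where intmax_t is a 64-bit type. */
        __int128 big = (__int128)INT64_MAX * 4;

        /* No printf conversion specifier exists for __int128, another
           symptom of its disconnect from the standard integer machinery,
           so print the value as two 64-bit halves. */
        printf("high: %llu low: %llu\n",
               (unsigned long long)((unsigned __int128)big >> 64),
               (unsigned long long)big);

        printf("sizeof(intmax_t)=%zu sizeof(__int128)=%zu\n",
               sizeof(intmax_t), sizeof(__int128));
        return 0;
    }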

What do developers do with these 128- and 256-bit scalar objects? Evaluating graphics algorithms, hashes and cryptographic calculations are obvious candidates; yes, perhaps even calculations involving integers that require this many bits. I have not seen any analysis of the uses of this kind of wide-integer-like type.

Extra-wide scalar types have a variety of uses and the term compendium type captures this. Hardware support for such extra-width types is growing, with vendors looking to fill major niches.

Contorting existing wording in the Standard to accommodate these extra-wide types within the existing integer type machinery is a short-term solution. Work on the upcoming revision of the C Standard should either do nothing and allow vendors to take the approach currently used by GCC, or create a new scalar type (perhaps using a TR).

Why you can’t list-initialize containers of non-copyable types

Anders Schau Knatten from C++ on a Friday

Have you ever wondered why you can’t list-initialize containers of non-copyable types? This is for instance not possible:

    vector<unique_ptr<int>> vu{
        make_unique<int>(1), make_unique<int>(2)};
    //error: call to implicitly-deleted copy constructor of unique_ptr

If you ever wondered, or are wondering now, read on!

List-initialization

Since C++11, you’re probably used to initializing containers like this:

    vector<int> vi1{1,2,3};
    vector<int> vi2 = {1,2,3};

This of course also works with user-defined types. Let’s say you have a class Copyable; then you can for instance do:

    Copyable c1(1);
    Copyable c2(2);
    vector<Copyable> vc1{c1, c2};
    vector<Copyable> vc2 = {c1, c2};

(Copyable is just an arbitrary class which can be copied. It’s reproduced at the end of the post.)

Now what happens if we have a non-copyable class NonCopyable? (NonCopyable is just an arbitrary class which can be moved but not copied, it too is reproduced at the end of the post.)

    NonCopyable n1(1);
    NonCopyable n2(2);
    vector<NonCopyable> vn1{n1, n2}; //error: call to deleted constructor of 'const NonCopyable'
    vector<NonCopyable> vn2 = {n1, n2}; //error: call to deleted constructor of 'const NonCopyable'

Well, n1 and n2 are lvalues, so no wonder it tries to copy them. What if we turn them into rvalues, either with std::move or by creating temporaries?

    vector<NonCopyable> vn3{std::move(n1), std::move(n2)}; //error: call to deleted constructor of 'const NonCopyable'
    vector<NonCopyable> vn3{NonCopyable(4), NonCopyable(5)}; //error: call to deleted constructor of 'const NonCopyable'

So what’s going on here, why is it trying to copy our rvalues? Let’s see what the standard has to say in [dcl.init.list]¶1:

List-initialization is initialization of an object or reference from a braced-init-list.

A braced-init-list is the {element1, element2, ...} syntax we saw above. The standard continues:

Such an initializer is called an initializer list. (…) List-initialization can occur in direct-initialization or copy-initialization contexts.

So list-initialization applies both to the forms vector<Copyable> vc1{c1, c2} and vector<Copyable> vc2 = {c1, c2}, which we saw above. The former is an example of direct-initialization, the latter of copy-initialization. In both cases, {c1, c2} is the braced-init-list.

(Note that the word copy-initialization here is not what causes a copy. Copy-initialization simply refers to the form T t = expression, which doesn’t necessarily invoke the copy constructor.)

Creating the initializer_list

Now what exactly happens with the braced-init-list, and how do its elements end up inside the container we’re initializing?

[dcl.init.list]¶5

An object of type std::initializer_list<E> is constructed from an initializer list as if the implementation generated and materialized (7.4) a prvalue of type “array of N const E”, where N is the number of elements in the initializer list. Each element of that array is copy-initialized with the corresponding element of the initializer list, and the std::initializer_list<E> object is constructed to refer to that array.

So the initializer_list can be thought of as just a wrapper for a temporary array we initialize with the elements in the braced-init-list. Sort of like if we’d been doing this:

    const Copyable arr[2] = {c1, c2};    
    vector<Copyable> vc3(initializer_list<Copyable>(arr, arr+2));

Consuming the initializer_list

Now that our initializer_list has been created and passed to the vector constructor, what can that constructor do with it? How does it get the elements out of the initializer_list and into the vector?

[initializer_list.syn] lists the very sparse interface of std::initializer_list:

constexpr const E* begin() const noexcept; // first element
constexpr const E* end() const noexcept; // one past the last element

There’s no access to the elements as rvalue references, only iterators that are pointers to const, so we only get lvalues, and we need to copy. Why is there no access as rvalue references?

As we saw in the quote above, “the std::initializer_list<E> object is constructed to refer to that array.” So it only refers to it, and does not own the elements. In particular, this means that if we copy the initializer_list, we do not copy the elements, we only copy a reference to them. In fact, this is spelled out in a note [initializer_list.syn]¶1:

Copying an initializer list does not copy the underlying elements.

So even if we get passed the initializer_list by value, we do not get a copy of the elements themselves, and it would not be safe to move them out, as another copy of the initializer_list could be used again somewhere else. This is why initializer_list offers no rvalue reference access.

Summary

In summary: When you do T t{elm1, elm2}, an initializer_list is created, referring to those elements. Copying that initializer_list does not copy the elements. When a constructor takes an initializer_list, it does not know whether it’s the only consumer of those elements, so it’s not safe to move them out of the initializer_list. The only safe way to get the elements out is by copy, so a copy constructor needs to be available.
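
If you do need a container of move-only elements, the usual workaround (one option among several) is to skip the braced-init-list and move the elements in one at a time:

    #include <memory>
    #include <vector>

    using std::make_unique;
    using std::unique_ptr;
    using std::vector;

    int main() {
        vector<unique_ptr<int>> vu;
        vu.reserve(2); // optional: avoid reallocation during the pushes
        // push_back has an overload taking an rvalue reference, so the
        // elements are moved into the vector; no copy constructor needed.
        vu.push_back(make_unique<int>(1));
        vu.push_back(make_unique<int>(2));
    }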

As usual, the code for this blog post is available on GitHub.

If you enjoyed this post, you can subscribe to my blog, or follow me on Twitter.

Appendix: The Copyable and NonCopyable classes:

class Copyable {
public:
    Copyable(int i): i(i){}
    Copyable(const Copyable&) = default;
    Copyable(Copyable&&) = default;
    Copyable& operator=(const Copyable&) = default;
    Copyable& operator=(Copyable&&) = default;
    ~Copyable() = default;
    int i;
};

class NonCopyable {
public:
    NonCopyable(int i): i(i){}
    NonCopyable(const NonCopyable&) = delete;
    NonCopyable(NonCopyable&&) = default;
    NonCopyable& operator=(const NonCopyable&) = delete;
    NonCopyable& operator=(NonCopyable&&) = default;
    ~NonCopyable() = default;
    int i;
};

Dealing with unplanned but urgent work

Allan Kelly from Allan Kelly Associates

3) Maintenance and Evolution
To keep a product alive, we choose backlog stories that will bring value, and do them one after the other.
But… support of the application may take a huge part of the work. And when the problem is critical, there is nothing you can do but stop what you are doing and fix it. This can blow any estimation.
How do you deal with firefighting in a #NoProjects world?
And techniques to avoid it.
How does #NoProject and DevOps work together?

Let me take the last part of this question first. Operations has never been plagued by the project model the way development has. When does a SysAdmin ever say “The project is finished so I’m not going to restart the server”?

DevOps (aka Continuous Delivery) and Continuous Digital are a natural fit. The team is responsible and accountable: writing the code, deploying it and supporting it thereafter. “You built it, you operate it,” as DevOps people like to say.

Of course the team needs to contain all the skills needed to service this approach. That might mean having an individual specialist on the team or it might mean that team members have multiple skills. A Continuous team is not just a DevOps team, it is also a Business-Technology team – or #BizTech to coin a hashtag. (This week I heard such a team called a BizDevOps team. That is one portmanteau too far for me.)

Which brings us quite nicely to the first part of this question: how do you manage – and perhaps even plan for – unplanned work?

What I would like to happen when unplanned work appears is that it is written on a card and placed in the backlog. It then takes its place with all the other possible work. But… as the questioner states: this work can’t wait, it is urgent.

Unplanned but urgent work simply needs to be done. Quite possibly other work – less valuable work, or work which is not time critical – may even be interrupted.

At this point I was about to refer readers to an old blog post about Yellow Cards. But it turns out that I never wrote that post. Despite talking about Yellow cards for years I’ve never blogged about them. I wrote about them in Xanpan but for some reason or another I never wrote the blog… so here you go…

When a team is mid-sprint and unplanned work appears the team should:

  • First ask “Can this work wait?” – If so then write it on a regular card and put it in the backlog
  • If not then ask: is this more valuable than the work we are doing now? If not then someone needs to find the source of the request and explain why it isn’t going to get done.
  • Assuming it is urgent then it gets written on a Yellow card.
  • If it is really really urgent then someone drops what they are doing and works on the yellow card immediately.
  • If it can wait a little while then the next person who finishes their current work picks up the card and does it.
  • Once the yellow card is done, mark it as done like any other card, and work continues as before.

Accepting unplanned work into a sprint impacts the other work the team is doing. I’m not a big fan of the commitment protocol, so to me it is no big deal if this work displaces something else. But if your team are expected to hold fast to hard commitments while dealing with unplanned work then you have a problem. Call me, we need to talk more.

At the end of the iteration we can look at the cards and reason about them. Now that we can see the work, we can manage it and decide what to do about it.

I count up the yellow cards – and all the planned work cards. That allows me to calculate a ratio of planned versus unplanned work. (Sometimes teams put a retrospective points estimate on a yellow card, but a card count is often sufficient.)

This can be tracked over time – graph it, make it visible again. Now we can look at the work and the pattern of work, reason about it, maybe do some root-cause analysis. Perhaps:

  • Perhaps much of the urgent work isn’t really so urgent; perhaps the team should push back more. Maybe the team, or one of the team leaders, needs the authority to say No.
  • Perhaps most of the unplanned work comes from a particular person. Maybe this person doesn’t realise the impact of their unplanned requests, or maybe they need to be included in the planning process, or … a million other reasons.
  • Perhaps the unplanned work is coming from the same sub-system, maybe some remedial work on that sub-system could reduce the amount of unexpected work.
  • Perhaps the unplanned work is just the nature of the business and being responsive is valuable.

Looked at this way we can think about reducing the amount of unplanned work. But also, we can plan for unplanned work.

It is likely that over time a pattern will emerge. One team I know found that 20% to 25% of their work in any sprint was unplanned. They simply planned for 20% less work, which gave them the capacity to cope with unplanned work. At the least they could manage stakeholders’ expectations.

One team found that each sprint they were doing about 20% IT support tasks (new PCs, Word problems, etc.) so they hired a support technician.

Another team who agonised about unplanned work found that actually they only had about one unplanned card a week. Their problem was not excessive unplanned work but the fact that unplanned work tended to have a very high profile in the company.

Teams which find they have very high levels of unplanned work on a regular basis (e.g. over 50% of work for several months) may well decide to adopt a full Kanban system. Indeed, Kanban folk probably recognise my description as a very simple example of quality-of-service and policies.

I say more about Yellow Cards for unplanned but urgent in Xanpan so you might like to continue reading there.


This is the third question carried over from the #NoEstimates/#NoProjects August workshop in Zurich.


If you have any questions about Continuous Digital, Project Myopia and #NoProjects please mail them over and I’ll do my best to answer them in this blog.

Receive these posts by e-mail?

Subscribe to my newsletter & receive a free eBook “Xanpan: Team Centric Agile Software Development”

The post Dealing with unplanned but urgent work appeared first on Allan Kelly Associates.