HELP KEEP NORWICH & NORFOLK ON THE TECH NATION MAP IN 2018

Paul Grenyer from Paul Grenyer



Tech Nation is a groundbreaking series of reports on the UK’s digital tech ecosystem. Over the last three years it has captured the strength, depth and breadth of activity across the UK. It has revealed the scale of the digital tech sector, captured its growth, and, crucially, developed an understanding of the characteristics of the communities driving it.

We hope to make Tech Nation 2018 the best report yet.

To do this, we need your help. Last year the survey received 2,700 responses; this year we hope to reach 11,000, and to hear from all tech communities in the UK so that we can provide the most up-to-date and insightful data on the UK tech community in 2018.

If you work in or run a business in the technology or digital sectors, or in any business that relates to or supports them (investors, legal, education and so on), then we need your input.

We want to hear from you on topics such as the diversity of the tech sector in your local area, opportunities for high-growth businesses, and the quality of education and training.

Help keep Norwich & Norfolk at the forefront of people’s minds when they consider tech communities in the UK.

It only takes 5 minutes!



If you would like to become a Tech Nation 2018 Community Partner, please email details of your business along with your logo to technation@techcityuk.com. You will then be sent updates on survey completions in your area, and have the opportunity to contribute further to the report.

All community partners will have their logos featured in the report.

The survey will close on Friday 2nd of February.

TAKE THE SURVEY

Visual Lint 6.0.8.291 has been released

Products, the Universe and Everything from Products, the Universe and Everything

Visual Lint 6.0.8.291 has just been released. This is a maintenance update for Visual Lint 6.0, and includes the following changes:
  • Fixed a bug which could cause Visual C++ 2010-2017 project (.vcxproj) files which have configuration names containing brackets to be loaded incorrectly.
  • Fixed a race condition which could cause errors or a crash while loading MSBuild projects.
  • Added modified versions of several PC-lint 9.0 indirect files which are not supplied with PC-lint Plus 1.0 to the installer.
  • Added additional PC-lint Plus suppression directives to the indirect file lib-rb-win32.lnt supplied within the installer.
Download Visual Lint 6.0.8.291

Passing overload sets to functions

Simon Brand from Simon Brand

Passing functions to functions is becoming increasingly prevalent in C++. With common advice being to prefer algorithms to loops, new library features like std::visit, lambdas being incrementally beefed up [1] [2], and C++ functional programming talks consistently being given at conferences, it’s something that almost all C++ programmers will need to do at some point. Unfortunately, passing overload sets or function templates to functions is not very well supported by the language. In this post I’ll discuss a few solutions and show how C++ still has a way to go in supporting this well.

An example

We have some generic operation called foo. We want a way of specifying this function which fulfils two key usability requirements.

1- It should be callable directly without manually specifying template arguments:

auto a = foo(42);           //good
auto b = foo("hello");      //good
auto c = foo<double>(42.0); //bad
auto d = foo{}(42.0);       //bad

2- Passing it to a higher-order function should not require manually specifying template arguments:

std::transform(first, last, target, foo);      //good
std::transform(first, last, target, foo<int>); //bad
std::transform(first, last, target, foo{});    //okay I guess

A simple first choice would be to make it a function template:

template <class T>
T foo(T t) { /*...*/ }

This fulfils the first requirement, but not the second:

//compiles, but not what we want
std::transform(first, last, target, foo<int>);

//uh oh
std::transform(first, last, target, foo);

7 : <source>:7:5: error: no matching function for call to 'transform'
std::transform(first, last, target, foo);
^~~~~~~~~~~~~~
/opt/compiler-explorer/gcc-7.2.0/lib/gcc/x86_64-linux-gnu/7.2.0/../../../../include/c++/7.2.0/bits/stl_algo.h:4295:5: note: candidate template ignored: couldn't infer template argument '_UnaryOperation'
transform(_InputIterator __first, _InputIterator __last,
^
/opt/compiler-explorer/gcc-7.2.0/lib/gcc/x86_64-linux-gnu/7.2.0/../../../../include/c++/7.2.0/bits/stl_algo.h:4332:5: note: candidate function template not viable: requires 5 arguments, but 4 were provided
transform(_InputIterator1 __first1, _InputIterator1 __last1,
^
1 error generated.

That’s no good.

A second option is to write foo as a function object with a call operator template:

struct foo {
  template <class T>
  T operator()(T t) { /*...*/ }
};

We are now required to create an instance of this type whenever we want to use the function, which is okay for passing to other functions, but not great if we want to call it directly:

//this looks okay
std::transform(first, last, target, foo{});

//this looks strange
auto x = foo{}(42.0);
auto x = foo()(42.0);

We have similar problems when we have multiple overloads, even when we’re not using templates:

int foo (int);
float foo (float);

std::transform(first, last, target, foo); //doesn't compile
// ew ew ew ew ew ew ew
std::transform(first, last, target, static_cast<int(*)(int)>(foo));

We’re going to need a different solution.


Lambdas and LIFT

As an intermediate step, we could use the normal function template approach, but wrap it in a lambda whenever we want to pass it to another function:

std::transform(first, last, target,
               [](const auto&... xs) { return foo(xs...); });

That’s not great. It’ll work in some contexts where we don’t know what template arguments to supply, but it’s not yet suitable for all cases. One improvement would be to add perfect forwarding:

[](auto&&... xs) { return foo(std::forward<decltype(xs)>(xs)...); }

But wait, we want to be SFINAE friendly, so we’ll add a trailing return type:

[](auto&&... xs) -> decltype(foo(std::forward<decltype(xs)>(xs)...)) {
  return foo(std::forward<decltype(xs)>(xs)...);
}

Okay, it’s getting pretty crazy and expert-only at this point. And we’re not even done! Some contexts will care about noexcept:

[](auto&&... xs)
  noexcept(noexcept(foo(std::forward<decltype(xs)>(xs)...)))
  -> decltype(foo(std::forward<decltype(xs)>(xs)...)) {
  return foo(std::forward<decltype(xs)>(xs)...);
}

So the solution is to write this every time we want to pass an overloaded function to another function. That’s probably a good way to make your code reviewer cry.

What would be nice is if P0573: Abbreviated Lambdas for Fun and Profit and P0644: Forward without forward were accepted into the language. That’d let us write this:

[](xs...) => foo(>>xs...)

The above is functionally equivalent to the triplicated monstrosity in the example before. Even better, if P0834: Lifting overload sets into objects was accepted, we could write:

[]foo

That lifts the overload set into a single function object which we can pass around. Unfortunately, all of those proposals have been rejected. Maybe they can be renewed at some point, but for now we need to make do with other solutions. One such solution is to approximate []foo with a macro (I know, I know).

#define FWD(...) std::forward<decltype(__VA_ARGS__)>(__VA_ARGS__)

#define LIFT(X) [](auto &&... args) \
  noexcept(noexcept(X(FWD(args)...))) \
  -> decltype(X(FWD(args)...)) \
{ \
  return X(FWD(args)...); \
}

Now our higher-order function call becomes:

std::transform(first, last, target, LIFT(foo));

Okay, so there’s a macro in there, but it’s not too bad (you know we’re in trouble when I start trying to justify the use of macros for this kind of thing). So LIFT is at least some solution.
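To make that concrete, here is a minimal self-contained sketch (my own example, not from the original post) showing LIFT resolving an overload set that std::transform could not accept directly:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <utility>
#include <vector>

#define FWD(...) std::forward<decltype(__VA_ARGS__)>(__VA_ARGS__)

#define LIFT(X) [](auto &&... args) \
  noexcept(noexcept(X(FWD(args)...))) \
  -> decltype(X(FWD(args)...)) \
{ \
  return X(FWD(args)...); \
}

// An overload set: passing plain 'foo' to std::transform fails because
// the compiler cannot deduce which overload is meant.
int foo(int i) { return i * 2; }
float foo(float f) { return f / 2.0f; }

int main() {
  std::vector<int> in{1, 2, 3};
  std::vector<int> out;
  // LIFT(foo) wraps the overload set in a generic lambda; overload
  // resolution happens inside the lambda, where the argument types
  // are known, so foo(int) is selected here.
  std::transform(in.begin(), in.end(), std::back_inserter(out), LIFT(foo));
  for (int x : out) std::cout << x << ' '; // prints: 2 4 6
}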


Making function objects work for us

You might recall from a number of examples ago that the problem with using function object types was the need to construct an instance whenever we needed to call the function. What if we make a global instance of the function object?

struct foo_impl {
  //template
  template <class T>
  T operator()(T t) const { /*...*/ }

  //overloads
  int operator()(int) const { /*...*/ }
  float operator()(float) const { /*...*/ }
};

extern const foo_impl foo;

// in some .cpp file
const foo_impl foo;

This works if you’re able to have a single translation unit with the definition of the global object. If you’re writing a header-only library then you don’t have that luxury, so you need to do something different.

struct foo_impl {
  template <class T>
  T operator()(T t) const { /*...*/ }
};

static constexpr foo_impl foo;

This might look innocent, but it can lead to One-Definition Rule (ODR) violations [3]:

//test.h header
struct foo_impl {
template<class T>
T operator()(T t) const { return t; }
};

static constexpr foo_impl foo;

template <class T>
int oh_no(T t) {
auto* foop = &foo;
return (*foop)(t);
}

//cpp1
#include "test.h"
int sad() {
  return oh_no(42);
}

//cpp2
#include "test.h"
int also_sad() {
  return oh_no(24);
}

Since foo is declared static, each Translation Unit (TU) gets its own definition of the variable. However, sad and also_sad each instantiate oh_no, and the two instantiations see different definitions of foo when they take &foo. This is undefined behaviour by [basic.def.odr]/12.2.

In C++17 the solution is simple:

inline constexpr foo_impl foo{};

The inline allows the variable to be multiply-defined, and the linker will throw away all but one of the definitions.

If you can’t use C++17, there are a few solutions given in N4424: Inline Variables. The Ranges V3 library uses a reference to a static member of a template class:

template <class T>
struct static_const {
  static constexpr T value{};
};

template <class T>
constexpr T static_const<T>::value;

constexpr auto& foo = static_const<foo_impl>::value;

An advantage of the function object approach is that function objects designed carefully make for much better customisation points than the traditional techniques used in the standard library. See Eric Niebler’s blog post and standards paper for more information.

A disadvantage is that now we need to write all of the functions we want to use this way as function objects, which is not great at the best of times, and even worse if we want to use external libraries. One possible solution would be to combine the two techniques we’ve already seen:

// This could be in an external library
namespace lib {
  template <class T>
  T foo(T t) { /*...*/ }
}

namespace lift {
  inline constexpr auto foo = LIFT(lib::foo);
}

Now we can use lift::foo instead of lib::foo and it fits the requirements I laid out at the start of the post. Unfortunately, I think it’s still possible to hit ODR violations with this approach, due to possible differences in the closure types across TUs. I’m not sure what the best workaround for this is, so input is appreciated.


Conclusion

I’ve given you a few solutions to the problem I showed at the start, so what’s my conclusion? C++ still has a way to go to support this paradigm of programming, and teaching these ideas is a nightmare. If a beginner or even intermediate programmer asks how to pass overloaded functions around – something which sounds like it should be fairly easy – it’s a real shame that the best answers I can come up with are “Copy this macro which you have no chance of understanding”, or “Make function objects, but make sure you do it this way for reasons which I can’t explain unless you understand the subtleties of ODR [4]”. I feel like the language could be doing more to support these use cases.

Maybe for some people “Do it this way and don’t ask why” is an okay answer, but that’s not very satisfactory to me. Maybe I lack imagination and there’s a better way to do this with what’s already available in the language. Send me your suggestions or heckles on Twitter @TartanLlama.


Thanks to Michael Maier for the motivation to write this post; Jayesh Badwaik, Ben Craig, Michał Dominiak and Kévin Boissonneault for discussion on ODR violations; and Eric Niebler, Barry Revzin, Louis Dionne, and Michał Dominiak (again) for their work on the libraries and standards papers I referenced.


  1. P0315: Lambdas in unevaluated contexts 

  2. P0624: Default constructible and assignable stateless lambdas 

  3. Example lovingly stolen from n4381. 

  4. Disclaimer: I don’t understand all the subtleties of ODR. 

Happy New Year and all that!

Products, the Universe and Everything from Products, the Universe and Everything

So, here we are in another year, and with two weeks under our belt so far, things are starting to get done.
Happy New Year! Have a blue fish for Blue Monday
Although the post-Christmas period in our part of the world tends to be a bit of a gloomy experience (witness the concept of Blue Monday), we like to look at the hopeful stuff which is always there when you care to look for it. For us, that means a few things.

We start this year in the knowledge that Gimpel has now released PC-lint Plus (yay!), which finally puts to bed the issues PC-lint 9.0 had with modern C++ code, and variadic templates in particular. For the unaware, PC-lint Plus uses Clang as a front-end, so it's absolutely futureproof with regard to C++17, C++20 etc. We're spending a lot of our time developing library suppression files for the new version, so if you're running into issues with unexpected errors etc. let us know and we'll be happy to help. The pricing and licensing model for PC-lint Plus is different from that for PC-lint 9.0, so that may take some getting used to, but we have high hopes. Evaluation licences are available on request from Gimpel. They have also shared some of their future plans for the product with us, and although we're under NDA (and can't therefore share details of them yet) I can tell you that they are rather exciting.

Then there is the ACCU Conference in Bristol on 10th-14th April, and the Business of Software Conference Europe on 21st-22nd May (if you're in the USA, Business of Software USA is on 1st-3rd October in Boston, MA). We'll be at both ACCU and BoS Europe - in the former case with our full demo rig.

Finally, we're getting Visual Lint 6.5 ready for public release (more on that in the next mailshot), so internally we've been busy branching code, building, testing etc. Visual Lint 6.5 will be an incremental (and free!) upgrade to Visual Lint 6.0, but we think you'll like it.

First use of: software, software engineering and source code

Derek Jones from The Shape of Code

While reading some software related books/reports/articles written during the 1950s, I suddenly realized that the word ‘software’ was not being used. This set me off looking for the earliest use of various computer terms.

My search process consisted of using pdfgrep on my collection of pdfs of documents from the 1950s and 60s, and looking in the index of the few old computer books I still have.

Software: The Oxford English Dictionary (OED) cites an article by John Tukey published in the American Mathematical Monthly during 1958 as the first published use of software: “The ‘software’ comprising … interpretive routines, compilers, and other aspects of automotive programming are at least as important to the modern electronic calculator as its ‘hardware’.”

I have a copy of the second edition of “An Introduction to Automatic Computers” by Ned Chapin, published in 1963, which does a great job of defining the various kinds of software. Earlier editions were published in 1955 and 1957. Did these earlier editions also contain definitions of software? I cannot find any reasonably priced copies on the second-hand book market. Do any readers have a copy?

Software engineering: The OED cites a 1966 “letter to the ACM membership” by Anthony A. Oettinger, then ACM President: “We must recognize ourselves … as members of an engineering profession, be it hardware engineering or software engineering.”

The June 1965 issue of COMPUTERS and AUTOMATION, in its Roster of organizations in the computer field, has the list of services offered by Abacus Information Management Co.: “systems software engineering”, and by Halbrecht Associates, Inc.: “software engineering”. This pushes the first use of software engineering back by a year.

Source code: The OED cites a 1965 issue of Communications ACM: “The PUFFT source language listing provides a cross reference between the source code and the object code.”

The December 1959 Proceedings of the EASTERN JOINT COMPUTER CONFERENCE contains the article: “SIMCOM – The Simulator Compiler” by Thomas G. Sanborn. On page 140 we have: “The compiler uses this convention to aid in distinguishing between SIMCOM statements and SCAT instructions which may be included in the source code.”

Running pdfgrep over the archive of documents on bitsavers would probably turn up all manner of early uses of software-related terms.

Constantly Confusing: C++ const and constexpr pointer behaviour

Samathy from Stories by Samathy on Medium

A quick explanation of how const and constexpr work on pointers in C++

So I was checking that my knowledge was correct while working on a Firefox bug.
I made a quick C++ file with all the examples I know of for how to use const and constexpr on pointers.
As one can see, it’s pretty confusing!

Because there are several places in a statement where you can put ‘const’, it can be complicated to work out which part of the statement the ‘const’ refers to.
Generally, it’s best to read from right to left to work it out, i.e.:

static const char * const hello;

Would read like:

hello (is a) const pointer (to) const char

But, that takes a bit of practice.

C++’s constexpr brings another dimension to the problem too!
It behaves like const in the sense that it makes all pointers constant pointers.
But because it occurs at the start of your statement (rather than after the ‘*’), it’s not immediately obvious.

Here’s my list of all the ways you can use const and constexpr on pointers and how they behave.
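As a rough illustration (my own sketch of the combinations described above, not the author’s original file):

// A sketch of the const/constexpr pointer combinations (illustrative only).
int i = 42;

const int* p1 = &i;       // pointer to const int: p1 may be repointed,
                          // but *p1 may not be written through
int* const p2 = &i;       // const pointer to int: *p2 may be written,
                          // but p2 may not be repointed
const int* const p3 = &i; // const pointer to const int: neither may change

// constexpr applies to the pointer itself, so p4 behaves like 'int* const'
// (a constant pointer), not like 'const int*'. The initialiser must also be
// a constant expression; &i qualifies because i has static storage duration.
constexpr int* p4 = &i;
constexpr const int* p5 = &i; // constant pointer to const int

int main() {
  *p4 = 1;    // fine: p4 points to a non-const int
  // p4 = &i; // error: p4 itself is const
  // *p1 = 1; // error: p1 points to const int
  return *p1 + *p2 + *p3 + *p5;
}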

Working with PDF Highlight Annotations Programmatically

Samathy from Stories by Samathy on Medium

PDFs are the format of choice in academia, but extracting the information they contain is annoyingly hard.

I’ve just started working on my degree’s final project. An academic project requires lots of research, which means reading lots of papers.
Papers are normally available in one form only, PDF.

While PDF is a format so ubiquitous nowadays that one can guarantee being able to display it as the writer(s) intended, it’s not a nice format, as I found out as soon as I needed to do something with it.

During the course of my research, I’ve been using PDF’s highlight annotations to highlight parts of a paper that’re particularly interesting.
I wanted to be able to retrieve the highlighted text at a later date so I didn’t have to open the paper again to find the parts I found interesting when I read it the first time.

You’d think that exporting annotations on text would be something that all PDF readers which support annotations (most of them do) would be capable of. I mean, surely it’s easy enough, even if there aren’t that many reasons why you’d want to do it.

Alas, none that I found running on Linux had this feature, so I delved into trying to write something to do what I needed.

I based my project on a tool I found in a StackOverflow answer to a question similar to mine.
The Python code in the answer utilises poppler-qt4 to export annotated text from a PDF. Unfortunately, the code is Python 2, and the Python poppler-qt4 package wouldn’t install properly on my system anyway, even after installing the poppler-qt4 package.
Neither did Python’s poppler-qt5 bindings.

Convinced I could do a better job than a Python 2 script which depended on a package last updated in 2015, I translated the answer into the equivalent in C++.

I started with trying to use poppler-cpp, the C++ bindings for poppler where one has objects and namespaces, and none of the guff associated with GUI frameworks that I wouldn't need here. However, to my dismay, poppler-cpp doesn't support annotations at all. For whatever reason, annotation support only works with the bindings to a GUI framework, like glib or QT.

So instead I used poppler-glib (i.e glib from the GNOME project). Purely because I use GNOME, so wouldn't have to install anything extra.

Now, the PDF format is really odd. Annotations seem to be an after-thought, tacked onto the format later.
Specifically highlighting is weird, because a highlight annotation has no connection to the document’s text.
As such, poppler’s poppler_annot_get_contents(PopplerAnnot*), which should return the annotation’s contents, returns nothing.
Instead, to get the text associated with a highlight annotation, one has to get the coordinates of the annotation (a PopplerRectangle) and then use the function poppler_page_get_text_for_area(PopplerPage*, PopplerRectangle*), which returns the text in a defined area.
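For illustration, a minimal sketch of that approach using poppler-glib (my own reconstruction, not the author’s tool; the y-axis flip reflects my understanding that annotation rectangles use a bottom-left origin while poppler_page_get_text_for_area() expects a top-left one):

// Build with: g++ sketch.cpp $(pkg-config --cflags --libs poppler-glib)
#include <poppler.h>
#include <cstdio>

int main(int argc, char** argv) {
  if (argc < 2) return 1; // expects a URI, e.g. file:///home/me/paper.pdf
  GError* err = nullptr;
  PopplerDocument* doc = poppler_document_new_from_file(argv[1], nullptr, &err);
  if (!doc) { fprintf(stderr, "%s\n", err->message); return 1; }

  int n_pages = poppler_document_get_n_pages(doc);
  for (int i = 0; i < n_pages; ++i) {
    PopplerPage* page = poppler_document_get_page(doc, i);
    double width = 0, height = 0;
    poppler_page_get_size(page, &width, &height);

    GList* annots = poppler_page_get_annot_mapping(page);
    for (GList* l = annots; l != nullptr; l = l->next) {
      PopplerAnnotMapping* m = static_cast<PopplerAnnotMapping*>(l->data);
      if (poppler_annot_get_annot_type(m->annot) != POPPLER_ANNOT_HIGHLIGHT)
        continue;
      // The highlight has no text of its own; grab the text under its
      // rectangle instead, flipping the y axis (see note above).
      PopplerRectangle r = m->area;
      double y1 = height - r.y2;
      double y2 = height - r.y1;
      r.y1 = y1;
      r.y2 = y2;
      char* text = poppler_page_get_text_for_area(page, &r);
      if (text) { printf("page %d: %s\n", i + 1, text); g_free(text); }
    }
    poppler_page_free_annot_mapping(annots);
    g_object_unref(page);
  }
  g_object_unref(doc);
}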

What an entirely baffling way to go about implementing highlighting. Attaching it as purely a visual element, rather than actually marking up the text.

Even more baffling is the fact that although my application works, it only mostly works.
Sometimes I get the full highlighted text, other times it chops off characters, and sometimes it adds things that’re nowhere near the highlighted text at all!
This is a problem I’m yet to solve, and might never solve, because it’s ridiculous and the tool mostly does what I needed anyway.

In conclusion: the PDF format is weird, and I wrote a thing.
If you use it, let me know how it goes!

https://github.com/Samathy/pdfcommentextractor

Computer books your great grandfather might have read

Derek Jones from The Shape of Code

I have been reading two very different computer books written for a general readership: Giant Brains or Machines that Think, published in 1949 (with a retrospective chapter added in 1961) and LET ERMA DO IT, published in 1956.

‘Giant Brains’, by Edmund Berkeley, was very popular in its day.

Berkeley marvels at a computer performing 5,000 additions per second; performing all the calculations in a week that previously required 500 human computers (i.e., people using mechanical adding machines) working 40 hours per week. His mind staggers at the “calculating circuits being developed” that can perform 100,000 additions a second; “A mechanical brain that can do 10,000 additions a second can very easily finish almost all its work at once.”

The chapter discussing the future, “Machines that think, and what they might do for men”, sees Berkeley struggling for non-mathematical applications; a common problem with all new inventions. An automatic translator and an automatic stenographer (a typist who transcribes dictation) are listed. There is also a chapter on social control, which is just as applicable today.

This was the first widely read book to promote Shannon’s idea of using the algebra invented by George Boole to analyze switching circuits symbolically (THE 1940 Masters thesis).

The ‘ERMA’ book paints a very rosy picture of the future with computer automation removing the drudgery that so many jobs require; it is so upbeat. A year later the USSR launched Sputnik and things suddenly looked a lot less rosy.

I am guilty of Agile training

Allan Kelly from Allan Kelly Associates


Over Christmas I was thinking, reflecting, drinking…

Once upon a time I was asked by a manager to teach his team Agile so the team could become Agile. It went downhill from there…

I turned up at the client’s offices to find a room of about 10 people. The manager wasn’t there – a shame, as he should have been in the room to have the conversations with the team. In fact half the developers were missing: this company didn’t allow contractors to attend training sessions.

For agile introduction courses I always try to have a whole team, complete with decision makers, in the room. If you are addressing a specialist topic (say user stories or Cucumber) then it’s OK to have only the people the topic affects in the room. But when I am talking about teams and processes, well, I want everyone there!

We did a round of introductions and I learned that the manager, and other managers from the company, had been on a Scrum Master course and instructed the team to be Agile. Actually, the company had decided to be Agile and sent all the managers on Scrum Master courses.

So the omens were bad and then one of the developers said something to the effect:

“I don’t think Agile can help us. We have lots of work to do, we don’t have enough time, we are already struggling, there is masses of technical debt and we can’t cut quality any further. We need more time to do our work not less.”

What scum am I? – I pretend to be all nice but underneath I allow myself to be used as a tool to inflict agile pain on others. No wonder devs hate Agile.

My name is Allan and I provide Agile training and consulting services.
I am guilty of training teams in how to do Agile software development.
I am guilty of offering advice to individuals and teams in a directive format.
I have been employed by managers who want to make their teams agile against the will of the team members.
I have absented myself from teams for weeks, even months and failed to provide deep day-in-day-out coaching.

In my defence I plead mitigating circumstances.

One size does not fit all. The Agile Industrial Complex* has come up with one approach (training, certification and enforcement) and the Agile Hippies another (no-pressure, non-directive, content-free coaching).

I don’t fit into either group. Doing things differently can be lonely … still, I’ve had my successes.

I happen to believe that training team members in “Agile” can be effective. I believe training can help by:

  • Providing time for individuals to learn
  • Sharing the wisdom of one with others
  • Providing the opportunity for teams to learn together and create a shared understanding
  • Providing rehearsal space for teams to practice what they are doing, or hope to do
  • Providing a starting-point – a kick-off or a Kaikaku event – for a reset or change
  • and some other reasons which probably don’t come to mind right now

Yes, when I deliver training I’m teaching people to do something, but that is the least important thing. When I stand up at the start of a training session I imagine myself as a market stall holder. On my market stall are a set of tools and techniques which those in the room might like to buy: stand-up meetings, planning meetings, stories, velocity, and so on. My job is both to explain these tools and to inspire my audience to try them. I have a few hours to do that.

As much as I hate to say it, part of my job at this point is Sales. I have to sell Agile. In part I do that by painting a picture of how great the world might be with Agile. I like to think I also give the audience some tools for moving towards that world.

At the end of the time individuals get to decide which, if any, of the tools I’ve set out they want to use. Sometimes these are individual decisions, and sometimes individuals may not pick up any tools for months or years.

On other occasions – when I have time – I let the audience decide what they want to do. Mentally I see myself handing the floor over to the audience to decide what they want to do. In reality this is a team based exercise where the teams decide which tools they want to adopt.

If a team wants to say “No thank you” then so be it.

In my experience teams adopting Agile benefit greatly from having ongoing advice on how they are working. Managers benefit from understanding the team, understanding how their own role changes, and understanding how the organization needs to change over time.

Plus: you cannot cram everything a team needs to know into a few hours of training, and it would be wrong to do so. You don’t want to overload people at the start. There are many things that are better talked about when people have had some experience.

Actually, I tend to believe that there are some parts of Agile which people can only learn first hand. They are – almost – incomprehensible, or unbelievable, until one has experience. That is one of the reasons I think managers have trouble grasping agile in full: they are too far removed from the work to experience it first hand.

You see, I believe everyone engages in their own sense making, everyone learns to make sense and meaning in the world themselves. In so much as I have a named educational style it is constructivist. But my philosophy isn’t completely joined up and has some holes, I’m still learning myself.

When I do training I want to give people experiences that help them learn. And that continues into the workplace after the training.

So I also offer coaching, consulting, advice, call it what you will.

But I don’t like being with the team too much. I prefer to drop in. I believe that people and teams need space to create their own understanding. If I were there all the time they wouldn’t get that space, they wouldn’t have those experiences, and possibly they wouldn’t take responsibility for their own changes.

One of my fears about having a “Scrum Master” type figure attached to a team is that that person becomes the embodiment of the change. Do people really take responsibility and ownership if there is someone else there to do it?

I prefer to drop in occasionally. Talk to individuals, teams, talk about how things are going. Talk about their experience. Further their sense making process. Do some additional exercises if it helps. Run a retrospective.

And then I disappear. Leave things with them. Let them own it.

Where technical skills are concerned – principally TDD – it is a little different, because that is a skill that needs to be learned by practice. I don’t tend to do that myself, so I usually involve one of my associates, who may be embedded with a team for a longer period.

Similarly, I do sometimes become embedded in an organization. I can be there for several days a week for many weeks on end. That usually occurs when the organization is larger, or when the problems are bigger. Even then I want to leave as much control with the teams as I can.

On the one hand I’m a very bad person: I accept unwilling participants on my training courses and then don’t provide the day-to-day coaching that many advocate.

On the other hand: what I do works, I’ve seen it work. Sometimes one can benefit from being challenged, sometimes one needs to open one’s mind to new ideas.

If I’m guilty of anything I’m guilty of having a recipe which works differently.

And that team I spoke of to start with?

On day two some people did not return: that was a win. They had worked out that it was not for them and they had taken control. That to me is a success.

Most people did return and at the end, the one who had told me Agile could do nothing for them saw that Agile offered hope. That hope was principally an approach to quality which was diametrically opposite to what he initially thought it was going to be and was probably, although I can’t be sure, the opposite of what his manager thought Agile meant.

It is entirely possible that had his manager been in the room to hear my quality message I’d have been thrown out there and then. And it’s just possible I might have given him food for thought.

But I will never know. I never heard from them again. Which is a shame: I’d love to know how the story ended. But that is something else: I don’t want to force anyone to work with me, and I don’t lock people in. That causes me commercial headaches, and sometimes I see people who stop taking the medicine before they are fully recovered, but that’s what happens when you allow people to exercise free will.

Oh, one more thing, an advert: I’m available for hire. If you like the sound of any of that then check out my Agile Training or just get in touch.

*Tongue in cheek, before you flame me: I’ve exaggerated and pandered to stereotypes for effect and humour.

Read more? Subscribe to my newsletter – free updates on blog posts, insights, events and offers.

The post I am guilty of Agile training appeared first on Allan Kelly Associates.