Medieval guilds: a tax collection bureaucracy

Derek Jones from The Shape of Code

The medieval guild is sometimes held up as the template for an institution dedicated to maintaining high standards, and training the next generation of craftsmen.

“The European Guilds: An economic analysis” by Sheilagh Ogilvie takes a chainsaw (i.e., lots of data) to all the positive things that have been said about medieval guilds (apart from them being a money making machine for those on the inside).

Guilds manipulated markets (e.g., drove down the cost of input items they needed, and kept the prices they charged high), had little or no interest in quality, charged apprentices for what little training they received, restricted entry to their profession (based on the number of guild masters the local population could support in a manner expected by masters), and did not hesitate to use force to enforce the rules of the guild (should a member appear to threaten the livelihood of other guild members).

Guild wars are not just the fiction of an online game; guilds did go to war with each other.

Given their focus on maximizing income, rather than providing customer benefits, why did guilds survive for so many centuries? Guilds paid out significant sums to influence those in power, i.e., bribes. Guilds paid annual sums for the exclusive rights to ply their trade in geographical areas; it’s all down on Vellum.

Guilds provided the bureaucracy needed to collect money from the populace, i.e., they were effectively tax collectors. Medieval rulers had a high turnover, and most were not around long enough to establish a civil service. In later centuries, the growth of a country’s population led to the creation of government departments that were stable enough to perform tax collecting duties more efficiently than guilds; it was the spread of governments capable of doing their own tax collecting that killed off guilds.

The Product Owner Delta

Allan Kelly from Allan Kelly Associates

As regular readers might know I’m working on a book called The Art of Product Ownership to be published by Apress later this year. One of the chapters is entitled “Why have a Product Owner” and a few days ago a bunch of ideas crystallised into this…

The aim of the Product Owner is to increase, even maximise, the business value delivered by the team as a whole. The Product Owner does not so much create value themselves as increase the value created by others.

Think of it like this: if the team randomly selected work to do and delivered it to customers then some value would be created. (For the moment I’ll ignore the scenario where that work detracts from the existing value.) The aim of the PO is to ensure the work done creates more value than a simple random selection. The greater the difference, or delta to use a mathematical term, between random selection and an informed selection the better.

The general hypothesis is that intelligent selection of work by a skilled Product Owner will result in both more value being delivered and an increasing delta between intelligent PO selected work and randomly selected work.

This difference is the value added by a Product Owner. I like to call this difference the Product Owner Delta.
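Expressed as a simple formula (the notation is mine, added purely for illustration, where V is the business value delivered):

\Delta_{PO} = V(\text{work selected by the Product Owner}) - V(\text{randomly selected work})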

Now in real life work is seldom selected randomly, so Product Owners are not competing against random selection. In some cases the alternative to a designated Product Owner is another individual: a senior developer, an architect, a manager or someone similar. In such cases this person is taking on the Product Owner role. They may not have the title, the aptitude, the skills or the official position, but when work is selected by one person they are de facto the Product Owner.

In other cases the alternative to the PO might be selection by consensus on the team, or a sub-set of the team. Now it is entirely possible that such a group could outperform a single Product Owner in selecting work – especially if they have market and customer knowledge, some analysis skills, time to do the background research and so on. In some cases this works, for example think of a small start-up staffed by software developers creating software development tools.

However, in some cases selection by committee might be inferior to a random selection. Imagine a team which has never met a customer, argues about what to do, ducks key decisions and never says No to any request. It’s easy to imagine a dysfunctional selection committee.

There is more to increasing the Product Owner Delta than simply selecting the highest value items. Timely selection can help too. If decisions are not being made, or committees are spending a long time making decisions, then having one person make those decisions in an efficient, timely manner can increase the delta.

Time has another role. Because of cost-of-delay, simply selecting the highest value items at any one point in time does not maximise the value delivered. Time Value Profiles (see Little Book of User Stories or my presentations on value “How much? When?”) expose this and need to be another tool in the Product Owner’s repertoire.

And of course, the Product Owner Delta is not the only reason to have a Product Owner in the team, but it is probably the main reason.



The Power of Hidden Friends in C++

Anthony Williams from Just Software Solutions Blog

"Friendship" in C++ is commonly thought of as a means of allowing non-member functions and other classes to access the private data of a class. This might be done to allow symmetric conversions on non-member comparison operators, or allow a factory class exclusive access to the constructor of a class, or any number of things.

However, this is not the only use of friendship in C++, as there is an additional property to declaring a function or function template a friend: the friend function is now available to be found via Argument-Dependent Lookup (ADL). This is what makes operator overloading work with classes in different namespaces.

Argument Dependent Lookup at Work

Consider the following code snippet:

namespace A{
  class X{
  public:
    X(int i):data(i){}
  private:
    int data;
    friend bool operator==(X const& lhs,X const& rhs){
      return lhs.data==rhs.data;
    }
  };
}
int main(){
  A::X a(42),b(43);
  if(a==b) do_stuff();
}

This code snippet works as you might expect: the compiler looks for an implementation of operator== that works for A::X objects, and there isn't one in the global namespace, so it also looks in the namespace where X came from (A), and finds the operator defined as a friend of class X. Everything is fine. This is ADL at work: the argument to the operator is an A::X object, so the namespace that it comes from (A) is searched as well as the namespace where the usage is.

Note, however, that the comparison operator is not declared anywhere other than the friend declaration. This means that it is only considered for name lookup when one of the arguments is an X object (and thus is "hidden" from normal name lookup). To demonstrate this, let's define an additional class in namespace A, which is convertible to 'X':

namespace A{
  class Y{
  public:
    operator X() const{
      return X(data);
    }
    Y(int i):data(i){}
  private:
    int data;
  };
}
A::Y y(99);
A::X converted=y; // OK

Our Y class has a conversion operator defined, so we can convert it to an X object at will, and it is also in namespace A. You might think that we can compare Y objects, because our comparison operator takes an X, and Y is convertible to X. If you did, you'd be wrong: the comparison operator is only visible to name lookup if one of the arguments is an X object.

int main(){
  A::Y a(1),b(2);
  if(a==b) // ERROR: no available comparison operator
    do_stuff();
}

If we convert one of the arguments to an X then it works, because the comparison operator is now visible, and the other argument is converted to an X to match the function signature:

int main(){
  A::Y a(1),b(2);
  if(A::X(a)==b) // OK
    do_stuff();
}

Similarly, if we declare the comparison operator at namespace scope, everything works too:

namespace A{
  bool operator==(X const& lhs,X const& rhs);
}
int main(){
  A::Y a(1),b(2);
  if(a==b) // OK now
    do_stuff();
}

In this case, the arguments are of type Y, so namespace A is searched, which now includes the declaration of the comparison operator, so it is found, and the arguments are converted to X objects to do the comparison.

If we omit this namespace scope definition, as in the original example, then this function is a hidden friend.

This isn't just limited to operators: normal functions can be defined in friend declarations too, and just as with the comparison operator above, if they are not also declared at namespace scope then they are hidden from normal name lookup. For example:

struct X{
  X(int){}
  friend void foo(X){};
};
int main(){
    X x(42);
    foo(x); // OK, calls foo defined in friend declaration
    foo(99); // Error: foo not found, as int is not X
    ::foo(x); // Error: foo not found as ADL not triggered
}

Benefits of Hidden Friends

The first benefit of hidden friends is that it avoids accidental implicit conversions. In our example above, comparing Y objects doesn't implicitly convert them to X objects to use the X comparison unless you explicitly do something to trigger that behaviour. This can avoid accidental uses of the wrong function too: if I have a function wibble that takes an X and wobble that takes a Y, then a typo in the function name won't trigger the implicit conversion to X:

class X{
  friend void wibble(X const&){}
};

class Y{
  friend void wobble(Y const&){}
public:
  operator X() const;
};

int main(){
  Y y;
  wibble(y); // Error: no function wibble(Y)
}

This also helps spot errors where the typo was on the definition: we meant to define wibble(Y) but misspelled it. With "normal" declarations, the call to wibble(y) would silently call wibble(X(y)) instead, leading to unexpected behaviour. Hopefully this would be caught by tests, but it might make it harder to identify the problem as you'd be checking the definition of wobble, wondering why it didn't work.

Another consequence is that it makes it easier for the compiler: the hidden friends are only checked when there is a relevant argument provided. This means that there are fewer functions to consider for overload resolution, which makes compilation quicker. This is especially important for operators: if you have a large codebase, you might have thousands of classes with operator== defined. If they are declared at namespace scope, then every use of == might have to check a large number of them and perform overload resolution. If they are hidden friends, then they are ignored unless one of the expressions being compared is already of the right type.

In order to truly understand the benefits and use them correctly, we need to know when hidden friends are visible.

Rules for Visibility of Hidden Friends

Firstly, hidden friends must be functions or function templates; callable objects don't count.

Secondly, the call site must use an unqualified name — if you use a qualified name, then that checks only the specified scope, and disregards ADL (which we need to find hidden friends).

Thirdly, normal unqualified lookup must not find anything that isn't a function or function template. If you have a local variable int foo;, and try to call foo(my_object) from the same scope, then the compiler will rightly complain that this is invalid, even if the type of my_object has a hidden friend named foo.

Finally, one of the arguments to the function call must be of a user-defined type, or a pointer or reference to that type.

We now have the circumstances for calling a hidden friend if there is one:

my_object x;
my_object* px=&x;

foo(x);
foo(px);

Both calls to foo in this code will trigger ADL, and search for hidden friends.

ADL searches a set of namespaces that depend on the type of my_object, but that doesn't really matter for now, as you could get to normal definitions of foo in those namespaces by using appropriate qualification. Consider this code:

std::string x,y;
swap(x,y);

ADL will find std::swap, since std::string is in the std namespace, but we could just as well have spelled out std::swap in the first place. Though this is certainly useful, it isn't what we're looking at right now.

The hidden friend part of ADL is that for every argument to the function call, the compiler builds a set of classes to search for hidden friend declarations. This lookup list is built as follows from a source type list, which is initially the types of the arguments supplied to the function call.

Our lookup list starts empty. For each type in the source type list:

  • If the type being considered is a pointer or reference, add the pointed-to or referenced type to the source type list
  • Otherwise, if the type being considered is a built-in type, do nothing
  • Otherwise, if the type is a class type then add it to the lookup list, and check the following:
    • If the type has any direct or indirect base classes, add them to the lookup list
    • If the type is a member of a class, add the containing class to the lookup list
    • If the type is a specialization of a class template, then:
      • add the types of any template type arguments (not non-type arguments or template template arguments) to the source type list
      • if any of the template parameters are template template parameters, and the supplied arguments are member templates, then add the classes of which those templates are members to the lookup list
  • Otherwise, if the type is an enumerated type that is a member of a class, add that class to the lookup list
  • Otherwise, if the type is a function type, add the types of the function return value and function parameters to the source type list
  • Otherwise, if the type is a pointer to a member of some class X, add the class X and the type of the member to the source type list

This gets us a final lookup list which may be empty (e.g. in foo(42)), or may contain a number of classes. All the classes in that lookup list are now searched for hidden friends. Normal overload resolution is used to determine which function call is the best match amongst all the found hidden friends, and all the "normal" namespace-scope functions.

This means that you can add free functions and operators that work on a user-defined type by adding normal namespace-scope functions, or by adding hidden friends to any of the classes in the lookup list for that type.
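For example, a hidden friend is found even when its class only appears as a template argument of an argument's type, because template type arguments are added to the source type list. Here is a minimal sketch (the class and the total function are my own illustration, not from the original post):

#include <vector>

namespace A{
  class X{
  public:
    explicit X(int i):data(i){}
  private:
    int data;
    // hidden friend: X is a template argument of the vector, so class X is
    // on the lookup list for the call in main, and this friend is considered
    friend int total(std::vector<X> const& xs){
      int sum=0;
      for(auto const& x:xs) sum+=x.data;
      return sum;
    }
  };
}

int main(){
  std::vector<A::X> v{A::X(1),A::X(2)};
  int t=total(v); // OK: found via ADL through the template argument A::X
  (void)t;
}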

Adding hidden friends via base classes

In a recent blog post, I mentioned my strong_typedef implementation. The initial design for that used an enum class to specify the permitted operations, but this was rather restrictive, so after talking with some others (notably Peter Sommerlad) about alternative implementation strategies, I switched it to a mixin-based implementation. In this case, the Properties argument is now a variadic parameter pack, which specifies types that provide mixin classes for the typedef. jss::strong_typedef<Tag,Underlying,Prop> then derives from Prop::mixin<jss::strong_typedef<Tag,Underlying,Prop>,Underlying>. This means that the class template Prop::mixin can provide hidden friends that operate on the typedef type, but are not considered for "normal" lookup. Consider, for example, the implementation of jss::strong_typedef_properties::post_incrementable:

struct post_incrementable {
    template <typename Derived, typename ValueType> struct mixin {
        friend Derived operator++(Derived &self, int) noexcept(
            noexcept(std::declval<ValueType &>()++)) {
            return Derived{self.underlying_value()++};
        }
    };
};

This provides an implementation of operator++ which operates on the strong typedef type Derived, but is only visible as a hidden friend, so if you do x++, and x is not a strong typedef that specifies it is post_incrementable then this operator is not considered, and you don't get accidental conversions.

This makes the strong typedef system easily extensible: you can add new property types that define mixin templates to provide both member functions and free functions that operate on the typedef, without making these functions generally visible at namespace scope.
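A stripped-down sketch of the same idea (this is not the real jss::strong_typedef, just an illustration of providing a hidden friend through a mixin base class):

template<typename Derived,typename ValueType>
struct addable_mixin{
  // hidden friend of the mixin: found via ADL because the mixin is a base
  // class of Derived, but never considered for unrelated types
  friend Derived operator+(Derived const& lhs,Derived const& rhs){
    return Derived{lhs.value+rhs.value};
  }
};

struct distance:addable_mixin<distance,int>{
  int value;
  explicit distance(int v):value(v){}
};

int main(){
  distance a(1),b(2);
  distance c=a+b; // OK: the hidden friend is found via the base class
  (void)c;
}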

Hidden Friends and Enumerations

I had forgotten that enumerated types declared inside a class also triggered searching that class for hidden friends until I was trying to solve a problem for a client recently. We had some enumerated types that were being used for a particular purpose, on which we therefore wanted to enable operations that wouldn't be enabled for "normal" enumerated types.

One option was to specialize a global template as I described in my article on Using Enum Classes as Bitfields, but this makes it inconvenient to deal with enumerated types that are members of a class (especially if they are private members), and impossible to deal with enumerated types that are declared at local scope. We also wanted to be able to declare these enums with a macro, which ruled out the specialization approach: you can only declare specializations in the namespace in which the original template is declared, the macro wouldn't know how to switch namespaces, and it wouldn't be usable at class scope.
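For reference, the rejected trait-specialization approach looks something like this (a sketch in the spirit of that article; the trait name is illustrative). The explicit specialization has to appear at namespace scope, which is exactly what rules it out for class members, local enums and our macro:

// primary template: by default an enumerated type is not one of ours
template<typename E>
struct enable_special_enum{
  static constexpr bool value=false;
};

enum class my_flags{a=1,b=2,c=4};

// explicit specialization: only legal at namespace scope, in the namespace
// of the primary template
template<>
struct enable_special_enum<my_flags>{
  static constexpr bool value=true;
};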

This is where hidden friends came to the rescue. You can define a class anywhere you can define an enumerated type, and hidden friends declared in the enclosing class of an enumerated type are considered when calling functions that take that enumeration as a parameter. We could therefore declare our enumerated types with a wrapper class, like so:

struct my_enum_wrapper{
  enum class my_enum{
    // enumerations
  };
};
using my_enum=my_enum_wrapper::my_enum;

The using declaration means that other code can just use my_enum directly without having to know or care about my_enum_wrapper.

Now we can add our special functions, starting with a function to verify this is one of our special enums:

namespace xyz{
  constexpr bool is_special_enum(void*) noexcept{
    return false;
  }
  template<typename T>
  constexpr bool is_special_enum() noexcept{
    return is_special_enum((T*)nullptr);
  }
}

Now we can say xyz::is_special_enum<T>() to check if something is one of our special enumerated types. By default this will call the void* overload, and thus return false. However, the internal call passes a pointer-to-T as the argument, which invokes ADL, and searches hidden friends. We can therefore add a friend declaration to our wrapper class which will be found by ADL:

struct my_enum_wrapper{
  enum class my_enum{
    // enumerations
  };
  friend constexpr bool is_special_enum(my_enum*) noexcept
  {
    return true;
  }
};
using my_enum=my_enum_wrapper::my_enum;

Now, xyz::is_special_enum<my_enum>() will return true. Since this is a constexpr function, it can be used in a constant expression, so can be used with std::enable_if to permit operations only for our special enumerated types, or as a template parameter to specialize a template just for our enumerated types. Of course, some additional operations can also be added as hidden friends in the wrapper class.
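For example, a namespace-scope operator that is only enabled for our special enumerated types might look like this (a sketch building on the xyz helper above; the bitwise-or operator is just an illustration, not something from the original code):

#include <type_traits>

template<typename E,
         typename=std::enable_if_t<xyz::is_special_enum<E>()>>
constexpr E operator|(E lhs,E rhs) noexcept{
  using U=std::underlying_type_t<E>;
  return static_cast<E>(static_cast<U>(lhs)|static_cast<U>(rhs));
}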

Our wrapper macro now looks like this:

#define DECLARE_SPECIAL_ENUM(enum_name,underlying_type,...)\
struct enum_name##_wrapper{\
  enum class enum_name: underlying_type{\
    __VA_ARGS__\
  };\
  friend constexpr bool is_special_enum(enum_name*) noexcept\
  {\
    return true;\
  }\
};\
using enum_name=enum_name##_wrapper::enum_name;

so you can declare a special enum as DECLARE_SPECIAL_ENUM(my_enum,int,a,b,c=42,d). This works at namespace scope, as a class member, and at local scope, all due to the hidden friend.
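Putting it together (with a hypothetical colour enum, used only to show the check in action):

DECLARE_SPECIAL_ENUM(colour,int,red,green,blue=4)

enum class ordinary{x,y};

static_assert(xyz::is_special_enum<colour>(),"declared via the macro");
static_assert(!xyz::is_special_enum<ordinary>(),"a plain enum is not special");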

Summary

Hidden Friends are a great way to add operations to a specific type without permitting accidental implicit conversions, or slowing down the compiler by introducing overloads that it has to consider in other contexts. They also allow declaring operations on types in contexts where you otherwise couldn't. Every C++ programmer should know how to use them, so they can be used where appropriate.

Further On An Ethereal Orrery – student

student from thus spake a.k.

Last time we met we spoke of my fellow students’ and my interest in constructing a model of the motion of heavenly bodies using mathematical formulae in the place of brass. In particular we have sought to do so from first principles using Sir N-----’s law of universal gravitation, which states that the force attracting two bodies is proportional to the product of their masses divided by the square of the distance between them, and his laws of motion, which state that a body will remain at rest or in constant motion unless a force acts upon it, that it will be accelerated in the direction of that force at a rate proportional to its magnitude divided by the body’s mass and that a force acting upon it will be met with an equal force in the opposite direction.
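In modern notation (mine, not the narrator's), the two laws being used are:

F = \frac{G \, m_1 m_2}{r^2} \qquad \text{and} \qquad a = \frac{F}{m}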
Whilst Sir N----- showed that a pair of bodies traversed conic sections under gravity, being those curves that arise from the intersection of planes with cones, the general case of several bodies has proved utterly resistant to mathematical reckoning. We must therefore approximate the equations of motion and I shall now report on our first attempt at doing so.

A zero-knowledge proofs workshop

Derek Jones from The Shape of Code

I was at the Zero-Knowledge proofs workshop run by Binary District on Monday and Tuesday. The workshop runs all week, but is mostly hacking for the remaining days (hacking would be interesting if I had a problem to code, more about this at the end).

Zero-knowledge proofs allow person A to convince person B that A knows the value of x, without revealing the value of x. There are two kinds of zero-knowledge proofs: an interactive proof system involves a sequence of messages being exchanged between the two parties, and in non-interactive systems (the primary focus of the workshop), there is no interaction.

The example usually given, of a zero-knowledge proof, involves Peggy and Victor. Peggy wants to convince Victor that she knows how to unlock the door dividing a looping path through a tunnel in a cave.

The ‘proof’ involves Peggy walking, unseen by Victor, down path A or B (see diagram below; image from Wikipedia). Once Peggy is out of view, Victor randomly shouts out A or B; Peggy then has to walk out of the tunnel using the path Victor shouted; there is a 50% chance that Peggy happened to choose the path selected by Victor. The proof is iterative; at the end of each iteration, Victor’s uncertainty of Peggy’s claim of being able to open the door is reduced by 50%. Victor has to iterate until he is sufficiently satisfied that Peggy knows how to open the door.

[Diagram: the Ali Baba example cave loop, a looping tunnel divided by a locked door; image from Wikipedia]
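To make the arithmetic concrete, here is a toy simulation (my own sketch, not a real proof system): a prover who cannot open the door survives a round only by guessing Victor's challenge in advance, so the chance of fooling him over k rounds is 2^-k.

#include <cstdio>
#include <random>

int main(){
  std::mt19937 gen(42);
  std::bernoulli_distribution coin(0.5);

  const int rounds=20;
  const int trials=100000;
  int cheater_survived=0;

  for(int t=0;t<trials;++t){
    bool caught=false;
    for(int r=0;r<rounds && !caught;++r){
      bool peggy_path=coin(gen);  // path the cheating prover entered by
      bool victor_call=coin(gen); // path Victor shouts out
      if(peggy_path!=victor_call) caught=true; // cannot cross the locked door
    }
    if(!caught) ++cheater_survived;
  }
  std::printf("cheater survived %d of %d trials (expected about %.2f)\n",
              cheater_survived,trials,trials/1048576.0);
}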

As the name suggests, non-interactive proofs do not involve any message passing; in the common reference string model, a string of symbols, generated by the person making the claim of knowledge, is encoded in such a way that it can be used by third parties to verify the claim of knowledge. At the workshop we got an overview of zk-SNARKs (zero-knowledge succinct non-interactive argument of knowledge).

The ‘succinct’ component of zk-SNARK is what has made this approach practical. When non-interactive proofs were first proposed, the arguments of knowledge contained around one terabyte of data; these days common reference strings are around a kilobyte.

The fact that zero-knowledge ‘proofs’ are possible is very interesting, but do they have practical uses?

The hackathon aspect of the workshop was designed to address the practical use issue. The existing zero-knowledge proofs tend to involve the use of prime numbers, or the factors of very large numbers (as might be expected of a proof system that was heavily based on cryptography). Making use of zero-knowledge proofs requires mapping the problem to a form that has a known solution; this is very hard. Existing applications involve cryptography and block-chains (Zcash is a cryptocurrency that has an option that provides privacy via zero-knowledge proofs), both heavy users of number theory.

The workshop introduced us to two languages which could be used for writing zero-knowledge applications: ZoKrates and snarky. The weekend before the workshop, I tried to install both languages: ZoKrates installed quickly and painlessly, while I could not get snarky installed (I was told that the first two hours of the snarky workshop were spent getting installs to work); I also noticed that ZoKrates had greater presence than snarky on the web, in the form of pages discussing the language. It seemed to me that ZoKrates was the market leader. The workshop presenters included people involved with both languages; Jacob Eberhardt (one of the people behind ZoKrates) gave a great presentation, and had good slides. Team ZoKrates is clearly the one to watch.

As an experienced hackathon attendee, I was ready with an interesting problem to solve. After I explained the problem to those opting to use ZoKrates, somebody suggested that oblivious transfer could be used to solve my problem (and indeed, 1-out-of-n oblivious transfer does offer the required functionality).

My problem was: Let’s say I have three software products, the customer has a copy of all three products, and is willing to pay the license fee to use one of these products. However, the customer does not want me to know which of the three products they are using. How can I send them a product specific license key, without knowing which product they are going to use? Oblivious transfer involves a sequence of message exchanges (each exchange involves three messages, one for each product) with the final exchange requiring that I send three messages, each containing a separate product key (one for each product); the customer can only successfully decode the product-specific message they had selected earlier in the process (decoding the other two messages produces random characters, i.e., no product key).

Like most hackathons, problem ideas were somewhat contrived (a few people wanted to delve further into the technical details). I could not find an interesting team to join, and left them to it for the rest of the week.

There were 50-60 people on the first day, and 30-40 on the second. Many of the people I spoke to were recent graduates, and half of the speakers were doing or had just completed PhDs; the field is completely new. If zero-knowledge proofs take off, decisions made over the next year or two by the people at this workshop will impact the path the field follows. Otherwise, nothing happens, and a bunch of people will have interesting memories about stuff they dabbled in, when young.

The Agile virus

Allan Kelly from Allan Kelly Associates

The thing we call Agile is a virus. It gets into organizations and disrupts the normal course of business. In the early days, say before 2010, the corporate anti-bodies could be counted on to root out and destroy the virus before too much damage was done.

But sometimes the anti-bodies didn’t work. As the old maxim says “that which doesn’t kill me makes me stronger.” Sometimes agile made the organization stronger. Software development teams produced more stuff, they delivered on schedule, employees were happier, they had fewer bugs. In smaller, less established companies, the virus infected the company central nervous system, the operating system, and subverted it. Agile became natural.

Perhaps that’s not so odd after all. Fighting infection is one of the ways bodies grow stronger. And some viruses have positive effects – Friendly Viruses Protect Us Against Bacteria (Science Magazine), ‘Good viruses’ defend gut when bacteria are wiped out (New Scientist), 10 Viruses That Actually Help Humankind (ListVerse) – and viruses play a role in evolution by removing the weaker members of the species.

The problem is, once the virus is inside the organization/organism it wants to grow and expand. If you don’t kill it then it will infect more and more of the body. Hence, software teams that contracted the agile virus and found it beneficial were allowed to survive, but at the same time the virus spread downstream to the requirements process. Business Analysts and Product Managers had to become agile too.

Once you are infected you start to see the world through infected eyes. Over time the project model looked increasingly counterproductive. Growth of the agile virus led to the #NoProjects movement as the virus started to change management models.

Similar things are happening in the accounting and budgeting field. As the agile virus takes hold, and especially once the #NoProjects mutation kicks in, the annual budget process looks crazy. Agile creates a force for more change; agile demands Beyond Budgeting. Sure you can do agile in a traditional budget environment, but the more you do the more contradictions you see and the more problems you encounter.

Then there is “human resources” – or to give it a more humane name, personnel. Traditional staff recruitment, line management, individual bonuses and retention policies start to look wrong when you are infected by agile. Forces grow to change things; the more the organization resists the virus the more those infected by the virus grow discontented and the more unbalanced things become.

It carries on. The more successful agile is the greater the forces pressing for more change.

While companies fail to recognise these forces, the forces grow. Hierarchical organizations and cultures (like banks) have this really bad. At the highest level they have come to recognise the advantages of the agile virus, but to embrace it entirely is to destroy the essence of the organization.

Countless companies try to contain the agile virus but to do so they need to exert more and more energy. Really they need to kill it or embrace it and accept the mutation that is the virus.

Ultimately it all comes down to forces. The forces of status quo and traditional working (Theory X) on one side against the forces of twenty-first century digital enabled millennial workforce (Theory Y) on the other. Victory for the virus is inevitable but that does not mean every organization will be victorious or even survive. Those who can harness the virus fastest stand to gain the most.

The virus has been released; putting the genie back in the bottle is going to be hard – although the paradox of digital technology is that while the digital elite stand to gain, the digital underclass (think Amazon warehouse workers) stands to lose.

All companies need to try to embrace the virus; to not do so would be to condemn themselves. But not all will succeed, in fact most will fail trying. Their failures will allow space for newcomers to succeed, that is the beauty of capitalism. Unfortunately that space might also be grabbed by the winners, the companies that have let the virus take over the organization.



Lehman ‘laws’ of software evolution

Derek Jones from The Shape of Code

The so-called Lehman laws of software evolution originated in a 1968 study, and evolved during the 1970s; the book “Program Evolution: processes of software change” by Lehman and Belady was published in 1985.

The original work was based on measurements of OS/360, IBM’s flagship operating system for the computer industry’s flagship computer. IBM dominated the computer industry from the 1950s through to the early 1980s; OS/360 was the Microsoft Windows, Android, and iOS of its day (in fact, it had more developer mind share than any of these operating systems).

In its day, the Lehman dataset not only bathed in reflected OS/360 developer mind-share, it was the only public dataset of its kind. But today, this dataset wouldn’t get a second look. Why? Because it contains just 19 measurement points, specifying: release date, number of modules, fraction of modules changed since the last release, number of statements, and number of components (I’m guessing these are high level programs or interfaces). Some of the OS/360 data is plotted in graphs appearing in early papers, and can be extracted; some of the graphs contain 18, rather than 19, points, and some of the values are not consistent between plots (extracted data); in later papers Lehman does point out that no statistical analysis of the data appears in his work (the purpose of the plots appears to be decorative, some papers don’t contain them).

One of Lehman’s early papers says that “… conclusions are based, comes from systems ranging in age from 3 to 10 years and having been made available to users in from ten to over fifty releases.”, but no other details are given. A 1997 paper lists module sizes for 21 releases of a financial transaction system.

Lehman’s ‘laws’ started out as a handful of observations about one very large software development project. Over time ‘laws’ have been added, deleted and modified; the Wikipedia page lists the ‘laws’ from the 1997 paper. Lehman retired from research in 2002.

The Lehman ‘laws’ of software evolution are still widely cited by academic researchers, almost 50 years later. Why is this? The two main reasons are: the ‘laws’ are sufficiently vague that it’s difficult to prove them wrong, and Lehman made a large investment in marketing these ‘laws’ (e.g., publishing lots of papers discussing these ‘laws’, and supervising PhD students who researched software evolution).

The Lehman ‘laws’ are not useful, in the sense that they cannot be used to make predictions; they apply to large systems that grow steadily (i.e., the kind of systems originally studied), and so don’t apply to systems that are completely rewritten. These ‘laws’ are really an indication that software engineering research has been in a state of limbo for many decades.

Visual Lint 7.0.1.308 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 7.0. The following changes are included:

  • System include folder and for loop compliance settings can now be read from Visual Studio 2019 project files which use the v142 toolset.
  • Device specific includes (e.g. "<ProjectFolder>/RTE/Device/TLE9879QXA40") are now read when loading Keil uVision 5 projects.
  • Updated the value of _MSC_FULL_VER in the PC-lint Plus compiler indirect files for Microsoft Visual Studio 6.0, 2008, 2010, 2012, 2013, 2015, 2017 and 2019 to reflect the version of the most recent update of the compiler shipped with each. Also enabled C++ 17 support (-std=c++17) with Visual Studio 2017 (VS2017 update 9.4 is C++ 17 complete) and Visual Studio 2019.
  • Removed an erroneous directive from the PC-lint Plus library indirect file rb-win32-pclint10.lnt (an implementation file invoked by lib-rb-win32.lnt).
  • Fixed a bug in the Analysis Tool Options Page which could prevent the "Browse for installation folder" button from working correctly.
  • Fixed a bug in the Project Properties Dialog which affected the editing of configurations within custom projects.
  • Fixed a bug in the VisualLintGui Products Display "Remove File" context menu command.
  • The Configuration Wizard "Select Analysis Tool Installation Folder" page no longer indicates that you can buy PC-lint licences from Riverblade as this analysis tool has now been superseded by PC-lint Plus.
  • VisualLintGui now displays the correct page title when loading the "Blog" page on the Riverblade website.
  • Fixed a typo in the "Product Not Configured" dialog.

Download Visual Lint 7.0.1.308

PCTE: a vestige of a bygone era of ISO standards

Derek Jones from The Shape of Code

The letters PCTE (Portable Common Tool Environment) might stir vague memories, for some readers. Don’t bother checking Wikipedia, there is no article covering this PCTE (although it is listed on the PCTE acronym page).

The ISO/IEC Standard 13719 Information technology — Portable common tool environment (PCTE), along with its three parts, has reached its 5-yearly renewal time.

The PCTE standard, in itself, is not interesting; as far as I know it was dead on arrival. What is interesting is the mindset, from a bygone era, that thought such a standard was a good idea; and, the continuing survival of a dead on arrival standard sheds an interesting light on ISO standards in the 21st century.

PCTE came out of the European Union’s first ESPRIT project, which ran from 1984 to 1989. Dedicated workstations for software developers were all the rage (no, not those toy microprocessor-based thingies, but big beefy machines with 15-inch displays, and over a megabyte of memory), and computer-aided software engineering (CASE) tools were going to provide a huge productivity boost.

PCTE is a specification for a tool interface, i.e., an interface whereby competing CASE tools could provide data interoperability. The promise of CASE tools never materialized, and they faded away, removing the need for an interface standard.

CASE tools and PCTE are from an era where lots of managers still thought that factory production methods could be applied to software development.

PCTE was a European-funded project coordinated by a company that was, at the time, a mainframe manufacturer. Big is beautiful, and specifications with clout are ISO standards (ECMA was used to fast track the document).

At the time Ada was the language that everybody was going to be writing in the future; so, of course, there is an Ada binding (there is also a C one, cannot ignore reality too much).

Why is there still an ISO standard for PCTE? All standards are reviewed every 5-years, countries have to vote to keep them, or not, or abstain. How has this standard managed to ‘live’ so long?

One explanation is that by being dead on arrival, PCTE never got the chance to annoy anybody, and nobody got to know anything about it. Standards committees tend to be content to leave things as they are; it would be impolite to vote to remove a document from the list of approved standards, without knowing anything about the subject area covered.

The members of IST/5, the British Standards committee responsible (yes, it falls within programming languages), know they know nothing about PCTE (and that its usage is likely to be rare to non-existent), and could vote ABSTAIN. However, some member countries of SC22 might vote YES, because while they know they know nothing about PCTE, they probably know nothing about most of the documents, and a YES vote does not require any explanation (no, I am not suggesting some countries have joined SC22 to create a reason for flunkies to spend government money on international travel).

Prior to the Internet, ISO standards were only available in printed form. National standards bodies were required to hold printed copies of ISO standards, ready for when an order arrived. When a standard with zero sales in the previous 5 years came up for review, a pleasant person might show up at the IST/5 meeting (or have a quiet word with the chairman beforehand); did we really want to vote to keep this document as a standard? Just think of the shelf space (I never heard them mention the children of dead trees). Now they have pdfs occupying rotating rust.