2019 in the programming language standards’ world

Derek Jones from The Shape of Code

Last Tuesday I was at the British Standards Institute for a meeting of IST/5, the committee responsible for programming language standards in the UK.

There has been progress on a few issues discussed last year, and one interesting point came up.

It is starting to look as if there might be another iteration of the Cobol Standard. A handful of people, in various countries, have started to nibble around the edges of various new (in the Cobol sense) features. No, the INCITS Cobol committee (the people who used to do all the heavy lifting) has not been reformed; the work now appears to be driven by people who cannot let go of their involvement in Cobol standards.

ISO/IEC 23360-1:2006, the ISO version of the Linux Standard Base, has been updated and we were asked for a UK position on the document being published. Abstain seemed to be the only sensible option.

Our WG20 representative reported that the ongoing debate over pile of poo emoji has crossed the chasm (he did not exactly phrase it like that). Vendors want to have the freedom to specify code-points for use with their own emoji, e.g., pineapple emoji. The heady days, of a few short years ago, when an encoding for all the world’s character symbols seemed possible, have become a distant memory (the number of unhandled logographs on ancient pots and clay tablets was declining rapidly). Who could have predicted that the dream of a complete encoding of the symbols used by all the world’s languages would be dashed by pile of poo emoji?

The interesting news is from WG9. The document intended to become the Ada20 standard was due to enter the voting process in June, i.e., the committee considered it done. At the end of April the main Ada compiler vendor asked for the schedule to be slipped by a year or two, to enable them to get some implementation experience with the new features; oops. I have been predicting that in the future language ‘standards’ will be decided by the main compiler vendors, and the future is finally starting to arrive. What is the incentive for the GNAT compiler people to pay any attention to proposals written by a bunch of non-customers (ok, some of them might work for customers)? One answer is that Ada users tend to be large bureaucratic organizations (e.g., the DOD), who like to follow standards, and might fund GNAT to implement the new document (perhaps this delay by GNAT is all about funding, or lack thereof).

Right on cue, C++ users have started to notice that C++20 adds support for a system header named version, which conflicts with the common existing practice of using a file called version to hold versioning information; this is a problem if the header search path used by the compiler includes a project’s top-level directory (which is where the versioning file version often sits). So the WG21 committee decides on what it thinks is a good idea, implementors implement it, and users complain; implementors now have a good reason not to follow a requirement in the standard, to keep users happy. Will WG21 be apologetic, or get all high and mighty? We will have to wait and see.
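
A minimal sketch of how the clash can occur (the project layout and compiler flags are illustrative, not taken from any particular report):

// Hypothetical project layout: ./version holds the release number and ./main.cpp
// is compiled with the project root on the header search path, e.g.:
//   g++ -std=c++20 -I. main.cpp
#include <version>   // intended: the new C++20 feature-test header
                     // actual:   ./version is found first, and its contents are not valid C++
int main() {}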

Building an all-in-one Jar in Gradle with the Kotlin DSL

Andy Balaam from Andy Balaam's Blog

To build a “fat” Jar of your Java or Kotlin project that contains all the dependencies within a single file, you can use the shadow Gradle plugin.

I found it hard to find clear documentation on how it works using the Gradle Kotlin DSL (with a build.gradle.kts instead of build.gradle) so here is how I did it:

$ cat build.gradle.kts 
import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar

plugins {
    kotlin("jvm") version "1.3.41"
    id("com.github.johnrengelman.shadow") version "5.1.0"
}

repositories {
    mavenCentral()
}

dependencies {
    implementation(kotlin("stdlib"))
}

tasks.withType<ShadowJar>() {
    manifest {
        attributes["Main-Class"] = "HelloKt"
    }
}

$ cat src/main/kotlin/Hello.kt 
fun main() {
    println("Hello!")
}

$ gradle wrapper --gradle-version 5.5
BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

$ ./gradlew shadowJar
BUILD SUCCESSFUL in 1s
2 actionable tasks: 2 executed

$ java -jar build/libs/hello-all.jar 
Hello!

Complexity is a source of income in open source ecosystems

Derek Jones from The Shape of Code

I am someone who regularly uses R, and my interest in programming languages means that on a semi-regular basis I spend time reading blog posts about the language. Over the last year or so, I have noticed several patterns of behavior, and after reading a recent blog post things started to make sense (the blog post gets a lot of things wrong, but more of that later).

What are the patterns that have caught my attention?

Some background: Hadley Wickham is the guy behind some very useful R packages. Hadley was an academic, and is now the chief scientist at RStudio, the company behind the R language specific IDE of the same name. As Hadley’s thinking about how to manipulate data has evolved, he has created new packages, and has been very prolific. The term Hadley-verse was coined to describe an approach to data manipulation and program structuring, based around use of packages written by the man.

For the last nine months I have noticed that the term Tidyverse is being used more regularly to describe what had been the Hadley-verse. And???

Another thing that has become very noticeable, over the last six months, is the extent to which a wide range of packages now have dependencies on packages in the Hadley-verse/Tidyverse. And???

A recent post by Norman Matloff complains about the Tidyverse’s complexity (and about the consistency between its packages, which I had always thought was a good design principle), and how RStudio’s promotion of the Tidyverse could result in it becoming the dominant R world view. Matloff has an academic world view and misses what is going on.

RStudio, the company, needs to sell its services (its IDE is clunky and will be wiped out if a top-of-the-range product, such as JetBrains, adds support for R). If R were simple to use, companies would have less need to hire external experts. A widely used complicated library of packages is a god-send for a company looking to sell R services.

I don’t think Hadley Wickham intentionally made things complicated, any more than the creators of the Microsoft server protocols added interdependencies to make life difficult for competitors.

A complex package ecosystem was probably not part of RStudio’s product vision, at least for many years. But sooner or later, RStudio management will have realised that simplicity and ease of use is not in their interest.

Once a collection of complicated packages exist, it is in RStudio’s interest to get as many other packages using them, as quickly as possible. Infect the host quickly, before anybody notices; all the while telling people how much the company is investing in the community that it cares about (making lots of money from).

Having this package ecosystem known as the Hadley-verse gives too much influence to one person, and makes it difficult to fire him later. Rebranding as the Tidyverse solves these problems.

Matloff accuses RStudio of monopoly behavior; I would have said they are fighting for survival (i.e., creating an environment capable of generating the kind of income a VC-funded company is expected to make). Having worked in language environments where multiple, and incompatible, package ecosystems existed, I can see advantages in there being a monopoly. Matloff is also upset about a commercial company swooping in to steal their precious, a common academic complaint (academics swooping in to steal ideas from commercially developed software is, of course, perfectly respectable). Matloff also makes claims about the teachability of programming that are not derived from any experimental evidence, but then everybody makes claims about programming languages without there being any experimental evidence.

RStudio management rode in on the data science wave, raising money from VCs. The wave is subsiding and they now need to appear to have a viable business (so they can be sold to a bigger fish), which means there has to be a visible market they can sell into. One way to sell in an open source environment is for things to be so complicated, that large companies will pay somebody to handle the complexity.

A Place In The Hierarchy – a.k.

a.k. from thus spake a.k.

Last time we implemented the clusterings type to store a set of clustering objects in order to represent hierarchical clusterings, which are sequences of clusterings having the property that if a pair of data are in the same cluster in one clustering then they will be in the same cluster in the next, where clusters are subsets of a set of data that are in some sense similar to each other.
We then went on to define the ak.clade type to represent hierarchical clusterings as trees, so named because that's what they're called in biology when they are used to show the relationships between species and their common ancestors.
Now that we have those structures in place we're ready to see how to create hierarchical clusterings and so in this post we shall start with a simple, general purpose, but admittedly rather inefficient, way to do so.

How much is a 1-hour investment today worth a year from now?

Derek Jones from The Shape of Code

Today, I am thinking of investing 1-hour of effort adding more comments to my code; how much time must this investment save me X-months from now, for today’s 1-hour investment to be worthwhile?

Obviously, I must save at least 1-hour. But, the purpose of making an investment is to receive a greater amount at a later time; ‘paying’ 1-hour to get back 1-hour is a poor investment (unless I have nothing else to do today, and I’m likely to be busy in the coming months).

The usual economics-based answer relies on compound interest, the technique your bank uses to calculate how much you owe them (or perhaps they owe you), i.e., the expected future value grows exponentially at some interest rate.

Psychologists were surprised to find that people don’t estimate future value the way economists do. Hyperbolic discounting provides a good match to the data from experiments that asked subjects to value future payoffs. The form of the equation used by economists is: e^{-kD}, while hyperbolic discounting has the form 1/(1+kD), where: k is a constant, and D the period of time.
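
To see the difference between the two discount functions, here is a minimal sketch comparing them at a few horizons (the value k=0.5 per year is an illustrative assumption, not taken from any study):

#include <cmath>
#include <cstdio>

int main() {
    const double k = 0.5;                              // illustrative discount constant
    const double horizons[] = {0.5, 1.0, 2.0, 5.0};    // years into the future
    for (double D : horizons) {
        double exponential = std::exp(-k * D);         // economists: e^{-kD}
        double hyperbolic  = 1.0 / (1.0 + k * D);      // psychologists: 1/(1+kD)
        std::printf("D=%.1f years  exponential=%.3f  hyperbolic=%.3f\n",
                    D, exponential, hyperbolic);
    }
}

With the same k the two curves agree closely for small D, but the hyperbolic one declines more slowly as D grows; its implied discount rate falls over time, while the exponential rate stays constant.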

The simple economic approach does not explicitly include the risk that one of the parties involved may cease to exist. Including risk is non-trivial; banks handle the risk that you might disappear by asking for collateral, or by adding something to the interest rate charged.

The fact that humans, and some other animals, have been found to use hyperbolic discounting suggests that evolution has found that this approach to discounting time increases the likelihood of genes being passed on to the next generation. A bird in the hand is worth two in the bush.

How do software developers discount investment in software engineering projects?

The paper Temporal Discounting in Technical Debt: How do Software Practitioners Discount the Future? describes a study that specifies a decision that has to be made and two options, as follows:

“You are managing an N-year project. You are ahead of schedule in the current iteration. You have to decide between two options on how to spend your upcoming week. Fill in the blank to indicate the least amount of time that would make you prefer Option 2 over Option 1.

  • Option 1: Implement a feature that is in the project backlog, scheduled for the next iteration. (five person days of effort).
  • Option 2: Integrate a new library (five person days of effort) that adds no new functionality but has a 60% chance of saving you person days of effort over the duration of the project (with a 40% chance that the library will not result in those savings).”

Subjects are then asked six questions, each having the following form (for various time frames):

“For a project time frame of 1 year, what is the smallest number of days that would make you prefer Option 2? ___”

The experiment is run twice, using professional developers from two companies, C1 and C2 (23 and 10 subjects, respectively), and the data is available for download :-)

The following plot shows normalised values given by some of the subjects from company C1, for the various time periods used. On a log scale, values estimated using the economists’ exponential approach would form a straight line (e.g., close to the first five points of subject M, bottom right), and values estimated using the hyperbolic approach would have the concave form seen for subject C (top middle) (code+data).

[Plot: normalised return required, for various elapsed years]

Subject B is asking for less, not more, over a longer time period (several other subjects have the same pattern of response). Why did the responses of subject E (and most of those from subject G) not vary with time? Perhaps they were tired and were not willing to think hard about the problem, or perhaps they did not think the answer made much difference. The subjects from company C2 showed a greater variety of responses. Company C1 had some involvement with financial applications, while company C2 was involved in simulations. Did this domain knowledge spill over into company C1’s developers being more likely to give roughly consistent answers?

The experiment was run online, rather than with an experimenter in the room with subjects. It is possible that subjects would have invested more effort in a more formal setting, with an experimenter who had made the effort to be present. Also, if an experimenter had been present, it would have been possible to ask questions to clarify any issues.

Both exponential and hyperbolic equations can be fitted to the data, but given the diversity of answers, it is difficult to put any weight on either regression model. Some subjects clearly gave responses fitting a hyperbolic equation, others gave responses fitted approximately well by either approach, and the remainder did not fit either model well. It was possible to fit the combined data from all of company C1’s subjects to a single hyperbolic equation model (the most significant between-subject variation was the value of the intercept); no such luck with the data from company C2.

I’m very pleased to see there has been a replication of this study, but the current version of the paper is a jumble of ideas, and is thin on experimental procedure. I’m sure it will improve.

What do we learn from this study? Perhaps that developers need to learn something about calculating expected future payoffs.
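
A simple risk-neutral benchmark (my arithmetic, ignoring discounting and treating the foregone feature as worth its five days of effort): Option 2 pays off with probability 0.6, so it breaks even when 0.6 × S = 5, i.e., when the promised saving S is roughly 8.3 person days; any larger saving favours Option 2, whatever the project time frame.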

Medieval guilds: a tax collection bureaucracy

Derek Jones from The Shape of Code

The medieval guild is sometimes held up as the template for an institution dedicated to maintaining high standards, and training the next generation of craftsmen.

“The European Guilds: An economic analysis” by Sheilagh Ogilvie takes a chainsaw (i.e., lots of data) to all the positive things that have been said about medieval guilds (apart from them being a money making machine for those on the inside).

Guilds manipulated markets (e.g., drove down the cost of input items they needed, and kept the prices they charged high), had little or no interest in quality, charged apprentices for what little training they received, restricted entry to their profession (based on the number of guild masters the local population could support in a manner expected by masters), and did not hesitate to use force to enforce the rules of the guild (should a member appear to threaten the livelihood of other guild members).

Guild wars are not just the fiction of an online game: guilds did go to war with each other.

Given their focus on maximizing income, rather than providing customer benefits, why did guilds survive for so many centuries? Guilds paid out significant sums to influence those in power, i.e., bribes. Guilds paid annual sums for the exclusive rights to ply their trade in geographical areas; it’s all down on Vellum.

Guilds provided the bureaucracy needed to collect money from the populace, i.e., they were effectively tax collectors. Medieval rulers had a high turnover, and most were not around long enough to establish a civil service. In later centuries, the growth of a country’s population led to the creation of government departments that were stable enough to perform tax collecting duties more efficiently than guilds; it was the spread of governments capable of doing their own tax collecting that killed off guilds.

The Product Owner Delta

Allan Kelly from Allan Kelly Associates

[Image: ValueAddPO diagram]

As regular readers might know I’m working on a book called The Art of Product Ownership to be published by Apress later this year. One of the chapters is entitled “Why have a Product Owner” and a few days ago a bunch of ideas crystallised into this…

The aim of the Product Owner is to increase, even maximise, the business value delivered by the team as a whole. The Product Owner does not so much create value themselves as increase the value created by others.

Think of it like this: if the team randomly selected work to do and delivered it to customers then some value would be created. (For the moment I’ll ignore the scenario where that work detracts from the existing value.) The aim of the PO is to ensure the work done creates more value than a simple random selection. The greater the difference, or delta to use a mathematical term, between random selection and an informed selection, the better.

The general hypothesis is that intelligent selection of work by a skilled Product Owner will result in both more value being delivered and an increasing delta between intelligent PO selected work and randomly selected work.

This difference is the value added by a Product Owner. I like to call this difference the Product Owner Delta.

Now in real life work is seldom selected randomly, so Product Owners are not competing against random selection. In some cases the alternative to a designated Product Owner is another individual: a senior developer, an architect, a manager. In such cases this person is taking on the Product Owner role. They may not have the title, the aptitude, the skills or official position, but when work is selected by one person they are de facto the Product Owner.

In other cases the alternative to the PO might be selection by consensus on the team, or a sub-set of the team. Now it is entirely possible that such a group could outperform a single Product Owner in selecting work – especially if they have market and customer knowledge, some analysis skills, time to do the background research and so on. In some cases this works, for example think of a small start-up staffed by software developers creating software development tools.

However, in some cases selection by committee might be inferior to a random selection. Imagine a team which has never met a customer, argues about what to do, ducks key decisions and never says No to any request. It’s easy to imagine a dysfunctional selection committee.

There is more to increasing the Product Owner Delta than simply selecting the highest value items. Timely selection can help too. If decisions are not being made, or committees are spending a long time making decisions, then having one person simply make those decisions in an efficient, timely manner can increase the delta.

Time has another role. Because of cost-of-delay, simply selecting the highest value items at any one point in time does not maximise the value delivered. Time Value Profiles (see Little Book of User Stories or my presentations on value “How much? When?”) expose this and need to be another tool in the Product Owner’s repertoire.

And of course, the Product Owner Delta is not the only reason to have a Product Owner in the team, but it is probably the main reason.



The Power of Hidden Friends in C++

Anthony Williams from Just Software Solutions Blog

"Friendship" in C++ is commonly thought of as a means of allowing non-member functions and other classes to access the private data of a class. This might be done to allow symmetric conversions on non-member comparison operators, or allow a factory class exclusive access to the constructor of a class, or any number of things.

However, this is not the only use of friendship in C++, as there is an additional property to declaring a function or function template a friend: the friend function is now available to be found via Argument-Dependent Lookup (ADL). This is what makes operator overloading work with classes in different namespaces.

Argument Dependent Lookup at Work

Consider the following code snippet:

namespace A{
  class X{
  public:
    X(int i):data(i){}
  private:
    int data;
    friend bool operator==(X const& lhs,X const& rhs){
      return lhs.data==rhs.data;
    }
  };
}
int main(){
  A::X a(42),b(43);
  if(a==b) do_stuff();
}

This code snippet works as you might expect: the compiler looks for an implementation of operator== that works for A::X objects, and there isn't one in the global namespace, so it also looks in the namespace where X came from (A), and finds the operator defined as a friend of class X. Everything is fine. This is ADL at work: the argument to the operator is an A::X object, so the namespace that it comes from (A) is searched as well as the namespace where the usage is.

Note, however, that the comparison operator is not declared anywhere other than the friend declaration. This means that it is only considered for name lookup when one of the arguments is an X object (and thus is "hidden" from normal name lookup). To demonstrate this, let's define an additional class in namespace A, which is convertible to 'X':

namespace A{
  class Y{
  public:
    operator X() const{
      return X(data);
    }
    Y(int i):data(i){}
  private:
    int data;
  };
}
A::Y y(99);
A::X converted=y; // OK

Our Y class has a conversion operator defined, so we can convert it to an X object at will, and it is also in namespace A. You might think that we can compare Y objects, because our comparison operator takes an X, and Y is convertible to X. If you did, you'd be wrong: the comparison operator is only visible to name lookup if one of the arguments is an X object.

int main(){
  A::Y a(1),b(2);
  if(a==b) // ERROR: no available comparison operator
    do_stuff();
}

If we convert one of the arguments to an X then it works, because the comparison operator is now visible, and the other argument is converted to an X to match the function signature:

int main(){
  A::Y a(1),b(2);
  if(A::X(a)==b) // OK
    do_stuff();
}

Similarly, if we declare the comparison operator at namespace scope, everything works too:

namespace A{
  bool operator==(X const& lhs,X const& rhs);
}
int main(){
  A::Y a(1),b(2);
  if(a==b) // OK now
    do_stuff();
}

In this case, the arguments are of type Y, so namespace A is searched, which now includes the declaration of the comparison operator, so it is found, and the arguments are converted to X objects to do the comparison.

If we omit this namespace scope definition, as in the original example, then this function is a hidden friend.

This isn't just limited to operators: normal functions can be defined in friend declarations too, and just as with the comparison operator above, if they are not also declared at namespace scope then they are hidden from normal name lookup. For example:

struct X{
  X(int){}
  friend void foo(X){};
};
int main(){
    X x(42);
    foo(x); // OK, calls foo defined in friend declaration
    foo(99); // Error: foo not found, as int is not X
    ::foo(x); // Error: foo not found as ADL not triggered
}

Benefits of Hidden Friends

The first benefit of hidden friends is that it avoids accidental implicit conversions. In our example above, comparing Y objects doesn't implicitly convert them to X objects to use the X comparison unless you explicitly do something to trigger that behaviour. This can avoid accidental uses of the wrong function too: if I have a function wibble that takes an X and wobble that takes a Y, then a typo in the function name won't trigger the implicit conversion to X:

class X{
friend void wibble(X const&){}
};

class Y{
friend void wobble(Y const&){}
public:
operator X() const;
};

int main(){
  Y y;
  wibble(y); // Error no function wibble(Y)
}

This also helps spot errors where the typo was on the definition: we meant to define wibble(Y) but misspelled it. With "normal" declarations, the call to wibble(y) would silently call wibble(X(y)) instead, leading to unexpected behaviour. Hopefully this would be caught by tests, but it might make it harder to identify the problem as you'd be checking the definition of wobble, wondering why it didn't work.

Another consequence is that it makes it easier for the compiler: the hidden friends are only checked when there is a relevant argument provided. This means that there are fewer functions to consider for overload resolution, which makes compilation quicker. This is especially important for operators: if you have a large codebase, you might have thousands of classes with operator== defined. If they are declared at namespace scope, then every use of == might have to check a large number of them and perform overload resolution. If they are hidden friends, then they are ignored unless one of the expressions being compared is already of the right type.

In order to truly understand the benefits and use them correctly, we need to know when hidden friends are visible.

Rules for Visibility of Hidden Friends

Firstly, hidden friends must be functions or function templates; callable objects don't count.

Secondly, the call site must use an unqualified name — if you use a qualified name, then that checks only the specified scope, and disregards ADL (which we need to find hidden friends).

Thirdly, normal unqualified lookup must not find anything that isn't a function or function template. If you have a local variable int foo;, and try to call foo(my_object) from the same scope, then the compiler will rightly complain that this is invalid, even if the type of my_object has a hidden friend named foo.

Finally, one of the arguments to the function call must be of a user-defined type, or a pointer or reference to that type.

We now have the circumstances for calling a hidden friend if there is one:

my_object x;
my_object* px=&x;

foo(x);
foo(px);

Both calls to foo in this code will trigger ADL, and search for hidden friends.
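
A sketch of the flip side (the names here are mine): a qualified call skips ADL, and a non-function found by normal unqualified lookup blocks it:

namespace B{
  class my_object{
    friend void foo(my_object const&){}   // hidden friend
  };
}

int main(){
  B::my_object x;
  foo(x);        // OK: unqualified call, ADL searches my_object for hidden friends
  // B::foo(x);  // Error: qualified name disables ADL, and B::foo is not otherwise declared
  int foo=0;     // a non-function named foo in the current scope
  // foo(x);     // Error: unqualified lookup finds the local variable, so ADL is not used
  (void)foo;
}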

ADL searches a set of namespaces that depend on the type of my_object, but that doesn't really matter for now, as you could get to normal definitions of foo in those namespaces by using appropriate qualification. Consider this code:

std::string x,y;
swap(x,y);

ADL will find std::swap, since std::string is in the std namespace, but we could just as well have spelled out std::swap in the first place. Though this is certainly useful, it isn't what we're looking at right now.

The hidden friend part of ADL is that for every argument to the function call, the compiler builds a set of classes to search for hidden friend declarations. This lookup list is built as follows from a source type list, which is initially the types of the arguments supplied to the function call.

Our lookup list starts empty. For each type in the source type list:

  • If the type being considered is a pointer or reference, add the pointed-to or referenced type to the source type list
  • Otherwise, if the type being considered is a built-in type, do nothing
  • Otherwise, if the type is a class type then add it to the lookup list, and check the following:
    • If the type has any direct or indirect base classes, add them to the lookup list
    • If the type is a member of a class, add the containing class to the lookup list
    • If the type is a specialization of a class template, then:
      • add the types of any template type arguments (not non-type arguments or template template arguments) to the source type list
      • if any of the template parameters are template template parameters, and the supplied arguments are member templates, then add the classes of which those templates are members to the lookup list
  • Otherwise, if the type is an enumerated type that is a member of a class, add that class to the lookup list
  • Otherwise, if the type is a function type, add the types of the function return value and function parameters to the source type list
  • Otherwise, if the type is a pointer to a member of some class X, add the class X and the type of the member to the source type list

This gets us a final lookup list which may be empty (e.g. in foo(42)), or may contain a number of classes. All the classes in that lookup list are now searched for hidden friends. Normal overload resolution is used to determine which function call is the best match amongst all the found hidden friends, and all the "normal" namespace-scope functions.
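
As a sketch of one of the less obvious rules (the class and function names are mine): a hidden friend taking std::vector<X> can be found because the template type argument X adds class X, and hence its hidden friends, to the lookup list:

#include <vector>

namespace A{
  class X{
  public:
    X(int i):data(i){}
  private:
    int data;
    // hidden friend: found via ADL because X is a template type argument of the vector
    friend int sum(std::vector<X> const& xs){
      int total=0;
      for(auto const& item:xs) total+=item.data;
      return total;
    }
  };
}

int main(){
  std::vector<A::X> values{1,2,3};
  int total=sum(values); // OK: the lookup list contains A::X, so its hidden friend sum is found
  (void)total;
}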

This means that you can add free functions and operators that work on a user-defined type by adding normal namespace-scope functions, or by adding hidden friends to any of the classes in the lookup list for that type.

Adding hidden friends via base classes

In a recent blog post, I mentioned my strong_typedef implementation. The initial design for that used an enum class to specify the permitted operations, but this was rather restrictive, so after talking with some others (notably Peter Sommerlad) about alternative implementation strategies, I switched it to a mixin-based implementation. In this case, the Properties argument is now a variadic parameter pack, which specifies types that provide mixin classes for the typedef. jss::strong_typedef<Tag,Underlying,Prop> then derives from Prop::mixin<jss::strong_typedef<Tag,Underlying,Prop>,Underlying>. This means that the class template Prop::mixin can provide hidden friends that operate on the typedef type, but are not considered for "normal" lookup. Consider, for example, the implementation of jss::strong_typedef_properties::post_incrementable:

struct post_incrementable {
    template <typename Derived, typename ValueType> struct mixin {
        friend Derived operator++(Derived &self, int) noexcept(
            noexcept(std::declval<ValueType &>()++)) {
            return Derived{self.underlying_value()++};
        }
    };
};

This provides an implementation of operator++ which operates on the strong typedef type Derived, but is only visible as a hidden friend, so if you do x++, and x is not a strong typedef that specifies it is post_incrementable then this operator is not considered, and you don't get accidental conversions.

This makes the strong typedef system easily extensible: you can add new property types that define mixin templates to provide both member functions and free functions that operate on the typedef, without making these functions generally visible at namespace scope.
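
For example, a new property type could follow the same pattern as post_incrementable above; this is only a sketch (the name negatable is mine, not part of the library):

#include <utility> // std::declval

struct negatable {
    template <typename Derived, typename ValueType> struct mixin {
        friend Derived operator-(Derived self) noexcept(
            noexcept(-std::declval<ValueType &>())) {
            // hidden friend: only found when the operand is the strong typedef type
            return Derived{-self.underlying_value()};
        }
    };
};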

Hidden Friends and Enumerations

I had forgotten that enumerated types declared inside a class also triggered searching that class for hidden friends until I was trying to solve a problem for a client recently. We had some enumerated types that were being used for a particular purpose, which we therefore wanted to enable operations on that wouldn't be enabled for "normal" enumerated types.

One option was to specialize a global template as I described in my article on Using Enum Classes as Bitfields, but this makes it inconvenient to deal with enumerated types that are members of a class (especially if they are private members), and impossible to deal with enumerated types that are declared at local scope. We also wanted to be able to declare these enums with a macro, which would mean we couldn't use the specialization as you can only declare specializations in the namespace in which the original template is declared, and the macro wouldn't know how to switch namespaces, and wouldn't be usable at class scope.

This is where hidden friends came to the rescue. You can define a class anywhere you can define an enumerated type, and hidden friends declared in the enclosing class of an enumerated type are considered when calling functions that take the enumerated type as a parameter. We could therefore declare our enumerated types with a wrapper class, like so:

struct my_enum_wrapper{
  enum class my_enum{
    // enumerations
  };
};
using my_enum=my_enum_wrapper::my_enum;

The using declaration means that other code can just use my_enum directly without having to know or care about my_enum_wrapper.

Now we can add our special functions, starting with a function to verify this is one of our special enums:

namespace xyz{
  constexpr bool is_special_enum(void*) noexcept{
    return false;
  }
  template<typename T>
  constexpr bool is_special_enum() noexcept{
    return is_special_enum((T*)nullptr);
  }
}

Now we can say xyz::is_special_enum<T>() to check if something is one of our special enumerated types. By default this will call the void* overload, and thus return false. However, the internal call passes a pointer-to-T as the argument, which invokes ADL, and searches hidden friends. We can therefore add a friend declaration to our wrapper class which will be found by ADL:

struct my_enum_wrapper{
  enum class my_enum{
    // enumerations
  };
  friend constexpr bool is_special_enum(my_enum*) noexcept
  {
    return true;
  }
};
using my_enum=my_enum_wrapper::my_enum;

Now, xyz::is_special_enum<my_enum>() will return true. Since this is a constexpr function, it can be used in a constant expression, so can be used with std::enable_if to permit operations only for our special enumerated types, or as a template parameter to specialize a template just for our enumerated types. Of course, some additional operations can also be added as hidden friends in the wrapper class.
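
As a sketch of that last point (the choice of operator and the enumerator values are mine, not from the original), the wrapper could also provide a bitwise OR as a hidden friend, so it applies only to this enumeration:

#include <type_traits> // std::underlying_type_t

struct my_enum_wrapper{
  enum class my_enum{ a=1, b=2, c=4 };
  friend constexpr bool is_special_enum(my_enum*) noexcept{ return true; }
  // hidden friend: only considered when one of the operands is my_enum
  friend constexpr my_enum operator|(my_enum lhs,my_enum rhs) noexcept{
    using U=std::underlying_type_t<my_enum>;
    return my_enum(U(lhs)|U(rhs));
  }
};
using my_enum=my_enum_wrapper::my_enum;

constexpr my_enum ab=my_enum::a|my_enum::b; // OK; operator| has no effect on other enum types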

Our wrapper macro now looks like this:

#define DECLARE_SPECIAL_ENUM(enum_name,underlying_type,...)\
struct enum_name##_wrapper{\
  enum class enum_name: underlying_type{\
    __VA_ARGS__\
  };\
  friend constexpr bool is_special_enum(enum_name*) noexcept\
  {\
    return true;\
  }\
};\
using enum_name=enum_name##_wrapper::enum_name;

so you can declare a special enum as DECLARE_SPECIAL_ENUM(my_enum,int,a,b,c=42,d). This works at namespace scope, as a class member, and at local scope, all due to the hidden friend.

Summary

Hidden Friends are a great way to add operations to a specific type without permitting accidental implicit conversions, or slowing down the compiler by introducing overloads that it has to consider in other contexts. They also allow declaring operations on types in contexts where you otherwise wouldn't be able to do so. Every C++ programmer should know how to use them, so they can be used where appropriate.
