More Productive C++ with TDD

Phil Nash from level of indirection

The title might read a little like click-bait, and there are certainly some nuances and qualifications here. But, hey! That's what the article is for.

Those that know me know that I have been a practitioner of, and advocate for, TDD in C++ for many years. I originally introduced Catch in an attempt to make that easier. I've given many talks that touch on the subject, as well as giving coaching and consultancy. For the last year I've added to that with a workshop class that I have given at several conferences - in both one-day and two-day forms (at CppCon you can do either!).

But why am I all in on TDD? Is it really that good?

What has TDD ever done for me?

Most of the time, especially for new code (including new parts being added to legacy code), the benefits of using TDD include (but are not limited to):

  1. A decent set of tests with high coverage (100% if you're following a strict approach).
  2. Well factored, "clean code".
  3. (an aspect of 2) code that is easy to change.
  4. A more thoughtful design that is easier to work with.

But attaining these benefits is not automatic. Applying the discipline of TDD steers you towards the "pit of success" - but you still have to jump in! Fully achieving all the benefits relies on a certain amount of experience - with TDD itself - but also with a range of other design principles. TDD is like a spirit guide, nudging you in the right direction. It's down to you to listen to what TDD is telling you and take the right actions.

This is the first hurdle that people trying TDD fall down at. The core steps to TDD are simple and can be taught in about 10-20 minutes. Getting all the benefits from them takes experience. If you start trying to apply TDD to your production code straight away you will almost certainly just see overhead and constraints - with little-to-no benefit!
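
For a flavour of what those core steps look like in practice, here's a minimal, hypothetical sketch using Catch (the isLeapYear function and the test are invented for illustration): write a failing test, make it pass with the simplest code, then refactor with the test as a safety net.

#define CATCH_CONFIG_MAIN   // let Catch provide main()
#include "catch.hpp"

// In real TDD this would live in production code and would start out not existing
// (the "red" step); it's inlined here only to keep the sketch self-contained.
bool isLeapYear( int year ) {
    return year % 4 == 0 && ( year % 100 != 0 || year % 400 == 0 );
}

// Red: write the test and watch it fail. Green: add just enough code to pass.
// Refactor: tidy up, re-running the test after each change.
TEST_CASE( "leap years are recognised" ) {
    REQUIRE( isLeapYear( 2000 ) );
    REQUIRE_FALSE( isLeapYear( 1900 ) );
    REQUIRE( isLeapYear( 2024 ) );
}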

On your own this may just be a case of doing enough katas and side-projects until you feel confident to slowly start using it in more critical areas. If you're able to take a class (whether mine or someone else's), you can be helped past this initial stage - and also shown the cloud of additional things around TDD that will let you get the best out of it.

Types, Tests and EOP

First of all, I don't consider TDD to be the complete picture. There are some cases where it's just not practical (although these are often not the ones people think - it takes experience to make that call). Where it is practical (the majority of cases, in my experience) it can form the backbone for your code-design approach - but should be complemented by other techniques.

If you think of these things as a toolbox to select from, then combining the right tools is the only way to be fully effective. Proficiency - if not mastery - with those tools is also necessary.

The tools that I find, again-and-again, work well with TDD are: using the type system to reduce - or eliminate - potential errors, and what I call Expression-Oriented Programming, which is really a distillation of Functional Programming.

These are two big topics in their own right, and I plan to follow up with deeper dives on each. In the meantime you'll get a better idea of my views on FP from my talk, "Functional Programming for Fun & Profit". I've yet to do a talk specifically on my views on how the type system can help, but there are elements in my recent series on Error Handling.

The bottom line, for now, is that the more you can lean on Types and FP, the less there will be left that needs testing.
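
As a small, invented illustration of the Types half of that: once an invariant lives in a type, the call sites that use it no longer need tests for "impossible" inputs.

#include <stdexcept>

// Hypothetical example: a Percentage can only be constructed in range,
// so the invariant is checked (and tested) in exactly one place.
class Percentage {
public:
    explicit Percentage( double value ) : m_value( value ) {
        if( value < 0.0 || value > 100.0 )
            throw std::out_of_range( "Percentage must be in [0, 100]" );
    }
    auto value() const -> double { return m_value; }
private:
    double m_value;
};

// Callers can rely on the invariant - no tests needed for out-of-range discounts.
auto applyDiscount( double price, Percentage discount ) -> double {
    return price * ( 1.0 - discount.value() / 100.0 );
}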

TDD or not TDD

Alas, poor Yorick...

That's the question.

I've already hinted that it may not always be the best approach (but usually is). But even when it is - it is not sufficient. That is, the tests you are left with through pure TDD are only half the story. They helped you arrive at the design, gave you a user's perspective on your code, and the confidence to make changes. But they won't find the edges of your design that you didn't explicitly consider. TDD might help you think of some pathological cases you might have otherwise left out (and leaning on the type system may eliminate many more) - but you're going to have to consider other forms of testing to find the things you didn't design for.

Then there's the thorny issue of how to get legacy code under test - or should you even try? There are more tools in the toolbox to help with all of these things. Even though they are not part of TDD itself, to practice TDD well you'll need to be able to weave it into this bigger picture.

Short-cutting the Gordian Knot

So while learning TDD, technically, is easy - mastering it is a far richer proposition. Returning to our opening question - is it worth it? My experience has been a resounding "yes"! All the pieces that come into play (and we've not mentioned them all, here) will give you more tools, greater experience and a better understanding of the bigger picture - even if you choose not to use TDD in a particular case.

But how can we get up to speed? You could do what I did and pick up bits here and there, reconcile it with your own years of experience, observe the results, adjust course and try again - eventually settling on a pattern that works.

Or you could join me on one of my workshops as I try to give you the benefit of that experience in a distilled form. You won't walk out a seasoned expert, but I think I can help you well down the road.

My next outings are:

Or if you'd like me to come into your company to deliver on-site training you can reach me on atdd@philnash.me.

East End Functions

Phil Nash from level of indirection

There has been a recent stirring of attention, in the C++ community, for the practice of always placing the const modifier to the right of the thing it modifies. The practice has even been gifted a catchy name: East Const (which, I think, is what has stirred up the interest).

As purely a matter of style it's fascinating that it seems to have split the community so strongly! There are cases for and against, but both sides seem to revolve around the idea of "consistency". For the East Const believers the consistency is in the sense that you can always apply one, simple, rule about what const means and where it goes. For the West Consters the consistency is with the majority of existing code out there - as well as the Core Guidelines recommendation!

Personally I've been an East Const advocate for many years (although not by that name, of course) - and converted the entire Catch codebase over to East Const quite early on.

But there's another style choice that I've not seen discussed quite as much, but has a number of parallels.

As with East vs West Const this is purely a matter of style (it doesn't change what the compiler generates), and one of the arguments in favour is consistency of application (there are some cases where you must do it this way) - but the main argument against is also consistency - with most existing code. Sound familiar? But what is it?

The issue is about where to specify return types on function signatures. For most of C++'s history the only choice has been to write the type before the name of the function (along with any modifiers). But since C++11 we've been able to write the type at the end of the function signature (after a -> - and the function must be prefixed with the keyword auto).

auto someFunc( int i ) -> std::string;
// instead of
std::string someFunc( int i );

So why would you prefer this style? Well, first there's that consistency argument. It's the only way to specify return types for lambdas. You're also required to use trailing return types if the type is a decltype that is dependent on the name of one of the function's arguments. Indeed, that's the motivating case for adding the syntax in the first place, e.g.:

template <typename Lhs, typename Rhs>
auto add( Lhs const& lhs, Rhs const& rhs ) -> decltype( lhs + rhs ) {
    return lhs + rhs;
}
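
For comparison, the lambda case - a trivial, made-up example where the explicit return type also reconciles what would otherwise be conflicting deductions:

#include <string>

// A lambda's return type can only be written in trailing position - and here it
// resolves the two returns, which would otherwise deduce different types.
auto describe = []( int i ) -> std::string {
    if( i == 0 )
        return "zero";              // const char*, converted to std::string
    return std::to_string( i );     // std::string
};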

A Foolish Consistency?

Given those cases where it is required, using the same syntax in all other cases would seem to be more consistent.

I'm not sure the consistency argument is as strong here as it is with East Const - there was never much confusion over what the return applied to, after all. But I think it's worth keeping in mind.

The next argument in favour is consistency with other languages. Many languages, especially functional programming languages, exclusively use the trailing syntax for return types. Quite a few, e.g. Swift, use the same -> syntax.

It's not a strong reason on its own, but combined with the internal consistency argument I think there's something there.

However, for me at least, the most compelling rationale is readability. Why do I think it's more readable? There are actually two parts to this:

  1. Function names tend to line up down the left-hand side, since every declaration starts with auto. Certain qualifiers might spoil this effect, although one approach might be to group similarly qualified functions (e.g. all virtuals) together. This makes glancing through the list of function names much easier.

  2. The name of the function is usually the most important thing when you're browsing the code. If you're more interested in the return type it's usually because you already know which function you're interested in. So making the name the first thing you read (after the auto introducer) seems fitting.

auto doesItBlend() -> bool;
auto whatsYourFavouriteNumber() -> int;
auto add( double a, double b ) -> double;
void setTheControls();

(note that many who prefer this form, including myself, tend to still put void first)

For me the arguments for are compelling. The arguments against really boil down to the same argument against East Const - inconsistency with older code. As Jon Kalb deliberated on in A Foolish Consistency, this sort of thinking can hold us back.

I've been favouring this style for more than a couple of years now. In fact I tracked down a post to the ACCU mailing list (linked here, but I believe you have to be a subscriber to read it) where I talked about it - and made all the same points I'm making here. My opinion since then has not changed much. Other than feeling more confident that it's The Right Thing.

So I think it's time we gave it a catchy name. Unlike East Const, it already has a name: "trailing return types". It's not especially galvanising, though. Given the parallels to East vs West Const - and the fact that it, also, relates to the thing in question being placed to the left or the right - I propose East End Functions (vs West End Functions).

What about the redundant auto keyword?

Think of auto, here, as the "function introducer". In other languages it might be spelt fun or func. If it makes you feel better you could always:

#define func auto

... actually don't. The point is, in languages that introduce a function with func, then have a trailing return type, nobody gives it a second thought. auto is the same number of characters as func. It's a shame it's not quite as expressive - but that's the price of legacy. It shouldn't mean we "can't have nice things".

The World’s First Distributed C++ Meet-up (*)

Phil Nash from level of indirection

(* Probably)

Last week my London based C++ user group, C++ London, joined forces with SwedenCpp, based in Stockholm, for a distributed event where we shared video streams with each other. The whole thing was hosted by King, who took care of the audio-visual link up, so we just had to organise the speakers.

We did this as a series of lightning talks (5 or 10 minutes each) - 5 in Stockholm and 7 in London.

This was an experiment. The idea was that, if successful, we could do more like this. I know for a fact that other user groups have been waiting to hear how it went as they think about doing something similar too.

So how did it go? Well, you can see for yourself. The video is here (if you view it on YouTube you'll also find a Table Of Contents to jump to individual talks):

Overall I think it went very well. There were certainly some rough edges, and opportunities to do better next time - but the basic concept worked well, I think. Feedback from attendees in both locations also bore that out. Everyone is keen to do this again!

Survey results

Paul Dreik, in Stockholm, sent out a survey to all attendees after the meeting and collected some stats and comments. Regarding the overall event, over 80% said "Great, let's do it again!". Of the rest, all but one said it was at least as good as a normal meet-up. Only one person thought it was a step down (you can't please everyone).

Interestingly, while lightning talks, as a format, was popular (over 50% thought at least most of the time should be spent on them), a sizeable 40% thought more time should be spent on longer talks. So maybe some combination could work well?

Beyond that there were a couple of people who said that it was too long without a break, and some people wanted time for questions as we go (which is hard to make work for lightning talks).

Room for improvement?

So, some of the things that could be improved (and notes for other groups looking to try):

We could have planned more interaction between the locations. We were very much focused on our own schedules and, other than the handover in the middle, there was not much beyond two independent sites that happened to be watching each other. There was an "open questions" section at the end. Jean Guegant, in Sweden, had mentioned that we'd do this in the intro but I'd missed it - and I think others had too, so we weren't really prepared for it. Between that and everyone forgetting any questions they'd had during the talks, we didn't get many questions. I think we can do better here - perhaps collecting questions during the talks via a web app, or maybe a Twitter hashtag?

Are there other points during the course of the event that we could interact more between the sites? It can be a tough balance, but I think there is scope to experiment here.

And finally, while we're very grateful to King, and their AV staff, for providing, and setting up, the live stream, as well as recording, I think we made too many assumptions about how that was going to work. In particular we were left with an audio recording that was not as good as we had hoped for. In the London audio, especially, the hand mic and ambient mics were mixed together before recording - so we couldn't separate them out - which would have been very useful in the editing.

So. Would we do it again? Yes, definitely! I think this is a great way to expand the reach of our speakers, and give our members more variety of speakers and topics to listen to. In that respect this was a great success, and I'm excited to see what happens next.

Catch2 released

Phil Nash from level of indirection

I've been talking about Catch2 for a while - but now it's finally here! The big news for Catch2 is that it drops all support for pre-C++11 compilers. Other than meaning that some users will not be supported (you can still use Catch "Classic" (1.x) - which will get some bug fix updates for a while, at least) that's mostly an internal change - however it enables a number of user-facing changes, both now and in the future. Let's take a look at what they are.

New and shiny

New, composable, command line processor

Clara is the command line parser in Catch. In Catch 1.0 I spun it out into its own library (but it's still embedded in the Catch single header). Like Catch 1.0 itself, Clara was constrained to C++98 compatibility. For Catch2 I've rewritten Clara from the ground up, not only to fully embrace C++11, but also to be composable. What that means here is that each individual command line option or argument can be represented using its own, self-contained, parser. A composite parser of all the options is then assembled, or composed, from those smaller parsers.

The main advantage of this approach is that the set of available options is now trivially extendible outside of Catch, so users can easily specify command line options that can tune their test code.
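
As a sketch of what that enables when you supply your own main (the --height option and variable are invented; this follows the shape of Catch2's own-main documentation, so treat the exact names as illustrative):

#define CATCH_CONFIG_RUNNER   // we supply main() ourselves
#include "catch.hpp"

int main( int argc, char* argv[] ) {
    Catch::Session session;

    int height = 0;   // a user variable we want settable from the command line

    // Compose an extra option onto Catch's own command line parser
    using namespace Catch::clara;
    auto cli = session.cli()
             | Opt( height, "height" )["--height"]( "height used by the tests" );
    session.cli( cli );

    int returnCode = session.applyCommandLine( argc, argv );
    if( returnCode != 0 )   // command line error
        return returnCode;

    return session.run();
}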

See my lightning talk at CppCon this year for a bit more

Commas in assertions

As Catch assertions are implemented using macros, they have always been susceptible to the old problem of how macros interpret commas within macro arguments. Commas may occur in contexts that macros don't know about, such as within angle brackets (e.g. for template instantiations) - and so get interpreted as argument separators for the macro itself.

Now that we can rely on C++11, which includes variadic macros, we can make the assertions variadic, and just reassemble all the arguments again inside. That means we can now write code like the following:

  REQUIRE( getPair() == std::pair<bool, std::string>( true, "banana" ) );

Microbenchmarking (experimental)

Catch2 gains initial support for micro-benchmarking. This is where small pieces of code are timed, usually in a loop so they are repeated enough times to be significant compared to the system clock accuracy. Some extra adjustments need to be made to allow for other sources of jitter and slowdown on the host machine - and, even then, multiple samples should be taken so they can be subject to statistical analysis.

There are many shortcomings with micro-benchmarks - not least that the performance of a piece of code in isolation can often be drastically different to how well it performs in conjunction with other code. This is not only due to the way the compiler may inline or otherwise optimise code together, but even on the CPU instructions can be reordered, pipelined or run in parallel - and with cache levels and branch prediction, the relationship between these things becomes hugely unpredictable.

Nonetheless they can still be useful - and it can be convenient to use the same test framework that you use to write functional tests - not least because there is much shared infrastructure.

Catch's benchmarking support is incomplete at the time of writing, lacking the multi-sampling, statistical analysis and richer reporting that fully-fledged frameworks offer. The intention is to grow this, but only if it can be done without any significant impact on non-benchmarking tests. In lieu of full documentation, see the example tests for now.
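
As a rough illustration of the current shape (the macro name and usage here are my reading of the example tests - being experimental, this may well change):

#define CATCH_CONFIG_MAIN
#include "catch.hpp"
#include <vector>

// The [!benchmark] tag keeps this out of a normal test run; the BENCHMARK
// block is repeated by the framework until the timing is meaningful.
TEST_CASE( "vector growth", "[!benchmark]" ) {
    std::vector<int> v;
    BENCHMARK( "push_back 1000 ints" ) {
        v.clear();
        for( int i = 0; i < 1000; ++i )
            v.push_back( i );
    }
}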

Performance

Both runtime and compile time performance are becoming increasingly important for Catch, and a lot of work is going into improving both. Runtime performance was a non-goal initially, so there has been plenty of low-hanging fruit. As a result we're already seeing some significant improvements, and there is more to come.

Compile time is harder. This has always been important, but as Catch has grown over the years, it has begun to suffer. Improving it significantly means making some trade-offs. So far some features that drag compile times have been made configurable - e.g. whether breaking into the debugger on a failed assertion happens in the code that caused it (meaning the debugger code gets compiled into every assertion macro) or one level up the stack (so can be "hidden" in a function). Other areas to look at are whether to use (non-standard, potentially brittle) forward declarations of some standard library types. Again, this is an ongoing area of active development - but much is already in Catch2 at launch.

See some of the toggle macros for more details
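
For example, one of those toggles as I understand it (it needs to be set consistently across every translation unit that includes Catch, e.g. as a project-wide define):

// e.g. compile with -DCATCH_CONFIG_FAST_COMPILE, or define it before the include
// in every test file, so all translation units agree on the configuration:
#define CATCH_CONFIG_FAST_COMPILE
#define CATCH_CONFIG_MAIN
#include "catch.hpp"

TEST_CASE( "assertions still work as normal" ) {
    REQUIRE( 2 + 2 == 4 );
}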

A new name

Believe it or not "Catch2" is now the name - it's not just a version reference! In fact the current intention is that, even when we move to v3.x it will still be called Catch2. E.g. Catch2 v3.0.

Why? Well there have been calls for a name change - for searchability reasons. Catch is obviously a common keyword in C++, and also in unit testing. So getting search terms sufficiently narrow has been tricky. But I didn't want to use an entirely different name (although I did toy with the idea of "Catfish" for a while) - because that would lose too much of the momentum behind Catch overall. A derivative name doesn't fully solve the problem, because people will still refer to it as Catch, casually - but at least it gives a slight advantage. So that's what I've gone with.

There are also a few interesting numerological aspects to it. It stops short of being Catch22 - but if you consider the C++11 requirement you could multiply them to get 22. And you can add the digits in C++11 to get 2.

Upcoming features

It's always dangerous to talk about what's planned - and I've fallen into this trap with Catch before. So there are some feature promises that have been outstanding for a long time now. In fact most of those have been deferred to Catch2 for quite some time - either because C++11 has features that make them much easier (or possible at all) to implement (e.g. threading) - or just because they involved a lot of code that gets less noisy in C++11. So we'll talk about those again here.

Threading

This was unfeasible in Catch 1.x due to not having C++11 threading primitives, nor being able to use external dependencies like Boost for threading. Providing a basic level of support should now be fairly straightforward, as a lot of groundwork has been laid (e.g. how singletons are organised).

The idea is that if, within a test case, you use additional threads, you should be able to make assertions from those threads - as long as the test case is still in scope at the time. The aim is for this to be done without locks in the assertions. Running multiple test cases in parallel is not immediately planned (and may be best implemented at the process level, anyway).

Generators/ Property Based Testing

Generators give you what other frameworks might call (Data-)Parameterised Tests - i.e. being able to use the same test code with different inputs.

An experimental version of generators was included in Catch from very early on. Other than not being complete and having some limitations, it also had a serious issue in that it didn't work at all with Sections! This is because both features relied on the ability to re-enter test cases - but they were independent of each other.

I rewrote the test-case tracking code a couple of years ago now to be able to support this properly - and had a proof-of-concept new implementation of Generators working with it - enough to give a demo at a talk I gave. However the implementation was getting noisy with C++98 syntax, so I deferred work on it for Catch2. Now that Catch2 is released I'll be looking at this again.

Closely related to Generators - in fact it builds on them - is the idea of Property Based Testing. The proof-of-concept I mentioned actually had an initial version of this, too. There's more work involved here in getting it right, but having Generators is a first step.

Breaking Changes

As a major version change we've taken advantage of the permission that Semantic Versioning gives us to introduce a few breaking changes. These should have little, if any, impact on most users - but it's worth checking these before making the move to be sure you're ready.

toString() has been removed

This is probably the biggest change, and the most likely to affect people. For a long time there have been three ways to tell Catch how to convert values into strings for reporting purposes. In order, the pipeline was like this:
  1. toString() overload
  2. StringMaker<> specialisation
  3. ostream& operator << overload
  4. give up and use {?}

If your types already have << overloads for ostream then you're good. If not then, in theory, overloading toString() was the simplest option.

However toString() had a number of limitations - mostly due to the point of template instantiation. Compiler differences with two-phase lookup, and other factors which are implementation defined, mean that toString() overloads were unreliable and caused a lot of confusion - hardly the simplest option after all!

Specialising StringMaker<> is slightly more work, but is more reliable, stable, and flexible. So this is now the recommended way to provide string conversion functionality for your types. In Catch2, toString() has been completely removed!

If you have code that calls toString() there is a new function that plays that role: Catch::detail::stringify(). However, note that (a) this should never be overloaded - it just wraps the call into the pipeline that starts with StringMaker<> and (b) the detail part of the namespace should be a clue that this is really an internal part of Catch and is subject to change.

To specialise StringMaker<> see the documentation.
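
In outline, a specialisation looks like this (Point is a made-up user type):

#define CATCH_CONFIG_MAIN
#include "catch.hpp"
#include <string>

struct Point { int x, y; };   // hypothetical user type

namespace Catch {
    template<>
    struct StringMaker<Point> {
        static std::string convert( Point const& p ) {
            return "Point{ " + std::to_string( p.x ) + ", " + std::to_string( p.y ) + " }";
        }
    };
}

TEST_CASE( "points are reported readably" ) {
    Point p{ 1, 2 };
    CHECK( p.x == 1 );   // any failure involving a Point now prints via StringMaker
}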

Other removals and changes

As well as C++98 support and toString(), a number of deprecated features and interfaces have been removed, as well as a few other tweaks and changes that may impact some code-bases. See the "Breaking Changes" section in the release notes for the full list. In fact the release notes in general give a good overview of all the many small changes and improvements that have gone into Catch2 that have not been mentioned here.

A new home

The Catch(2) repository has moved! You may not have noticed, as it has been transferred within GitHub, and that means GitHub maintains redirects for all the old links. However they do recommend updating your own URLs, in bookmarks, direct download links and, of course, git remotes. We've made this move for two reasons:

1. As Catch has grown it has become more of a community effort. It already has an additional lead maintainer in Martin Hořeňovský, and others may be added. But as the sole owner of my own personal GitHub account there are some things that only I could do (webhooks and other integrations, for example). So as not to be a bottleneck we've created a GitHub "Organization" account, CatchOrg, which allows multiple admin users. That's where we've moved it to.

2. For Catch2 to get any traction as a new name it was important for it to be reflected in the repo name, so we've taken advantage of the move to change the repo name, too. Catch "Classic" (1.x) has also moved here, but is now on a branch. If you cannot move to Catch2 for C++98 compatibility reasons you can stay on Catch Classic on this branch. It will continue to receive critical fixes, at least for now, but is no longer the active development branch. Please try to move to Catch2 as soon as possible.

If you notice anything broken as a result of this move, please let us know so we can fix it.

Thanks!

As always, a huge thanks to all who have supported and contributed to Catch and Catch2 - especially for your patience when I wasn't getting to issues and PRs as quickly as was needed!

An extra thanks to Martin, who has been doing the majority of the work on Catch this year!

Catch Up

Phil Nash from level of indirection

It's been just over six years since I first announced Catch to the world as a brand new C++ test framework!

In that time it has matured to the point that it can take on the heavyweights - while still staying true to its original goals of being lightweight, easy to get started with and low-friction to work with.

In the last couple of years or so it has also increased dramatically in popularity! That sounds like a good thing - and it is - but with that comes a greater diversity of environments and usage, and more people raising issues and submitting pull requests.

Again, it's great to have so much input from the community - especially in the form of pull requests - where other developers have gone to some effort to implement a change, or a fix, and present it back for inclusion in the main project. So it's been heart-breaking for me that, between this increase in volume and finding my meagre free-time stretched even further, so many issues and PRs have been left unacknowledged - many not even seen by me in the first place.

But two things have happened, recently, that completely change this state of affairs. We're moving firmly in the right direction again.

Firstly, as mentioned in On Joining JetBrains, I've recently changed jobs to one that should give me much more time and opportunity to work on Catch - as well as the opportunity to do so in my home office - with stable internet (as opposed to on the train while commuting to and from work). The first few months were a bit of a wash for the reasons discussed in that post, but, as I also suggested there, this year has seen that change and I've been able to put in quite a lot of work on Catch already.

But that's not really enough. There's a huge back-log - and I'm still only doing this part time - and I want to spend time working on Catch2 as well (more on that soon). I don't want to end up back in the situation where everything is backing up and there's no hope of recovery.

I've been hoping to find someone else to be a key maintainer of Catch for a couple of years now. I've not been very active in this search - for all the same reasons - but it's been on my mind.

But, just last month, after I appeared on CppCast talking about JetBrains and Catch, there was a thread on Reddit about it - with many expressing concern over the Catch situation. I brought the subject up on there again and got the attention of one of the commenters.

I didn't know it at the time, but Martin Hořeňovský has been responsible for a good number of those PRs and issues that had been left unaddressed - as well as an active community member in helping address other people's issues. So it's with great pleasure (and relief!) that I can announce that Martin now has full commit rights to Catch on GitHub and has been prolific in working through the currently outstanding tickets.

Martin seems to really "get" Catch, and the design goals around it - so working with him on this over the last couple of weeks has been very rewarding. From some queries I just ran on GitHub it looks like 39 issues have been closed and 38 PRs merged or closed in that time! That's compared to 9 new issues and 7 PRs - about half of which were created by Martin and me in the process. And that's not to mention all the labels we've been using to categorise the other tickets - with many marked as "Resolved - pending review" - which usually means we think it's resolved but we're just waiting for feedback (or a chance for more testing).

With 219 open issues and 41 PRs still outstanding, at time of writing, there's a lot more work to do yet - but I hope this reassures you that we're going in the right direction - and fast!

And we're not stopping with Martin. We have at least one other volunteer that I'll be bringing up to speed soon.

Catch2

I've referred to Catch2 a number of times now, and talked a little about what it will be. The biggest reason for making it a major release, according to Semantic Versioning, is that it will drop support for pre-C++11. For that reason Catch Classic (1.x) will continue to receive at least bug fix updates - but no more new features once Catch2 is fully released. A few major features in the pipeline have been explicitly deferred to Catch2: concurrency support and generators/ property-based testing in particular.

Moving to C++11 provides a very large scope for cleaning up the code-base - which has a significant volume of code dedicated to platform-specific workarounds for compiler shortcomings, missing library features such as smart pointers, and boilerplate that will no longer be necessary with things like range-based-for, auto and others. Lambdas will be useful too, but are not quite so important.

Because taking advantage of C++11 has the potential to touch almost every line of code, I'm taking the opportunity to rewrite the core of Catch - primarily the assertion macros and the infrastructure to support that. This is code that is #included in every test file, and expanded (in the case of macros) in every test case or even every assertion. Keeping this code lightweight is essential to avoiding a compile time hit. There's a number of ways this foot-print can be reduced and the rewrite will strive for this as much as possible.

The rest of the code, concerned with maintaining the registry of tests, parsing and interpreting the command line, running tests and reporting results, will be updated more incrementally.

I already have a (not-yet-public) proof-of-concept version of the re-written code. It's not yet complete but, so far, has only one standard library dependency and minimal templates. The compile-time overhead is imperceptible.

In addition to compile-time, runtime performance is also a goal of Catch2. It's not an overriding goal - I won't be obfuscating the code in the name of wringing out the last few milliseconds of performance - but this is a definite change from Catch Classic where runtime performance was a non-goal. This is in recognition of the fact that Catch is used for more than just isolated unit tests - and will also become more important with property based testing.

I don't have a timeline, yet, for when I expect Catch2 to be ready - and in the immediate term getting Catch Classic back under control is the priority. Despite the partial re-write, and the major version increment, I expect tests written against Catch Classic to mostly "just work" with Catch2 - or require very minimal changes in some rare cases.

You

As already mentioned many developers have also spent time and effort contributing issues, fixes and even feature PRs over the years. So Catch has really been a community project for years now and I'm very grateful for all the help and support. I think Catch has shown that having a low-friction approach to testing C++ code is very important to a lot of people and I'm hoping we'll continue to build on that. Thank you all.

Catch Up

Phil Nash from level of indirection

Trolley

It's been just over six years since I first announced Catch to the world as a brand new C++ test framework!

In that time it has matured to the point that it can take on the heavyweights - while still staying true to its original goals of being lightweight, easy to get started with and low-friction to work with.

In the last couple of years or so it has also increased dramatically in popularity! That sounds like a good thing - and it is - but with that comes a greater diversity of environments and usage, and more people raising issues and submitting pull requests.

Again, it's great to have so much input from the community - especially in the form of pull requests - where other developers have gone to some effort to implement a change, or a fix, and present it back for inclusion in the main project. So it's been heart-breaking for me that, between this increase in volume and finding my meagre free-time stretched even further, so many issues and PRs have been left unacknowledged - many not even seen by me in the first place.

But two things have happened, recently, that completely change this state of affairs. We're moving firmly in the right direction again.

Firstly, as mentioned in On Joining JetBrains, I've recently changed jobs to one that should give me much more time and opportunity to work on Catch - as well as the opportunity to do so in my home office - with stable internet (as opposed to on the train while commuting to and from work). The first few months were a bit of a wash for the reasons discussed in that post, but, as I also suggested there, this year has seen that change and I've been able to put in quite a lot of work on Catch already.

But that's not really enough. There's a huge back-log - and I'm still only doing this part time - and I want to spend time working on Catch2 as well (more on that soon). I don't want to end up back in the situation where everything is backing up and there's no hope of recovery.

I've been hoping to find someone else to be a key maintainer of Catch for a couple of years now. I've not been very active in this search - for all the same reasons - but it's been on my mind.

But, just last month, after I appeared on CppCast talking about JetBrains and Catch, there was a thread on Reddit about it - with many expressing concern over the Catch situation. I brought the subject up there again and got the attention of one of the commenters.

I didn't know it at the time, but Martin Hořeňovský has been responsible for a good number of those PRs and issues that had been left unaddressed - as well as being an active community member in helping address other people's issues. So it's with great pleasure (and relief!) that I can announce that Martin now has full commit rights to Catch on GitHub and has been prolific in working through the currently outstanding tickets.

Martin seems to really "get" Catch, and the design goals around it - so working with him on this over the last couple of weeks has been very rewarding. From some queries I just ran on GitHub it looks like 39 issues have been closed and 38 PRs merged or closed in that time! That's compared to 9 new issues and 7 PRs - about half of which were created by Martin and me in the process. And that's not to mention all the labels we've been using to categorise the other tickets - with many marked as "Resolved - pending review" - which usually means we think it's resolved but we're just waiting for feedback (or a chance for more testing).

With 219 open issues and 41 PRs still outstanding, at the time of writing, there's a lot more work to do yet - but I hope this reassures you that we're going in the right direction - and fast!

And we're not stopping with Martin. We have at least one other volunteer that I'll be bringing up to speed soon.

Catch2

I've referred to Catch2 a number of times now, and talked a little about what it will be. The biggest reason for making it a major release, according to Semantic Versioning, is that it will drop support for pre-C++11 compilers. For that reason Catch Classic (1.x) will continue to receive at least bug-fix updates - but no more new features once Catch2 is fully released. A few major features in the pipeline have been explicitly deferred to Catch2: concurrency support and generators/property-based testing in particular.

Moving to C++11 provides a very large scope for cleaning up the code-base - which has a significant volume of code dedicated to platform-specific workarounds for compiler shortcomings, to substitutes for missing library features such as smart pointers, and to boilerplate that will no longer be necessary with things like range-based for, auto and others. Lambdas will be useful too, but are not quite so important.
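
To make that concrete, here's a generic before-and-after sketch of the sort of boilerplate that goes away. The TestCase type and functions here are my own invented examples, not code from Catch:

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    struct TestCase { std::string name; };  // hypothetical stand-in type

    // Pre-C++11 style: spelled-out iterator types and manual memory management
    void listTestsOld( std::vector<TestCase> const& tests ) {
        for( std::vector<TestCase>::const_iterator it = tests.begin();
             it != tests.end(); ++it )
            std::cout << it->name << "\n";
    }

    // C++11 style: auto and range-based for say the same thing with far less noise
    void listTestsNew( std::vector<TestCase> const& tests ) {
        for( auto const& test : tests )
            std::cout << test.name << "\n";
    }

    int main() {
        std::vector<TestCase> tests{ { "first" }, { "second" } };
        listTestsOld( tests );
        listTestsNew( tests );

        // std::unique_ptr replaces hand-rolled smart pointers (and naked new/delete)
        std::unique_ptr<TestCase> owned( new TestCase{ "owned" } );
        std::cout << owned->name << "\n";
    }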

Because taking advantage of C++11 has the potential to touch almost every line of code, I'm taking the opportunity to rewrite the core of Catch - primarily the assertion macros and the infrastructure to support them. This is code that is #included in every test file, and expanded (in the case of macros) in every test case or even every assertion. Keeping this code lightweight is essential to avoiding a compile-time hit. There are a number of ways this footprint can be reduced and the rewrite will strive for this as much as possible.
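
To give a flavour of what that infrastructure involves, here's a toy expression-decomposing assertion macro. This is a heavily simplified sketch of the general technique only - not Catch's actual implementation:

    #include <iostream>

    // Wraps the left-hand side of the asserted expression
    template <typename L>
    struct ExprLhs {
        L const& lhs;
        template <typename R>
        bool operator==( R const& rhs ) const { return lhs == rhs; }
    };

    // operator<= binds more tightly than == but less tightly than arithmetic,
    // so in `Decomposer() <= a + 1 == b` the left-hand side (`a + 1`) is
    // captured first and the comparison then goes through ExprLhs::operator==
    struct Decomposer {
        template <typename L>
        ExprLhs<L> operator<=( L const& lhs ) const { return ExprLhs<L>{ lhs }; }
    };

    // expr is deliberately left unparenthesised so it can be decomposed
    #define CHECK( expr ) \
        do { \
            if( !( Decomposer() <= expr ) ) \
                std::cerr << "FAILED: " << #expr << "\n"; \
        } while( false )

    int main() {
        int answer = 42;
        CHECK( answer == 42 );  // passes silently
        CHECK( answer == 43 );  // prints: FAILED: answer == 43
    }

A real framework also has to handle the other comparison operators, capture and stringify the expanded values for reporting, and register results with the runner - all without dragging in heavy headers - which is exactly where the compile-time footprint comes from.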

The rest of the code, concerned with maintaining the registry of tests, parsing and interpreting the command line, running tests and reporting results, will be updated more incrementally.

I already have a (not-yet-public) proof-of-concept version of the re-written code. It's not yet complete but, so far, has only one standard library dependency and minimal templates. The compile-time overhead is imperceptible.

In addition to compile time, runtime performance is also a goal of Catch2. It's not an overriding goal - I won't be obfuscating the code in the name of wringing out the last few milliseconds of performance - but this is a definite change from Catch Classic, where runtime performance was a non-goal. This is in recognition of the fact that Catch is used for more than just isolated unit tests - and it will also become more important with property-based testing.

I don't have a timeline yet for when I expect Catch2 to be ready - and in the immediate term getting Catch Classic back under control is the priority. Despite the partial re-write, and the major version increment, I expect tests written against Catch Classic to mostly "just work" with Catch2 - or to require only minimal changes in some rare cases.

You

As already mentioned, many developers have also spent time and effort contributing issues, fixes and even feature PRs over the years. So Catch has really been a community project for years now, and I'm very grateful for all the help and support. I think Catch has shown that having a low-friction approach to testing C++ code is very important to a lot of people, and I'm hoping we'll continue to build on that. Thank you all.

C++17 – Why it’s better than you might think

Phil Nash from level of indirection

C++20 Horizon - from Mark Isaacson's Meeting C++ talk, "Exploring C++ and beyond"

I was recently interviewed for CppCast and one of the news items that came up was a trip report from a recent C++ standards meeting (Issaquah, Nov 2016). This was one of the final meetings before the C++17 standard is wrapped up, so things are looking pretty set at this point. During the discussion I made the point that, despite initially being disappointed that so many headline features were not making it in (Concepts, Modules, Coroutines and Ranges - as well as dot operator and uniform call syntax), I'm actually very happy with how C++17 is shaping up. There are some very nice refinements and features (constexpr if is looking quite big on its own) - including a few surprise ones (structured bindings being the main one for me).
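
To illustrate why I rate those two so highly, here's a small, self-contained example (everything here is just made up for illustration):

    #include <iostream>
    #include <map>
    #include <string>
    #include <type_traits>

    // constexpr if: the branch that doesn't apply is discarded at compile time,
    // so each instantiation only compiles the code it actually needs
    template <typename T>
    std::string stringify( T const& value ) {
        if constexpr( std::is_same_v<T, std::string> )
            return value;
        else
            return std::to_string( value );
    }

    int main() {
        std::map<std::string, int> ages{ { "ada", 36 } };

        // structured bindings: unpack the iterator/bool pair returned by insert()
        auto [pos, inserted] = ages.insert( { "alan", 41 } );
        std::cout << pos->first << ( inserted ? " added" : " already there" ) << "\n";

        std::cout << stringify( 42 ) << " " << stringify( std::string( "hi" ) ) << "\n";
    }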

But the part of what I said that surprised even me (because I hadn't really thought of it until a couple of hours before we recorded) was that perhaps it is for the best that we don't get those bigger features just yet! The thinking was that if you take them all together - or even just two or three of them - they have the potential to change the language, and the way we write "modern C++", perhaps even more than C++11 did - and that's really saying something! Now that's a good thing, in my opinion, but I do wonder if it would be too soon for such large-scale changes just yet.

After the 98 standard, C++ went into a thirteen-year period in the wilderness (there was C++03, which fixed a couple of problems with the 98 standard - but didn't actually add any new features, except value initialisation). As this period coincided with the rise of other mainstream languages - Java and C# in particular - it seemed that C++ was a dying language - destined for a drawn-out, Cobolesque old age at best.

But C++11 changed all that and injected a vitality and enthusiasm into the community not seen since the late 90s - if ever! Again the timing was a factor - with Moore's Law no longer delivering single-core performance gains, there was a resurgence of interest in low/zero-overhead systems languages - and C++11 was getting modern enough to be palatable again. "There's no such thing as a free lunch" turns out to be true if you wait long enough.

So the seismic changes in C++11 were overdue, welcome and much needed at that time. Since then the standardisation process has moved to the "train model", which has settled on a new standard every three years. Whatever is ready (and fits) makes it in. If it's not baked it's dropped - or is moved into a TS that can be given more real-world testing before being reconsidered. This has allowed momentum to be maintained and reassures us that we won't be stuck without an update to the standard for too long again.

On the other hand many code-bases are still catching up to C++11. There are not many breaking changes - and you can introduce newer features incrementally, and to only parts of the code-base - but this can lead to some odd-looking code, and once you start converting things you tend to want to go all in. Even if that's not true for your own code-base it may be true of libraries and frameworks you depend on! Those features we wanted in C++17 could have a similar - maybe even greater - effect, and my feeling is that, while they would certainly be welcomed by many (me included), there would also be many more who might start to see the churn in the language as a sign of instability. "What? We've only just moved on to C++11 and you want us to adopt these features too?". Sometimes it can be nice to just know where you are with a language - especially after a large set of changes. 2011 might seem like a long time ago, but there's a long lag in compiler conformance, then compiler adoption, then understanding and usage of newer features. Developers who are only just starting to experiment with C++11 language features are still very common.

I could be wrong about this, but it feels like there's something in it based on my experience. And I think the long gap between C++98 and C++11 is responsible for at least amplifying the effect. People got used to C++ being defined a single way and now we have three standards already in use, with another one almost ready. It's a lot to keep up with - even for those of us that enjoy that sort of thing!

So I'm really looking forward to those bigger features that we'll hopefully get in C++20 (and don't forget you can even use the TSes now if your compiler supports them - and the Ranges library is available on GitHub) - but I'm also looking forward to updating the language with C++17 and the community gaining a little more experience with the new, rapidly evolving model of C++ before the next big push.