C++17 – Why it’s better than you might think

Phil Nash from level of indirection

C++20 Horizon - from Mark Isaacson's Meeting C++ talk, "Exploring C++ and beyond"

I was recently interviewed for CppCast and one of the news items that came up was a trip report from a recent C++ standards meeting (Issaquah, Nov 2016). This was one of the final meetings before the C++17 standard is wrapped up, so things are looking pretty set at this point. During the discussion I made the point that, despite initially being disappointed that so many headline features were not making it in (Concepts, Modules, Coroutines and Ranges - as well as the dot operator and uniform call syntax), I'm actually very happy with how C++17 is shaping up. There are some very nice refinements and features (constexpr if is looking quite big on its own) - including a few surprise ones (structured bindings being the main one for me).
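
To make those two concrete, here is a minimal illustrative sketch of the kind of code they enable - the names and types below are my own, not anything from the meeting:

#include <map>
#include <string>
#include <type_traits>

// constexpr if: the branch that doesn't apply is discarded at compile time,
// so each instantiation only compiles the code that makes sense for it.
template <typename T>
auto twice(T const& value) {
    if constexpr (std::is_integral_v<T>)
        return value * 2;        // only instantiated for integral types
    else
        return value + value;    // only instantiated for everything else
}

int main() {
    // structured bindings: unpack the pair returned by insert() into two names
    std::map<std::string, int> scores;
    auto [pos, inserted] = scores.insert({"phil", 42});
    return twice(inserted ? pos->second : 0);
}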

But the part of what I said that surprised even me (because I hadn't really thought of it until a couple of hours before we recorded) was that perhaps it is for the best that we don't get those bigger features just yet! The thinking was that if you take them all together - or even just two or three of them - they have the potential to change the language - and the way we write "modern C++" - perhaps even more than C++11 did, and that's really saying something! Now that's a good thing, in my opinion, but I do wonder if it would be too soon for such large-scale changes just yet.

After the 98 standard, C++ went into a thirteen-year period in the wilderness (there was C++03, which fixed a couple of problems with the 98 standard but didn't actually add any new features - except value initialisation). As this period coincided with the rise of other mainstream languages - Java and C# in particular - it seemed that C++ was a dying language, destined for a drawn-out, Cobolesque old age at best.

But C++11 changed all that and injected a vitality and enthusiasm into the community not seen since the late 90s - if ever! Again the timing was a factor: with Moore's Law no longer translating into single-core performance gains, there was a resurgence of interest in low/zero-overhead systems languages - and C++11 was getting modern enough to be palatable again. "There's no such thing as a free lunch" turns out to be true if you wait long enough.

So the seismic changes in C++11 were overdue, welcome and much needed at that time. Since then the standardisation process has moved to the "train model", which has settled on a new standard every three years. Whatever is ready (and fits) makes it in. If it's not baked it's dropped - or is moved into a TS that can be given more real-world testing before being reconsidered. This has allowed momentum to be maintained and reassures us that we won't be stuck without an update to the standard for too long again.

On the other hand, many code-bases are still catching up to C++11. There are not many breaking changes - and you can introduce newer features incrementally and in only parts of the code-base - but this can lead to some odd-looking code, and once you start converting things you tend to want to go all in. Even if that's not true for your own code-base it may be true of libraries and frameworks you depend on! Those features we wanted in C++17 could have a similar - maybe even greater - effect, and my feeling is that, while they would certainly be welcomed by many (me included), there would also be many more who might start to see the churn in the language as a sign of instability. "What? We've only just moved on to C++11 and you want us to adopt these features too?". Sometimes it can be nice to just know where you are with a language - especially after a large set of changes. 2011 might seem like a long time ago, but there's a long lag in compiler conformance, then compiler adoption, then understanding and usage of newer features. Teams that are only just starting to experiment with C++11 language features are still very common.

I could be wrong about this, but it feels like there's something in it based on my experience. And I think the long gap between C++98 and C++11 is responsible for at least amplifying the effect. People got used to C++ being defined a single way and now we have three standards already in use, with another one almost ready. It's a lot to keep up with - even for those of us that enjoy that sort of thing!

So I'm really looking forward to those bigger features that we'll hopefully get in C++20 (and don't forget you can even use the TS's now if your compiler supports them - and the Ranges library is available on GitHub) - but I'm also looking forward to updating the language with C++17 and the community gaining a little more experience with the new, rapidly evolving, model of C++ before the next big push.

Cherry pick and merge revisions in Mercurial

The Lone C++ Coder's Blog from The Lone C++ Coder's Blog

I’ve mentioned before that I prefer Mercurial to Git, at least for my own work. That said, Git has a nice feature that allows you to cherry pick revisions to merge between branches. That’s extremely useful if you want to move a single change between branches without doing a full branch merge. It turns out Mercurial has that ability too, but it goes by a slightly different name. There are actually two options in Mercurial - the older transplant extension and, from Mercurial 2.0 onwards, the built-in graft command.
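
As a rough sketch of how the options compare - the revision numbers, branch and repository names here are invented for illustration:

# Git: copy a single commit from another branch onto the current one
git cherry-pick 1a2b3c4

# Mercurial 2.0 onwards: graft is built in
hg update stable        # switch to the branch that should receive the change
hg graft -r 1234        # copy revision 1234 from elsewhere in the repository

# Older Mercurial: enable the transplant extension in ~/.hgrc first
#   [extensions]
#   transplant =
hg transplant 1234      # or add -s /path/to/other/repo to pull from another clone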

On joining JetBrains

Phil Nash from level of indirection

The JetBrains office in St. Petersburg

I recently joined JetBrains as Developer Advocate for their (our) C++ related tools, CLion (a cross-platform C++ IDE), AppCode (for Mac and iOS development - also supporting Swift and Objective-C) and ReSharper C++ (plug-in for Visual Studio).

This was a significant move for me as my previous full-time role had been as a developer at one place for over seven years! In that time I had been able to take time out to do occasional stints of coaching and consulting, as well as travelling to a number of conferences and other events where I always felt like I was more a part of the community. But making that a significant part of my job has been an interesting transition.

JetBrains are a superb software engineering company with a friendly, diverse set of employees distributed across a number of offices. I primarily interact with colleagues in St. Petersburg, Russia and Munich, Germany (and have already visited both offices). Personally I work at home - as do the rest of the Developer Advocacy team (at their respective homes, that is - there's not quite enough room at my place!)

I'm not going to talk too much more about my role in general, here. I covered it a bit more in an interview published on some of the JetBrains blogs soon after I joined. What I wanted to talk about here is how it affects my other activities - and in particular Catch.

One of the enticing things about taking this role was that I would be expected to continue work on Open Source projects. After all, I wouldn't be working on a paid project any more - but I still need to keep my skills relevant (and not just tuned to small, self-contained demos). Over the last three months, however, I've found there has been a lot more to get up to speed on - and more opportunity to overbook myself - than I had anticipated. So catching up on Catch has been minimal so far.

But with my first three release-oriented activities under my belt, and a clearer idea of what to expect, I'm looking forward to turning that around early in the new year. I have a lot of issues and PRs to catch up on, and I've started work on Catch2 in parallel, which I'm keen to get to a first release of (I intend to follow up soon with a post on Catch2). I'm also hoping to find people I can work with (and give commit rights to) to help with all of the above.

Beyond Catch I expect to blog more frequently again (both here and on the JetBrains blogs) and I have some other projects in the pipeline too.

Mutt regex pattern modifiers

The Lone C++ Coder's Blog from The Lone C++ Coder's Blog

I still use the mutt email client when I’m remoted into some of my FreeBSD servers. It might not be the most eye-pleasing email client ever, but it’s powerful, lightweight and fast. Mutt has a very powerful feature that allows you to tag messages via regular expressions, and it has a couple of special pattern modifiers that let you apply the regex to certain mail headers only. I can never remember them, so I’m starting a list of the ones I tend to use most in the hope that I’ll either remember them eventually or can refer back to this post.
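
To give a flavour of what those modifiers look like - this is my own illustrative cheat-sheet rather than the list from the post - you press T (tag-pattern) or l (limit) in the index and then enter a pattern such as:

~f pattern    # From header matches the regex
~t pattern    # To header matches
~c pattern    # Cc header matches
~s pattern    # Subject matches
~h pattern    # any header line matches
~b pattern    # message body matches
~B pattern    # whole message matches

For example, T followed by ~f freebsd\.org would tag every message in the index from a freebsd.org address.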

Migrate MelatiSite from CVS to github

Tim Pizey from Tim Pizey

Re-visiting http://tim-pizey.blogspot.co.uk/2011/10/cvs-to-github.html (why did I not complete this at the time?)

Following "How to export revision history from mercurial or git to cvs?"

On hanuman I created an id file, git_authors, mapping CVS ids to "Name <email>" format entries for all contributors:


timp=Tim Pizey<timp@paneris.org>

Then create a repository on GitHub (melati in this example; I have already uploaded my SSH public key for this machine):

cd ~
git cvsimport -d /usr/cvsroot -C MelatiSite -r cvs -k -A git_authors MelatiSite

cd MelatiSite
echo A jdbc to java object relational mapping system. 1999-2011 > README.txt
git add README.txt
git commit -m "Initial" README.txt
git remote add origin git@github.com:timp21337/melati.git
git push -u origin master
See https://github.com/timp21337/melati.

Surprising Defaults – HttpClient ExpectContinue

Chris Oldwood from The OldWood Thing

One of the things you quickly discover when moving from building services on-premise to “the cloud” is quite how many more bits of wire and kit suddenly sit between you and your consumer. Performance-wise this already elongated network path can then be further compounded when the framework you’re using invokes unintuitive behaviour by default [1].

The Symptoms

The system was a new REST API built in C# on the .Net framework (4.6) and hosted in the cloud with AWS. This AWS endpoint was then further fronted by Akamai for various reasons. The initial consumer was an on-premise adaptor (also written in C#) which itself had to go through an enterprise grade web proxy to reach the outside world.

Naturally, monitoring was added fairly early on so that we could start to get a feel for how much added latency moving to the cloud would bring. Our first-order approximation to instrumentation allowed us to tell how long the HTTP requests took to handle, along with a breakdown of the major functions, e.g. database queries and 3rd party requests. Outside the service we had some remote monitoring too that could tell us the performance from a more customer-like position.

When we integrated with the 3rd party service some poor performance stats caused us to look more closely at our metrics. The vast majority of the big delays were outside our control, but it also raised some other questions as the numbers didn't quite add up. We had expected the following simple formula to account for virtually all the time:

HTTP Request Time ~= 3rd Party Time + Database Time

However we were seeing a 300 ms discrepancy in many (but not all) cases. It was not our immediate concern as there were bigger fish to fry, but some extra instrumentation was added to the OWIN pipeline and we did a couple of quick local profile runs to look out for anything obviously out of place. The finger seemed to point to time lost somewhere in the Nancy part of the pipeline, but that didn't entirely make sense at the time so it was mentally filed away and we moved on.

Serendipity Strikes

Whilst talking to the 3rd party about our performance woes with their service, they came back to us and asked if we could stop sending them an “Expect: 100-Continue” header in our HTTP requests.

This wasn’t something anyone in the team was aware of and as far as we could see from the various RFCs and blog posts it was something “naturally occurring” on the internet. We also didn’t know if it was us adding it or one of the many proxies in between us and them.

We discovered how to turn it off, and did, but it made little difference to the performance problems we had with them, which were in the order of seconds, not milliseconds. Feeling uncomfortable about blindly switching settings off without really understanding them we reverted the change.

The mention of this header also cropped up when we started investigating some errors we were getting from Akamai that seemed to be more related to a disparity in idle connection timeouts.

Eventually, as we learned more about this mysterious header someone in the team put two-and-two together and realised this was possibly where our missing time was going too.

The Cause

Our REST API uses PUT requests to add resources and it appears that the default behaviour of the .Net HttpClient class is to enable the sending of this “Expect: 100-Continue” header for those types of requests. Its purpose is to tell the server that the headers have been sent, but that the client will delay sending the body until it receives a 100 Continue interim response. At that point the client sends the body, the server can then process the entire request, and the response is handled by the client as per normal.

Yes, that’s right, it splits the request up so that it takes two round trips instead of one!
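
On the wire the exchange looks roughly like this (the URL, host and sizes are invented for illustration):

-- round trip 1: headers only, then the client waits --
PUT /orders/42 HTTP/1.1
Host: api.example.com
Content-Type: application/json
Content-Length: 512
Expect: 100-continue

HTTP/1.1 100 Continue

-- round trip 2: now the body is sent and the real response comes back --
{ ...512 bytes of JSON... }

HTTP/1.1 200 OK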

Now you can probably begin to understand why our request handling time appeared elongated and why it also appeared to be consumed somewhere within the Nancy framework. The request processing is started and handled by the OWIN middleware, as that only depends on the headers; it then enters Nancy, which finds a handler and so requests the body in the background (asynchronously). When the body finally arrives the whole request is then passed to our Nancy handler just as if it had all been sent as a single chunk.

The Cure

When you google this problem in relation to .Net you’ll see that there are a couple of options here. We were slightly nervous about choosing the nuclear option (setting it globally on the ServicePointManager) and instead added an extra line into our HttpClient factory so that it was localised:

var client = new HttpClient(...);
...
client.DefaultRequestHeaders.ExpectContinue = false;
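
For completeness, the global "nuclear option" we steered away from is a one-liner on System.Net.ServicePointManager - shown here only as a sketch, since it affects every outgoing request in the process rather than just one client:

// Disables the Expect: 100-Continue behaviour process-wide for all
// subsequent requests, not just those from a single HttpClient instance.
ServicePointManager.Expect100Continue = false;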

We re-deployed our services, checked our logs to ensure the header was no longer being sent, and then checked the various metrics to see if the time was now all accounted for, and it was.

Epilogue

In hindsight this all seems fairly obvious, at least, once you know what this header is supposed to do, and yet none of the people in my team (who are all pretty smart) joined up the dots right away. When something like this goes astray I like to try and make sense of why we didn’t pick it up as quickly as perhaps we should have.

In the beginning there were so many new things for the team to grasp. The difference in behaviour between our remote monitoring and the on-premise adaptor was assumed to be one of infrastructure, especially when we had already battled the on-premise web proxy a few times [2]. We saw so many other headers in our requests that we never added, so why would we assume this one was any different (given none of us had run across it before)?

Given the popularity and maturity of the Nancy framework we surmised that no one would use it if it had the kind of performance problems we were seeing, so once again we were confused as to how the time could appear to be lost inside it. Although we were all aware of what the async/await construct does, none of us had really spent any serious time trying to track down performance anomalies in code that uses it so liberally, and so once again we had difficulties understanding what the tool was really telling us.

Ultimately though, the default behaviour just seems so utterly wrong that none of us could imagine the out-of-the-box settings would cause the HttpClient to behave this way. By choosing this default we are in essence optimising PUT requests for the scenario where the body does not need sending, which we all felt is definitely the exception, not the norm. Aside from large file uploads or massive write contention we were struggling to come up with a plausible use case.

I don’t know what forces caused this decision to be made as I clearly wasn’t there, and I can’t find any obvious sources that might explain it either. The internet and HTTP have evolved so much over the years that it’s possible this behaviour provides the best compatibility with web servers out-of-the-box. My own HTTP experience only covers the last few years, along with a few more around the turn of the millennium, but my colleagues easily cover the decades I’m missing so I don’t feel I’m missing anything obvious.

Hopefully some kind soul will use the comments section to link to the rationale so we can all get a little closure on the issue.

 

[1] Violating The Principle of Least Astonishment for configuration settings was something I covered more generally before in “Sensible Defaults”.

[2] See “The Curse of NTLM Based HTTP Proxies”.

What happened to XEmacs?

The Lone C++ Coder's Blog from The Lone C++ Coder's Blog

I used XEmacs quite a lot in the 2000s before I switched back to the more stable GNU Emacs. That was back when XEmacs offered a stable official Windows build and GNU Emacs didn’t, and at the time I was doing a lot of Windows development. Out of curiosity, and for some research, I tried to look into the current state of the project and found that www.xemacs.org appears to be unreachable.

Bit the bullet and upgraded my Mac Pro’s CPU

The Lone C++ Coder's Blog from The Lone C++ Coder's Blog

I’ve been an unashamed fan of the old “cheese grater” Mac Pro due to its sturdiness and expandability. Yes, they’re not the most elegant bit of kit out there but they are well built. And most importantly for me, they are expandable by plugging things inside the case, not by creating a Gordian Knot of hubs, Thunderbolt cables, USB cables and stacks of external disks all evenly scattered around a trash can. Oh, and they’re designed to go under a desk. Where mine happens to live, right next to my dual boot Linux/Windows development box.