Modeling Visual Studio C++ compile times

Derek Jones from The Shape of Code

Last week I spotted an interesting article on the compile-time performance of C++ compilers running under Microsoft Windows. The author had obviously put a lot of work into gathering the data, and had taken care to have multiple runs to reduce the impact of random effects (128 runs to be exact); but, as is often the case, the analysis of the data was lackluster. I posted a comment asking for the data, and a link was posted the next day :-)

The compilers benchmarked were: Visual Studio 2015, Visual Studio 2017 and clang 7.0.1; the compilers were configured to target: C++20, C++17, C++14, C++11, C++03, or C++98. The source code used was 100 system headers.

If we are interested in understanding the contribution of each component to overall compile-time, the obvious first regression model to build is:

compile_time = header_x+compiler_y+language_z

where: header_x are the different headers, compiler_y the different compilers and language_z the different target languages. There might be some interaction between variables, so something more complicated was tried first; the final fitted model was (code+data):

compile_time = k+header_x+compiler_y+language_z+compiler_y*language_z

where k is a constant (the Intercept in R’s summary output). The following is a list of normalised numbers to plug into the equation (clang is the default compiler and C++03 the default language, so they do not appear in the list; the : symbol marks a compiler/language interaction term; only a few of the 100 headers are listed, details are available):

   (Intercept)                   1.000000000
   headerany                     0.051100398
   headerarray                   0.522336397
   headerassert.h               -0.654056185
...
   headerwctype.h               -0.648095154
   headerwindows.h               1.304270250
   compilerVS15                 -0.185795534
   compilerVS17                 -0.114590143
   languagec++11                 0.032930014
   languagec++14                 0.156363433
   languagec++17                 0.192301727
   languagec++20                 0.184274629
   languagec++98                 0.001149643
   compilerVS15:languagec++11   -0.058735591
   compilerVS17:languagec++11   -0.038582437
   compilerVS15:languagec++14   -0.183708714
   compilerVS17:languagec++14   -0.164031495
   compilerVS15:languagec++17             NA
   compilerVS17:languagec++17   -0.181591418
   compilerVS15:languagec++20             NA
   compilerVS17:languagec++20   -0.193587045
   compilerVS15:languagec++98    0.062414667
   compilerVS17:languagec++98    0.014558295

As an example, the (normalised) time to compile wchar.h using VS15 targeting C++11 is (intercept, header, compiler, language and interaction terms respectively):
1-0.514807638-0.183862162+0.033951731-0.059720131

Each component adds/subtracts to/from the normalised mean.
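The model itself was fitted in R (the code+data link above has the details). Purely as an illustrative sketch of the same idea, an additive fit in Python with statsmodels might look something like the following; the file name and column names are my assumptions, not those used in the original analysis:

import pandas as pd
import statsmodels.formula.api as smf

# hypothetical data layout: one row per run, with columns
# compile_time, header, compiler and language
df = pd.read_csv("compile_times.csv")

# header, compiler and language main effects, plus a compiler:language interaction
model = smf.ols("compile_time ~ header + compiler * language", data=df).fit()
print(model.params)   # one coefficient per header/compiler/language level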

Building this model didn’t take long. While waiting for the kettle to boil, I suddenly realised that an additive model was probably inappropriate for this problem; oops. Surely the contribution of each component was multiplicative, i.e., components have a percentage impact on performance.

A quick change to the form of the fitted model:

log(compile_time) = k+header_x+compiler_y+language_z+compiler_y*language_z

Taking the exponential of both sides, the fitted equation becomes:

compile_time = e^{k}e^{header_x}e^{compiler_y}e^{language_z}e^{compiler_y*language_z}

The numbers, after taking the exponent, are:

   (Intercept)                  9.724619e+08
   headerany                    1.051756e+00
...
   headerwctype.h               3.138361e-01
   headerwindows.h              2.288970e+00
   compilerVS15                 7.286951e-01
   compilerVS17                 7.772886e-01
   languagec++11                1.011743e+00
   languagec++14                1.049049e+00
   languagec++17                1.067557e+00
   languagec++20                1.056677e+00
   languagec++98                1.003249e+00
   compilerVS15:languagec++11   9.735327e-01
   compilerVS17:languagec++11   9.880285e-01
   compilerVS15:languagec++14   9.351416e-01
   compilerVS17:languagec++14   9.501834e-01
   compilerVS15:languagec++17             NA
   compilerVS17:languagec++17   9.480678e-01
   compilerVS15:languagec++20             NA
   compilerVS17:languagec++20   9.402461e-01
   compilerVS15:languagec++98   1.058305e+00
   compilerVS17:languagec++98   1.001267e+00

Taking the same example as above: wchar.h using VS15 targeting C++11. The compile-time (in CPU clock cycles) is:
9.724619e+08*3.138361e-01*7.286951e-01*1.011743e+00*9.735327e-01

Now each component causes a percentage change in the (mean) base value.
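Again as an illustrative sketch (not the R code used for the post), the only change needed for the multiplicative form is to fit the log of the response and exponentiate the resulting coefficients; the data frame uses the same assumed layout as in the earlier sketch:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compile_times.csv")   # same hypothetical data as before

# fitting on log(compile_time) turns additive coefficients into
# multiplicative factors once they are exponentiated
log_model = smf.ols("np.log(compile_time) ~ header + compiler * language",
                    data=df).fit()
print(np.exp(log_model.params))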

Both of these models explain over 90% of the variance in the data, but this is hardly surprising given that they include so much detail.

In reality compile-time is driven by some combination of additive and multiplicative factors. Building a combined additive and multiplicative model is going to be like wrestling an octopus, and is left as an exercise for the reader :-)

Given a choice between these two models, I think the multiplicative model is probably closest to reality.

xkcd-style plots in MatPlotLib

Frances Buontempo from BuontempoConsulting

Most programmers I know are familiar with xkcd, the webcomic of romance, sarcasm, math, and language. In order to create diagrams for my machine learning book, I wanted a way to make something I could have fun with.

I discovered that Python's MatPlotLib library has an xkcd style, which you simply wrap round a plot. This allowed me to piece together what I needed using line segments, shapes, and labels.

Given a function, f, which draws what you need on some axes ax, use the style like this, and you're done:

import matplotlib.pyplot as plt

with plt.xkcd():
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    f(ax)          # f draws whatever you need onto the axes ax

    plt.show()

For example, to explain what happens when you fire cannons at different angles:
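The original post shows the resulting cartoon-style plot at this point. Purely as a hypothetical example of the kind of drawing function f described above (the function name, speed and angles below are mine, not taken from the post), something along these lines plots trajectories for a few launch angles:

import numpy as np
import matplotlib.pyplot as plt

def cannon_trajectories(ax, speed=10.0, g=9.81):
    # draw the flight path of a projectile fired at several angles
    for angle_deg in (20, 45, 70):
        angle = np.radians(angle_deg)
        flight_time = 2 * speed * np.sin(angle) / g
        t = np.linspace(0, flight_time, 100)
        x = speed * np.cos(angle) * t
        y = speed * np.sin(angle) * t - 0.5 * g * t**2
        ax.plot(x, y, label=f"{angle_deg} degrees")
    ax.set_xlabel("distance")
    ax.set_ylabel("height")
    ax.legend()

with plt.xkcd():
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    cannon_trajectories(ax)
    plt.show()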


I gave a talk at Skillsmatter, called "Visualisation FTW", which was recorded, so you can watch it if you want. I also wrote this up for ACCU's CVu magazine. The ACCU runs an annual best article survey, and I was runner up, which was a pleasant surprise. You need to be a member to view the article, but it covers the same ground as the short talk at Skillsmatter.

Buy a copy of my book, or go play with xkcd style pictures. Have fun; I did.



London Python Meetup January 2019 – Async Python and GeoPandas

Andy Balaam from Andy Balaam's Blog

It was a pleasure to go to the London Python Meetup organised by @python_london. There were plenty of friendly people and interesting conversations.

I gave a talk “Making 100 million requests with Python aiohttp” (slides) explaining the basics of writing async code in Python 3 and how I used that to make a very large number of HTTP requests.

Andy giving the presentation

(Photo by CB Bailey.)

Hopefully it was helpful – there were several good questions, so I am optimistic that people were engaged with it.
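The slides linked above have the real details; as a rough illustration of the pattern the talk covers (the URL and request count below are placeholders, not what was used for the 100 million requests), a minimal aiohttp example looks something like this:

import asyncio
import aiohttp

async def fetch(session, url):
    # issue one GET request and return its status code
    async with session.get(url) as response:
        return response.status

async def main(urls):
    # reuse a single session for all requests, issuing them concurrently
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(*(fetch(session, url) for url in urls))
        print(statuses)

asyncio.run(main(["https://example.com"] * 10))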

After that, there was an excellent talk by Gareth Lloyd called “GeoPandas, the geospatial extension for Pandas” in which he explained how to use the very well-developed geo-spatial data tools available in the Python ecosphere to transform, combine, plot and analyse data which includes location information. I was really impressed with how easy the libraries looked to use, and also with the cool Jupyter notebook Gareth used to explain the ideas using live demos.
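As a rough sketch of the kind of workflow described (the file names and layers below are hypothetical, not from Gareth's demos), GeoPandas makes this sort of thing very compact:

import geopandas as gpd
import matplotlib.pyplot as plt

# hypothetical inputs: a point layer of events and a polygon layer of regions
points = gpd.read_file("events.geojson")
regions = gpd.read_file("regions.geojson")

# put both layers in the same coordinate reference system,
# then attach region attributes to each point with a spatial join
points = points.to_crs(regions.crs)
joined = gpd.sjoin(points, regions)

joined.plot()   # quick map of the combined data
plt.show()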

London Python Meetups seem like a cool place to meet Pythonistas of all levels of experience in a nice, low-pressure environment!

Meetup link: aiohttp / GeoPandas

Setting up my own VPN server on Vultr with Centos 7 and WireGuard

Timo Geusch from The Lone C++ Coder's Blog

As an IT consultant, I travel a lot. I mean, a lot. Part of the pleasure is having to deal with day-to-day online life on open, potentially free-for-all hotel and conference WiFi. In other words, the type of networks you really want to do your online banking, ecommerce and other potentially sensitive operations on. After […]

The post Setting up my own VPN server on Vultr with Centos 7 and WireGuard appeared first on The Lone C++ Coder's Blog.

Teaching basic data analysis to programmers: summer internship

Derek Jones from The Shape of Code

Software engineering is one of the topics in this year’s summer internships being sponsored by R-Studio. The spec says: “Data Science Training for Software Engineers – Develop course materials to teach basic data analysis to programmers using software engineering problems and data sets.”

It’s good to see interest in data analysis of software engineering data start to gain traction.

What topics might basic data analysis for programmers include? I have written about statistical techniques that I think are useful in software engineering, but I don’t think this list would be regarded as basic. Techniques that I think are basic are:

  • a picture is worth a thousand words, so obviously visualization is a major topic,
  • building regression models is good for helping to understand what is going on.

Anything else? Well, I don’t know.

An alternative approach to teaching basic data analysis is to give examples of the kind of useful things it can be used to do. Software developers are fast learners, and given the motivation have the skills needed to find and learn techniques that they think are of use. In a basic course, I would put the emphasis on motivating developers to think that data analysis can help them do a better job.

I would NOT, repeat, not, include any material on machine learning. Software engineering data sets tend to be too small to obtain reliable results from machine learning, and I don’t want to encourage clueless button pushers.

What are the desirable skills in the summer intern? I would say that being able to write readable material is the most important, with statistical knowledge ranked second; the level of software engineering knowledge is unimportant. Data analysis tends to follow the same pattern whatever the subject; so it’s important to get somebody who knows about data analysis.

A social science major is the obvious demographic for this intern (they do lots of data analysis); the last people to consider are students majoring in a computing subject.

On Onwards And Downwards – student

student from thus spake a.k.

When last they met, the Baron challenged Sir R----- to evade capture whilst moving rooks across and down a chessboard. Beginning with a single rook upon the first file and last rank, the Baron should have advanced it to the second file and thence downwards in rank in response to which Sir R----- should have progressed a rook from beneath the board by as many squares and if by doing so had taken the Baron's would have won the game. If not, Sir R----- could then have chosen either rook, barring one that sits upon the first rank, to move to the next file in the same manner with the Baron responding likewise. With the game continuing in this fashion and ending if either of them were to take a rook moved by the other or if every file had been played upon, the Baron should have had a coin from Sir R----- if he took a piece and Sir R----- one of the Baron's otherwise.

Choosing between two reasonably fitting probability distributions

Derek Jones from The Shape of Code

I sometimes go fishing for a probability distribution to fit some software engineering data I have. Why would I want to spend time fishing for a probability distribution?

Data comes from events that are driven by one or more processes. Researchers have studied the underlying patterns present in many processes and in some cases have been able to calculate which probability distribution matches the pattern of data that it generates. This approach starts with the characteristics of the processes and derives a probability distribution. Often I don’t really know anything about the characteristics of the processes that generated the data I am looking at (but I can often make what I like to think are intelligent guesses). If I can match the data with a probability distribution, I can use what is known about processes that generate this distribution to get some ideas about the kinds of processes that could have generated my data.

Around nine months ago, I learned about the Conway–Maxwell–Poisson distribution (or COM-Poisson). This looked as if it might find some use in fitting software engineering data, and I added it to my list of distributions to keep in mind. I saw that the R package COMPoissonReg supports the fitting of COM-Poisson distributions.

This week I came across one of the papers about COM-Poisson that I was reading nine months ago, and decided to give it a go with some count-data I had.

The Poisson distribution involves count-data, i.e., non-negative integers. Lots of count-data samples are well described by a Poisson distribution, and it is one of the basic distributions supported by statistical packages. Processes described by a Poisson distribution are memory-less, in that the probability of an event occurring is independent of when previous events occurred. When there is a connection between events, the Poisson distribution is not such a good fit (depending on the strength of the connection between events).

While a process that generates count-data may not meet the requirements needed to be exactly described by a Poisson distribution, the behavior may be close enough to give good-enough results. R supports a quasipoisson distribution to help handle the ‘near-misses’.

Sometimes count-data has a distribution that looks nothing like a Poisson. The Negative binomial distribution is the obvious next choice to try (this can be viewed as a combination of different Poisson distributions; another combination is the Poisson inverse Gaussian distribution).
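As a rough illustration of trying both (this is not the analysis from the post, which was done in R, and the counts below are made up), fitting a Poisson and a Negative binomial to a sample of count-data and comparing them might look like this:

import numpy as np
import statsmodels.api as sm

# made-up counts standing in for something like "fields per structure type"
counts = np.array([1, 2, 2, 3, 1, 5, 8, 2, 1, 13, 3, 2, 4, 1, 2, 6, 1, 3, 2, 9])
intercept = np.ones_like(counts)   # intercept-only model, i.e. fit the bare distribution

poisson_fit = sm.Poisson(counts, intercept).fit(disp=0)
negbin_fit = sm.NegativeBinomial(counts, intercept).fit(disp=0)

# the lower AIC suggests the better-fitting distribution
print("Poisson AIC:          ", poisson_fit.aic)
print("Negative binomial AIC:", negbin_fit.aic)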

The plot below (from a paper analyzing usage of record data structures in Racket; Tobias Pape kindly sent me the data) shows the number of Racket structure types that contain a given number of fields (red pluses), along with lines showing fitted Negative binomial and COM-Poisson distributions (code+data):

Number of Racket structure types containing a given number of fields.

I’m interested in understanding the processes that are generating the data, and having two distributions do such a reasonable job of fitting the data has given me more possible distinct explanations for what is going on than I wanted (if I were interested in prediction, then either distribution looks like it would do a good-enough job).

What are the characteristics of the processes that generate data having each of the distributions?

  • A Negative binomial can be viewed as a combination of Poisson distributions (the combination having a Gamma distribution). We could create a story around multiple processes being responsible for the pattern seen, with each of these processes having the impact of a Poisson distribution. Sounds plausible (a small simulation of this idea appears after this list).
  • A COM-Poisson distribution can be viewed as a Poisson distribution which is length dependent. We could create a story around the probability of a field being added to a structure type being dependent on the number of existing fields it contains. Sounds plausible (it’s a slightly different idea from preferential attachment).
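As a quick sanity check of the first bullet (the parameters are arbitrary, chosen only for illustration), drawing Poisson counts whose rates are themselves Gamma distributed produces the over-dispersion characteristic of a Negative binomial:

import numpy as np

rng = np.random.default_rng(1)

# one Gamma-distributed rate per notional process, then a Poisson count for each
rates = rng.gamma(shape=2.0, scale=1.5, size=100_000)
counts = rng.poisson(rates)

# a Gamma mixture of Poissons is a Negative binomial: the variance exceeds
# the mean, unlike a pure Poisson where the two are (approximately) equal
print(counts.mean(), counts.var())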

When fitting a distribution to data, I usually go with the ‘brand-name’ distributions (i.e., the one with the most name recognition, provided it matches well enough; brand names are an easier sell than the less well-known names).

The Negative binomial distribution is the brand-name here. I had not heard of the COM-Poisson distribution until nine months ago.

Perhaps the authors of the Racket analysis paper will come up with a theory that prefers one of these distributions, or even suggests another one.

Readers of my evidence-based software engineering book need to be aware of my brand-name preference in some of the data fitting that occurs.

Visual Lint 6.5.6.302 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a recommended maintenance update for Visual Lint 6.0 and 6.5. The following changes are included:

  • Modified generated Vera++ command lines to replace the -showrules option with --show-rule. In consequence the minimum supported version of Vera++ is now 1.2.1.
  • When a Visual Studio 2017 project using the /Zc:alignedNew or /Zc:alignedNew+ option is loaded, the C++17 __STDCPP_DEFAULT_NEW_ALIGNMENT__ preprocessor symbol will now be included in the generated analysis configuration.
  • Corrected the value of _MSC_FULL_VER referenced in the PC-lint Plus compiler indirect files for Visual Studio .NET 2002 and 2003 (co-rb-vs2002.lnt and co-rb-vs2003.lnt respectively).

Download Visual Lint 6.5.6.302