CppCon 2019 Class, Presentation and Book Signing

Anthony Williams from Just Software Solutions Blog

It is now less than a month to this year's CppCon, which is being held in Aurora, Colorado, USA for the first time, in a change from Bellevue, where it has been for the last few years.

The main conference runs from 15th-20th September 2019, but there are also pre-conference classes on 13th and 14th September, and post-conference classes on 21st and 22nd September.

I will be running a 2-day pre-conference class, entitled More Concurrent Thinking in C++: Beyond the Basics, which is for those looking to move beyond the basics of threads and locks to the next level: high level library and application design, as well as lock-free programming with atomics. You can book your place as part of the normal CppCon registration.

I will also be presenting a session during the main conference on "Concurrency in C++20 and beyond".

Finally, I will also be signing copies of the second edition of my book C++ Concurrency In Action now that it is in print.

I look forward to seeing you there!


Breakfast: One for the bikers with Matt Leach of Geotekk

Paul Grenyer from Paul Grenyer



When: Tuesday, September 3, 2019 - 7:30am to 8:30am
Where: The Maids Head Hotel, Tombland, Norwich, NR3 1LB
How much: £13.95
RSVP: https://www.meetup.com/Norfolk-Developers-NorDev/events/qqwhznyzmbfb/

Matt will talk about Geotekk’s product design and fundraising journey, and how the company has developed through a belief that anything which reduces stress and worry in everyday life enables a happier life, empowering us to “Live More”.

Matt is co-founder of Geotekk, a company specialising in smart alarms for bikes. The company was founded in 2015 in response to ever-rising levels of bike theft; Matt and his co-founder James strive to provide customers with freedom and peace of mind by creating an affordable, versatile and best-in-class smart alarm, one that combines and improves on the most effective features of other security products in a single multi-functional package.

The Octogram Of Seth LaPod – baron m.

baron m. from thus spake a.k.

Salutations Sir R-----! I trust that this fine summer weather has you thirsting for a flagon. And perhaps a wager?

Splendid! Come join me at my table!

I propose a game played as a religious observance by the parishioners of the United Reformed Eighth-day Adventist Church of Cthulhu, the eldritch octopus god that lies dead but dreaming in the drowned city of Hampton-on-Sea.
Several years ago, the Empress directed me to pose as a peasant and infiltrate their temple of Fhtagn in the sleepy village of Saint Reatham on the Hill when it was discovered that Bishop Derleth Miskatonic had been directing his congregation to purchase vast tracts of land in the Ukraine and gift them to the church in return for the promise of being spared when Cthulhu finally wakes and devours mankind.

Converting lines in an svg image to csv

Derek Jones from The Shape of Code

During a search for data on programming language usage I discovered Stack Overflow Trends, showing an interesting plot of language tags appearing on Stack Overflow questions (see below). Where was the csv file for these numbers? Somebody had asked this question last year, but there were no answers.

Stack Overflow language tag trends over time.

The graphic is in svg format; has anybody written an svg to csv conversion tool? I could only find conversion tools for specialist uses, e.g., geographical data processing. The svg file format is all xml, and using a text editor I could see the numbers I was after. How hard could it be (it had to be easier than a png heatmap)?

Extracting the x/y coordinates of the line segments for each language turned out to be straightforward (after some trial and error). The svg generation process made matching language to line trivial; the language name was included as an xml attribute.

Programmatically extracting the x/y axis information exhausted my patience, and I hard coded the numbers (code+data). The process involves walking an xml structure and R’s list processing, two pet hates of mine (the data is for a book that uses R, so I try to do everything data related in R).

I used R’s xml2 package to read the svg files. Perhaps if my mind had a better fit to xml and R lists, I would have been able to do everything using just the functions in this package. My aim was always to get far enough down to convert the subtree to a data frame.
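For anyone wanting to try something similar, the sketch below shows the kind of xml2 code involved (the file name, and which attribute holds the language name, are assumptions; this is not the post's code+data):

library(xml2)

# Read the svg (hypothetical file name). svg files declare a default
# namespace, which xml2 exposes under the prefix d1.
svg <- read_xml("so-trends.svg")
ns <- xml_ns(svg)

# Assume each language's line is a <path> whose "d" attribute holds the
# line segments, e.g. "M35,210L40,205L45,199...".
paths <- xml_find_all(svg, ".//d1:path[@d]", ns)

path_to_df <- function(p)
{
   d <- xml_attr(p, "d")
   nums <- as.numeric(unlist(regmatches(d, gregexpr("-?[0-9.]+", d))))
   data.frame(lang = xml_attr(p, "class"),   # assumed: language name stored in an attribute
              x = nums[seq(1, length(nums), by = 2)],
              y = nums[seq(2, length(nums), by = 2)])
}

lines_df <- do.call(rbind, lapply(paths, path_to_df))
write.csv(lines_df, "so-trends.csv", row.names = FALSE)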

Extracting data from graphs represented in svg files is so easy (says he). Where is the wonderful conversion tool that my search failed to locate? Pointers welcome.

My book’s pdf generation workflow

Derek Jones from The Shape of Code

The process used to generate the pdf of my evidence-based software engineering book has been on my list of things to blog about, for ever. An email arrived this afternoon, asking how I produced various effects using Asciidoc; this post probably contains rather more than N. Psaris wanted to know.

It’s very easy to get sucked into fiddling around with page layout and different effects. So, unless I am about to make a release of a draft, I only generate a pdf once, at the end of each month.

At the end of the month the text is spell checked using aspell, and then grammar checked using LanguageTool. I have an awk script that checks the text for mistakes I have made in the past; this rarely matches, i.e., I seem to be forever making different mistakes.

The sequencing of tools is: R (Sweave) -> Asciidoc -> docbook -> LaTeX -> pdf; assorted scripts fiddle with the text between outputs and inputs. The scripts and files mentioned below are available for download.

R generates pdf files (via calls to the Sweave function; I have never gotten around to investigating knitr; the pdfs are cropped using scripts/pdfcrop.sh), and the ascii package is used to produce a few tables with Asciidoc markup.

Asciidoc is the markup language used for writing the text. A few years after I started writing the book, Stuart Rackham, the creator of Asciidoc, decided to move on from working on and supporting it. Unfortunately nobody stepped forward to take over the project; not a problem, Asciidoc just works (somebody did step forward to reimplement the functionality in Ruby; Asciidoctor has an active community, but there is no incentive for me to change). In my case, the output from Asciidoc is xml (it supports a variety of formats).

Docbook appears in the sequence because Asciidoc uses it to produce LaTeX. Docbook takes xml as input, and generates LaTeX as output. Back in the day, Docbook was hailed as the solution to all our publishing needs, and wonderful tools were going to be created to enable people to produce great looking documents.

LaTeX is the obvious tool for anybody wanting to produce lovely looking books and articles; tex/ESEUR.tex is the top-level LaTeX, which includes the generated text. Yes, LaTeX is a markup language, and I could have written the text using it. As a language I find LaTeX too low level. My requirements are not complicated, and I find it easier to write using a markup language like Asciidoc.

The input to Asciidoc and LuaTeX (used to generate pdf from LaTeX) is preprocessed by scripts (written using sed and awk; see scripts/mkpdf). These scripts implement functionality that Asciidoc does not support (or at least I could not see how to do it without modifying the Python source). Scripts are a simple way of providing the extra functionality that does not require me to remember details about the internals of Asciidoc. If Asciidoc were being actively maintained, I would probably have worked to get some of the functionality integrated into a future release.

There are a few techniques for keeping text processing scripts simple. For instance, the cost of a pass over the text is tiny, so there is little to be gained by trying to do everything in one pass; and handling markup that spans multiple lines can be complicated, so a simple solution is to join consecutive lines together whenever markup might span them (i.e., the actual matching and conversion no longer has to worry about line breaks).
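A minimal sketch of the line-joining idea, in R rather than the sed/awk actually used, and with a made-up markup pattern:

# If a line opens a markup construct that is not closed on the same line,
# glue the next line onto it before any matching is attempted.
join_spanning_lines <- function(lines, open_pat = "\\[caption=")
{
   i <- 1
   while (i < length(lines))
   {
      if (grepl(open_pat, lines[i]) && !grepl("\\]", lines[i]))
      {
         lines[i] <- paste(lines[i], lines[i + 1])
         lines <- lines[-(i + 1)]
      }
      else
         i <- i + 1
   }
   lines
}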

Many simple features are implemented by a script modifying Asciidoc text to include some ‘magic’ sequence of characters, which is subsequently matched and converted in the generated LaTeX, e.g., special characters, and hyperlinks in the pdf.

A more complicated example handles my desire to specify that a figure appear in the margin; the LaTeX sidenotes package supports figures in margins, but Asciidoc has no way of specifying this behavior. The solution was to add the word “Margin” to the appropriate figure caption option (in the original Asciidoc text, e.g., [caption="Margin ", label=CSD-95-887]), and have a script modify the LaTeX generated by docbook so that figures containing “Margin” in the caption invoke the appropriate macro from the sidenotes package.
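As a rough illustration of that caption-marker trick (the real workflow uses sed/awk, and the exact LaTeX emitted by docbook is assumed here), a script of this kind rewrites figure environments whose caption starts with “Margin” to use the sidenotes package's marginfigure environment:

# Assumed: docbook emits \begin{figure} ... \caption{Margin ...} ... \end{figure};
# such figures are switched to sidenotes' marginfigure environment.
fixup_margin_figures <- function(tex_file)
{
   tex <- readLines(tex_file)
   in_fig <- FALSE
   is_margin <- FALSE
   start <- 0
   for (i in seq_along(tex))
   {
      if (grepl("\\\\begin\\{figure\\}", tex[i])) { in_fig <- TRUE; start <- i }
      if (in_fig && grepl("\\\\caption\\{Margin", tex[i]))
      {
         is_margin <- TRUE
         tex[i] <- sub("\\\\caption\\{Margin *", "\\\\caption{", tex[i])  # strip the marker word
      }
      if (in_fig && grepl("\\\\end\\{figure\\}", tex[i]))
      {
         if (is_margin)
         {
            tex[start] <- sub("\\{figure\\}", "{marginfigure}", tex[start])
            tex[i] <- sub("\\{figure\\}", "{marginfigure}", tex[i])
         }
         in_fig <- FALSE
         is_margin <- FALSE
      }
   }
   writeLines(tex, tex_file)
}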

There are still formatting issues waiting to be solved. For instance, some tables are narrow enough to fit in the margin, but I have not found a way of embedding this information in the table information that survives through to the generated LaTeX.

My long time pet hate is the formatting used by R’s plot function for exponentiated values as axis labels. My target audience are likely to be casual users of R, so I am sticking with basic plotting (i.e., no calls to ggplot). I do wish the core R team would integrate the code from the magicaxis package, to bring the printing of axis values into the era of laser printers and bit-mapped displays.

Ideas and suggestions welcome.

Growth and survival of gcc options and processor support

Derek Jones from The Shape of Code

Like any actively maintained software, compilers get more complicated over time. Languages are still evolving, and options are added to control the support for features. New code optimizations are added, which don’t always work perfectly, and options are added to enable/disable them. New ways of combining object code and libraries are invented, and new options are added to allow the desired combination to be selected.

The web pages summarizing the options for gcc, for the 96 versions between 2.95.2 and 9.1, have a consistent format, which means they are easily scraped. The following plot shows the number of options relating to various components of the compiler, for these versions (code+data):

Number of options supported by various components of gcc, over 20 years.

The total number of options grew from 632 to 2,477. The number of new optimizations, or at least the options supporting them, appears to be leveling off, but the number of new warnings continues to increase (ah, the good ol’ days, when -Wall covered everything).

The last phase of a compiler is code generation, and production compilers are generally structured to enable new processors to be supported by plugging in an appropriate code generator; since version 2.95.2, gcc has supported 80 code generators.

What can be added can be removed. The plot below shows the survival curve of gcc support for processors (80 supported cpus, with support for 20 removed up to release 9.1), and non-processor specific options (there have been 1,091 such options, with 214 removed up to release 9.1); the dotted lines are 95% confidence intervals.

Survival curve of gcc options and support for specific processors, over 20 years.
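The survival analysis itself is straightforward in R; a minimal sketch using the survival package (the csv file and column names are assumptions, not the actual code+data):

library("survival")

# Assumed layout: one row per non-processor option, giving how many releases
# it has been present in and whether it has since been removed (0/1).
opts <- read.csv("gcc-options.csv")

# Kaplan-Meier estimate; options still present in 9.1 are treated as censored.
opt_surv <- survfit(Surv(releases, removed) ~ 1, data = opts)
plot(opt_surv, conf.int = TRUE,
     xlab = "Releases since option introduced",
     ylab = "Fraction of options surviving")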

Racing Up The Hierarchy – a.k.

a.k. from thus spake a.k.

In the previous post we saw how to identify subsets of a set of data that are in some sense similar to each other, known as clusters, by constructing sequences of clusterings starting with each datum in its own cluster and ending with all of the data in the same cluster, subject to the constraint that if a pair of data are in the same cluster in one clustering then they must also be in the same cluster in the next, which are known as hierarchical clusterings.
We did this by selecting the closest pairs of clusters in one clustering and merging them to create the next, using one of three different measures of the distance between a pair of clusters; the average distance between their members, the distance between their nearest members and the distance between their farthest members, known as average linkage, single linkage and complete linkage respectively.
Unfortunately our implementation came in at a rather costly O(n³) operations and so in this post we shall look at how we can improve its performance.
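For readers who want to play with the idea before diving into the implementation, base R's hclust supports the same three linkage criteria (this is purely illustrative; the posts build their own version):

# Agglomerative clustering of some made-up two dimensional data,
# using each of the three linkage criteria in turn.
x <- matrix(rnorm(20), ncol = 2)
d <- dist(x)
plot(hclust(d, method = "average"))    # average linkage
plot(hclust(d, method = "single"))     # single linkage
plot(hclust(d, method = "complete"))   # complete linkage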

Are we there yet?

Allan Kelly from Allan Kelly Associates


Those of us who don’t code any more, and perhaps many of those who do, need Electronic Monks to help us with software development.

There is an old Douglas Adams book (Dirk Gently’s Holistic Detective Agency) which features an Electric Monk. The job of an electric monk is to believe things for you. In Adams’s story people have too many things to believe, so they offload all that believing to an electric monk. In my mind I’ve always considered part of the monk’s work to include worrying. I can’t remember if Adams says this explicitly or just implies it. (And as I no longer have a copy of the book I can’t check.)

I’m currently very engaged with one client as an Agile Coach (although I sometimes wonder if “Shadow Manager” might be a better term, more of that another day). I regularly find myself staring at the board thinking about the work it shows and worrying about whether it will be done. Sometimes my mind plays “what if games” – “If that card is finished soon, then the other one could move down and…”

The same worry plays out when looking at the backlog showing work not on the board. Or when I’m talking to the Product Owner. Or indeed when I find myself talking to middle managers and others in the company who have an interest in the work the team is doing.

Basically, there is very little any of us can do to move the work through.

Sure I can call a meeting and talk about optimising our workflow. But I’ve done that a few times already.

I could call a meeting and emphasise how important the work is to the company. I’ve seen this done many times. Some non-coders – call them “the business” – seem to think “If only the coders appreciated how important this work was then they would do it faster.” I hated this when I was a coder and I hate it when I hear others saying things like “Do they know how important this is?” Business folk sometimes seem to believe coders sit around drinking tea and coffee for most of the day.

By the way, I almost wrote “and Testers” in that last paragraph but then realised testers CAN make work go faster: they can just drop their standards, turn a blind eye to issues, accept things they wouldn’t have accepted last week. Piling pressure on Testers is a more effective route to getting work done than pressuring coders. But in both cases it is probably just piling up more work.

Pressuring people to do work faster usually creates problems which come back and bite you pretty damn quick. As I’ve said before: There is no such thing as “Quick and Dirty” … “Quick and Dirty” actually means “Slow and Dirty”

I could go to the coders and ask “Are you done yet?” – or as 5-year-olds say in the back of a car on a long journey (or just any journey) “Are we there yet?” In both cases it doesn’t change the time it takes to get to the end; the answer doesn’t usually say much, but asking the question will annoy parents and coders alike.

But if I do anything – like calling a meeting or asking “are we there yet?” – which involves distracting coders from working then I am slowing work down. It is self-defeating.

Yes, there are things I can do to make work go more smoothly but once a piece of work is in flight there is little I can do. Most of the changes I can make are to do with the way work happens. Or to put it another way: I can influence the climate work happens in but I can’t control the individual weather events which are the work items.

All I can do is worry. Thus my desire to offload that worrying to an electronic monk. Maybe more people in and around software development need to recognise they are in the same position.




First language taught to undergraduates in the 1990s

Derek Jones from The Shape of Code

The average new graduate is likely to do more programming during the first month of a software engineering job than they did during a year as an undergraduate. Programming courses for undergraduates are really about filtering out those who cannot code.

Long, long ago, when I had some connection to undergraduate hiring, around 70-80% of those interviewed for a programming job could not write a simple 10-20 line program; I’m told that this is still true today. Fluency in any language (computer or human) takes practice, and the typical undergraduate gets very little practice (there is no reason why they should, there are lots of activities on offer to students and programming fluency is not needed to get a degree).

There is lots of academic discussion around which language students should learn first, and what languages they should be exposed to. I have always been baffled by the idea that there was much to be gained by spending time teaching students multiple languages, when most of them barely grasp the primary course language. When I was at school the idea behind the trendy new maths curriculum was to teach concepts, rather than rote learning (such as algebra; yes, rote learning of the rules of algebra); the concept of number-base was considered to be a worthwhile concept and us kids were taught this concept by having the class convert values back and forth, such as base-10 numbers to base-5 (base-2 was rarely used in examples). Those of us who were good at maths instantly figured it out, while everybody else was completely confused (including some teachers).

My view is that there is no major teaching/learning impact on the choice of first language; it is all about academic fashion and marketing to students. Those who have the ability to program will just pick it up, and everybody else will flounder and do their best to stay away from it.

Richard Reid was interested in knowing which languages were being used to teach introductory programming to computer science and information systems majors. Starting in 1992, he contacted universities roughly twice a year, asking about the language(s) used to teach introductory programming. The Reid list (as it became known) was regularly updated until Reid retired in 1999 (the average number of universities included in the list was over 400); one of Reid’s ex-students, Frances VanScoy, took over until 2006.

The plot below is from 1992 to 2002, and shows languages in the top 3% in any year (code+data):

Languages used to teach introductory programming that appeared in the top 3% in any year, 1992-2002.

Looking at the list again reminded me how widespread Pascal was as a teaching language. Modula-2 was the language that Niklaus Wirth designed as the successor of Pascal, and Ada was intended to be the grown up Pascal.

While there is plenty of discussion about which language to teach first, doing this teaching is a low status activity (there is more fun to be had with the material taught to the final year students). One consequence is lack of any real incentive for spending time changing the course (e.g., using a new language). The Open University continued teaching Pascal for years, because material had been printed and had to be used up.

C++ took a while to take off because of its association with C (which was very out of fashion in academia), and Java was still too new to risk exposing to impressionable first-years.

A count of the languages listed between 1992 and 2002 contains a few that might not be familiar to readers.

          Ada    Ada/Pascal          Beta          Blue             C 
         1087             1            10             3           667 
       C/Java      C/Scheme           C++    C++/Pascal        Eiffel 
            1             1           910             1            29 
      Fortran       Haskell     HyperTalk         ISETL       ISETL/C 
          133            12             2            30             1 
         Java  Java/Haskell       Miranda            ML       ML/Java 
          107             1            48            16             1 
     Modula-2      Modula-3        Oberon      Oberon-2     ObjPascal 
          727            24            26             7            22 
       Orwell        Pascal      Pascal/C        Prolog        Scheme 
           12          2269             1            12           752 
    Scheme/ML Scheme/Turing        Simula     Smalltalk           SML 
            1             1            14            33            88 
       Turing  Visual-Basic 
           71             3 

I had never heard of Orwell, a vanity language foisted on Oxford Mathematics and Computation students. It used to be common for someone in computing departments to foist their vanity language on students; it enabled them to claim the language was being used and stoked their ego. Is there some law that enables students to sue for damages?

The 1990s was still in the shadow of the 1980s fashion for functional programming (which came back into fashion a few years ago). Miranda was an attempt to commercialize a functional language compiler, with Haskell being an open source reaction.

I was surprised that Turing was so widely taught. More to do with the stature of where it came from (the University of Toronto) than anything else.

Fortran was my first language, and is still widely used where high performance floating-point is required.

ISETL is a very interesting language from the 1960s that never really attracted much attention outside of New York. I suspect that Blue is BlueJ, a Java IDE targeting novices.