Manjaro Linux and AMD RX 470, revisited

The Lone C++ Coder's Blog from The Lone C++ Coder's Blog

I’ve blogged about getting Manjaro Linux to work with my AMD RX 470 before. The method described in that post got my AMD RX 470 graphics card working with the default 4.4 kernel. This worked fine - with the usual caveats regarding VESA software rendering - until I tried to upgrade to newer versions of the kernel.

My understanding is that the 4.4 kernel series doesn’t include drivers for the relatively recent AMD RX 470 GPUs, whereas later kernel series (4.8 and 4.9 specifically) do. Unfortunately trying to boot into a 4.9 kernel resulted in the X server locking up so well that even the usual Alt-Fx didn’t get me to a console to fix the problem.

Just::Thread Pro v2.4.2 released with clang support

Anthony Williams from Just Software Solutions Blog

I am pleased to announce that Just::Thread Pro v2.4.2 has been released with support for clang on Linux.

Just::Thread Pro is our C++ concurrency extensions library which provides an Actor framework for easier concurrency, along with concurrent data structures: a thread-safe queue, a concurrent hash map, and a wrapper for ensuring synchronized access to single objects.

It also includes the new facilities from the Concurrency TS.

Clang support is finally here!

V2.4.2 adds the much-anticipated support for clang. clang 3.8 and 3.9 are supported on Ubuntu 16.04 or later, clang 3.8 is supported on Fedora 24, and clang 3.9 on Fedora 25.

Just::Thread Pro is now fully supported on the following compiler/OS combinations (32-bit and 64-bit):

  • Microsoft Visual Studio 2015 for Windows
  • Microsoft Visual Studio 2017 for Windows
  • gcc 5 for Ubuntu 14.04 or later
  • gcc 6 for Ubuntu 14.04 or later
  • clang 3.8 for Ubuntu 16.04 or later
  • clang 3.9 for Ubuntu 16.04 or later
  • gcc 5 for Fedora 22 and 23
  • gcc 6 for Fedora 24 and 25
  • clang 3.8 for Fedora 24
  • clang 3.9 for Fedora 25

Just::Thread Pro v2.2 is also supported with the Just::Thread compatibility library on the following compiler/OS combinations:

  • Microsoft Visual Studio 2005, 2008, 2010, 2012 and 2013 for Windows
  • TDM gcc 4.5.2, 4.6.1 and 4.8.1 for Windows
  • g++ 4.3 or later for Ubuntu 9.04 or later
  • g++ 4.4 or later for Fedora 13 or later
  • g++ 4.4 for Centos 6
  • MacPorts g++ 4.3 to 4.8 on MacOSX Snow Leopard or later

All licences include a free upgrade to point releases, so if you purchase now you'll get a free upgrade to all 2.x releases of Just::Thread Pro. Purchasers of the older Just::Thread library (now called the compatibility library) may upgrade to Just::Thread Pro for a small fee.


Does const mean thread-safe?

Anthony Williams from Just Software Solutions Blog

There was a discussion recently on the cpplang slack about whether const meant thread-safe.

As with everything in life, it depends. In some ways, yes it does. In others, it does not. Read on for the details.

What do we mean by "thread-safe"?

If someone says something is "thread-safe", then it is important to define what that means. Here is an incomplete list of some things people might mean when they say something is "thread safe".

  • Calling foo(x) on one thread and foo(y) on a second thread concurrently is OK.
  • Calling foo(x_i) on any number of threads concurrently is OK, provided each x_i is different.
  • Calling foo(x) on a specific number of threads concurrently is OK.
  • Calling foo(x) on any number of threads concurrently is OK.
  • Calling foo(x) on one thread and bar(x) on another thread concurrently is OK.
  • Calling foo(x) on one thread and bar(x) on any number of threads concurrently is OK.

Which one we mean in a given circumstance is important. For example, a concurrent queue might be a Single-Producer, Single-Consumer queue (SPSC), in which case it is safe for one thread to push a value while another pops a value, but if two threads try and push values then things go wrong. Alternatively, it might be a Multi-Producer, Single-Consumer queue (MPSC), which allows multiple threads to push values, but only one to pop values. Both of these are "thread safe", but the meaning is subtly different.

Before we look at what sort of thread safety we're after, let's just define what it means to be "OK".

Data races

At the basic level, when we say an operation is "OK" from a thread-safety point of view, we mean it has defined behaviour, and there are no data races, since data races lead to undefined behaviour.

From the C++ Standard's perspective, a data race occurs when two operations in different threads access the same memory location, such that neither happens-before the other, at least one of them modifies that memory location, and at least one of them is not an atomic operation.

An operation is thus "thread safe" with respect to the set of threads we wish to perform the operation concurrently if:

  • none of the threads modify a memory location accessed by any of the other threads, or
  • all accesses performed by the threads to memory locations which are modified by one or more of the threads are atomic, or
  • the threads use suitable synchronization to ensure that there are happens-before operations between all modifications, and any other accesses to the modified memory locations.
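
As a concrete illustration (this sketch is mine, not from the original post): two threads incrementing a plain int with no synchronization between them is a data race, whereas making the counter a std::atomic<int> (the second option above) removes the race; ordering the accesses with a mutex (the third option) would work equally well.

#include <atomic>
#include <thread>

int plain_counter = 0;
std::atomic<int> atomic_counter{0};

void racy(){
    // Both threads modify the same non-atomic int and neither access
    // happens-before the other: a data race, so the behaviour of the
    // whole program is undefined.
    std::thread t1([]{ ++plain_counter; });
    std::thread t2([]{ ++plain_counter; });
    t1.join(); t2.join();
}

void safe(){
    // The same structure, but every access to the shared location is
    // atomic, so there is no data race.
    std::thread t1([]{ ++atomic_counter; });
    std::thread t2([]{ ++atomic_counter; });
    t1.join(); t2.join();
}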

So: what sort of thread-safety are we looking for from const objects, and why?

Do as ints do

A good rule of thumb for choosing behaviour for a class in C++ is "do as ints do".

With regard to thread safety, ints are simple:

  • Any number of threads may read a given int concurrently
  • If any thread modifies a given int, no other threads may access that int for reading or writing concurrently.

This follows naturally from the definition of a data race, since ints cannot do anything special to provide synchronization or atomic operations.

If you have an int, and more than one thread that wants to access it, if any of those threads wants to modify it then you need external synchronization. Typically you'll use a mutex for the external synchronization, but other mechanisms can work too.
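
A minimal sketch of that external synchronization (the names here are mine, purely for illustration):

#include <mutex>
#include <thread>

int shared_value = 0;
std::mutex value_mutex; // external to the int: the int itself knows nothing about locking

void increment(){
    std::lock_guard<std::mutex> lock(value_mutex);
    ++shared_value;
}

int read_value(){
    std::lock_guard<std::mutex> lock(value_mutex); // readers must lock too, since a writer exists
    return shared_value;
}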

If your int is const (or you have a const int& that references it, or a const int* that points to it), then you can't modify it.

What does that mean for your class? In a well-designed class, the const member functions do not modify the state of the object. It is "logically" immutable when accessed exclusively through const member functions. On the other hand, if you use a non-const member function then you are potentially modifying the state of the object. So far, so good: this is what ints do with regard to reading and modifying.

To do what ints do with respect to thread safety, we need to ensure that it is OK to call any const member functions concurrently on any number of threads. For many classes this is trivially achieved: if you don't modify the internal representation of the object in any way, you're home and dry.

Consider an employee class that stores basic information about an employee, such as their name, employee ID and so forth. The natural implementation of const member functions will just read the members, perform some simple manipulation on the values, and return. Nothing is modified.

#include <string>

class employee{
    std::string first_name;
    std::string last_name;
    // other data
public:
    // const member function: reads the members, builds a new string,
    // modifies nothing.
    std::string get_full_name() const{
        return last_name + ", " + first_name;
    }
    // other member functions
};

Provided that reading from a const std::string and appending it to another string is OK, employee::get_full_name is OK to be called from any number of threads concurrently.

You only have to do something special to "do as ints do" if you modify the internal state in your const member function, e.g. to keep a tally of calls, or cache calculation values, or similar things which modify the internal state without modifying the externally-visible state. Of course, you would also need to add some synchronization if you were modifying externally-visible state in your const member function, but we've already decided that's not a good plan.
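
For example, a caching variant of the employee class might look something like this (my sketch, not from the original post): the cache is internal state modified from a const member function, so it needs its own synchronization to keep concurrent const calls race-free.

#include <mutex>
#include <string>

class employee_with_cache{
    std::string first_name;
    std::string last_name;
    mutable std::mutex cache_mutex;       // protects the two cache members below
    mutable std::string full_name_cache;
    mutable bool cache_valid=false;
public:
    std::string get_full_name() const{
        std::lock_guard<std::mutex> lock(cache_mutex);
        if(!cache_valid){
            full_name_cache=last_name+", "+first_name;
            cache_valid=true;
        }
        return full_name_cache;
    }
};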

In employee::get_full_name, we're relying on the thread-safety of std::string to get us through. Is that OK? Can we rely on that?

Thread-safety in the C++ Standard Library

The C++ Standard Library itself sticks to the "do as ints do" rule. This is spelled out in the section on Data race avoidance (res.on.data.races). Specifically,

A C++ standard library function shall not directly or indirectly modify objects accessible by threads other than the current thread unless the objects are accessed directly or indirectly via the function's non-const arguments, including this.

and

Implementations may share their own internal objects between threads if the objects are not visible to users and are protected against data races.

This means that if you have a const std::string& referencing an object, then any calls to member functions on it must not modify the object, and any shared internal state must be protected against data races. The same applies if it is passed to any other function in the C++ Standard Library.

However, if you have a std::string& referencing the same object, then you must ensure that all accesses to that object are synchronized externally, otherwise you may trigger a data race.

Our employee::get_full_name function is thus as thread-safe as an int: concurrent reads are OK, but any modifications will require external synchronization for all concurrent accesses.

There are two little words in the first paragraph quoted above which have a surprising consequence: "or indirectly".

Indirect Accesses

If you have two const std::vector<X>s, vx and vy, then calling standard library functions on those objects must not modify any objects accessible by other threads, otherwise we've violated the requirements from the "data race avoidance" section quoted above, since those objects would be "indirectly" modified by the function.

This means that any operations on the X objects within those containers that are performed by the operations we do on the vectors must also refrain from modifying any objects accessible by other threads. For example, the expression vx==vy compares each of the elements in turn. These comparisons must thus not modify any objects accessible by other threads. Likewise, std::for_each(vx.begin(),vx.end(),foo) must not modify any objects accessible by other threads.
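
To make that concrete (again, my own sketch rather than the post's): the comparison used when evaluating vx==vy must not touch anything another thread can see.

#include <vector>

struct X{
    int value=0;

    // Fine: a pure read-only comparison, so comparing two const vectors of X
    // concurrently with other reads is OK.
    friend bool operator==(X const& lhs,X const& rhs){
        return lhs.value==rhs.value;
    }
};

struct counted{
    int value=0;
    static int comparison_count; // shared state, visible to every thread

    // Not fine: vx==vy would indirectly modify comparison_count, so it breaks
    // the data race avoidance requirement unless the counter is made atomic
    // or otherwise synchronized.
    friend bool operator==(counted const& lhs,counted const& rhs){
        ++comparison_count;
        return lhs.value==rhs.value;
    }
};
int counted::comparison_count=0;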

This pretty much boils down to a requirement that if you use your class with the C++ Standard Library, then const operations on your class must be safe if called from multiple threads. There is no such requirement for non-const operations, or combinations of const and non-const operations.

You may of course decide that your class is going to allow concurrent modifications (even if that is by using a mutex to restrict accesses internally), but that is up to you. A class designed for concurrent access, such as a concurrent queue, will need to have the internal synchronization; a value class such as our employee is unlikely to need it.

Summary

Do as ints do: concurrent calls to const member functions on your class must be OK, but you are not required to ensure thread-safety if there are also concurrent calls to non-const member functions.

The C++ Standard Library sticks to this rule itself, and requires it of your code. In most cases, it's trivial to ensure; it's only classes with complex const member functions that modify internal state that may need synchronization added.


The Hidden Life of Trees

Jon Jagger from less code, more software

is an excellent book by Peter Wohlleben (isbn 1771642483). As usual I'm going to quote from a few pages.

In forked trees, at a certain point, two main shoots form, they continue to grow alongside each other. Each side of the fork creates its own crown, so in a heavy wind, both sides sway back and forth in different directions, putting a great strain on the trunk where the two parted company. ... The fork always breaks at its narrowest point, where the two sides diverge.
The process of learning stability is triggered by painful micro-tears that occur when the trees bend way over in the wind, first in one direction and then in the other. Wherever it hurts, that's where the tree must strengthen its support structure. ... The thickness and stability of the trunk, therefore, builds up as the tree responds to a series of aches and pains.
There is a honey fungus in Switzerland that covers almost 120 acres and is about a thousand years old. Another in Oregon is estimated to be 2,400 years old, extends for 2,000 acres, and weighs 660 tons. That makes fungi the largest known living organism in the world.
You find twice the amount of life-giving nitrogen and phosphorus in plants that cooperate with fungal partners than in plants that tap the soil with the roots alone.
Diversity provides security for ancient forests.
There are more life-forms in a handful of forest soil than there are people on the planet.
As foresters like to say, the forest creates its own ideal habitat.
Commercial forest monocultures also encourage the mass reproduction of butterflies and moths, such as nun moths and pine loopers. What usually happens is that viral illnesses crop up towards the end of the cycle and populations crash.
The storms pummel mature trunks with forces equivalent to a weight of approximately 220 tons. Any tree unprepared for the onslaught can't withstand the pressure and falls over. But deciduous trees are well prepared. To be more aerodynamic they cast off all their solar panels. And so a huge surface area of 1,200 square yards disappears and sinks to the forest floor. This is the equivalent of a sailboat with a 130-foot mast dropping a 100-by-130 foot mainsail.
Why do trees grow into pipes in the first place?... What was attracting them was loose soil that had not been fully compacted after construction. Here the roots found room to breathe and grow. It was only incidentally that they penetrated the seals between individual sections of pipe and eventually ran riot inside them.
Sometimes, especially in cold winters, the old wounds can act up again. Then a crack like a rifle shot echoes through the forest and the trunk splits open along the old injury. This is caused by differences in tension in the frozen wood, because the wood in trees with a history of injury varies greatly in density.


just::thread Pro adds Visual Studio 2017 support

Anthony Williams from Just Software Solutions Blog

I am pleased to announce that just::thread Pro now supports Microsoft Visual Studio 2017 on Microsoft Windows.

This adds to the support for Microsoft Visual Studio 2015, g++ 5 and g++ 6 for the just::thread Pro enhancements, which build on top of the platform-supplied version of the C++14 thread library. For older compilers, and for MacOSX, the just::thread compatibility library is still required.

The new build features all the same facilities as the previous release.

Get your copy of just::thread Pro

Purchase your copy and get started now.

As usual, all customers with V2.x licenses of just::thread Pro will get a free upgrade to the new just::thread Pro Standalone edition.


Visual Lint 6.0.1.275 has been released

Products, the Universe and Everything from Products, the Universe and Everything

Visual Lint 6.0.1.275 is now available. This is a recommended maintenance update for Visual Lint 6.0, and includes the following changes:
  • Fixed a bug which could potentially prevent Visual Studio 2013/2015 projects from being analysed with PC-lint 8.0/9.0 using compatible system headers from earlier versions of Visual Studio.
  • Fixed a bug which caused saved display filter settings to be lost when the product edition was set to one which does not support the global display filter. This ensures that the previous filter settings will be preserved if the display filter subsequently becomes available once again.
  • Fixed a minor resizing bug in the "Send Feedback" Dialog.
  • The "View Analysis Results" context menu command in the Analysis Status Display now correctly echoes the raw analysis results to the Output Window (Visual Studio) or Console (Eclipse) if the "Echo raw analysis results to the Output/Console Window" option is active.
  • The "Echo raw analysis results to the Output/Console Window" option in the "Display" Options page is now disabled in hosts which do not support it. In Visual Studio, turning on the option also no longer requires Visual Studio to be restarted to apply changes.
  • CppCheck analysis issues of the form "cppcheck: error: unrecognized command line option: "-foo"." are now correctly recognised as Fatal Errors.
  • Added the "Send Feedback" command to the main toolbar in VisualLintGui and the Visual Studio and Eclipse plug-ins.
  • When the analysis tool installation folder is changed in the "Analysis Tool" Options page or "Select Analysis Tool Installation Folder" Configuration Wizard page, the paths of the associated PC-lint indirect and help files will now be updated automatically where possible.
  • Converted the "Raw issue ID filter" control in the Global Display Filter Dialog to a combo box.
  • The initial folder paths in various file/folder dialogs are now set correctly.
  • If the Visual Studio plug-in has been installed into Visual Studio 2017, installing a newer version will now first attempt to remove the previous version of the extension prior to installing the new version.
  • The installer now copies the PC-lint Plus indirect file rb-win32-pclint10.lnt to the correct location.
  • Added additional suppression directives to some of the Riverblade authored PC-lint Plus indirect files.
  • Minor corrections to the comments within some of the Riverblade authored PC-lint Plus indirect files.
  • Added a help topic for the "Send Feedback" Dialog.
Download Visual Lint 6.0.1.275

Manual Mutation Testing

Chris Oldwood from The OldWood Thing

One of the problems when making code changes is knowing whether there is good test coverage around the area you’re going to touch. In theory, if a rigorous test-first approach is taken no production code should be written without first being backed by a failing test. Of course we all know the old adage about how theory turns out in practice [1]. Even so, just because a test has been written, you don’t know what the quality of it and any related ones are.

Mutation Testing

The practice of mutation testing is one way to answer the perennial question: how do you test the tests? How do you know if the tests which have been written adequately cover the behaviours the code should exhibit? Alternatively, as I described recently in “Overly Prescriptive Tests”, are the tests too brittle because they require too exacting a behaviour?

There are tools out there which will perform mutation testing automatically that you can include as part of your build pipeline. However I tend to use them in a more manual way to help me verify the tests around the small area of functionality I’m currently concerned with [2].

The principle is actually very simple: you tweak the production code in a small way that would likely mimic a real change, and you see what tests fail. If no tests fail at all then you probably have a gap in your spec that needs filling.

Naturally the changes you make on the production code should be sensible and functional in behaviour; there’s no point in randomly setting a reference to null if that scenario is impossible to achieve through the normal course of events. What we’re aiming for here is the simulation of an accidental breaking change by a developer. By tweaking the boundaries of any logic we can also check that our edge cases have adequate coverage too.

There is of course the possibility that this will also unearth some dead code paths too, or at least lead you to further simplify the production code to achieve the same expected behaviour.

Example

Imagine you’re working on a service and you spy some code that appears to format a DateTime value using the default formatter. You have a hunch this might be wrong but there is no obvious unit test for the formatting behaviour. It’s possible the value is observed and checked in an integration or acceptance test elsewhere but you can’t obviously [3] find one.

Naturally if you break the production code a corresponding test should break. But how badly do you break it? If you go too far all your tests might fail because you broke something fundamental, so you need to do it in varying degrees and observe what happens at each step.

If you tweak the date format, say, from the US to the UK format nothing may happen. That might be because the tests use a value like 1st January which is 01/01 in both schemes. Changing from a local time format to an ISO format may provoke something new to fail. If the test date is particularly well chosen and loosely verified this could well still be inside whatever specification was chosen.

Moving away from a purely numeric form to a more natural, wordy one should change the length and value composition even further. If we reach this point and no tests have failed it’s a good chance nothing will. We can then try an empty string, nonsense strings and even a null string reference to see if someone only cares that some arbitrary value is provided.
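
As a sketch of what that progression might look like in code (the function, names and formats here are hypothetical stand-ins for whatever the real service does):

#include <ctime>
#include <string>

std::string format_timestamp(std::time_t t){
    std::tm tm_value = *std::localtime(&t);
    char buffer[64];

    // Original behaviour: a plain numeric, US-style date.
    std::strftime(buffer, sizeof(buffer), "%m/%d/%Y", &tm_value);

    // Manual mutations to try one at a time, re-running the tests after each:
    //   "%d/%m/%Y"          - UK ordering (may go unnoticed if tests use 1st January)
    //   "%Y-%m-%dT%H:%M:%S" - local numeric form replaced by an ISO-style form
    //   "%A %d %B %Y"       - wordy form: changes both the length and the composition
    //   ""                  - empty string
    //   return "nonsense";  - arbitrary value, to see if anyone checks it at all
    return buffer;
}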

But what if after all that effort still no lights start flashing and the klaxon continues to remain silent?

What Does a Test Pass or Fail Really Mean?

In the ideal scenario, as you slowly make more and more severe changes you would eventually hope for one or maybe a couple of tests to start failing. When you inspect them it should be obvious from their name and structure what was being expected, and why. If the test name and assertion clearly specify that some arbitrary value is required then it's probably intentional. Of course it may still be undesirable for other reasons [4], but the test might express its intent well (to document and to verify).

If we only make a very small change and a lot of tests go red we’ve probably got some brittle tests that are highly dependent on some unrelated behaviour, or are duplicating behaviours already expressed (probably better) elsewhere.

If the tests stay green this does not necessarily mean we’re still operating within the expected behaviour. It’s entirely possible that the behaviour has been left completely unspecified because it was overlooked or forgotten about. It might be that not enough was known at the time and the author expected someone else to “fill in the blanks” at a later date. Or maybe the author just didn’t think a test was needed because the intent was so obvious.

Plugging the Gaps

Depending on the testing culture in the team and your own appetite for well defined executable specifications you may find mutation testing leaves you with more questions than you are willing to take on. You only have so much time and so need to find a way to plug any holes in the most effective way you can. Ideally you’ll follow the Boy Scout Rule and at least leave the codebase in a better state than you found it, even if that isn’t entirely to your own satisfaction.

The main thing I get out of using mutation testing is a better understanding of what it means to write good tests. Seeing how breaks are detected and reasoned about from the resulting evidence gives you a different perspective on how to express your intent. My tests definitely aren’t perfect but by purposefully breaking code up front you get a better feel for how to write less brittle tests than you might by using TDD alone.

With TDD you are the author of both the tests and production code and so are highly familiar with both from the start. Making a change to existing code by starting with mutation testing gives you a better orientation of where the existing tests are and how they perform before you write your own first new failing test.

Refactoring Tests

Refactoring is about changing the code without changing the behaviour. This applies to tests too, in which case mutation testing provides a technique where you start by creating failing production code that you “fix” when the test is changed and the bar goes green again. You can then commit the refactored tests before starting on the change you originally intended to make.

 

[1] “In theory there is no difference between theory and practice; in practice there is.” -- Jan L. A. van de Snepscheut.

[2] Just as with techniques like static code analysis you really need to adopt this from the beginning if you want to keep the noise level down and avoid exploring too large a rabbit hole.

[3] How you organise your tests is a subject in its own right but suffice to say that it’s usually easier to find a unit test than an acceptance test that depends on any given behaviour.

[4] The author may have misunderstood the requirement or the requirement was never clear originally, and so the behaviour was left loosely specified in the short term.