just::thread Pro adds Visual Studio 2017 support

Anthony Williams from Just Software Solutions Blog

I am pleased to announce that just::thread Pro now supports Microsoft Visual Studio 2017 on Microsoft Windows.

This adds to the support for Microsoft Visual Studio 2015, g++ 5 and g++ 6 for the just::thread Pro enhancements, which build on top of the platform-supplied version of the C++14 thread library. For older compilers, and for MacOSX, the just::thread compatibility library is still required.

The new build features all the same facilities as the previous release.

Get your copy of just::thread Pro

Purchase your copy and get started now.

As usual, all customers with V2.x licenses of just::thread Pro will get a free upgrade to the new just::thread Pro Standalone edition.


Visual Lint 6.0.1.275 has been released

Products, the Universe and Everything from Products, the Universe and Everything

Visual Lint 6.0.1.275 is now available. This is a recommended maintenance update for Visual Lint 6.0, and includes the following changes:
  • Fixed a bug which could potentially prevent Visual Studio 2013/2015 projects from being analysed with PC-lint 8.0/9.0 using compatible system headers from earlier versions of Visual Studio.
  • Fixed a bug which caused saved display filter settings to be lost when the product edition was set to one which does not support the global display filter. This ensures that the previous filter settings will be preserved if the display filter subsequently becomes available once again.
  • Fixed a minor resizing bug in the "Send Feedback" Dialog.
  • The "View Analysis Results" context menu command in the Analysis Status Display now correctly echoes the raw analysis results to the Output Window (Visual Studio) or Console (Eclipse) if the "Echo raw analysis results to the Output/Console Window" option is active.
  • The "Echo raw analysis results to the Output/Console Window" option in the "Display" Options page is now disabled in hosts which do not support it. In Visual Studio, turning on the option also no longer requires Visual Studio to be restarted to apply changes.
  • CppCheck analysis issues of the form "cppcheck: error: unrecognized command line option: "-foo"." are now correctly recognised as Fatal Errors.
  • Added the "Send Feedback" command to the main toolbar in VisualLintGui and the Visual Studio and Eclipse plug-ins.
  • When the analysis tool installation folder is changed in the "Analysis Tool" Options page or "Select Analysis Tool Installation Folder" Configuration Wizard page, the paths of the associated PC-lint indirect and help files will now be updated automatically where possible.
  • Converted the "Raw issue ID filter" control in the Global Display Filter Dialog to a combo box.
  • The initial folder paths in various file/folder dialogs are now set correctly.
  • If the Visual Studio plug-in has been installed into Visual Studio 2017, the installer will now attempt to remove the previous version of the extension before installing the new one.
  • The installer now copies the PC-lint Plus indirect file rb-win32-pclint10.lnt to the correct location.
  • Added additional suppression directives to some of the Riverblade authored PC-lint Plus indirect files.
  • Minor corrections to the comments within some of the Riverblade authored PC-lint Plus indirect files.
  • Added a help topic for the "Send Feedback" Dialog.
Download Visual Lint 6.0.1.275

Manual Mutation Testing

Chris Oldwood from The OldWood Thing

One of the problems when making code changes is knowing whether there is good test coverage around the area you’re going to touch. In theory, if a rigorous test-first approach is taken, no production code should be written without first being backed by a failing test. Of course we all know the old adage about how theory turns out in practice [1]. Even so, just because a test has been written you don’t know how good it, or any related tests, actually are.

Mutation Testing

The practice of mutation testing is one way to answer the perennial question: how do you test the tests? How do you know if the tests which have been written adequately cover the behaviours the code should exhibit? Alternatively, as I described recently in “Overly Prescriptive Tests”, are the tests too brittle because they require too exacting a behaviour?

There are tools out there which will perform mutation testing automatically and which you can include as part of your build pipeline. However, I tend to use them in a more manual way to help me verify the tests around the small area of functionality I’m currently concerned with [2].

The principle is actually very simple: you tweak the production code in a small way that would likely mimic a real change and see which tests fail. If no tests fail at all then you probably have a gap in your spec that needs filling.

Naturally the changes you make to the production code should be sensible and functional in behaviour; there’s no point in randomly setting a reference to null if that scenario is impossible to achieve through the normal course of events. What we’re aiming for here is the simulation of an accidental breaking change by a developer. By tweaking the boundaries of any logic we can also check that our edge cases have adequate coverage.
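
As a concrete sketch of such a boundary tweak (in Java, with the class, method and rule invented purely for illustration rather than taken from any real codebase), it might look like this:

    // Hypothetical production code used to illustrate a boundary mutation.
    class DiscountPolicy {
        boolean qualifiesForDiscount(int itemCount) {
            // Original rule: ten or more items earn a discount.
            return itemCount >= 10;

            // Mutation: tighten the boundary to "more than ten". If no test
            // fails after swapping this in for the line above, the edge case
            // at exactly ten items has no coverage.
            // return itemCount > 10;
        }
    }

If exchanging one comparison for the other leaves the bar green, the behaviour at the boundary is effectively unspecified.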

There is of course the possibility that this will also unearth some dead code paths, or at least lead you to further simplify the production code to achieve the same expected behaviour.

Example

Imagine you’re working on a service and you spy some code that appears to format a DateTime value using the default formatter. You have a hunch this might be wrong but there is no obvious unit test for the formatting behaviour. It’s possible the value is observed and checked in an integration or acceptance test elsewhere, but you can’t easily find one [3].

Naturally, if you break the production code, a corresponding test should break. But how badly do you break it? If you go too far, all your tests might fail because you broke something fundamental, so you need to do it in varying degrees and observe what happens at each step.

If you tweak the date format, say, from the US to the UK format nothing may happen. That might be because the tests use a value like 1st January which is 01/01 in both schemes. Changing from a local time format to an ISO format may provoke something new to fail. If the test date is particularly well chosen and loosely verified this could well still be inside whatever specification was chosen.

Moving away from a purely numeric form to a more natural, wordy one should change the length and value composition even further. If we reach this point and no tests have failed there’s a good chance nothing will. We can then try an empty string, nonsense strings and even a null string reference to see if someone only cares that some arbitrary value is provided.
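
Although the example above talks in terms of the .NET DateTime type, here is a rough sketch in Java of how the same progression of mutations might be staged; the class and method names are invented for illustration, and each mutation is swapped in for the original line one at a time:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    class ReportFormatter {
        // Original production code: US-style month/day/year formatting.
        String formatReportDate(LocalDate date) {
            return date.format(DateTimeFormatter.ofPattern("MM/dd/yyyy"));

            // Mutation 1: UK day/month order; invisible if the test data is 1st January.
            // return date.format(DateTimeFormatter.ofPattern("dd/MM/yyyy"));

            // Mutation 2: ISO format; different separators and field order.
            // return date.format(DateTimeFormatter.ISO_LOCAL_DATE);

            // Mutation 3: a wordy form; changes the length and composition of the value.
            // return date.format(DateTimeFormatter.ofPattern("d MMMM yyyy"));

            // Mutation 4: nonsense, empty, or null; does anyone care about the value at all?
            // return "gibberish";
            // return "";
            // return null;
        }
    }

After each mutation the tests are run, the result noted, and the change reverted before moving on to the next, more severe one.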

But what if after all that effort still no lights start flashing and the klaxon continues to remain silent?

What Does a Test Pass or Fail Really Mean?

In the ideal scenario, as you slowly make more and more severe changes, you would eventually hope for one or maybe a couple of tests to start failing. When you inspect them it should be obvious from their name and structure what was being expected, and why. If the test name and assertion clearly specify that some arbitrary value is required then it’s probably intentional. Of course it may still be undesirable for other reasons [4] but the test might express its intent well (to document and to verify).

If we only make a very small change and a lot of tests go red, we’ve probably got some brittle tests that are highly dependent on some unrelated behaviour, or that duplicate behaviours already expressed (probably better) elsewhere.

If the tests stay green this does not necessarily mean we’re still operating within the expected behaviour. It’s entirely possible that the behaviour has been left completely unspecified because it was overlooked or forgotten about. It might be that not enough was known at the time and the author expected someone else to “fill in the blanks” at a later date. Or maybe the author just didn’t think a test was needed because the intent was so obvious.

Plugging the Gaps

Depending on the testing culture in the team and your own appetite for well defined executable specifications you may find mutation testing leaves you with more questions than you are willing to take on. You only have so much time and so need to find a way to plug any holes in the most effective way you can. Ideally you’ll follow the Boy Scout Rule and at least leave the codebase in a better state than you found it, even if that isn’t entirely to your own satisfaction.

The main thing I get out of using mutation testing is a better understanding of what it means to write good tests. Seeing how breaks are detected and reasoned about from the resulting evidence gives you a different perspective on how to express your intent. My tests definitely aren’t perfect but by purposefully breaking code up front you get a better feel for how to write less brittle tests than you might by using TDD alone.

With TDD you are the author of both the tests and production code and so are highly familiar with both from the start. Making a change to existing code by starting with mutation testing gives you a better orientation of where the existing tests are and how they perform before you write your own first new failing test.

Refactoring Tests

Refactoring is about changing the code without changing the behaviour. This applies to tests too, and mutation testing can provide a supporting technique: start by breaking the production code so that the relevant test fails, then once the test has been changed “fix” the production code and check that the bar goes green again. You can then commit the refactored tests before starting on the change you originally intended to make.
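
Continuing the hypothetical DiscountPolicy sketch from earlier (JUnit 5 assumed, names invented), that workflow might look like this:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class DiscountPolicyTest {
        // Step 1: mutate the production code (e.g. ">= 10" becomes "> 10")
        //         and confirm this test goes red.
        // Step 2: refactor the test, here by extracting a named helper, and
        //         check that it is still red, i.e. it still detects the mutation.
        // Step 3: revert ("fix") the production code; the bar goes green and the
        //         refactored test can be committed before the real change begins.
        @Test
        void tenItemsQualifiesForDiscount() {
            assertQualifiesForDiscount(new DiscountPolicy(), 10);
        }

        private void assertQualifiesForDiscount(DiscountPolicy policy, int itemCount) {
            assertTrue(policy.qualifiesForDiscount(itemCount));
        }
    }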

 

[1] “In theory there is no difference between theory and practice; in practice there is.” – Jan L. A. van de Snepscheut.

[2] Just as with techniques like static code analysis you really need to adopt this from the beginning if you want to keep the noise level down and avoid exploring too large a rabbit hole.

[3] How you organise your tests is a subject in its own right but suffice to say that it’s usually easier to find a unit test than an acceptance test that depends on any given behaviour.

[4] The author may have misunderstood the requirement, or the requirement was never clear originally and so the behaviour was left loosely specified in the short term.