Software Process Dynamics

Rob Smallshire from Good With Computers

At the Software Architect 2015 conference in London I presented "What if? Supporting decisions with software dynamics simulations". [1] This talk introduces the idea of performing numerical simulations of software development teams and the products they build. The value of such simulations is in informing policy decisions and guiding deliberate perturbations to the software development process, such as whether and when to add or remove personnel from a project. Simulations should not be used to make hard predictions about, for example, when a particular project will be finished.

[1] Slides

In vivo, in vitro, in silico

Frances Buontempo from BuontempoConsulting


Some people get unit testing and some people don't. The reasons vary, usually based on a mixture of previous experience, lack of experience, fear of the unknown, or joy at a safer, quicker way of developing. One specific doubt crops up from time to time. It comes in the form of "If I test small bits, i.e. units, whatever *that* means, it proves nothing. I need to test the whole thing, or small parts of the whole thing, live."

My PhD was in toxicity prediction, which involves testing whether something will be toxic. You can test a chemical "in vivo" - administer it to several animals in varying doses. You sit back and wait until half of them die or show toxicity symptoms, and record the doses. This gives you the Lx50 - for example the LD50 is the lethal dose that kills 50% of the animals. Notice I said you *can* do this. You can also test the chemical on a set of cells in a test tube or petri dish - "in vitro" (in glass). Again you can find the dose which affects 50% of the specimens. I personally find this less upsetting, but I want to focus on parallels with testing code here. Finally, given all the data the previous tests have generated, you can analyse it, probably on a computer, perhaps finding chemical structure to activity relationships - SARs, or quantitative SARs, i.e. QSARs. These are referred to as "in silico" - for obvious reasons. Some in silico experiments will just find clusters of similar chemicals, which can either alert you to groups that might need more detailed toxicity testing, or even guide drug discovery by steering clear of molecules containing, say, benzene rings, which can be carcinogenic - saving time and money if you are trying to invent a drug that cures cancer. The value of testing on a computer, outside a live organism, should be clear. It can save time, money and even lives.


If we keep this in mind while considering testing a software system, rather than a biological system, we should be able to see some parallels. It is possible to test a live system - maybe on beta rather than "TIP" (test in production). This can be a good thing. However, it might save time and money, and though maybe not lives, certainly headaches, to test parts of the live system in a sandboxed environment, analogous to in vitro. Running an end to end test against a test database instance with data in a specific state might count. Pushing the analogy further, you could even test small parts of the system, say units, whatever they are, in silico: just try this small part away from the live system, on a computer. This is worthwhile. It will be quicker, just as in silico toxicity experiments are quicker - they tend to take hours rather than days. This is a good thing. Of course, you won't know exactly what will happen in a full live system, but you can catch problems earlier, before killing something. This is a Good Thing.
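To make the parallel concrete, here is what an "in silico" test of one small unit might look like. This is a minimal sketch, assuming a C++ codebase using the Catch test framework (which features later in this digest); dose_response is a hypothetical function invented purely for illustration:

#define CATCH_CONFIG_MAIN
#include "catch.hpp"

// A small, pure unit: no database, no network, no live organism.
// dose_response is a made-up function standing in for real code under test.
double dose_response(double dose) { return dose / (dose + 1.0); }

TEST_CASE( "dose_response stays within [0, 1)", "[unit]" )
{
    REQUIRE( dose_response(0.0) == 0.0 );
    REQUIRE( dose_response(100.0) < 1.0 );
}

No live system is involved, it runs in milliseconds, and a failure points straight at the unit concerned.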

Other industries also test things in units - I could put together a car or a computer, hit the on switch and see if it works. However, I am given to believe that the components are tested thoroughly *before* the full system is built. If I build a PC and it doesn't work, I will then have to go through one part at a time and check. If someone tests the parts first, this will ensure I haven't put a dodgy power block in the whole thing. Testing small parts, preferably before testing the whole system, is a Good Thing.

I don't believe this short observation will change anyone's mind. But I hope it will give pause for thought to those who think only testing from end to end matters, and that testing "in silico" is a waste of time.

Event-Sourced Domain Models in Python at PyCon UK

Rob Smallshire from Good With Computers

At PyCon UK 2015 I led a very well attended workshop with the goal of introducing Python developers to the tried-and-tested techniques and patterns of Domain Driven Design (DDD), in particular when used as part of an event-sourced architecture.

The two-and-a-half hour workshop comprised excerpts from our training course DDD Patterns in Python. Although the workshop material was heavily edited and compressed from the course, I'm confident that the majority of attendees grasped the main principles.

Several attendees have since asked for the introductory slides, which preceded the exercises. Here they are:

Sixty North training materials are for individual use. For training in a commercial setting please contact us to book a training course or obtain a license for the materials.

Read Maven Surefire Test Result files using Perl

Tim Pizey from Tim Pizey

When you want something quick and dirty it doesn't get dirtier, or quicker, than Perl.

We have four thousand tests and they are taking way too long. To discover why, we need to sort the tests by how long they take to run and see if a pattern emerges. The test runtimes are written to the target/surefire-reports directory. Each file is named for the class of the test and contains information in the following format:


-------------------------------------------------------------------------------
Test set: com.mycorp.MyTest
-------------------------------------------------------------------------------
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec


#! /usr/bin/perl -wall

my %tests;
# Pull the summary line from every surefire report; each line of input
# looks like:
#   target/surefire-reports/com.mycorp.MyTest.txt:Tests run: 3, ... Time elapsed: 1.03 sec
open(RESULTS, "grep 'Tests run' target/surefire-reports/*.txt|");
while (<RESULTS>) {
  s/\.txt//;                      # strip the file extension
  s/target\/surefire-reports\///; # strip the directory prefix
  s/Tests run:.+Time elapsed://;  # keep only the elapsed time
  s/ sec//;
  s/,//;
  /^(.+):(.+)$/;                  # split into class name and seconds
  $tests{$1} = $2;
}
close(RESULTS);

my $cumulative = 0.0;
print("cumulative\ttime\tcumulative_secs\ttime_secs\ttest");
# Walk the tests in ascending order of runtime, accumulating as we go
foreach my $key (sort {$tests{$a} <=> $tests{$b}} keys %tests) {
  $cumulative += $tests{$key};
  printf("%2d:%02d\t%2d:%02d\t%5d\t%5d\t%s\n",
    ($cumulative/60)%60, $cumulative%60,
    ($tests{$key}/60)%60, $tests{$key}%60,
    $cumulative,
    $tests{$key},
    $key);
}
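
Run from the project root, this prints something like the following (the class names and timings here are invented for illustration; the column format follows from the printf above):

cumulative	time	cumulative_secs	time_secs	test
 0:01	 0:01	    1	    1	com.mycorp.FastTest
 0:31	 0:30	   31	   30	com.mycorp.SlowTest
 2:41	 2:10	  161	  130	com.mycorp.SlowestTest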


The resultant tab-separated values can be viewed using a Google chart:

A slight enhancement on Developing tvOS Apps with Swift

Pete Barber from C#, C++, Windows & other ramblings

Apple announced tvOS yesterday. Downloading the Xcode 7.1 Beta gets you the SDK and simulator for tvOS apps. The official documentation starts to run through how to create a basic app but it doesn't mention where to place and load the JS from, and the same goes for the TVML.

Fortunately, and very quickly, Jameson Quave put together a tutorial.

I followed the Apple docs but checked Jameson's tutorial to verify the missing declaration of

var appController: TVApplicationController?

from AppDelegate and also for the JS and then TVML loading. I don't understand, and the docs don't seem to say, where the JS & TVML should be loaded from. They seem to suggest it should be remote, i.e. not part of the App Bundle, but I don't know why. Anyhow, I thought I'd see if I could.

The following assumes you've got to the end of Jameson's tutorial.

Loading the JS file (which then loads the TVML) is easy. Add main.js to your application and change the lines within application:didFinishLaunchingWithOptions in AppDelegate.swift from:

let jsFilePath = NSURL(string: "http://localhost:8000/main.js")
let javascriptURL = jsFilePath!
appControllerContext.javaScriptApplicationURL = javascriptURL

To

guard let jsUrl = NSBundle.mainBundle().URLForResource("main", withExtension: "js") else
{
    return false
}
// Point the app controller context at the bundled copy of main.js instead
appControllerContext.javaScriptApplicationURL = jsUrl

This just loads the JavaScript file (main.js) from the bundle instead. It's not a great improvement but it removes one dependency on the local web server.

I then tried to add hello.tvml to the bundle and modify main.js to create the Document from the fetched contents (via XMLHttpRequest). Unfortunately I couldn't create the Document in the JS. It seems that the normally available document object (I've not done JS in a long time, so what do I know) isn't available here, and/or new documents and elements can't be created.

An attempt to create one, i.e.

var otherDoc = Document()

gives

2015-09-10 21:53:50.213 tv1[55699:1483712] ITML <Error>: Document is not a function. (In 'Document()', 'Document' is an instance of IKDOMDocumentConstructor) - file:///Users/pete/Library/Developer/CoreSimulator/Devices/C2E7E5BD-1823-48BF-89E9-D3A499EE778A/data/Containers/Bundle/Application/F9C514E1-1A95-46A8-83D1-1BC96BC9A220/tv1.app/main.js - line:18:25

The objects mentioned in the TVJS documentation don't seem to be able to create one either.

Anyway, hopefully another small step. Full source on github.

I may just be being dumb here, and another look at the docs & samples suggests that writing apps via JS is just one way, and that a more iOS-like app can be written. Perhaps this is similar to Windows Metro, which had both a JS and .NET (C#) version of WinRT, and C++ for completeness.

Tomcat7 User Config

Tim Pizey from Tim Pizey

Wouldn't it be nice if Tomcat came with the following, commented out, in /etc/tomcat7/tomcat-users.xml?

<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
  <role rolename="manager-gui" />
  <role rolename="manager-status" />
  <role rolename="manager-script" />
  <role rolename="manager-jmx" />

  <role rolename="admin-gui" />
  <role rolename="admin-script" />

  <user
    username="admin"
    password="admin"
    roles="manager-gui, manager-status, manager-script, manager-jmx, admin-gui, admin-script"/>

</tomcat-users>

Language lawyers – or why words can have precise meaning

Frances Buontempo from BuontempoConsulting

I was called a language lawyer the other day, because I attempted to be precise about the state of play with some code. Initially I was taken aback, but eventually concluded that the phrase "language lawyer" was not being used precisely. It was used in the sense of "saying exactly what you mean". If I had clarified this, the self-reference might have sent me down a rabbit hole, so I left it.

The situation came about because a co-worker is changing some code in a repo which has a few unit tests but, due to circumstances I won't bore you with, the code is in two repos - one has the tests and the other doesn't. I have been tasked with getting tests round any code changes he makes. I am therefore working in the repo with the tests. He, of course, has decided to work in the repo without tests, so doesn't know if his code changes break any existing tests.

/head-desk

It's like pair-programming but we have to talk in words rather than code.

I cannot manage to guess what his code changes might do to the tests. This would be so much easier if he ran the tests as he changed the code. In fact, by definition, refactoring should involve running the tests as you go. Trying to ask questions like "Have you deleted the isValid function or changed its behaviour?", in order to try to get the tests to match his changes, has resulted in answers like "No, well a bit, but I haven't decided yet."

My attempts to print off the test names, so we could discuss how the code actually behaved before the changes, have been met with, "I haven't looked at the tests yet - I'd need to look at the code to see what they test." I think the tests have really clear names - like FooWithDefaultDateIsNotValid. He could look at the test code, but I was rather hoping this was clear enough. I tried asking what new test *names* we might need, but got nowhere. He did suggest I check the private container didn't contain any default dates - and offered to add a getter so I could verify this from outside the object in test code. I muttered something about encapsulation and seppuku.

I'm not sure if this is happening because people are used to function names making no sense and figuring out one line at a time in a debugger, or if some people genuinely don't think in words. It's very difficult to communicate if people assume you aren't saying what you mean, realise you are, and then call you out for trying to be clear.

A Game of Tag

Phil Nash from level of indirection

One of the tent-pole features of Catch is the ability to write test names as free-form strings. When you run a Catch executable from the command line you can specify a test case by name, to run just that one:

./MyTestExe "a very nice test case"

or you can use wildcards to run a group of test cases (or just one with less typing):

./MyTestExe "*very nice*"

If you want to use wildcards but you're not sure what they'll match you can combine this with the listing option, -l, to see which test cases match the pattern:

./MyTestExe "*very nice*" -l
Matching test cases:
  a very nice test case
  a not very nice test case
2 matching test cases

This is already quite a powerful way to group test cases into ad-hoc "suites". However, we don't want to twist our test names into artificial schemes for this purpose (although, early on, that's exactly what I proposed). Instead Catch allows you to add "tags" to test cases.

TEST_CASE( "a very nice test case", "[nice][good]" ) { /* ... */ }
TEST_CASE( "a not very nice test case", "[nice][bad]" ) { /* ... */ }

Now we can run all tests with a certain tag:

./MyTestExe [good]

or combination of tags:

./MyTestExe [nice][good]

also with exclusions:

./MyTestExe [nice]~[bad]

unions are supported with ,:

./MyTestExe [nice],[pleasant]

Very powerful! And this functionality has been around for a while.

More recent, and less well known (mostly because they weren't documented until recently), is a set of "special tags": Instruction Tags, Hiding Tags, Tag Aliases and some automatically generated tags.

Let's see what they're all about.

Instruction Tags

In general all tags that start with a symbol are reserved by Catch (or, put another way, user-defined tag names must start with an alphanumeric character). This allows a nice rich range of namespaces for special tags. Tags that start with the ! character are Instruction Tags. They tell Catch something about the test case they apply to. At the time of writing the following are defined:

  • [!hide] This "hides" the test from the default run (i.e. if you run the test executable without specifying any names or tags). This feature was originally introduced with the [hide] tag (note: no !) - and is still supported, though deprecated. There is also a shortcut form, [.], which we'll revisit in a moment.
  • [!throws] This tells Catch that an exception may be thrown in the course of executing the test - even if it is caught and dealt with. If you've ever tried to track down a rogue exception in your debugger - and so have set the debugger to break on exceptions as they're thrown - you'll know how frustrating all the false positives coming from such tests are! So Catch provides a way to suppress exceptions it is expecting - through the -e or --nothrow options on the command line. This already skips over REQUIRE_THROWS... or CHECK_THROWS... assertions. The [!throws] tag covers you for cases where the exception is caught and handled in the code under test (or your test code).
  • [!shouldfail] This tells Catch that you're expecting this test to fail! Furthermore, if it does fail then it should treat that as a pass!
  • [!mayfail] Rather than explicitly inverting the pass/fail logic as the previous tag does, this tag just says that the test may fail but that's ok (although it is still reported). It's also ok if it passes. (See the sketch just after this list for two of these tags in use.)
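
For example, instruction tags sit alongside ordinary tags in the tag string. A quick sketch (the test names and tag pairings here are my own, not from the Catch docs):

TEST_CASE( "parser survives malformed input", "[parser][!throws]" ) { /* ... */ }
TEST_CASE( "flaky network round-trip", "[network][!mayfail]" ) { /* ... */ }

Per the descriptions above, running with -e (or --nothrow) skips the first, and a failure in the second is reported without failing the run.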

Hiding Tags

We already looked at [!hide] (and the deprecated [hide]) above, and mentioned that [.] was a shortcut for the same.

It turns out that when one of these tags is used it is often combined with another tag that is used when you do want to run the test. The classic example is where you write integration tests in the same executable as unit tests. By default you don't want the integration tests to run, as you want the shortest possible path to running just the unit tests. So you hide them but also tag them [integration], or something similar (the word "integration" has no significance to Catch). So pairings like [.][integration] or [.][performance] are frequently found together.

So, as a convenience, Catch now supports . as a tag prefix. The rest of the tag can be completely custom and works exactly like any other normal tag - except that the test is also hidden. Our examples would, thus, be written as [.integration] and [.performance].
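
In test code that might look like this (a sketch of my own, not from the docs):

TEST_CASE( "round-trips an order via the real database", "[.integration]" ) { /* ... */ }

A default run skips it, but ./MyTestExe [integration] picks it up, since the rest of the tag behaves like a normal tag.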

One final point to mention about hiding tags is that, due to the way they have evolved through a number of forms (including the severely deprecated "./" name prefix), whichever form is used will not only hide the test - any of the other forms will also match it in a tag pattern. E.g. if you tag a test with [.] you can match it with [!hide].

Tag Aliases

As we saw earlier, tags can be combined in fairly complex ways. While this is powerful and flexible, it can be a bit awkward if you often want to use the same tag expression. Wouldn't it be nice if there was a way of writing the expression once then getting Catch to remember it for you - and associate it with an easier to remember name?

Well there is! You can associate any tag pattern with a name that you can use just like any normal tag - except that it must begin with the @ character.

You create a tag alias, in code, using the CATCH_REGISTER_TAG_ALIAS macro. E.g.

CATCH_REGISTER_TAG_ALIAS( "[@not nice]", "~[nice]~[!hide]" );

This registers a tag alias, [@not nice], which, when expanded, will match all tests that are not tagged [nice] but also are not hidden. The second part is important because, if you have any hidden tests, they will usually be included any time you use a not expression (~) - the rule is that tests are only hidden if no pattern is specified!

Also did you notice that we had a space in the tag name? Surprised? I never said that tags could not include spaces. Of course they can.
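
Once registered, the alias is used on the command line just like any other tag - quoted, because of that space:

./MyTestExe "[@not nice]"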

You can register as many aliases as you like and you can put them anywhere you like (as long as catch.hpp is #included). However I recommend keeping them all in your main source file (the one where you #define CATCH_CONFIG_MAIN, or equivalent), simply so you only have to look in one place for them.

Filenames As Tags

The newest special tag form is the result of automatically generating a set of tags. The tags all begin with the # character (I've resisted the urge to call them "hash tags"). The rest of the tag is generated from the name of the source file that the test is implemented in. The full path (as reported by __FILE__) is stripped of its directories and extension, so all tests in /Development/Tests/SquirrelTests.cpp would be tagged [#SquirrelTests].

At the time of writing this feature is only available on the develop branch on GitHub - and must be specifically enabled by running with the --filenames-as-tags or -# command line options. It's possible that situation may change by the time it makes it onto master.
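
Assuming that branch and that flag, running just the squirrel tests would look something like:

./MyTestExe -# [#SquirrelTests]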

The Tag Line

So tags not only provide a rich grouping mechanism in Catch - they also allow you to control some aspects of how Catch runs and treats test cases. Some tags can be generated for you - and some tags can be expanded from simpler forms. We've covered here the complete set of special tags at the time of writing. If you're reading this in the future there may be more - I'll try and be better at keeping the docs up-to-date there. Also any stock price tips you might have from the future would be welcome too.