Read Maven Surefire Test Result files using Perl

Tim Pizey from Tim Pizey

When you want something quick and dirty it doesn't get dirtier, or quicker, than Perl.

We have four thousand tests and they are taking way too long. To discover why, we need to sort the tests by how long they take to run and see if a pattern emerges. The test runtimes are written to the target/surefire-reports directory. Each file is named after the class of the test and contains information in the following format:


-------------------------------------------------------------------------------
Test set: com.mycorp.MyTest
-------------------------------------------------------------------------------
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec


#! /usr/bin/perl -w

# Each line matched by the grep looks like:
# target/surefire-reports/com.mycorp.MyTest.txt:Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec

my %tests;
open(RESULTS, "grep 'Tests run' target/surefire-reports/*.txt|") or die $!;
while (<RESULTS>) {
  chomp;
  s/\.txt//;
  s/target\/surefire-reports\///;
  s/Tests run:.+Time elapsed://;
  s/ sec//;
  s/,//;
  /^(.+):(.+)$/;
  $tests{$1} = $2;    # test class => elapsed seconds
}
close(RESULTS);
close(RESULTS);

my $cumulative = 0.0;
print("cumulative\ttime\tcumulative_secs\ttime_secs\ttest\n");
foreach my $key (sort {$tests{$a} <=> $tests{$b}} keys %tests) {
  $cumulative += $tests{$key};
  printf("%2d:%02d\t%2d:%02d\t%5d\t%5d\t%s\n",
         ($cumulative/60)%60, $cumulative%60,
         ($tests{$key}/60)%60, $tests{$key}%60,
         $cumulative,
         $tests{$key},
         $key);
}
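For the single example report above, the output looks something like this (the columns are tab-separated; spaces are used here for alignment):

cumulative   time    cumulative_secs   time_secs   test
 0:01         0:01                 1           1   com.mycorp.MyTest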


This tab-separated output can then be viewed using a Google chart.

A slight enhancement on Developing tvOS Apps with Swift

Pete Barber from C#, C++, Windows & other ramblings

Apple announced tvOS yesterday. Downloading the Xcode 7.1 beta gets you the SDK and simulator for tvOS apps. The official documentation starts to run through how to create a basic app, but it doesn't mention where to place the JS and load it from, nor the same for the TVML.

Fortunately, and very quickly, Jameson Quave put together a tutorial.

I followed the Apple docs but checked Jameson's tutorial to verify the missing declaration of

var appController: TVApplicationController?

from AppDelegate, and also for the JS and then TVML loading. I don't understand, and the docs don't seem to say, where the JS & TVML should be loaded from. They seem to suggest it should be remote, i.e. not part of the App Bundle, but I don't know why. Anyhow, I thought I'd see if I could load them from the bundle.

The following assumes you've got to the end of Jameson's tutorial.

Loading the JS file (which then loads the TVML) is easy. Add main.js to your application and change the lines within application:didFinishLaunchingWithOptions in AppDelegate.swift from:

let jsFilePath = NSURL(string: "http://localhost:8000/main.js")
let javascriptURL = jsFilePath!
appControllerContext.javaScriptApplicationURL = javascriptURL

to:

guard let jsUrl = NSBundle.mainBundle().URLForResource("main", withExtension: "js") else
{
    return false
}
// then, as before, point the app controller context at the (now local) URL
appControllerContext.javaScriptApplicationURL = jsUrl

This just loads the Javascript file (main.js) from the bundle instead. It's not a great improvement but it removes one dependency on the local web server.

I then tried to add hello.tvml to the bundle and modify main.js to create the document from it rather than have it fetched (via XMLHttpRequest). Unfortunately I couldn't create the Document in the JS. It seems that the normally available document object (I've not done JS in a long time so what do I know) isn't available, so additional documents and/or elements can't be created.

An attempt to create one, i.e.

var otherDoc = Document()

gives

2015-09-10 21:53:50.213 tv1[55699:1483712] ITML <Error>: Document is not a function. (In 'Document()', 'Document' is an instance of IKDOMDocumentConstructor) - file:///Users/pete/Library/Developer/CoreSimulator/Devices/C2E7E5BD-1823-48BF-89E9-D3A499EE778A/data/Containers/Bundle/Application/F9C514E1-1A95-46A8-83D1-1BC96BC9A220/tv1.app/main.js - line:18:25

The objects mentioned in the TVJS documentation don't seem to be able to create one either.

Anyway, hopefully another small step. Full source on github.

I could well be being dumb here, and another look at the docs & samples suggests that writing apps via JS is just one way, and that a more iOS-like app can be written. Perhaps this is similar to Windows Metro, which had both a JS and .Net (C#) version of WinRT, with C++ for completeness.

Tomcat7 User Config

Tim Pizey from Tim Pizey

Wouldn't it be nice if Tomcat came with the following, commented out, in /etc/tomcat7/tomcat-users.xml?

<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
<role rolename="manager-gui" />
<role rolename="manager-status" />
<role rolename="manager-script" />
<role rolename="manager-jmx" />

<role rolename="admin-gui" />
<role rolename="admin-script" />

<user
username="admin"
password="admin"
roles="manager-gui, manager-status, manager-script, manager-jmx, admin-gui, admin-script"/>

</tomcat-users>

Language lawyers – or why words can have precise meaning

Frances Buontempo from BuontempoConsulting

I was called a language lawyer the other day, because I attempted to be precise about the state of play with some code. Initially I was taken aback, but eventually concluded that the phrase "language lawyer" was not being used precisely. It was used in the sense of "saying exactly what you mean". If I had clarified this, the self-reference might have sent me down a rabbit hole, so I left it.

The situation came about because a co-worker is changing some code in a repo which has a few unit tests, but due to circumstances I won't bore you with, the code is in two repos - one has the tests and the other doesn't. I have been tasked with getting tests round any code changes he makes. I am therefore working in the repo with the tests. He, of course, has decided to work in the repo without the tests, so doesn't know if his code changes break any existing tests.

/head-desk

It's like pair-programming but we have to talk in words rather than code.

I cannot manage to guess what his code changes might do to the tests. This would be so much easier if he ran the tests as he changed the code. In fact, by definition, refactoring should involve running the tests as you go. Trying to ask questions like "Have you deleted the isValid function or changed its behaviour?" in order to get the tests to match his changes has resulted in answers like "No, well a bit, but I haven't decided yet."

My attempts to print off the test names so we could discuss how the code actually behaved before the changes have been met with, "I haven't looked at the tests yet - I'd need to look at the code to see what they test." I think the tests have really clear names - like FooWithDefaultDateIsNotValid. He could look at the test code, but I was rather hoping the names were clear enough. I tried asking what new test *names* we might need, but got nowhere. He did suggest I check the private container didn't contain any default dates - and offered to add a getter so I could verify this from outside the object in test code. I muttered something about encapsulation and seppuku.

I'm not sure if this is happening because people are used to function names making no sense and figuring things out one line at a time in a debugger, or if some people genuinely don't think in words. It's very difficult to communicate if people assume you aren't saying what you mean, realise you are, and then call you out for trying to be clear.

A Game of Tag

Phil Nash from level of indirection

One of the tent-pole features of Catch is the ability to write test names as free-form strings. When you run a Catch executable from the command line you can specify a test case by name, to run just that one:

./MyTestExe "a very nice test case"

or you can use wildcards to run a group of test cases (or just one with less typing):

./MyTestExe "*very nice*"

If you want to use wildcards but you're not sure what they'll match you can combine this with the listing option, -l, to see which test cases match the pattern:

./MyTestExe "*very nice*" -l
Matching test cases:
  a very nice test case
  a not very nice test case
2 matching test cases

This is already quite a powerful way to group test cases into ad-hoc "suites". However we don't want to twist our test names into artificial schemes for this purpose (although, early on, that's exactly what I proposed). Instead Catch allows you to add "tags" to test cases.

TEST_CASE( "a very nice test case", "[nice][good]" ) { /* ... */ }
TEST_CASE( "a not very nice test case", "[nice][bad]" ) { /* ... */ }

Now we can run all tests with a certain tag:

./MyTestExe [good]

or combination of tags:

./MyTestExe [nice][good]

also with exclusions:

./MyTestExe [nice]~[bad]

and unions are supported with a comma:

./MyTestExe [nice],[pleasant]

Very powerful! And this functionality has been around for a while.

More recent, and less well known (mostly because they weren't documented until recently) are a set of "special tags": Instruction Tags, Hiding Tags, Tag Aliases and some automatically generated tags.

Let's see what they're all about.

Instruction Tags

In general all tags that start with a symbol are reserved by Catch (or, put another way, user defined tag names must start with an alpha-numeric character). This allows a nice rich range of namespaces for special tags. Tags that start with the ! character are Instruction tags. They inform Catch something about the test case that they apply to. At time of writing the following are defined:

  • [!hide] This "hides" the test from the default run (i.e. if you run the test executable without specifying any names or tags). This feature was originally introduced with the [hide] tag (note, no: !) - and is still supported, though deprecated. There is also a shortcut form, [.] which we'll revisit in a moment.
  • [!throws] This tells Catch that an exception may be thrown in the course of executing the test - even if it is caught and dealt with. If you've ever tried to track down a rogue exception in your debugger - and so have set the debugger to break on exceptions as they're thrown - you'll know how frustrating all the false positives coming from such tests are! So Catch provides a way to suppress exceptions it is expecting - through the -e or --nothrow options on the command line. This already skips over REQUIRE_THROWS... or CHECK_THROWS... assertions. The [!throws] tag covers you for cases where the exception is caught and handled in the code under test (or your test code).
  • [!shouldfail] This tells Catch that you're expecting this test to fail! Furthermore, if it does fail then it should treat that as a pass!
  • [!mayfail] Rather than explicitly inverting the pass/ fail logic as the previous tag does, this tag just says that the test may fail but that's ok (although it is still reported). It's also ok if it passes.
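To make these concrete, here is a minimal sketch (the test names and the [parser] and [maths] tags are invented for illustration) showing instruction tags sitting in the tag string alongside ordinary tags:

#include "catch.hpp"
#include <stdexcept>

// Hidden from the default run; Catch is also told an exception will be
// thrown and handled, so the -e / --nothrow options skip it too.
TEST_CASE( "parses bad input", "[!hide][!throws][parser]" ) {
    bool handled = false;
    try {
        throw std::runtime_error( "bad input" );  // stand-in for the code under test
    }
    catch( std::exception const& ) {
        handled = true;
    }
    CHECK( handled );
}

// Known-shaky behaviour: a failure is still reported but doesn't fail the run.
TEST_CASE( "rounding edge case", "[!mayfail][maths]" ) {
    CHECK( 0.1 + 0.2 == 0.3 );
}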

Hiding Tags

We already looked at [!hide] (and the deprecated [hide]) above, and mentioned that [.] was a shortcut for the same.

It turns out that when one of these tags is used it is often combined with another tag that is used when you do want to run the test. The classic example is where you write integration tests in the same executable as unit tests. By default you don't want the integration tests to run as you want the shortest possible path to running just the unit tests. So you hide them but also tag them [integration], or something similar (the word "integration" has no significance to Catch). So pairings like [.][integration] or [.][performance] are frequently found together.

So, as a convenience, Catch now supports . as a tag prefix. The rest of the tag can be completely custom and works exactly like any other normal tag - except that the test is also hidden. Our examples would, thus, be written as [.integration] and [.performance].
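As a quick illustration (the test names here are invented), these two spellings are equivalent - both tests are hidden from the default run and can be pulled in with ./MyTestExe [integration]:

TEST_CASE( "talks to the real database", "[.][integration]" ) { /* ... */ }
TEST_CASE( "talks to the real database, tersely", "[.integration]" ) { /* ... */ }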

One final point to mention about hiding tags is that, due to the way they have evolved through a number of forms (including the severely deprecated "./" name prefix) whichever form is used will not only hide the test, but any of the other forms will match it in a tag pattern. e.g. if you tag a test with [.] you can match it with [!hide].

Tag Aliases

As we saw earlier, tags can be combined in fairly complex ways. While this is powerful and flexible, it can be a bit awkward if you often want to use the same tag expression. Wouldn't it be nice if there was a way of writing the expression once then getting Catch to remember it for you - and associate it with an easier to remember name?

Well there is! You can associate any tag pattern with a name that you can use just like any normal tag - except that it must begin with the @ character.

You create a tag alias, in code, using the CATCH_REGISTER_TAG_ALIAS macro. E.g.

CATCH_REGISTER_TAG_ALIAS( "[@not nice]", "~[nice]~[!hide]" );

This registers a tag alias, [@not nice] which, when expanded will match all tests that are not tagged [nice] but also are not hidden. The second part is important because if you have any hidden tests then they will usually be included any time you use a not expression (~) because the rule is that tests are only hidden if no pattern is specified!
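On the command line the alias can then be used like any other tag expression (quoted here because of the space):

./MyTestExe "[@not nice]"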

Also did you notice that we had a space in the tag name? Surprised? I never said that tags could not include spaces. Of course they can.

You can register as many aliases as you like and you can put them anywhere you like (as long as catch.hpp is #included). However I recommend keeping them all in your main source file (the one you #define CATCH_CONFIG_MAIN, or equivalent) - simply so you only have to look in one place for them.
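For example, a main file might look something like this (the [@slow stuff] alias is made up for illustration):

// main.cpp - the only file that defines the Catch main
#define CATCH_CONFIG_MAIN
#include "catch.hpp"

CATCH_REGISTER_TAG_ALIAS( "[@not nice]", "~[nice]~[!hide]" );
CATCH_REGISTER_TAG_ALIAS( "[@slow stuff]", "[.][integration],[.][performance]" );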

Filenames As Tags

The newest special tag form is the result of automatically generating a set of tags. The tags all begin with the # character (I've resisted the urge to call them "hash tags"). The rest of the tag is generated from the name of the source file that the test is implemented in. The full path (as reported by __FILE__) is stripped of its directories and extension - so all tests in /Development/Tests/SquirrelTests.cpp would be tagged, [#SquirrelTests].

At time of writing this feature is only available on the develop branch on GitHub - and must be specifically enabled by running with the --filenames-as-tags or -# command line options. It's possible that situation may change by the time it makes it onto master.
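Assuming the option is enabled, you could then run just the tests from that file with:

./MyTestExe -# "[#SquirrelTests]"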

The Tag Line

So tags not only provide a rich grouping mechanism in Catch - they also allow you to control some aspects of how Catch runs and treats test cases. Some tags can be generated for you - and some tags can be expanded from simpler forms. We've covered here the complete set of special tags at the time of writing. If you're reading this in the future there may be more - I'll try and be better at keeping the docs up-to-date there. Also any stock price tips you might have from the future would be welcome too.

Interview: Fog Creek (Going Beyond Code to Become A Better Programmer)

Pete Goodliffe from Pete Goodliffe

I recently did a short interview with the guys at Fog Creek on the subject Becoming a Better Programmer. You can view it here.

It's a heroic editing effort! Between unreliable network connections and probably a 40 minute conversation, they've cut it down to ten minutes, and made me look rather like Max Headroom.

There's been lots of great feedback about this, so I'm glad it's inspiring people.

Did Bell Labs initially try to hide that C was based on BCPL?

olvemaudal from Geektalk

During research for my talk “History and Spirit of C and C++” (pdf, 11Mb) I realized that the reference manual Ken Thompson wrote for B in 1972 (pdf) was in parts a verbatim copy of the reference manual that Martin Richards wrote for BCPL in 1967 (pdf) (in particular look at page 6 in both documents or see slide 118-126 in my presentation). I guess that is fair as B is semantically basically the same language as BCPL. However, the odd thing is that in the more official reference manual for C dated 1974 (pdf), BCPL is not even mentioned at all.

“Good artists copy, great artists steal.” Perhaps this is just another kudos to Bell Labs, but I certainly found it interesting. It has to be said though that in all the interviews and later writings I have seen by members of Bell Labs, including Ritchie and Thompson, they are very open about BCPL being the main inspiration for B and C.

Release news: Hungry Bunny & KeyChainItemCRUDKit

Pete Barber from C#, C++, Windows & other ramblings

Not a technical post today, just a bit of news on the things I've been working on.

Firstly, my latest SpriteKit game written in Swift is now available on the App Store. It's called Hungry Bunny and is effectively an endless runner/skill test. It's free with ads.



Secondly, now that Hungry Bunny is complete I've returned to working on another project. This is nowhere near complete, but I got to the point where I needed to securely store an OAuth2 token on iOS, and came across the Keychain API. However, the API for this is long-winded and I only wanted to use it in a simple manner. Therefore I created a Swift framework to provide CRUD access to it, along with a higher-level interface where any type conforming to NSCoding can be saved, loaded & deleted.

This is available on github and also as my first ever CocoaPod. All the docs are in the README plus there's an example iOS program (Single View App) and the Unit Tests.