ResOrg 2.0.6.25 has been released

Products, the Universe and Everything from Products, the Universe and Everything

This is a maintenance update for ResOrg 2.0. The following changes are included:

  • Added support for Visual Studio 2015.
  • Added support for Windows 10.
  • Added a help file.
  • Removed support for Windows 2000.
  • ResOrgApp now declares itself as system DPI aware to reduce the likelihood of DPI virtualization.
  • Icons used within the ResOrg displays now reflect the current system defined icon sizes rather than being hardcoded to 16x16, 32x32 etc.
  • Tweaked the layout of the AboutBox.

Download ResOrg 2.0.6.25


Things I learnt from Swift Summit

Pete Barber from C#, C++, Windows & other ramblings

I attended the first Swift Summit on the 21st of March; it ran for two days, but I only went to the first. Here are some of the facts I learned:
  • Int is not a fundamental type as you would think of it in most languages.
    • Instead it's a struct that conforms to SignedIntegerType, with the actual value being an instance of the really fundamental type Builtin.Word
    • Being a struct means it's a proper object, hence it has methods, can be extended (see later) and can (and does) implement protocols
  • Int can be extended
    • As it's an object (see above) it's possible to write extension methods.
  • I have a far better understanding of what @autoclosure does now
    • It basically captures the expression as a closure rather than evaluating it immediately
  • nil is never actually treated as nil when used
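The first and third points can be sketched together. This is a minimal illustration, not code from the summit; the names isEven, unless and describeOdd are invented for the example:

```swift
// Int is an ordinary struct, so it can be extended with new members.
extension Int {
    var isEven: Bool { return self % 2 == 0 }
}

var evaluated = false
func describeOdd() -> String {
    evaluated = true
    return "odd"
}

// @autoclosure wraps the argument *expression* in a closure at the call
// site; it is only evaluated if the closure is actually invoked.
func unless<T>(_ condition: Bool, produce value: @autoclosure () -> T) -> T? {
    return condition ? nil : value()
}

let result = unless(4.isEven, produce: describeOdd())
// result is nil and `evaluated` is still false: the describeOdd() call
// was captured as a closure, not invoked.
```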
And now for some opinion about the day.
  • People seem to be struggling with error handling
    • A couple of talks presented code to avoid pyramids of doom when making a call, checking for success or failure and, if successful, continuing.
  • People think they're doing Functional Programming
    • Just because Swift supports a Functional Programming style and some people use elements of FP, they assume Swift is mainly a Functional language and that they are doing Functional Programming.
    • Passing functions around as first-class objects does not make your program functional
  • Some people now hate Objective-C

Software Process Dynamics

Rob Smallshire from Good With Computers

At the Software Architect 2015 conference in London I presented "What if? Supporting decisions with software dynamics simulations". [1] This talk introduces the idea of performing numerical simulations of software development teams and the products they build. The value in such simulations is to inform policy decisions and guide deliberate perturbations to the software development process, such as whether and when to add or remove personnel from a project. Simulations should not be used to make hard predictions about, for example, when a particular project will be finished.
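To make the idea concrete, here is a toy sketch (in Python, not taken from the talk; every parameter and modelling choice is an assumption) of the kind of Monte Carlo question such a simulation answers: how does team size shift the distribution of completion times?

```python
import random

def simulate_project(n_tasks, n_devs, task_days=(1.0, 5.0), trials=1000):
    """Monte Carlo sketch: distribution of completion time for a team.

    Everything here is an assumption for illustration; a real software
    process simulation would model communication overhead, ramp-up time,
    defects and rework, not just task durations.
    """
    finishes = []
    for _ in range(trials):
        loads = [0.0] * n_devs
        for _ in range(n_tasks):
            # Hand the next task to whoever currently has the least work.
            i = loads.index(min(loads))
            loads[i] += random.uniform(*task_days)
        finishes.append(max(loads))  # the project ends when the last dev does
    finishes.sort()
    # Report a range (10th to 90th percentile), not a hard prediction.
    k = max(1, trials // 10)
    return finishes[k], finishes[-k]

low, high = simulate_project(n_tasks=100, n_devs=5)
```

The point of the output being a percentile range rather than a single number is exactly the caveat above: the simulation informs a decision, it does not promise a finish date.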

[1] Slides

In vivo, in vitro, in silico

Frances Buontempo from BuontempoConsulting


Some people get unit testing and some people don't. The reasons vary, usually based on a mixture of previous experience, lack of experience, fear of the unknown or joy at a safer, quicker way of developing. One specific doubt crops up from time to time. It comes in the form of "If I test small bits, i.e. units, whatever *that* means, it proves nothing. I need to test the whole thing or small parts of the whole thing live."

My PhD was in toxicity prediction, which involves testing if something will be toxic or not. You can test a chemical "in vivo" - administer it to several animals in varying doses. You sit back and wait until half of them die, or show toxicity symptoms, and record the doses. This gives you the Lx50 - for example the LD50 is the lethal dose that kills 50% of the animals. Notice I said you can do this.

You can also test the chemical on a set of cells in a test tube or petri dish - "in vitro" (in glass). Again you can find the dose which affects 50% of the specimens. I personally find this less upsetting, but I want to focus on parallels with testing code here.

Finally, given all the data the previous tests have generated, you can analyse the data, probably on a computer, perhaps finding chemical structure to activity relationships - SARs, or quantitative SARs, i.e. QSARs. These are referred to as "in silico" - for obvious reasons. Some in silico experiments will just find clusters of similar chemicals, which can either alert you to groups that might need more detailed toxicity testing, or even guide drug discovery by steering clear of molecules containing, say, benzene rings, which can be carcinogenic - saving time and money if you are trying to invent a drug that cures cancer. The value of testing on a computer, outside a live organism, should be clear. It can save time, money and even lives.


If we keep this in mind while considering testing a software system, rather than a biological system, we should be able to see some parallels. It is possible to test a live system - maybe on beta rather than "TIP" (test in production). This can be a good thing. However, it might save time and money, and though maybe not lives, certainly headaches, to test parts of the live system in a sandboxed environment, analogous to in vitro. Running an end-to-end test against a test database instance with data in a specific state might count. Pushing the analogy further, you could even test small parts of the system, say units, whatever they are, in silico: just try this small part away from the live system, in a computer. This is worthwhile. It will be quicker, just as in silico toxicity experiments are quicker - they tend to take hours rather than days. Of course, you won't know exactly what will happen in a full live system, but you can catch problems earlier, before killing something. This is a Good Thing.

Other industries also test things in units - I could put together a car or a computer, hit the on switch, and see if it works. However, I am given to believe that the components are tested thoroughly *before* the full system is built. If I build a PC and it doesn't work, I will then have to go through one part at a time and check. If someone tests the parts first, this will ensure I haven't put a dodgy power block in the whole thing. Testing small parts, preferably before testing the whole system, is a Good Thing.
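As a concrete sketch of testing a part before the whole: here is a small "in silico" unit test in Python. The dose_response helper and its toy model are invented purely for illustration, not a real toxicology model:

```python
import unittest

def dose_response(dose, ld50):
    """Toy saturating dose-response model: fraction of specimens affected.

    A hypothetical unit under test, invented for this example.
    """
    if dose < 0 or ld50 <= 0:
        raise ValueError("dose must be >= 0 and ld50 must be > 0")
    return dose / (dose + ld50)

class DoseResponseTest(unittest.TestCase):
    """Small, fast checks run long before any full live system exists."""

    def test_ld50_affects_half(self):
        self.assertAlmostEqual(dose_response(10.0, 10.0), 0.5)

    def test_zero_dose_affects_nobody(self):
        self.assertEqual(dose_response(0.0, 10.0), 0.0)

    def test_invalid_input_is_rejected(self):
        with self.assertRaises(ValueError):
            dose_response(-1.0, 10.0)
```

Run with `python -m unittest` in the file's directory. Each test exercises one small behaviour in isolation - the software equivalent of the test tube rather than the whole organism.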

I don't believe this short observation will change anyone's mind. But I hope it will give pause for thought to those who think only testing from end to end matters, and that testing "in silico" is a waste of time.

Event-Sourced Domain Models in Python at PyCon UK

Rob Smallshire from Good With Computers

At PyCon UK 2015 I led a very well attended workshop with the goal of introducing Python developers to the tried-and-tested techniques and patterns of Domain Driven Design (DDD), in particular when used as part of an event-sourced architecture.

The two-and-a-half hour workshop comprised excerpts from our training course DDD Patterns in Python. Although the workshop material was heavily edited and compressed from the course, I'm confident that the majority of attendees grasped the main principles.

Several attendees have since asked for the introductory slides, which preceded the exercises. Here they are:

Sixty North training materials are for individual use. For training in a commercial setting please contact us to book a training course or obtain a license for the materials.

Read Maven Surefire Test Result files using Perl

Tim Pizey from Tim Pizey

When you want something quick and dirty it doesn't get dirtier, or quicker, than Perl.

We have four thousand tests and they are taking way too long. To discover why, we need to sort the tests by how long they take to run and see if a pattern emerges. The test runtimes are written to the target/surefire-reports directory. Each file is named after the test class and contains information in the following format:


-------------------------------------------------------------------------------
Test set: com.mycorp.MyTest
-------------------------------------------------------------------------------
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec


#!/usr/bin/perl -w

my %tests;
# Each matched line looks like:
#   target/surefire-reports/com.mycorp.MyTest.txt:Tests run: 3, ... Time elapsed: 1.03 sec
open(RESULTS, "grep 'Tests run' target/surefire-reports/*.txt |")
    or die "Cannot run grep: $!";
while (<RESULTS>) {
    chomp;
    s/\.txt//;
    s/target\/surefire-reports\///;
    s/Tests run:.+Time elapsed://;
    s/ sec//;
    s/,//g;
    if (/^(.+):(.+)$/) {
        $tests{$1} = $2;
    }
}
close(RESULTS);

my $cumulative = 0.0;
print("cumulative\ttime\tcumulative_secs\ttime_secs\ttest\n");
foreach my $key (sort { $tests{$a} <=> $tests{$b} } keys %tests) {
    $cumulative += $tests{$key};
    printf("%2d:%02d\t%2d:%02d\t%5d\t%5d\t%s\n",
           ($cumulative / 60) % 60, $cumulative % 60,
           ($tests{$key} / 60) % 60, $tests{$key} % 60,
           $cumulative,
           $tests{$key},
           $key);
}


The resulting tab-separated output can be viewed using a Google chart: