In the past few years I've worked on a few projects where TIBCO has been the message queuing product of choice within the company. Naturally, being a test-oriented kind of guy, I've used unit and component tests for much of the donkey work, but initially had to shy away from writing any automated integration tests due to the inherent difficulties of getting the system into a known state in isolation.
For any automated integration tests to run reliably we need to control the whole environment, which ideally means our development workstations but also our CI build environment (see "The Developer's Sandbox"). The main barriers to this with a commercial product like TIBCO are often technological, but more often than not they are organisational too.
In my experience middleware like this tends to be proprietary, very expensive, and owned within the organisation by a dedicated team. They will configure the staging and production queues and manage the fault-tolerant servers, which is probably what you'd expect as you near production. A more modern DevOps-friendly company would recognise the need to allow teams to test internally first and would help them get access to the product and tools so they can build their test scaffolding that provides the initial feedback loop.
Hence just being given the client access libraries to the product is not enough; we need a way to bring up and tear down the service endpoint, in isolation, so that we can test connectivity, failover scenarios and message interoperability. We also need to be able to develop and test our logic around poisoned messages and dead-letter queues. And all this needs to be automatable so that as we develop and refactor we can be sure that we've not broken anything; manually testing this stuff is just not scalable in a shared test environment at the pace modern software is developed.
That said, the TIBCO EMS SDK I've been working with (v6.3.0) has all the parts I needed to do this stuff, albeit with some workarounds to avoid needing to run the tests with administrator rights, which we'll look into later.
The only other thorny issue is licensing. You would hope that software product companies would do their utmost to get developers on their side and make it easy for them to build and test their wares, but it is often hard to get clarity around how the product can be used outside of the final production environment. For example, trying to find out if the TIBCO service can be run on a developer's workstation or in a cloud hosted VM solely for the purposes of running some automated tests has been a somewhat arduous task.
This may not be solely the fault of the underlying product company, although the old-fashioned licensing agreements often do little to distinguish production use from modern development use [1]. No, the real difficulty is finding the right person within the client's company to talk to about such matters. Unless they are au fait with the role modern automated integration testing plays in the development process you will struggle to convince them that your intended use is in the interests of the 3rd party product vendor, not stealing revenue from them.
Okay, time to step down from the soap box and focus on the problems we can solve…
Hosting TIBEMSD as a Windows Service
From an automated testing perspective what we need access to is the TIBEMSD.EXE console application. This provides us with one or more TIBCO message queues that we can host on our local machine. Owning this process means we can create, publish to and delete queues on demand, and therefore tightly control the environment.
If you only want to do basic integration testing around the sending and receiving of messages you can configure it as a Windows service and just leave it running in the background. Then your tests can just rely on it always being there like a local database or the file-system. The build machine can be configured this way too.
Unfortunately because it's a console application and not written to be hosted as a service (at least v6.3 isn't), you need to use a shim like SRVANY.EXE from the Windows 2003 Resource Kit or something more modern like NSSM. These tools act as an adaptor to the console application so that the Windows SCM can control them.
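For example, with NSSM the registration might look something like the following; the TibcoEMS service name and the install paths are purely illustrative:

```
> nssm install TibcoEMS C:\tools\TIBCO\tibemsd.exe -config C:\tools\TIBCO\localhost.conf
> nssm set TibcoEMS AppDirectory C:\tools\TIBCO\data
```

The second command sets the service's startup directory, which also keeps the server's data files out of the system folder.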
One thing to be careful of when running TIBEMSD in this way is that it will stick its data files in the CWD (Current Working Directory), which for a service is %SystemRoot%\System32, unless you configure the shim to change it. Putting them in a separate folder makes them a little more obvious and easier to delete when having a clear out [2].
Running TIBEMSD On Demand
Running the TIBCO server as a service makes certain kinds of tests easier to write as you don't have to worry about starting and stopping it, unless that's exactly the kind of test you want to write.
I've found it's all too easy when adding new code or during a refactoring to accidentally break the service so that it doesn't behave as intended when the network goes up and down, especially when you're trying to handle poisoned messages.
Hence I prefer to have the TIBEMSD.EXE binary included in the source code repository, in a known place, so that it can be started and stopped on demand to verify the connectivity side is working properly. For those classes of integration tests where you just need it to be running you can add it to your fixture-level setup and even keep it running across fixtures to ensure the tests run at an adequate pace.
If, like me, you don't run as an Administrator all the time (or use elevated command prompts by default) you will find that TIBEMSD doesn't run out-of-the-box in this way. Fortunately it's easy to overcome these two issues and run in a LUA (Limited User Account).
Only Bind to the Localhost
One of the problems is that by default the server will listen for remote connections from anywhere, which means it wants a hole in the firewall for its default port. This of course means you'll get that firewall popup dialog, which is annoying when trying to automate stuff. Whilst you could grant it permission with a one-off NETSH ADVFIREWALL command, I prefer components in test mode not to need any special configuration if at all possible.
Windows will allow sockets that only listen for connections from the local host without generating the annoying firewall popup dialog (and this was finally extended to include HTTP too). However we need to tell the TIBCO server to do just that, which we can achieve by creating a trivial configuration file (e.g. localhost.conf) with the following entry:
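The entry in question is, to the best of my knowledge, the server's listen setting bound to the loopback address; 7222 is the usual EMS default port, so adjust to taste:

```
listen = tcp://127.0.0.1:7222
```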
Now we just need to start it with the -config switch:
> tibemsd.exe -config localhost.conf
Suppressing the Need For Elevation
So far so good, but our other problem is that when you start TIBEMSD it wants you to elevate its permissions. I presume this is a legacy thing and there may be some feature that really needs it, but so far in my automated tests I haven't hit it.
There are a number of ways to control elevation for legacy software that doesn't have a manifest, such as using an external one, but TIBEMSD does have one and that takes priority. Luckily for us there is a solution in the form of the __COMPAT_LAYER environment variable. Setting this, either through a batch file or within our test code, suppresses the need to elevate the server and it runs happily in the background as a normal user, e.g.
> set __COMPAT_LAYER=RunAsInvoker
> tibemsd.exe -config localhost.conf
Spawning TIBEMSD From Within a Test
Once we know how to run TIBEMSD without it causing any popups we are in a position to do that from within an automated test running as any user (LUA), e.g. a developer or the build machine.
In C#, the language where I have been doing this most recently, we can either hard-code a relative path [3] to where TIBEMSD.EXE resides within the repo, or read it from the test assembly's app.config file to give us a little more flexibility.
We can also add our special .conf file to the same folder and therefore find it in the same way. Whilst we could generate it on-the-fly, it never changes, so I see little point in doing the extra work.
Something to be wary of if you're using, say, NUnit to write your integration tests is that it (and ReSharper) can copy the test assemblies to a random location to aid in ensuring your tests have no accidental dependencies. In this instance we do have one, and a rather large one at that, so we need the relative distance between where the test assemblies are built and run (XxxIntTests\bin\Debug) and the TIBEMSD.EXE binary to remain fixed. Hence we need to disable this copying behaviour with the /noshadow switch (or "Tools | Unit Testing | Shadow-copy assemblies being tested" in ReSharper).
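For example, with the NUnit 2.x console runner the switch is passed on the command line (the test assembly name here is illustrative):

```
> nunit-console.exe /noshadow XxxIntTests.dll
```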
Given that we know where our test assembly resides we can use Assembly.GetExecutingAssembly() to create a fully qualified path from the relative one like so:
private static string GetExecutingFolder()
{
    var codebase = Assembly.GetExecutingAssembly().CodeBase;
    var folder = Path.GetDirectoryName(codebase);
    return new Uri(folder).LocalPath;
}
. . .
var thisFolder = GetExecutingFolder();
var tibcoFolder = @"..\..\tools\TIBCO";
var serverPath = Path.Combine(thisFolder, tibcoFolder, "tibemsd.exe");
var configPath = Path.Combine(thisFolder, tibcoFolder, "localhost.conf");
Now that we know where the binary and config live we just need to stop the elevation by setting the right environment variable:
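In C# that is a one-liner, mirroring the batch file approach shown earlier; the variable is inherited by any child process we spawn:

```csharp
// Suppress the elevation prompt for the tibemsd.exe process we are
// about to spawn; it inherits this process's environment block.
Environment.SetEnvironmentVariable("__COMPAT_LAYER", "RunAsInvoker");
```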
Finally we can start the TIBEMSD.EXE console application in the background (i.e. no distracting console window) using System.Diagnostics.Process:
var process = new System.Diagnostics.Process
{
    // path is serverPath from earlier; args is "-config " + configPath
    StartInfo = new ProcessStartInfo(path, args)
    { UseShellExecute = false, CreateNoWindow = true }
};
process.Start();
Stopping the daemon involves calling Kill(). There are more graceful ways of remotely stopping a console application which you can try first, but Kill() is always the fall-back approach and of course the TIBCO server has been designed to survive such abuse.
Naturally you can wrap this up with the Dispose pattern so that your test code can be self-contained:
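Something along these lines would do it; the TibcoServer class name is my own invention, not part of any SDK:

```csharp
using System;
using System.Diagnostics;

// Illustrative IDisposable wrapper so a using-block, or a fixture
// teardown, always cleans up the spawned server process.
public sealed class TibcoServer : IDisposable
{
    private readonly Process _process;

    public TibcoServer(string serverPath, string configPath)
    {
        _process = new Process
        {
            StartInfo = new ProcessStartInfo(serverPath, "-config " + configPath)
            { UseShellExecute = false, CreateNoWindow = true }
        };
        _process.Start();
    }

    public void Dispose()
    {
        if (!_process.HasExited)
            _process.Kill();
        _process.Dispose();
    }
}
```

A test can then do `using (var server = new TibcoServer(serverPath, configPath)) { ... }` and be sure the process dies with the test.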
Or if you want to amortise the cost of starting it across your tests you can use the fixture-level set-up and tear down:
private IDisposable _server;

[TestFixtureSetUp]
public void GivenMessageQueueIsAvailable() => _server = RunTibcoServer();

[TestFixtureTearDown]
public void StopMessageQueue() { _server?.Dispose(); _server = null; }
One final issue to be aware of, and itâ€™s a common one with integration tests like this which start a process on demand, is that the server might still be running unintentionally across test runs. This can happen when youâ€™re debugging a test and you kill the debugger whilst still inside the test body. The solution is to ensure that the server definitely isnâ€™t already running before you spawn it, and that can be done by killing any existing instances of it:
Process.GetProcessesByName("tibemsd")
       .ToList().ForEach(p => p.Kill());
Naturally this is a sledgehammer approach and assumes you arenâ€™t using separate ports to run multiple disparate instances, or anything like that.
This gets us over the biggest hurdle, control of the server process, but there are a few other little things worth noting.
Due to the asynchronous nature and potential for residual state I've found it's better to drop and re-create any queues at the start of each test to flush them. I also use the Assume.That construct in the arrangement to make it doubly clear that I expect the test to start with empty queues.
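As a sketch of what that arrangement looks like, using the EMS .NET admin API; the Admin and QueueInfo names are from TIBCO.EMS.ADMIN but should be verified against your SDK version, and the URL, credentials and queue name are my own:

```csharp
// Drop and re-create the queue so the test starts from a known, empty state.
var admin = new Admin("tcp://localhost:7222", "admin", "");
try { admin.DestroyQueue("test.queue"); } catch (AdminException) { /* didn't exist */ }
admin.CreateQueue(new QueueInfo("test.queue"));

// Make the empty-queue precondition explicit in the arrangement.
Assume.That(admin.GetQueue("test.queue").PendingMessageCount, Is.EqualTo(0));
```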
Also, if you're writing tests that cover background connect and failover, be aware that the TIBCO reconnection logic doesn't trigger unless you have multiple servers configured. Luckily you can specify the same server twice, e.g.
var connection = "tcp://localhost,tcp://localhost";
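A minimal sketch of using that URL, assuming the standard EMS .NET client and made-up credentials; the doubled URL makes the client treat it as a multi-server list, which is what arms the reconnect logic:

```csharp
// The doubled localhost entry enables the client's failover/reconnect
// behaviour even though there is only one local server.
var factory = new TIBCO.EMS.ConnectionFactory("tcp://localhost,tcp://localhost");
var connection = factory.CreateConnection("user", "password");
connection.Start();
```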
If you expect your server to shut down gracefully, even in the face of having no connection to the queue, you might find that calling Close() on the session and/or connection blocks whilst it's trying to reconnect (at least in EMS v6.3 it does). This might not be an expected production scenario, but it can hang your tests if something goes awry, hence I've used a slightly distasteful workaround where the call to Close() happens on a separate thread with a timeout:
Task.Run(() => _connection.Close()).Wait(1000);
Writing automated integration tests against a middleware product like TIBCO is often an uphill battle that I suspect many don't have the appetite or patience for. Whilst this post tackles the technical challenges, as they are at least surmountable, the somewhat harder problem of tackling the organisation is sadly still left as an exercise for the reader.
[1] The modern NoSQL database vendors appear to have a much simpler model: use it as much as you like outside production.
[2] If the data files get really large because you leave test messages in them by accident, they can cause your machine to really grind after a restart as the service goes through recovery.
[3] A relative path means the repo can then exist anywhere on the developer's file-system and also means the code and tools are then always self-consistent across revisions.