In this day and age of SaaS and everything cloud, a lot of us rarely write code in isolation anymore. Our code is always getting a live Twitter feed, using Skynet's machine learning service to analyze it for dangerous human rebellions, and reacting to the results by calling into the appropriate crowd-control REST APIs. And if we want to test our code to make sure it always terminates the appropriate targets, conventional wisdom tells us to introduce a mocking layer for all the services we depend on, and then test our code against fake versions of those services.
I agree with conventional wisdom for the most part: not only does this save us from having to kill a bunch of people every time we test the rebellion scenario, it also saves money (Skynet is pretty expensive), helps make our tests fast and reliable (they don't fail every time Twitter is down or our internet is spotty), and helps us cover edge cases and error conditions in ways that would be very difficult if we were testing against real services. There are many mocking frameworks to help you do this (I'm partial to Moq for .NET and Mockito for Java), and of course there's the good old pattern of having abstract base classes with concrete implementations that go either to the live service or to a fake one (though be careful of putting too much code in the live-service class: that's code that won't be tested, and it's a bad code smell).
But, and of course you knew this was coming, I also think that testing our code against live services is immensely useful, either in conjunction with testing against mocks or even sometimes instead of it. In this short series I want to discuss some patterns and practices I picked up along the way to do this, and some of my learnings so far.
There are various things one wants when living on the edge like that: the thrill benefits of testing against live services, while still keeping most of the convenience, safety, and cost-effectiveness of testing with mocks.
This is one of my favorite patterns, and one I've seen in many projects, so I'm most definitely not inventing it (I'd credit the original inventor, but most likely this is just one of those organically developed things). I've personally seen it more ingrained in the Java and C++ worlds than in the C# world for some reason: that's just my very subjective observation, but since this is my blog, I'll use it as an excuse to target my examples at the C# crowd.
The basic idea of the pattern is to put all the information about the live services you want to use for testing (including secrets) into a file or a set of files that are read by the tests and used to connect to the services. The files are never checked into source control, and whoever wants to run the tests needs to fill in the information into those files before running them. There are other ways I've seen of passing this info around for tests: e.g. environment variables, registry settings, command-line parameters and of course the dreaded hard-coded value in the unit test that you need to edit. I personally have come to prefer the conf file pattern since having these things in a file makes them so much easier to save, copy, and generally manage (in a secure way of course) than any of the other options I've seen.
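To make that concrete, here's a sketch of what such a file might contain, using the pizza-vendor example from later in this post (the URLs are made up):

```
# List of URLs for pizza vendors to use (one per line)
https://pizza-vendor-one.example.com/api
https://pizza-vendor-two.example.com/api
```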
The first hurdle you have to pass when implementing this pattern is where to get this file. I've seen several options:
The last option, as I said, isn't bad, but my current personal favorite (for C# projects) is to have the file in a special directory in the enlistment (I usually call it TestConf) and embed this file as a resource. So my enlistment is usually organized something like:
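Something like the following sketch (the project names are the hypothetical ones used in the code later in this post):

```
MyEnlistment\
  src\
    EmergencyPizzaProcurement\
    EmergencyPizzaProcurement.Tests\
  TestConf\
    PizzaVendors.txt
```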
(Note that TestConf is not checked in). And then in my project file (.csproj) for the test project, I'd have a link to it like:
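Something along these lines (a sketch for an old-style .csproj; with the Link set this way, the embedded resource ends up named after the test project's default namespace, which is what the helper method below relies on):

```xml
<ItemGroup>
  <EmbeddedResource Include="..\..\TestConf\PizzaVendors.txt">
    <Link>PizzaVendors.txt</Link>
  </EmbeddedResource>
</ItemGroup>
```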
Of course this means that in a new enlistment, people would have to manually create this directory and file in order to build, which contradicts my easy-to-run desirable. To circumvent that, I add an MSBuild step to create the directory and file if they don't exist:
<Target Name="BeforeBuild" Outputs="..\..\TestConf\PizzaVendors.txt">
  <MakeDir Directories="..\..\TestConf" />
  <WriteLinesToFile File="..\..\TestConf\PizzaVendors.txt"
                    Lines="# List of URLs for pizza vendors to use (one per line)"
                    Condition="!Exists('..\..\TestConf\PizzaVendors.txt')" />
</Target>
This nicely doubles as a place to put simple instructions for the format of the file. Finally, in my test code I'd have a helper method to get me the configured information:
// Requires: using System.Collections.Generic; using System.IO; using System.Reflection;
public static IEnumerable<string> ReadConfigurationFile(string name,
    bool skipCommentLines = true, bool skipEmptyLines = true)
{
    using (Stream resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(
        "EmergencyPizzaProcurement.Tests." + name + ".txt"))
    {
        StreamReader reader = new StreamReader(resourceStream);
        string currentLine;
        while ((currentLine = reader.ReadLine()) != null)
        {
            currentLine = currentLine.Trim();
            if (skipCommentLines && currentLine.StartsWith("#")) // Comment
                continue;
            if (skipEmptyLines && currentLine == "")
                continue;
            yield return currentLine;
        }
    }
}
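A test could then consume the configured information like this (a hypothetical NUnit-style sketch; TestHelpers is a made-up name for whatever class holds the helper above):

```csharp
[Test]
public void AllConfiguredPizzaVendorsRespond()
{
    // Vendor URLs come from the untracked TestConf\PizzaVendors.txt file.
    foreach (string vendorUrl in TestHelpers.ReadConfigurationFile("PizzaVendors"))
    {
        // ... hit the live service at vendorUrl and assert on the response ...
    }
}
```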
As I hope I've stressed enough by now (among many others who've stressed it before me), these files should never go into source control. I'm usually using git these days, and luckily this makes it easier to avoid having these files sneak in there: I just add a line in my .gitignore file to always ignore these files when using git add:
# Test configuration
TestConf/
My overriding concern when writing these files is that they should be human-readable and editable. Since these are typically targeted at myself and my coworkers, I don't care about making them super-robust or generic. So I typically avoid XML or JSON, and just opt for simple text files and create several of them instead of worrying about sections. This also works because they're usually simple and just contain enough information to connect to the live services (see below).
I definitely do. And in general I hate seeing this mechanism (or similar ones) used for evil by making the tests themselves configurable: e.g. having a test read from a configuration file how many files it should create definitely veers into the dark side as far as I'm concerned. The test configuration here should just specify enough info to connect to the live services, no more and no less. I'm only calling it test configuration because I'm not very imaginative with names, but I'm hoping to find a better name and start calling it that in the future.
And that's all I have to say about this pattern. In the next post I'll talk about other patterns for writing such edgy tests, but I'd love your feedback on this as usual (I'm still learning best practices here so I'd love to hear what works for you).