I have this debate with other testers and developers from time to time, so I thought I'd post my own thoughts on the subject.
Many applications today consist of multiple layers. For example, a typical ASP.NET MVC web application may have views, controllers, services, data repositories, a domain model, etc. A typical internal IT application might have a WinForms or WPF GUI, some client-side logic, web services, and a database layer.
The question is, when testing a layered application, should we test each layer independently or just test the product from an end-to-end perspective?
My answer (and yours may vary) is "both". The key goal in my mind is to find each class of bug in the most efficient way possible while keeping the efficiency of your overall quality process in check too.
Often, the argument against testing layers sounds something like "we'll end up with the same test written N times" or "it's too inefficient to test each layer independently". While I completely disagree with the first point, there may be some validity to the second, so I'll tackle each individually. I'll point out here that I'm mainly talking about the layers your product team has created (i.e. code that you own). If you depend on third-party products such as a database, a web server, or user controls, you may or may not want to do some testing to ensure they meet your needs.
If your app is layered, there’s a really good chance each layer does something slightly different (why else would there be layers?). If you target your test at each layer and mock away the layers below it, then you can write tests to each layer that target the specific behavior of that layer and nothing else. This is typically what unit testing covers.
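The post's stack is .NET, but the idea translates to any language. Here's a minimal Python sketch (the `OrderService` and repository names are illustrative, not from the post) of a layer-targeted test: the repository below the service is replaced with a stub, so the test exercises only the service layer's own behavior.

```python
# Hypothetical service layer that depends on a data repository below it.
class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def total_due(self, customer_id):
        # This layer's own behavior: sum the unpaid orders.
        orders = self.repository.get_orders(customer_id)
        return sum(o["amount"] for o in orders if not o["paid"])

# A hand-rolled stub standing in for the real data access layer,
# so no database is involved in this test.
class StubOrderRepository:
    def get_orders(self, customer_id):
        return [
            {"amount": 100, "paid": True},
            {"amount": 40, "paid": False},
            {"amount": 25, "paid": False},
        ]

service = OrderService(StubOrderRepository())
assert service.total_due(customer_id=7) == 65  # only the service logic is under test
```

If this test fails, the bug is in the service layer by construction, which is exactly the kind of targeted signal end-to-end tests can't give you.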
In my experience, people who use point #1 against testing layers often misunderstand the point/value/cost of mocking and they’re thinking about the problem as testing the whole stack in increments. I agree this would probably not be an efficient use of time and resources. Imagine testing the database, then the database + the data access layer, then the database + the data access layer + the business logic, then finally the database + the data access layer + the business logic + the client code that sits on top of the business logic… you get the point. In this approach, you’d be testing the bottom-most layer 4 times, the DAL 3 times, the business logic twice, etc. I don’t think that’s efficient or necessary at all in most cases.
This is where mocking comes in handy, and in particular mocking frameworks. The common misperception is that you have to write and maintain a bunch of code to create a mock for things your code depends on. Mocking frameworks such as RhinoMocks, TypeMock, Moq, etc. help you by automatically creating the mocks for you at runtime. You simply tell them how to respond when certain methods are called or properties accessed. For example, if your code under test depends on a class with 50 methods, but only 1 is called by the code under test, you only have to write about 2 lines of code to create the mock object and tell it how to behave when your code calls it. That’s all! How is that expensive?
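RhinoMocks, TypeMock, and Moq are .NET frameworks, but Python's `unittest.mock` shows the same economy (the `InvoiceService` and gateway names here are hypothetical): even if the real dependency exposes dozens of methods, configuring the one method the code under test actually calls takes about two lines.

```python
from unittest.mock import Mock

# Hypothetical code under test: it depends on a payment gateway that,
# in the real product, exposes dozens of methods -- but it calls only one.
class InvoiceService:
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, invoice_id, amount):
        receipt = self.gateway.process_payment(invoice_id, amount)
        return receipt["status"]

# The two lines: create the mock and tell it how the one method should respond.
gateway = Mock()
gateway.process_payment.return_value = {"status": "ok"}

service = InvoiceService(gateway)
assert service.charge(42, 99.50) == "ok"
# The mock also records interactions, so we can verify the call happened:
gateway.process_payment.assert_called_once_with(42, 99.50)
```

No hand-written fake class to maintain for the other methods the gateway might have; the framework fabricates the whole object at runtime.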
A variation on this argument is the classic “why automate testing at all?” argument: “it’s too expensive to maintain test automation”. My response is, “it’s usually too expensive not to”. Sure, if you change the product code to do something differently, then you should change the test code accordingly. In fact, you should want to. Otherwise, how do you know the code you just wrote even works? You could go as far as writing the tests first in fact, but since I’m trying to convince you to write tests at all, I’m going to stop there for now :)
A related objection goes: "that bug you found in a lower layer could never actually happen in the real product." Are you sure? How can you be certain that that particular code path will never be hit by customers using the product from a layer higher up, or by a change in a dependency further down the layer stack? Why would you knowingly ship code that's full of bugs and rely on the fact that what's true at a specific point in time will always be true? Think about all the time you've saved by testing in layers and finding bugs early instead of late in the development cycle when integration testing usually happens. Why not use a little of that time to fix this type of bug? I'm speaking in broad terms, of course… I'm sure everyone can come up with specific examples where this doesn't make sense.
Testing layers doesn't give you an excuse to skip testing from an end-user/end-to-end perspective. It doesn't matter if all the parts of a car are tested independently and work fine if they don't fit together in the end. It's critical to understand how components work together in a system and validate that the assumptions made about the interfaces between them match on both sides. If you're having the "no testing of layers" argument with me, chances are you're more interested in end-to-end testing anyway, so I'm not going to spend a lot of time on this part.
I think of it this way: when building a house, you wouldn’t just get an inspection after the whole house is finished just to discover that the electrical wiring was done incorrectly, would you? Nor would you only test the electrical wiring once before all the drywall was up and forego the final walkthrough. To really have confidence that your house is built correctly, you have inspections for each major sub-system along the way (framing, plumbing, electrical, insulation, drywall, etc.) and then you do a final walkthrough for an integration/end-to-end kind of test.
By testing all the components/layers independently, you’re finding bugs in those components more efficiently than waiting until the whole system is put together. By testing from an integration perspective, you’re testing the coupling of those individual components and their interactions. In both cases, you’re using the right tool for the right job.
So that’s my pitch! Convinced?