I was recently asked to look at a project that had around 60% code coverage and to give my recommendations on what to focus on to increase it. There were a lot of unit tests - actually around 10% more unit test code than production code - so I was a little surprised that the coverage was as low as 60%. Obviously something must be wrong with the tests. A closer look showed that the code implementing the logic of the application had very high coverage in general and that only the code dealing with external dependencies had low coverage. Also, there wasn't that much code overall, so the boilerplate code just to initialize the application was around 15% of it, and the low-coverage code dealing with external dependencies was slightly less than 10%. So all in all, 60 of the possible 75 percentage points were covered. Relatively speaking that is 80% - a number I've often heard managers be very happy with (but remember that code coverage itself is a shady property to measure).
So what was missing? Well, there were a few cases where small classes were completely missing unit tests, so those would be easy to fix. There were also a bunch of classes that had unit tests exclusively for the happy path of execution, and those can be fixed easily too. But I still think it would be possible to bring coverage up above 90%. How is that possible, you might think? Well, the boilerplate code had some room for improvement, but more importantly the code dealing with external dependencies had a lot. Those classes were much more than simple pass-through objects needing no testing, and with a simple refactoring, like extracting the external call into a protected virtual method of the class, you could easily add unit tests for a huge portion of that code too, I think.
So my conclusion? The original coverage was not bad. It was covering the important pieces, and functional tests (not included when I analysed code coverage) covered a big portion of the code not covered by unit tests. So the situation was not bad. But when you use a code coverage report the right way and look at what is not covered and why, you can always find areas of improvement.
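The extract-and-override refactoring mentioned above can be sketched like this. Note that OrderService, its status logic, and the URL are all made up for illustration; the point is only the shape of the refactoring:

```csharp
using System;
using System.Net;

// Hypothetical class with an external dependency (an HTTP call).
public class OrderService
{
    public string GetOrderStatus(int orderId)
    {
        var response = FetchFromServer(orderId); // external call isolated below
        return response == "1" ? "Shipped" : "Pending";
    }

    // Extracted as protected virtual so a test can override it.
    protected virtual string FetchFromServer(int orderId)
    {
        using (var client = new WebClient())
        {
            return client.DownloadString("http://example.com/orders/" + orderId);
        }
    }
}

// In the test project: a subclass replaces the external call with canned data,
// so the logic in GetOrderStatus can be unit tested without any network access.
public class TestableOrderService : OrderService
{
    protected override string FetchFromServer(int orderId)
    {
        return "1";
    }
}
```

With this in place the translation logic gets covered by fast unit tests, and only the one-line virtual method remains for functional tests to cover.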
Almost a year to the day, Azure had another certificate-related outage. Last year's was more interesting, I think, and this year it was something different. My initial guess (remember, I don't work for Azure, nor do I have any knowledge about the details other than what has been communicated to the public) was that a few years ago, when they were first creating Azure, they generated some certificates to be used with SSL. My guess was that the certificates generated were valid for a long time, probably 5 years or something. Since the certificates were created long before Azure was available to customers, nobody thought about adding monitoring to detect when they expired. Turns out I was wrong and there was good alerting in place, but the process had other flaws.
So how can you prevent this from happening in your project? I would suggest that you keep an inventory of every certificate you use, monitor their expiration dates with alerts that fire well in advance, and regularly test the renewal process itself, not just the alerting.
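A minimal sketch of such an expiry check, assuming a plain TLS handshake against port 443; the host name and the 30-day threshold are my own assumptions:

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;

static class CertificateExpiryCheck
{
    // Pure helper: should we alert, given the certificate's expiry date?
    public static bool ShouldAlert(DateTime notAfter, DateTime now, int thresholdDays)
    {
        return (notAfter - now).TotalDays < thresholdDays;
    }

    // Fetches the server certificate over TLS and checks the threshold.
    public static bool CheckHost(string host, int thresholdDays)
    {
        using (var tcp = new TcpClient(host, 443))
        using (var ssl = new SslStream(tcp.GetStream()))
        {
            ssl.AuthenticateAsClient(host);
            var cert = new X509Certificate2(ssl.RemoteCertificate);
            return ShouldAlert(cert.NotAfter, DateTime.Now, thresholdDays);
        }
    }
}
```

Run something like `CertificateExpiryCheck.CheckHost("example.com", 30)` from a scheduled job and page somebody when it returns true.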
The funniest thing about this outage is an article in Australia suggesting it was caused by hackers... I guess I shouldn't be surprised that newspapers don't check their facts...
Last week I read this article on an experiment a team did to compare off-shoring to co-location of the team. It pretty well summarizes my experiences. I've only ever seen two acceptable off-shoring projects.
The first one was not really off-shoring. It was when I first joined Microsoft: our team was located in Sweden and one by one the team members moved over to the US, so for a while half the team was in the US and half in Sweden. We definitely saw some of the bad things about a distributed team, and we definitely saw some of the good things of having different time zones, since severe bugs could be worked on around the clock. But I think the main reason this worked was that we were a co-located team that got distributed, so we were already working well together and knew each other, which made syncing across the pond easier. Still an overhead, but not as bad as I imagine it is when hiring random people across the world.
The second good implementation I saw was at a partner company of my old (pre-Microsoft) company. What they did was fly in parts of their off-shore team for a few months each year and have them work co-located with the rest of the team. This way the team members got to know each other better and the team culture in Sweden could spread to the off-shore team. The second thing they did was to give the off-shore team separate things to work on as much as possible, keeping daily synchronization between the different time zones minimal.
I read this great article "the February Revolution" listing four things that tend to happen when great results are achieved. The thing that interested me the most was how similar at least three of the items were to how a successful military unit operates.
Though listed last, "everyone cares" is the most important point of them all. If everyone cares, that is probably enough to achieve great things. But it is also the most difficult one to achieve since it relies on each individual actually caring... In my experience, if the leaders (formal and informal) of a group care and show that they care, the rest of the group typically starts to care pretty quickly too. This takes the form of both taking action and rewarding the team for caring.
If you've ever implemented GetHashCode you probably did it the way suggested on MSDN, which is using XOR. And if you use R# you might have seen that it generates a different GetHashCode using prime numbers. So what should you do? I think there are three properties you want to aim for when implementing GetHashCode, in order of importance: equal objects must return equal hash codes, the hash codes should be well distributed over your typical inputs, and computing the hash should be fast.
Given all this, it turns out that the approach R# uses is pretty good at achieving all these goals. But the best possible approach is to pick an algorithm that is suitable for your data. For example, if your object is identified by a single integer id, the id itself is a perfect hash code, and if it consists of two small numbers, you can pack them into different bits of the hash.
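Here is a sketch of the prime-number pattern in the style of what R# generates, on a made-up Point class, with the data-specific alternative as a comment:

```csharp
// Hypothetical two-field class, just to illustrate the hashing patterns.
public class Point
{
    public int X { get; set; }
    public int Y { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as Point;
        return other != null && other.X == X && other.Y == Y;
    }

    public override int GetHashCode()
    {
        unchecked // let the multiplications overflow and wrap around
        {
            int hash = 17;
            hash = hash * 23 + X;
            hash = hash * 23 + Y;
            return hash;
            // Data-specific alternative: if X and Y are known to fit in
            // 16 bits each, (X << 16) | (Y & 0xFFFF) is a perfect hash.
        }
    }
}
```

The two primes spread each field's bits across the result, so unlike plain XOR, Point(1, 2) and Point(2, 1) do not collide.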
If you want to read more about a number of different hashing options, there is a pretty good list here with lots of good information.
This was recently sent to me, and anybody who has worked at Microsoft knows that a lot of teams (if not all) have a lot of meetings and do a lot of communication over email. The essence of the linked article is that people whose work mainly consists of meetings need to understand that people who contribute mostly outside meetings will be less effective if their day is chopped up by a bunch of meetings. I think it is hard to argue that this is wrong since, yes, meetings do take time from other things, and with fewer meetings more things tend to get done. And if you have a few meetings in a day it makes sense to try to have them back to back, I think.
However, I believe meetings are the least disruptive thing during my work day, since meetings are scheduled. Every time I walk into somebody's office to discuss something without booking a meeting, I interrupt whatever they are doing, and vice versa. But that does not mean I want to batch discussions up. Most of the time a short interruption is not that bad, and if it means somebody else on the team can quickly be unblocked without writing long emails and/or booking meetings, I think the team as a whole will be more productive. And then there are email notifications, instant messages, phone calls, etc. that cause interruptions. So I think it is more important to make sure the team can handle interruptions and defer them until they can be dealt with. My favorite method for that is the Pomodoro technique!
I am a believer in the Pomodoro technique since I do not think you can be productive and in the zone for hours on end. I do think the brain needs some time to process thoughts in the background to quickly find solutions to problems. And I've felt the feeling of accomplishment when you know exactly how much you've gotten done in a day. The manager's vs maker's schedule article is good and worth reading, but IMO it's just the tip of the iceberg and does not address the main problem. Still, it makes an important point that should not be forgotten.
IDisposable is probably one of the most abused interfaces in .Net. Apart from all the cases where you actually have an unmanaged resource to release, I've seen it used a lot of times (including by myself) just to guarantee some code is executed immediately when a variable goes out of scope. I've used this in test utilities to make sure certain validation happens before the test execution ends, but I've also seen it used in non-test code. The people who like to use IDisposable this way probably (like myself) have some experience from C++ and using smart pointers or similar there. I wish there was a similar construct in .Net where I could force execution of a certain method on an object immediately when it goes out of scope if (and only if) I explicitly want that. Kind of an IDisposable that does not generate FxCop warnings when not disposed properly...
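The pattern I am talking about looks something like this. The OnDispose name and the validation example are made up; this is the kind of "scope guard" I am arguing against, shown only so it is clear what the abuse looks like:

```csharp
using System;

// A "scope guard" that abuses IDisposable to run arbitrary code
// when a using block exits - there is no unmanaged resource here.
public sealed class OnDispose : IDisposable
{
    private readonly Action _action;

    public OnDispose(Action action)
    {
        _action = action;
    }

    public void Dispose()
    {
        _action(); // runs when the using block ends, for any reason
    }
}

class Demo
{
    static void Main()
    {
        using (new OnDispose(() => Console.WriteLine("validation runs here")))
        {
            Console.WriteLine("test body runs here");
        } // Dispose fires as the scope exits, even on early return
    }
}
```

It reads neatly, which is exactly why it spreads, but forget the using statement and nothing runs and no tool warns you.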
Anyway, the best guide I've seen so far on how to actually implement IDisposable can be found here. Follow those guidelines and do what I do: stop abusing IDisposable constructs just because you think they make the code look neat.
I just wanted to make sure you did not miss this article describing a mechanism to pause asynchronous processing. Just as the article states, this came out of a problem encountered in the UI world, and I would not expect to see it a lot outside the UI world. But it could be used for processing items in a work-flow at different priorities, i.e. pausing less important work in favor of more important work.
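To give a flavor of the idea, here is a much-simplified pause mechanism in the spirit of the article, built on TaskCompletionSource. This is my own sketch (the PauseSource name and all details are mine), not the real API from the article:

```csharp
using System;
using System.Threading.Tasks;

public class PauseSource
{
    private static readonly Task s_completed = Task.FromResult(true);
    private volatile TaskCompletionSource<bool> _paused;

    public void Pause()
    {
        if (_paused == null)
            _paused = new TaskCompletionSource<bool>();
    }

    public void Resume()
    {
        var tcs = _paused;
        _paused = null;
        if (tcs != null)
            tcs.SetResult(true); // releases everyone awaiting the pause
    }

    // Workers await this between items; it completes immediately
    // unless a pause is in effect.
    public Task WaitWhilePausedAsync()
    {
        var tcs = _paused;
        return tcs != null ? tcs.Task : s_completed;
    }
}
```

A low-priority worker would call `await source.WaitWhilePausedAsync();` at the top of its processing loop, so high-priority work can pause it between items and resume it later.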
And when you're done reading about the PauseToken you should definitely read this one with some common problems people encounter.
It's been a while since I last looked at Rx, and I must confess that my first impression was that the number of ways to do the same thing and all the extension methods were overwhelming at first. But as with any new framework you learn, you'll settle on a few to solve your most common problems after a while. Still, I never saw a really compelling reason for using Rx, and it has been very rare among my friends and colleagues. So last week I was excited to read that Netflix uses it a lot, and I'm looking forward to reading more about how they use it and what they've learned along the way.
This was brought to my attention, and I was blown away by the fact that somebody would mark classes as TestClass without any tests in them just to reuse some setup code, and then make assumptions about the order in which the methods are called. If you really want shared setup, the constructor is a great place for it, which is also why I prefer xUnit.Net: it does not have a TestInitialize attribute but uses the constructor instead. But I actually think it is better to be explicit about how you initialize your tests than to rely on an order defined by how base classes happen to be called by default. In general, the consensus is that composition is superior to inheritance when it comes to code reuse, which is why you should not put yourself in a situation where you rely on execution order between base and child classes.
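For reference, constructor-based setup in xUnit.Net looks like this. The Calculator class under test is made up for the example:

```csharp
using System;
using Xunit;

// xUnit.Net creates a new instance of the test class for every test,
// so the constructor runs before each [Fact] and Dispose after it -
// no [TestInitialize]/[TestCleanup] attributes needed.
public class CalculatorTests : IDisposable
{
    private readonly Calculator _calc;

    public CalculatorTests()
    {
        _calc = new Calculator(); // per-test setup, stated explicitly
    }

    public void Dispose()
    {
        // per-test cleanup goes here
    }

    [Fact]
    public void AddsTwoNumbers()
    {
        Assert.Equal(4, _calc.Add(2, 2));
    }
}

// Hypothetical class under test.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
```

Because the setup is ordinary constructor code, the execution order is exactly what the language guarantees, nothing more.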