A while back I did an experiment where it turned out that allocating objects was better than pooling them. Since then I have run into a few cases where allocating actually turned out to be a bad thing. I have never seen this be a problem in a client application, but in servers allocating a lot of objects can be a problem even if the objects are very short-lived. What happened to me was that so many objects were created that the garbage collector wanted to run several times a second to clean things up. Each time, a few of these short-lived objects would escape from generation zero to generation one, making every subsequent garbage collection more expensive. This time I was lucky because the object in question was a byte array buffer that could be both reduced in size and reused with some simple logic.
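The post does not show the actual code, but a minimal sketch of the reuse idea might look like this (the `BufferHolder` name and the per-thread strategy are my illustration, not the original implementation):

```csharp
using System;

// Hypothetical sketch: hand out one reusable buffer per thread instead of
// allocating a fresh byte[] for every request. The buffer only grows when
// a caller needs more room, so steady-state traffic allocates nothing.
public static class BufferHolder
{
    [ThreadStatic]
    private static byte[] _buffer;

    public static byte[] GetBuffer(int minimumSize)
    {
        if (_buffer == null || _buffer.Length < minimumSize)
        {
            _buffer = new byte[minimumSize];
        }
        return _buffer;
    }
}
```

A per-thread buffer avoids locking, but it only works when the buffer is used and released within a single call on the same thread.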
What might be harder is when you add async/await and tasks into the mix. The reason is that async/await is very good at making your asynchronous code easy to understand, but at the same time it can create a lot of objects if you just do the naive implementation of what you want. However, I still think the naive approach is the preferred one, since it reduces the risk of creating something that does not do what you want under all circumstances. But it is always good to know what is happening, and that is why you should read this article that explains how to use the memory profiler, using tasks as the example.
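To make the allocation cost concrete, here is a small, hypothetical example of one common trick: caching the completed task itself so repeated callers on the hot path do not pay for a new async state machine each time. The `ConfigurationReader` name and the caching scheme are my illustration, not from the article the post links to:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical sketch: a reader whose value only needs to be produced once.
public class ConfigurationReader
{
    private Task<string> _cachedTask;

    // Naive version: every call builds a new async state machine and a
    // new Task, even when the work has already been done.
    public async Task<string> ReadNaiveAsync()
    {
        await Task.Yield(); // stands in for real I/O
        return "configuration";
    }

    // Allocation-conscious version: do the async work once, then hand out
    // the same completed Task<string> to every later caller.
    public Task<string> ReadAsync()
    {
        return _cachedTask ?? (_cachedTask = ReadNaiveAsync());
    }
}
```

The naive version is easier to reason about; the cached version is the kind of optimization you reach for only after the profiler tells you the allocations actually matter.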
I like to abstract diagnostics (logging and performance counters) into a separate interface or abstract class. But it becomes tedious to manually keep the fake diagnostics (used to test that the proper diagnostics calls are made), the dummy diagnostics (for when I just need something to pass around), the console loggers (used for command line applications) and the production logging in sync with the interface. And it is boring to do by hand, since the first three essentially share the same implementation for every method.
So what I've started to do is to use T4 to generate those three files, and the interface as well. Production logging is still updated manually, because I think it is important to consider exactly which events and performance counters should be triggered by each diagnostics call. And I want to keep my diagnostics definition simple. So this is what I do:
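To show why generation pays off, here is a hypothetical example of what the generated files might contain for a definition with a single event, `RequestFailed`. The names are mine; the point is that the fake and the dummy are purely mechanical, which is exactly what makes them good T4 output:

```csharp
using System;

// Generated interface: one method per diagnostics event in the definition.
public interface IDiagnostics
{
    void RequestFailed(Exception error);
}

// Generated fake: records each call so tests can assert it happened.
public class FakeDiagnostics : IDiagnostics
{
    public int RequestFailedCount;

    public void RequestFailed(Exception error)
    {
        RequestFailedCount++;
    }
}

// Generated dummy: does nothing, just something to pass around.
public class DummyDiagnostics : IDiagnostics
{
    public void RequestFailed(Exception error)
    {
    }
}
```

Adding a new event to the definition then regenerates all of these in one go; only the production implementation needs human attention.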
I find it easy to work with the T4 template files (even though it feels a little like being back in classic PHP/ASP style programming), and it does remove some boring tasks. The error messages you get when there are errors in your template have been pretty good for me, so it has been easy to correct the problems.
The only downside is that when you change the included file you need to transform the templates manually in Visual Studio, unless you install an SDK that defines a build target you can use. This is because the T4 transformation happens when the template file is saved, not when one of its dependencies changes. I did however find this clever way of getting around it, which is essentially running the tool manually as a build step. I prefer that over forcing everyone, including the build servers, to install additional SDKs. I only hope this will just work in a future version of VS.
I recently played around with Azure web sites and wanted to analyze the IIS logs that Azure generates, but none of the tools I tried could parse the file I downloaded. It turned out that the problem was the header line of the file, which looks like this:
# date time s-sitename cs-method ...
That is apparently not correct W3C log format. Once I changed the line to this:
#Fields: date time s-sitename cs-method ...
With that change, all the tools I tried were happy with the log files.
My impression of most major west coast cities like Seattle, San Francisco and Los Angeles is that people in general are very healthy. And Redmond, where Microsoft has its HQ, is even the bicycle capital of the Northwest (I guess anything can be the capital of anything if you just constrain the geography in a convenient way). People who like to exercise also typically like to be environmentally friendly. And in the spirit of making applications more environmentally friendly, there are a few new patterns you need to learn. I have been using these new patterns together with a co-worker for exactly one year now. The first pattern is a replacement for the polluting factory pattern: the farmers market pattern!
The farmers market pattern is similar to the factory pattern in that it is used to create other objects, but it does not have all the bad properties of a factory, such as being far away from the object and not caring about the object's community. Factories also tend to keep the benefit they create for themselves. The farmers market pattern instead gives you a local object that can create your objects in a local, and hence more environmentally friendly, way. Here is what it looks like in its simplest form:
public class Foo
{
    private Foo()
    {
    }

    public static class FarmersMarket
    {
        public static Foo Create()
        {
            return new Foo();
        }
    }
}
Having the farmers market class local like this will save CPU cycles during compilation, making sure your application is produced with a minimal carbon dioxide footprint. Tomorrow I'll show you the hybrid pattern.