I had an interesting email conversation here at work recently, and as usual I can't share it all, but I can share some of it because it's generally applicable and perhaps others could benefit from my response. The particulars are really not important in any case; the concept is what counts.
What happened was that one of my internal groups did a performance study and asked me for some feedback. Now, I think this was actually a great little study, but I did have an important criticism. It's something that I've written about before. Here's what I wrote them, almost word for word.
Thanks so much for sharing this with me. This is really a great effort on your part and I’m glad to see that you learned some things about how [your system] interacts with the CLR from it – I’m sure that knowledge will help you to make better decisions.
Sadly, that’s the good news. The bad news is that I’m not sure you have the right data. You might, but I’m not sure. Let me explain why:
When doing performance tuning, and more importantly performance planning, context is everything. It’s very difficult to interpret costs without context, and so it’s basically impossible to say whether something is “good enough” or “bad” or anything like that. It might not even be relevant, much less “good.” It all depends on context.
In this particular case, these results are hard to interpret because they aren’t put in the context of representative customer scenarios. The low-level cost analysis is great, but are these the right costs? How do they play in a customer scenario? When you add the customer side of the equation, are the costs better, worse, or the same?
You might want to take a quick look at these web pages for more background; I talked about this in the context of measuring the raw cost of exceptions a few months ago.
The comments in the problem area are especially interesting.
So getting back to this particular case, here are some things I would do:
To summarize: you have great-looking raw data for some dimensions that seem interesting, but no way to interpret them yet. Calibrate against some representative use cases. Correlate your use cases to consumption metrics like the ones you have and see where you stand. Hopefully the things you have already measured will turn out to be the dominant costs; if not, look into whatever seems most important.
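To make the “correlate your use cases to the consumption metrics” step concrete, here is a minimal sketch of the idea. The operation names, unit costs, and scenario counts below are all invented for illustration; the point is only the shape of the calculation: weight each raw per-operation cost by how often a representative scenario actually performs that operation, then see which cost dominates.

```python
# Hypothetical raw microbenchmark results: cost per operation, in microseconds.
# These numbers and operation names are made up for illustration.
unit_cost_us = {
    "parse": 2.0,
    "allocate": 0.5,
    "serialize": 8.0,
}

# How often each operation occurs in one representative customer scenario
# (also invented numbers).
scenario_counts = {
    "parse": 1000,
    "allocate": 5000,
    "serialize": 50,
}

# Weight the raw costs by scenario frequency to get in-context costs.
in_context_cost = {
    op: unit_cost_us[op] * scenario_counts[op] for op in unit_cost_us
}

total = sum(in_context_cost.values())
dominant = max(in_context_cost, key=in_context_cost.get)

for op, cost in sorted(in_context_cost.items(), key=lambda kv: -kv[1]):
    print(f"{op}: {cost:.0f} us ({100 * cost / total:.0f}% of scenario)")
print(f"dominant cost: {dominant}")
# → dominant cost: allocate
```

Notice that the most expensive operation per call (“serialize” in this made-up data) is not the dominant cost once the scenario is factored in; that is exactly the kind of conclusion you cannot reach from the raw numbers alone.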
Remember, as in the article, you need to measure in context so that you can see how your code affects a working system. If your system uses, for instance, a lot of L2 cache, you might not notice this when running it alone; but you would see that it was degrading the performance of other code disproportionately.
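Here is a rough sketch of what “measuring in context” can look like mechanically. This is illustrative Python with made-up workloads; interpreter overhead swamps real L2 cache effects, so treat it as the shape of the harness (component alone vs. component alongside a neighbor that competes for the same resource), not as an actual cache benchmark.

```python
import time

def measure(fn, reps=5):
    """Return the best wall-clock time for fn over a few runs."""
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in for "your code": touches a modest working set.
small = bytearray(64 * 1024)
def my_code():
    total = 0
    for i in range(0, len(small), 64):
        total += small[i]
    return total

# Stand-in for "the rest of the system": a neighbor that churns
# through a much larger working set, competing for the cache.
big = bytearray(16 * 1024 * 1024)
def neighbor():
    total = 0
    for i in range(0, len(big), 4096):
        total += big[i]
    return total

# Measurement 1: your code by itself.
alone = measure(my_code)

# Measurement 2: your code with the neighbor running before each
# iteration, so your code's working set is no longer warm.
def in_context():
    neighbor()
    my_code()

# Subtract the neighbor's own cost to estimate your code's
# in-context cost; the gap versus `alone` is the context effect.
with_neighbor = measure(in_context) - measure(neighbor)

print(f"alone: {alone:.6f}s  in context (approx): {with_neighbor:.6f}s")
```

In a real system you would run the actual customer scenario as the “neighbor,” and you would expect the in-context number, not the isolated one, to predict what customers experience.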
The concept of context comes up a lot in performance work. For instance, when I wrote a couple of weeks ago about Performance Signatures, that was another way to try to assess the wisdom of using certain methods from the context in which they are going to be used. Context is critical.