I spent half of last week at the Alt.Net Seattle Conference 2011. It started with two days of workshops and ended with two days of open spaces. For me the four-hour workshops did not work well. The intention was to give us lots of hands-on experience, but in practice many of the workshops felt like a presentation with some extra time to experiment. Personally I'd rather have shorter sessions, giving me the opportunity to hear about more different things. For the open spaces I also missed the concept of starting with a few lightning talks followed by open spaces, rather than just having a lot of open spaces. But all in all it was a good experience.
I started with a workshop on WCF Web API, which was interesting, especially since the examples showed how easily I could customize my services. Second was a workshop where we discussed organizational and team health and how to "fix it". Lots of interesting discussions there. Then there was a great introduction to Reactive Extensions. It made me think about what an IObservable wrapper for CCR would look like (I guess I'm not the first one). Definitely need to look into that some more. The last workshop for me was on advanced threading. It was a good lecture with some really good points, but not really "advanced" in my opinion. It was certainly "advanced" if you don't work a lot with threads, I guess.
The next two days were open spaces, and the highlights for me were a good discussion on cheating for benefit, Insertion of Confusion containers and Rabu. I wonder if it's just me or if open spaces are always best in the beginning... There was also an interesting side track where people worked together on open source projects. I kept away from that this time...
As a final note to self: the next time I start a company I'll make sure to pick a name that makes food sponsoring easy (a local company provided cheeseburgers for lunch).
Lately I've been asked to put together a list of best practices for writing tests. And I fell into the trap. I know how important the right choice of words can be, and still I didn't think twice about the request. When talking to some people on my team it was obvious that while the essence of the suggested practices was the same, the details differed slightly. And then one colleague mentioned that there are no best practices, only good practices. This is not a new topic and I plead guilty to translating "best practice" into "good practice". If something is "best" it implies there is no (or little) point in looking for better alternatives. If something is "good" it implies you don't know if there is something better, and you should continue looking for improvements. We also have to remember that context makes some practices better than others.
I'm reusing the title from an article in the latest MSDN Magazine, and the reason is that I think the article missed one obvious solution. The cooperative solution described in the article is interesting but also hard to implement, since it implies a protocol between the outer and inner role. Load balancing using a random algorithm would be a simpler way to achieve fairness and high availability for your services. Random-based load balancing actually comes in two flavors.
First we have "session-seeded" random. This means that you use a session identifier to seed your choice. Assuming that your session identifier is generated (a GUID for example) you could just take the last byte (assuming you have fewer than 256 internal instances to load balance over) and do a modulo operation with the number of internal instances you have. This way each session will always talk to the same internal endpoint. It is very similar to the static approach described in the article, with the advantage that it works regardless of the number of internal endpoints. And it also keeps working as roles are added or removed.
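A minimal sketch of the session-seeded pick, assuming GUID session identifiers; the plain endpoint list here simply stands in for whatever your RoleEnvironment query returns:

```csharp
using System;
using System.Collections.Generic;

public static class SessionSeededBalancer
{
    // Picks the same endpoint for a given session every time by using
    // the last byte of the session GUID as the "seed".
    public static T SelectForSession<T>(Guid sessionId, IList<T> endpoints)
    {
        if (endpoints == null || endpoints.Count == 0)
            throw new ArgumentException("No endpoints to balance over.");

        byte lastByte = sessionId.ToByteArray()[15]; // last byte of the GUID
        return endpoints[lastByte % endpoints.Count];
    }
}
```

Because the choice depends only on the session identifier and the endpoint count, repeated calls with the same GUID land on the same instance for as long as the instance count is stable.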
The second alternative is to use pure randomness. This should be enough if there is no caching in the internal role. Getting the endpoint can then be implemented really easily with a little LINQ trick. First you need a nice extension method for IEnumerables.
// A shared Random: creating a new Random per call seeds it from the
// clock, so rapid successive calls can produce identical orderings.
private static readonly Random random = new Random();

public static T SelectRandom<T>(this IEnumerable<T> collection)
{
    return (from item in collection
            orderby random.Next()
            select item).FirstOrDefault();
}
Then you use that to pick a random endpoint.
Naturally you could use a caching mechanism like the one described in the article so that you don't have to query the RoleEnvironment for every endpoint you need.
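Putting it together, a hedged sketch of such a cache; the role name "Backend" and the stand-in endpoint strings are invented, and a plain list replaces the actual RoleEnvironment query so the example is self-contained:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class EndpointCache
{
    private static readonly Random Rng = new Random();

    // Cached endpoint list; in a real role you would populate this from
    // RoleEnvironment.Roles["Backend"].Instances (the name is a placeholder)
    // and refresh it when the RoleEnvironment.Changed event fires.
    private static readonly Lazy<IList<string>> Endpoints =
        new Lazy<IList<string>>(QueryEndpoints);

    private static IList<string> QueryEndpoints()
    {
        // Stand-in for the RoleEnvironment query.
        return new List<string> { "10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000" };
    }

    public static string PickRandomEndpoint()
    {
        // Same LINQ trick as above: order by a random key, take the first.
        return Endpoints.Value.OrderBy(_ => Rng.Next()).First();
    }
}
```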
I was recently involved in yet another discussion at work on the topic of extracting methods from larger methods just for readability. Just like I identified before, it seems to be a trust issue; you don't trust that a method does what it says it does. We have to respect that people have different preferences, but I started to wonder whether both extremes are really equivalent, making it only a matter of preference. This is what I came up with.
The benefit of breaking a large method up just for readability is that those extracted methods may actually be reused in the future. Another benefit is that when you look at any given method it is short and simple. If I remember correctly, the human brain can hold around seven things in working memory. So in order to remember larger amounts of data we group data. The typical example is remembering a sequence of numbers. If you try to remember the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 as individual numbers it will take ten "memory blocks" in your brain, which is probably more than you can handle. So instead you'll group the numbers and remember 123, 456, 789, 10 plus the fact that the first numbers should be split up. That only takes five memory blocks. Since it's actually a sequence you will probably even optimize this into just one memory block: "a sequence from one to ten". I think breaking larger methods up into smaller methods achieves the same thing. If you don't know what a called method does you can always look it up, and from that moment on it is only one memory block for that whole function. So even if you think breaking up large methods is worse for you, when you try to understand something that is broken up it should be fairly easy to follow the flow with today's IDEs. The opposite however is not true, I think. A large method has to be broken up into blocks by the reader every time, regardless of whether that is your preferred way or not. In this case you never get any help breaking the method down into smaller pieces that are easier to understand.
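As an illustration (the invoice names and rates are invented for the example), each extracted helper becomes a single "memory block" when reading Total:

```csharp
using System.Linq;

public static class InvoiceCalculator
{
    // Each helper summarizes one chunk of the calculation, so the
    // reader can treat it as one "memory block" instead of re-deriving it.
    private static decimal Subtotal(decimal[] prices) { return prices.Sum(); }
    private static decimal Discount(decimal subtotal) { return subtotal > 100m ? subtotal * 0.10m : 0m; }
    private static decimal Tax(decimal amount) { return amount * 0.25m; }

    public static decimal Total(decimal[] prices)
    {
        // Reads as three named steps rather than one wall of arithmetic.
        var subtotal = Subtotal(prices);
        var discounted = subtotal - Discount(subtotal);
        return discounted + Tax(discounted);
    }
}
```

Inlining the three helpers back into Total gives the exact same behavior; the only thing lost is the names that did the chunking for the reader.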
This does not mean that you should always break methods up, however. The litmus test is whether your extraction of methods means you end up with a lot of methods that have a lot of arguments. In my opinion such an extraction will not improve the readability of your code. Does that mean you should keep the large method? Well, actually I think this is a sign of something larger being wrong, so you need to rearrange things more, because every method extracted should be an improvement.