A few days ago there was an article with 20 tips for writing a good stored procedure (requires free registration to read). The problem is that only 12 of the tips are really good (4 are bad and 4 are neither good nor bad). So let me go over the tips one by one and comment on them:
This rule states that no parameter should be of a primitive type (e.g. int, double) nor a string. Instead, new objects should be created to represent what the parameters are. Is the integer really representing money, or a coordinate? These small classes also create great places to put the methods that manipulate whatever the primitive type represents.
When I first tried the object calisthenics this was actually one of the harder rules. Not by itself, but because of the ninth rule: don't use getters or setters. Together they really make you think about how to represent things in your code.
By itself I think this rule is one that easily translates to your real code. Making sure you're passing objects to your methods that actually are what they represent, rather than primitive types that merely stand for something, is generally a good thing if you some day have to change the representation, e.g. when money should become a double rather than an integer.
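To make the rule concrete, here is a minimal sketch in Java (the language of Jeff Bay's original essay); the Money class, its representation, and its methods are my own illustration, not code from the article:

```java
// Illustrative sketch: wrapping an int that represents money in a
// small class. The representation is hidden, so switching to
// another type later only touches this class, not every caller.
public class Money {
    private final int cents;

    public Money(int cents) {
        this.cents = cents;
    }

    // behavior that manipulates the primitive lives here
    public Money plus(Money other) {
        return new Money(cents + other.cents);
    }

    public int inCents() {
        return cents;
    }

    public static void main(String[] args) {
        Money price = new Money(250);
        Money tax = new Money(50);
        System.out.println(price.plus(tax).inCents()); // prints 300
    }
}
```

Note that inCents is a getter, which the ninth rule would forbid; in a full calisthenics exercise the behavior that needs the amount would move into Money instead.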
Not only is a long line with a lot of dots harder to understand; it usually means that your object is using not only the objects it knows but also the objects their "friends" know. Having only one dot per line does not just make your code more readable. It also helps you put the code in the right place. Consider this method:
public void Update()
{
    context.CurrentSession.Update();
}
Changing it to this is not the intention of this rule:
public void Update()
{
    Session session = context.CurrentSession;
    session.Update();
}
Rather, the rule wants you to do this:
private void Update()
{
    context.Update();
}
And here the Update method of the context object has been left out, but it would simply call Update on the CurrentSession object. All in all, I think this is a rule that is suitable for production code too.
I've been sitting in a lot of code reviews and code inspections where somebody suggests a change to optimize something. The response from the author is almost always: "but that's premature optimization!" End of argument... But I almost never hear anybody argue the opposite. I came to think about this again a few days ago when it happened once more, and just a few hours later I read this post on the subject.
Premature optimization is not black or white. Some things we know are bad for performance (like memory allocations and copies in a high-performance C program), and nobody questions optimizing those without actual proof that they are a problem. As the post above describes, it's all about paying a tax up front that is big enough to avoid a large fine later. I don't think the idea behind the quote about premature optimization is to make no optimizations at all; it's to make a reasonable amount of them. It's the same kind of trade-off you face when writing tests: is it worth adding a test for code that you know will pass, just to document the expected behavior? Sometimes yes, but not always, as this post points out.
A few weeks ago I was introduced to the object calisthenics described by Jeff Bay in the book The ThoughtWorks Anthology. The object calisthenics are a way to practice writing object oriented code. The nine rules are not intended to be used in your everyday work; they are intended to be applied to a small problem such as a coding Kata. The idea is that by applying these strict rules to a small problem you'll learn to write better code.
So I decided to try this on the MineSweeper Kata. In the beginning I decided to conform to the rules all the time, but pretty soon I changed my mind, wrote a working solution and then started to refactor it to conform to the rules. I think this was a mistake. Some design decisions turned out to require very big refactorings to conform to the rules, and I never actually got all the way. But that doesn't mean I didn't learn anything. First of all, I think I experienced a variant of "you can't test in quality, you have to build it in from the start": I should have stuck with the initial strategy and made sure the code followed the rules all the time. I also learned that classes I felt were really small and doing only one thing could actually be split up further when I had to conform to the rules. It reminds me of when people thought atoms were the smallest building blocks of the universe, and then it turned out there was something smaller...
So all in all I think doing a coding Kata while applying the object calisthenics rules will improve my ability to write object oriented code. And it will be interesting to see how it works out in a coding dojo. By now you're probably wondering what the nine rules are. They are:

1. Use only one level of indentation per method.
2. Don't use the else keyword.
3. Wrap all primitives and strings.
4. Use first-class collections.
5. Use only one dot per line.
6. Don't abbreviate.
7. Keep all entities small.
8. Don't use any classes with more than two instance variables.
9. Don't use any getters, setters or properties.
I'll go into more detail of these nine rules over the next few days.
First of all, a method with a lot of different indentation levels is harder to understand than a method with less indentation. I also think that each level of indentation typically means you're doing one more thing. Look at this code:
public void Format()
{
    for (int row = 0; row < rows; row++)
    {
        for (int column = 0; column < columns; column++)
        {
            // format the square at (row, column)
        }
    }
}
The nested for loop introduces two levels of indentation. One of them can be refactored into a second method, so that it looks like this:
public void Format()
{
    for (int row = 0; row < rows; row++)
    {
        FormatRow(row);
    }
}

private void FormatRow(int row)
{
    for (int column = 0; column < columns; column++)
    {
        // format the square at (row, column)
    }
}
Some people might say that this is just another level of indirection and that we have now made the Format method harder to understand, since we also have to look in the FormatRow method to understand what is happening. I think that is only true if you use bad method names. I'm perfectly fine with assuming that FormatRow actually formats a row without looking at its code. So if extracting code to a separate method makes your code less readable, you're probably not giving the method a very good name.
Some people also scream that more methods mean bad performance. First: trust the compiler's ability to optimize code for you. Second: only optimize what is proven to be a bottleneck. Readability is preferable to any theoretical performance issue; only real, measured performance problems should be addressed.
Last but not least, this rule is not intended for your production code. We're talking object calisthenics, remember? If you want to apply this rule to your real code, you probably want to phrase it as: use as few levels of indentation as possible in each method.
Some people think that the if branch should be the "everything is good" branch, with the else branch handling the error case. Others think the if branch should be as short as possible, since that makes it easier to understand what would bring us into the else branch. But this rule is not about which guideline is best. It is about putting you into a situation where you have to consider other ways to implement your conditional logic, polymorphism and the Null Object pattern being two good alternatives.
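As a small sketch of the Null Object alternative, in Java with made-up names (none of this is code from the kata): the else branch that would guard against a missing collaborator disappears, because an "absent" implementation carries the do-nothing behavior:

```java
// Null Object sketch: callers call log() unconditionally and never
// need an if/else to check whether logging is configured.
interface Logger {
    void log(String message);
}

class ConsoleLogger implements Logger {
    public void log(String message) {
        System.out.println(message);
    }
}

class NullLogger implements Logger {
    public void log(String message) {
        // deliberately does nothing
    }
}

public class NullObjectDemo {
    // polymorphism selects the behavior once, up front
    static Logger loggerFor(boolean verbose) {
        return verbose ? new ConsoleLogger() : new NullLogger();
    }

    public static void main(String[] args) {
        Logger logger = loggerFor(false);
        logger.log("silently dropped by NullLogger"); // no output
        loggerFor(true).log("hello");                 // prints hello
    }
}
```

The design choice is that the decision is made once, where the Logger is created, instead of being repeated as if/else at every call site.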
I recently experienced some weird behavior with FxCop. The classic "everything works fine on my machine but not on the build server" situation. In this case I got a few FxCop warnings from the build server that I could not reproduce when I ran FxCop locally in Visual Studio. It turns out I had to turn on code analysis in the build (Project properties -> Code Analysis), since the FxCop suppress attributes used to suppress warnings on specific methods are conditional attributes. More on the background here.
Fourth MSFTCorpDojo today. MineSweeper and MicroPairing this time too. As a result of the retrospective from last time, this time we did BDD-style testing (I've added one of the test classes below as an example). We decided to start by implementing a parser that parsed the input and created an internal representation of it that we could later use to generate the output. By the end of the session we had a pretty complete parser with error handling for all kinds of bad input. This is one of the few times I've seen any real error handling in a dojo.
In the retrospective we said that next time we should implement the other part: assume a parsed input in some internal representation and generate the output from that. We also talked about how we do MicroPairing, switching one person in the pair every seven minutes. We decided that next time we will instead switch one person in the pair every time the keyboard gets passed. I guess that will mean less pair programming, since there is no real ping-pong but rather a circular queue. But in our sessions almost everybody participates all the time anyway, so that might not necessarily be a problem.
public class Given_a_single_1x1_fieldset_with_no_bombs
{
    private string input;
    private Field field;

    public Given_a_single_1x1_fieldset_with_no_bombs()
    {
        input = "1 1" + Environment.NewLine +
                "." + Environment.NewLine +
                "0 0";
        field = FieldParser.Parse(input).Single();
    }

    [Fact]
    public void It_should_have_one_row()
    {
        Assert.Equal(1, field.Rows);
    }

    [Fact]
    public void It_should_have_one_column()
    {
        Assert.Equal(1, field.Columns);
    }

    [Fact]
    public void It_should_have_an_empty_square_0_0()
    {
        Assert.False(field.IsBomb(0, 0));
    }
}