You've probably heard a thousand times that a good variable name describes what it represents. Some people also think that the shorter the scope of a variable, the shorter its name should be. That may have been true when you wrote every name out by hand, but I think the best (and only) rule of thumb is that the name should be easy to read and describe what it represents. But sometimes it is hard to come up with a good name, so what do you do? In the past I did my best to come up with something good, but if a name is hard to come up with, whatever you settle on will probably not be that good. From now on I'll do something else I read about here. It is embarrassing how obvious the solution is. In TDD you make minimal changes until you have enough code to see the obvious refactorings needed, and the same thing applies to variable names. If it's not obvious at the moment what a variable should be called, just call it something and move on. A few TDD cycles later you'll probably know what a good name for that variable is, and you can refactor it then. So don't waste time trying to come up with good variable names. Move on and refactor later when the name is obvious!
The good thing about this strategy is that it applies not only to variable names. It works great on all kinds of names in your code: methods, classes, constants and so on. And yes, I'm assuming you have a refactoring tool that lets you change these names safely, so you don't have to do a search and replace, which might be a little more dangerous if all your I-don't-know-the-name-yet variables are called "x"...
Thought it was time to combine my old favorite subject (SQL) with a new interest (TDD). So how do you test drive your SQL? I think the first answer is that you don't. The database is typically something you want to fake anyway, since setting up and accessing a database is too slow when you want rapid feedback. Nowadays, when things like NHibernate are popular, I think most people don't write much SQL anyway, so there is no need to test drive the creation of SQL queries or stored procedures. But there was a time when many applications kept a lot of logic in stored procedures, and in those systems it might make some sense. There is also the fact that the database needs to be created somehow. Why not test drive the database schema?
So even though this might not be the hottest topic on the block, what alternatives do we have? First of all there is SQLUnit. The project does not seem to have been updated for several years; I guess that is because we don't put all application logic in stored procedures any more. But it is at least an attempt at a unit test framework for SQL. But why use the same language for the tests as for the implementation? Some people think this is OK, but I think there is also a danger in doing so. Switching languages typically makes you make small, silly mistakes you wouldn't make if you stuck to one language. But sometimes it might still be worth it. For example, I'd much rather use any .Net based unit test framework over SQLUnit, and an example of how that looks can be found here.
Then there is of course the option of not using any framework at all. Rolling your own may be the most efficient way of doing things, and an example of database creation in a test-driven manner without any fancy frameworks can be found here.
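To make the "test drive the database schema" idea a bit more concrete, here is a minimal sketch of what such tests could look like. I'm using Python's sqlite3 module purely as a stand-in for a real database server, and the `customer` table and its columns are made up for illustration:

```python
import sqlite3

SCHEMA = """
CREATE TABLE customer (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
"""

def make_db():
    # An in-memory database keeps feedback fast, which is the usual
    # objection to testing against a real database server.
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    return conn

def test_customer_table_has_expected_columns():
    conn = make_db()
    # PRAGMA table_info returns one row per column; index 1 is the name.
    cols = [row[1] for row in conn.execute("PRAGMA table_info(customer)")]
    assert cols == ["id", "name"]

def test_name_is_required():
    conn = make_db()
    try:
        conn.execute("INSERT INTO customer (id) VALUES (1)")
        assert False, "expected a NOT NULL violation"
    except sqlite3.IntegrityError:
        pass

test_customer_table_has_expected_columns()
test_name_is_required()
```

The point is not the framework but the rhythm: write a failing assertion about the schema, then grow the CREATE TABLE statement until it passes.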
As usual the answer is it depends. If you insist on looking only at the short term effects of TDD, for example, there is no doubt in my mind that it is a waste of time. Why deliver great software in six months when you can deliver decent software in four? Personally I believe there is too much software out there that started as "a little temporary solution we will only use a few times" and later turned out to be used, tweaked, fixed and updated for years and years. You never know what will happen, except that you will probably have to maintain the software you write for longer than you really want to. And the best way to not spend time on old stuff you wrote ages ago is to make sure it works in the first place, and that changes and updates can be made quickly (preferably by someone else, without asking you for help). So in the long run, TDD is definitely not a waste of time in my book. There is also some research on this if you'd like to crunch numbers.
So what about pair programming? As pointed out here, you do much more than just write code when you develop software. In certain situations, such as spreading knowledge, pair programming is a no-brainer. One experienced person showing someone else how things work while new features get created is probably more cost effective than having somebody sit for days by themselves trying to figure out how the code works. And two skilled developers probably solve any problem faster together than either would alone. Since two people look at the code all the time, you will probably also end up with a better design, because it is the work of two rather than one. And you always have (at least) two people who understand each part of the code, so if somebody gets seriously ill it is not a catastrophe. But all of this (except maybe teaching novices) is definitely a waste of time if you insist on looking at the short term consequences. Then again, software always survives longer than you think, and everything you do in your development process should be measured against the total life cycle of the software, not just until the next release (or on a day to day basis).
I think we all agree that quality costs, but that it pays in the long run. And there is always a breaking point; some things are just not worth the price of quality. But before you can make that call you must know the real costs and the real benefits. As noted above there is some research on TDD, and since developers don't just write code, two developers pairing is not half as fast as two working independently. And if you really need numbers, you should try TDD and pair programming and actually measure cost versus benefit, rather than focusing on the short term cost in time. Remember, quality typically pays off in the long run, both in terms of fewer bugs and happier staff who stay on the job.
I recently needed to move all my private domains to a new hosting solution, and the new host does not allow any access other than FTP. I considered setting up a mirror on my Ubuntu box at home, but that didn't feel like the best solution. So I figured there must be a way to mount a file system over FTP. And there are several, I think. The one I tested, and that has looked good so far, is curlftpfs. The only tweak not mentioned (in an obvious place) on the curlftpfs site is that you need to put the following in your /etc/fstab to make sure all users (and not just root) have access to the mounted file system:
curlftpfs#ftp-user:ftp-password@ftp-host /mount/point fuse rw,uid=500,user,allow_other,noauto 0 0
"allow_other" is the magic enabling all users access. Since it uses FTP under the hood you will notice the file system is quite slow at times. Especially when editing large files. But compared with having a local mirror and updating all changed files when needed I think this a pretty convenient way to access the files on the FTP server.
One thing that struck me as odd when I first started to look at PowerShell, and at how to include another script file in order to gain access to its functions, is that all the examples I found use an absolute path to include scripts (e.g. ". C:\Some\Path\Script.ps1"). To me, using absolute paths is never good, so I tried a relative path like this: ".\Script.ps1"
The result: no errors when running the script, but none of the functions from the included script are defined outside it. ". Script.ps1" is even worse, since it does not even execute the script. What you have to do is this: ". .\Script.ps1"
Remember to use "double dots" with a space between them: the first dot is the dot-sourcing operator, and ".\" is the relative path to the script. I cannot understand why all the tutorials I found use the absolute path approach, since relative inclusion is much more likely to keep working when the scripts move.
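A tiny example makes the difference clearer. Script.ps1 and the function name are just illustrative:

```powershell
# Script.ps1 -- defines a function we want to reuse elsewhere:
#   function Get-Greeting {
#       param([string]$Name)
#       "Hello, $Name"
#   }

# Main.ps1, in the same directory as Script.ps1:
. .\Script.ps1        # dot-sourcing: Script.ps1 runs in the CURRENT scope,
                      # so its functions are still defined afterwards
Get-Greeting "World"  # works

.\Script.ps1          # plain invocation runs in a child scope; any functions
                      # it defines disappear when the script returns
```

One caveat worth knowing: ".\" is relative to the current working directory, not to the location of the calling script, so this only works reliably when you run Main.ps1 from its own directory.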
So it has finally been released: SCOM 2007 R2, where one of the big new features is monitoring of Unix/Linux computers (which happens to be what I've been working on for the last two years). There is an evaluation copy available here. More community resources are available here, here and here.
General availability is July 1st 2009 (i.e. when everybody can get it). Right now only existing customers with the correct upgrade agreements can download this release, as far as I understand.
Since I moved to Redmond I've been looking for a regular coding dojo in the area, but I've not been able to find one. So at first I planned a number of dojos within my own team, but I felt that was not enough. I think an important part of each dojo session is meeting other people and learning their TDD tricks. So since I couldn't find a dojo in the area, I planned one myself and invited people on one of the internal mailing lists (I also set up a separate internal mailing list for the dojo). I figured Microsoft would have more than enough people interested in attending coding dojos.
So yesterday was the first dojo session I arranged that was open to all Microsoft employees. We did the kata I'm most familiar with: MineSweeper. For the first time we tried to build parts rather than sticking to the end-to-end type of tests. It was an interesting experience, and we got further toward a complete solution than I've ever seen before. I'm not sure whether that was because we approached the problem slightly differently or because the group was fairly small. The important part is that I learned a few small tricks, and I'm pretty sure others did too.
So now I just have to plan the next session...
If you're negotiating a contract with a customer and you want to be agile, I think one of your hardest problems is convincing the customer of the value of an agile development process. Because you don't really want to be negotiating, right? And most customers want a fixed price so they know at most how much money they'll spend. I think this compilation of contract variants is great, since it helps you (and your customer) compare different types of contracts and collaboration. The most interesting is of course the "money for nothing - changes for free" contract (more on it here). What it essentially means is that the supplier estimates the initial work just as if it were a fixed price contract. That price is the target price. Rules for what happens if the target is exceeded may vary, but that is nothing new. The interesting part is that if the project is canceled prematurely, the supplier gets some percentage of the remaining sum without doing anything (money for nothing). And the customer may change anything and everything at will during the project (changes for free). The idea is that if enough customer value is delivered early, the project ends early. This is good for the supplier, since there is money for nothing. And the customer gets maximum value in the shortest time, probably cheaper too, since the project can be canceled whenever the customer is happy enough. We all know that a fixed price and a cost ceiling in reality mean the same thing. So even if the customer pays "money for nothing", it is still better than paying for things you don't really want...
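To make the arithmetic concrete, here is a toy calculation of what an early cancellation costs the customer. The 20% cancellation fee and all the numbers are made up for illustration; the actual percentage is whatever the contract says:

```python
def money_for_nothing(target_price, spent_so_far, cancellation_fee=0.20):
    """Total the customer pays if they cancel early under a
    'money for nothing - changes for free' contract.

    target_price:     the up-front estimate, as in a fixed price contract
    spent_so_far:     what the customer has already paid for delivered work
    cancellation_fee: the supplier's share of the remaining sum
                      (the 20% default is a made-up example)
    """
    remaining = target_price - spent_so_far
    return spent_so_far + cancellation_fee * remaining

# The customer is happy after 60% of a 1,000,000 project and cancels:
payout = money_for_nothing(1_000_000, 600_000)
print(payout)  # 600,000 + 20% of 400,000 = 680,000.0
```

So in this made-up case the customer walks away having paid 680,000 instead of the full 1,000,000, and the supplier pockets 80,000 for work never done. Both sides come out ahead of the fixed price alternative, which is the whole point of the contract.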