If you know what you're looking for in the user's browser history, there is a pretty simple way to check whether the user has visited a certain site recently. Basically, you create an invisible iframe with the link(s) you want to check and then use JavaScript to query the appearance of the links: CSS tells you whether a link is visited or not. A more detailed description of how this works can be found here. The way this exploit is used there is actually quite nice, I think, since it enhances the user experience. I have no problem with ads customized to match my browser history. I usually don't see ads at all because of my ad blocker, but if I could get ads I'm actually interested in, that would also enhance my user experience. So far no harm done. I guess the problem with this exploit is that phishing sites, like those impersonating a bank or PayPal, could customize their phishing attack to match the bank (or other service) the user has actually visited recently.
If you use Firefox (which has tried to fix this since 2002) there is a plugin that fixes it.
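A minimal sketch of the filtering logic in JavaScript. The style lookup is injected as a parameter here so the idea can be shown outside a browser; in a real page it would be something like `url => getComputedStyle(linkFor(url)).color`, with the page's CSS giving `a:visited` a known color. The names and color values below are my own assumptions, not from the article.

```javascript
// Sketch of CSS :visited history sniffing. In a browser you would create
// hidden <a> elements for each URL (e.g. inside an invisible iframe) and
// read their computed color; here the lookup is injected so the filtering
// logic stands alone.
function visitedLinks(urls, getLinkColor, visitedColor) {
  // a URL counts as visited when its link renders in the :visited color
  return urls.filter(url => getLinkColor(url) === visitedColor);
}

// In a real page the lookup could be (assumption, not from the post):
//   const getLinkColor = url => {
//     const a = document.createElement("a");
//     a.href = url;
//     hiddenContainer.appendChild(a);
//     return getComputedStyle(a).color;
//   };
```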
When adapting to the agile philosophy, some managers get carried away and want to spend some money on improving the team's productivity. Especially if an external consultant is coaching the team, the manager tends to ask the coach what they should buy. Often the budget is presented as "$X per developer". As a coach it is always nice to hear that management recognizes the benefits of the new methodology and wants to improve it even more, but only once in a while is the question asked correctly. First of all, the question should be addressed to the team, not to the coach. A very important part of the agile process is trusting the team to improve its own process. Second, the budget should be for the team and not for each member of the team (how you present the budget makes a big psychological difference).
So now that the team has its fate in its own hands, it is not uncommon for it to turn to the coach for inspiration. Nothing wrong with that. So here is some inspiration:
About a month ago Joel Spolsky wrote a very short post instructing people not to hide or disable menu items that are not available. Since I've been working on one of my spare-time projects this summer, a project that involves a web-based user interface, I've given this some thought. At first glance Joel's recommendation makes sense. At least to me, since I've several times found myself in a situation where I see a disabled menu item or button in an application, I know I want to use that command, but I have no idea what I need to do to enable it. Under such circumstances I would have loved the developers if they had let me click the darn thing and then told me what I needed to do.
However, things are never black or white - they're gray. And different situations call for different approaches I think. I also think you should include buttons in this discussion. The good thing is that buttons can be handled in the same way as menu items.
A menu should never change its content due to application state. If menu items are hidden and shown, the user will have a harder time recognizing the menus. It is easier for the user to navigate a menu if it always looks the same (except that some items are disabled from time to time). The same applies to buttons, since sooner or later there will be a manual with screenshots, and if the user does not see all the buttons they will think they are looking at the wrong view.
There is, however, one situation in which I think you should hide the menu item: when there is nothing the user can do to enable it. This typically applies to security settings. If the user does not have access to a certain feature, and never will unless somebody changes the security policy, it is just annoying to see that option all the time. Personally I hate those web sites where you try to access some page and all you get is "you do not have access to this feature".
If you disable an item you have to perform some kind of check when rendering the menu item (or button). You will also have to perform the same check when actually handling the click event, to protect against programming errors and abuse by a malicious user. Sometimes this check is very expensive to perform. In that case I tend to leave the item enabled (keeping the rendering routine fast) and perform the check once the item is clicked. But the error message must then be descriptive and clearly point out what the user has to do in order to complete the action.
I would also leave the item enabled if there is a complex series of actions the user has to perform in order to enable it. I think it is better to let the user get a descriptive error message telling him what to do than to just disable the item.
Another thing to consider is that many users are afraid of pop-up error messages, and some are even offended because they think they did something wrong. If you throw an error message in their face for something simple, something they think they would have understood had the item been disabled instead, they might get angry at your application (and you). You can't please them all, but you should consider this. For example, if you have an edit view that is used both for editing and for creating items, you might want to disable the delete button when in create mode rather than telling the user "you can't delete an item that has not been created yet" when they click it.
Tool-tips to the rescue. Adding a tool-tip to each disabled item, telling the user why it is disabled, is an excellent solution.
So as usual in the wonderful world of software development, it depends. For items not available to the user at a given time, these are my recommendations:
If the item is enabled, the error message when it is clicked (and the action fails) must be descriptive and tell the user exactly what went wrong and what he can do to complete the action.
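The recommendations above can be summarized as one decision function. This is my own sketch in JavaScript (the property names on `cmd` are hypothetical, not from the post):

```javascript
// Decide how to present a command to the user.
// cmd is a hypothetical descriptor with:
//   accessible       - can this user ever reach the feature at all?
//   checkIsExpensive - is the availability check costly to run?
//   available        - is the command usable right now?
//   whyDisabled      - human-readable reason, shown as a tool-tip
function renderCommand(cmd) {
  if (!cmd.accessible) {
    return { show: false };                  // hide: nothing the user can do
  }
  if (cmd.checkIsExpensive) {
    return { show: true, enabled: true };    // leave enabled, verify on click
  }
  if (!cmd.available) {
    return { show: true, enabled: false, tooltip: cmd.whyDisabled };
  }
  return { show: true, enabled: true };
}
```

The expensive-check branch is the one where the click handler must produce a descriptive error message if the action turns out not to be possible.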
I've always been suspicious of SQL queries that are automagically generated by some framework, and when I read this article on lengthy SQL queries it was certainly another gallon of gasoline on the fire. Sure, premature optimization is the root of all evil and all that, but there is also another important rule in software development: don't do obviously stupid things. If you want to use a framework for data access, which is very common for productivity reasons, be sure to design your software so it is easy to replace the generic framework with something specific.
If, on the other hand, you end up with really large queries when you write them yourself (I'm a stored-procedure guy, so I have a hard time even imagining what kind of SQL query would end up being that large), the solution is obvious: stored procedures.
I recently had to find a neat way to remove all empty directories recursively on a Unix machine. In the world of UNIX you can expect to find a way to do things like this pretty easily. When I started searching for a neat way to do it (rather than reading a bunch of man pages) I came across a really funny story on The Old New Thing. Windows users are so used to needing an application for simple things like this that they forget about scripting. I guess that will change with PowerShell.
However, this post was about how to do it on Unix. Well, this is my solution:
find "$1" -type d | sort -r |
while read D; do
    # an empty directory shows "total 0" in its ls -l output
    ls -l "$D" | grep -q 'total 0' && rmdir "$D" 2>/dev/null
done
That script takes one argument: a directory you want to remove if it and all its sub-directories are empty. The reverse sort makes sub-directories come before their parents, so directories that become empty along the way are removed too. Any directories that contain files are preserved.
This is a fair recommendation and it actually makes a lot of sense, especially if you're introducing TDD in a team where people are a little bit skeptical. But there are a few dangers in allowing the use of this rule.
Personally I think that if something is considered so simple that it does not need a test, it must also be really simple to test. And if it is so simple to test, why shouldn't you test it? The relative cost might be high, but the absolute cost of adding a really simple test for some really simple functionality is worth it in my book, since it removes a decision from the developer. "No new functionality unless you have failing tests" is much simpler to follow and remember than "no new functionality unless you have failing tests or you consider the functionality to be really simple". Also consider the code coverage issues.
In "agile projects" it is common to use user stories to describe what has to be done. But it is also common to use constraints to describe things that cannot be described in a user story. This can be things like:
Constraints are things that should always be considered during the implementation of every user story.
However, sometimes people come up with things that sound like constraints but are not. For example:
This is a bad constraint since it has no goal. You might think you improve the constraint by saying:
This is still a bad constraint since you probably have a release date. What happens if the constrained time is not enough to achieve the goal?
I think that if there is a known problem (for example, some responses take more than one second) you should add user stories (or bugs) for it, and prioritize and plan the fixes for those problems just like anything else. And if you have no known problems, you should just set up constraints telling the team what kind of quality you want. If it takes the team 50% of its velocity to satisfy all the constraints, you should let it do so, since otherwise you'll have to let it do that work later anyway. Either way, you'll notice if the team is struggling to satisfy all the constraints, and maybe they are too strict. But if you time-box the constraints from the start, you will notice this later than if you make the team always deliver more or less release quality.
Measuring code coverage is often perceived as a good measure of test quality. It is not. Good tests find problems when changes are made to the code, but if you just want high code coverage you can easily write a number of tests that call all your methods without checking the results. The only thing high code coverage values really tell you is that the code at least does not crash with the given input.
If, however, you are using BDD/TDD, code coverage values might be of interest. For example, if you do not have 100% function coverage (i.e. not 100% of the functions are called), then you aren't really using BDD/TDD, are you? Because how could you write a method that is not called by anyone? Well, actually you might have created methods that are never called by your tests. Many TDD practitioners use a "too simple to test" rule, which applies to really simple, property-like methods. I don't really like that philosophy, but more on that in a later post.
So now you think that with 100% function coverage you will also have 100% line coverage with BDD/TDD, right? Well, yes and no. Typically you don't, since you will add error handling and logging for failures that never occur in your tests. It might be that you open a file, and if that fails you log an error and exit your application. This never happens in the tests, since there you always manage to open the file. So how did those lines get in there if they are never executed? Well, another rule often used by TDD practitioners is that you "should not do stupid things": if a system call may fail, you check the result, tests or no tests. With dependency injection you'll probably get close to 100%, but there is no point in bending over backwards to achieve high code coverage.
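With dependency injection, such an error path becomes reachable from a test. A sketch (the function and parameter names are hypothetical; Node.js is assumed):

```javascript
// loadConfig takes the file-opening function as a parameter instead of
// calling the file system directly, so a test can force the failure branch.
function loadConfig(openFile) {
  try {
    return { ok: true, data: openFile("config.json") };
  } catch (err) {
    // Without injection, tests would never reach this line, because in
    // the test environment the file always opens successfully.
    return { ok: false, error: err.message };
  }
}
```

A test can now pass in an `openFile` stub that throws, covering the error-handling lines without touching the real file system.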
Does this mean there is no point in measuring code coverage? I think it is great to measure code coverage if you use the result correctly. You should not add more tests just to increase coverage, since tests added only to increase coverage tend to merely exercise code rather than test something interesting. But low code coverage when using BDD/TDD is definitely a warning signal that something is wrong: the team is not using the methodology correctly. So what are acceptable coverage levels? From personal experience I think anything below the levels listed in the table below (per coverage type, with and without dependency injection) should be considered bad, since you should have no problem at all achieving the given values.
But sometimes there is someone (usually a manager) who thinks coverage should be above some level, so even though you know those tests will not really be useful, you have to add more tests for coverage. What do you do? Either you can ignore the coverage goal and just add more tests that test interesting things, or you could try Pex. Pex is an automated exploratory testing tool from Microsoft Research. It can be used to generate a small test suite with high code coverage, and from only a few simple experiments I get the impression it is quite good at finding border cases in your code. It will not replace the traditional tests/specifications written as part of your TDD/BDD process, but it can help you test cases you did not think of, increasing code coverage even more without any extra effort from you. At least it is better than adding coverage tests by hand.
And if you listened to me and started writing nice-looking SQL, maybe you want to look at making your C# code look nice too...