My team is fairly new to agile. During my first week I talked to the PMs and the developers to get a sense of where we stood and what the issues were. I first asked about user stories.
Me: “Do you have user stories?”
B: “Yes. We have these three stories.”
Hmmm. Three stories. It didn’t take too long to discover that these were epics rather than sprint-sized user stories. I also looked into the tasks and how they were handling/prioritizing the work. Here is what I found:
First, we needed more and smaller user stories. I worked with the PMs to break the existing stories into ones small enough to fit into a single sprint. Ideally we want to have 4-6 user stories in a sprint. The format I like to use for a user story is based on the Behavior Driven Development (BDD) community:
Title
In order to business value
As a role
I want to feature
The common format for user stories is pretty close to this: “As a role I want to feature so that business value.” So why change the order? There are several reasons. First, and most importantly, leaving the business value until last means that it’s all too easy to leave it out entirely. I’ve seen many user stories that are really just restatements of features. For agile teams, the business value should be the most important thing because it’s what we exist to deliver. Placing the business value first means you have to write it. Writing the business value is hard, and it takes some practice to get good at it. But when you focus on the value first, you’ll often discover that you end up with a very different feature than the one you had in mind when you started. That’s a good thing, because it means you’re reacting to the value, not to the feature.
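To make the format concrete, here’s a hypothetical story written this way (my own example, not one from the team’s actual backlog):

```text
Faster product search
In order to keep shoppers from abandoning the site
As a customer
I want search results to appear as soon as I submit my query
```

The value line comes first, so the person writing the story can’t skip it, and everything below it has to justify itself against that value.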
We use Team Foundation Server to store all of our tasks and user stories. We place the main part of the story in the description field. But you don’t want to have to read the entire story when you’re trying to stack rank user stories—it’s just too hard to scan a long story. The title, therefore, is a shorthand for the entire story, something like a short feature description.
Next I worked with the teams to add acceptance criteria to the user stories. User stories tend to be somewhat abstract, so they leave a lot of room for interpretation. This, of course, leads to problems when a developer builds something and says they’re done, but someone else says “but you didn’t handle this.” Acceptance criteria make a user story concrete. If your story is about performance, and performing an operation needs to be fast, the acceptance criteria define what “fast” means. For example, they might say that response time needs to be less than 2 seconds. That’s measurable and testable.
Acceptance criteria also help constrain the work so you don’t build too much, either. This last part is a little hard for programmers to get used to. For some reason, as programmers we think we have to cover every case we can think of. However, all cases may not have equal business value. Good acceptance criteria focus just on what’s important for this particular user story. If other cases are important, add more user stories to the backlog to cover them.
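Each criterion should be a concrete, testable statement. For a performance story like the one above, the acceptance criteria might look something like this, written in the Given/When/Then style the BDD community uses (again, a hypothetical example, not the team’s actual criteria):

```text
Given a catalog of up to 100,000 products
When a customer runs a search
Then the first page of results appears in under 2 seconds
And no more than 20 results are shown per page
```

Anything that falls outside these lines, such as how quickly search suggestions appear while typing, would become its own story on the backlog rather than creeping into this one.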
When do you add acceptance criteria to a user story? There are a couple of pieces to this answer. First, acceptance criteria should be in user stories that are going to be in the next sprint. The product owner needs to spend time preparing for the next sprint by working on the acceptance criteria. Often they’ll involve the testers, who in the agile world are customer advocates. But that doesn’t mean the acceptance criteria need to be complete before the sprint planning meeting.
During the sprint planning meeting, the acceptance criteria help the team have a very focused discussion about what “done” means for a user story, which helps the team do a better job of estimating the size of the story. The acceptance criteria will most likely be modified or extended as a result of these discussions.
Testers usually really like the acceptance criteria because they can start to write test cases right away. In fact, each line in the acceptance criteria often maps directly into one or more test cases. Having the test cases early also means programmers can use them to guide the work they do (we’re not there yet as a team).
In the next sprint, we had much better user stories, so we were able to have five user stories in a single sprint for one project (we have three projects). While watching the team during the standups, I noticed that they were not focusing on completing user stories in stack rank order. They looked at the list of tasks and chose one to work on. I’ve been reading the book Coaching Agile Teams by Lyssa Adkins and she recommended that a coach (which is the role I was playing at the time) sit back and let the team succeed or fail on their own. So I sat back and watched.
When we got to the sprint review the team had only finished one of the five user stories. They were almost finished with the other four, but each had tasks not yet completed. I had the feeling that they were about to pat themselves on the back for finishing most of the work. When I pointed out that they had only finished one user story and had to return the other four stories to the backlog, you could hear a pin drop. They were not happy.
This “failure” turned into a win for the team. During the next sprint they really focused on doing the work in stack-rank order, and they paid a lot more attention to finishing one story before moving on to the next.
To say that the testers were being squeezed is an understatement. When developers waited until the end of the sprint to check in their work, there was no way the testers could possibly test anything until the next sprint. As a result, testing was constantly one sprint behind. So my next goal was to get developers to check in more frequently: ideally after they finish each task, but at worst when they finish a user story.
When we tried this in a sprint, the tester was very happy because he was able to start testing early in the first week, and by the end of the sprint he had tested much of the work done during the sprint. Success!
In just one month, we’ve gotten to the point where:

- Our user stories are small enough to fit into a single sprint, and they lead with the business value.
- Stories going into a sprint have acceptance criteria, so the team knows what “done” means.
- The team works in stack-rank order and finishes one story before starting the next.
- Developers check in as they complete tasks, so testers can test during the same sprint.
We’re by no means done learning how to do all of this well, but the team has made a lot of progress in a short time. I’m glad we’re using two-week sprints because this learning curve would have been slower if we had longer sprints.