Willy's Reflections - Visual Studio ALM Rangers

| Willy-Peter Schaub | Visual Studio ALM Rangers | In search of IT simplicity, quality and tranquility |

ALM Rangers Ruck – Proposed innovations to the v1.2 guidance. Thoughts?


We introduced the concept of “loose Scrum” (also known as a maul or ruck in Rugby) in 2011 to streamline the ALM Rangers projects and continuously adjust and improve to maximise respect for people, optimise development flow and embrace the philosophy of kaizen (http://en.wikipedia.org/wiki/Kaizen).

Revised 2014-02-11.

remembering our constraints

Before we cover our observations and proposed innovations it is important that you peruse ALM Guidance: Visual Studio ALM Rangers — Reflections on Virtual Teams and Things you probably [did not] know about the Visual Studio ALM Rangers to understand our world, challenges and constraints. One of the self-inflicted (intentional) constraints is our 24x7 dog-fooding of VS Online and using features out-of-the-box.

We are in the process of wrapping up our last in-flight projects and preparing new teams to take off on new adventures. As part of kaizen we have made observations and are discussing possible innovations herein.


so, where’s the fire … what are my observations?

  • Observation: Teams have agreed on very ambitious estimations, often adding 8h and even 40h tasks to their sprint backlog.
    Impact: Failure to deliver working solutions at the end of sprints and a trend to “carry over” work to the next sprint.
  • Observation: Teams often fail to invest the time to plan each sprint or to perform micro-task planning.
    Impact: Inaccurate planning, inconsistent sprint velocities and overhead in terms of task management during the sprint.
  • Observation: Teams seldom do a sprint retrospective or give candid feedback.
    Impact: Retrospectives are seen as process overhead, rather than an opportunity to improve the team environment.
  • Observation: Teams often start with an objective to complete in 2-3 months, but continue from sprint to sprint until the work is eventually completed.
    Impact: The team has no milestones to work towards and no horizon in sight. Projects turn into an ongoing rut.
  • Observation: Teams often start energetic, then deflate and awaken towards the end for the usual death march to meet expectations, or Rangers simply wait for the last possible moment to deliver.
    Impact: Frustration amongst team members, especially where there are dependencies, and frequent cancellation of features. Also a spike (burn up) during a sprint as unknown dependencies are identified “late”.

 


proposed innovations

Many of the proposed innovations are based on learnings from the Scaled Agile Framework® (SAFe™), which is an approach we are investigating as part of our Agile and Ruck environments.

 

Light bulb #1 - normali[s|z]ed estimations

Simplify the estimation effort, improve estimation accuracy and ensure estimates are made and supported by the team.

See How do you normalise your velocity estimation for more information.

  1. Estimate assuming full-time availability of resources.
  2. Team estimates stories using story points, based on a developer/contributor and tester/reviewer pair per story.
  3. Team finds the smallest story that can be developed and validated in a full day, relating it to one story point.
  4. Team estimates the remaining stories, using the 1 story point story as a reference.
  5. Split up big stories into smaller stories that are less than or equal to the part-time sprint maximum.
  6. Visually indicate when a story/task is ready for testing/review. For example, the snippet shows feature #9462, with a story #9463 that has been developed and tagged as ready for testing.
  7. A story is only complete when development and testing activities are completed, and the definition of done (DoD) is met.
    2014-02-11 … dev + test is not a waterfall sequence. Unit tests and test cases can be created before or in parallel to development efforts.
  8. The tester/reviewer should be a motivation for the developer/contributor to avoid the last-minute submission.
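A minimal sketch of steps 3-5 above, assuming a hypothetical part-time sprint maximum of 3 SP and made-up story names (neither is part of the guidance):

```python
# Hypothetical part-time sprint maximum; a 1 SP story is the reference
# (the smallest story that can be developed and validated in a full day).
PART_TIME_SPRINT_MAX_SP = 3

def needs_split(story_points: int) -> bool:
    """Step 5: stories above the part-time sprint maximum must be split."""
    return story_points > PART_TIME_SPRINT_MAX_SP

# Made-up backlog, mapping story name to estimated story points.
backlog = {"set up build": 1, "write guidance chapter": 5, "review samples": 2}
to_split = [name for name, sp in backlog.items() if needs_split(sp)]
print(to_split)  # → ['write guidance chapter']
```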

Clarification from discussions:

  • We will use the poll feature in Lync as a bare minimum, or online planning poker solutions. The requirement is that at least 50% of the team is present and that the PO does no estimating. The poll options are 1, 2, 3, 5, 8, 13 and 21.
  • Story points have no direct correlation with time (hours) and usually we calculate:
    ~days = days per iteration * (backlog size / velocity)
    However, as we have not been able to define an average velocity for Rangers, I propose we use the hybrid story point approach and associate our story point with a day. For example: which of these piles can I develop and test in a day? That is 1SP, and that defines the base of the estimations.
  • A story with an SP of 1, 2, 3, 5, 8 or more could be doable in a day as well. It is all relative and based on the smallest pile of stuff the team identified as SP=1.
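The duration formula above can be applied directly; a minimal Python sketch with hypothetical numbers (15 working days per iteration, a 30 SP backlog and a velocity of 10 SP per iteration; none of these are Ranger figures):

```python
def estimated_days(days_per_iteration: float, backlog_size: float, velocity: float) -> float:
    """~days = days per iteration * (backlog size / velocity)."""
    return days_per_iteration * (backlog_size / velocity)

# Hypothetical example: a 30 SP backlog at 10 SP per 15-day iteration
# works out to three iterations, i.e. 45 working days.
print(estimated_days(15, 30, 10))  # → 45.0
```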

Light bulb #2 - enforce a TRP sprint

Introduce pro-active sprint planning, reduce ambiguity and task management.


Each new project or subsequent release starts with a training-research-planning (TRP) sprint to establish the following:

  • Motivation which outlines why we are considering the project
  • Vision which describes the objectives of the project
  • SMART objectives (specific, measurable, achievable, realistic and time-based)
  • Acceptance criteria which define what it takes for the product owner (PO) to sign off the project
  • Definition of Done for the TRP sprint

The TRP sprint starts with a kick-off meeting, followed by research and training, and finally a sprint planning meeting that plans the first construction sprint with a high level of confidence and subsequent sprints with best-case estimates.

Light bulb #3 - enforce PSI time box

Ensure that the team has a realistic horizon in sight. Do less, better and ship what we have on demand.


In the context of Ranger solutions each release is time-boxed within one TRP sprint (see above), 1-3 development sprints and a QP sprint (see below). Each project team is divided into one or two feature teams, each with its own sprint plan, but all sharing the same cadence and project vision, objectives and acceptance criteria.

Each feature team commits time at the end of the sprint to do a sprint retrospective and plan the next sprint. In addition, each team delivers a 1-3 minute video which summarises the sprint deliverable and can be used as a reference for new team members or stakeholders. This becomes invaluable evidence for the Ruck-of-Rucks! See http://sdrv.ms/La0W1P for an exceptional example of a sprint demo video.

Clarification from discussions:

  • The idea is that we make it easy for a team to deliver evidence on their working solution and allow them to do some well-deserved PR.
  • Essentially I see the following standard story for each sprint:
    Standard: As a stakeholder I can view a sprint demo video so that I can get a quick overview of deliverables

Light bulb #4 - enforce a QP sprint

Raise the quality bar across the board.


Scaled Agile Framework® (SAFe™) introduced a hardening-innovation-planning (HIP) sprint at the end of a potentially shippable increment (PSI).

We will use this opportunity to raise the overall quality bar and to plan during this last sprint. The team focuses on eliminating any remaining debt, such as copy-editing, validates that all quality bars have been met, and encourages the product owner (PO) to sign off. In addition, the team uses the time to experiment with innovations for future releases and, if needed, to plan the next release.

Light bulb #5 - agree on one innovation

Minimise waste of time due to meetings and encourage continuous improvement.

Instead of meeting to discuss what went well, what did not go so well and how we can improve, we agree on at least one innovation for the next sprint. This becomes invaluable evidence for the Ruck-of-Rucks!

Clarification from discussions:

  • The idea is that we continue to discuss and agree on what went well and what went badly where possible, but that each team must deliver one innovation for the next sprint.
  • Essentially I see the following standard story for each sprint:
    Standard: As a team we can define one innovation for the next sprint so that we can improve our project continuously  

Light bulb #6 - revise ruck chart

Create a consistent project plan and align with visualizations such as those of the Scaled Agile Framework.

Retire our v1.2 Ruck Cheat Sheet …

… and introduce a new poster / cheat sheet: the new Ruck Cheatsheet.


we need your thoughts and consensus

We need your thoughts and candid feedback, so that we can make decisions by consensus and start the dog-fooding of innovations with our next project adventures.

Add your comments below or contact us by email.

  • Hi Willy,

    I like this.  It mostly makes sense given our particular constraints and I'm eager to try it on the next Rangers project!  Especially like the normalized estimation concept, TRP sprint, and video sprint review!

    - Woody

  • Section 1 - I am not following all of the math and logic. In my day job I encourage teams to avoid adding stories with 13 or more points (this indicates the story is too large or has too many unknowns). I don't even want to see more than a single 8 point story in a sprint, but allowing an 8 recognizes that some tasks are just big and easier to tackle at that size than to break down into smaller stories. If a team fails to deliver on a committed item with 1 to even 8 points, it tends not to be that big of a deal from a velocity perspective. The sweet spot for me is around 3 points - I like to see a bunch of 3 point stories in a sprint.

    The problem I have with the scaling proposal as I understand it is that we seem to be taking the smallest possible number in our story point scale, a 1, as the reference story, which does not allow us to estimate the story that might equate to a 1/2 day story. I would rather see the reference story equate to a 3. I also don't follow how 1 point is a full-time dev/rev pair and 3 points is the maximum the dev/rev pair can achieve in a month. That suggests each team member can commit 24 hours to a project in a month, about 6 hours per week. While I do see some Rangers spending 5+ hours per week on work for the team, I also see a measurable number of team members (perhaps even most) spending much less time and often dropping off the team altogether. I think we should target 2 - 3 hours per week per team member, or 8 - 12 hours a month, as a team member's capacity. Finally, should we be trying to tie points to hours?

    Section 3 - I am lukewarm on the notion of having teams within the team. Normally I would want the entire team focused on the overall success of the sprint. More than one team could promote a focus on feature team deliverables, which could lead to a successful feature delivery but an overall failure of the team.

    Everything else looks good.

  • @Tim, I agree on the story points and for ALM Rangers I am looking at 3 being the "preferred" maximum and 5 being the "tolerated" maximum, viewed with suspicion. This is primarily due to us working part-time, which makes longer running tasks or "bigger piles of stuff" too risky.

    No need to worry about the maths, because it will be an evolving adventure ... I just tried to associate the time-less story points with a rough estimate of time within a Rangers context ... then again this will vary with each team.

    Assumption 1: 1SP defines a pile of stuff that can be completed in a day.

    Assumption 2: We have 260 working days per year, which works out to 21.67 working days per month or roughly 21.67 of the 1SP stories.

    Assumption 3: A Ranger can commit 1/8th of a calendar month to Rangers projects, which works out to <1h/day or roughly 2.708 of 1SPs/month.

    Now I need to emphasise that 2 or 3 SPs may also be doable in a day, but I am assuming they will take more.
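The three assumptions in the comment above work out numerically as follows; an illustrative Python sketch of the arithmetic only, nothing more:

```python
# Assumption 2: 260 working days per year.
WORKING_DAYS_PER_YEAR = 260
days_per_month = WORKING_DAYS_PER_YEAR / 12      # ~21.67 working days per month

sp_per_day = 1                                    # Assumption 1: 1 SP ~= one day of work
commitment = 1 / 8                                # Assumption 3: 1/8th of a calendar month

sp_per_month = days_per_month * commitment * sp_per_day   # ~2.708 SP per month
hours_per_day = commitment * 8                            # about 1 hour per working day

print(round(days_per_month, 2), round(sp_per_month, 3))  # → 21.67 2.708
```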

  • Hi all,

    Where can I get the new poster / cheat sheet in better quality to print out?

  • @Max, the old poster can be found on http://aka.ms/ruckguides. The new cheat sheet will be released with the next update of the Ruck guide if accepted.

  • I really like this =), can't wait to start using it

  • I think it looks good. The part I see as a potential challenge is the video review. While I like the idea behind it I'm not sure how well this will work in practice with our distributed teams. My initial reaction was that this practice is best suited for co-located teams, but I'd happily be proven wrong! :-)

  • @Tommy, the alternative to the video is to have a sprint review meeting, which includes the demo. While this is ideal, it has proven *very* challenging to schedule consistently. The idea of the demo video is that it can be done any time towards the end of the sprint by one or more team members, reviewed by all in the team and dropped in a share for other teams to review. I would be interested to understand why you feel it is not suited for distributed teams?

  • Scheduling meetings can be a pain, but so is creating and editing a video. It is a very difficult thing to do, especially in a distributed, collaborative fashion.

    What worked very well for us was using a wiki of sorts: a couple of screenshots and some commenting. Much more "collaborative" friendly, and frankly quicker than setting up a video, IMO.

  • @Niel, when we talk about videos it is not asking for the quality delivered by RobertM :) Setting up a Lync session, running a review session on your own or preferably with a few colleagues from the team and hitting the record button results in an invaluable video.

  • Overall I quite liked the suggestions but I have some concerns.

    #1 - normali[s|z]ed estimations

    - "The tester/reviewer should be a motivation for the developer/contributor to avoid the last-minute submission"

    This will not prevent developers from submitting their work at the last minute. The review/test task should be planned as well as the development tasks.

    - "We will use the poll feature in Lync as a bare minimum or online planning poker solutions. Requirement is that at least 50% of the team is present and that the PO does no estimating."

    Is Lync the best tool for that? I work at customers' environment all day and usually I don't have full internet access so Lync can be a problem in my case.

    #3 - enforce PSI time box

    "Each team delivers a 1-3min video which summarises the sprint deliverable and can be used as reference for new team members or stakeholders"

    I agree with @Niel. How will this work? Who will be responsible for creating this video? This is not a trivial task and may impact the delivery of the current sprint as well as the planning for the next sprint.

  • @Osmar,

    #1 - Pairing will hopefully improve collaboration and visibility into progress, or lack thereof. It is up to the tester to raise an impediment if the progress is lacking.

    #2 - We are currently looking at using Lync Poll feature and the TFS Agile Poker application for the Rangers. Lync is a light-weight poll solution and should be well suited for Ranger teams which use Lync for collaboration.

    #3 - If I compare the effort of producing a 2-3 min video using Camtasia, Lync or any other tool, with the effort of getting the entire distributed team together for a sprint review and demo meeting, I vote for the former. The responsibility of producing the video is with the team and there is always one in the team that knows and enjoys the "fun" of producing a demo video.

  • @Rangers, THANK YOU for the ongoing and candid feedback, which helps us better understand the challenges :) Keep the candid and invaluable feedback rolling in!

  • @Willy.

    Looks really good, especially the normalized estimation and the TRP sprint. Would one of the outcomes of such sprints be a more detailed task list?

  • OK, well you did ask for candid feedback, and the quintessential New Yorker in me will come forth:

    #1 Normalized Estimations: Not so familiar with the SAFe abridged estimation methods, but with care I can see this may be feasible. Admittedly, I look at it as a ratio and proportion approach with some capping, so let me explain.

    I've been estimating activities while building out a backlog based on story points where everyone throws in a card (standard Planning Poker, which has been prevalent with story point estimation for years). I have a couple of scrum decks I even use: one with Fibonacci numbers, one with "2 power values" (0,2,4,8,16,32,64), and a few other variations. I like the 2-power values because I can assign bitwise operators directly and figure out all possible combinations during estimation, and group only those, rather than take the highest/lowest estimations. That helps me substitute tasks when normalizing my backlogs for resourcing and such. Anyway, the point here is you could use a regular playing card deck, it really doesn't matter. The end result is an estimation that leaves you with the most accurate effort estimates. (See en.wikipedia.org/.../Software_development_effort_estimation).

    While the Wikipedia article on effort estimation lists only psychological factors, I'd say there are actually multiple factors that influence estimation, i.e. perceptions, knowledge, resource availability, etc., that may produce inaccurate estimates, in addition to the psychological factors. Without enough objective data to go with the subjective plans, effort estimates will always skew larger. So for me, understanding HOW the normalized estimations contribute to more accurate estimates in the ALM Rangers project context is the info I'm missing here. So that brings me to #2, enforcement of a TRP sprint:
