Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

April, 2005

Posts

    Feature Driven Development vs. Traditional Project Planning, part deux

    • 2 Comments
     
    A couple of weeks ago, I blogged about an experience I had that allowed me to directly compare Feature Driven Development's planning process with processes that used traditional workflow breakdown structure and the waterfall approach. 
     
    I pointed out how much pain was felt by the teams that used the traditional approach and compared that to the relative ease with which the FDD team reached consensus and proceeded with our work.
     
    In sharing that article, I got quite a bit of feedback.  Some of it was supportive; much of it was skeptical.  The earlier article was presented as a case study.  This one is analysis.  The topic is essentially the same: if we focus only on the planning process, how does FDD compare to traditional Waterfall?
     
    The goal of planning
     
    Let me start with a definition: The intent of planning is to take the requirements, develop cost estimates, and get them back to the customers.  The purpose of this cycle is to give control to the customers in deciding what to build.  "Now that I know how much it costs, what do I really want to build?"
     
    The key takeaway from the FDD article I blogged earlier is this:
        T-WBS processes provide lots of information, but do not effectively roll up the information in a way that allows the customer to answer the question above.  FDD provides the same information in a different order, one that fosters better decision making.
     
    Avoiding Sticker Shock
     
    How many times have you presented a schedule to a customer, and had them come back and say "that's too expensive"?  In my experience, it happens, but it isn't really typical.  Why?  Because T-WBS Waterfall processes solve this problem in a different way.
     
    In T-WBS systems, we plan things in layers so that we avoid the problem of "sticker shock."  We cooperate to set high level goals, make high level estimates, and set expectations... then do it again, refining the estimates, clarifying requirements, and negotiating costs.  Then we create "phases" and do it again within the coming phase.  Everything moves, everything shifts.  This negotiation takes time.  It also assumes that the list of requirements is pretty static. 
     
    This also puts a huge focus on the Product Manager and the Program Manager making decisions for other people.  The product manager makes decisions for the customer.  The Program Manager makes decisions for the dev team.  Neither the customer nor the dev team is in the room.
     
    Is this the right way to avoid Sticker Shock?
     
    This traditional negotiation process starts to shred if the requirements are still being discovered, or if folks don't really know what they want.  This is common knowledge in PM circles.  If the requirements aren't known up front, you will have problems with this process.  The problem is that this statement is ALWAYS TRUE.  Requirements are nearly never well known up front, and even when they are, they always change as needs are uncovered or business moves.
     
    What we lose is the ability for the customer and the team to cooperate in an agile manner throughout the life of the project.  We lose the ability for the customer to expand scope in one area, restrict scope in another, and introduce entire sets of requirements part way through the process.  We lose the ability of the team to offer ideas born of experience.  We lose the ability for the customer to explain and expand on their understanding. 
     
    The multi-layered negotiation stage of T-WBS, and the notion that we can "manage to schedule," are very inflexible in this regard.  This traditional process assumes that we are guessing correctly about things that we don't know.  It assumes that information we learn along the way is inconvenient, or less valuable than the stuff we knew at the beginning.
     
    This is an absurd notion, and no matter how well you refine and improve the Waterfall planning process, the act of building a framework on top of such an absurd notion cannot produce a stable process. 
     
    The fact that I am making this point, when T-WBS has been repeatedly refined for 25 years, is proof positive that the traditional work-breakdown structure concept is not stable. 
     
    Is FDD better than Traditional WBS when planning a project?
     
    If the goal is to give the customer the right to decide what to develop, and what not to develop, then yes, because T-WBS offers no easy way for the customer to push back on one feature, or to select one feature over another.
     
    If you judge the process by its output, then T-WBS is fundamentally broken.
     
    Does FDD planning fit with Waterfall delivery?
     
    Sure.  You can use FDD to plan a waterfall project.  In fact, we've done just that for some very short projects (what agilists would refer to as a single iteration or sprint).  The point is not that Waterfall processes will go away any time soon.  The point is that the traditional planning process, taught in project management class, is missing a basic tenet: that the tasks need to be collected into features and delivered as features, and that doing so allows the customer to provide better feedback and to better prioritize the work.
     
    This is an education process.  It isn't easy.  But I'm a pretty persistent fellow.

    Draw the distinction between a message bus and a services bus

    • 4 Comments

    Many different products claim to be effective for Enterprise Application Integration.  There are about as many products as there are ways to integrate applications.

    The first and most common approach to integration is data integration.  I've seen integration from the standpoint of common data tables from multiple sources, mined data in common reports, and even "star data dispersion" where one "domain data" application acts as a source of data for a host of others.  Domain data alignment is essential to integration, but it is only the first step.

    After you have created a structure for sharing common data values across applications, and keeping them up to date, you need a way to share EVENTS.  This is more important than sharing code or sharing services.  You need a common understanding of what events are important to the business, and a common definition of the conditions that constitute each event. 

    For example: let's say you have a social service case management system running in a state agency responsible for child welfare.  A concerned citizen calls a hotline to report that the child in the next apartment has been crying for hours.  They suspect possible abuse.  Your system needs to assign a task to someone to investigate.  Do you create a case even though you don't know the name of the child?  If there is no abuse, should it be a case or just an unattached incident report?  If your system doesn't know the "magic" condition that means "a case is created," then you have no way to share the "NewCase" event.

    The obvious question then is how and why to share this event.  Do you have a reporting system that tracks the number of cases assigned to each worker?  Do you have a financial system that tracks case costs?  How about a system that initiates a long-running process to look up information on the suspect's address, to see if a felon is known to frequent the location?  Would you share the event with these systems?  There could be a lot of value there.

    In a loosely coupled system, you really shouldn't care what system subscribes to your events.  You should send your event to a broker and let it decide who to send it to.  More importantly, you need a way to ensure that a system that is not online at the moment can still get the event when it returns to operation.  Perhaps you are using a system of cooperating components to coordinate these messages... or perhaps you have a single clustered broker.  Either way, you are implementing a publish-subscribe messaging system.  If it is truly loosely coupled, you have a message bus.
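    To make that shape concrete, here is a minimal sketch of a publish-subscribe broker, written in Python purely for illustration.  None of the names below come from any real product; the "NewCase" topic and the subscriber names are assumptions drawn from the example above.  A real broker adds durability, transactions, and security, but the essential contract is the same: the publisher addresses a topic, not a system, and events wait for subscribers that are not online.

    from collections import defaultdict, deque

    class MessageBroker:
        """Toy publish-subscribe broker: publishers never address subscribers
        directly, and events wait in a per-subscriber queue until fetched."""

        def __init__(self):
            self.topics = defaultdict(set)     # topic -> subscriber names
            self.queues = defaultdict(deque)   # subscriber name -> pending events

        def subscribe(self, subscriber, topic):
            self.topics[topic].add(subscriber)

        def publish(self, topic, event):
            # The publisher neither knows nor cares who is listening.
            for subscriber in self.topics[topic]:
                self.queues[subscriber].append(event)

        def fetch(self, subscriber):
            # Called when a subscriber comes (back) online; drains its queue.
            pending = list(self.queues[subscriber])
            self.queues[subscriber].clear()
            return pending

    # The case management system raises "NewCase" once its own business rule
    # says a case now exists; downstream systems pick it up when they are ready.
    broker = MessageBroker()
    broker.subscribe("caseload_reporting", "NewCase")
    broker.subscribe("cost_tracking", "NewCase")
    broker.publish("NewCase", {"case_id": "2005-0412", "hotline_report": True})
    print(broker.fetch("cost_tracking"))   # held until this system asked for it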

    The point I want to make is this: these two integration mechanisms have NOTHING to do with web services. 

    I can't count the number of times I've had to correct folks who talk about integration only in terms of web services.  Web services are useful for a third kind of integration: the services bus.  They are NOT the cornerstone of either data or event integration, which are centered on data coordination and event brokering, respectively.  While web services can be used as a communication endpoint, especially for the message bus concept, the fundamental technology is not web services or even SOAP.  It's loosely coupled brokering.

    Web services, and the new Indigo framework, are useful for creating a services bus.  A services bus is a mechanism for registering, managing, and serving up a list of services.  If a service has a way to advertise itself, and if an application has a way to find services that match its needs, then a services bus can connect the two, and allow an application (or a user) to consume a service without knowing who wrote it, in what language, or what server it's running on.
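    As a sketch of that registration-and-lookup idea (again in Python, with names and URLs I invented for illustration -- this is not Indigo's API or any product's), the heart of a services bus is a registry that matches a consumer's stated need to an advertised capability:

    class ServiceRegistry:
        """Toy services bus: services advertise a capability; consumers ask for
        the capability and get an endpoint without knowing who implements it."""

        def __init__(self):
            self.services = {}   # capability -> (service name, endpoint)

        def register(self, capability, name, endpoint):
            self.services[capability] = (name, endpoint)

        def lookup(self, capability):
            if capability not in self.services:
                raise LookupError("no service advertises " + capability)
            return self.services[capability]

    registry = ServiceRegistry()
    registry.register("address-history", "FelonWatch", "http://apps/felonwatch/svc")
    registry.register("case-costing", "CaseCost", "http://apps/casecost/svc")

    # The consumer doesn't care what language it's written in or where it runs.
    name, endpoint = registry.lookup("address-history")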

    This is very useful, and in some ways, fundamental to integration.  However, it is only one aspect of integration.  The services bus complements the message bus and the shared data repository service.  It does nothing to supplant them.  On the contrary, I would posit that a services bus, without shared domain data or a mechanism for retrieving it, is fundamentally crippled. 

    So, the next time someone says "Use Web Services for Integration", think to yourself: "that's part of the story, but not all."

    After all, a barbershop quartet sounds much better if all four vocalists are in the room.


    Feature Driven Development: Dev is different than PM

    • 1 Comment

    I'm seeing the difference more clearly than before: how a team can use Feature Driven Development for development, and how that is different than using FDD for project management.  How could I have missed this?

    For Development, FDD means:

    • Take the requirements and convert them to "feature" stories.  (A story is smaller than a use case, often the size of a 3x5 card, and describes the smallest unit of functionality that can actually be demonstrated to the user.)
    • Take each feature story and create a "design story" which describes the changes needed for that feature.  Note that a "common" story could be created for two features, or one feature can depend on another... different schools of thought on the same process.
    • Take the design stories and estimate the effort.
    • When performing the work, work on a single design story at a time.  Start the story and finish the story.  Work with others on the team to ENSURE that the entire design story can be done in the current iteration (if possible).
    • Report daily on the amount of estimated effort needed to complete the design story (not the time spent... report the time remaining; see the sketch after this list).
    • Demonstrate the feature as soon as possible after completing it.
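    A minimal sketch, in Python, of what that daily report looks like in data; the story names and numbers are made up for illustration.  Each design story carries its estimate, and the daily status is the hours believed to remain, not the hours burned.

    class DesignStory:
        """One design story: the changes needed to deliver one feature story."""

        def __init__(self, feature, estimate_hours):
            self.feature = feature
            self.estimate_hours = estimate_hours
            self.remaining_hours = estimate_hours   # starts equal to the estimate

        def report(self, remaining_hours):
            # The daily number is hours LEFT, not hours spent.
            self.remaining_hours = remaining_hours

    stories = [DesignStory("Assign hotline report to a worker", 16),
               DesignStory("Create a case without a child's name", 24)]

    stories[0].report(6)   # after a day's work, 6 of the 16 hours are believed left
    print(sum(s.remaining_hours for s in stories))   # 30 hours still to deliver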

    For Project Management, FDD means:

    • Take the feature stories created by the team and organize them into a list.  Number them, group and summarize as needed.  Trace each feature back to the requirements (using the requirement number, assuming one is available).
    • Estimate the amount of time/effort/cost available in the current delivery cycle.  Make sure that the customer is aware of "how much water is in the bucket."
    • Collect the estimates of the effort from the dev team and connect them with the stories.  Assign an "estimated business value" for each story (literally make it up), sort the list by this value, and draw a line at the point where the cost exceeds the available resources in the dev cycle.  This is the initial cutoff that you present to the customer (a sketch of this calculation follows the list).
    • Get the customer to review the business value for each item.  Re-sort as the values change and move the cut-off to show what is "in scope" and what is "out of scope."  Immediate feedback is good.
    • Negotiate any changes in resources and dates depending on the criteria for the project.  (Some projects are date driven, others are cost driven, etc.) 
    • Take the list of items in the accepted list and return to the dev team to plan iterations.
    • If you are using project tools like MS Project or Primavera, add iterations to your project plan (fixing the end date for each iteration), and add tasks directly from the feature/story list.  For each story, put in the total hours estimated for the design story.  (If you break it down further to tasks, make sure that the tasks remain directly tied to the story.)
    • Collect the hours remaining on each story daily and ensure that your project tracking tool reflects these estimates.  If you are using Scrum, you should be able to predict whether your burn-down rate is sufficient to deliver by the end of the iteration.
    • Do not calculate the earned value of any feature until it is complete... then accept 100% of its earned value against the project.
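    Here is the sketch promised above: a minimal version of the "draw the line" step, in Python, with made-up stories, values, and capacity.  Sort the backlog by business value, accumulate the cost, and cut at the first point where the running total exceeds the hours available in the delivery cycle.

    def draw_the_line(backlog, capacity_hours):
        """backlog: list of (story, business_value, estimate_hours) tuples.
        Returns (above the line, below the line)."""
        ranked = sorted(backlog, key=lambda item: item[1], reverse=True)
        running = 0
        for cut, (story, value, hours) in enumerate(ranked):
            running += hours
            if running > capacity_hours:
                return ranked[:cut], ranked[cut:]
        return ranked, []

    backlog = [("Hotline intake screen",  90, 60),
               ("Case cost report",        70, 40),
               ("Address history lookup",  40, 80),
               ("Nightly data export",     20, 30)]

    in_scope, below_line = draw_the_line(backlog, capacity_hours=120)
    print([story for story, value, hours in in_scope])
    # ['Hotline intake screen', 'Case cost report'] -- the tentative scope to review

    When the customer changes a business value, re-run the sort and the cut-off moves with it, which is exactly the immediate feedback described above.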

    So... why did I detail all of this?  Because I ran across a situation where a team was using FDD for dev, but PM wasn't using FDD for planning or management.  It's a little like being hungry at a buffet table. 

    Enough for now.  I hope this makes sense.


    A direct comparison between FDD and Traditional WBS

    • 3 Comments

    Reader ROI

    Readers of this post will find a "case study" that allowed this author to directly compare Feature Driven Development to the traditional WBS when performing project planning.  This information is useful if you would like to improve your software development processes, especially project and program management, or if you are considering the claims of agile software development.

    Introduction

    It's not often you get to make direct comparisons between Feature Driven Development and the composition of a traditional Work Breakdown Structure when doing project planning.  In fact, it's downright rare.  There is an ongoing discussion in Project Management and Standards-driven development circles: despite the claims of improvement in communication and understanding, is FDD measurably better than traditional WBS?  Can the claims be proven?

    Well... I have a direct comparison.  While there are still variables in the equation that are not compensated for, most of the variables are completely factored out, making this direct comparison between Feature Driven Development and a traditional WBS instructive.

    What do I mean by "Traditional WBS?"

    In the process of planning a project, the first step is collection of high-level requirements.  Of course, requirements are a fascinating area all of their own.  Usually the team creating the requirements learns during the process, causing the requirements to shift radically for a while. 

    However, once the requirements are described, the project team will take them and literally walk through them, one item at a time, and break the requirements down directly into tasks for the project plan.  These tasks are grouped into logical units, usually for the sake of creating delivery milestones.  Tasks are estimated, and the estimates are balanced against the schedule to account for dependencies. 

    What comes back is cost.  The project team comes back to the requirements team and says, in effect, "We will deliver these 21 requirements for 1455 hours of effort (across 4 people).  We will deliver the system to production in 10 weeks."  The cost is 1455 hours.  In software development organizations, time is the measure of cost.  Note: outsourcing is no different.  Outsource vendors either charge by the hour or they charge a fixed price based on their estimates of the hours.  The difference is where the risk lies.  The cost is still a function of the estimated hours.
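    A quick sanity check of those numbers (my arithmetic, assuming roughly 40 available hours per person per week, which the example does not state):

    people, weeks, estimated_hours = 4, 10, 1455
    available = people * weeks * 40               # 1600 hours at an assumed 40 hrs/week
    print(estimated_hours / (people * weeks))     # ~36.4 hours per person-week
    # The estimate nearly fills the window, so the hours really are the cost of the schedule.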

    I call this the traditional model because this is the model that I was taught in college, and which, to the best of my knowledge, is fairly similar to the methods currently espoused by the PMI (although I'm sure I've described this process in a far less rigorous manner... my apologies to my PMP friends).

    What is FDD?

    Feature Driven Development is exceptionally similar, really.  So similar, in fact, that many folks will mistakenly discount FDD as minor or unimportant. 

    FDD teams will pick up at the same point: when the specs are described and delivered to the development team.  However, at this point, the team does not break them into tasks.  The team breaks them into features.  A feature is the smallest unit of deliverable code that can be demonstrated to the customer.  This step is missing in traditional WBS processes. 

    Each feature is then described as a story.  Stories are subsets of use cases.  They describe the method that a person can use to demonstrate the feature.  Each feature must be described as a separate story.  (It is OK for stories to depend on one another.)

    Then, for each story, the team can describe the design needed to implement the story, and can create a list of tasks.  The project plan describes the stories as the milestones, and will in fact create milestones and iterations BASED upon the list of stories that can be completely coded during the cycle.

    What comes back, of course, is cost.  However, it is not the same cost as above.  Instead of saying "The cost of 21 features is..." the FDD team returns with "The cost of each feature is...".  The cost for each feature is described to the customer.  This small intermediate step is all that is needed to provide this information.  It is not expensive.  In fact, once the dev lead learns this process, it is quite natural and can often take less time, since the design can be broken out by story, allowing more than one person to work on the "design stories" in parallel.

    Big deal?  Why should I care?  I'll get to that...

    How did I get to make a comparison between the two?

    While we'd all like to imagine that everyone is on the same page, all the time, the realists among us know better.  It is normal for different teams, who should be working towards a common goal, to get a little out of sync.  This is the case in our group.  We have about five teams, all running in parallel with their own objectives.

    These objectives were supposed to be aligned.  While they were complementary, they were not really aligned, in that there were a few features we had promised to the business that we were not delivering.  At a six-month review, our executive sponsor pointed this out, and we had some choices to make.

    So, we agreed to deliver some of the expected features in time for a fixed business event that was already on the calendar.  We pulled resources, to the point of effectively shutting down many projects, and put nearly all of our resources on three projects, all of which had to work together to deliver functionality for this fixed date.

    In Project Management parlance, we had fixed resources.  We could not move the delivery date.  Our flexibility was scope.  We would choose the features to fit the schedule.  Given our timelines, there would be no way to bring on people in an effective manner.

    So how does this lead to a comparison of FDD and Traditional WBS?  Two of the three projects were being managed in a traditional manner.  One was an agile project (Scrum) and had been managed using FDD for some time.

    All three projects were pulled together, but there was no time to retrain anyone.  Our charter: use whatever method you know to produce the cost so we can get the customer to sign off on the scope as quickly as possible.  Two teams delivered traditional breakdowns.  One team delivered a breakdown based on features.  The customer was the same.  The timeline was the same.  The resources were similar: all were already on their respective teams, all were of equivalent caliber, and all were employees of the company.  The same culture applied to all of them.  All were led by the same overall integration team (of which I am a member).

    Here's how it went.

    Observations

    Our business partners delivered the requirements very quickly.  As you'd expect, the requirements were at a very high level and some of the distinctions that would come back to haunt us later were vague or poorly understood in that initial document.  Each team took the same document and went off to figure out what features were to be delivered.

    The FDD team had already been using stories, and had a "backlog" of stories that had not been included in existing releases because their priority was not high enough.  So, using this new requirements document, the FDD team wrote about 25 new stories.  In addition, the team reviewed the existing stories in the backlog and selected about 15 that they felt would be beneficial in the new environment.  We then took a very high-level guess at the number of hours needed to write the code for each feature.  (Literally: the entire estimation process took two hours between the dev lead and the architect.)  We also did a little "napkin math" to determine about how many hours of dev time the team was going to get in the delivery cycle.  The total of all the backlog item estimates far exceeded the estimated capacity of the iteration -- as we expected it would.

    The FDD team then sat down with the user's representative and had him assign a business value (1 to 100) for each backlog item.  All this was done in an Excel spreadsheet.  Took less than two hours.

    The FDD team then simply sorted the list of stories by the value, and reviewed it with him, in the same meeting.  Using our "napkin math," we drew a line that represented the cut-off.  Everything above the line was "tentatively in scope" while everything below the line was out. He re-ordered a few things now that he had a line, and left the meeting with a very good idea of what he was going to get.

    The T-WBS teams did as you'd expect... they asked questions.  There was some vagueness in the spec, and they wanted to get really good estimates, so they spent a few days figuring out what questions they wanted asked, and then another week getting clarifications.  This process was painful and required a lot of teeth-grinding and more than a few raised voices. 

    While the T-WBS teams were arguing, the FDD team was taking the list of stories "above the line" and creating design stories.  A design story is a description of the new functionality to be added to the system to support the feature.  It is less than a paragraph long.  From each design story, the FDD team created a list of tasks, and added a few "risk tasks" for situations where the work would fall into highly complex areas.  In essence, they refined the estimates... however, they didn't do this for all of the 40+ stories.  Only for the 12 or so that were "above the line."  Some questions were asked, of course, but not about the functionality that wasn't going to be delivered.

    With the refined estimates, the FDD team had to move the line.  We had another meeting with our user representative, and he signed off on the scope.  The FDD team reached consensus on the list of features to deliver, and began work.

    The T-WBS teams continued to argue, and meet, and discuss, and question.  Finally, a full cost was available to the customer.  The cost was too high, and the delivery dates were not aligned with their expectations.  Both teams had to hustle to come up with ways to cut costs and deliver early.   This was tough because, by this time, the coding cycle was already half finished.  The teams had been writing code to a "partial spec" for two weeks, and were now in the process of "correcting the course" to hit the desired functionality.  There was simply no way to cut scope without slowing things down. 

    So, they took time away from test.  (Sound familiar?)

    The project will be delivered in May.  I'll post the results then.

    Lessons

    I hate to argue.  I'm the kind of person who looks at an argument as a lost opportunity to understand one another.  The T-WBS teams spent far too much time using words, and far too little time reaching consensus.  This is not for lack of trying or lack of skill.  The Project Managers were certified and talented and all-around excellent in their roles.

    The process was the problem.  The FDD team provided the information that the customer needed, at the time that they needed it.  The T-WBS teams did not.  It was as simple as that.  I live on the IT side, and I'm not impressed with myself in this process.  If I could go back in time and lead each team to use FDD, I'm sure we could have reached consensus much sooner, and with much less stress.

    What can you do with this information?

    We had three projects.  One customer.  One culture.  Similar development teams.  Similar requirements on each team.  Yet one team reached consensus far more easily than the other two.  There were some vague requirements in all three projects.  The only real difference: the use of Feature Driven Development on the successful team.  Note: all three teams are delivering the code using similar practices (short daily meetings, short iterations, integrate early and often).  While some of these practices are essentially similar to agile methods, the overall project is entirely waterfall.

    If you have heard claims of great productivity gains from agile development (like XP and Scrum), it is time to ask yourself: how much of that productivity comes from Feature Driven Development, and how much comes from the other practices?  As a developer, I find many of the other practices very important (like test-driven development, daily delivery commitments, continuous integration, and frequent demonstration to the customer).  However, from a pure planning standpoint... from the PM standpoint... FDD is huge.

    You can add FDD to any project.  The changes are minor.  One of the other practices of Agile development helps to reinforce FDD, and that is demonstration.  At the end of each short cycle, or milestone, the developers have to personally demonstrate the feature directly to the customer.  If your developers know this, they will make sure that all of the steps needed to actually demonstrate the feature are costed in the plan.

    I would recommend this practice (demonstration) as a pair to go with using FDD in the planning stages. 

    Consider this a lesson learned.  I know that I do.
