Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

Posts
  • Inside Architecture

    A direct comparison between FDD and Traditional WBS

    • 3 Comments

    Reader ROI

    Readers of this post will find a "case study" that allowed this author to directly compare Feature Driven Development to the traditional WBS when performing project planning.  This information is useful if you would like to improve your software development processes, especially project and program management, or if you are considering the claims of agile software development.

    Introduction

    It's not often you get to make direct comparisons between Feature Driven Development and the composition of a traditional Work Breakdown Structure when doing project planning.  In fact, it's downright rare.  There is an ongoing discussion in Project Management and Standards-driven development circles: despite the claims of improvement in communication and understanding, is FDD measurably better than a traditional WBS?  Can the claims be proven?

    Well... I have a direct comparison.  While there are still variables in the equation that are not compensated for, most of the variables are completely factored out, making this direct comparison between Feature Driven Development and a traditional WBS instructive.

    What do I mean by "Traditional WBS?"

    In the process of planning a project, the first step is the collection of high-level requirements.  Of course, requirements are a fascinating area all of their own.  Usually the team creating the requirements learns during the process, causing the requirements to shift radically for a while.

    However, once the requirements are described, the project team will take them and literally walk through them, one item at a time, breaking the requirements down directly into tasks for the project plan.  These tasks are then grouped into logical sets, usually for the sake of creating delivery milestones.  Tasks are estimated and the estimates are balanced against the schedule to account for dependencies.

    What comes back is cost.  The project team comes back to the requirements team and says, in effect, "We will deliver these 21 requirements for 1455 hours of effort (across 4 people).  We will deliver the system to production in 10 weeks."  The cost is 1455 hours.  In software development IT organizations, time is the measurement of cost.  Note: outsourcing is no different.  Outsource vendors either charge by the hour or they charge a fixed price based on their estimates of the hours.  The difference is where the risk lies.  The cost is still a function of the estimated hours.

    I call this the traditional model because this is the model that I was taught in college, and which, to the best of my knowledge, is fairly similar to the methods currently espoused by the PMI (although I'm sure I've described this process in a far less rigorous manner... my apologies to my PMP friends).

    What is FDD?

    Feature driven development is exceptionally similar, really.  So similar, in fact, that many folks will mistakenly discount FDD as minor or unimportant.

    FDD teams will pick up at the same point: when the specs are described and delivered to the development team.  However, at this point, the team does not break them into tasks.  The team breaks them into features.  A feature is the smallest unit of deliverable code that can be demonstrated to the customer.  This step is missing in traditional WBS processes. 

    Each feature is then described as a story.  Stories are subsets of use cases.  They describe the method that a person can use to demonstrate the feature.  Each feature must be described as a separate story.  (It is OK for stories to depend on one another.)

    Then, for each story, the team can describe the design needed to implement the story, and can create a list of tasks.  The project plan describes the stories as the milestones, and will in fact create milestones and iterations BASED upon the list of stories that can be completely coded during the cycle.

    What comes back, of course, is cost.  However, it is not the same cost as above.  Instead of saying "The cost of 21 features is..." the FDD team returns with "The cost of each feature is...".  The cost for each feature is described to the customer.  This small intermediate step is all that is needed to provide this information.  It is not expensive.  In fact, once the dev lead learns this process, it is quite natural and can often take less time, since the design can be broken out by story, allowing more than one person to work on the "design stories" in parallel.

    Big deal?  Why should I care?  I'll get to that...

    How did I get to make a comparison between the two?

    While we'd all like to imagine that everyone is on the same page, all the time, the realists among us know better.  It is normal for different teams, who should be working towards a common goal, to get a little out of sync.  This is the case in our group.  We have about five teams, all running in parallel with their own objectives.

    These objectives were supposed to be aligned.  While they were complementary, they were not really aligned, in that there were a few features we had promised to the business that we were not delivering.  At a six month review, our executive sponsor pointed this out, and we had some choices to make.

    So, we agreed to deliver some of the expected features in time for a fixed business event that was already on the calendar.  We pulled resources, to the point of effectively shutting down many projects, and put nearly all of our resources on three projects, all of which had to work together to deliver functionality for this fixed date.

    In Project Management parlance, we had fixed resources.  We could not move the delivery date.  Our flexibility was scope.  We would choose the features to fit the schedule.  Given our timelines, there would be no way to bring on people in an effective manner.

    So how does this lead to a comparison of FDD and Traditional WBS?  Two of the three projects were being managed in a traditional manner.  One was an agile project (Scrum) and had been managed using FDD for some time.

    All three projects were pulled together but there was no time to retrain anyone.  Our charter: use whatever method you know to create the cost so we can get the customer to sign off on the scope as quickly as possible.  Two teams delivered traditional breakdowns.  One team delivered a breakdown based on features.  The customer was the same.  The timeline was the same.  The resources were similar: all were already on their respective teams, all were of equivalent caliber, and all were employees of the company.  The same culture applied to all of them.  All were led by the same overall integration team (of which I am a member).

    Here's how it went.

    Observations

    Our business partners delivered the requirements very quickly.  As you'd expect, the requirements were at a very high level and some of the distinctions that would come back to haunt us later were vague or poorly understood in that initial document.  Each team took the same document and went off to figure out what features were to be delivered.

    The FDD team had already been using stories, and had a "backlog" of stories that had not been included in existing releases because their priority was not high enough.  So, using this new requirements document, the FDD team wrote about 25 new stories.  In addition, the team reviewed the existing stories in the backlog and selected about 15 that they felt would be beneficial in the new environment.  We then took a very high-level guess at the number of hours needed to write the code for each feature.  (Literally: the entire estimation process took two hours between the dev lead and the architect.)  We also did a little "napkin math" to determine about how many hours of dev time the dev lead was going to get in the delivery cycle.  The total of all the backlog item estimates far exceeded the estimated capacity of the iteration -- as we expected it would.

    The FDD team then sat down with the user's representative and had him assign a business value (1 to 100) for each backlog item.  All this was done in an Excel spreadsheet.  Took less than two hours.

    The FDD team then simply sorted the list of stories by the value, and reviewed it with him, in the same meeting.  Using our "napkin math," we drew a line that represented the cut-off.  Everything above the line was "tentatively in scope" while everything below the line was out. He re-ordered a few things now that he had a line, and left the meeting with a very good idea of what he was going to get.
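
    In code terms, the "cut line" exercise is nothing more than a sort and a running total.  Here is a minimal C# sketch of the idea; the story names, estimates, values, and the capacity number are all invented for illustration (the real work was a two-hour Excel session):

    // Illustrative sketch only: sort the backlog by business value, then draw the
    // line where the running total of estimates reaches the "napkin math" capacity.
    // All data below is hypothetical.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class BacklogItem
    {
        public string Story;           // one-sentence feature story
        public int EstimatedHours;     // rough dev estimate
        public int BusinessValue;      // 1 to 100, assigned by the user's representative
    }

    class CutLine
    {
        static void Main()
        {
            var backlog = new List<BacklogItem>
            {
                new BacklogItem { Story = "Export report to file", EstimatedHours = 40, BusinessValue = 90 },
                new BacklogItem { Story = "Audit trail on edits",  EstimatedHours = 60, BusinessValue = 75 },
                new BacklogItem { Story = "Bulk import",           EstimatedHours = 80, BusinessValue = 40 }
            };

            int capacityHours = 120;   // dev hours available in the cycle ("napkin math")
            int committed = 0;

            foreach (var item in backlog.OrderByDescending(i => i.BusinessValue))
            {
                bool aboveTheLine = committed + item.EstimatedHours <= capacityHours;
                if (aboveTheLine) committed += item.EstimatedHours;
                Console.WriteLine("{0,-25} value={1,3} est={2,3}h  {3}",
                    item.Story, item.BusinessValue, item.EstimatedHours,
                    aboveTheLine ? "IN (tentative)" : "out");
            }
        }
    }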

    The T-WBS teams did as you'd expect... they asked questions.  There was some vagueness in the spec, and they wanted to get really good estimates, so they spent a few days figuring out what questions they wanted asked, and then another week getting clarifications.  This process was painful and required a lot of teeth-grinding and more than a few raised voices. 

    While the T-WBS teams were arguing, the FDD team was taking the list of stories "above the line" and creating design stories.  A design story is a description of the new functionality to be added to the system to support the feature.  It is less than a paragraph long.  From each design story, the FDD team created a list of tasks, and added a few "risk tasks" for situations where the work would fall into highly complex areas.  In essence, they refined the estimates... however, they didn't do this for all of the 40+ stories.  Only for the 12 or so that were "above the line."  Some questions were asked, of course, but not for the functionality that wasn't going to be delivered.

    With the refined estimates, the FDD team had to move the line.  We had another meeting with our user representative and he signed off on the scope.  The FDD team reached consensus on the list of features to deliver, and began work.

    The T-WBS teams continued to argue, and meet, and discuss, and question.  Finally, a full cost was available to the customer.  The cost was too high, and the delivery dates were not aligned with their expectations.  Both teams had to hustle to come up with ways to cut costs and deliver early.   This was tough because, by this time, the coding cycle was already half finished.  The teams had been writing code to a "partial spec" for two weeks, and were now in the process of "correcting the course" to hit the desired functionality.  There was simply no way to cut scope without slowing things down. 

    So, they took time away from test.  (Sound familiar?)

    The project will be delivered in May.  I'll post the results then.

    Lessons

    I hate to argue.  I'm the kind of person who looks at an argument as a lost opportunity to understand one another.  The T-WBS teams spent far too much time using words, and far too little time reaching consensus.  This is not for lack of trying or lack of skill.  The Project Managers were certified and talented and all-around excellent in their roles.

    The process was the problem.  The FDD team provided the information that the customer needed, at the time that they needed it.  The T-WBS teams did not.  It was as simple as that.  I live on the IT side and I'm not impressed with myself in this process.  If I could go back in time and lead each team to use FDD, I'm sure we could have reached consensus much sooner, and with much less stress.

    What can you do with this information?

    We had three projects.  One customer.  One culture.  Similar development teams.  Similar requirements on each team.  Yet, one team reached consensus far more easily than the other two.  There were some vague requirements in all three projects.  The only real difference: the use of Feature Driven Development on the successful team.  Note: all three teams are delivering the code using similar processes (short daily meetings, short iterations, integrate early and often).  While some of these practices are essentially similar to agile methods, the overall project is entirely waterfall.

    If you have heard claims of great productivity gains from Agile development (like XP and Scrum), it is time to ask yourself: how much of that productivity comes from Feature Driven Development, and how much comes from the other practices?  As a developer, many of the other practices are very important to me (like Test driven development, daily delivery commitments, continuous integration, and frequent demonstration to the customer).  However, from a pure planning standpoint... from the PM standpoint... FDD is huge.

    You can add FDD to any project.  The changes are minor.  One of the other practices of Agile development helps to reinforce FDD, and that is demonstration.  At the end of each short cycle, or milestone, the developers have to personally demonstrate the feature directly to the customer.  If your developers know this, they will make sure that all of the steps needed to actually demonstrate the feature are costed in the plan.

    I would recommend this practice (demonstration) as a pair to go with using FDD in the planning stages. 

    Consider this as a lesson learned.  I know that I do.

  • Inside Architecture

    How to get rid of circular references in C#

    • 4 Comments

    A refers to B, B refers to A, Why can't we all just get along?

    Every now and again, I see a posting on the newsgroups where someone has created a circular reference in their code structure, and they can't figure out how to get out from under it.  I'm writing this article for those folks (and so I have some place to send them when I run across this problem repeatedly).

    Let's start by describing a circular reference.  Let's say that I have a logging layer that is useful for recording events to a log file or a database.  Let's say that it relies on my config settings to decide where to log things.  Let's also say that I have a config settings library that allows me to write back to my config file...


    //calling app:
    Logging myLogObject = new Logging();
    myLogObject.WriteToLog("We are here!", 1);   // message plus a severity value
    MyConfig cnf = new MyConfig();
    cnf.SetSetting("/MyName", "mud", myLogObject);

    The class may look like this:

    public class Logging {
     private string LoggingLocation;    // "File" or "Database", read from config

     public Logging()
     {
       MyConfig cnf = new MyConfig();

       LoggingLocation = cnf.GetSetting("//Log/Location");
       if (LoggingLocation == "File")
       {
          // save logs to a file
       }
       else
       {
          // save logs to a database
       }
     }

     public void WriteToLog(String LogMessage, int Severity)
     {
       // write the log message
     }

    }

    If you notice, my little logging app refers to my config file library in the constructor of the logging object.  So now the logging object refers to the config object in code.

    Let's say, however, that we want to write a log entry each time a value is changed in the config file. 

    public class MyConfig
    {
       public MyConfig() { }
       public string GetSetting(string SettingXPath)
       {
          // go get the setting
          return "";   // placeholder: real code would read the value from the config file
       }
       public void SetSetting(string SettingXPath, string newValue, Logging myLog)
       {
          // set the string and...
          myLog.WriteToLog("Updated " + SettingXPath + " : " + newValue, 1);
       }
    }

    OK, so I removed most of the interesting code.  I left in the reference, though.  Now the config object refers to the logging object.  Note that I am passing in actual objects, and not using static methods.  You can get here just as easily if you use static methods.  However, digging yourself out requires real objects, as you will see.

    Now, compile them both.  One class will require the other.  If they are in the same assembly, it won't matter.  However, if they are in separate DLLs, as I want to use them, we have a problem, because each one needs the other to be compiled first.

    The solution is to decide who wins: the config object or the logging object.  The winner will be the first to compile.  It will contain a definition of an interface that BOTH will use.  (Note: you can put the interface in a third DLL that both will refer to... a little more complicated to describe, but the same effect.  I'll let you decide what you like better :-).

    For this example, I will pick the config object as the winner. In this case, the logging object will continue to refer to the config object, but we will break the bond that requires the config object to refer to the logging object.

    Let's add the Interface to the Config object assembly:

    public interface IMyLogging
    {
       void WriteToLog(String LogMessage, int Severity);
    }

    Let's change the code in the call to SetSetting:

       public void SetSetting(string SettingXPath, string newValue, IMyLogging myLog)
       {
          // set the string and...
          myLog.WriteToLog("Updated " + SettingXPath + " : " + newValue, 1);
       }

    You will notice that the only thing I changed was the declaration.  The rest of the code is unchanged.

    Now, in the Logging object:

    public class Logging : IMyLogging {
    // the rest is unchanged
    }

    Now, the Logging assembly continues to rely on the config assembly, but instead of just relying on it for the definition of our config class, we also rely on it for the definition of the IMyLogging interface.

    On the other hand, the config class is self-sufficient.  It doesn't need any other class to define anything.

    Now, both assemblies will compile just fine.
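
    To make the before-and-after concrete, here is what the calling code might look like once the cycle is broken.  The severity value is made up for the example; the only real change from the original snippet is that Logging now happens to satisfy IMyLogging:

    // Calling app, after the refactoring.  The config assembly compiles first (it defines
    // both MyConfig and IMyLogging); the logging assembly references it.
    Logging myLogObject = new Logging();
    myLogObject.WriteToLog("We are here!", 1);

    MyConfig cnf = new MyConfig();
    cnf.SetSetting("/MyName", "mud", myLogObject);   // accepted because Logging implements IMyLogging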

  • Inside Architecture

    On Security in Workflow

    • 2 Comments

    It's been ages since I've blogged on workflow.  I've been wildly busy implementing a workflow engine in C# that will ride under any .Net app while providing a truly light and easy to understand modeling language for the business user.

    One business modeler is now able to go from inception to full document workflow implementation in about 20 hours, including creating the forms, e-mails, model, staging, debugging, and deployment.  The only tools that need to be installed on the modeler's PC are Infopath (for forms, e-mail, and model development) and our custom workflow management tool that allows management, packaging of a workflow and remote installation to the server.

    One problem that we've been solving has to do with security.  Just how do you secure a workflow?

    For those of you who live on Mars, Microsoft is very heavily focused on driving security into every application, even ones developed internally.  Plus, workflow apps need security too.

    Thankfully, the first "big" refactoring we've done to the design of the workflow engine was in the area of security.  I'd hate to have added workflow security later, after we had a long list of models in production.  As it stands, we only have a handful of models to update.

    So what does security in a workflow look like?  Like security in most apps: common sense, plus some interesting twists.  Here are some of the most salient security rules.

    a) We have to control who can submit a new item to the workflow. In our models, all new items are added to a specific stage, so you cannot start "just anywhere" but we also have to be cognizant that not all workflows may be accessed by all people.  There are two parts to this: who can open the initial (empty) form and how do we secure submission to the workflow?  We solved both with web services that use information cached from the active directory (so that membership in an AD security group can drive permission to use a form).

    b) Once an item is in a workflow, we need to allow the person assigned to it to work on it.   There are two possibilities here.  Possibility 1 states: There is no reason to set permission on each stage, because the system only works if the person who is assigned to the item can work on it. Possibility 2 states: a bug in the model shouldn't defeat security.  We went with the second one.  This means that the model can assign a work item to a person only if that person will have permission to work on the item (in the current stage for entry actions or in the next stage for exit link actions).

    c) Each stage needs separate permission settings.  A person can have read-only permission in one stage, read-write in a second, and no permission at all in a third.

    d) It is rational to reuse the same groups for permission as we do for assignment, since they are likely to coincide.  Therefore, if we assign an item to a group of people (where any one of them can "take the assignment"), then it makes sense that the same group of people will have permission to modify the work item in that stage.  Two purposes, one group.
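
    To make rules (b) through (d) a little more concrete, here is a rough sketch of how per-stage permissions might be represented and checked.  None of these type or member names come from our actual engine; they are made up for illustration, and the group memberships are assumed to come from the cached directory lookup mentioned in rule (a):

    // Hypothetical sketch -- not the real engine's API.
    using System;
    using System.Collections.Generic;

    enum StagePermission { None, ReadOnly, ReadWrite }

    class WorkflowStage
    {
        public string Name;

        // Rule (c): each stage carries its own permission settings, keyed by AD security group.
        public Dictionary<string, StagePermission> GroupPermissions =
            new Dictionary<string, StagePermission>();
    }

    class WorkflowSecurity
    {
        // Effective permission for a user is the strongest permission granted to any of
        // the user's groups in this stage.
        public static StagePermission PermissionFor(WorkflowStage stage, IEnumerable<string> userGroups)
        {
            StagePermission best = StagePermission.None;
            foreach (string group in userGroups)
            {
                StagePermission p;
                if (stage.GroupPermissions.TryGetValue(group, out p) && p > best)
                    best = p;
            }
            return best;
        }

        // Rule (b): the model may assign a work item to a person only if that person
        // will have write permission in the stage where the work will be done.
        public static bool CanAssign(WorkflowStage targetStage, IEnumerable<string> assigneeGroups)
        {
            return PermissionFor(targetStage, assigneeGroups) == StagePermission.ReadWrite;
        }
    }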

    If you have opinions about the proper rules for managing access to workflow stages and the documents they contain, post a response to this message.  I'd love to hear about it.

  • Inside Architecture

    C#: a way to get around the lack of multiple implementation inheritance

    • 9 Comments

    I run across this question from time to time: why is there no multiple inheritance in C#, as there was in C++?  Personally, I've never needed it, but I do see value in it, and there are some times when it would appear to be handy.

    There is a workaround to this problem that is not difficult to do.  You get some of the same abilities as multiple inheritance, with a few structural advantages.  Before I describe the solution (below), let me frame the problem so that we are all using the same terms.

    We have two concrete classes.  They are both derived from different base classes (not interfaces).  You want to give both of them a set of common methods and properties that meets a defined interface or base class (multiple inheritance).

    Note: if you just want to inherit from an interface and implement the properties and methods directly in the class, you can do that now.  That does not require a workaround.  In other words, it is perfectly acceptable to do this:

       public class MyClass : System.Web.UI.Page , MyNewInterface
       { ... }

    So the problem only really arises if you have two or more BASE CLASSES that you want to put on that line... something you cannot do in C#.  I will treat one base class as "given" and one as "add-on".  It really doesn't matter, structurally, which one is which.  There is only one "given" class.  There can be as many "add-on" base classes as you want.

    So, you use a simple composition pattern (that I cannot find the name for... if someone knows the name, please send it to me).

    Step 1) You need an interface.  This defines a single getter property with a name derived from the base class name of the class you want to add on.

    interface IAddOn
    {
        // define a getter to return one "AddOnClass" object
        AddOnClass GetAddOn
        {   get;   }
    }

    Step 2) Ensure that your concrete class implements IAddOn

    public class MyMultiConcrete : MyBaseClass, IAddOn
    { .... }


    Step 3) Create a factory object that will return an object of type 'AddOnClass' (optional, but good practice).

    public class AddOnFactory
    {
       public static AddOnClass NewAddOnObject()
       {   return new AddOnClass();   // factory method
        }

    }

    [edited] I want to add one comment here.  You don't have to return a type 'AddOnClass.'  In fact, if the add-on class is an abstract class, you cannot.  You would need to derive a class from AddOnClass and then instantiate one of those types.  If you created this class specifically to be called from your new type, then you have a pair of classes that work together.  The derived add-on has access to the protected members of the add-on type.

    In this case, you can pass in a reference to the calling class:

        public static AddOnClass NewAddOnObject(IAddOn Caller)
        {
            return new ConcreteAddOnClass(Caller);   // factory method
        }
    This gives the concrete add on the ability to directly manipulate the 'container' when you call a property or method on it, as shown below.  [end edit]
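
    For illustration, a hypothetical ConcreteAddOnClass along these lines might look like the sketch below.  The name and members are invented; the only point is the constructor that accepts the caller:

     public class ConcreteAddOnClass : AddOnClass
     {
        // reference back to the containing object, so the add-on can call into it
        private IAddOn _caller;

        public ConcreteAddOnClass(IAddOn caller)
        {
           _caller = caller;
        }

        // any abstract or virtual members of AddOnClass would be implemented or
        // overridden here, using _caller as needed
     }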

    Step 4) Declare, in your concrete classes, a private field to hold the reference:
          private AddOnClass _AddOn;

    Step 5) In the constructor for your concrete class, call your factory object to return an object of type AddOnClass.  Assign the reference to the private _AddOn property.

       public MyMultiConcrete() : base()
       {    // do normal constructor stuff here...
             _AddOn = AddOnFactory.NewAddOnObject();
       }

    Step 6) Define the property that returns the add-on object
         public AddOnClass GetAddOn
         {   get { return _AddOn; }   }

    /// you are done ///

    Now, every place in your calling code where someone needs a method or
    property from the add-on type, they will reference it this way (where
    myMultiConcrete is an instance of MyMultiConcrete):
       myMultiConcrete.GetAddOn.MyAddOnMethod();
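
    Putting the steps together, a minimal end-to-end sketch might look like this.  MyBaseClass, AddOnClass, and MyAddOnMethod are placeholder names invented for the example; in real code they would be your "given" and "add-on" base classes:

    using System;

    public class MyBaseClass                      // the "given" base class
    {
        public void BaseMethod() { Console.WriteLine("base behavior"); }
    }

    public class AddOnClass                       // the "add-on" base class
    {
        public void MyAddOnMethod() { Console.WriteLine("add-on behavior"); }
    }

    public interface IAddOn                       // Step 1
    {
        AddOnClass GetAddOn { get; }
    }

    public class AddOnFactory                     // Step 3
    {
        public static AddOnClass NewAddOnObject() { return new AddOnClass(); }
    }

    public class MyMultiConcrete : MyBaseClass, IAddOn   // Step 2
    {
        private AddOnClass _AddOn;                // Step 4

        public MyMultiConcrete() : base()         // Step 5
        {
            _AddOn = AddOnFactory.NewAddOnObject();
        }

        public AddOnClass GetAddOn                // Step 6
        {
            get { return _AddOn; }
        }
    }

    class Demo
    {
        static void Main()
        {
            MyMultiConcrete obj = new MyMultiConcrete();
            obj.BaseMethod();                     // inherited from the "given" base class
            obj.GetAddOn.MyAddOnMethod();         // delegated to the "add-on" base class
        }
    }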

    One nice thing to consider: we can do this for as many types as we want within a class.  Therefore, we could theoretically inherit from dozens of base classes.

    I hope you have as much fun using this pattern as I have had describing it.  I doubt that I'm the first person to identify this pattern, so if someone can send me a link to another name or description, I will be grateful.  If not, perhaps I'll go to ChiliPLoP and present it :-).

  • Inside Architecture

    How is workflow different from a Finite State Automaton?

    • 0 Comments

    After showing a workflow diagram to a co-worker, he asked me if I could tell him how this is any different from basic Finite State Automata (FSA).  To be honest, I had to think about it for a few minutes to get my thoughts around this, but there is a fairly big difference.

    For those of you who aren't familiar with FSA theory, this is a segment of computer science that goes back to the earliest days of computing.  The idea is this: arrange your input into a stream of tokens of the same size.  Then, keeping a state, read each token.  The token will dictate the state you move to next.  Side effects could be added to a transition.  These side effects, taken together, were the actual functional code of the system.
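
    As a toy illustration of that idea (not from any real compiler), here is a tiny FSA in C#: a transition table keyed by (state, token), with a side effect attached to each transition:

    using System;
    using System.Collections.Generic;

    class TinyFsa
    {
        static void Main()
        {
            // (current state, token) -> next state.  States and tokens are invented for the example.
            var transitions = new Dictionary<string, string>
            {
                { "Start|a",     "Reading" },
                { "Reading|a",   "Reading" },
                { "Reading|end", "Done" }
            };

            string state = "Start";
            foreach (string token in new[] { "a", "a", "end" })   // the input stream of tokens
            {
                string next;
                if (!transitions.TryGetValue(state + "|" + token, out next))
                    throw new InvalidOperationException("Unexpected token: " + token);

                // The side effect on the transition -- taken together, these are the
                // actual functional code of the system.
                Console.WriteLine("{0} --{1}--> {2}", state, token, next);
                state = next;
            }
        }
    }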

    In a strange way, we've all moved to FSA programming when we moved to event driven application programming.  The event handler is essentially a place to put logic for reacting to an input (the event) in a state (the global environment).  It's a bit different, though, in the sense that our input isn't in a stream.  We can't look ahead at the next token or move in the other direction over a series of tokens (important parts of compiler design).

    In that respect, Workflow modeling is closer to event driven programming than it is to FSA theory, because we don't have that input stream.  We can't look ahead. 

    On the other hand, unlike event driven programming, most workflow systems use the FSA approach to modelling, where you look at the behavior of the system by creating a graph showing the stages of work, and the transitions from stage to stage. 

    However, what really distinguishes Finite State Automata from workflow programming, in my mind, are the three layers of abstraction inherent in Workflow analysis. 

    Finite State Automaton development requires a pre-processing step, where you take the input and interpret it as a series of tokens in a language.  In compiler theory, we call this lexical analysis.  This analysis happens at only one layer of abstraction (usually at a very low level: character sequences).  Therefore, the structure of computer languages has to be represented as a parse tree: a complicated hierarchical structure that "represents" the analyzed token stream.  The FSA is done when the parse tree is done.  It isn't involved in actually using that tree to create the target code.

    With workflow analysis, there are three layers of abstraction: Business unit level, Business process level, and Workstep level.  All three are distinct (but related).  All can be described with rules and constraints.  All have a specific purpose, and each one can be represented as a state graph.  The mathematics are considerably more complex.  (Only the lowest level has to be deterministic).   There are many PhD level practitioners of workflow modeling who can attest to the fact that workflow is much more complicated and complex than the fundamental concept of a Finite State Automaton.

    Why?

    Because FSA's interpret logic... nothing more.

    Workflow modeling deals with human behavior.

  • Inside Architecture

    On XML Models of Process

    • 4 Comments

    XML is an interesting language, but is it a useful one for describing a process?

    We have multiple competing standards for workflow and collaboration.  We have BPEL, XPDL, SWFL, XRL, XScufl, and custom XML workflow models developed for the XFlow, AntFlow, Agile, YAWL, and OpenWFE tools.  (If anyone is looking for a good idea for a masters thesis in Workflow, they should create a comparison of these different languages, catalog features, and create a roadmap for the rest of us).

    Just to add to the fun, rather than learn an existing modelling language, I wrote my own for an internal tool I'm working on.  Wise?  Probably not.  In keeping with the philosophy of the project?  Yes.  Most of the languages I mention above are the creation of committees and have many features designed for multiple vendors to extend the core set.  I needed less (features demanded by existing Java projects) and more (features specific to my Microsoft-based solution).

    I also needed a language feature that I didn't see anywhere else, including on the workflow patterns homepage: native support for ad-hoc workflow.  This means allowing a user the right to change the routing rules in the middle of a workflow process, while the engine maintains manageability.  No mean feat.

    So, inspired by YAWL, and angry at the limitations of the competing partner products that we evaluated, our team wrote another XML based workflow model structure. 

    I learned a few things that I want to share:

    1. XML is an interesting language, but not a forgiving one.  It is easy to create bugs by making small errors in the specification of a schema, where the downstream ripples can be quite large.  If I had to do this all again, I'd better appreciate the time it takes to create and debug the schema itself.
    2. I am far from the first person to tackle the idea of Workflow.  Perhaps it would have been better to start with XPDL (or a subset thereof).  My customers would have a better ability to switch away from my project later, which is one of the stated goals of the project.  I, on the other hand, could have leveraged the workflow expertise already built into a schema created by workflow experts.
    3. XML is an old-fashioned declarative language.  It is about as advanced as C (not C# or Java).  Therefore, while there are many things you can do in XML, you have the freedom to do some pretty stupid stuff.  In addition, you don't have the constructs to do some elegant stuff.  By comparison, XML is an infant.  The effect: the resulting code is difficult for a human being to create, read, follow, analyze, debug, test, or support.
    4. XML parsers are touchy.  They remind me of Fortran-77 compilers.  First error and they are done.  You can't count on an error message from a parser to be all that helpful. 
    5. Tools for generating XML are new, but getting better.  Two commercial tools worth mentioning: Microsoft Infopath (the most underrated, creative, well-built, xml forms system I've seen), and Altova Stylevision (an interesting product that suffers primarily from the lack of imagination of its original designers, not the details of the implementation).  Add Visual Studio for Schema generation and you have almost everything you need.
    6. Automatic mapping between XML and databases: a new and immature field.  The current bits in SQL Server 2000 are OK, but I'm looking forward to better capabilities in Yukon and other tools.  Right now, I wouldn't count on using automatically generated or automatically parsed XML as a way of reducing struggle and pain on a development project.  You will only replace one kind of agony with another.
    7. Like any code-based method, process declaration in XML inherently describes only one aspect of a process: the static connectivity between pre-declared states.  The dynamic aspect is not well described or modeled when you focus on the static.  Some folks have tried to focus on a dynamic model exclusively, but the resulting description was even harder to understand (refer: Biztalk HWS).  In other words, the model, in XML, isn't clean enough to provide to business users.  A LOT of translation is required.  XSLT comes in very handy.
    8. Even with these drawbacks, I can't imagine a better way.

    So, XML it is.   And for now, I'm still using my proprietary schema for workflow models.  Perhaps, someday, I will switch over to BPEL or XPDL.  But not this day.
