Many different products claim to be effective for Enterprise Application Integration. There are about as many products as there are ways to integrate applications.
The first and most common approach to integration is data integration. I've seen integration from the standpoint of common data tables from multiple sources, mined data in common reports, and even "star data dispersion" where one "domain data" application acts as a source of data for a host of others. Domain data alignment is essential to integration, but it is only the first step.
After you have created a structure for sharing common data values across applications, and keeping them up to date, you need a way to share EVENTS. This is more important than sharing code or sharing services. You need to have a common understanding of what events are important to the business, and a common definition of the conditions that define each event.
For example: let's say you have a social service case management system running in a state agency responsible for child welfare. A concerned citizen calls a hotline to report that the child in the next apartment has been crying for hours; they suspect possible abuse. Your system needs to assign a task to someone to investigate. Do you create a case even though you don't know the name of the child? If there is no abuse, should it be a case or just an unattached incident report? If your system doesn't know the "magic" condition that means "a case is created," then you have no way to share the "NewCase" event.
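One way to picture this is to encode the "case is created" condition in exactly one place, so every system shares the same definition of the event. This is a minimal sketch, assuming a hypothetical `Report` shape and an invented intake policy (the real rules would come from the business):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Report:
    """An incoming hotline report (fields are illustrative)."""
    suspected_abuse: bool
    child_name: Optional[str]  # may be unknown at intake
    received_at: datetime


def should_create_case(report: Report) -> bool:
    # The "magic" condition lives in exactly one place, so every
    # consuming system agrees on when a case exists. (This policy is
    # hypothetical: suspicion of abuse alone is enough, even when the
    # child's name is not yet known.)
    return report.suspected_abuse


def intake(report: Report) -> Optional[dict]:
    # Emit a shared "NewCase" event when the agreed condition is met;
    # otherwise the report remains an unattached incident.
    if should_create_case(report):
        return {"event": "NewCase",
                "occurred_at": report.received_at.isoformat()}
    return None
```

The payload here is a plain dict; in practice the event schema itself would be another shared definition.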
The obvious question then is how and why to share this event. Do you have a reporting system that tracks the number of cases assigned to each worker? Do you have a financial system that tracks case costs? How about a system that initiates a long-running process to look up information on the suspect's address, to see whether a known felon frequents the location? Would you share the event with these systems? There could be a lot of value there.
In a loosely coupled system, you really shouldn't care what system subscribes to your events. You should send your event to a broker and let it decide who to send it to. More importantly, you need a way to ensure that a system that is not online at the moment has a way to get the event when it returns to operation. Perhaps you are using a system of cooperating components to coordinate these messages... or perhaps you have a single point clustered broker. Either way, you are implementing a publish-subscribe messaging system. If it is truly loosely coupled, you have a message bus.
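The essentials of that broker can be sketched in a few lines. This is a toy, in-memory illustration only (no clustering, no persistence); the point is the shape of the decoupling: publishers never name a recipient, and a subscriber that was offline drains its queue when it returns.

```python
from collections import defaultdict, deque


class Broker:
    """A toy publish-subscribe broker (illustrative, in-memory only)."""

    def __init__(self):
        # topic -> {subscriber name: queue of undelivered events}
        self._queues = defaultdict(dict)

    def subscribe(self, topic, subscriber):
        self._queues[topic].setdefault(subscriber, deque())

    def publish(self, topic, event):
        # The publisher is fully decoupled: it hands the event to the
        # broker, which fans it out to every subscriber's queue.
        for queue in self._queues[topic].values():
            queue.append(event)

    def poll(self, topic, subscriber):
        # A returning subscriber drains everything it missed while
        # it was offline.
        queue = self._queues[topic].get(subscriber, deque())
        events = list(queue)
        queue.clear()
        return events
```

A real message bus adds durable storage, delivery guarantees, and routing rules, but the contract (publish to a topic, not to a system) is the same.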
The point I want to make is this: these two integration mechanisms have nothing to do with web services.
I can't count the number of times I've had to correct folks who talk about integration only in terms of web services. Web services are useful for a third kind of integration: the services bus. They are NOT the cornerstone of either data or event integration, which are centered around data coordination and event brokering, respectively. While web services can be used as a communication endpoint, especially for the message bus concept, the fundamental technology is not web services or even SOAP. It's loosely coupled brokering.
Web services, and the new Indigo framework, are useful for creating a services bus. A services bus is a mechanism for registering, managing, and serving up a list of services. If a service has a way to advertise itself, and if an application has a way to find services that match its needs, then a services bus can connect the two, and allow an application (or a user) to consume a service without knowing who wrote it, in what language, or what server it's running on.
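The register-and-discover contract can be sketched without any web-service plumbing at all. This is an illustrative stand-in (the capability name and lambda endpoint are invented): a service advertises itself under a capability, and a consumer looks up the capability rather than the provider.

```python
class ServiceRegistry:
    """A toy services-bus registry (illustrative only)."""

    def __init__(self):
        self._services = {}  # capability name -> callable endpoint

    def register(self, capability, endpoint):
        # A service "advertises itself" by registering under a
        # capability name.
        self._services[capability] = endpoint

    def lookup(self, capability):
        # A consumer finds a service by what it does, not by who
        # wrote it or where it runs.
        if capability not in self._services:
            raise LookupError(f"no service advertises {capability!r}")
        return self._services[capability]


registry = ServiceRegistry()
registry.register("address-history", lambda addr: f"history for {addr}")
service = registry.lookup("address-history")  # caller knows only the capability
```

In a real services bus the endpoint would be a network address plus a contract (WSDL, say) instead of a local callable, but the lookup-by-capability idea is the same.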
This is very useful, and in some ways, fundamental to integration. However, it is only one aspect of integration. The services bus complements the message bus and the shared data repository service. It does nothing to supplant them. On the contrary, I would posit that a services bus, without shared domain data or a mechanism for retrieving it, is fundamentally crippled.
So, the next time someone says "Use Web Services for Integration", think to yourself: "that's part of the story, but not all."
After all, a barber shop quartet sounds much better if all four vocalists are in the room.
I'm seeing the difference more clearly than before: how a team can use Feature Driven Development for development, and how that is different from using FDD for project management. How could I have missed this?
For Development, FDD means:
For Project Management, FDD means:
So... why did I detail all of this? Because I ran across a situation where a team was using FDD for dev, but PM wasn't using FDD for planning or management. It's a little like being hungry at a buffet table.
Enough for now. I hope this makes sense.
Readers of this post will find a "case study" that allowed this author to directly compare Feature Driven Development to the traditional WBS when performing project planning. This information is useful if you would like to improve your software development processes, especially project and program management, or if you are considering the claims of agile software development.
It's not often you get to make direct comparisons between Feature Driven Development and the composition of a traditional Work Breakdown Structure when doing project planning. In fact, it's downright rare. There is an ongoing discussion in Project Management and Standards-driven development circles: Despite the claims of improvement in communication and understanding, is FDD measurably better than traditional WBS? Can the claims be proven?
Well... I have a direct comparison. While there are still variables in the equation that are not compensated for, most of the variables are completely factored out, making this direct comparison between Feature Driven Development and a traditional WBS instructive.
What do I mean by "Traditional WBS?"
In the process of planning a project, the first step is collection of high-level requirements. Of course, requirements are a fascinating area all of their own. Usually the team creating the requirements learns during the process, causing the requirements to shift radically for a while.
However, once the requirements are described, the project team will take them and literally walk through them, one item at a time, and break the requirements down directly into tasks for the project plan. These tasks will be grouped into logical units, usually for the sake of creating delivery milestones. Tasks are estimated and the estimates are balanced against the schedule to account for dependencies.
What comes back is cost. The project team comes back to the requirements team and says, in effect, "We will deliver these 21 requirements for 1455 hours of effort (across 4 people). We will deliver the system to production in 10 weeks." The cost is 1455 hours. In software development IT organizations, time is the measurement of cost. Note: outsourcing is no different. Outsource vendors either charge by the hour or they will charge a fixed price based on their estimates of the hours. The difference is where the risk lies. The cost is still a function of the estimated hours.
I call this the traditional model because this is the model that I was taught in college, and which, to the best of my knowledge, is fairly similar to the methods currently espoused by the PMI (although I'm sure I've described this process in a far less rigorous manner... my apologies to my PMP friends).
What is FDD?
Feature driven development is exceptionally similar, really. So similar, in fact, that many folks will mistakenly discount FDD as minor or unimportant.
FDD teams will pick up at the same point: when the specs are described and delivered to the development team. However, at this point, the team does not break them into tasks. The team breaks them into features. A feature is the smallest unit of deliverable code that can be demonstrated to the customer. This step is missing in traditional WBS processes.
Each feature is then described as a story. Stories are subsets of use cases. They describe the method that a person can use to demonstrate the feature. Each feature must be described as a separate story. (It is OK for stories to depend on one another.)
Then, for each story, the team can describe the design needed to implement the story, and can create a list of tasks. The project plan describes the stories as the milestones, and will in fact create milestones and iterations BASED upon the list of stories that can be completely coded during the cycle.
What comes back, of course, is cost. However, it is not the same cost as above. Instead of saying "The cost of 21 features is..." the FDD team returns with "The cost of each feature is...". The cost for each feature is described to the customer. This small intermediate step is all that is needed to provide this information. It is not expensive. In fact, once the dev lead learns this process, it is quite natural and can often take less time, since the design can be broken out by story, allowing more than one person to work on the "design stories" in parallel.
Big deal? Why should I care? I'll get to that...
How did I get to make a comparison between the two?
While we'd all like to imagine that everyone is on the same page, all the time, the realists among us know better. It is normal for different teams, who should be working towards a common goal, to get a little out of sync. This is the case in our group. We have about five teams, all running in parallel with their own objectives.
These objectives were supposed to be aligned. While they were complementary, they were not really aligned, in that there were a few features we had promised to the business that we were not delivering. At a six month review, our executive sponsor pointed this out, and we had some choices to make.
So, we agreed to deliver some of the expected features in time for a fixed business event that was already on the calendar. We pulled resources, to the point of effectively shutting down many projects, and put nearly all of our resources on three projects, all of which had to work together to deliver functionality for this fixed date.
In Project Management parlance, we had fixed resources. We could not move the delivery date. Our flexibility was scope. We would choose the features to fit the schedule. Given our timelines, there would be no way to bring on people in an effective manner.
So how does this lead to a comparison of FDD and Traditional WBS? Two of the three projects were being managed in a traditional manner. One was an agile project (Scrum) and had been managed using FDD for some time.
All three projects were pulled together but there was no time to retrain anyone. Our charter: use whatever method you know to create the cost so we can get the customer to sign off on the scope as quickly as possible. Two teams delivered traditional breakdowns. One team delivered a breakdown based on features. The customer was the same. The timeline was the same. The resources were similar: all were already on their respective teams, all were of equivalent caliber, and all were employees of the company. The same culture applied to all of them. All were led by the same overall integration team (of which I am a member).
Here's how it went.
Our business partners delivered the requirements very quickly. As you'd expect, the requirements were at a very high level and some of the distinctions that would come back to haunt us later were vague or poorly understood in that initial document. Each team took the same document and went off to figure out what features were to be delivered.
The FDD team had already been using stories, and had a "backlog" of stories that had not been included in existing releases because their priority was not high enough. So, using this new requirements document, the FDD team wrote about 25 new stories. In addition, the team reviewed the existing stories in the backlog and selected about 15 that they felt would be beneficial in the new environment. We then took a very high-level guess as to the number of hours needed to write the code for each feature. (Literally: the entire estimation process took two hours between the dev lead and the architect.) We also did a little "napkin math" to determine about how many hours of dev time the team was going to get in the delivery cycle. The total of all the backlog item estimates far exceeded the estimated iteration -- as we expected they would.
The FDD team then sat down with the user's representative and had him assign a business value (1 to 100) for each backlog item. All this was done in an Excel spreadsheet. Took less than two hours.
The FDD team then simply sorted the list of stories by the value, and reviewed it with him, in the same meeting. Using our "napkin math," we drew a line that represented the cut-off. Everything above the line was "tentatively in scope" while everything below the line was out. He re-ordered a few things now that he had a line, and left the meeting with a very good idea of what he was going to get.
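The spreadsheet exercise above amounts to a simple greedy prioritization. Here is a sketch with hypothetical stories and numbers (the real list and estimates were, of course, the team's own):

```python
def draw_the_line(stories, capacity_hours):
    """Sort backlog stories by business value and mark the cut-off.

    `stories` is a list of (name, business_value_1_to_100, est_hours).
    Everything that fits within the estimated capacity is tentatively
    in scope; everything else falls below the line.
    """
    ranked = sorted(stories, key=lambda s: s[1], reverse=True)
    in_scope, spent = [], 0
    for name, value, hours in ranked:
        # Walk down the value-sorted list, admitting stories until
        # the "napkin math" capacity is exhausted.
        if spent + hours <= capacity_hours:
            in_scope.append(name)
            spent += hours
    return in_scope, spent


# Hypothetical backlog: (story, business value, estimated hours)
backlog = [("audit trail", 40, 80),
           ("case search", 95, 120),
           ("bulk export", 70, 60)]

scope, hours = draw_the_line(backlog, capacity_hours=200)
# "case search" and "bulk export" fit within 200 hours (180 total);
# "audit trail" falls below the line.
```

The value of the exercise isn't the algorithm; it's that the customer can see exactly which features his rankings put above or below the line, and re-order them on the spot.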
The T-WBS teams did as you'd expect... they asked questions. There was some vagueness in the spec, and they wanted to get really good estimates, so they spent a few days figuring out what questions they wanted to ask, and then another week getting clarifications. This process was painful and required a lot of teeth-grinding and more than a few raised voices.
While the T-WBS team was arguing, the FDD team was taking the list of stories "above the line" and creating design stories. A design story is a description of the new functionality to be added to the system to support the feature. It is less than a paragraph long. From each design story, the FDD team created a list of tasks, and added a few "risk tasks" for situations where the work would fall into highly complex areas. In essence, they refined the estimates... however, they didn't do this for all of the 40+ stories. Only for the 12 or so that were "above the line." Some questions were asked, of course, but not for the functionality that wasn't going to be delivered.
With the refined estimates, the FDD team had to move the line. We had another meeting with our user representative and he signed off on the scope. The FDD team reached consensus on the list of features to deliver, and began work.
The T-WBS teams continued to argue, and meet, and discuss, and question. Finally, a full cost was available to the customer. The cost was too high, and the delivery dates were not aligned with their expectations. Both teams had to hustle to come up with ways to cut costs and deliver early. This was tough because, by this time, the coding cycle was already half finished. The teams had been writing code to a "partial spec" for two weeks, and were now in the process of "correcting the course" to hit the desired functionality. There was simply no way to cut scope without slowing things down.
So, they took time away from test. (Sound familiar?)
The project will be delivered in May. I'll post the results then.
I hate to argue. I'm the kind of person who looks at an argument as a lost opportunity to understand one another. The T-WBS teams spent far too much time using words, and far too little time reaching consensus. This is not for lack of trying or lack of skill. The Project Managers were certified and talented and all-around excellent in their roles.
The process was the problem. The FDD team provided the information that the customer needed, at the time that they needed it. The T-WBS team did not. It was as simple as that. I live on the IT side and I'm not impressed with myself in this process. If I could go back in time, and lead each team to use FDD, I'm sure we could have delivered consensus much sooner, and with much less stress.
What can you do with this information?
We had three projects. One customer. One culture. Similar development teams. Similar requirements on each team. Yet, one team reached consensus far more easily than the other two. There were some vague requirements in all three projects. The only real difference: the use of Feature Driven Development on the successful team. Note: all three teams are delivering the code using similar processes (short daily meetings, short iterations, integrate early and often). While some of these practices are essentially similar to agile methods, the overall project is entirely waterfall.
If you have heard claims of great productivity gains from Agile development (like XP and Scrum), it is time to ask yourself: how much of that productivity comes from Feature Driven Development, and how much comes from the other practices? As a developer, many of the other practices are very important to me (like Test driven development, daily delivery commitments, continuous integration, and frequent demonstration to the customer). However, from a pure planning standpoint... from the PM standpoint... FDD is huge.
You can add FDD to any project. The changes are minor. One of the other practices of Agile development helps to reinforce FDD, and that is demonstration. At the end of each short cycle, or milestone, the developers have to personally demonstrate the feature directly to the customer. If your developers know this, they will make sure that all of the steps needed to actually demonstrate the feature are costed in the plan.
I would recommend this practice (demonstration) as a pair to go with using FDD in the planning stages.
Consider this as a lesson learned. I know that I do.