This is an interesting question in business IT. I just sat through a long meeting discussing requirements for a project that is under way. The project started without a detailed list of requirements written out.
So, the business adds a requirement that no one was aware of. I made the mistake of using the words "change request," which led to a ROUSING discussion. The business didn't want to start adding "process" when they hadn't been required to follow a requirements management process to date. Apparently, just using the words was a shock.
Lesson to learn: if you EVER want to control your Business IT project, don't let any progress occur without a common agreement about the amount of control, and stick to that agreement for as long as possible.
There's a layer of stomach lining I'm never getting back.
I ran across a posting by Robert Martin on the Coding Dojo and I admit to being intrigued. I'm running a low-priority thread, in the back of my mind, looking for good examples of kata to use in a coding dojo.
Here's one that I ran across in a programming newsgroup.
You have an app that needs to be able to read a CSV file. The first line of the file specifies the data types of the fields in the remaining lines. The data type line is in the format
You must use a decorator pattern. The decorator must be constructed using a builder pattern that consumes the data type line. The output is a file in XML format.
Any row that doesn't match the specification will not produce an output line. The output will pick up with the next line. The file, when done, must be well-formed.
Of course, with a kata, the only thing produced at the start is the set of unit tests (and perhaps, in the interest of time, the frame of the classes from a model). The rest is up to the participants.
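To give a sense of one possible solution shape, here's a minimal sketch in Python (not the VB.Net/C# context of the original discussion). Everything here is my own assumption: I'm guessing the data type line is a comma-separated list of type names like `int,string`, and the element names and identifiers are invented for illustration.

```python
import xml.sax.saxutils as sx

# Each decorator wraps the next stage of the pipeline; "process" converts
# its own field, then delegates inward. A bad field raises ValueError.
class FieldDecorator:
    def __init__(self, index, inner=None):
        self.index = index
        self.inner = inner

    def convert(self, field):
        raise NotImplementedError

    def process(self, row, record):
        record[self.index] = self.convert(row[self.index])
        if self.inner is not None:
            self.inner.process(row, record)

class IntField(FieldDecorator):
    def convert(self, field):
        return int(field)

class StringField(FieldDecorator):
    def convert(self, field):
        return field

# Builder: consumes the data type line and assembles the decorator chain.
def build_pipeline(type_line):
    types = {"int": IntField, "string": StringField}
    chain = None
    for index, name in reversed(list(enumerate(type_line.strip().split(",")))):
        chain = types[name.strip()](index, chain)
    return chain

def csv_to_xml(text):
    lines = text.strip().splitlines()
    pipeline = build_pipeline(lines[0])
    out = ["<rows>"]
    for line in lines[1:]:
        record = {}
        try:
            pipeline.process(line.split(","), record)
        except (ValueError, IndexError):
            continue  # row doesn't match the spec: no output line for it
        fields = "".join(
            f"<f{i}>{sx.escape(str(v))}</f{i}>" for i, v in sorted(record.items())
        )
        out.append(f"<row>{fields}</row>")
    out.append("</rows>")  # file stays well-formed even when rows were skipped
    return "\n".join(out)
```

In a dojo, of course, you'd start from the unit tests instead: assert that a conforming row produces an element, that a malformed row produces nothing, and that the final document is well-formed.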
Comments are welcome, of course.
Sometimes, the first person to speak up, and point out a problem, gets to be involved in solving it. I find that cool. (Call me crazy).
Last year, I decided to try to get various folks within Microsoft IT to discuss a naming standard for web services that could be used across the enterprise. My attempt didn't get a lot of notice, and fell pretty silent. However, the issue woke up recently now that we have web services that are starting to deploy, in production, across the enterprise. Folks want their namespaces to be right, because changing the namespace later often means that the client app has to be recompiled (or someone gets to edit the WSDL file).
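To illustrate why the namespace matters, here's a hypothetical WSDL fragment. The URI scheme shown (organization / domain / date-based version) is a common convention, not the standard under discussion, and the names are invented:

```xml
<!-- Hypothetical: the targetNamespace bakes organization, business domain,
     and version into the service's identity. Clients bind to this URI, so
     renaming it later is a breaking change for every consumer. -->
<definitions
    targetNamespace="http://schemas.example.com/finance/invoicing/2005/01"
    xmlns="http://schemas.xmlsoap.org/wsdl/">
  <!-- ... -->
</definitions>
```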
So, traction is starting to develop. I am hopeful. I'll post progress here...
I've been an architect for a while now, but, as far as being an architect within the walls of Microsoft, today was day one.
Already, I've run into an interesting issue: when is it better to forgo the code of the Enterprise Library and roll your own, vs. using the existing code?
Roll your own what? Well, the MS Enterprise Library is a set of source code (in both VB.Net and C#) that provides an infrastructure for business applications. The "blocks" that are provided include: caching, configuration, data access and instrumentation, among others.
I know that many people have downloaded the application blocks. I don't know how many people are using them. I suspect far fewer.
I took a look at the blocks myself, and my first impression: unnecessary complexity. Big time. This is what comes of creating a framework without the business requirements to actually use it. To say that the code has a high "bus" factor is a bit deceptive, though, because along with the code comes ample documentation that should mitigate the difficulty that will inevitably come from attempting to use them.
On the other hand, the code is there and it works. If you have a project that needs a data access layer, why write a new one when a perfectly workable, and debugged, application block exists for it?
Why indeed. I had a long discussion with a developer today about using these blocks. I will try to recount each of the discussion points:
Please... can someone else come up with any better arguments for NOT using the application blocks in the enterprise library? I'm not seeing one.
Craig McMurty, in his recent posting on Indigo, indicates a couple of different scenarios for folks who are developing software today with an eye towards the impending release of Indigo. It is a valuable article and quite interesting. However, Craig missed the integration scenario completely, which is unfortunate.
The scenarios covered:
This is useful if all apps are islands and never need to share data with one another. Unfortunately, that is not the world I live in. As an architect, it is my responsibility to ensure that applications are created with data integration built in.
The primary patterns of data integration are somewhat technical but, aligned with Craig's approach, they would fall into the following scenarios:
I will do some digging to see if I can determine if Indigo has a story for these scenarios or if they are simply covered by SQL Notification Services, Biztalk, DTS, and WSE (respectively).
I was discussing the notion, the other day, that a defect in design may be expensive, but a defect in the fundamental assumptions of a project can be catastrophic. In other words, if you are doing the thing wrong, you can fix it, but if you are doing the wrong thing, you get to start over.
So when, in a meeting on Enterprise Architecture, the speaker asked the audience if anyone has ever delivered a project only to find, a short time later, that it failed, I was not surprised when a good percentage of folks raised their hands. We've all been on projects where we thought we were doing the right thing, and doing it well, only to find out after delivery that we had screwed up.
One thing that did surprise me, though, was one gentleman who mentioned that he had been on a project that delivered, and he didn't find out for six months that the project had failed... not because he was "out of the loop" or took a very long vacation, but because the customer didn't know that the project was a failure for that long.
That is scary.
It occurs to me that I haven't seen anything like that on agile projects. The hallmark of an agile project is that you stop, OFTEN, and show the results to the customer. Not the marketing person. Not the project manager... the customer. You get feedback. And you make changes. Change is embraced, not avoided.
So, if you are doing the wrong thing, it should be obvious early. In fact, it could become obvious so early that the team hasn't spent the lion's share of the original funds yet... still time enough to fix something and get back on track. This is Great! If you are going to waste money, find out early and stop. Then, reorient the investment. It is far worse to develop the entirely wrong application than it is to develop what you can of a good one.
That doesn't happen with waterfall projects. On the other hand, the waterfall project has the dubious advantage of actually delivering the wrong thing. Teams get rewarded on the quality of the delivery, not on the alignment between the delivery and the actual needs. Developers get gifts and good marks for "getting it done right" but not for "getting the right thing done". That won't be discovered until later, and then the dev team will deflect the blame to the analysts who collected the requirements.
And this works against the Agile methods. Even though Agile methods spend money better, they don't get to that end date when everyone throws up their hands in joy and says "We Got It Done." They don't get the prize at the end that people crave: the promotion out of waterfall h_ll. The right to go home on time. The plasticine trophy for the windowsill.
So if you want to know why agile methods aren't fostered more often, or more closely, look no further than the "ship party" that roundly celebrates the delivery of a dead horse.
To this end, I propose a new practice for agilists: the kill party... where everyone celebrates when a bad idea is killed before it consumes buckets of shareholders' cash.