Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

Posts
  • Is it a change request?

    • 3 Comments

    This is an interesting question in business IT.  I just sat through a long meeting discussing requirements for a project that is under way.  The project started without a detailed list of requirements written out. 

    So, the business added a requirement that no one was aware of.  I made the mistake of using the words "change request," which led to a ROUSING discussion.  The business didn't want to start adding "process" when they had not been required to follow a requirements management process to date.  Those two words came as a shock.

    Lesson to learn: if you EVER want to control your Business IT project, don't let any progress occur without a common agreement about the amount of control, and stick to that agreement as long as possible.

    There's a layer of stomach lining I'm never getting back.

  • Coding Dojo suggestion: the decorator kata

    • 3 Comments

    I ran across a posting by Robert Martin on the Coding Dojo and I admit to being intrigued.  I'm running a low-priority thread, in the back of my mind, looking for good examples of kata to use in a coding dojo.

    Here's one that I ran across in a programming newsgroup.

    You have an app that needs to be able to read a CSV file.  The first line of the file specifies the data types of the fields in the remaining lines.  The data type line is in the format:

    [fieldname:typename],[fieldname:typename],...,[fieldname:typename]

    For example:
    [name:string],[zipcode:int],[orderdate:date],[ordervalue:decimal]

    You must use a decorator pattern.  The decorator must be constructed using a builder pattern that consumes the data type line.  The output is a file in XML format:


    <file>
       <row><name>Joe Black</name><zipcode>90210</zipcode>... </row>
    </file>

    Any row that doesn't match the specification will not produce an output line.  The output will pick up with the next line.  The file, when done, must be well-formed.

    Of course, with a kata, the only thing produced at the start is the set of unit tests (and perhaps, in the interest of time, the frame of the classes from a model).  The rest is up to the participants.
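
    To make "the frame of the classes" a bit more concrete, here is one possible shape for it, sketched in C#.  Every name below is hypothetical; this is just one way to slice the kata, not the answer.

    // A rough sketch of the class frame; all names are hypothetical.
    public interface IRowWriter
    {
        // Returns the XML fragment for one CSV row; throws FormatException
        // when a field does not parse against its declared type.
        string WriteRow(string[] fields);
    }

    // The innermost component: an empty row.
    public class EmptyRowWriter : IRowWriter
    {
        public string WriteRow(string[] fields) { return ""; }
    }

    // Each decorator handles exactly one field and delegates the rest of the row.
    public abstract class FieldDecorator : IRowWriter
    {
        private readonly IRowWriter inner;
        private readonly string name;
        private readonly int index;

        protected FieldDecorator(IRowWriter inner, string name, int index)
        {
            this.inner = inner;
            this.name = name;
            this.index = index;
        }

        public string WriteRow(string[] fields)
        {
            return inner.WriteRow(fields)
                 + "<" + name + ">" + Parse(fields[index]) + "</" + name + ">";
        }

        protected abstract string Parse(string raw);
    }

    public class IntFieldDecorator : FieldDecorator
    {
        public IntFieldDecorator(IRowWriter inner, string name, int index)
            : base(inner, name, index) { }

        protected override string Parse(string raw)
        {
            return int.Parse(raw).ToString();   // FormatException on bad data
        }
    }

    // The builder consumes "[name:string],[zipcode:int],..." and assembles the chain.
    public static class RowWriterBuilder
    {
        public static IRowWriter Build(string typeLine)
        {
            IRowWriter writer = new EmptyRowWriter();
            string[] specs = typeLine.Split(',');
            for (int i = 0; i < specs.Length; i++)
            {
                string[] parts = specs[i].Trim('[', ']').Split(':');
                switch (parts[1])
                {
                    case "int": writer = new IntFieldDecorator(writer, parts[0], i); break;
                    // "string", "date", and "decimal" decorators follow the same shape.
                }
            }
            return writer;
        }
    }

    In a dojo, the unit tests would drive out the remaining decorators, the row-skipping rule, and the <file>/<row> wrapping.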

    Comments are welcome, of course.

  • Developing a standard for naming web services within the enterprise

    • 1 Comment

    Sometimes, the first person to speak up, and point out a problem, gets to be involved in solving it.  I find that cool.  (Call me crazy).

    Last year, I decided to try to get various folks within Microsoft IT to discuss a naming standard for web services that could be used across the enterprise.  My attempt didn't get a lot of notice and fell pretty silent.  However, the issue woke up recently, now that web services are starting to deploy, in production, across the enterprise.  Folks want their namespaces to be right, because changing a namespace later often means that the client app has to be recompiled (or someone gets to edit the WSDL file).
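
    To illustrate the kind of convention under discussion, here is a purely hypothetical example: the namespace identifies the organization, the business capability, and a version date, and never names a machine, environment, or project codename, so it can stay stable as the service moves through its lifecycle.  (The URI and service below are invented for illustration.)

    using System.Web.Services;

    // Hypothetical naming convention: organization / business capability / version date.
    [WebService(Namespace = "http://schemas.contoso.com/finance/invoicing/2005/06")]
    public class InvoicingService : WebService
    {
        [WebMethod]
        public string Ping() { return "ok"; }
    }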

    So, traction is starting to develop.  I am hopeful.  I'll post progress here...

  • A Case For and Against the Enterprise Library

    • 16 Comments

    I've been an architect for a while now, but, as far as being an architect within the walls of Microsoft, today was day one.

    Already, I've run into an interesting issue: when is it better to forgo the code of the Enterprise Library and roll your own, versus using the existing code?

    Roll your own what?  Well, the MS Enterprise Library is a set of source code (in both VB.Net and C#) that provides an infrastructure for business applications.  The "blocks" that are provided include: caching, configuration, data access and instrumentation, among others.

    I know that many people have downloaded the application blocks.  I don't know how many people are using them.  I suspect far fewer.

    I took a look at the blocks myself, and my first impression: unnecessary complexity.  Big time.  This is what comes of creating a framework without the business requirements to actually use it.  To say that the code has a high "bus" factor is a bit deceptive, because along with the code comes ample documentation that should mitigate the difficulty that will inevitably come from attempting to use them.

    On the other hand, the code is there and it works.  If you have a project that needs a data access layer, why write a new one when a perfectly workable, and debugged, application block exists for it? 

    Why indeed.  I had a long discussion with a developer today about using these blocks.  I will try to recount each of the discussion points:

    1. The blocks are complex and we only need to do something simple.  True: the blocks are complex, but the amount of time needed to write something simple yourself is FAR greater than the amount of time needed to understand how to configure the blocks.  If you look at simple project impact, using something complex is still less expensive than writing something simple.
    2. We don't know these application blocks, so it will take time to learn.  True: but if you write new code, the only person who knows it, when you are done, is you.  Everyone else has to read the documentation.  You'd be hard pressed to come up with better documentation than the docs delivered with the application blocks (a sketch of typical usage follows this list).
    3. The code we write will meet our needs better because we are doing "special" stuff.  False: the stuff that is done in the application blocks is pure infrastructure.  As an architect, I carry the mantra: leverage existing systems first, then buy for competitive parity, and lastly build for competitive advantage.  You will not normally provide your employer with a competitive advantage by writing your own infrastructure code.  You are more likely to get competitive advantage by using the blocks, since they are less expensive and provide capabilities right out of the box.
    4. We don't need all that code.  True.  Don't use the functionality you don't need; the cost of ignoring it is very low.  More importantly, writing your own code means debugging your own code.  If you leverage the code that is there, you will not have to debug it.  That saves buckets of time.
    5. Our code can be tuned and is faster than the code in the Enterprise Library.  The code in the Enterprise Library is tuned for flexibility, not speed.  This is true.  However, when you first write your own code, it is slow.  It gets faster when you tune it.  Why not jump right to the tuning step?  Put in the EL for the component you are interested in, run a stress test against it, and fine-tune the code to speed it up.  You have unit tests already in place to prove that your tuning work won't break the functionality (highly valuable when doing perf testing). 
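
    As promised above, this is roughly what a call through the Data Access block looks like.  It is a sketch from memory; the exact class and method names shift between Enterprise Library releases, and the table and query are invented, so treat it as illustrative rather than authoritative.

    using System;
    using System.Data;
    using Microsoft.Practices.EnterpriseLibrary.Data;

    public class CustomerList
    {
        public static void Print()
        {
            // The connection string and provider come from configuration, not code.
            Database db = DatabaseFactory.CreateDatabase();

            using (IDataReader reader = db.ExecuteReader(
                CommandType.Text, "SELECT CustomerId, Name FROM Customers"))
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["CustomerId"], reader["Name"]);
                }
            }
        }
    }

    The equivalent hand-rolled version would still have to open and dispose the connection, read the connection string from somewhere, and be debugged, which is the point of argument 4 above.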

    Please... can someone else come up with any better arguments for NOT using the application blocks in the Enterprise Library?  I'm not seeing one.

     

  • Preparing for Indigo -- an addition

    • 0 Comments

    Craig McMurtry, in his recent posting on Indigo, describes a couple of different scenarios for folks who are developing software today with an eye toward the impending release of Indigo.  It is a valuable article and quite interesting.  However, Craig missed the integration scenario completely, which is unfortunate.

    The scenarios covered:

    • Web Application: Use WSE-secured web services to support Ajax controls.  No other use of communication protocols is recommended.
    • Rich Client (with local db): Use MSMQ for data replication.  Use WSE-secured web services to support synchronous calls.
    • Data collection from devices (slow is OK): Use MSMQ for data transmission.  Use WSE-secured web services to support synchronous calls.
    • Data collection from devices (speed required): Use the Indigo beta bits for communication instead of remoting.

    This is useful if all apps are islands and never need to share data with one another.  Unfortunately, that is not the world I live in.  As an architect, it is my responsibility to ensure that applications are created with data integration built in.

    The primary patterns of data integration are somewhat technical but, aligned with Craig's approach, they fall into the following scenarios:

    • Data source system shares domain data.  The considerations are: security of access to the data source system, whether data can be provided reliably by the data source system directly, and whether synchronous lookup of data would cause an impact on the performance of the data source system.  Patterns are needed for situations where the data source system can (and should) provide data in individual records, in ad-hoc lookup sets, and in time-related (usually daily) batch deliveries.
    • System of Record generates business event.  The considerations are: the canonical schema of the business event (see my prior post on business event schemas, and the sketch after this list), a publish and subscribe system for sharing the event, a protocol for accepting notification of the event, and considerations for instrumentation, security, orchestration, and monitoring.
    • Operational Data System shares transactional roll-up information (usually to a reporting system).  The considerations are: the source, version, and age of the data, relationships to known dimension tables, updates to dimension data, data size and load frequency, and security and monitoring.
    • Application calls on service provided by partner application.  This is an interesting (and somewhat dangerous) one.  The considerations are: authorization, authentication, subscription management, service schema versioning, network/firewall issues, availability, reliability, monitoring, and error handling.
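
    To make the business event pattern a little more concrete, here is a hypothetical sketch of a canonical event expressed as a serializable C# type.  The event name, namespace, and fields are invented for illustration; they are not taken from any of the products mentioned above.

    using System;
    using System.Xml.Serialization;

    // Hypothetical canonical business event: every publisher and subscriber in the
    // enterprise agrees on this shape, regardless of the transport that carries it.
    [XmlRoot("InvoiceApproved", Namespace = "http://schemas.contoso.com/events/2005/06")]
    public class InvoiceApprovedEvent
    {
        public Guid EventId;           // unique id, so subscribers can de-duplicate
        public DateTime OccurredUtc;   // when the business event actually happened
        public string SourceSystem;    // the system of record that raised the event
        public string InvoiceNumber;   // the business key, not an internal row id
        public decimal ApprovedAmount;
    }

    Whether the wire is MSMQ, Biztalk, or an Indigo channel, the schema is the contract that outlives the plumbing.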

    I will do some digging to see if I can determine if Indigo has a story for these scenarios or if they are simply covered by SQL Notification Services, Biztalk, DTS, and WSE (respectively).

  • Agile also means fall early and get up

    • 3 Comments

    I was discussing the notion, the other day, that a defect in design may be expensive, but a defect in the fundamental assumptions of a project can be catastrophic.  In other words, if you are doing the thing wrong, you can fix it, but if you are doing the wrong thing, you get to start over.

    So when, in a meeting on Enterprise Architecture, the speaker asked the audience if anyone had ever delivered a project only to find, a short time later, that it failed, I was not surprised when a good percentage of folks raised their hands.  We've all been on projects where we thought we were doing the right thing, and doing it well, only to find out after delivery that we had screwed up.

    One thing that did surprise me, though, was one gentleman who mentioned that he had been on a project that delivered, and he didn't find out for six months that the project had failed... not because he was "out of the loop" or took a very long vacation, but because the customer didn't know that the project was a failure for that long.

    That is scary.

    It occurs to me that I haven't seen anything like that on agile projects.  The hallmark of an agile project is that you stop, OFTEN, and show the results to the customer.  Not the marketing person.  Not the project manager... the customer.  You get feedback.  And you make changes.  Change is embraced, not avoided.

    So, if you are doing the wrong thing, it should be obvious early.  In fact, it could become obvious so early that the team hasn't spent the lion's share of the original funds yet... still time enough to fix something and get back on track.  This is Great!  If you are going to waste money, find out early and stop.  Then, reorient the investment.  It is far worse to develop the entirely wrong application than it is to develop what you can of a good one. 

    That doesn't happen with waterfall projects.  On the other hand, the waterfall project has the dubious advantage of at least delivering the wrong thing.  Teams get rewarded on the quality of the delivery, not the alignment between the delivery and the actual needs.  Developers get gifts and good marks for "getting it done right" but not for "getting the right thing done".  That won't be discovered until later, and then the dev team will deflect the blame to the analysts who collected the requirements.

    And this works against the Agile methods.  Even though Agile methods spend money better, they don't get to that end-date when everyone throws up their hands for joy and says "We Got It Done."  They don't get the prize at the end that people crave: the promotion out of waterfall h_ll.  The right to go home on time.  The plasticine trophy for the windowsill.

    So if you want to know why agile methods aren't fostered more often, or more closely, look no further than the "ship party" that roundly celebrates the delivery of a dead horse. 

    To this end, I propose a new practice for agilists: the kill party... where everyone celebrates when a bad idea is killed before it consumes buckets of shareholders' cash.
