Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

July, 2005

Posts
  • Preparing for Indigo -- an addition

    • 0 Comments

    Craig McMurtry, in his recent posting on Indigo, describes a few scenarios for folks who are developing software today with an eye toward the impending release of Indigo.  It is a valuable article and quite interesting.  However, Craig missed the integration scenario completely, which is unfortunate.

    The scenarios covered:

    • Web application: Use WSE-secured web services to support Ajax controls.  No other use of communication protocols recommended.
    • Rich client (with local db): Use MSMQ for data replication.  Use WSE-secured web services to support synchronous calls.
    • Data collection from devices (slow is OK): Use MSMQ for data transmission.  Use WSE-secured web services to support synchronous calls.
    • Data collection from devices (speed required): Use the Indigo beta bits for communication instead of remoting.

    This is useful if every app is an island and never needs to share data with another.  Unfortunately, that is not the world I live in.  As an architect, it is my responsibility to ensure that applications are created with data integration built in. 

    The primary patterns of data integration are somewhat technical, but aligned with Craig's approach they would fall into the following scenarios:

    • Data source system shares domain data.  The considerations are: security of access to the data source system, whether data can be provided reliably by the data source system directly, and whether synchronous lookup of data would cause an impact on the performance of the data source system.  Patterns are needed for situations where the data source system can (and should) provide data in individual records, in ad-hoc lookup sets, and in time-related (usually daily) batch deliveries.
    • System of record generates a business event.  The considerations are: the canonical schema of the business event (see my prior post on business event schemas, and the sketch after this list), a publish-and-subscribe mechanism for sharing the event, the protocol for accepting notification of the event, and considerations for instrumentation, security, orchestration, and monitoring.
    • Operational data system shares transactional roll-up information (usually with a reporting system).  The considerations are: the source, version, and age of the data, relationships to known dimension tables, updates to dimension data, data size and load frequency, and security and monitoring.
    • Application calls on a service provided by a partner application.  This is an interesting (and somewhat dangerous) one.  The considerations are: authorization, authentication, subscription management, service schema versioning, network/firewall issues, availability, reliability, monitoring, and error handling.
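
    To make the business event scenario a bit more concrete: a canonical schema usually implies a stable envelope that every publisher and subscriber agrees on, independent of the transport.  The sketch below is only an illustration of that idea; the names and fields (BusinessEvent, EventPublisher, eventId, and so on) are my own assumptions, not anything defined by Indigo or by Craig's article.

        // Illustrative sketch only: a canonical business event envelope.
        // Field names here are assumptions for illustration, not a published schema.
        import { randomUUID } from "crypto";

        interface BusinessEvent<TPayload> {
          eventId: string;        // globally unique, lets subscribers de-duplicate
          eventType: string;      // e.g. "Order.Created"
          schemaVersion: string;  // lets subscribers cope with older event versions
          occurredOn: Date;       // when the business fact happened in the source system
          sourceSystem: string;   // the system of record that raised the event
          payload: TPayload;      // the event body, validated against the canonical schema
        }

        // A hypothetical publish-and-subscribe boundary: the system of record depends
        // only on this interface, not on the transport that carries the event.
        interface EventPublisher {
          publish<T>(event: BusinessEvent<T>): Promise<void>;
        }

        // Example: the order system raises an "Order.Created" business event.
        async function onOrderCreated(publisher: EventPublisher, orderId: string, total: number): Promise<void> {
          await publisher.publish({
            eventId: randomUUID(),
            eventType: "Order.Created",
            schemaVersion: "1.0",
            occurredOn: new Date(),
            sourceSystem: "OrderEntry",
            payload: { orderId, total },
          });
        }

    Whether the transport behind that publisher ends up being MSMQ, a WSE-secured web service, or an Indigo channel is exactly the kind of decision the integration story needs to answer.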

    I will do some digging to determine whether Indigo has a story for these scenarios, or whether they are simply covered by SQL Notification Services, BizTalk, DTS, and WSE (respectively).

  • Agile also means fall early and get up

    • 3 Comments

    I was discussing the notion, the other day, that a defect in design may be expensive, but a defect in the fundamental assumptions of a project can be catastrophic.  In other words, if you are doing the thing wrong, you can fix it, but if you are doing the wrong thing, you get to start over.

    So when, in a meeting on Enterprise Architecture, the speaker asked the audience if anyone had ever delivered a project only to find, a short time later, that it had failed, I was not surprised when a good percentage of folks raised their hands.  We've all been on projects where we thought we were doing the right thing, and doing it well, only to find out after delivery that we had screwed up.

    One thing that did surprise me, though, was one gentleman who mentioned that he had been on a project that delivered, and he didn't find out for six months that the project had failed... not because he was "out of the loop" or took a very long vacation, but because the customer didn't know that the project was a failure for that long.

    That is scary.

    It occurs to me that I haven't seen anything like that on agile projects.  The hallmark of an agile project is that you stop, OFTEN, and show the results to the customer.  Not the marketing person.  Not the project manager... the customer.  You get feedback.  And you make changes.  Change is embraced, not avoided.

    So, if you are doing the wrong thing, it should be obvious early.  In fact, it could become obvious so early that the team hasn't spent the lion's share of the original funds yet... still time enough to fix something and get back on track.  This is Great!  If you are going to waste money, find out early and stop.  Then, reorient the investment.  It is far worse to develop the entirely wrong application than it is to develop what you can of a good one. 

    That doesn't happen with waterfall projects.  On the other hand, the waterfall project has the dubious advantage of delivering the wrong thing anyway.  Teams get rewarded on the quality of the delivery, not the alignment between the delivery and the actual needs.  Developers get gifts and good marks for "getting it done right" but not for "getting the right thing done".  That won't be discovered until later, and then the dev team will deflect the blame to the analysts who collected the requirements.

    And this works against the Agile methods.  Even though Agile methods spend money better, they don't get to that end date when everyone throws their hands up in joy and says "We Got It Done."  They don't get the prize at the end that people crave: the promotion out of waterfall h_ll.  The right to go home on time.  The plasticine trophy for the windowsill.

    So if you want to know why agile methods aren't fostered more often, or more closely, look no further than the "ship party" that roundly celebrates the delivery of a dead horse. 

    To this end, I propose a new practice for agilists: the kill party... where everyone celebrates when a bad idea is killed before it consumes buckets of shareholders' cash.

  • Considering: Temporal database relationships

    • 5 Comments

    I suggest that we add temporal foreign keys to relational database design.

    Programs move data.  Databases store data in a consistent fashion.  These different purposes can lead to different organizing principles.  One of the key reasons for Object Oriented structures is to minimize cost and complexity when things change over time.  This creates a temporal relationship between various designs, a relationship that is powerfully supported by object oriented development. 

    However, while programs hold functions that change over time, we don't have a good structure for isolating complexity caused by data that changes over time.  And so I ask the question: how do we begin to create the notion, in the data storage layer, that data can hold a temporal relationship with other data?

    I don't mean the notion that a data record would have a "last updated date".  I mean the notion that a data table may contain a foreign key to another table, where a record keeps the values in the related table that existed on a particular date, even if the data in the related table is later changed. 

    For example: let's say that company A sells products.  Their products are P1 and P2, and they sell for $20 each.  Now, company B places an order for product P1.  In current relational databases, we actively copy the price at the time of the order from the 'products' table to the 'purchase order details' table, because the price could change later and we want to remember the price in effect on the date the order was made.

    However, this is a workaround.  The fact is that the purchase order has a temporal relationship with the products table... a relationship that the relational model cannot express... so we copy fields around.  The decision of what fields to copy belongs to the 'purchase order details' table, and the code becomes more complex because specific fields have to be selected from that table instead of the products table when evaluating a product.  It's a kind of "overlay".  The relationship says: pick fields from the related table unless a field by that name happens to exist in the current table. 
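
    To make the workaround concrete, here is a minimal sketch of what application code typically does today.  It is written in TypeScript rather than SQL purely for illustration, and the table and field names (Product, PurchaseOrderDetail, unitPriceAtOrder) are invented:

        // Minimal sketch of today's workaround: the price is copied from the products
        // record into the purchase order detail at order time, so that later price
        // changes do not rewrite history.  All names are illustrative.
        interface Product {
          productId: string;
          name: string;
          unitPrice: number;        // the current price; may change at any time
        }

        interface PurchaseOrderDetail {
          orderId: string;
          productId: string;        // foreign key to Product
          quantity: number;
          unitPriceAtOrder: number; // the copied field: the price on the order date
        }

        function addOrderDetail(orderId: string, product: Product, quantity: number): PurchaseOrderDetail {
          return {
            orderId,
            productId: product.productId,
            quantity,
            // The temporal fact survives only because we copy it by hand.
            unitPriceAtOrder: product.unitPrice,
          };
        }

        // Evaluating the line item later must remember to read unitPriceAtOrder,
        // not Product.unitPrice -- the "overlay" rule described above.
        function lineTotal(detail: PurchaseOrderDetail): number {
          return detail.quantity * detail.unitPriceAtOrder;
        }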

    Should this relationship be defined by the 'order details' table?  Shouldn't the owner of the data (the products table) decide what columns to expose as "time related" while other columns are not?  I submit that this would put responsibilities where they belong and reduce the complexity of the data systems themselves.

    I suggest an innovative feature for relational databases: the temporal foreign key.  The owner of a table indicates which fields are likely to change frequently and should have their data kept in a temporal structure, while other fields are not temporal (like a relationship with the bill of materials used for new shipments).  Then, when a foreign key is created by placing the product id into the 'order details' table, the date of the relationship is noted.  Temporal field values are fixed at the values in place at that time.  There is no reason to copy fields to another table. 
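
    Since no relational engine I know of offers this, the following is a purely hypothetical sketch of the semantics a temporal foreign key would give us.  Every name in it (ProductVersion, validFrom, orderedOn, resolveTemporal) is invented for illustration; the point is only that the engine, not the application, would pick the version of the temporal fields in force on the date the key was written.

        // Purely hypothetical: what a temporal foreign key would mean.  The engine
        // would keep versions of the fields the table owner marked as temporal, and
        // resolve them as of the date recorded when the foreign key was written.
        interface ProductVersion {
          productId: string;
          validFrom: Date;          // maintained internally by the (hypothetical) engine
          unitPrice: number;        // a field the products table exposes as temporal
        }

        interface OrderDetail {
          orderId: string;
          productId: string;        // temporal foreign key to products
          orderedOn: Date;          // the date captured when the key was created
          quantity: number;
          // note: no copied price column -- the temporal key makes it unnecessary
        }

        // The semantics of joining order details to products under a temporal key:
        // pick the product version in force on orderedOn.
        function resolveTemporal(history: ProductVersion[], detail: OrderDetail): ProductVersion | undefined {
          return history
            .filter(v => v.productId === detail.productId && v.validFrom.getTime() <= detail.orderedOn.getTime())
            .sort((a, b) => b.validFrom.getTime() - a.validFrom.getTime())[0];
        }

        function lineTotalAsOfOrder(history: ProductVersion[], detail: OrderDetail): number {
          const product = resolveTemporal(history, detail);
          if (!product) {
            throw new Error("no product version was in force on the order date");
          }
          return detail.quantity * product.unitPrice;
        }

    Under the proposal, that resolution step would live inside the engine and be declared by the products table itself, rather than re-implemented in every consumer.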

    The code in the calling systems would be much simpler, as would the designs of the databases themselves.  The complexity of the database engine would increase, but no more so than for other forms of referential integrity. 

    It is time to consider this kind of relationship as an innovation to the now 30-year-old basic notions of relational databases. 

  • Atlas = Ajax = asp.net 2.0 script callbacks and more

    • 3 Comments

    The marketplace of ideas is an amazing place.  When Microsoft came up with the notion of Remote Scripting (many years ago), the Netscape folks scoffed.  At the time, folks looked at MS and said, "This is a war, and I won't use a feature from the big bad wolf!"  The notion of asynchronously updating part of a web page, while powerful, lay dormant for years.

    Sure, IE kept the feature alive, but few folks used it.  Then, as soon as the Mozilla/Firefox folks decided to embrace the notion, it became safe for the public to use.  Only then was it "cross platform."  Alas, the key was not to add the feature to our browser, but to add it to every browser.  (Interesting.)

    The success of Gmail, and a marketing campaign by a consulting company, have led to some visibility.  There's a new marketing term for this long-existing technique: Ajax.  Nice name.  Marketing, they get.
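
    At the browser level the technique itself is small: issue an asynchronous request from script and patch the response into one region of the page.  A minimal sketch follows; the URL and element id are placeholders, and real pages wrap considerably more plumbing around this core.

        // Minimal sketch of the technique now called Ajax: fetch data asynchronously
        // and update one region of the page without a full postback.
        // "/quotes/latest" and "quotePanel" are placeholder names for illustration.
        function refreshQuotePanel(): void {
          const request = new XMLHttpRequest();
          request.open("GET", "/quotes/latest", true); // true = asynchronous
          request.onreadystatechange = () => {
            // readyState 4 means the response is complete; status 200 means OK
            if (request.readyState === 4 && request.status === 200) {
              const panel = document.getElementById("quotePanel");
              if (panel) {
                panel.innerHTML = request.responseText; // update only this region
              }
            }
          };
          request.send();
        }

    The framework support mentioned below is largely about wrapping this plumbing in something more convenient.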

    The great thing for MS platform developers: just as the term is gaining steam, Microsoft will release ASP.NET 2.0, which looks to have built-in support for it.  The product groups have come up with a competing name: Atlas.

    So, special thanks to Jesse James Garrett for publicizing a feature of our new platform.  If you want to know more about implementing Ajax, both in ASP.NET 2.0 and in .NET 1.1, see this article by Dino Esposito on the MSDN site:

    http://msdn.microsoft.com/msdnmag/issues/04/08/CuttingEdge/

    If you want to know more about Atlas, see this blog entry from scottgu:

    http://weblogs.asp.net/scottgu/archive/2005/06/28/416185.aspx

    It is nice to be ahead of the curve.
