Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

November, 2006

Posts
  • Should our next generation of languages require us to declare the applications' architecture?

    • 10 Comments

    As languages 'improve' over time, we see a first principle emerge:

    Move responsibility for many of the 'good practices' into the language itself, allowing the language (and therefore the people who use it) to make better and more consistent use of those practices.

    With assembler, we realized that we needed a variable location to have a consistent data type, so in comes variable declaration.  We also want specific control structures like WHILE and FUNCTION.  As we moved up into C and VB and other 3GLs, we started wanting the ability to encapsulate, and then to create objects.  OO languages emerged that took objects into account.

    Now that application architecture is a requirement of good application design, why is it that languages don't enforce basic structural patterns like 'layers' and standard call semantics that allow for better use of tracing and instrumentation?  Why do we continue to have to 'be careful' when practicing these things?

    I think it may be interesting if applications had to declare their architecture.  Classes would be required to pick a layer, and the layers would be declared to the system, so that if a developer accidentally broke his own rules, for example by having the U/I call the data access objects directly instead of going through the business objects, he or she could be warned.  (With constructs to allow folks to override these good practices, of course, just as today you can create a static class that gives you, essentially, global variables in an OO language.)
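
    To make that concrete, here is a minimal sketch, in Java, of what declaring layers might look like.  The @Layer annotation, the class names, and the allowed-calls table are all invented for illustration; a real language or toolchain would enforce the rules at compile or build time rather than in a main method.

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import java.util.Map;
        import java.util.Set;

        // Each class declares which layer it belongs to.
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.TYPE)
        @interface Layer { String value(); }

        @Layer("DataAccess")
        class ContractRepository {
            String load(int id) { return "contract " + id; }
        }

        @Layer("Business")
        class ContractService {
            private final ContractRepository repository = new ContractRepository();
            String describe(int id) { return repository.load(id).toUpperCase(); }
        }

        @Layer("UI")
        class ContractPage {
            private final ContractService service = new ContractService();
            void render(int id) { System.out.println(service.describe(id)); }
        }

        class LayerRules {
            // The declared architecture: which layer may call which.
            static final Map<String, Set<String>> ALLOWED = Map.of(
                    "UI", Set.of("Business"),
                    "Business", Set.of("DataAccess"),
                    "DataAccess", Set.of());

            // Flag calls that reach into a forbidden layer (here just a warning
            // printed at run time; a real toolchain would fail the build).
            static void check(Class<?> caller, Class<?> callee) {
                String from = caller.getAnnotation(Layer.class).value();
                String to = callee.getAnnotation(Layer.class).value();
                if (!ALLOWED.get(from).contains(to)) {
                    System.err.println("Architecture violation: " + from + " -> " + to);
                }
            }

            public static void main(String[] args) {
                check(ContractPage.class, ContractService.class);    // U/I -> Business: allowed
                check(ContractPage.class, ContractRepository.class); // U/I skipping the business layer: flagged
            }
        }

    The point is only that the rules live in one declared place, where a compiler or build step could check them, instead of living in the developer's memory.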

    What if an application had to present its responsibilities when asked, in a structured and formal manner?  What if it had to tie to a known hierarchy of business capabilities, as owned by the organization, allowing for better maintenance and lifecycle control?

    In other words, what would happen if we built into a modern language the ability of the application to support, reflect, and defend the solution architecture?
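
    The 'present its responsibilities when asked' idea could be as simple as the following sketch.  The DescribesItself interface and the capability names are invented; the point is only that the answer comes back structured, tied to an organization-owned capability hierarchy, rather than as prose in a document.

        import java.util.List;

        // An application answers, in a structured way, which business capabilities it serves.
        interface DescribesItself {
            List<String> businessCapabilities();
        }

        class ContractIntakeApp implements DescribesItself {
            @Override
            public List<String> businessCapabilities() {
                // Paths into a capability hierarchy owned by the organization.
                return List.of("Contract Management / Intake",
                               "Contract Management / Metadata Search");
            }
        }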

    Maybe, just maybe, it would be time to publish the next seminal paper: "Use of unconstrained objects considered harmful!"

  • Introducing a culture of code review

    • 0 Comments

    I got a ping-back from another blog post written by Jay Wren.  He mentioned that his dev team doesn't have a 'test culture', so he has to play a 'noisemaker' role when challenging bad designs or code.

    I read with interest because, to be fair, code review and design review are not usually done by the test team.  Functional testing is usually where the test team really digs in, although it has healthy input in other places.

    Designers need to 'test' the design, but this is most often done by having the architect create high-level designs and having other team members, including other architects, review them.

    This is, far and away, the most important 'test' of software, in my opinion, but it is too rarely done, especially for code that is written for internal use, as most IT systems are.

    Frequently, there is no architect available. 

    Even if there is a senior person available who can play the role of reviewing architect, what process would they follow?  I'd suggest that any company in this position investigate the ATAM (Architecture Tradeoff Analysis Method) from the SEI.  This is a way of evaluating a design from the standpoint of the tradeoffs that the design accounts for.

    Essentially, the concept is this: each design must meet functional requirements, organizational requirements, and cost/complexity requirements.  By first collecting and prioritizing requirements for reusability, scalability, maintainability, and other 'abilities' (called system quality attributes), you can then evaluate the code to decide if it is 'scalable enough' or 'maintainable enough' to meet the needs.
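
    As a rough sketch of that first step (this is not the SEI's template; the attribute names, priorities, and assessments are all invented), a review might start from nothing more than a prioritized list like this:

        import java.util.Comparator;
        import java.util.List;

        // One row of the prioritized list: what is needed, and what the design offers.
        record QualityAttribute(String name, int priority, String requirement, String designAssessment) {}

        class TradeoffReview {
            public static void main(String[] args) {
                List<QualityAttribute> attributes = List.of(
                        new QualityAttribute("Scalability", 1,
                                "handle 500 concurrent users", "stateless services behind a load balancer"),
                        new QualityAttribute("Maintainability", 2,
                                "add a new report type in under a week", "report logic is hard-coded in the U/I layer"),
                        new QualityAttribute("Reusability", 3,
                                "contract lookup shared by two systems", "exposed as an internal service"));

                // The review walks the list in priority order and asks, for each
                // attribute, whether the design is 'good enough', not whether it is perfect.
                attributes.stream()
                        .sorted(Comparator.comparingInt(QualityAttribute::priority))
                        .forEach(q -> System.out.printf("%d. %s: need [%s]; design offers [%s]%n",
                                q.priority(), q.name(), q.requirement(), q.designAssessment()));
            }
        }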

    This allows a realistic review of a system.  It takes a lot of the 'personality conflict' out of the equation.  There is no perfect software system.  However, if a system's design is a good match for the requirements of the organization that creates it, that's a good start. 

  • Going quiet for a while

    • 0 Comments
    I'll be on vacation for the next 10 days, so don't expect a lot of blogging.  I'll try to take nice photos and post them when I get back.
  • Should an interface be stable when semantics are not?

    • 4 Comments

    I know an architect who is developing an enterprise service for passing contracts from one system to another (the document metadata, not the image).  He knows the needs of the destination system very well, but he defined an interface that is not sufficient to meet those needs.

    The interface describes the subset of data that the source system is prepared to send.  The source system is new, and will be released in iterations.  Eventually, it will send all of the data that the destination system needs.

    In the meantime, it will send only a subset.

    For some reason, he wants the interface to change with each iteration.  And thus, he will re-create the service repeatedly.

    This thinking is typical: define what you need, refactor as you go.  The problem is that it ASSUMES that no one else will ever need to call or use your service.  It assumes that no one else will get to the destination system first.  In short, it assumes that you know everything.

    The justification: we will change the destination system when the source system comes online.  Since we will not change it right now, there is no need to model the interface.
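
    The alternative is to model the full interface up front.  As a sketch (the field names are invented, and this is only one way to express it), the contract would declare everything the destination system will eventually need, and the fields the source cannot supply yet would simply stay empty, so the interface remains stable while the payload grows from one iteration to the next:

        import java.util.Optional;

        // The contract declares the full set of fields the destination will eventually need.
        record ContractMetadata(
                String contractId,              // available from iteration 1
                String counterpartyName,        // available from iteration 1
                Optional<String> renewalTerms,  // planned for a later iteration
                Optional<String> approverAlias  // planned for a later iteration
        ) {
            // What the source system can actually populate today.
            static ContractMetadata iterationOne(String id, String counterparty) {
                return new ContractMetadata(id, counterparty, Optional.empty(), Optional.empty());
            }
        }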

    What do you think?  Should the interface describe all of the data that will eventually be needed, even if neither the source nor the destination system can leverage all of it yet?  Should there be a different interface each time the behavior of the destination system changes?

     
