Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

May, 2009

  • Inside Architecture

    Test yourself: 25 most dangerous security programming errors


    The SANS Institute has published a list of the top 25 most dangerous programming errors.  Not only is this a must-read, but it is critical for architects, developers, and testers of all stripes to be aware of these programming errors.  Unless and until we have platforms that simply prevent these errors, we can best combat these security gaps through education, careful testing, and responsible project delivery practices.

    How familiar are you with these mistakes? 

    Would you be able to spot them in code you reviewed? 

    Would you be able to prevent them in your own code? 

  • Inside Architecture

    Why “good” doesn’t happen


    In a blog post titled “what good looks like,” Alan Inglis detailed a list of “project architectural artifacts” that he wishes previous architects had left behind when a project was completed. 

    In his list, Alan details 10 artifacts (actually 14, if you use the ZF to catalog them) that he suggests should be created.  His advice is interesting, but there is a flaw in the logic.  I’ll examine his suggestion from a couple of viewpoints, to illustrate why I believe that his advice is incomplete, and to offer a suggestion for completing it.


    It is normal, when we begin a project, to detail out the things that we wish we had.  Therefore, we “should” create them for the next person.  Viewpoint 1: at the beginning, looking forward, defining project requirements. 

    It is also typical to open up a maintenance project and need to make changes, only to find yourself wondering about the choices made by the person before you.  Viewpoint 2: in the middle, looking back, trying to understand.  (In my past dev teams, we called this process an “archeology expedition.”)

    The problem with his list of artifacts (which is quite comprehensive) is that “wishing” does not constitute a requirement.  If I create an artifact “for the future” that does not mean that the people, in viewpoint 2, will use it. 

    Unless there is a built-in development process that REQUIRES architects and developers to visit a repository and withdraw relevant documents, you have no business justification for performing the task.

    I question every requirement that has no business justification, especially if it is not tied to a business process.  This is easily fixed: tie the documentation ‘requirement’ to a business process… the process of designing architecture.


    People in “viewpoint 2” should have the requirement of looking things up, in a specific place, for a specific reason.  We need to carefully describe the processes around this “learning” phase.  Why would we look things up?  What things would we look up, and most importantly, what are the triggers or scenarios in which a lookup process is required?

    Let’s draw the requirements for documentation from that development process… not from a wish list.

    For example: If I believe that it will be useful to create a list of terms (glossary), let’s understand the scenarios where a list of terms would be useful. 

    • Would it be on THIS application, or any application tied to the same business unit? 
    • What do I need to know about a term? 
    • If I look up a term, would it be natural to link that term to the conceptual data model (if applicable)? 
    • What about changes in the meaning or use of the term over time? 
    • If a term has changed, or an older term is no longer used, what guidelines must be followed to update that glossary? 
    • Should we keep older definitions, so that people inspecting older code can understand the code? 
    • Should we leave advice to those poor souls for how to interpret an older term in the context of a newer practice or process? 
    • Assuming many projects would use the same glossary, should we tie each term to the project or app that needed it?  (That way, a term won’t be deleted if a system that needs it is still running in production).
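
    The questions above describe a data model, not a flat document.  As a purely hypothetical sketch (all class and field names invented for illustration), a glossary entry that keeps superseded definitions for readers of older code, and tracks which running systems still depend on a term, might look like:

```python
from dataclasses import dataclass, field


@dataclass
class Definition:
    """One definition of a term, retained even after it is superseded."""
    text: str
    effective_from: str   # e.g. "2009-05"
    superseded: bool = False


@dataclass
class GlossaryTerm:
    """A business term, its history, and the systems that still use it."""
    name: str
    definitions: list = field(default_factory=list)
    used_by: set = field(default_factory=set)  # apps/projects referencing it

    def redefine(self, text: str, effective_from: str) -> None:
        # Keep older definitions so people inspecting older code
        # can still understand it; only mark them superseded.
        for d in self.definitions:
            d.superseded = True
        self.definitions.append(Definition(text, effective_from))

    def current(self) -> str:
        return next(d.text for d in self.definitions if not d.superseded)

    def can_retire(self) -> bool:
        # A term is deletable only when no running system depends on it.
        return not self.used_by
```

    The point of the sketch is the last method: tying each term to the projects that need it is what keeps a term from being deleted while a dependent system is still in production.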


    I picked on the glossary, but EVERY ONE of the artifacts that Alan lists would have this problem.  Each describes some tiny part of a much larger ecosystem of information.  The moment the project goes into production, the artifacts must take their place among the hundreds of other relevant documents, from every other project.  They need to be findable, consistent, and AUTOMATICALLY linked together in a way that minimizes the “archeology expedition” that “viewpoint 2” implies.

    This is no longer a “project” problem.  This is an enterprise problem.  The data describes part of the architecture of the enterprise, and as such, needs to be maintained at the enterprise level, for the sake of engineering. 

    As Leo de Sousa pointed out in his reply to Alan’s post, a repository is required.  But it won’t be a simple one, where we drop documents.  No, it will be a complex thing that understands what each architectural element is, how to find it, and how to link it to other elements, so that the artifact of the present doesn’t become the archeology of the future.
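
    To make the idea concrete, here is a minimal sketch (names and structure invented, not any real product) of a repository that stores typed elements and the links between them, so that a lookup surfaces related artifacts instead of requiring an archeology expedition:

```python
from collections import defaultdict


class ArchitectureRepository:
    """Hypothetical sketch: typed architectural elements plus the
    links between them, so artifacts stay findable and connected
    after the project that produced them is done."""

    def __init__(self):
        self.elements = {}             # element id -> (kind, payload)
        self.links = defaultdict(set)  # element id -> linked element ids

    def add(self, elem_id: str, kind: str, payload: str) -> None:
        self.elements[elem_id] = (kind, payload)

    def link(self, a: str, b: str) -> None:
        # Links are bidirectional: finding either element surfaces the other.
        self.links[a].add(b)
        self.links[b].add(a)

    def related(self, elem_id: str, kind: str = None) -> list:
        """Everything linked to an element, optionally filtered by kind."""
        return sorted(e for e in self.links[elem_id]
                      if kind is None or self.elements[e][0] == kind)
```

    For example, a glossary term could be linked to an entity in the conceptual data model, so that looking up either one naturally leads to the other.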

  • Inside Architecture

    Make IT appear as simple as possible, but not simpler


    Sometimes I hear a complaint from an IT architect who wants to have direct conversations with “the business” or “the customer” but, for some reason (usually bureaucratic), they cannot.  There is a team of analysts or project managers that they are supposed to talk to. 

    The original objective of having “layers” of people is to make IT appear simple.  We all agree that business constituents can become confused if they are dealing with a long list of people from IT, each of whom has different concerns.  In the worst-case scenario, a business user reaches out to an analyst to report a software error, and the ‘problem’ gets handed off from person to person, adding time and confusing the user.

    Many companies favor the “Single Point of Contact” approach. For each business unit, there is a single point of contact for all projects.  There may still be one more point of contact for “support” related concerns.  But that is all.  This hides some of the complexity from the business customer, but adds a layer between IT and the customer.


    So where does the IT architect fit?  Does it depend on the type of architect?  Does the enterprise architect need to have direct conversations with business stakeholders? 

    What about solution or platform architects?  Should they be talking “directly to the business?” 

    It would seem obvious that business architects should, but how do business architects relate to business analysts?  There’s still the support side as well.  Does each application have its own support contact?  What happens when one application has the right data, and the next one over has the wrong data… who should the customer call?

    So we have a problem.  That much is clear.  How to solve it?

    I’d like to consider introducing a concept into the conversation: interdisciplinary teams.

    The notion of an interdisciplinary team is not widely used in computing, but there are many examples in science.  Used widely in research, medicine, and public policy, interdisciplinary teams provide a way for specialists in many fields to work together to solve a problem.  Any problem can be addressed from many viewpoints, using an understanding that emerges from the unique combination of talent and responsibilities.

    Many of the processes for collecting and describing requirements, including the well-understood “Joint Application Development” or JAD process, incorporate the same basic ideas, but do so in a less structured manner and only for a single “problem” (understanding requirements). 

    What I’d like to see done is to use the concept more consistently.  For Information Technology, and for consulting, this is quite doable.  Instead of having a single person represent IT to the business, have a team of people.  They meet the business on a monthly basis, and the concerns of each of the people can be brought to the monthly meeting.  All of this is coordinated by a single “IT Engagement Manager” or “IT Relationship Owner.”  However, unlike the bureaucratic processes we see in some companies, there are a few rules that apply.

    The interdisciplinary team will have predefined roles.  The list of roles cannot be reduced by either the IT engagement manager or business stakeholder.  One person can fill more than one role.  However, the IT engagement manager does not assign IT staff to those roles.  That is up to IT leadership to do.

    This kind of interdisciplinary structure can allow a more direct flow of information, communication, and shared commitments than is possible with the “single point of contact” model.  At the same time, the business stakeholders don’t get randomized by multiple requests for the same information or by the miscommunication that comes from collecting different information at different times in different contexts to apply to the same problem.

    In many ways, using a single point of contact is an attempt to make the relationship between IT and the business simple.  It is too simple… to the point of ineffectiveness.  I believe that a broader approach is often a better one.

  • Inside Architecture

    Architecture makes Agile Processes Scalable


    As many of you may know, Microsoft has a vocal and thriving Agile Software Development community.  Recently, on our community forum, a question appeared about the ability of Agile development to “scale” to a large team.  In other words, if we can make agile development practices work in a dev group with hundreds of people, can we make it work in a dev group with thousands of people?

    There was a lot of discussion on the alias.  Much of it focused on process improvements: e.g., how to create a scrum of scrums, and how to automate test and build processes so that large systems can be integrated continuously.  That is part of the answer.

    However, one quote, from a seasoned engineering leader, Nathan McCoy, who joined Microsoft as part of our acquisition of aQuantive, provides a real clue to the rest of the answer.

    The answer is yes, agile can scale to larger systems...  Here’s the quote:

    When we were in waterfall mode, we tended to batch up our releases.  They were complicated to plan and manage.  We burned people out on death march projects that culminated in release weekends where we would work 72 hours with little sleep and little contact with our families. 

    We turned to agile engineering practices – in my case, not simply because I believed it would be a panacea, but rather it gave me a whole arsenal of techniques to make improvements, techniques that built on engineering practices that made a lot of sense to me.

    We evolved away from the big batch release by decoupling on component boundaries, putting in services, adding contracts and other techniques mentioned often on this [forum], to the place where we have not done such a big batch release weekend in years.

    Let’s look at that for a minute. 

    Nathan McCoy is talking about a painful deployment process that could not scale.  Early on, deployment to live servers would take hours, but as the code complexity and number of customers grew, hours turned to days.  Deployment suffered.  People suffered.  Quality suffered. 

    This team turned to agile techniques, and solved their scalability problem.  They did it with decoupling, interfaces, and services.  They did it with architecture.

    The real lesson is this: using architecture allowed an agile team to decouple various parts of a system, which enabled agility to go further.  In other words, the success of the agile project depended on the addition of architecture, at the right time, in the right manner.  The problem could not be solved by agile processes alone. 
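
    Decoupling on component boundaries can be illustrated with a small sketch (the contract and class names below are invented for illustration, not drawn from the aQuantive systems).  The idea is that consumers depend on a contract rather than a concrete component, which is what lets each side be rebuilt and released independently, without a big batch release weekend:

```python
from abc import ABC, abstractmethod


class AdServingContract(ABC):
    """Hypothetical contract on a component boundary.  Consumers depend
    on this interface, not on any concrete implementation, so each side
    can be rebuilt and released on its own schedule."""

    @abstractmethod
    def select_ad(self, placement: str) -> str: ...


class LegacyAdServer(AdServingContract):
    def select_ad(self, placement: str) -> str:
        return f"legacy-ad-for-{placement}"


class NewAdServer(AdServingContract):
    def select_ad(self, placement: str) -> str:
        return f"targeted-ad-for-{placement}"


def render_page(ads: AdServingContract) -> str:
    # The caller is coupled only to the contract; either server can be
    # deployed behind it without a coordinated release of both components.
    return ads.select_ad("homepage")
```

    Swapping `LegacyAdServer` for `NewAdServer` requires no change to `render_page`: that is the architectural decoupling that made independent, incremental releases possible.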

    They solved their problem, in an agile environment, using agile architecture.  What makes it agile architecture?

    • The architecture was introduced through refactoring. 
    • The architecture supports a specific business problem, and the minimum amount was applied to solve the problem.
    • The architecture was not described in a 200 page document beforehand.  It was designed in small increments and expressed directly in code.
    • The practice emphasized all of the principles of the agile manifesto: working code, delivered at a sustainable pace, in small quantities, with direct customer involvement, using the best practices available.

    My conclusion is threefold:

    1. Solution architecture can be applied in an agile manner. 
    2. As the solutions get larger or teams grow in size and scope, Agile practices alone are not sufficient to solve every problem.  For some problems, architecture is required. 
    3. Therefore: solution architecture is a necessary and critical skill for agile project teams to master.

  • Inside Architecture

    When Johnny comes marching home medicated


    Off topic for architecture… but startling nonetheless:

    In other words, thousands of American fighters armed with the latest killing technology are taking prescription drugs that the Federal Aviation Administration considers too dangerous for commercial pilots.


  • Inside Architecture

    Are we ready to prove the “Architecture hypothesis?”


    Science progresses on the willingness to take widely held beliefs and test them.  It is one thing to say that a medicine will “cure all that ails ye” but it is another thing altogether to prove that a particular medicine will have a particular effect on health.  Proof is expensive, but science does not march forward without it.

    For quite a while, our team has been working diligently to increase the use of architectural models (and architectural thinking) among our IT units.  There are thousands of employees engaged in developing software in MSIT, and it is not always easy to reach a wide audience and make a compelling case.  Our arguments have been based on appeals to logic (clearly, it works), metaphor (it works in other areas of engineering) and anecdotal evidence (it worked on project Foo and see the benefits they got). 

    But at the end of the day, we have not been using science.  We never ran a true scientific study to demonstrate that the use of a particular architectural practice improved the outcome of software engineering in a specific and measurable way, compared to a project that did not use architecture.

    I’d like to see us, as an industry, get to the point where we can perform a scientific experiment on the efficacy of using software architecture practices to improve software development outcomes.

    We’d need a protocol for the experiment, and controlled variables, to reproduce the effects of using architecture.  We’d need a specific definition of what it means to “use architecture” and the specific practices that we believe will have an impact on outcomes.  We’d need a clear way to measure the outcome of the experiment so that, when repeated, anywhere in the world, the experiment could produce comparable results. 
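
    Whatever outcome measure the protocol settles on, comparing the two groups comes down to a standardized difference.  As an illustrative sketch only (the metric, numbers, and sample sizes below are invented; the real ones would come from the experiment design), here is how Cohen’s d could summarize the gap between projects that used the architectural practice and projects that did not:

```python
import statistics


def cohens_d(group_a: list, group_b: list) -> float:
    """Standardized difference between two groups of project outcomes.
    Positive means group_a scored higher than group_b on the metric."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation of the two samples.
    pooled = (((n_a - 1) * statistics.variance(group_a)
               + (n_b - 1) * statistics.variance(group_b))
              / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled


# Invented numbers: defects per KLOC for projects with and without
# the architectural practice under test.
with_arch = [2.1, 1.8, 2.4, 1.9]
without_arch = [3.5, 3.1, 3.8, 3.0]
```

    A repeatable experiment run in many settings would report exactly this kind of standardized, comparable statistic, which is what makes results from 1,000 different organizations aggregable.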

    And here’s the rub: I’d like to see it be something that YOU can help with.  The community of architects and developers that care about things like SOA, Architecture, and Planning… we are the people who can perform the experiment in 1,000 settings around the world.  We are the ones that can prove, or disprove, the hypothesis of software architecture.

    If anyone has experience with this kind of experiment, I’d love to hear from you, just to see if this idea is too difficult for the community to do.  Personally, I’m not sure where to start.  If you have ideas, please reach out to me.
