Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

February, 2008

Posts

    Why can't Open Source be used to fix problems like this?

    • 6 Comments

    I work deep in the heart of Microsoft, and I make no secret of it: I like open source. 

    I won't start some great debate about OSS, or the traditional opposition of my employer to OSS.  I support my company, and I want the stock price to go up, but I do believe that there are some problems that Open Source can solve far better than anyone else can.

    Barry Briggs, on his blog, pointed to a news story in the Seattle Times that is a perfect example.  The story describes the frustration of the local school board: they have finally created an acceptable method for letting students choose their schools and for placing more students in schools close to home, only to find that the old system runs on badly outdated VAX machines and cannot be changed quickly.

    (I briefly worked on VAX systems... 20+ years ago.  I feel their pain.)

    Now, if the school board were to put their requirements out for a community of good people, couldn't that community come together to build a solution for ANY SCHOOL BOARD to use?  Couldn't we make it inexpensive to license, install, and maintain?  Couldn't we make it reliable, easy to support, and quite able to run on modern, inexpensive equipment?

    Why does the Seattle School Board have to spend millions to move this software?  There are tens of thousands of school boards in the USA, and all of them need software to help run operations, including things like managing the assignment of students to schools.  Could a solution be built that is flexible enough to meet the needs of any mid-sized city, and adopted by at least five of them?

    Think of the millions of taxpayer dollars that could be saved!  Even better, the students themselves could participate in writing the systems that their school boards use (with adult supervision and review, of course, to prevent back doors).

    There is a whole class of software that fits neatly in this bucket... software for non-profits, small government agencies, and public utilities.  There is little incentive for large players to write software here, and consulting companies can't charge a lot of fees, so they don't focus a lot on this space either. 

    We can all feel good about participating in these kinds of things. 

    To the Seattle School Board: I'll contribute some architecture work, and even write some code, if you agree to work with an open source community to develop the requirements and build out the system for all school boards to use.


    Enterprise Architecture: Earning our keep

    • 12 Comments

    Does anyone ever ask you to justify what you do?  I've worked in most roles in technical development and management.  Only in Enterprise Architecture do I get that question.  So I'm posing an open question to the EA community: how do you demonstrate value?

    There's no question what a tester is for.  You can point to the consequences of poor testing by showing a large number of defects making it to production.  Poor development practices are harder to measure, but the system quality attributes provide measurements for the software development profession.

    So what is the measurement for EA?  What is the proof that we are needed?

    What fact or problem or number can you point to that says, "This company needs EA"?


    Getting folks organized around the concept of Enterprise Architecture is tough.  You have to show that there is a problem that the company is unable to solve without EA.

    The thing is: the problems that EA can solve are really not that many.  Yes, we can drive strategic alignment, but we aren't the only ones who do that.  In fact, if IT does align with the business, there is no way that EA gets credit for it.  Making IT align with the business usually means making people very angry.  The folks who will benefit are NOT the same as the folks who are hurt.  If IT doesn't align with the business, then it's business as usual.  Either way, EA looks like a waste of money.

    We can "model the enterprise" until the cows come home, but if we do that, we need to answer the question "Why would anyone want a model of the enterprise?"  I work in the Seattle area, where the Boeing company builds most of their commercial airliners.  When Boeing creates a model of a new airplane, it is for a purpose... often they want to test the model, but sometimes it helps to create a visual representation of a new design for other purposes as well.

    So it comes back to the purpose.  What is the purpose for EA in your company?  How do you complete the sentence: "This is the measurement that we are paid to improve"?

    If we have a clear measurable... a clear consequence that occurs when we are NOT here... then the EA function gets a lot easier to justify and sustain.

    I have ideas for what I think it should be, but I'd be interested in what you have to say.  How do you go about demonstrating that EA is a good investment?


    Inversion of control, part two

    • 9 Comments

    I started an interesting thread when I weighed in on the use of IoC and the Dependency Injection pattern a few days back.  Seems I wasn't sufficiently supportive of the concept of lightweight containers to please some of my readers. 

    Should we, the blog community, encourage IT developers to adopt new coding practices? 

    Hmmmm.  Well, if in doing so, we are doing their organizations a favor, I'd say yes.  Increasing their understanding of the tools and the limitations of the environment certainly qualifies.  Encouraging the adoption of standards is generally 'good' as long as you don't add costs.  Encouraging the use of a lightweight container may, or may not, qualify.

    You see, the maintainability of an application depends not on the application, or where the requirements come from, but rather on the organization that must maintain it.  Who are the people, and what do they do?  How close or friendly is the relationship between development and support?  How are fixes funded?

    Understanding the organization, the processes, and the people who will benefit from any new coding technique is the first step to understanding if there is a benefit to using it. 

    So, while my prior post focused on the tradeoffs of IoC in business software, this one focuses on the tradeoffs in development tools and process that impact an organization.

    In general, I'd classify tools and techniques into two buckets: evolutionary and revolutionary.  I'd say the use of complex OO design patterns was revolutionary.  It requires training to understand how to leverage the use of a bridge pattern when maintaining systems.  Other things, like moving up a version of Visual Studio, are more evolutionary.  IoC is complex, and to use it well requires training.  I'd consider it revolutionary.  Quite trainable, but still revolutionary.

    Organizations should strive to find a solid balance between practical efforts to develop good systems, and the availability of talent that can maintain them.  If your support team is not the same group of folks who write the software in the first place, and if they have a higher turnover rate, requiring you to replenish staff from the general programming community, then it is in your best interests to minimize the 'ramp-up' period for a new support developer.  One way to do this is to limit the number of revolutionary shifts between "average developer from the street" and "functional developer in an organization."  Depending on your choices, that may, or may not, include the use of complex pattern-based systems like an IoC container.

    Why do you think RoR is so popular?  Because it hides so much of the complexity, yet surfaces a great deal of power.  Same thing goes for LINQ (in a different way, of course).  When complexity is low, the change is more evolutionary, even if the shift in capabilities is dramatic.

    Contrast the above situation with a different one.  If your support team is a rotating set of folks who both develop new systems and support existing ones, and your body of code in support is not very large, then it makes perfect sense to experiment with many revolutionary changes in coding techniques.  After all, you can train once, and leverage the training across a body of code. 

    The point is that the successful use of any revolutionary technology depends on the organization's ability to swallow it.  If your organization is structured to foster the introduction of new techniques and ideas, then by all means, adopt the appropriate ones, including lightweight containers if that appeals to you.  You can train folks on design patterns in general, and DIP in particular, and you can refactor existing systems to use these patterns.  Hopefully, you will have designed your system interfaces correctly so that you can get the benefit of the easy configurability promised by the IoC coding technique.
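
    As a rough illustration of what that refactoring looks like, here is a minimal sketch of constructor-based dependency injection (in Java rather than C#, and with made-up names like OrderService and Notifier).  The business class depends only on an interface; the concrete implementation is handed to it from the outside, which is the seam that a lightweight container later exploits through configuration.

        // Minimal dependency-injection sketch; all names here are hypothetical.

        // The abstraction that both sides depend on.
        interface Notifier {
            void send(String message);
        }

        // One concrete implementation; a test fake or an SMS notifier could be swapped in.
        class EmailNotifier implements Notifier {
            public void send(String message) {
                System.out.println("Emailing: " + message);
            }
        }

        // The business class no longer says "new EmailNotifier()"; it receives its collaborator.
        class OrderService {
            private final Notifier notifier;

            OrderService(Notifier notifier) {        // constructor injection
                this.notifier = notifier;
            }

            void placeOrder(String item) {
                notifier.send("Order placed for " + item);
            }
        }

        public class Demo {
            public static void main(String[] args) {
                // Wired by hand here; a lightweight container would do this step from configuration.
                OrderService service = new OrderService(new EmailNotifier());
                service.placeOrder("widget");
            }
        }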

    Many of the quality attributes, maintainability and supportability among them, are dependent on people.  So if you honestly evaluate the use of a technique or tool in a particular system, you will need to consider the organization that will own the result.  If they are not ready, it doesn't matter how good the technology is... it will not be useful.

    Software reflects the structure of the organization that builds it. 

    That axiom is as true today as it was when it was coined. 


    Teaching Science with Mythbusters

    • 2 Comments

    I love the Mythbusters.  Not just because I like to watch them blow stuff up.  (I do.)  More importantly... infinitely more importantly... because those two goofy guys are teaching my son to love science, engineering, and experimentation.  Yes, the clips are stripped of any real rigor, but they have made science fun for thousands of teenagers and young adults.

    If just a few of them choose to pursue science, and just a few more support science in their communities, and a few more notice when the film industry bashes science or blames science for the ills of the world, or invents yet another monster because of "immoral science," then we might move the needle just a little bit towards a country where science matters.


    Introducing the "Double Iron Triangle" of Enterprise Architecture

    • 3 Comments

    You have probably heard of the Iron Triangle of project management (Cost, Scope, and Time).  Did you know that Enterprise Architecture has its own iron triangle?  It does, and understanding the EA triangle is a great way to understand and describe the role, mission, and value of Enterprise Architecture.

    By Analogy

    By now, most technologists are familiar with the concept of the Project Management "Iron Triangle": Cost, Scope, and Time.  (I hear that the Project Management Institute calls this the "Triple Constraint.")

    [Image: the project management Iron Triangle of Cost, Scope, and Time]

    These are not really cast in iron.  That implies that they cannot change.  Far from being unchanging, the "Triple Constraint" of project management says that if you change one, at least one of the other two must change.  If, for example, you increase scope, you will need to adjust cost or time (or both).

    That said, I know many creative project managers who conspire to make minor changes in scope or time or cost, and manage, through skill and magic, to avoid affecting the other two.  This is not the three laws of thermodynamics, here.  They can bend.  The Project Management triangle is a concept and a tool.  It helps you to understand the impact of changes, and explain them to the business.

    The Double Iron Triangle of Enterprise Architecture

    There is a triangle in Enterprise Architecture as well, and it has many of the same characteristics.  To be fair, it is not one triangle, but two, one embedded in the other.  The sides are made up of Information, Business Process, and Functionality.  If you change one side, then you have to change the other two.

    [Image: the Double Iron Triangle, with sides of Information, Business Process, and Functionality]

    There are three viewpoints on this triangle (which is one of the things that makes Enterprise Architecture so much fun).  We have largely defined different roles and made them responsible for understanding each of these different viewpoints.  Sometimes, they fight.  Oftentimes, they forget the triangle.  An Information Architect may change the way a business entity is modeled, but in doing so, he or she must then consider the impacts on process and functionality.  (Not to pick on IA.  We are all guilty, at one time or another, of this mistake).

    So why two triangles?

    I use two triangles to illustrate the "spectrum of variance" that exists along each of three axes.  From each of the three viewpoints, there is a difference between "core" and "supporting" models.

    The "core vs. supporting" pair is one that we (Microsoft IT EA) created by combining terms from the works of Geoffrey Moore and Michael Porter.  I'm sure that purists will argue that the terms are not designed to be mixed.  Perhaps.  We went for simple terms that we felt were understandable on their face, and this pair seemed to hit the mark.  (Special thanks to Gabriel Morgan, who did this analysis work for us.)

    Geoffrey Moore, in his book 'Living on the Fault Line', describes how to identify a business's core activities.  In the book, Moore writes: "For core activities, the goal is to differentiate as much as possible on any variable that impacts customers' purchase decisions and to assign one's best resources to that challenge. By contrast, every other activity in the corporation is not core, it is context. And the winning approach to context tasks is not to differentiate but rather to execute them effectively and efficiently in as standardized a manner as possible."

    Michael Porter, in his book "Michael E. Porter on Competition and Strategy", writes: "Primary activities create the product or service, deliver and market it, and provide after-sale support. The categories of primary activities are inbound logistics, operations, outbound logistics, marketing and sales, and support. Support activities provide the input and infrastructure that allow the primary activities to take place. The categories are company infrastructure, human resource management, technology development, and procurement."

    Our use of core vs. supporting is closer to Moore's view than to Porter's, but we differ from both, primarily because each of these esteemed gentlemen was taking one of the three viewpoints in the triangle.  To be truly agnostic of the triangle is difficult.  The closest I can come up with is this (Nick's definition):

    For Core architectural elements, the goal is to describe those aspects of the business that are the closest to the value offered by the business to the customer.  Core information describes things that the customer can see, and core processes affect the customer's experience, while core functionality directly impacts the customer's view of quality.  ("Customer" includes many roles, including partners, investors, and employees, in addition to the traditional meaning).  Therefore, businesses must differentiate themselves from their competition using variations in core elements.  To compete effectively, businesses must master the art of being agile in their core elements.

    Supporting architectural elements provide the support for the core elements.  The goal for these elements is to be as efficient and inexpensive as possible.  Supporting processes should be finely tuned and widely shared.  Supporting information, likewise, must be widely available and unambiguous in meaning.  Supporting functionality should be broadly available in the enterprise and consistently usable in many different ways.  To compete effectively, businesses must master the art of self-discipline to maintain consistency and uniformity in their supporting elements.

    The Spectrum of Variance

    I introduced a term above, 'spectrum of variance'.  I don't believe that there is a clear dividing line, in any of the three architecture disciplines, between core and supporting elements.  Information assets can certainly be supporting (like product information) and yet may have aspects that differentiate the business.  Some processes can be involved with creating or delivering a product, yet aspects of those processes can be supporting in nature.   

    [Image: the spectrum from core to supporting, with stakeholder concerns at different points along it]

    In the image above, I show the spectrum of core vs. supporting, and some of the stakeholder concerns that appear at different levels along the spectrum.

    The impact of the Double Iron Triangle on EA models

    When each of the three disciplines views the triangle, it must recognize both aspects of the triangle:

    1. Recognize that a change in one will probably affect the other two.
    2. Recognize which of their assets (process, functionality, information) fall in various places along that spectrum. 

    It is not enough to create a single view of the processes of an enterprise, or a single all-up information model, or a single application model, without recognizing the nuances added by this spectrum. 

    Alas, this is not an easy thing to do.  There is a lot of complexity involved when you ask a person to be aware of two dimensions at once.  How is an information architect to cope if business processes and applications change all the time, and if he or she also has to keep track of the attempts of the business to differentiate itself from its competition?

    The answer is no different in architecture than it is in OO design: find a way to separate things that change from one another by using stable abstractions.  (Think Strategy Pattern or Bridge Pattern... isolate change).
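
    To make the OO analogy concrete, here is a minimal Strategy-pattern sketch in Java (the names, such as PricingStrategy and Checkout, are invented for illustration).  The supporting code depends only on a stable abstraction, while the differentiating behavior varies freely behind it.

        // Minimal Strategy-pattern sketch; the names are hypothetical.

        // The stable abstraction: supporting code depends only on this.
        interface PricingStrategy {
            double priceFor(double listPrice);
        }

        // Differentiating behaviors vary behind the interface without touching Checkout.
        class StandardPricing implements PricingStrategy {
            public double priceFor(double listPrice) { return listPrice; }
        }

        class LoyaltyPricing implements PricingStrategy {
            public double priceFor(double listPrice) { return listPrice * 0.90; }
        }

        // A supporting process that stays stable while the strategies change.
        class Checkout {
            private final PricingStrategy pricing;

            Checkout(PricingStrategy pricing) { this.pricing = pricing; }

            double total(double listPrice) { return pricing.priceFor(listPrice); }
        }

        public class StrategyDemo {
            public static void main(String[] args) {
                System.out.println(new Checkout(new StandardPricing()).total(100.0)); // 100.0
                System.out.println(new Checkout(new LoyaltyPricing()).total(100.0));  // 90.0
            }
        }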

    And there is the beauty of the double triangle.  If you can settle the supporting triangle into a stable and consistent set of information, process, and functionality, then you can extend those supporting elements in each of the three directions quite independently of one another, and yet still achieve both interoperability and competitive advantage.

    Call to Action

    So take a long look at your models.  Decide whether you differentiate along this spectrum of variance.  Then look at your architectural practices, and examine whether they recognize the interrelatedness of the three sides.

    If you do not have both, consider adopting the double iron triangle.  Consider using it to drive consistency to the center, and drive innovation and differentiation to the edge.  You may just find that you can take your Enterprise Architecture to the next level.


    What is the tradeoff with Inversion of Control (IoC)?

    • 18 Comments

    Recently, I caught wind of a discussion about use or overuse of Inversion of Control and Dependency Injection.  One small team was quite religious about using it, while another was, let's say, a bit more circumspect.  It made me think about where I would put IoC into the pantheon of silver bullets...

    Inversion of Control is a good pattern.  You get more testable components, and it reinforces good design.  IoC does not replace good design.  That said, the problems you can solve with IoC, while important, are a small subset of the really big issues.  It is perfectly appropriate to care about using IoC patterns, and I'm a fan of Spring.Net and Unity.  However, IoC can only solve a small number of problems, and its value is specific to the context in which it is applied.
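
    As a small illustration of the testability point (again in Java, with invented names), a class that receives its collaborator through its constructor can be handed a hand-rolled fake in a unit test instead of the real thing:

        // Minimal sketch of why injected dependencies are easier to test; names are hypothetical.

        interface PaymentGateway {
            boolean charge(String account, double amount);
        }

        class CheckoutService {
            private final PaymentGateway gateway;

            CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

            boolean checkout(String account, double amount) {
                return amount > 0 && gateway.charge(account, amount);
            }
        }

        public class CheckoutServiceTest {
            // A fake stands in for the real gateway: no network, no real charges.
            static class FakeGateway implements PaymentGateway {
                boolean wasCalled = false;
                public boolean charge(String account, double amount) {
                    wasCalled = true;
                    return true;
                }
            }

            public static void main(String[] args) {
                FakeGateway fake = new FakeGateway();
                CheckoutService service = new CheckoutService(fake);
                if (!service.checkout("acct-1", 25.0) || !fake.wasCalled) {
                    throw new AssertionError("checkout should charge the injected gateway");
                }
                System.out.println("CheckoutService test passed");
            }
        }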

    When should it be used, and when should it be avoided?

    All quality attributes must be examined with each effort.  How reliable is the software?  How secure is the software?  How maintainable?  Gabriel Morgan, on his blog, does a good job of describing 12 quality attributes that he thinks are important… and I quote him often.

    To everything, there is a tradeoff.  IoC is very useful for software that is carefully crafted by hand and operates in an environment where limited and secure access is absolutely ensured.  If you have a configuration file that controls the application, and a bit of malware works its way onto the box, a simple change to that config file can inject truly nasty functionality directly into the application.  Imagine that your application is an e-commerce web site, and the malware hijacks credit card info simply by changing the config file, thus inserting itself into the exe!  If the system is to be installed on the client machine, this problem becomes, if anything, worse.  This is because your name is on the "front" of the application, while anyone else's code can be running in the guts, without any way for the user to "remove the add-on."  It may not be acting as an add-on at all.  A malicious change to the config file can replace huge swaths of functionality in a way that can be quite difficult to detect.  So, I'd say that IoC is a maintainability gain with a solid tradeoff against application security.
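
    Here is a minimal sketch (Java, invented names, a hypothetical app.properties file) of the kind of configuration-driven wiring a container performs.  Whatever class name appears in the file is what gets loaded, which is exactly the exposure described above: anyone who can edit that file can point the application at their own implementation.

        // Minimal sketch of configuration-driven wiring; all names and the
        // "app.properties" file are hypothetical.

        import java.io.File;
        import java.io.FileInputStream;
        import java.util.Properties;

        interface PaymentProcessor {
            void process(String cardNumber);
        }

        class SafeProcessor implements PaymentProcessor {
            public void process(String cardNumber) {
                System.out.println("Charging card ending in "
                        + cardNumber.substring(cardNumber.length() - 4));
            }
        }

        public class ConfigWiringDemo {
            public static void main(String[] args) throws Exception {
                // app.properties might contain:  payment.processor=SafeProcessor
                // Change that one line to a hostile class on the classpath and the
                // application will run it, with your name still on the "front."
                Properties config = new Properties();
                File file = new File("app.properties");
                if (file.exists()) {
                    try (FileInputStream in = new FileInputStream(file)) {
                        config.load(in);
                    }
                }

                String className = config.getProperty("payment.processor", "SafeProcessor");
                PaymentProcessor processor = (PaymentProcessor) Class
                        .forName(className)
                        .getDeclaredConstructor()
                        .newInstance();

                processor.process("4111111111111111");
            }
        }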

    IoC also has a complexity tradeoff.  Applications that are wired together with IoC may not be the easiest to understand or debug.  If every single class has to be injected, then you have added a layer of complexity to the coding and debugging effort.  Sure, you can train folks around that, but it is a tradeoff, and one that you have to consider.  I remember a “fad” where every class had to have a factory, because using the ‘new’ keyword was evil.  I also remember IT dev managers complaining that they had to spend double on a maintenance cycle to rip out about 80% of the factories because they added complexity for classes that were never reused.

    I love IoC, but I'm not religious about it.  I use it sparingly, to inject major sections of code into an executable.  I do believe it is very easy to overuse IoC.  Think of it like over-the-counter pain medicine: take a little and a headache goes away, but don't overdose.
