Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

March, 2012

Posts
  • How Enterprise Architects can cope with Opportunistic Failure

You may not think of failure as a desired outcome, and on the surface, failure carries some negative connotations.  It just sounds “bad” for a team to fail.  However, there are times when a manager will INTENTIONALLY fail to meet a goal.  Let’s look at a scenario where a manager may choose to fail, and see whether EA has a role in either preventing that failure or facilitating it.

    What is Opportunistic Failure?

Does your organization manage by objectives and scorecards?  Scorecards and metrics are frequently used management tools, especially in medium and large organizations.  In a Manage-By-Objective (MBO) organization, a senior leader is not told “how” to do something; rather, a negotiation takes place that results in the development of a measurable objective.  The term “measurable objective,” as used here, is a well-defined idea: it is specific, measurable, actionable, realistic, and time-bound (SMART).  A measurable objective is a description of the results that a senior manager is expected to achieve.  In BMM terms, the objective is the “ends” while the senior leader is expected to figure out the “means.”  In business architecture parlance, the objective describes the “what” while the senior leader is expected to figure out the “how.”

Now, in many situations, a senior leader has to rely on other groups to perform, in some way, in order to achieve his or her measurable objectives.  This is quite common.  In fact, I’d say that the vast majority of senior-level objectives require some kind of collaboration between the leader’s own people and the people who work in other parts of the organization (or in other organizations).

    For a small percentage of those dependencies, there may be competition between the senior leader’s organization and some other group, and that is where opportunistic failure comes in.

    The scenario works like this: 

Senior leader has an empowered team.  They can deliver on 30 capabilities.  Someone from outside his or her organization, perhaps an enterprise architect, points out that one of those capabilities overlaps with another team’s.  Let’s say it is the “Product Shipment Tracking” capability.  The EA instructs the senior leader to “take a dependency” on another department (logistics, in this case) for that capability.  The senior leader disagrees in principle, because he has people who do a good job of that capability, and he doesn’t want to take the dependency.  However, he cannot convince other leaders that he should continue to perform the “product shipment tracking” capability within his own team.

    So he contacts the other department (logistics) and sets up an intentionally dysfunctional relationship.  After some time, the relationship fails.  Senior leader goes to his manager and says “we tried that, and it didn’t work, so I’m going to do it my way.”  No one objects, and his team gets to keep the capability.

In effect, the senior leader felt it was in his or her own best interest to fight the rationale for “taking a dependency,” but instead of fighting head-on, he or she pretended to cooperate, sabotaged the relationship, and then got the desired result when the effort failed.  This is called “opportunistic failure.”

    Thoughts on Opportunistic Failure

Opportunistic failure may work in the favor of anyone, even an Enterprise Architect.  However, as an EA, I’ve most often seen it used by business leaders to ensure that they are NOT going to be asked to comply with the advice of Enterprise Architecture, even when that advice makes sense from an organizational and/or financial standpoint.

    In addition, one of the basic assumptions of EA is that we can apply a small amount of pressure to key points of change to induce large impacts.  That assumption collapses in the face of opportunistic failure.  Organizations can be very resistant to change, and this is one of the ways in which that resistance manifests. 

Therefore, while an EA could benefit from opportunistic failure too, I’ll primarily discuss ways to address the potential for a business leader to use opportunistic failure to work against the best interests of the enterprise.

1. Get senior visibility – If a business leader is tempted to use opportunistic failure to resist good advice, get someone two or more levels above that leader to buy in to the recommended approach.  This radically reduces the chance that the business leader will accept the career risk that this kind of failure poses.
    2. Get the underlying managers in that senior manager’s team on board, and even better, get them to agree to the specific measures of progress that demonstrate success.  This creates a kind of “organizational momentum” that even senior leaders have a difficult time resisting.
3. Work to ensure that EA maintains a good relationship with the business party that will come up short if the initiative does fail.  That way, they feel that you’ve remained on their side, and they will call on you to escalate the issue if it arises.
4. Play the game – look for things to trade.  If the senior manager is willing to risk opportunistic failure, you may be able to swing him or her over to supporting the initiative if you can trade away something else that he or she wants… perhaps letting another, less important, concern slide for a year.

    Conclusion

    For non-EAs reading this post: EA is not always political… but sometimes it is.  Failing to recognize the politics will make you into an ineffective EA.  On the other hand, being prepared for the politics will not make you effective either… it will just remove an obstacle to effectiveness. 

  • Time-to-Release – the missing System Quality Attribute

I’ve been looking at ways to implement the ATAM method these past few weeks.  Why?  Because I’m exploring different ways to evaluate software architecture, and I’m a fan of the ATAM method pioneered at the Software Engineering Institute at Carnegie Mellon University.  Along the way, I’ve realized that there is a flaw that seems difficult to address.

    Different lists of criteria

The ATAM method is not a difficult thing to understand.  At its core, it is quite simple: create a list of “quality attributes” and sort them, highest to lowest, into the priority order that the business wants.  Get the business stakeholders to sign off.  Then evaluate the ability of the architecture to perform according to that priority.  An architecture that places a high priority on Throughput and a low priority on Robustness may look quite different from an architecture that places a high priority on Robustness and a low priority on Throughput.
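
    To make that concrete, here is a minimal sketch, in Python, of what a stack-ranked priority list might look like as data.  The attribute names and rankings are illustrative assumptions only; ATAM itself is a workshop method, not a piece of software.

        # Two hypothetical, business-signed-off rankings.  Index 0 is the
        # highest priority; the attribute names are examples, not an
        # official list.
        throughput_first = ["Throughput", "Scalability", "Performance", "Robustness"]
        robustness_first = ["Robustness", "Reliability", "Performance", "Throughput"]

        def outranks(a: str, b: str, ranking: list[str]) -> bool:
            """True if attribute a has higher business priority than b."""
            return ranking.index(a) < ranking.index(b)

        # The same trade-off question gets a different answer under each ranking:
        print(outranks("Throughput", "Robustness", throughput_first))   # True
        print(outranks("Throughput", "Robustness", robustness_first))  # False

    The point of the sign-off is exactly this comparability: once the order is fixed, any design trade-off can be settled by asking which attribute sits higher in the list.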

    So where do we get these lists of attributes?

A couple of years ago, my colleague Gabriel Morgan posted a good article on his blog called “Implementing System Quality Attributes.”  I’ve referred to it from time to time myself, just to remind myself of a good core set of System Quality Attributes that we could use for evaluating system-level architecture, as the ATAM method requires.  Gabriel got his list of attributes from “Software Requirements” by Karl Wiegers.

Of course, there are other possible lists of attributes.  The ISO defined a set of system quality attributes in the standards ISO 25010 and ISO 25012.  They use different terms.  Instead of System Quality Attributes, there are three high-level “quality models,” each of which presents “quality characteristics.”  For each quality characteristic, there are different quality metrics.

Both the list of attributes from Wiegers and the list of “quality characteristics” from the ISO are missing a key attribute… “time to release” (or time to market).

The missing criterion

One of the old sayings from the early days of Microsoft is: “Ship date is a feature of the product.”  The intent of this statement is fairly simple: you can only fit a certain number of features into a product in a specific period of time.  If your time is shorter, the number of features is smaller.

    I’d like to suggest that the need to ship your software on a schedule may be more important than some of the quality attributes as well.  In other words, “time-to-release” needs to be on the list of system quality attributes, prioritized with the other attributes.

    How is that quality?

    I kind of expect to get flamed for making the suggestion that “time to release” should be on the list, prioritized with the likes of reliability, reusability, portability, and security.  After all, shouldn’t we measure the quality of the product independently of the date on which it ships? 

In a perfect world, perhaps.  But look at the method that ATAM proposes.  The method suggests that we create a stack-ranked list of quality attributes and get the business to sign off.  In other words, the business has to decide whether “Flexibility” is more, or less, important than “Maintainability.”  Try explaining the difference to your business customer!  I can’t.

However, if we create a list of attributes and put “Time to Release” on the list, we are empowering the development team in a critical way.  We are empowering them to MISS their deadlines if there is a quality attribute higher on the list that needs attention.

For example: let’s say that your business wants you to implement an eCommerce solution.  In eCommerce, security is very important.  Not only can the credit card companies shut you down if you don’t meet strict PCI compliance requirements, but your reputation can be torpedoed if a hacker gets access to your customers’ credit card data and uses that information for identity theft.  Security matters.  In fact, I’d say that security matters more than “going live” does.

    So your priority may be, in this example:

    • Security,
    • Usability,
    • Time-to-Release,
    • Flexibility,
    • Reliability,
    • Scalability,
    • Performance,
    • Maintainability,
    • Testability, and
    • Interoperability.

    This means that the business is saying something very specific: “if you cannot get security or usability right, we’d rather you delay the release than ship something that is not secure or not usable.  On the other hand, if the code is not particularly maintainable, we will ship anyway.”

    Now, that’s something I can sink my teeth into.  Basically, the “Time to Release” attribute is a dividing line.  Everything above the line is critical to quality.  Everything below the line is good practice.
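
    As a sketch of how that dividing line could be applied, here is a short Python fragment using the example ranking above.  The variable names are my own invention; nothing here comes from ATAM or the ISO standards.

        # Split a signed-off ranking at the "Time-to-Release" marker.
        ranking = [
            "Security", "Usability", "Time-to-Release", "Flexibility",
            "Reliability", "Scalability", "Performance", "Maintainability",
            "Testability", "Interoperability",
        ]

        cut = ranking.index("Time-to-Release")
        critical_to_quality = ranking[:cut]  # worth delaying the release for
        good_practice = ranking[cut + 1:]    # ship anyway if these fall short

        print("Delay the release for:", critical_to_quality)
        print("Ship anyway if weak:", good_practice)

    Moving an attribute above or below the marker changes exactly one thing: whether the team is allowed to slip the ship date for it.  That is the conversation you want the business to have.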

As an architect sitting in the “reviewer’s chair,” I cannot imagine a more important dividing line than this one.  Not only can I tell if an architecture is any good based on the criteria that rise “above” the line, but I can also argue that the business is making an unacceptable sacrifice for any attribute that actually falls “below” the line.

    So, when you are considering the different ways to stack-rank the quality attributes, consider adding the attribute of “time to release” into the list.  It may offer insight into the mind, and expectations, of your customer and improve your odds of success.
