Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

August, 2007

Posts

    The perfect service oriented architecture is... small

    • 3 Comments

    Special thanks to 'Jerman of the Board' for this blog post on a good book, "The Paradox of Choice."  Just from the links on this page, I intend to go out and get the book right away.

    Quote: "The basic premise of the book is that TOO much choice leads to LESS satisfaction. In other words, More is Less."

    I see this all the time in software development as well.  If a customer is given a choice of 'buy commercial software or build an IT system', and money is no object, then they will choose to build a system, every time, because they have infinite choices about the features they want.  They will also expand the cost of the system to include expensive features that will never justify themselves.  Some business users who are familiar with the experience of having an IT analyst write down every word are too spoiled to actually purchase commercial software, because it doesn't meet 110% of their stated needs.

    But the moment you say "you don't have the choice to write your own... you must buy something that is available," then suddenly the software on the market will work just fine.  They pick one, implement it, and live with it for years.

    So if we take this logic and apply it to SOA, what should we provide when we create the list of services that the enterprise 'needs' to have (the periodic table of services)?  Do we provide 40 different endpoints that are minor variations on 'create a customer' or do we provide three, one for each of three different (standardized) interaction patterns?

    I suspect that if we provide 40, the customer will want 42.  If we provide three, they will pick one and move on.
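
    To make the "three" concrete, here is a minimal sketch (in TypeScript, for brevity; the names, shapes, and patterns are my own illustration, not a published contract) of one customer-creation capability exposed through three standardized interaction patterns:

      // Hypothetical sketch: one 'create a customer' capability, exposed through
      // three standardized interaction patterns instead of forty variations.
      interface CreateCustomerRequest {
        name: string;
        billingAddress: string;
      }

      interface CreateCustomerResult {
        customerId: string;
      }

      interface CustomerService {
        // Pattern 1: synchronous request/reply, for interactive callers.
        create(request: CreateCustomerRequest): Promise<CreateCustomerResult>;

        // Pattern 2: asynchronous one-way submit; the result is delivered later
        // to a caller-supplied reply address or queue.
        submit(request: CreateCustomerRequest, replyTo: string): Promise<void>;

        // Pattern 3: bulk/batch, for migrations and nightly feeds.
        createBatch(requests: CreateCustomerRequest[]): Promise<CreateCustomerResult[]>;
      }

    A team that needs a variation picks one of the three patterns; it does not ask for a forty-first endpoint.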

    So if you are considering a list of enterprise services that exceeds 500 core services across the enterprise, perhaps this is an opportunity to think again.  Too much choice is not necessarily a good thing. 


    Does APM reduce cost? That depends.

    • 4 Comments

    The goal of Application Portfolio Management is to reduce the cost of owning the portfolio.  The fundamental premise is this:

    • We own a lot of code.
    • It costs a lot to maintain our code.  (too much)
    • Management wants to be able to make cuts in the maintenance budget.
    • We don't know which costs are optional and which costs are necessary.
    • APM answers that question, allowing us to cut unnecessary maintenance costs.

    Something like that anyway.  (OK... it's a bit oversimplified.  Sue me.)

    There are some problems with the notion, problems that show up in the details.  Systems are not simple marbles that you can remove from your bag and sort into neat little buckets.  They are tied to business problems that need to be solved.  They are part of a solution that automates activities in a business process.  The criticality of those business processes, and their rate of change, does more to drive the cost of maintenance than most other factors in play.

    Yet most APM tools are simply catalogs: you write down things about each app, but they don't do much to help you analyze the importance, or rate of change, of the business processes the apps are attached to.

    And thus, all the data collection in the world won't help without that analysis that puts things into perspective.
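
    To make that concrete, here is a back-of-the-envelope sketch of the kind of analysis I mean (my own illustration, in TypeScript; not a feature of any APM tool): score each application by the criticality and rate of change of the business process it supports, and weigh that against what you spend maintaining it.

      // Illustrative only: rank the portfolio so that apps with low process
      // criticality, low rate of change, and high maintenance cost float to
      // the top of the 'candidates for cuts' list.
      interface AppRecord {
        name: string;
        annualMaintenanceCost: number; // dollars per year
        processCriticality: number;    // 1 (nice to have) .. 5 (business stops without it)
        processRateOfChange: number;   // 1 (stable) .. 5 (changes every quarter)
      }

      // Higher score = maintenance spend is easier to justify.
      function maintenanceJustification(app: AppRecord): number {
        return app.processCriticality * app.processRateOfChange;
      }

      function cutCandidates(apps: AppRecord[]): AppRecord[] {
        // Sort ascending by justification per dollar spent; the front of the
        // list is where the maintenance budget is hardest to defend.
        return [...apps].sort(
          (a, b) =>
            maintenanceJustification(a) / a.annualMaintenanceCost -
            maintenanceJustification(b) / b.annualMaintenanceCost
        );
      }

    The particular numbers matter far less than the act of tying each application back to the importance and volatility of the process it serves.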

    Some folks do that analysis, and they see costs decline... sometimes by large margins.  Others expect the tools to help, or they measure the wrong things, and costs don't drop.

    My gut tells me that APM will only succeed when it is tied to tangible analytic methods like Six Sigma.  Without that tie... it may not do very much.  A car, without an engine, won't get you to the church on time.


    Put a ruler to the blueprint... is it useful?

    • 25 Comments

    My favorite possession in high school was my drafting board.  Yep... I was a geek, even then.  I was going to be the next Frank Lloyd Wright (or at least, I wanted to die trying).  I fell in love with Architecture in a high-school drafting class and was hooked.  I had notebook after notebook filled with sketches of floor plans and perspective drawings.  The year was 1979.  Good times.

    So when I was talking to a fellow architect recently about one of our team meetings, I realized that I had a good thing back then, something that I don't have today in my current incarnation of 'Architect.'  When I created a set of blueprints for a house, it was accurate.  I was a careful person because I had to be. 

    You see, the goal of a blueprint is that I can give a package of drawings to a builder and literally walk away.  The house that he or she builds should (if I did my job well) come out looking A LOT like the picture on the paper.  Not identical, mind you.  There will be minor gaps, and the builder may have to make a compromise or two, but for the most part, I should be able to walk through the finished house and find everything pretty much where I put it on paper.

    If the builder had a question about the amount of carpet to order for a room, for instance, they could whip out a ruler and measure the size of the room on the blueprint.  If the scale was 1/2" to the foot, and the room, on paper, measured out to 6 inches wide, the builder KNEW he could order 12 feet of carpet.  (Of course, he would order 13 feet... just in case.)

    Point is that the diagram was so accurate that the builder would not have to ask me for anything that he could get by whipping out a ruler and measuring the drawing on the paper.

    Why don't we have this kind of accuracy in our architectural models? 

    Is that something we should strive for?  This is not an MDA question.  This is an accuracy question. 

    In your opinion, gentle reader, what level of accuracy should the architectural model go to?


    We are going to miss... do we stretch out the Sprint?

    • 5 Comments

    We had a really good discussion this afternoon among the 'agilists' in Microsoft on one of our discussion threads, and I wanted to share portions of the thread with the rest of the world.

    Note: many of these folks are not bloggers, so I edited down their last names a bit. 

    It started with this message:

    We are close to the end of our second sprint. We have some major components that are almost complete but will not be ready for the Sprint 2 demo.

    I was thinking…rather than punting half-baked tasks to the next sprint and also having nothing to demo but mockup functionality, should we let the major server components complete by adding a week to our sprint? 

    The chorus of messages that came back was in unison.  NO.  Do not extend the sprint!  Here are some of the (edited) replies...

    Don’t do this.  Disaster lies along this path – I know, I’ve made this mistake myself.  I will never ever ever extend a Sprint again.  Yes, it’s painful.  You have a problem, and fixing problems can be painful.

    It sends exactly the wrong message:  the team committed, but now can’t deliver.  If you bail them out now, they’ll expect you to do it again. 

    [When using] a Sprint burn-down chart, the Team has known that they’re in trouble as long as anyone.  [In hindsight,] they should have broken the items into smaller pieces that they could complete, prioritized work so that they’d have something to show, or aborted when it became obvious that the planning and goal was so far off as to be impossible to achieve.

    Bill H. -- Deployment Technology

    To the end of being ‘constantly shippable’, there is always a way to deliver some functionality in the time remaining. I’m not saying that it’s easy to see how to break things up along vertical slices (instead of traditional horizontal slices), or that it’s easy to carry through and deliver it, but there is a way. If you aim for that, you’ll get better at it. Doing this _is_ a skill, and it takes practice to get good at it.

    Rohit E. --  SQL Server

    I advise you to not change dates. 

    Furthermore you shouldn’t demo mock-up functionality, but instead should demo what you have. 

    I’ve always tried to keep the sprint review demos unpolished and unedited.  So, show the good, the bad, and the ugly.  I think in many ways we're afraid to show poor functionality, half-baked features, etc., when in fact this is precisely what is needed for the stakeholders to get as clear a picture as possible of the actual state of the project.  Better to be completely up-front.  This also spills over to how you deal with customers (complete transparency).

    You should avoid making the demo look nice (implementation) without the unit tests and scenario tests enforcing requirements (testing). 

    When the sprint deadline is close, it’s very tempting to put effort into implementation and make up the testing work in the next sprint.  But then you haven’t completed a whole ship cycle within the iteration, and it’s a dangerous game to get into, because you’ll continue to spiral by always having testing behind.

    Allen D. -- Dev Div

    If you extend a sprint, you set up a pattern where the sprint end date isn’t real, and on every other sprint, the team will (rightly) expect that you will extend the sprint for them if they are a little bit late.

    If you can demo stuff, you demo it. If not, you don’t demo it.

    Eric Gu. 

    +1 – Never extend a sprint.  A sprint should not have “exit criteria”.

    Also, be careful about how you evaluate the sprint: saying the team failed in the execution of the sprint is most likely the wrong perspective.  Most likely the biggest issue was in the sprint planning.  I almost always see teams try to do too much in their early sprints until they get realistic about what they can actually achieve in a month.

    My recommendation: plan your next sprint more realistically and allow some variability.  One good way of doing this is splitting your sprint plan into “Required” and “Bonus” work for the sprint.  Make sure the “Required” work can always be done in less than the sprint timeline, but your required + bonus work could not be done in the sprint.

    For example, plan two weeks of “Required” work and four weeks of “Bonus” work.  This helps improve your predictability because you’ll know the required stuff will get done, and also makes sure the team won’t run out of work before the sprint end.  Remember that there is no such thing as a perfect estimate, so if you try to plan exactly a month of work, you’ll always be wrong with a 50/50 chance whether you’re over or under. 

    With 2 weeks required and 4 weeks bonus, the team can be up to 100% over their estimate and still complete the required work, or 33% under their estimate and still not run out of work.  If your team’s estimates get more accurate, then you can plan 3 required and 2 bonus, but never go to just 4 required.

    Jonathan W. -- Server and Tools

    One of the guiding principles of agile is building customer trust. There was a great session of experience reports yesterday here at the Agile 2007 Conference on this subject. If you tell your customer you will have something done on a certain date, you need to deliver.

    If you can only get part of it done, it better be darn close to perfect and released on time, and the next sprint better deliver the missing functionality in perfect condition and on time. If you just push out your deadline, trust from the customer and confidence from the team will erode.

    I think it is also extremely important for the team to be able to say, "This sprint is done." By lengthening the sprint, you lose sight of the end, and the team can lose focus on the goal. Some schools of thought say to celebrate even the small successes. Acknowledge the stories that are done done done. Acknowledge what is left to do. Figure out why you weren't able to finish and make a plan to overcome that in the next sprint.

    Robin P. -- Windows
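
    One aside from me: Jonathan's 'required plus bonus' arithmetic is easy to sanity-check.  Here is a small sketch (my own illustration, in TypeScript; the names and figures are made up, not from the thread) that tests whether the required work still fits, and whether the team runs out of work, for a given estimating error:

      // All figures are in weeks. estimateFactor is actual effort divided by
      // estimated effort (2.0 = the team was 100% over; 0.67 = roughly 33% under).
      function checkSprintPlan(
        requiredWeeks: number,
        bonusWeeks: number,
        sprintWeeks: number,
        estimateFactor: number
      ): { requiredFits: boolean; runsOutOfWork: boolean } {
        const actualRequired = requiredWeeks * estimateFactor;
        const actualTotal = (requiredWeeks + bonusWeeks) * estimateFactor;
        return {
          requiredFits: actualRequired <= sprintWeeks,
          runsOutOfWork: actualTotal < sprintWeeks,
        };
      }

      // Two weeks 'Required' plus four weeks 'Bonus' in a four-week sprint:
      console.log(checkSprintPlan(2, 4, 4, 2.0));  // 100% over estimate: the required work still fits
      console.log(checkSprintPlan(2, 4, 4, 0.67)); // ~33% under estimate: the team does not run out of work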

    And what did I get out of all of this?  The undeniable fact that Microsoft has some very passionate agilists in nearly every part of the company, each using agile methods to deliver 'trustworthy computing' one sprint at a time.

    And in case you haven't heard me say it before, this is the best place on Earth to be a developer... hands down. 


    SOA is not a disruptive technology (selling SOA, part three)

    • 4 Comments

    I've been having a really fun discussion these past few days with some folks on how (or what) to sell to the business when you want to build apps using SOA.  Mike Kavis believes that there are three camps: (1) sell the technology, (2) sell the solution (and only the solution) and hide the SOA, and (3) sell both the solution and the technology and show how the business benefits by it.  (I cannot, for the life of me, find anyone in the first camp.  Sorry Mike.  I think there are only two camps.)

    I'm all about the business benefits.  In other words, if I can go the entire conversation without bringing up the word SOA (it hasn't come up yet), then I will.  From my standpoint, SOA is good design.

    Innovation and Disruption

    Mike Kavis wrote a really thoughtful post: a real-life story of how he sold a disruptive technology combination (SOA+BPM)... as an enabler of what the business wanted to do.  Quote...

    We sold the business on the benefits of BPM and then explained how SOA was the key to allow the BPMS tool to talk to our legacy systems. [...]  Once the business knew that SOA was the enabler for their BPM initiative, which happened to have an eight figure ROI over five years, they didn't need to hear anymore.

    What a great story!  I love this.  This is an example of how IT can come to the table and offer disruptive innovation to change the way we do business.  That is something that happens far too rarely.  I strongly agree with Mike that this is the right approach to take in this case.

    I want to make something clear: I strongly applaud Mike for what he did and the success he has had.  (Dude... if you ever want a job, let me know.  I'm drop-dead serious). Mike sold a disruptive technology to the business, and he described the technology when he did it.  If I were selling something disruptive, I would too.

    But what about when I'm not trying to do something disruptive?  What if I am just trying to keep the trains running (and build new trains and track)? 

    Business and IT and SOA

    This year, Microsoft will embark on something like 200 substantial IT projects, with eight of them costing more than $5M each in the coming year.  Eight sizable projects.  (Total costs of IT, including data centers, maintenance, and new initiatives, will top $1B this year... which is typical.  There are 6,000 people in Microsoft IT.  This is not a small organization.)

    Let's look at those 'big' projects.  In each case, the need to change the infrastructure ties to business strategy (some more strongly than others, but alas, that's how these things work).  The business comes with a strategy and we work out what the process changes are, and then we figure out what the data, application, and technology changes are.  We plan out the future state, and the business funds projects to support it.  It's all quite collaborative.

    I want to make two observations:

    • All eight of these "big fish" projects will use SOA. 
    • In the business case for each of these projects, there is not a single word about SOA.

    Each project is free to contribute a bit of their funding to shared infrastructure where needed.  That gives us better monitoring and management and helps fund "tool things" if we choose to do them.

    Listening to the customer means knowing what they care about

    We do talk about SOA, just not with the business.  It's not that the business doesn't understand SOA, or wouldn't if we told them... it's just that they are not incented to care.  If I walk into a business meeting with the head of the Volume Licensing organization, I have to realize that his operational team books a little over $1B in revenue to Microsoft's bottom line every month.  Let's say I take an hour of his time.  During that hour, his organization has closed on about $6M in revenue (a billion dollars a month works out to roughly six million per working hour).  If I ask for $200,000 for SOA, when SOA is basically good design done well, using modern tools, many of them free... I've wasted his time.

    If I am spending an hour of his time, that hour had better be about increasing sales, reducing costs, or expanding markets.  That's it.  That's what he cares about.  That is what he signs up, each year, to care about.  He has very specific targets to hit.  People count on him.  He counts on us.  That is how it works.

    Innovation on the inside

    We have business process teams here, but they are not tied to composing services... not yet.  We are working on the BPM+SOA approach in Microsoft IT, but honestly, the proof of concept needs to be identified (and the planets have to align) before I can turn on the charm.  I am a huge proponent of BPM and believe strongly that SOA powers BPM to success.  When we are ready to sign up a proof-of-concept project, to introduce a disruptive technology, I'm sure we will have to talk about technology to that business customer.  In that case, sure... talk tech.

    But it won't be a huge, expensive, 'innovation breakthrough' project where we introduce disruptive technology.  We will do it in a test case project that can deliver quickly, and where the risk of failure can be mitigated.  I'm not going to bet the (huge) Vista business or the (comparatively smaller but very visible) XBox business on my personal pet project, and neither will those business leaders.  No one is asleep at the wheel here.  People notice when Microsoft screws up. 

    We screw up sometimes... All IT shops do.  And you don't see the failures (usually).  That is because we innovate quietly and then, when innovation pays off, we tell the planet.  When it doesn't, you don't see it.  We try again until we get it right.  We are a pretty persistent bunch.  It's a culture thing.

    Once the test case for disruptive technology shows promise, we will discuss it with other teams.  It will grow virally.  There is no 'strategy' to adopting an idea that has proven itself valuable.  It's more like flood gates around here.  Usually too many people will rush to the good idea, and we have to tell them to take turns until we naturally grow our capacity.

    SOA = Professionalism, not tools

    SOA is not a disruptive technology.  Not here.  Not anywhere.  SOA is mainstream.  We don't discuss SOA with the business because we don't discuss professionalism or intelligence with our business... it is assumed and required that we behave with best practices and bring the best available design.  That includes SOA. 

    Our stack, with WCF and SQL Service Broker side by side, and with Workflow Foundation up a level, and with Biztalk and the ESB toolkit on top, is rich and deep and very solid.  It gets better every year.  Once you know the solution you want to deliver, then I'm happy to talk about tools, because these tools will lower the cost of delivering it. 

    SOA simply is not disruptive.  Not any more.


    Politecture

    • 1 Comment
    Aaron Hanks gets credit for coining this term.
     
    Ever heard the old saw that says that software reflects the organizational structure of the team that writes it?  That raises the question: which came first?  The architecture of the app (to which the organization fit itself), or the organization (within which the design was contorted to fit)?  Usually, the team already exists and the architecture has to fit the team.
     
    Politecture - A software architecture that exhibits the effects of non-functional constraints placed or enforced by the political needs of the software development or IT organization that describes, manages, codes or maintains the resulting system.
     
       (term by Aaron Hanks.  definition by Nick Malik)
     
     