Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

April, 2006

Posts
  • Help wanted: who pays to simplify your IT portfolio?

    • 6 Comments

    Entropy creates IT portfolios.  As time goes on, business needs change.  Mergers happen.  Tool strategies change.  It is inevitable that every IT department will find itself (rather frequently) facing the situation where there are two or three or four apps that perform 'important task forty-three' when the enterprise needs exactly one.

    So let's say that important task forty-three is to "track the number of widgets in the supply chain from distributor to retailer."  Let's say that the Contoso company has two divisions.  One makes toys.  The other makes fruit snacks (a mildly perishable snack).  These two divisions have different needs, and they were built at different times.  Contoso purchased a supply chain application for the toy division, and built its own for the food division.

    So, along comes an architect and he says 'you only need one.'  After the business stops laughing, they ask if he's serious.  When he says that he is, they argue that he's wrong.

    Let's assume he wins the argument.

    Who pays to combine them?

    The needs are different.  The application that manages the toys is not able to handle perishable items and the interesting stock manipulations that go on.  It is not able to track batches, and their due dates.  It is not able to calculate the incentive rebates that Contoso offers to retailers to refresh their stock when they flush aging inventory.

    Combining the two may very well mean purchasing a third application and migrating both of the two older systems' data into it.  Or it may mean expensive modifications to the commercial package, or the addition of custom code to the in-house tool.

    That costs money.  We want to spend money to save money.  Fine. So who takes it on the chin?  Who extends the credit?  The food division or the toy division?

    These decisions are not rare enough to run each one up to the CIO.  There has to be a rational way to fund the decommissioning of an application: moving and archiving data, validating functionality, updating process flows, re-attaching broken integration pathways, and so on.  It can cost as much to turn an application off as it did to install it in the first place.

    What have you seen work in your organizations? What process do you use?

  • Do we need a new measure of software complexity to calculate the TCO of a portfolio?

    • 2 Comments

    A few days back, I blogged about a formula I'd suggest to measure the TCO of the software portfolio.  One responder asked "how do you measure complexity, since it is a large part of your formula?"

    To be honest, I was thinking about traditional models of complexity measurement that are often built into static analysis tools.  Things like Cyclomatic Complexity are useful if you have a codebase that you are maintaining, and you want to estimate the risk of changing it.  I would argue that risk is proportional to cost when it comes to maintenance.  That said, the complexity should get as low as possible, but no lower.
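    For anyone who hasn't played with these metrics, here is a minimal sketch of how cyclomatic complexity is usually computed (M = E - N + 2P over a control-flow graph).  The tiny graph below is invented for illustration; it stands in for a single function with one if/else in it.

        # Minimal sketch: cyclomatic complexity from a control-flow graph.
        # M = E - N + 2P, where E = edges, N = nodes, P = connected components.

        def cyclomatic_complexity(edges, nodes, components=1):
            """Return M = E - N + 2P for one function's control-flow graph."""
            return len(edges) - len(nodes) + 2 * components

        # Invented example: a function with a single if/else forms a diamond:
        #   entry -> test -> (then | else) -> exit
        nodes = ["entry", "test", "then", "else", "exit"]
        edges = [("entry", "test"), ("test", "then"), ("test", "else"),
                 ("then", "exit"), ("else", "exit")]

        print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2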

    However, after reading the question, and rereading the entry on Cyclomatic Complexity on the SEI site, I realized that the definition on that site is wildly out of date.  The page references a number of methods, but not one has any notion of whether an app's complexity goes down when it is assembled from configured components (see Fowler's paper on Dependency Injection and Inversion of Control).
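    To make the 'configured components' point concrete, here is a small, hypothetical sketch of constructor injection.  None of these class names come from a real system; the point is that the report class depends only on an abstraction, and the wiring happens outside the class, in assembly code that could just as easily be driven by configuration.

        # Hypothetical sketch of constructor injection (Inversion of Control).
        from abc import ABC, abstractmethod

        class InventoryFeed(ABC):
            @abstractmethod
            def widget_count(self, sku: str) -> int: ...

        class ToyDivisionFeed(InventoryFeed):
            def widget_count(self, sku: str) -> int:
                return 42  # stand-in for a call to a purchased supply chain package

        class SupplyChainReport:
            def __init__(self, feed: InventoryFeed):  # the dependency is injected here
                self._feed = feed

            def low_stock(self, sku: str, threshold: int) -> bool:
                return self._feed.widget_count(sku) < threshold

        # Assembly ("configuration") lives outside the report class entirely.
        report = SupplyChainReport(ToyDivisionFeed())
        print(report.low_stock("TOY-001", threshold=100))  # True, since 42 < 100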

    In addition to the advent of patterns, we have made great strides in removing code complexity by placing business rules outside the code.  In some respects, this pays off by reducing the cost of ownership.  On the other hand, you have to account for the complexity of how the business rules are encoded and maintained.  Rules encoded as VBScript are hardly less complex than code.  But they may be less complex (to maintain) than rules encoded as a hand-built static linked list or tree structure stored as database records.
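    As a toy illustration of that trade-off (every name and number below is invented): a rule expressed as data is easy to edit and count, but the interpreter that reads it carries complexity of its own, and a code-only metric sees only the interpreter.

        # Hypothetical sketch: a business rule externalized as data rather than code.
        # The rule table is trivial to change; the interpreter below is the
        # complexity that a code-only metric would never see.

        REBATE_RULES = [
            # (days until the batch's due date, rebate percentage offered)
            {"max_days_to_expiry": 30, "rebate_pct": 15},
            {"max_days_to_expiry": 60, "rebate_pct": 5},
        ]

        def rebate_for(days_to_expiry: int) -> int:
            """Return the rebate percentage for stock this close to its due date."""
            for rule in sorted(REBATE_RULES, key=lambda r: r["max_days_to_expiry"]):
                if days_to_expiry <= rule["max_days_to_expiry"]:
                    return rule["rebate_pct"]
            return 0

        print(rebate_for(20))  # 15
        print(rebate_for(90))  # 0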

    We have also removed complexity from code by placing some of it in the orchestration layer of an integrated system.  In fact, this can be a problem, because complexity in orchestration can be quite difficult to manage.  I've seen folks install multiple instances of expensive integration server software just because they felt they could better manage the proliferation of messaging ports and channels by dedicating each instance to a specific subset of the channels.

    This is not done for performance.  It may even make deployment more difficult.  But if your messaging infrastructure is a single flat address space, then fixing a single message path is like trying to find one file in a directory of 12,000 files, each named with a GUID, when the sort option is broken.

    So complexity in the orchestration has to be taken into account.  Remember that we are talking about the complexity of the entire portfolio.  If you say that neither App one nor App two owns the orchestration between them, then are you saying that the orchestration itself is a new app, called App three?  How will THAT affect your TCO calculations?

    Most of the really old complexity measures are useless for capturing these distinctions.

    Of course, you could just measure lines of code or Function Points.  While I feel that Function Points are useful for measuring the size of the requirements, the TCO is not derived from the size of the requirements.  It is derived from the size of the design.  And while I feel that LOC has a place in application measurement, I do not feel it provides a useful mechanism for estimating the total cost of owning an application: a well architected system may require more total lines of code, but it should reduce the amount of 'cascading change', because well architected systems reduce coupling.

    On the other hand, complexity, to be useful, must be measurable by tools. 

    I'm not sure if I can whip up a modern complexity calculation formula to replace these older tools in the context of a blog entry.  To do this topic justice would require the time to write a master's thesis.

    That said, I can describe the variables I'd expect to capture and some of the effects I'd expect each of these variables to have on total complexity.  Note: I would view orchestrations to be 'inside the boundary' of an application domain area, but if an area of the architecture has a sizable amount of logic within the connections between two or more systems, then I'd ask whether the entire cohesive set could be viewed, for the sake of the complexity calculation, as a single larger application glued together by messaging.

    Therefore, within this definition of application, I'd expect the following variables to partake in some way in the function:

    Variables for a new equation for measuring complexity

    Number of interfaces in proportion to the number of modules that implement each interface: an interface that has multiple children shows an intent to design by contract, which is a hallmark of good design practice.  That said, each interface has to have at least two modules inheriting from it to achieve any net benefit, and even then, the benefit is small until a sizable amount of the logic sits behind the interface.

    Total ports, channels, and transformations within the collaboration layer, divided by the number of port 'subject areas' that allow for grouping of the data for management: the idea here is that the complexity of collaboration increases in an S curve.

    Total hours it takes to train a new administrator to perform each of the common use cases to a level of competence that does not require oversight.

    Total number of object generation calls -- in other words, each time the 'new' keyword is used, either directly or indirectly.  By indirectly, we mean counting each call to a builder as the same (or slightly less) complexity as a direct use of the 'new' keyword.

    Total count of the complexity as it is measured by coupling -- There are some existing tools that appear to do a fine job of measuring the complexity by measuring the module coupling. 

    I'm sure I'm missing some more obvious ones, because I'm tired. 

    Once the list is understood and generated, creating a formula that models the actual data isn't simple.  That said, I'm sure that we need to do it.
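    Just to show the shape of the thing, here is a rough sketch of what such a formula might look like once the variables above are captured.  The weights, the S-curve parameters, and the example inputs are all invented placeholders, not a calibrated model.

        # Hypothetical sketch only: a per-application complexity score built from
        # the variables listed above.  Every coefficient is a placeholder.
        import math

        def orchestration_term(ports, subject_areas):
            """S-curve (logistic) growth as ports and channels pile up per subject area."""
            density = ports / max(subject_areas, 1)
            return 1.0 / (1.0 + math.exp(-(density - 20) / 5))  # midpoint and scale are guesses

        def app_complexity(interfaces, implementations, ports, subject_areas,
                           training_hours, new_calls, coupling_score):
            contract_benefit = implementations / max(interfaces, 1)  # design-by-contract payoff
            return (0.8 * coupling_score       # measured module coupling
                    + 0.5 * new_calls          # direct and indirect object generation calls
                    + 2.0 * training_hours     # administrator ramp-up time
                    + 10.0 * orchestration_term(ports, subject_areas)
                    - 1.5 * contract_benefit)  # interfaces with multiple implementations help

        # A portfolio TCO calculation would then sum this score over every application,
        # including any orchestration 'glue' counted as an application in its own right.
        print(app_complexity(interfaces=12, implementations=30, ports=80, subject_areas=4,
                             training_hours=16, new_calls=240, coupling_score=55))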

  • Architecture is an attitude, not a model

    • 5 Comments

    I ran across an interesting post by Bob McIlree that argues, among other things, that the 'real problem' is not what we might think.  To quote:

    So...the real problem we're solving for, as JT noted, isn't necessarily better, faster, cheaper. In the large corporate and governmental areas, I'd argue that the real aggregate problems we're solving for are, as examples: cost-effective front and back-end, compliant, auditable, available (pick your nines), extendable/maintainable, interoperable, and secure.

    This is a soft description of Software Quality Attributes, a mechanism that you can use to evaluate and review software architecture (search for the ATAM method).  That Bob needed to take the time to describe this is, in my opinion, indicative of just how young, and probably how misunderstood, the architecture profession really is.

    Anyone who needs to spend a lot of time telling others what their job is, is working in a new job.  Some would say "a job that is not needed," but there I would disagree.

    Those of you who say that Project Managers have always been part of software are too young to remember when Systems Analysts would solve the problems themselves.  The emergence of the Project Management profession was not an easy one.  A lot of folks questioned the need for a PM, and others resented that they got to do some of the up-front work that tends to get a good bit of the visibility.  Is it really that hard to remember that the reason we needed project managers in the first place is that developers, by themselves, had a cruddy success rate in delivering software on time?

    The problem was not that developers couldn't manage time or tasks.  It was that there needed to be a group of people dedicated to solving the problem of delivery (time, resources, funding), separate from development.

    Where PMs solve the problem of "deliver software right," EAs solve the problem of "deliver the right software."  We are needed for the same reasons: because development teams, by themselves, have a cruddy success rate at delivering software in small testable components that are hooked together in carefully managed but loosely coupled links. 

    We are the ones that figure out where those small testable components are, what their boundaries look like, how they are managed, and how to communicate with them.  We tie their abilities with the needs of the business: the capabilities of the organization.

    For those who say that one technology or another will be the 'magic bullet,' I'd point out that we introduced a technology a long time ago that allows for loose coupling... it's called the dynamically linked library (DLL).  That will solve everything!  Right? 

    The problem is not that developers cannot manage loose coupling, or messaging.  It's that there needs to be a group of people who are incented to solve this particular problem, separate from all the other stresses of business or IT that tend to prevent these designs from emerging.  We need people dedicated to solving the problem of capability scope and component boundary definition, separate from application design or technology deployment.

    It's a dual role, not unlike that of being both the city planner and the zoning board inspector.  You not only help decide where the street is supposed to go, but you 'encourage' the land developers to build according to the city plan.   When it works, the streets flow.  When it doesn't, you get Houston.

    To be fair, I think we are still coming to terms with the profession ourselves.

    So, to Bob and all the others who feel the need to explain what EAs are for, I add my voice and recognize that, in my meager attempt to describe what I do, I am also defining it, refining it... and maybe even understanding it... just a little bit more.

  • How Enterprise Architecture enables Web 2.0

    • 1 Comment

    The role of an enterprise architect is not well understood.  That much is clear.  Some folks say that EA is at one end of the scale, while Web 2.0 is at the other.  Those people are not enterprise architects.  They are missing the point.

    Web 2.0 is about building solutions in a new way.  Enterprise Architecture does not tell you to build the solution in the correct way, as much as it tells you to build the correct solution. 

    Enterprise Architecture would be completely unnecessary if you could simply teach all of the practitioners of IT software development to build the right systems.  In fact, that was the first approach most organizations used. 

    Smart people would notice that stupid things were happening: many systems showing up in an organization, all doing the same things in different ways, each consuming millions in cash to create and maintain, instead of smaller components with independent capabilities, hooked together with messages.  Smart people would say "This is dumb."

    Management would say "We agree.  Tell everyone to stop doing that."

    Smart people would tell other IT staff to stop doing it.

    And it kept happening.

    So Enterprise Architecture is born.  Not to be a bastion of smart people who are somehow smarter than anyone else.  Nope.  To be a group of smart people who are incented differently.

    Every day, I make decisions.  Some of them are easy.  Others are about as difficult as they come.  I have a set of principles on my wall that I use to guide my decisions.  They are public principles.  Others helped to craft them.  But I don't report to those others.  I report to central IT.  So when the customer says "I need to solve this problem," and IT says "Let's build a glorious new infrastructure," I can say "No" without fear of reprisal.

    And here's the kicker:

    I can say "I've been working with this other team, and they built a bunch of apps based on shared services.  Those services do the same things that you need done.  The services catalog is at http://servicecatalog and I expect you to use those services.  Take a look.  I will review your design doc.  If you aren't consuming those services, you will stop.  If you are adding to the list, I'll be thrilled."

    Then I sit back, and watch the "next generation web" launch itself into success.

    Look, there are going to be a lot of things needed to make the next web successful.  I've blogged about them in the past.  We need to know which services to pick, and when to pick them; this is true.  But we also need to know when they have died, and which dependencies a service has, so that we can adapt to changing situations.

    As an Enterprise Architect, I am incented to think about these things, find solutions, and make sure we are all using them.  That way, when you pick a service, you KNOW how reliable it is (because it uses our framework, which allows you to inspect the uptime numbers), and you KNOW that it can handle the traffic you are going to send it (because we required the team that developed it to perform a scalability test and publish the results for you to see).
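    To make that concrete, here is a hypothetical sketch of the kind of metadata a catalog entry could expose before a team takes a dependency on it.  The field names and values are invented for illustration; they are not pulled from a real catalog.

        # Hypothetical sketch: a service catalog entry that publishes the reliability
        # and scalability evidence a consuming team needs to see.
        from dataclasses import dataclass, field

        @dataclass
        class ServiceCatalogEntry:
            name: str
            owner_team: str
            uptime_90_days_pct: float          # published from the monitoring framework
            max_tested_requests_per_sec: int   # from the team's scalability test
            depends_on: list = field(default_factory=list)

            def safe_to_consume(self, expected_rps: int, required_uptime: float) -> bool:
                return (self.uptime_90_days_pct >= required_uptime
                        and self.max_tested_requests_per_sec >= expected_rps)

        inventory_svc = ServiceCatalogEntry(
            name="WidgetInventoryLookup",
            owner_team="Supply Chain IT",
            uptime_90_days_pct=99.9,
            max_tested_requests_per_sec=500,
            depends_on=["OrderHistoryService"],
        )
        print(inventory_svc.safe_to_consume(expected_rps=200, required_uptime=99.5))  # True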

    For those services that have no information, you can use them... I won't stop you... but your customers (the business users who are paying your salary), they will.  They like cheap.  They like agile.  They don't care how.  But they do care that it runs reliably.  They do care that it can be tested. 

    Enterprise Architecture is not the Central Soviet of IT.  We are the city planners who set zoning, inspect new construction, enforce setbacks, and protect wetlands.  You are just as free to make a brave new world of Web 2.0 with EA in the picture. 

    In fact, you are far more likely to succeed if we are there.

  • Is IT software development BETTER than embedded software development?

    • 2 Comments

    We hear all the time, especially in IT, about the dismal failure rate for software development projects.  So many millions wasted on project X or project Y that was cancelled or rewritten within a year.

    So I've been part of a group of frustrated 'change agents' who, for years, have struck out to find better ways to improve.  Better requirements, better estimation, better design.  More agile, more deliveries, more quality.  Tight feedback loops.  All that.  It works.

    But then I get in my Toyota Prius and I can't figure out how to find the darned FM radio station that is tuned to my MP3 transmitter because it involves a complex interaction of pressing a panel button, followed by a button on the touch-screen, followed by a completely different panel button.

    The labels on the buttons are meaningless.  The layout is seemingly random.  No IT software development process that I know of would get NEAR production with an interaction design like this, yet here it is, in one of the most technologically advanced cars in the world, from a world class innovative engineering company, after the model spent numerous years in consumer trials in Japan. 

    That isn't the only example, of course.  Consumer electronics are full of bad interface designs.  I have a wall-mounted stereo that uses an LCD backlight in the default setting, except that the default setting is to show you the radio station, not the clock, and if you switch to the clock display, the back light goes out. 

    How about the remote control that requires a complicated sequence of button presses to allow you to watch one channel while you record another (on your XP-Media Center, Tivo or VCR)?  Or the clock radio with a "Power" button on the face to turn the radio on, but reusing the Snooze button on top to turn it off, unless you happen to hit the 'sounds' button in the middle, which now requires you to hit the power button first, then followed by the snooze button to turn it off (I'm not kidding).

    I have an MP3 player that doesn't let you move forward two songs quickly until it fully displays the title of the first song on the scrolling LCD display.  If it is not playing, and you press the play button for one second, it plays, but if you mistakenly hold the play button down for two seconds, it turns off.  Quick: Find song 30 and play it while driving... I dare you.

    I use a scale that shows my weight and body fat and supposedly records previous settings, although I have yet to figure out, from looking at the six buttons (on a scale, no less), what combination of magic and voodoo is needed to actually get the previous weight to pop up.

    How about the cell phone that makes me punch 6 buttons to add a new entry to the internal phone book, or the Microwave oven with 20 buttons labeled with things like Popcorn and soup, but which proves inscrutable if you just want to run it for 90 seconds on High?

    All of these are software bugs or usability issues embedded in hardware devices.  Nearly all of these devices are inexpensive consumer electronics (except the car), and therefore the manufacturer was not particularly motivated to produce an excellent interface.

    Yet, if a software application like Word or MS Money were to have some of these issues, that application would be BLASTED in the media and shunned by the public.  Software developers in hardware companies seem to get a pass from the criticism... that is, until a hardware company comes along that does it VERY VERY WELL (example: Apple iPod) and puts the rest to shame.

    I used to write software for embedded devices.  I understand the mindset.  Usability is not the first concern.  However, it shouldn't be the last either.

    I think it is high time that we turn the same bright light of derision on hardware products with sucky usability and goofy embedded software, with the same gusto that we normally reserve for source code control tools.

    My expectations of good design have been raised.  You see, I work in IT.

  • Ahead of the curve... again

    • 10 Comments

    Fascinating.  First, we hear that pundits in the blogosphere have given the name AJAX to 1997 Microsoft technologies and called them 'new.'  Now some folks are talking about the basic capabilities of Windows Sharepoint Services as though they didn't happen three years ago.  (See Enterprise 2.0)

    Blogs, wikis, worker-driven content on the intranet.  Dude, Microsoft has been using these technologies internally for years, literally.  The product is Sharepoint, and it has been a FREE download for Windows Server 2003 almost since the day that product was released.

    The IT group I'm in uses blogs to communicate.  Nearly all of our documents, plans and specs are shared in public or semi-public collaboration sites, entirely self service, hosted through Sharepoint portal server.  In addition, there are two major Wiki sites with literally hundreds of sub-sites on each one for internal use.  (One based on FlexWiki, the other based on Sharepoint Wiki Beta).

    Sharepoint is not just used inside Microsoft.  It is one of the most successful server products in the line.  Once a company installs Sharepoint, it is hard to keep it from becoming a de facto standard for collaboration, sharing, and distribution of content.  The product is unstoppable.

    I guess I don't mind when two scientists reach the same conclusion from different sources.  Happens all the time.  However, reputable scientists give credit to the first one to publish their ideas.  In this case, I'd expect that folks wouldn't name products from other companies without also mentioning widely accepted products from Microsoft.
