Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

June, 2007

  • Inside Architecture

    Mort and the Economics of Unmaintainable Code


    I've been called a lot of things in the past few days since I had a public disagreement with many folks over the definition of Mort.  On the surface, it looks like I'm a pretty "out of touch" guy when it comes to the 'common vernacular.' Granted, but looks can be deceiving.  There's more here.  Please bear with me.

    First off, I want to publicly apologize to Sam Gentile.  I meant no offense when I asked if he wanted MS to develop BDUF tools.  It was an obviously absurd question (OK, obvious to me, not obvious to Sam).  I sometimes ask obviously absurd questions when I want to point out that the "logical conclusion" of a line of thinking will take someone where they don't intend to go.  That didn't work on the blog.  My bad. 

    Add to that another mistake: My reading of Sam's message to me was probably incorrect.  In a response I made to a post on Microsoft and Ruby, Sam said:

    I don't think you are getting that point. MSFT is making tools for Morts (the priority) at the expense of every other user (especially Enterprise Developers and Architects). They have nothing for TDD. And I would further contend that making these tools "dumbed down" has significantly contributed to why Morts are Morts in the first place and why they are kept there. Think VB6 and the disatrous code that was unleashed everywhere. If Microsoft took some responsibilty and created tools that advocate good software design principles and supported them then things would be different. You and Peter (which is a friend of mine) are covering the corporate butt. It's a cop out.

    Does it look like Sam is saying "Mort is dumb" or "Mort is bad"?  I thought so at the time.  Perhaps that was not right.  I carried my misreading a bit further.  I read Sam's message to mean, "people who think like Mort thinks are errant."  In hindsight, I believe that Sam meant to say "people who work like Mort works are errant."  The difference is subtle but the result is profound.  Implication: Mort is not a bad person or a stupid person, but the code that Mort produces is not maintainable, and that is bad for all of us.  (I hope I got that right this time.)  Sam cares about maintainable code.

    To rephrase, the problem is not Mort.  The problem is the unmaintainable code he produces.

    So my apologies to Sam. 

    But was I insane in my conclusions, as I've been accused?  Did I redefine reality?  No.

    First, let me put up a quadrant. 

    Agile Quadrant

    I got the axis values directly from the agile manifesto, so it shouldn't be a surprise to anyone in the agile community. 

    Take a look for a minute and see if there's anything you disagree with.  One thing that may be a bit odd is the entry called "agile tool users without ceremony."  This is for the folks who use agile tools like NUnit and CruiseControl to do development, but don't follow any of the other elements of agile development (like rapid cycles, time-box development, FDD, XP, Scrum, etc).  I don't know how prevalent these folks are, but I've certainly met a few.

    Regardless, look at the values expressed in the Agile Manifesto.  Someone who cares more about "meeting the needs of a user" than they do "following a process" would move up the Y axis.  Someone who cares more about "working software" than they do "comprehensive documentation" would move along the X axis.

    OK... reader... where is Mort?

    Think about it.

    Mort doesn't follow lots of process.  He writes code for one-off applications.  Why?  Because that is why he was hired, and that is what he is paid to do.  He does exactly what his company pays him to do.  Does he write a lot of documentation?  No.  So given those two variables, which quadrant does he fall into?

    The upper right.  The one marked "agile."

    If you wonder why a lot of development managers are unsure of the agile community, it is because this comparison is not lost on them.  Any person who doesn't care for process and who doesn't want to write a lot of documentation can fit in that upper quadrant.  Agile folks are there.

    So is Mort.

    I can hear the chorus of criticism: "That doesn't make Mort agile!  Hugo is agile.  Mort is not!"

    I'm not done.

    Mort's code is certainly a problem, because in our world, unmaintainable software is a pain to work with.  Some folks have decided not to hate Mort but to educate him (which is good).  The subtle goal here: move Mort's skillset and mindset, and help him to value maintainable code. 

    If we do this, and we help Mort grow, will he keep his current job?  Probably not.  He was hired into his current job because he was a Mort.  He was hired because his company values quick-fix apps.  Once our intrepid student no longer values unmaintainable code, he will no longer fit in his current position.  He will find another job.  So what will the company do with the open position?  They will hire someone else and TRAIN THEM TO BECOME ANOTHER MORT.

    Remember, we don't hate Mort.  We have a hard time with his code.  We want to eradicate his code.  But the code is still being developed... by a new Mort.

    There is an infinite supply of new Morts.  Therefore, the solution of "educate Mort" doesn't work to solve the problem of unmaintainable code.  The solution doesn't address the underlying reasons why Mort exists or why his code is bad.  You cannot fight economics with education.  You have to fight economics with economics. 

    Let's look at the economics of unmaintainable code, and think about Mort a little more.

    Code is unmaintainable when its complexity exceeds the ability of a developer to maintain it.  Would you agree that is a good definition of 'unmaintainable code'? 

    Rather than look at "making code maintainable," what if we look at making code free?  Why do we need to maintain code?  Because code is expensive to write.  Therefore, it is currently cheaper to fix it than rewrite it.  On the other hand, what if code were cheap, or free?  What if it were cheaper to write it than maintain it?  Then we would never maintain it.  We'd write it from scratch every time. 

    Sure, we can choose to write maintainable code.  We can use practices like patterns, object oriented development, and careful design principles.  On the other hand, we can give Mort an environment where he can't hurt himself... where his code is always small because only small amounts of code are needed to get the job done. 

    This is useful thinking here.  If you cannot make sure that Mort will write maintainable code, make him write less code.    Then when it comes time for you (not Mort) to maintain it (he can't), you don't.  You write it again.

    And that is fighting Mort with economics.  Soon, Mort's skill set doesn't matter.  He is writing small amounts of unmaintainable code, and we really won't care.  Someone 'smart' has written the 'hard' stuff for Mort, and made it available as cross cutting concerns and framework code that he doesn't have to spend any time worrying about.  Mort's code is completely discardable.  It's essentially free.

    Hugo cares about quality code.  Mort does not.  In the fantasy world of free code, what value does Hugo bring, and where does Mort fit?  Does Mort put process first or people first?  He puts people first, of course.  He writes the code that a customer wants and gets it to the customer right away.  The customer changes the requirements and Mort responds.  If it sounds like a quick iteration, that is because it is.  This process is fundamentally agile.

    Yep. I said it.  In situations where maintainability doesn't matter, Mort is agile.  His values are agile.  He is paid to be agile.  He delivers value quickly, with large amounts of interaction with the customer, not a lot of process, and not a lot of documentation.  According to the Agile Manifesto, in a specific situation, Mort is agile.  He is also dangerous. 

    So we constrain him.  As long as Mort can't hurt himself and others, we are protected from him. 

    Of course, we can give Mort smarter tools.  But that goes back to the argument that Mort is the problem.  Mort is not the problem.  His employer is.  We train Mort.  He becomes a quality programmer.  He leaves. The company hires another Mort.

    So what about those Morts that we cannot train?  Every time we try to shove great tools at "untrainable Mort", we don't get "smarter Mort."  The tools get used by other people, but Mort ignores them.  We get faster and better code written by the people who care about faster and better code.  Mort doesn't care.  He is not paid to care.  He is paid to write code quickly, solve a quick problem, and go on.  His code is not maintainable, and THAT IS OK, because he can write small amounts of code (or no code) and still deliver value.

    So how do we pull this off?  How do we allow Mort to write small amounts of code so that we don't care?

    We've been trying to solve this problem for a decade or so.  We tried creating an easy drag-and-drop environment, but it didn't protect us from Mort.  We tried creating controls that do all the hard stuff, but it didn't protect us from Mort. 

    Now, SOA and the Web 2.0 space have opened up a whole new world for Mort to play in.  Generation Next is here, and finally we may be a bit closer to an answer.

    Possible Answer: We can have Mort consume a service.  He can't change it.  He can't screw it up.  But he can still deliver value, because often 60% of the business value is in supporting individual steps in a business process.  Those steps are carefully controlled by the business, but honestly, are not that hard to put together.  It's a matter of "step one comes first, step two comes next."  As long as the details of the interaction are hard to screw up, we are protected from Mort. 
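    The "step one comes first, step two comes next" flow above can be sketched in a few lines.  This is a hypothetical illustration in Python; the service functions and their names are invented stand-ins for real endpoints that Mort can invoke but cannot change:

```python
# Hypothetical sketch: Mort composes pre-built services into a simple
# two-step flow. The service functions are illustrative stand-ins for
# real endpoints; the hard logic lives behind them, out of his reach.

def lookup_customer(customer_id):
    # Stand-in for a service call Mort consumes but cannot modify.
    return {"id": customer_id, "status": "active"}

def create_order(customer, items):
    # Another pre-built service; its contract is controlled elsewhere.
    return {"customer": customer["id"], "items": items, "state": "submitted"}

def process_order(customer_id, items):
    # Mort's entire contribution: step one comes first, step two comes next.
    customer = lookup_customer(customer_id)
    return create_order(customer, items)

order = process_order(42, ["widget"])
```

    The interesting property is how little of Mort's own code exists to maintain: if the requirements change, the few lines of orchestration are cheaper to rewrite than to fix.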

    Here's the cool thing: Microsoft didn't invent this iteration.  This little bit of "Mort defense" came from the integration space (think ESB) combined with great thinking from the web community.  This approach is not something we thought of, but it works to the same ends.  This new way is based on SOA principles and REST protocols (what some are calling WOA or Web Oriented Architecture). 

    Web 2.0 and Mashups are the new agility.  Write little or no code... deliver value right away.

    And in this space, Mort is agile.  Heck, we even like him.

    And in case you are wondering why I don't hate Mort... this is the space I live in.


    Tools for Mort


    For those of you not familiar with the term "Mort," it comes from a user profile used by the DevDiv team.  This team has created imaginary "people" that represent key market segments.  They have talents, and goals, and career paths.  And the one that developers love to bash is poor Mort.

    I like Mort.  I have hired many folks like Mort.  Mort is a guy who doesn't love technology... he loves solving problems.  Mort works in a small to medium-sized company, as a guy who uses the tools at hand to solve problems.  If the business needs some data managed, he whips up an Access database with a few reports and hands it to the three users who need the data.  He can write Excel macros and he's probably found in the late afternoons on Friday updating the company's one web site using FrontPage.

    Mort is a practical guy.  He doesn't believe in really heavy processes.  He gets a request, and he does the work. 

    One fellow I know, who worked with me on a project here at Microsoft (and who reads this blog), once told me that he considers himself "a Mort" and was quite proud of it.  He introduced me to CruiseControl.  I already knew NAnt and NUnit, but he showed me how CruiseControl adds continuous automated testing to the mix.  I loved it.  You see, my friend was a programmer, but he was also Mort.  He believed in getting things done.  He was no alpha geek, to use Martin Fowler's terminology.  (Yes, I know that Martin didn't originate the term; O'Reilly did at MacWorld many years ago.  But he repeated it, and he's a pretty important guy.)

    My friend had taken a typical "Mort" point of view.  He was a really good programmer.  He could write fairly decent C# code and his code worked well.  But what made this guy "better than your average bear" was the practical point of view that he took to his work.  He believed as I believe: technology is a tool.  It is like any tool.  You can use it too little or too much, but the point is to use it well.

    The world needs MORE people like Mort.  In fact, with the movement towards mashups and web 2.0, the world will soon be taken over by people like Mort.  And I couldn't be happier.  Put architects and BDUF designers out of work!  That would really shake things up.

    Most of my open source friends either are a Mort or know a few, because the underlying philosophy of Mort is a big part of the open source and agile movements: People come before Process.  Solving the Problem comes before negotiating the contract.

    So when I got this reply to one of my posts from Sam Gentile, I have to admit that I was really confused.  He starts by quoting me and then provides his feedback.

    >In this response to Martin, Peter argues eloquently for including tools in the toolset that support ALL developers, not just Martin's "alpha geeks."  

    I don't think you are getting that point. MSFT is making tools for Morts (the priority) at the expense of every other user (especially Enterprise Developers and Architects). They have nothing for TDD. And I would further contend that making these tools "dumbed down" has significantly contributed to why Morts are Morts in the first place and why they are kept there. Think VB6 and the disatrous code that was unleashed everywhere. If Microsoft took some responsibilty and created tools that advocate good software design principles and supported them then things would be different.

    Wow, Sam.  I didn't know you had so much animosity for the Agile community!  Are you sure that's what you intended to say? 

    Do you really mean that Microsoft should make a priority of serving top-down project managers who believe in BDUF by including big modeling tools in Visual Studio, because the MDD people are more uber-geeky than most of us will ever be?  I hate to point this out, Sam, but Alpha Geeks are not the ones using TDD.  It's the Morts of the programming world.  Alpha geeks are using Domain Specific Languages.

    I think you are wrong about Visual Basic.  As a trend, VB brought millions of people out of the realm of Business BASIC and slowly, over the course of seven versions, to the world of object oriented development.  Microsoft, whether on purpose or not, single-handedly drove millions of people into the arms of C# and Java. 

    Microsoft cares passionately about good design principles.  So does IBM and so does Sun. Each of these companies (and others) has one or more dedicated groups of people publishing research for free on the web, supporting university research programs, and funding pilot efforts in partner companies, with the expressed goal of furthering Computer Science.  Do NOT underestimate the value that Microsoft Research and our humble Patterns and Practices group has provided to the Microsoft platform community.  You do yourself a disservice by making any statement that reeks of that.

    VB.Net is object oriented.  It is not compatible with VB6.  Microsoft took a LOT of community heat for that... much more so than the current TD.Net dust-up.  If that isn't taking responsibility, I don't know what is.  We alienated millions of developers by telling them "move up or else."  We are the singlehanded leaders in the space of bringing BASIC developers up to modern computing languages.  NO ONE ELSE HAS DONE THAT.  I dare you to find someone else that has forcibly moved millions of people to object oriented development. 

    Lastly, where do Microsoft's own add-ons, in the paid version of Visual Studio, actively PREVENT any of the agile tools from working?  Give me a break!  If an agile tool does an excellent job, and it is free, what motivation does MS have for spending real money to add that capability to the product when it cannot possibly increase the value of the paid Visual Studio product for the people who are already using the free open source tool?

    We didn't add MSTest for the users of NAnt and NUnit.  We added it for the poor folks who wanted to use those tools but whose corporate idea-killers wouldn't let them.  To be honest with you, we made mistakes.  Our first attempts at those tools were flawed, but they are an improvement over the previous version, and I've been told that there is further interest in improving them. 

    Just like Toyota's first Corolla was a boxy, ugly car that bears no resemblance to today's sleek little sedan, steady improvement is a mindset.  It takes time to understand.  Toyota is winning, and the reason is the same as for Microsoft.  Both companies share a relentlessness about improvement.  Slow, perhaps.  Steady?  Not always.  Customer-driven?  Yes.  We don't win by being monopolistic.  We win by being persistent.

    No one is asking you to stop using the free tools you are using, ever.  If those tools continue to improve faster than Microsoft's attempts (something I expect) then you have won the "uber-geek" for the platform, not us.  Thank you.  Keep it up. 



    Microsoft ESB as a toolkit


    Sorry it took me a while to notice, but Microsoft released their first CTP of the ESB Guidance toolkit on Codeplex in May.  If you are interested in the Enterprise Service Bus pattern, or message brokers in general, I recommend this link.

    I've downloaded it and will start looking to see if the connected services team has finally delivered an ESB for Microsoft customers.


    Martin Fowler wants to see Ruby on Microsoft to save the alpha geek


    I like Martin Fowler.  As a veritable lighthouse of the patterns and agile communities, he's both a resource and a calm steady voice for change in an industry that cannot succeed without change.

    So, when he posted his recent entry on "Ruby and Microsoft" I was eager to take a look.  He cites a general willingness of the Ruby community to work with Microsoft and I'm glad of that.  He also points out, and rightly so, that Microsoft has some pretty strict rules designed to prevent open source code from creeping into the product code base, rules that get in the way of open source collaboration.  That's what happens when the company is sued repeatedly for two decades by our competitors and government agencies. 

    Just as IBM suffered under long running, financially and politically motivated, anti-trust suits, which knocked them down a step and opened up the computer hardware market, Microsoft has been similarly affected.  Hopefully, we are making the turn quicker than our friends in big blue did, largely by observing their example.  They did turn the corner, and IBM makes money.  We will turn the corner, and we make money too.  I'm sure of that.  But the lawsuits matter.  They really do.

    That said, I have to say that I disagree with Martin about many of the aspects he hit upon.  I refer readers to this excellent post from Peter Laudati.

    In this response to Martin, Peter argues eloquently for including tools in the toolset that support ALL developers, not just Martin's "alpha geeks."   I agree with Peter.  The MS Platform should encourage all developers to succeed.  I also resent the term "alpha geek."  Truly awful. 

    I would add that Microsoft should NOT deliver open source tools built into the Visual Studio platform, because we cannot possibly support those tools.  If the community develops a tool, they should support it.  I have no problem linking to the stuff and encouraging folks to use it. 

    I think it would be great if a group of Open Source developers would create an all-up "add-on" install that contains all their favorite tools like NAnt, NHibernate, NUnit, Spring.Net, etc in a single package, complete with documentation and samples, that allows folks to easily add the tools to their setup in one jump. 

    Mr. Fowler is being unfair to suggest that MS treats open source differently than "technology companies" like IBM, Sun, and Apple.  We aren't wildly supporting open source.  We don't oppose open source either (not anymore).  The vast majority of software companies are "friendly but not too friendly" with open source.  (There are tens of thousands of software companies.  Martin doesn't name a single serious software company on the open source side.) 

    It's not the entire industry on one side with Microsoft on the other.  It's an industry segment that supports open source and makes its money on hardware and/or services vs. the segment of companies that make their money selling software licenses.  That latter group pretty much ignores open source (or releases bits into open source when we don't want to support them ourselves).  Microsoft happens to be in the latter camp, and we are a big player... but we are far from unusual.  (Note: I include OSS vendors like Red Hat as services companies because, face it, you aren't paying for the operating system... you are paying for the support, and support is a service.)

    Oh, and I remember when the uber-geeks of yesterday went to PowerBuilder (and declared the death of VB), and then to Delphi (and again declared the death of VB), and then to EJB (and declared the death of everything).  Nothing happened.  Those platforms are not serious threats.  The uber-geeks don't have a great track record for picking winners.  I'm not worried. 


    Canonical Model, Canonical Schema, and Event Driven SOA


    One thing I've been thinking and talking about for the past few weeks is the relationship between four different concepts, a relationship that I didn't fully grasp at first but have become more convinced of as time wears on.  Those terms are:

    • Enterprise Canonical Data Model
    • Canonical Message Schema
    • Event Driven Architecture
    • Business Event Ontology

    I understood a general relationship between them, but as time has passed and I've been placing my mind directly in the space of delivering service oriented business applications, the meanings have crystallized and their relationship has become more important.  First, some definitions from my viewpoint.

    • Enterprise Canonical Data Model - The data we all agree on.  This is not ALL the data.  This is the data that we all need to agree on in order to do our business.  This is the entire model, as though the enterprise had one and only one relational database.  Of course, it is impossible for the enterprise to function with a single database.  So, in some respect, creating this model is an academic exercise.  Its usefulness doesn't become apparent until you add in the following concepts, so read on.
    • Canonical Message Schema - When we pass a message from one application to another, over a Service Oriented Architecture or in EDI or in a batch file, we pass a set of data between applications.  Both the sender and the receiver need a shared understanding of each field's (a) data type, (b) range of values, and (c) semantic meaning.  The first two we can handle with the service tools we have.  The third is far and away the hardest, and this is where most of the cost of point-to-point integration comes from: creating a consistent agreement between two applications about what the data MEANS and how it will be used. 
    • Event Driven Architecture - a style of application and system architecture characterized by the development of a set of relatively independent actors who communicate events amongst themselves in order to achieve a coordinated goal.  This can be done at the application level, the distributed system level, the enterprise level, and the inter-enterprise level (B2B and EDI).  I've used this at many levels.  It's probably my favorite model.  At the application level, I once participated in coding a component that interacted in an EDA application that ran in firmware on a high-speed modem.  At the system level, I helped design a system of messages and components that controls the creation of enterprise agreements.  At the enterprise level, I worked for numerous agencies, in my consulting days, to set up EDI transactions to share business messages between different business partners.
    • Business Event Ontology -- A reasonably complete list of business events, usually in a hierarchy, that represents the points in the overall business process where two "things" need to communicate or share.  I'm not referring to a single event, but rather to the entire list.  Note that a business event is not the same as a process step.  An event may trigger a process step, but the event itself is a "notification of something that has occurred," not the name of the process we follow next.

    I guess what escaped me, until recently, was how closely related these concepts really are.

    The way I'm approaching this starts from the business goal: use data to drive decisions.  Therefore, we need good data.  In order to have good data, we need to either integrate our applications or bring the data together at the end.   Either way, if the data is used consistently along the way, we will have a good data set to report from at the end. 

    To create that consistency, we need the Enterprise Canonical Data Model.  Creating this bird is not easy.  It requires a lot of work and executive buy-in.  Note that the process of creating this model can generate a lot of heated discussions, mostly about variations in business process.  Usually the only way to mitigate these discussions is to create a data model that contains either none of the variations between processes, or contains them all.  Neither direction is "more correct" than the other.

    However, in order to integrate the applications, either along the way or at the end of the data-generation processes, we need to use a particularly constrained definition of Canonical Schema: the Enterprise Canonical Message Schema is a subset of the Enterprise Canonical Data Model, representing the data, widely agreed to be useful, that we will pass between systems.  Note that we added a constraint over the definition above.  Not only are we sharing the data, but we are sharing data from the Enterprise CDM. 

    By constraining our message schema to the elements in the Enterprise Canonical Data Model, we radically reduce the cost of producing good data "at the end" because we will not generate bad data along the way.  The key word is "subset."  If you create a canonical schema without a canonical data model, you are building a house on sand.  The CDM provides the foundation for the schema, and creating the schema first is likely to cause problems later.
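    To make the "subset" rule concrete, here is a minimal sketch (in Python, with made-up field names) of the check a team could apply to a proposed message schema before accepting it:

```python
# Illustrative only: the field names are invented, not from any real model.
# A message schema is canonical only if every field it carries also
# appears in the enterprise canonical data model.

CANONICAL_DATA_MODEL = {
    "customer_id", "customer_name", "order_id", "order_total", "order_date",
}

def is_canonical(message_schema):
    # True when the schema draws all of its fields from the model (subset test).
    return set(message_schema) <= CANONICAL_DATA_MODEL

print(is_canonical(["customer_id", "order_id", "order_total"]))  # all fields in the model
print(is_canonical(["customer_id", "local_discount_code"]))      # one field outside the model
```

    A schema that fails the test is the "house on sand" case: a field with no agreed-upon meaning in the enterprise model.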

    Therefore, for my friends still debating if we should do SOA as a "code first" or "schema first" approach, I will say this: if you want to actually share the service, you have no choice but to create the service "schema first" and even then, only AFTER a sufficiently well understood part of the canonical data model is described and understood.

    And for my friends creating schemas that are not a subset of the overall model, time to resync with the overall model.  Let's get a single model that we all agree on as a necessary foundation for data integration.

    The next relationship is between the Canonical Message Schema and the Event Driven Architecture approach.  If you build your application so that you are sending messages, and you want to create autonomy between the components (goodness), you need to send data that has a well understood interpretation and as little "business rule baggage" as you can get away with.  What better place than the Canonical Data Model to get that understanding?  Now, this is no longer an academic exercise.  Creating the enterprise-level data model provides common understanding, so that these messages can have clear and consistent meaning.  That is imperative to the notion of Event Driven Architecture, where you are trying to keep the logic of one component from bleeding over into another. 
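    A toy illustration of that autonomy, with invented event and field names: independent handlers subscribe to named business events and receive only canonical message data, and the publisher knows nothing about who is listening.

```python
# Minimal publish/subscribe sketch. Event names and fields are
# illustrative, not from any real framework or canonical model.

subscribers = {}

def subscribe(event_name, handler):
    # Register a handler for a named business event.
    subscribers.setdefault(event_name, []).append(handler)

def publish(event_name, message):
    # The publisher knows nothing about who is listening; it only
    # sends canonical message data for the event.
    for handler in subscribers.get(event_name, []):
        handler(message)

assigned = []
subscribe("prospect_assigned", lambda msg: assigned.append(msg["prospect_id"]))
publish("prospect_assigned", {"prospect_id": 7, "owner": "sales-east"})
```

    Because the only coupling between components is the event name and the canonical message, no component's business rules bleed into another.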

    The business event ontology defines the list of events that will occur that require you to send data.  Creating an ontology requires that you understand the process well enough to generalize the process steps into common-held sharable events.  To get this, the data shared at the point of an event should be in the form of an Enterprise Canonical Message Schema.

    Therefore, to summarize the relationship:

       Business Events occur in a business, causing an application to send a Canonical Message to another application.  The Canonical Message Schema is a subset of the Canonical Data Model.  Event Driven Architecture is most efficient when you send a Canonical Message Schema message between components.  This provides you with more consistent data, which is better for creating a business intelligence data warehouse at the end.

    Some agility notes:

    The list of business events in a prospect ontology may include things like "receive prospect base information", "receive prospect extended information", "prospect questionnaire response received", "prospect (re)assigned", "prospect archived", "prospect matched to existing customer", "prospect assigned to marketing program," etc. It is not a list of process steps.  Just the events that occur as inputs or outputs.

    Clearly, this list can be created in iterations, but if it is, you need to make sure that you capture all of the events that surround a particular high-level process and not just focus on the technology.  In other words, the business processes of "qualify prospect" or "validate order" may have many business events associated with them, and those events may need to touch many applications and people.  If you decide to focus on "qualify prospect" first, then understand all of the events surrounding "qualify prospect" before moving on to "validate order," but if both processes hit your Customer Relationship Management system, focus on the process, not the system. 
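    One way to picture an ontology fragment for the examples above, as a sketch with illustrative names only: events are grouped under the high-level process they surround, so an iteration captures a whole process's events together.

```python
# Illustrative fragment of a business event ontology. Events are
# notifications grouped under the process they surround, not process
# steps. Names echo the prospect examples in the text.

EVENT_ONTOLOGY = {
    "qualify prospect": [
        "receive prospect base information",
        "prospect questionnaire response received",
        "prospect matched to existing customer",
    ],
    "validate order": [
        "order received",
        "order validation failed",
    ],
}

def events_for(process):
    # Capture all events surrounding one process before moving on.
    return EVENT_ONTOLOGY.get(process, [])
```

    An iteration scoped to "qualify prospect" would take `events_for("qualify prospect")` as its working set, regardless of how many systems those events touch.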



    Showing up can be the hardest part


    Not an architecture post, so if you are looking for technical content, skip this post.

    This week, I am in Nashville, Tennessee at the Gartner Application Architecture, Development and Integration conference and the Gartner Enterprise Architecture conference.  I'll post separately on the content, and ideas, that I'm going to adopt.  I may even disagree with an analyst or two (yikes!) but I'm really enjoying this content.  For those folks who work in Enterprise Architecture or in any derivation of strategic architecture, I heartily recommend this conference.

    Travel to get here is a story that I am compelled to tell, for the sheer red tape of it.

    Last year, I was going to come to the Gartner conference.  It was in San Diego and I had purchased tickets on Alaska Air.  I didn't get to go, so my ticket from Alaska Air was just sitting on my desk, waiting to be used.  This year, with the conference in Nashville, I called the travel agent and asked to pay the change fee to use it.  No go.  Alaska doesn't fly to Nashville, and their code-share partner, American Airlines, wasn't going to accept the ticket.  The agent told me that to use last year's ticket would cost $1,300.  To buy a new one was less than $500.  Clearly, it was cheaper to throw away last year's ticket!  That was 90 minutes I'm not getting back. 

    So I booked my flight on American Airlines.  It was not a direct flight; I would change planes in Houston.  Fortunately, I had only a 60-minute layover.  The travel site failed to register my frequent flier number, but I figured I'd take care of that at the airport. 

    So I got to Seattle-Tacoma airport about 75 minutes before flight time, normally plenty of time to catch a flight.  Except that this was Sunday, and the cruise ships had let off a huge group of travelers all wishing to return home.  The airport was packed.  It took nearly 45 minutes to check my bag and another 15 minutes to get through security.  I got to the gate just as they were due to begin boarding.  Whew.

    No boarding.  We just sat.  After a few minutes, the gate agent announced that the flight time was delayed by two hours.  There was a part not working in the cockpit of the plane.  The airline was calling other airlines to see if one of them had the part on hand (not kidding... they went begging for parts).  Many passengers just sat.  I decided not to sit.  I went to the gate agent and asked to move the connecting flight to a later flight.  That way, if I got to Houston late, I wouldn't miss my connection.  No problem.

    The agent promised to make an announcement in 20 minutes.  After 30 minutes, I figured they were going to cancel the flight and, wanting to get a jump on all the passengers who were now waiting in line at the gate desk, I called my travel agent and asked for another flight.  I had to cancel the entire round trip and rebook on Northwest Airlines.  Turns out the flight on Northwest was going to be cheaper anyway.  While I was on the phone, the American Airlines flight was cancelled.  100 cell phones lit up at once.  I already had my alternate ticket.  Good call. 

    However, I had to get my bags from baggage claim and go check in again at Northwest.  The flight was two hours away.  It would be close.

    Baggage claim didn't take long.  Maybe 20 minutes.  So I went back up to check in at Northwest.  Cruise traffic was even heavier, and since Northwest flies international out of Seattle, there were a LOT of folks in line.  The line was HUGE.  Almost an hour in that line.  My plane was about to board and my bags were finally on the belt.  Time to sprint to the flight...

    Oh, wait... security.  Again.

    This time, I had purchased the flight that day.  This time, I got the special treatment.  I got to be patted down and have my bag inspected.  So, five minutes before boarding began, I was begging the TSA agent to let me skip through the frequent flier line to get around an hour-long security line.  She took one look at my boarding pass, saw the SSSSSS that says "he's in for a fine time," and sent me through.

    TSA is great.  I love these guys.  I don't care what anyone else says.  They are professional, quick, thorough, and they keep me safer, by a long shot, than the patchwork quilt of security that was in place five years ago.  Thank God the Democrats didn't back down when Bush opposed creating the TSA.

    As efficient as they were, I got out of there in 10 minutes.  The flight was boarding... at the South gate.  I needed to ride a subway to get to the plane.  So I was sprinting to the subway station.  (Not a pretty sight.)  I had a Pepsi in my bag.  It leaked.  On my paperback book.  So here I was, running through the terminal, dripping brown soda in a steady stream behind me. 

    Got to the gate and checked in.  Got on the flight, panting and sweaty. 

    And then sat.  This flight had a mechanical problem too.  We sat for 40 minutes at the gate, in a hot plane, before they got it fixed.  Great.  I still had a connection, this time in Memphis.  The layover was, once again, an hour, and there were no later flights.  If I missed the connection, I'd be spending the night in Memphis.

    Got to Memphis.  I bolted out of the plane (leaving behind my windbreaker) and headed for the other flight at top speed, once again tearing through the terminal.  Got to the other gate... and no need to rush... that flight had been delayed for TWO HOURS.  The plane hadn't arrived in Memphis yet.


    The next flight arrived and we got to Nashville fine, but a trip that was supposed to last a few hours turned into an odyssey I won't soon forget. 
