Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

September, 2005

Posts

    Are Helper Classes Evil?

    • 15 Comments

    First off, a definition: A helper class is a class filled with static methods.  It is usually used to isolate a "useful" algorithm.  I've seen them in nearly every bit of code I've reviewed.  For the record, I consider the use of helper classes to be an antipattern.  In other words, an extraordinarily bad idea that should be avoided most of the time.
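    To make the shape concrete, here is a minimal sketch of the kind of class I mean (the names are mine, purely for illustration):

    using System;

    class StringHelper
    {
        // every member is static; callers never create an instance
        public static string Reverse(string input)
        {
            char[] chars = input.ToCharArray();
            Array.Reverse(chars);
            return new string(chars);
        }
    }

    // used without ever constructing an object:
    // string backwards = StringHelper.Reverse("hello");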

    What, you say?  Avoid Helper Classes!?!  But they are so useful!

    I say: they are nearly always an example of laziness.  (At this point, someone will jump in and say "but in Really Odd Situation ABC, There Is No Other Way" and I will agree.  However, I'm talking about normal IT software development in an OO programming language like C#, Java or VB.Net.  If you have drawn a helper class in your UML diagram, you have probably erred).

    Why laziness?  If I have to pick a deadly sin, why not gluttony? :-)

    Because most of us in the OO world came out of the procedural programming world, and the notion of functional decomposition comes easily to us.  When we come across an algorithm that doesn't seem to "fit" into our neat little object tree, rather than understand the needs, analyze where we can get the best use of the technology, and place the method accordingly, we just toss it into a helper class.  And that, my friends, is laziness.

    So what is wrong with helper classes?  I answer by falling back on the very basic principles of Object Oriented Programming.  These have been recited many times, in many places, but one of the best places I've seen is Robert Martin's article on the principles of OO.  Specifically, focus on the first five principles of class design. 

    So let's look at a helper class on the basis of these principles.  First, to knock off the easy ones: 

    Single Responsibility Principle -- A class should have one and only one reason to change -- You can design helper classes where all of the methods relate to a single set of responsibilities.  That is entirely possible.  Therefore, I would note that this principle does not conflict with the notion of helper classes at all.  That said, I've often seen helper classes that violate this principle.  They become "catch all" classes that contain any method that the developer can't find another place for.  (e.g. a class containing a helper method for URL encoding, a method for looking up a password, and a method for writing an update to the config file... such a class violates the Single Responsibility Principle).
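    To illustrate (the class and method names here are mine, purely hypothetical), the kind of grab-bag class I'm describing looks like this:

    class Utility
    {
       // three unrelated responsibilities in one class: changes to URL
       // handling, security, and configuration all land in the same file
       public static string UrlEncode(string url) { /* ... */ return url; }
       public static string LookupPassword(string userName) { /* ... */ return ""; }
       public static void WriteConfigSetting(string key, string value) { /* ... */ }
    }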

    Liskov Substitution Principle -- Derived classes must be substitutable for their base classes -- This is kind of a no-op, in that a helper class cannot have a derived class.  (Note: my definition of a helper class is that all members are static.)  OK.  Does that mean that helper classes violate LSP?  I'd say not.  A helper class loses the advantages of OO completely, and in that sense, LSP doesn't matter... but it doesn't violate it.

    Interface Segregation Principle -- Class interfaces should be fine-grained and client specific -- another no-op.  Since helper classes do not implement an interface, it is difficult to apply this principle with any degree of separation from the Single Responsibility Principle. 

    Now for the fun ones:

    The Open Closed Principle -- classes should be open for extension and closed for modification -- You cannot extend a helper class.  Since all methods are static, you cannot derive anything that overrides them.  In addition, the code that uses it doesn't create an object, so there is no way to substitute a child object that modifies any of the algorithms in a helper class.  They are all "unchangeable".  As such, a helper class simply fails to provide one of the key aspects of object oriented design: the ability for the original developer to create a general answer, and for another developer to extend it, change it, make it more applicable.  If you assume that you do not know everything, and that you may not be creating the "perfect" class for every person, then helper classes will be anathema to you.
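    A small sketch of the difference (the names are mine): the static version is sealed shut, while the instance version leaves the door open.

    class HFormatter
    {
       public static string Format(string value) { return value.Trim(); }
    }
    // No derived class can override Format: static methods are not virtual,
    // and every caller is bound to HFormatter.Format at compile time.

    class Formatter
    {
       public virtual string Format(string value) { return value.Trim(); }
    }
    class QuotedFormatter : Formatter
    {
       // another developer extends the general answer without touching it
       public override string Format(string value)
       {  return "\"" + base.Format(value) + "\"";
       }
    }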

    The Dependency Inversion Principle -- Depend on abstractions, not concrete implementations -- This is a simple and powerful principle that produces more testable code and better systems.  If you minimize the coupling between a class and the classes that it depends upon, you produce code that can be used more flexibly, and reused more easily.  However, a helper class cannot participate in the Dependency Inversion Principle.  It cannot implement an interface, nor inherit from a base class.  Callers never create an object, so there is no seam where a different implementation could be swapped in.  This is the "partner" of the Liskov Substitution Principle, but while helper classes do not violate the LSP, they do violate the DIP. 

    Based on this set of criteria, it is fairly clear that helper classes fail to work well with two out of the five fundamental principles that we are trying to achieve with Object Oriented Programming. 

    But are they evil?  I was being intentionally inflammatory.  If you read this far, it worked.  I don't believe that software practices qualify in the moral sphere, so there is no such thing as evil code.  However, I would say that any developer who creates a helper class is causing harm to the developers that follow. 

    And that is no help at all.


    Killing the Helper class, part two

    • 11 Comments

    Earlier this week, I blogged on the evils of helper classes.  I got a few very thoughtful responses, and I wanted to try to address one of them.  It is far easier to do that with a new entry than by trying to respond in the messages.

    If you didn't read the original post, I evaluated the concept of the helper class from the standpoint of one set of good principles for Object Oriented development, as described by Robert Martin, a well-respected author and speaker.  While I don't claim that his description of OO principles is "the only valid description as anointed by Prophet Bob", I find it extremely useful and one of the more lucid descriptions of fundamental OO principles available on the web.  That's my caveat.

    The response I wanted to address is from William Sullivan, and reads as follows:

    I can think of one case where helper classes are useful... Code re-use in a company. For instance, a company has a policy on how its programs will access and write to the registry. You wouldn't want some products in the company saving its data in HKLM/Software/CompanyName/ProductName and some under .../Software/ProductName and some .../"Company Name"/"Product Name". So you create a "helper class" that has static functions for accessing data in the registry. It could be designed to be instantiatable and extendable, but what would be the advantage? Another class could implement the companies' policy on encryption, another for database access;[clip]

    If you recall, my definition of a helper class is one in which all of the methods are static.  It is essentially a set of procedures that are "outside" the object structure of the application or the framework.  My objections were that classes of this nature violate two of the principles: the Open Closed Principle and the Dependency Inversion Principle.

    So, let's look at what a company can do to create a set of routines like William describes. 

    Let's say that a company (Fabrikam) produces a dozen software systems.  One of them, for our example, is called "Enlighten".  So the standard location for accessing data in the registry would be under HKLM/Software/Fabrikam/Enlighten.  Let's look at two approaches: one using a helper class and one using an instantiated object:

    class HSettings
    {
       // all members are static: callers never create an HSettings object
       public static String GetKey(String ProductName, String Subkey)
       {   // -- interesting code that reads the registry
           return null;
       }
    }

    class FSettings
    {
         private string _ProductName;
         public FSettings (String ProductName)
         {   _ProductName = ProductName;
         }
         public String GetKey(String Subkey)
         {  // nearly identical code, using _ProductName
            return null;
         }
    }

    Calling the FSettings object may look like it takes a little more effort:

    public String MyMethod()
    {   FSettings fs = new FSettings("Enlighten");
        string Settingvalue = fs.GetKey("Mysetting");
        //Interesting code using Settingvalue
        return Settingvalue;
    }

    as compared to:

    public String MyMethod()
    {   string Settingvalue = HSettings.GetKey("Enlighten","Mysetting");
        //Interesting code using Settingvalue
        return Settingvalue;
    }

    The problem comes in unit testing.  How do you test the method MyMethod in such a way that you can find defects in the 'Interesting Code' without also relying on any frailties of the code buried in our settings object?  Also, how do you test this code when there is no setting in the registry at all?  Can we test on the build machine?  This is a common problem with unit testing: how to test the UNIT of functionality without also testing its underlying dependencies. 

    When a function depends on another function, you cannot easily find out where a defect is causing you a problem.  A defect in the dependency can cause a defect in the relying code.  Even the most defensive programming won't do much good if the dependent code returns garbage data.

    If you apply the Dependency Inversion Principle, you can get code that is a lot less frail, and is easily testable.  Let's refactor our "FSettings" object to implement an interface.  (This is not something we can do for the HSettings class, because it is a helper class.)


    interface ISettings
    {
         String GetKey(String Subkey);
    }
    class FSettings : ISettings // and so on

    Now, we refactor our calling code to use Dependency Injection:

    public class MyStuff
    {
        private ISettings _fs;

        public MyStuff()
        {
            _fs = new FSettings("Enlighten");   // the default dependency
        }

        public void SetSettingsObject(ISettings ifs)
        {
            _fs = ifs;   // override the dependency (e.g. with a mock)
        }

        public String MyMethod()
        {   string Settingvalue = _fs.GetKey("Mysetting");
            //Interesting code using Settingvalue
            return Settingvalue;
        }
    }

    Take note: the code in MyMethod now looks almost identical to the code that we proposed for using the static methods.  The difference is important, though.  First off, we separate the creation of the dependency from its use by moving the creation into the constructor.  Secondly, we provide a mechanism to override the dependent object.

    In practical terms, the code that calls MyMethod won't care.  It still has to create a MyStuff object and call the MyMethod method.  No parameters changed.  The interface is entirely consistent.  However, if we want to unit test the MyMethod method, we now have a powerful tool: the mock object.

    class MockSettings : ISettings
    {
         public MockSettings (String ProductName)
         {   if (ProductName != "Enlighten")
            throw new ApplicationException("invalid product name");
         }
         public String GetKey(String Subkey)
         {  return "ValidConnectionString";
         }
    }

    So, our normal code remains the same, but when it comes time to TEST our MyMethod method, we write a test fixture (a method in a special class that does nothing but test the method). In the test fixture, we use the mock object:

    class MyTestFixture
    {
         public void Test1 ()
         {   MyStuff ms = new MyStuff();
             MockSettings mock = new MockSettings("Enlighten");
             ms.SetSettingsObject(mock);
                // now the code will use the mock, not the real one.
             ms.MyMethod();
             // call the method... any exceptions?
         }
    }

    What's special about a test fixture?  If you are using NUnit or Visual Studio's unit testing framework, any exceptions thrown during the test are caught and reported as failures for you.
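    For instance, with NUnit 2.x the fixture above would be decorated with attributes, and an unexpected exception inside MyMethod fails the test automatically.  A sketch, assuming the classes defined earlier (the test name is mine):

    using NUnit.Framework;

    [TestFixture]
    public class MyStuffTests
    {
        [Test]
        public void MyMethod_WorksAgainstMockSettings()
        {   MyStuff ms = new MyStuff();
            ms.SetSettingsObject(new MockSettings("Enlighten"));
            string result = ms.MyMethod();
            Assert.IsNotNull(result);   // an exception above has already failed the test
        }
    }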

    This powerful technique is only possible because I did not use a static helper class when I wanted to look up the settings in my registry.

    Of course, not everyone will write code using unit testing.  That doesn't change the fact that it is good practice to separate the construction of an object from its use.  (See Scott Bain's article on Use vs. Creation in OO Design.)  It also doesn't change the fact that this useful construction, simple to do if you started with a real object, requires far more code change if you had started with a helper class.  In fact, if you had started with a helper class, you may be tempted to avoid unit testing altogether. 

    I don't know about you, but I've come across far too much code that needed to be unit tested, but where adding the unit tests would involve restructuring the code.  Do yourself, and the next programmer behind you, a huge favor: simply use a real object from the start.  You will earn "programmer's karma" and may inherit some of that well-structured code as well.  If everyone would simply follow "best practices" (even when you can't see why they're useful in a particular case), then we would be protected from our own folly most of the time.

    So, coming back to William's original question: "it could be designed to be instantiable and extendable, but what's the advantage?"

    The advantage is that when it comes time to prove that the calling code works, you have not prevented the use of good testing practices by forcing the developer to use a static helper class, whether he wanted to or not. 


    Ajax and SOAP, again

    • 7 Comments

    I'm flattered by all the attention my statements comparing Ajax with SOA web services are getting.  Another response popped up overnight: Dare Obasanjo, with the statement: "This is probably one of the most bogus posts I've ever seen written by a Microsoft employee."

    So first off, a disclaimer: I'm an employee of Microsoft, but I do not speak for the company in any official capacity.  That said... my turn...

    With all due respect to Mr. Obasanjo, a service that delivers data to a front end (whether it is for use by an Ajax page or a small rich-client app) is not a SOA web service.  I hate to have to point out the obvious, but alas, I must.  That is my point.  The fact that Mr. Obasanjo missed that point is what led to the statement above.  I am not saying that Ajax cannot use SOAP.  I am not saying that Ajax should use WS-*.  I am not saying that lightweight services as used by front ends are "bad" or "not really important."  I am simply saying that they have nothing to do with SOA.

    His example is that, on his site, there is a web service that he uses to display movies in the Seattle area.  It returns XML that his Ajax page formats and displays.  Kudos. 

    Now let's look at Service Oriented Architecture.  SOA is not really an application-level concept.  It is an EAI-level concept.  SOA is not used to make front ends talk to their back ends.  Web services can be used for this, but as I have pointed out before, simply using web services does not mean you are using Service Oriented Architecture. 

    Let's look at Service Oriented Architecture for a moment. Actually read the paper I reference.  You'll notice some statements that completely contradict the view that Ajax plays in the SOA space.  Excerpts below:

    • Precise, published specification of the functionality of the service interface, not the implementation.
    • Formal contract between endpoints places obligations on provider and consumer.
    • Functionality presented at a granularity recognized by the user as a meaningful service.

    From their description, it is clear that a service that is so finely-tuned as to be useful for a front end is unlikely to be useful as a SOA service.  My statement is that, in fact, it would not be useful.  This is because, in a SOA environment, the transactions that pass between systems need to be encapsulated, fully self-describing, secure, reliable, and related to the business process in which they are involved.  This is simply too much overhead for consumption by a front-end.
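    To make the granularity difference concrete, here is a hypothetical contrast (both interfaces are invented for illustration, not taken from the paper):

    // fine-grained, front-end service: one small question, one small answer,
    // tuned to what a single page needs right now
    interface IMovieLookup
    {
        string GetShowtimesXml(string zipCode);
    }

    // coarse-grained SOA service: the message is a complete, self-describing
    // business document carrying the context two systems need to interoperate
    class PurchaseOrderDocument { /* header, line items, identity, correlation id */ }
    class OrderAcknowledgement  { /* status, correlation id */ }

    interface IOrderProcessing
    {
        OrderAcknowledgement SubmitPurchaseOrder(PurchaseOrderDocument order);
    }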

    Therefore, Ajax interfaces, while useful from the front end standpoint, do not need to be managed from the standpoint of discoverability, transaction management, workflow, business rules, routing, or any of the other aspects of enterprise architecture that must be applied in a SOA environment.  The original post that I objected to maintained that Ajax services would need to be managed in this way and, in fact, would tax IT departments because these services will be frequently used.  That was the disagreement that Mr. Obasanjo failed to recognize.

    My position remains unchanged: Ajax interfaces escape from this level of scrutiny because they are not used to connect systems to each other... they are used to connect the front-end to the back-end. 

    And that isn't SOA.


    Coding Dojo concept: one kata for each common design pattern

    • 2 Comments

    Time to combine two basic ideas: the idea of the coding dojo and the idea of design patterns as an essential element of development training.

    For those of you who haven't seen my previous posts on a coding dojo, the concept is that a karate dojo is a safe place to come and practice elemental skills in a supportive but corrective environment.  The karate master presents problems and assists as each student practices and demonstrates their skills at solving the problem repeatedly.  These "problems" to be solved repeatedly, formally, are the katas.

    So, you come to a meeting once or twice a month to get together with other developers.  You work in a pair.  You get a problem statement and a set of unit tests to start.  Your job: meet the needs of the app by getting the unit tests to pass.

    One pair works on the projector. 

    I also believe that we are well served by practicing the basic design patterns.  Things like strategy, facade, decorator, bridge, observer, and chain of responsibility.  These basic structures are worth practicing.  We improve our understanding of OO code simply by following the kata.  Practice.  Hone.  Concentrate.

    So, if we combine the two, perhaps that would be better.  What if we create 10 kata for each of the basic design patterns and a couple of architectural patterns?  Order them at random.  Practice.  Hone.  Concentrate.  Improve.
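    As a sketch of what one such kata might look like (the scenario and the numbers are invented): the pair receives an interface and a failing test, and the exercise is to make the test pass with the Strategy pattern.

    using NUnit.Framework;

    // the kata hands the students this much...
    interface IShippingStrategy
    {
        decimal Calculate(decimal weightKg);
    }

    [TestFixture]
    public class ShippingKataTests
    {
        [Test]
        public void GroundShipping_ChargesPerKilogram()
        {   IShippingStrategy strategy = new GroundShipping();
            Assert.AreEqual(5.0m, strategy.Calculate(2.0m));
        }
    }

    // ...and the pair writes this part at the projector
    class GroundShipping : IShippingStrategy
    {
        public decimal Calculate(decimal weightKg) { return weightKg * 2.5m; }
    }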

    This idea could have some legs.  Hmmmmm......


    Developer accountability? How about PM accountability!

    • 2 Comments

    There's some current talk in the Agile community about making developers more accountable in the agile process.  Apparently, the problem is that developers will commit to delivering too many features in an iteration, and if they slip on the features, they say "so what... we'll do that in the next iteration."  That is laziness, plain and simple.

    I'm actually going to side-step that issue completely.  That issue happens, and folks are talking about it.  Good.  However, I want to add another issue to the table: project managers who are incompetent, and then blame the dev team for failure.

    I'm ranting here folks.  This is a reaction to the notion that we should hold developers accountable (which I agree with) in a community that doesn't recognize the role of a PM.  However, agile evangelists aside, the rest of us have to live with project managers, and they can be a tremendous asset or a massive liability. 

    Here are some of the counter-productive behaviors I've observed. 

    Focusing on tasks, not functionality - where do I start?  I could rant for days on this self-defeating pattern.  I have seen nearly every project manager fall into this mentality during project work, some for a day or so, others for the duration of the project.  Shaking the really egregious ones by the collar and screaming may not help (except that you may feel better for 20 seconds or so), because this is a mentality.  Some PMs think that focusing on tasks is the RIGHT thing to do, and it is not.  The customer doesn't want a task to be completed.  They want functionality to be delivered. 

    The net effect of this focus on tasks: marking a task complete (and rewarding folks for it) when a developer says "it is done" without demonstrating functionality, quality, unit tests, or compliance to standards and interfaces.  The PM is the enforcer who must understand that the customer buys these things, and if the PM doesn't ensure that these things are in the product, they will not be.

    Focusing on process instead of people - This is a novice mistake, but I've seen it a lot in high-ceremony environments like PSP/TSP and XP.  The process is important.  It is how everyone comes together on the problem space.  But the people are important too.  Leave room to bend the rules if the people will benefit.  Make a connection to the individuals.  Listen to their needs.  Understand their schedules.  If a person needs to leave at 4:30 to pick their kids up from daycare, don't schedule the team meeting at 4pm! 

    Counting yourself outside or above the delivery team - People are not perfect.  When the list of tasks is estimated at the beginning of a project, it will not be correct.  There will be tasks missed from the plan.  There will be questions that need to be answered "out of order."  There will be times when you really need to get the customer in the room, even if it makes you look bad, because the developers don't have what they need.  Swallow your pride.  You are the servant of the project.  The developers are not your master, but if the developers need something, and the alternative is a sacrifice of quality or time or design, then jump.  That's your cue to really shine.  Be a part of the team.  It's your delivery too.

    Celebrating completion instead of correctness - You are responsible for a large part of the culture of a project team.  You can set the tone for how people communicate, how they share, and how they feel when a release goal is hit.  Change your outlook away from "milestones of activity" and towards "user acceptance" by throwing a celebration for the team when the user is satisfied with something.  In one shop I was in, we would put a stuffed monkey in the cube of the person who had achieved acceptance.  It would move frequently.  In another, we decorated the doors, or handed out colorful banners.  Building joy around correctness and acceptance will build a culture of quality and team camaraderie. 

    Failing to understand the mathematics of human achievement - I saw a project team filled with young developers commit to achieving 8 productive hours a day on a highly visible, time-critical, three-month project.  When I heard this, I yelled.  I went to dev management (I wasn't on the team) and complained.  I went to the PM.  I even went to the customer and said that this commitment was absurd.  All simply said "the developers believe it is possible and we will hold them accountable if they don't meet it."  WRONG.  If you agree to do the impossible (jump onto a train moving at 90 miles per hour), I would be a FOOL for saying "I'll hold you responsible if you don't."  The impossible is still the impossible.  A 200% improvement in productivity over the average, sustained for three months, is impossible.  You have to know enough to know when someone is making an absurd commitment, and you have to stop them.  It is your project too.  Blame cannot be allowed to roll downhill when this kind of mistake is made. 

    So, folks, if we want to hold a conversation about holding developers accountable, let's also hold a conversation about holding project managers accountable.  Let's find a way to measure the PM as well, and let's hold them accountable for failing these points.  It is their responsibility too.


    Is placing the assembly in the database next?

    • 2 Comments

    In SAP implementations, the ABAP code that performs the functions of various business processes is stored in the database, or so I'm told.  I was having a discussion a few days back with an architect who works closely with SAP, and has for years.  His take: now that SQL Server Yukon can call .NET code easily, why not start placing bytecode directly into the database?

    The advantages are interesting:  the deployment of a system is a matter of placing data into a database.  Version control takes on a whole new meaning for folks not used to this concept.  You can create a utility for placing a version into production, and if there is a problem, rolling back to a prior version is a matter of flagging records in the database as "inactive" or deleting them.  Database backups are also system backups.  Deployment requires data movement utilities, but can happen to multiple locations easily.

    This doesn't mean that the assemblies have to run on the database server.  SAP uses application servers extensively.  The code stored in the database could be installed to the app server and could run there without difficulty.  Kind of "database driven one click deployment".
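    A minimal sketch of what the loading side might look like, assuming a hypothetical Assemblies table with Name, Active, and Bytes (varbinary) columns.  Assembly.Load(byte[]) accepts the raw image, so the app server never needs the file on disk:

    using System.Data.SqlClient;
    using System.Reflection;

    class AssemblyStore
    {
        public static Assembly LoadActive(string connectionString, string name)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "SELECT Bytes FROM Assemblies WHERE Name = @name AND Active = 1",
                    conn);
                cmd.Parameters.AddWithValue("@name", name);
                byte[] image = (byte[])cmd.ExecuteScalar();
                // rolling back a version is just flipping the Active flag to an older row
                return Assembly.Load(image);
            }
        }
    }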

    It's an interesting idea.  I'm sure there's an article in there for someone who wants to write it up for one of the dev magazines: how to put your assemblies into SQL Server and call them. 
