Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

Posts
  • Killing the Helper class, part two

    • 17 Comments

Earlier this week, I blogged on the evils of helper classes.  I got a few very thoughtful responses, and I wanted to try to address one of them.  It is far easier to do that with a new entry than by trying to respond in the comments.

If you didn't read the original post, I evaluated the concept of the helper class from the standpoint of one set of good principles for Object Oriented development, as described by Robert Martin, a well-respected author and speaker.  While I don't claim that his description of OO principles is "the only valid description as anointed by Prophet Bob", I find it extremely useful and one of the more lucid descriptions of fundamental OO principles available on the web.  That's my caveat.

    The response I wanted to address is from William Sullivan, and reads as follows:

    I can think of one case where helper classes are useful... Code re-use in a company. For instance, a company has a policy on how its programs will access and write to the registry. You wouldn't want some products in the company saving its data in HKLM/Software/CompanyName/ProductName and some under .../Software/ProductName and some .../"Company Name"/"Product Name". So you create a "helper class" that has static functions for accessing data in the registry. It could be designed to be instantiatable and extendable, but what would be the advantage? Another class could implement the companies' policy on encryption, another for database access;[clip]

If you recall, my definition of a helper class is one in which all of the methods are static.  It is essentially a set of procedures that live "outside" the object structure of the application or the framework.  My objections were that classes of this nature violate two of the principles: the Open Closed Principle and the Dependency Inversion Principle.

    So, let's look at what a company can do to create a set of routines like William describes. 

    Let's say that a company (Fabrikam) produces a dozen software systems.  One of them, for our example, is called "Enlighten".  So the standard location for accessing data in the registry would be under HKLM/Software/Fabrikam/Enlighten.  Let's look at two approaches: one using a helper class and one using an instantiated object:

class HSettings
{
   public static String GetKey(String ProductName, String Subkey)
   {   // -- interesting code: builds HKLM/Software/Fabrikam/<ProductName>
       //    and reads <Subkey> from it
       return "";  // registry lookup elided
   }
}

class FSettings
{
     private string _ProductName;
     public FSettings (String ProductName)
     {   _ProductName = ProductName;
     }
     public String GetKey(String Subkey)
     {  // nearly identical code, using the stored _ProductName
        return "";  // registry lookup elided
     }
}

Calling the FSettings object may look like a little more effort:

public String MyMethod()
{   FSettings fs = new FSettings("Enlighten");
    string Settingvalue = fs.GetKey("Mysetting");
    // Interesting code using Settingvalue
    return Settingvalue;
}

    as compared to:

public String MyMethod()
{   string Settingvalue = HSettings.GetKey("Enlighten","Mysetting");
    // Interesting code using Settingvalue
    return Settingvalue;
}

The problem comes in unit testing.  How do you test the method "MyMethod" in such a way that you can find defects in the 'Interesting Code' without also relying on any frailties of the code buried in our settings object?  Also, how do you test this code when there is no setting in the registry at all?  Can we test on the build machine?  This is a common problem with unit testing: how do you test the UNIT of functionality without also testing its underlying dependencies?

When a function depends on another function, you cannot easily find out where a defect is causing you a problem.  A defect in the dependency can cause a defect in the relying code.  Even the most defensive programming won't do much good if the code you depend on returns garbage data.

If you apply the Dependency Inversion Principle, you get code that is a lot less fragile and is easily testable.  Let's refactor our FSettings class to implement an interface.  (This is not something we can do for the HSettings class, because it is a helper class.)


interface ISettings
{
     String GetKey(String Subkey);
}

class FSettings : ISettings // and so on

    Now, we refactor our calling code to use Dependency Injection:

public class MyStuff
{
    private ISettings _fs;

    public MyStuff()
    {   _fs = new FSettings("Enlighten");
    }

    public void SetSettingsObject(ISettings ifs)
    {   _fs = ifs;
    }

    public String MyMethod()
    {   string Settingvalue = _fs.GetKey("Mysetting");
        // Interesting code using Settingvalue
        return Settingvalue;
    }
}

Take note: the code in MyMethod now looks almost identical to the code that we proposed for using the static methods.  The difference is important, though.  First, we separate the creation of the dependency from its use by moving the creation into the constructor.  Second, we provide a mechanism to override the dependent object.

In practical terms, the code that calls MyMethod won't care.  It still has to create a 'MyStuff' object and call the MyMethod method.  No parameters changed.  The interface is entirely consistent.  However, if we want to unit test the MyMethod method, we now have a powerful tool: the mock object.

    class MockSettings : ISettings
    {
         public MockSettings (String ProductName)
         {   if (ProductName != "Enlighten")
            throw new ApplicationException("invalid product name");
         }
         public String GetKey(String Subkey)
         {  return "ValidConnectionString";
         }
    }

So, our normal code remains the same, but when it comes time to TEST our MyMethod method, we write a test fixture (a special class whose methods do nothing but test our code).  In the test fixture, we use the mock object:

class MyTestFixture
{
     public void Test1 ()
     {   MyStuff ms = new MyStuff();
         MockSettings mock = new MockSettings("Enlighten");
         ms.SetSettingsObject(mock);
         // now the code will use the mock, not the real settings object
         ms.MyMethod();
         // call the method... any exceptions?
     }
}

What's special about a test fixture?  If you are using NUnit or Visual Studio's unit testing framework, then any exceptions thrown during the test are caught and reported as failures for you.
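For example, with NUnit the fixture might be decorated like this (a minimal sketch of my own; the attribute usage is standard NUnit, but the test name and the final assertion are my additions, and they assume the MockSettings class above):

using NUnit.Framework;

[TestFixture]
public class MyStuffTests
{
    [Test]
    public void MyMethodUsesInjectedSettings()
    {   MyStuff ms = new MyStuff();
        ms.SetSettingsObject(new MockSettings("Enlighten"));

        // If MyMethod throws, NUnit catches the exception and fails the test.
        string result = ms.MyMethod();

        // The mock always returns this value, so we can assert on it.
        Assert.AreEqual("ValidConnectionString", result);
    }
}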

    This powerful technique is only possible because I did not use a static helper class when I wanted to look up the settings in my registry.

Of course, not everyone will write code using unit testing.  That doesn't change the fact that it is good practice to separate the construction of an object from its use.  (See Scott Bain's article on Use vs. Creation in OO Design.)  It also doesn't change the fact that this useful construction, simple to do if you start with a real object, requires far more code change if you start with a helper class.  In fact, if you had started with a helper class, you might be tempted to avoid unit testing altogether.

I don't know about you, but I've come across far too much code that needed to be unit tested, but where adding the unit tests would involve restructuring the code.  Do yourself, and the next programmer behind you, a huge favor: simply use a real object from the start.  You will earn "programmer's karma" and may inherit some of that well-structured code as well.  If everyone would simply follow best practices (even when you can't see why it's useful in a particular case), then we would be protected from our own folly most of the time.

So, coming back to William's original question: "It could be designed to be instantiatable and extendable, but what would be the advantage?"

The advantage is that when it comes time to prove that the calling code works, you have not prevented the use of good testing practices by forcing the developer to use a static helper class, whether he wanted to or not.

  • Coding Dojo concept: one kata for each common design pattern

    • 2 Comments

    Time to combine two basic ideas: the idea of the coding dojo and the idea of design patterns as an essential element of development training.

    For those of you who haven't seen my previous posts on a coding dojo, the concept is that a karate dojo is a safe place to come and practice elemental skills in a supportive but corrective environment.  The karate master presents problems and assists as each student practices and demonstrates their skills at solving the problem repeatedly.  These "problems" to be solved repeatedly, formally, are the katas.

    So, you come to a meeting once or twice a month to get together with other developers.  You work in a pair.  You get a problem statement and a set of unit tests to start.  Your job: meet the needs of the app by getting the unit tests to pass.

    One pair works on the projector. 

    I also believe that we are well served by practicing the basic design patterns.  Things like strategy, facade, decorator, bridge, observer, and chain of responsibility.  These basic structures are worth practicing.  We improve our understanding of OO code simply by following the kata.  Practice.  Hone.  Concentrate.
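To make that concrete, a strategy kata might start from a skeleton like this (a sketch of my own; the shipping example and all of the names are invented for illustration):

interface IShippingStrategy
{
    decimal CalculateCost(decimal weight);
}

class GroundShipping : IShippingStrategy
{
    public decimal CalculateCost(decimal weight)
    {   return 5.00m + (0.50m * weight);
    }
}

class AirShipping : IShippingStrategy
{
    public decimal CalculateCost(decimal weight)
    {   return 12.00m + (1.25m * weight);
    }
}

class ShippingCalculator
{
    private IShippingStrategy _strategy;
    public ShippingCalculator(IShippingStrategy strategy)
    {   _strategy = strategy;
    }
    // The calculator neither knows nor cares which algorithm it holds.
    public decimal Quote(decimal weight)
    {   return _strategy.CalculateCost(weight);
    }
}

The kata would hand you the failing unit tests and ask you to produce this shape, again and again, until the structure is second nature.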

    So, if we combine the two, perhaps that would be better.  What if we create 10 kata for each of the basic design patterns and a couple of architectural patterns?  Order them at random.  Practice.  Hone.  Concentrate.  Improve.

    This idea could have some legs.  Hmmmmm......

  • Why Ajax can be safely ignored for a SOA adoption program

• 1 Comment

    While it is interesting that a wide variety of consulting and product companies have tried to brand themselves as "the" experts on Service Orientation, there are a few examples of good sites that, although sharing corporate sponsorship, managed to describe SOA principles in a way that is fairly neutral.  The important thing to remember, even when using these sites, is that the opinions expressed in them are not standard, even if well described. 

Therefore, when a recent exchange between Dion Hinchcliffe and me got rolling, Mr. Hinchcliffe pointed to a nice site at serviceorientation.org and stated that interoperability is not one of the SOA principles, and that my argument could therefore be dismissed.  The two problems with this argument are, of course, (a) that the principles on the site do not represent consensus, and (b) that interoperability is specifically required by one of the principles on the site (the service contract).

    The core disagreement is on this point: does an enterprise that is implementing a SOA environment need to be concerned about the use of Ajax tools?  Mr. Hinchcliffe asserts that Ajax tools will use services, and therefore will drive the implementation of an SOA environment.  My assertion is that Ajax tools will use fine-grained application interfaces, not re-usable services, and therefore will not have any effect, positive or negative, on the implementation of a SOA environment.

    The reason for this is simple: Ajax is too light-weight to play in the SOA world.  Ajax controls cannot meet or enforce a contract.  Ajax controls cannot use discovery protocols.  They must be tightly coupled with their services due to many considerations, including browser-enforced data security, in addition to the lack of discovery capabilities.  Ajax cannot compose a composable service request.  All Ajax requests will be simple, by nature. 

    The requirements for an Ajax interface are speed of execution, small size of response, and very specific interaction behavior.  Loose coupling is not a requirement for Ajax services.  I would state that loose coupling is nearly an impossibility for Ajax interfaces.

The requirements for a web service are reliability, compliance to contract, loose coupling (in the sense of coding to contract and service discoverability), and services provided at the level of composability.  This last one is the most important point.  A composable service is one that the business can understand as being composed of atomic units of functionality.  The problem with the notion of an Ajax site consuming an enterprise web service is that those atomic units are TOO BIG to be useful at the front end.  In other words, the smallest unit of composition in a well-designed service is still too coarse for the Ajax site to use.

    In conclusion: it is completely safe to assume that Ajax sites will not consume enterprise web services.

  • Are Helper Classes Evil?

    • 25 Comments

    First off, a definition: A helper class is a class filled with static methods.  It is usually used to isolate a "useful" algorithm.  I've seen them in nearly every bit of code I've reviewed.  For the record, I consider the use of helper classes to be an antipattern.  In other words, an extraordinarily bad idea that should be avoided most of the time.

    What, you say?  Avoid Helper Classes!?!  But they are so useful!

    I say: they are nearly always an example of laziness.  (At this point, someone will jump in and say "but in Really Odd Situation ABC, There Is No Other Way" and I will agree.  However, I'm talking about normal IT software development in an OO programming language like C#, Java or VB.Net.  If you have drawn a helper class in your UML diagram, you have probably erred).

    Why laziness?  If I have to pick a deadly sin, why not gluttony? :-)

Because most of us in the OO world came out of the procedural programming world, where functional decomposition is second nature.  When we come across an algorithm that doesn't seem to "fit" into our neat little object tree, we drop back to it: rather than understand the needs, analyze where the technology can best be applied, and place the method accordingly, we just toss the algorithm into a helper class.  And that, my friends, is laziness.

    So what is wrong with helper classes?  I answer by falling back on the very basic principles of Object Oriented Programming.  These have been recited many times, in many places, but one of the best places I've seen is Robert Martin's article on the principles of OO.  Specifically, focus on the first five principles of class design. 

    So let's look at a helper class on the basis of these principles.  First, to knock off the easy ones: 

Single Responsibility Principle -- A class should have one and only one reason to change -- You can design helper classes where all of the methods relate to a single set of responsibilities.  That is entirely possible.  Therefore, I would note that this principle does not conflict with the notion of helper classes at all.  That said, I've often seen helper classes that violate this principle.  They become "catch all" classes that contain any method the developer can't find another place for.  (e.g. a class containing a helper method for URL encoding, a method for looking up a password, and a method for writing an update to the config file... this class would violate the Single Responsibility Principle.)
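For example, the kind of "catch all" class I mean might look like this (a hypothetical sketch; the class and method names are invented):

// Three unrelated responsibilities: three unrelated reasons to change.
class Utility
{
    public static string UrlEncode(string url)
    {   return System.Uri.EscapeDataString(url);  // web concern
    }
    public static string LookupPassword(string userName)
    {   return "";  // security concern (lookup elided)
    }
    public static void WriteConfigSetting(string key, string value)
    {   // configuration concern (file write elided)
    }
}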

Liskov Substitution Principle -- Derived classes must be substitutable for their base classes -- This is kind of a no-op, in that a helper class cannot have a derived class.  (Note: my definition of a helper class is that all members are static.)  OK.  Does that mean that helper classes violate LSP?  I'd say not.  A helper class loses the advantages of OO completely, and in that sense LSP doesn't matter... but it doesn't violate it.

Interface Segregation Principle -- Class interfaces should be fine-grained and client specific -- Another no-op.  Since helper classes do not implement interfaces, it is difficult to apply this principle with any degree of separation from the Single Responsibility Principle.

    Now for the fun ones:

The Open Closed Principle -- classes should be open for extension and closed for modification -- You cannot extend a helper class.  Since all methods are static, you cannot derive anything from it.  In addition, the code that uses it doesn't create an object, so there is no way to create a child object that modifies any of the algorithms in the helper class.  They are all "unchangeable".  As such, a helper class simply fails to provide one of the key aspects of object-oriented design: the ability for the original developer to create a general answer, and for another developer to extend it, change it, and make it more applicable.  If you assume that you do not know everything, and that you may not be creating the "perfect" class for every person, then helper classes will be anathema to you.
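To illustrate the contrast (a sketch of my own, not from the original post):

// With an instance class, a later developer can extend the behavior...
class SettingsReader
{
    public virtual string GetKey(string subkey)
    {   return "";  // default registry lookup (elided)
    }
}

class CachedSettingsReader : SettingsReader
{
    public override string GetKey(string subkey)
    {   // add caching, logging, or a different store,
        // all without touching the original class
        return base.GetKey(subkey);
    }
}

// ...but a static helper offers no such seam: a method like
// HSettings.GetKey cannot be overridden or replaced by a subclass.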

The Dependency Inversion Principle -- Depend on abstractions, not concrete implementations -- This is a simple and powerful principle that produces more testable code and better systems.  If you minimize the coupling between a class and the classes it depends upon, you produce code that can be used more flexibly and reused more easily.  However, a helper class cannot participate in the Dependency Inversion Principle.  It cannot implement an interface, nor derive from a base class.  Code that calls a helper class gets no object that it could substitute or extend.  This is the "partner" of the Liskov Substitution Principle, but while helper classes do not violate the LSP, they do violate the DIP.

    Based on this set of criteria, it is fairly clear that helper classes fail to work well with two out of the five fundamental principles that we are trying to achieve with Object Oriented Programming. 

    But are they evil?  I was being intentionally inflammatory.  If you read this far, it worked.  I don't believe that software practices qualify in the moral sphere, so there is no such thing as evil code.  However, I would say that any developer who creates a helper class is causing harm to the developers that follow. 

    And that is no help at all.

  • Whose name is in the namespace?

    • 2 Comments

There's more than one way to group your code.  Namespaces provide a mechanism for grouping code in a hierarchical tree, but there is precious little discussion about the taxonomy that designers and architects should use when creating namespaces.  This post is my attempt to describe a good starting place for namespace standards.

    We have a tool: namespaces.  How do we make sure that we are using it well?

    First off: who benefits from a good grouping in the namespace?  I would posit that a good namespace taxonomy benefits the developers, testers, architects, and support teams who need to work with the code.  We see this in the Microsoft .Net Framework, where components that share an underlying commonality of purpose or implementation will fall into the taxonomy in logical places. 

    However, most IT developers aren't creating reusable frameworks.  Most developers of custom business solutions are developing systems that are composed of various components, and which use the common shared code of the .Net Framework and any additional frameworks that may be adopted by the team.  So, the naming standard of the framework doesn't really apply to the IT solutions developer. 

    To start with, your namespace should start with the name of your company.  This allows you to easily differentiate between code that is clearly outside your control (like the .Net framework code or third-party controls) and code that you stand a chance of getting access to.  So, starting the namespace with "Fabrikam" makes sense for the employees within Fabrikam that are developing code.  OK... easy enough.  Now what?

I would say that the conundrum starts here.  Developers within a company do not often ask "what namespaces have already been used?" before creating a new one.  So, how does a developer decide what namespace to use for their project without knowing what other namespaces exist?  This is a problem within Microsoft IT just as it is in many organizations.  There are different ways to approach this.

One approach would be to use the name of the team that creates the code.  So, if Fabrikam's finance group has a small programming team creating a project called 'Motor', then they may start their namespace with Fabrikam.Finance.Motor.  On the plus side, the namespace is unique, because there is only one 'Motor' project within the Finance team.  On the down side, the name is meaningless.  It provides no useful information.

    A related approach is simply to put the name of the project, no matter how creatively or obscurely that project was named.  Two examples: Fabrikam.Explan or even less instructive: Fabrikam.CKMS.  This is most often used by teams who have the (usually incorrect) belief that the code they are developing is appropriate for everyone in the enterprise, even though the requirements are coming from a specific business unit.  If this includes you, you may want to consider that the requirements you get will define the code you produce, and that despite your best efforts, the requirements are going to ALWAYS reflect the viewpoint of the person who gives them to you.  Unless you have a committee that reflects the entire company providing requirements, your code does not reflect the needs of the entire company.  Admit it.

    I reject both of these approaches. 

    Both of these approaches reflect the fact that the development team creates the namespace, when they are not the chief beneficiary.  First off, the namespace becomes part of the background quickly when developing an application.  Assuming the assembly was named correctly or the root namespace was specified, the namespace becomes automatic when a class is created using Visual Studio (and I would assume similar functionality for other professional IDE tools).  Since folders introduced to a project create child levels within the namespace, it is fairly simple for the original development team to ignore the root namespace and simply look at the children.  The root namespace is simply background noise, to be ignored.

    I repeat: the root namespace is not useful or even important for the original developers.  Who, then, can benefit from a well named root namespace?

The enterprise.  Specifically, developers in other groups or other parts of the company who would like to leverage, modify, or reuse code.  The taxonomy of the namespace can be very helpful when they attempt to find and identify the functional code that implements the rules for a specific business process.  Add to that the support team that knows a function needs to change, and has to find out where that function is implemented.

So, I suggest that it is wiser to adopt an enterprise naming standard for the namespaces in your code, one that individual developers can easily use to figure out what namespace to choose, and that developers in other divisions will find useful for locating code by functional area.

    I come back to my original question: whose name is in the namespace?  In my opinion, the 'functional' decomposition of a business process starts with the specific people in the business that own the process.  Therefore, instead of putting the name of the developer (or her team or her project) into the namespace, it would make far more sense to put the name of the business group that owns the process.  Even better, if your company has an ERP system or a process engineering team that had named the fundamental business processes, use the names of the processes themselves, and not the name of the authoring team.

    Let's look again at our fictional finance group creating an application they call 'Motor.' Instead of the name of the team or the name of the project, let's look to what the application does.  For our example, this application is used to create transactions in the accounts receivable system to represent orders booked and shipped from the online web site.  The fundamental business process is the recognition of revenue. 

    In this case, it would make far more sense for the root namespace to be: Fabrikam.Finance.Recognition (or, if there may be more than one system for recognizing revenue, add another level to denote the source of the recognition transactions: Fabrikam.Finance.Recognition.Web)

    So a template that you can use to create a common namespace standard would be:

    CompanyName.ProcessArea.Process.Point

    Where

    • CompanyName is the name of your company (or division if you are part of a very large company),
    • ProcessArea is the highest level group of processes within your company.  Think Manufacturing, Sales, Marketing, CustomerService, Management, etc.
    • Process is the name of the basic business process being performed.  Use a name that makes sense to the business.
    • Point could be the name of the step in the process, or the source of the data, or the customer of the interaction.  Avoid project names.  Definitely avoid the name of the group that is writing the code.
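Applied in code, the Fabrikam example might look like this (the class name is my own invention for illustration):

namespace Fabrikam.Finance.Recognition.Web
{
    // Creates accounts-receivable transactions for orders booked
    // and shipped from the online web site.
    public class RevenueTransactionBuilder
    {
        // ... implementation ...
    }
}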

    In IT, we create software for the business.  It is high time we take the stand that putting our own team name into the software is a lost opportunity at best, and narcissistic at worst.

  • On Atlas/Ajax and SOA

    • 5 Comments

    I ran across a blog entry that attempts to link Atlas/Ajax to SOA.  What absolute nonsense!

    The technology, for those not familiar, is the use of XMLHTTP to link fine-grained data services on a web server to the browser in order to improve the user experience.  This is very much NOT a part of Service Oriented Architecture, since the browser is not a consumer of enterprise services.

So what's wrong with having a browser consume enterprise web services?  The point of providing SOA services is to be able to combine them and use them in a manner that is consistent and abstracted from the source application(s).  SOA operates at the integration level... between apps.  To assume that services should be tied together at the browser is to assume that well-formed, architecturally significant web services are so fine-grained that they would be useful for driving a user interface.  That is nonsense.

    For an Atlas/Ajax user interface to use the data made available by a good SOA, the U/I will need to have a series of fine-grained services that access cached or stored data that may be generated from, or will be fed to, an SOA.  This is perfectly appropriate and expected.  However, you cannot pretend that this layer doesn't exist... it is the application itself!

    In a nutshell, the distinction is in the kinds of services provided.  An SOA provides coarse-grained services that are self-describing and fully encapsulated.  In this environment, the WS-* standards are absolutely essential.  On the other hand, the kinds of data services that a web application would need in an Atlas/Ajax environment would be optimized to provide displayable information for specific user interactions.  These uses are totally different. 

If I were to describe the architecture of an application that uses both Atlas/Ajax and SOA, I would name each enterprise web service individually.  All of the browser services would be named as a single component that provides user-interface data services.  They are at different levels of granularity.

Atlas/Ajax, for better or worse, is an interesting experiment in current U/I circles.  Perhaps XMLHTTP's time has finally come.  However, it will have NO effect on whether SOA succeeds or fails.  Suggesting otherwise demonstrates an amazing lack of understanding of both.

