While it is interesting that a wide variety of consulting and product companies have tried to brand themselves as "the" experts on Service Orientation, there are a few examples of good sites that, although sharing corporate sponsorship, managed to describe SOA principles in a way that is fairly neutral. The important thing to remember, even when using these sites, is that the opinions expressed in them are not standard, even if well described.
Case in point: when a recent exchange between myself and Dion Hinchcliffe got rolling, Mr. Hinchcliffe pointed to a nice site at serviceorientation.org and stated that interoperability is not one of the SOA principles, and that my argument could therefore be dismissed. There are two problems with this argument: (a) the principles on the site do not represent consensus, and (b) interoperability is specifically required by one of the principles on the site (the service contract).
The core disagreement is on this point: does an enterprise that is implementing a SOA environment need to be concerned about the use of Ajax tools? Mr. Hinchcliffe asserts that Ajax tools will use services, and therefore will drive the implementation of an SOA environment. My assertion is that Ajax tools will use fine-grained application interfaces, not re-usable services, and therefore will not have any effect, positive or negative, on the implementation of a SOA environment.
The reason for this is simple: Ajax is too lightweight to play in the SOA world. Ajax controls cannot meet or enforce a contract. Ajax controls cannot use discovery protocols. They must be tightly coupled to their services for many reasons, including browser-enforced data security, in addition to the lack of discovery capabilities. Ajax controls cannot compose services into larger requests. All Ajax requests will be simple, by nature.
The requirements for an Ajax interface are speed of execution, small size of response, and very specific interaction behavior. Loose coupling is not a requirement for Ajax services. I would state that loose coupling is nearly an impossibility for Ajax interfaces.
The requirements for a web service are reliability, compliance with a contract, loose coupling (in the sense of coding to contract and service discoverability), and services provided at the level of composability. This last one is the most important point. A composable service is one that the business can understand as being composed of atomic units of functionality. The problem with the notion of an Ajax site consuming an enterprise web service is that those atomic units are TOO BIG to be useful at the front end. In other words, the smallest unit of composition in a well-designed service is still too coarse for the needs of the Ajax site.
In conclusion: it is completely safe to assume that Ajax sites will not consume enterprise web services.
First off, a definition: A helper class is a class filled with static methods. It is usually used to isolate a "useful" algorithm. I've seen them in nearly every bit of code I've reviewed. For the record, I consider the use of helper classes to be an antipattern. In other words, an extraordinarily bad idea that should be avoided most of the time.
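To make the definition concrete, here is a minimal sketch in Java (the same shape appears in C# and VB.Net); the class and method names are invented for illustration:

```java
// A typical helper class, as defined above: nothing but static methods,
// never instantiated, used to isolate a "useful" algorithm.
final class StringHelper {

    // Private constructor: nobody is meant to create an instance.
    private StringHelper() {}

    // The isolated algorithm, exposed as a static method.
    static String reverse(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}
```

Callers bind directly to the class name, as in `StringHelper.reverse("abc")`; that compile-time binding is what the principles discussed below run into.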
What, you say? Avoid Helper Classes!?! But they are so useful!
I say: they are nearly always an example of laziness. (At this point, someone will jump in and say "but in Really Odd Situation ABC, There Is No Other Way" and I will agree. However, I'm talking about normal IT software development in an OO programming language like C#, Java or VB.Net. If you have drawn a helper class in your UML diagram, you have probably erred).
Why laziness? If I have to pick a deadly sin, why not gluttony? :-)
Because most of us in the OO world came out of the procedural programming world, where functional decomposition is second nature. When we come across an algorithm that doesn't seem to "fit" into our neat little object tree, rather than understand the needs, analyze where we can get the best use of the technology, and place the method accordingly, we just toss it into a helper class. And that, my friends, is laziness.
So what is wrong with helper classes? I answer by falling back on the very basic principles of Object Oriented Programming. These have been recited many times, in many places, but one of the best places I've seen is Robert Martin's article on the principles of OO. Specifically, focus on the first five principles of class design.
So let's look at a helper class on the basis of these principles. First, to knock off the easy ones:
Single Responsibility Principle -- A class should have one and only one reason to change -- You can design helper classes where all of the methods related to a single set of responsibilities. That is entirely possible. Therefore, I would note that this principle does not conflict with the notion of helper classes at all. That said, I've often seen helper classes that violate this principle. They become "catch all" classes that contain any method that the developer can't find another place for. (e.g. a class containing a helper method for URL encoding, a method for looking up a password, and a method for writing an update to the config file... This class would violate the Single Responsibility Principle).
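The "catch all" violation described above might look like this in Java. Everything here is a hypothetical sketch (the in-memory maps stand in for a real credential store and config file), but the smell is the point: three unrelated concerns, so three unrelated reasons to change, all land in one class.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// A hypothetical "catch all" helper class that violates the
// Single Responsibility Principle: three unrelated responsibilities.
final class MiscHelper {
    private MiscHelper() {}

    // Responsibility 1: URL encoding (a presentation concern).
    static String encodeUrl(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    // Responsibility 2: credential lookup (a security concern).
    private static final Map<String, String> PASSWORDS = new HashMap<>();
    static String lookupPassword(String user) {
        return PASSWORDS.getOrDefault(user, "");
    }

    // Responsibility 3: configuration updates (a persistence concern).
    private static final Map<String, String> CONFIG = new HashMap<>();
    static void updateConfig(String key, String value) {
        CONFIG.put(key, value);
    }
    static String readConfig(String key) {
        return CONFIG.get(key);
    }
}
```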
Liskov Substitution Principle -- Derived classes must be substitutable for their base classes -- This is kind of a no-op, in that a helper class cannot have a derived class. (Note: my definition of a helper class is that all members are static.) OK. Does that mean that helper classes violate LSP? I'd say not. A helper class loses the advantages of OO completely, and in that sense LSP doesn't matter... but it doesn't violate it.
Interface Segregation Principle -- Class interfaces should be fine-grained and client specific -- another no-op. Since helper classes do not derive from an interface, it is difficult to apply this principle with any degree of separation from the Single Responsibility Principle.
Now for the fun ones:
The Open Closed Principle -- classes should be open for extension and closed for modification -- You cannot extend a helper class. Since all methods are static, you cannot derive anything from it. In addition, the code that uses it doesn't create an object, so there is no way to substitute a child object that modifies any of the algorithms in a helper class. They are all "unchangeable". As such, a helper class simply fails to provide one of the key aspects of object oriented design: the ability for the original developer to create a general answer, and for another developer to extend it, change it, or make it more applicable. If you accept that you do not know everything, and that you may not be creating the "perfect" class for every user, then helper classes will be anathema to you.
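A short sketch of the contrast (all names invented for illustration): the instance-based version below is open for extension, while the static helper version offers no seam at all, because callers bind to the class name at compile time.

```java
// Open for extension: a subclass can refine the behavior without
// modifying the original class.
class Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

class PoliteGreeter extends Greeter {
    @Override
    public String greet(String name) {
        return super.greet(name) + ", welcome!";
    }
}

// The static equivalent: callers are bound to GreeterHelper.greet at
// compile time, so no subclass can ever change what they get back.
final class GreeterHelper {
    private GreeterHelper() {}
    static String greet(String name) {
        return "Hello, " + name;
    }
}
```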
The Dependency Inversion Principle -- Depend on abstractions, not concrete implementations -- This is a simple and powerful principle that produces more testable code and better systems. If you minimize the coupling between a class and the classes that it depends upon, you produce code that can be used more flexibly, and reused more easily. However, a helper class cannot participate in the Dependency Inversion Principle. It cannot derive from an interface, nor implement a base class. Since no object is ever created, there is nothing for a consumer to substitute or extend. This principle is the "partner" of the Liskov Substitution Principle, and while helper classes do not violate the LSP, they do violate the DIP.
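What DIP-friendly code looks like, as a minimal sketch (interface and class names are invented): the consumer depends only on an abstraction, injected at construction, so a test can substitute a fake. A static helper offers no such seam.

```java
// The abstraction the consumer depends on.
interface UrlEncoder {
    String encode(String value);
}

// One concrete implementation; a test could inject a fake instead.
class PercentEncoder implements UrlEncoder {
    @Override
    public String encode(String value) {
        return value.replace(" ", "%20");
    }
}

// The consumer is coupled only to the interface, never to a class name.
class LinkBuilder {
    private final UrlEncoder encoder;

    LinkBuilder(UrlEncoder encoder) {
        this.encoder = encoder;
    }

    String build(String base, String query) {
        return base + "?q=" + encoder.encode(query);
    }
}
```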
Based on this set of criteria, it is fairly clear that helper classes fail to work well with two out of the five fundamental principles that we are trying to achieve with Object Oriented Programming.
But are they evil? I was being intentionally inflammatory. If you read this far, it worked. I don't believe that software practices qualify in the moral sphere, so there is no such thing as evil code. However, I would say that any developer who creates a helper class is causing harm to the developers that follow.
And that is no help at all.
There's more than one way to group your code. Namespaces provide a mechanism for grouping code in a hierarchical tree, but there is precious little discussion about the taxonomy that designers and architects should use when creating namespaces. This post is my attempt to describe a good starting place for namespace standards.
We have a tool: namespaces. How do we make sure that we are using it well?
First off: who benefits from a good grouping in the namespace? I would posit that a good namespace taxonomy benefits the developers, testers, architects, and support teams who need to work with the code. We see this in the Microsoft .Net Framework, where components that share an underlying commonality of purpose or implementation will fall into the taxonomy in logical places.
However, most IT developers aren't creating reusable frameworks. Most developers of custom business solutions are developing systems that are composed of various components, and which use the common shared code of the .Net Framework and any additional frameworks that may be adopted by the team. So, the naming standard of the framework doesn't really apply to the IT solutions developer.
To start with, your namespace should start with the name of your company. This allows you to easily differentiate between code that is clearly outside your control (like the .Net framework code or third-party controls) and code that you stand a chance of getting access to. So, starting the namespace with "Fabrikam" makes sense for the employees within Fabrikam that are developing code. OK... easy enough. Now what?
I would say that the conundrum starts here. Developers within a company do not often ask "what namespaces have already been used" in order to create a new one. So, how does the developer decide what namespace to create for their project without knowing what other namespaces exist? This is a problem within Microsoft IT just as it is in many organizations. There are different ways to approach this.
One approach would be to put the name of the team that creates the code. So, if Fabrikam's finance group has a small programming team creating a project called 'Motor', then they may start their namespace with: Fabrikam.Finance.Motor. On the plus side, the namespace is unique, because there is only one 'Motor' project within the Finance team. On the down side, the name is meaningless. It provides no useful information.
A related approach is simply to put the name of the project, no matter how creatively or obscurely that project was named. Two examples: Fabrikam.Explan or even less instructive: Fabrikam.CKMS. This is most often used by teams who have the (usually incorrect) belief that the code they are developing is appropriate for everyone in the enterprise, even though the requirements are coming from a specific business unit. If this includes you, you may want to consider that the requirements you get will define the code you produce, and that despite your best efforts, the requirements are going to ALWAYS reflect the viewpoint of the person who gives them to you. Unless you have a committee that reflects the entire company providing requirements, your code does not reflect the needs of the entire company. Admit it.
I reject both of these approaches.
Both of these approaches reflect the fact that the development team creates the namespace, when they are not the chief beneficiary. First off, the namespace becomes part of the background quickly when developing an application. Assuming the assembly was named correctly or the root namespace was specified, the namespace becomes automatic when a class is created using Visual Studio (and I would assume similar functionality for other professional IDE tools). Since folders introduced to a project create child levels within the namespace, it is fairly simple for the original development team to ignore the root namespace and simply look at the children. The root namespace is simply background noise, to be ignored.
I repeat: the root namespace is not useful or even important for the original developers. Who, then, can benefit from a well named root namespace?
The enterprise. Specifically, developers in other groups or other parts of the company who would like to leverage, modify, or reuse code. The taxonomy of the namespace can be very helpful to them when they attempt to find and identify the functional code that implements the rules for a specific business process. Add to that the support team that knows a function needs to change and has to find out where that function is implemented.
So, I suggest that it is wiser to adopt an enterprise naming standard for the namespaces in your code, in such a way that individual developers can easily figure out what namespace to use, and developers in other divisions will find it useful for locating code by functional area.
I come back to my original question: whose name is in the namespace? In my opinion, the 'functional' decomposition of a business process starts with the specific people in the business that own the process. Therefore, instead of putting the name of the developer (or her team or her project) into the namespace, it would make far more sense to put the name of the business group that owns the process. Even better, if your company has an ERP system or a process engineering team that had named the fundamental business processes, use the names of the processes themselves, and not the name of the authoring team.
Let's look again at our fictional finance group creating an application they call 'Motor.' Instead of the name of the team or the name of the project, let's look to what the application does. For our example, this application is used to create transactions in the accounts receivable system to represent orders booked and shipped from the online web site. The fundamental business process is the recognition of revenue.
In this case, it would make far more sense for the root namespace to be: Fabrikam.Finance.Recognition (or, if there may be more than one system for recognizing revenue, add another level to denote the source of the recognition transactions: Fabrikam.Finance.Recognition.Web)
So a template that you can use to create a common namespace standard would be: CompanyName.BusinessUnit.BusinessProcess, with an optional fourth level for the source or variant of the process (as in Fabrikam.Finance.Recognition.Web).
In IT, we create software for the business. It is high time we take the stand that putting our own team name into the software is a lost opportunity at best, and narcissistic at worst.
I ran across a blog entry that attempts to link Atlas/Ajax to SOA. What absolute nonsense!
The technology, for those not familiar, is the use of XMLHTTP to link fine-grained data services on a web server to the browser in order to improve the user experience. This is very much NOT a part of Service Oriented Architecture, since the browser is not a consumer of enterprise services.
So what's wrong with having a browser consume enterprise web services? The point of providing SOA services is to be able to combine them and use them in a manner that is consistent and abstracted from the source application(s). SOA operates at the integration level... between apps. To assume that services should be tied together at the browser assumes that well-formed, architecturally significant web services are so fine-grained that they would be useful for driving a user interface. That is nonsense.
For an Atlas/Ajax user interface to use the data made available by a good SOA, the U/I will need to have a series of fine-grained services that access cached or stored data that may be generated from, or will be fed to, an SOA. This is perfectly appropriate and expected. However, you cannot pretend that this layer doesn't exist... it is the application itself!
In a nutshell, the distinction is in the kinds of services provided. An SOA provides coarse-grained services that are self-describing and fully encapsulated. In this environment, the WS-* standards are absolutely essential. On the other hand, the kinds of data services that a web application would need in an Atlas/Ajax environment would be optimized to provide displayable information for specific user interactions. These uses are totally different.
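The granularity contrast can be sketched as two contracts. Everything below is invented for illustration, with minimal placeholder types so the sketch stands alone:

```java
// Placeholder types so the sketch is self-contained.
class OrderDocument { String orderId; }
class RecognitionResult { boolean posted; }

// Coarse-grained enterprise service: one call carries a complete
// business document, is contract-bound, and is meaningful across
// applications at the integration level.
interface RevenueRecognitionService {
    RecognitionResult recognizeRevenue(OrderDocument order);
}

// Fine-grained UI data service of the kind an Atlas/Ajax page needs:
// a small, display-shaped answer for a single user interaction.
interface OrderLookupEndpoint {
    String orderStatusLabel(String orderId); // e.g. "Shipped"
}

// A stub implementation, only to show the shape of a UI call.
class StubOrderLookup implements OrderLookupEndpoint {
    public String orderStatusLabel(String orderId) { return "Shipped"; }
}
```

One contract is a unit of business integration; the other is a convenience for a single screen. Conflating them is the mistake described above.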
If I were to describe the architecture of an application that uses both Atlas/Ajax and SOA, I would name each enterprise web service individually, while all of the browser services would be named as a single component that provides user interface data services. They are at different levels of granularity.
Atlas/Ajax, for better or worse, is an interesting experiment in current U/I circles. Perhaps XMLHTTP's time has finally come. However, it will have NO effect on whether SOA succeeds or fails. Suggesting otherwise demonstrates an amazing lack of understanding of both.
I blogged a little while back that there is some interest in creating a common naming convention for enterprise web services within the company's IT group. I've been looking into this. One thing that came up: if we don't have consensus on how we tie the EAI strategy to business goals, why should we proceed with a standard?
The notion is: if we proceed without a solid connection to business goals, we may lead people in a direction that is ineffective or just plain wrong. On the other hand, if we wait for strategy to produce basic standards, we will fail to lay good groundwork. When we do decide to proceed with a common EAI strategy (which is a foregone conclusion), we will be further behind than we should be. This would slow adoption of common integration patterns across the board.
Don't get me wrong. There is a huge amount of integration going on. However, different strategies are being followed by different groups, and there are literally thousands of home-grown applications being used to run the business. (That's the problem with having too many developers!)
It is my supposition that we could benefit from creating a common strategy for EAI integration that we can evangelize throughout the company's IT division. I have very strong ideas about how this should look, but I'm not the first, and I will not be the last. Common strategy requires intelligent consensus. And it will not be free.
And there is the crux of the issue. If we spend money, are we doing it wisely? If so, we should see a business benefit. In order to understand and measure that benefit, we need to tie to a beneficial business strategy. In other words, we need to get our ducks in a row and describe the ROI of a common EAI strategy.
This is coming. We have a terrific team. I am truly excited about it.
So a standard is being created in lieu of a tie to the business strategy, but not as anything enforceable. Rather, as a de-facto standard that we can share and begin to use, but which we cannot enforce or necessarily encourage until we have some way of knowing if it is the right direction to take.
And that, folks, is enterprise architecture.
Can a strategy for Enterprise Application Integration be developed in an iterative manner?
I just had a conversation with a very well respected architect who was fairly unconvinced of the positive benefits of using iterative mechanisms to create a common strategy for EAI. I greatly respect his opinion. Yet, another architect, who I also respect, felt that iterative development may prove fruitful.
My opinion: creating a common EAI strategy is not easy. There are a lot of parts to it. You have to consider how to deliver advice to a large body of IT developers in multiple timezones. You have to consider what information should pass across integration boundaries, and how it should be secured. You have to consider how integrated information will be managed, tracked, and instrumented. You have to make sure not only that services are provided in a correct manner, but that the correct services are provided. You have to try to use the technologies that are inexpensive to exploit.
When I'm faced with this kind of situation on a project, I will sit down with the customer to understand the needs as best I can, come up with designs that I think may work, and get something back to the customer as soon as I possibly can. This helps me to prove if I am doing it correctly early, and helps the customer to give me feedback early. In other words, I don't pretend that I can guess it right. I guess it and work with the customer to improve.
So, who is the customer for a common Enterprise Application Integration strategy to cover the enterprise? Clearly there are business benefits. Strong ones. Applications are being integrated all over the company. But who needs the common strategy to exist, and how can I get something in front of him or her to work towards improvement? Agile methods work when a customer is present. Are agile methods appropriate for this kind of activity?
It's an interesting problem. I'd love to hear about how this may have played out in other organizations. If you have an anecdote or experience that you'd like to share about creating a common EAI strategy for the enterprise, please do so. I love to learn.