Recently, Mike Walker posted a blog entry on the difference between Enterprise Architect and Solution Architect (sometimes called Application Architect). I think this is an interesting space, because I believe that some folks have a mistaken perception that these two roles do the same things at different levels. Nothing could be further from the truth, as Mike's breadth-depth diagram helps to illustrate.
The work of an Enterprise Architect differs from that of a Solution Architect as much as either differs from the work of a Project Manager or Software Tester. Certainly, a single person can play different roles over time. A developer can become a project manager (or Scrum Master), but that doesn't mean the jobs are the same.
As an Enterprise Architect who grew up out of Solution Architecture, I don't view the differentiation so much as a difference between breadth and depth, or an overlapping of roles, but rather as a partitioning of responsibilities.
I think much of the misunderstanding about the role of the EA comes from a lack of visibility into the planning cycle for information technology. Many developers have no idea that a planning cycle even goes on. (In some companies, the planning process is informal, or worse, hidden, so this is no surprise.)
As Gabriel Morgan pointed out last fall, the activities of the Enterprise Architect fall mostly in the planning space, while the activities of the Solution Architect fall mostly in the Development space. I indicate this with the "Span of Responsibility" triangles. At any point in time, the thicker the triangle, the more responsibility that role has.
To Be Clear: Planning does NOT include requirements gathering. That is part of "Deliver."
Planning is about the organization deciding what projects to do, why to do them, what they should accomplish for the company, and how much the company should spend on them. Planning decides to "build the right app."
Delivery is the entire SDLC, including waterfall, agile, spiral, or some blended process. Pick your poison. The point is that the dividing line is the point where the organization decides to fund the project. Only then are the requirements, use cases, scenarios, etc., collected. All of our notions of object-oriented development, and all the process debates, affect ONLY the "Deliver" slice. Delivery decides to "build the app right."
I believe that once a stakeholder understands this distinction, it becomes clearer what the Enterprise Architect is responsible for. The EA is not there to design an app, or figure out what the interfaces are. They are there to make sure that all of the apps in the portfolio continue to be "about" building systems for the enterprise. They ensure that project managers keep integration interfaces in scope, because the app that will use that interface will be built next year... long after the current project is delivered.
Enterprise architects take the long view. No one else is paid to.
[note: I updated the model on 10-Jun-08 to correct a mistake in the span of responsibility.]
We use process models for lots of things. One is simply to understand the processes we have and to analyze them looking for opportunities to improve. But in IT, we have another good reason: to better understand software requirements.
One goal that we are chasing these days, on my project, is traceability. Specifically: we want to know that each requirement is tied somewhere to a business process.
Of course, there is nothing more mind-numbing than reading a long list of requirements in a database, Word document, or spreadsheet. Business users cannot possibly keep track of every requirement, or look at a list of 2,000 requirements and tell whether one of them is not connected to a business process.
It just isn't normal to expect a human being to be able to do that.
The answer that we are using: add the requirements directly to the BPMN diagrams. (Use a modeling tool, so that when you add an element to a diagram, it gets added to a database as well).
So what does this look like? I have attached an example, and some text to help you to understand what it says. I recommend this practice for anyone wanting to help the business users to understand where their requirements come from.
It is easy to walk through a process diagram and ask your business users what the requirements are AT THIS STEP. Business folks understand processes, and since you won't tie more than about three or four requirements to any one step in a process, no one will have a mental breakdown trying to understand the list.
Note: there are two ways that I've attached requirements to the process above. One way is by directly connecting a requirement to a process activity. One requirement is listed above connected in this way.
The other way is by indicating a "support trigger". A support trigger is a question that arises in the mind of a user who is using this tool or process. Those questions drive calls to customer support, cause confusion, and otherwise lower the general level of customer experience.
By placing these support triggers underneath the process, and then tying requirements to them, a business customer can now demonstrate why a requirement is needed.
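As a rough sketch of how such linked requirements might live in a model database, here is a minimal illustration of the two kinds of links, direct and via a support trigger, plus a check for requirements that aren't connected to any process step. All names and structures here are hypothetical; they are not any particular modeling tool's API.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str

@dataclass
class Activity:
    name: str
    # Direct links: requirement -> process activity.
    requirements: list = field(default_factory=list)
    # Indirect links: support trigger (a user's question) -> requirements.
    support_triggers: dict = field(default_factory=dict)

def orphaned(all_reqs, activities):
    """Return requirements not traceable to any process activity."""
    linked = set()
    for act in activities:
        linked.update(r.req_id for r in act.requirements)
        for reqs in act.support_triggers.values():
            linked.update(r.req_id for r in reqs)
    return [r for r in all_reqs if r.req_id not in linked]

# Example: one activity, one direct link, one link via a support trigger.
r1 = Requirement("R-101", "System must validate the client ID")
r2 = Requirement("R-102", "Help text must explain fee codes")
r3 = Requirement("R-103", "Export the report as PDF")  # never linked

step = Activity("Enter client details")
step.requirements.append(r1)                          # direct connection
step.support_triggers["What is a fee code?"] = [r2]   # via support trigger

print([r.req_id for r in orphaned([r1, r2, r3], [step])])  # -> ['R-103']
```

The orphan check is the payoff: instead of asking a human to scan 2,000 rows, the model itself reports which requirements are not tied to any step.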
I hope you find this technique useful.
A few weeks ago, in a blog post, I asked about the relationship between business process modeling and business capability modeling. I asked some open ended questions to get honest feedback. I presented two models to illustrate two potential relationships between capabilities and processes. I called them "Model A" and "Model B."
While doing a little research on the Federal Enterprise Architecture Framework, I got a good feel for how the OMB has attempted to solve this problem.
Answer: The FEA uses "Model A." My analysis showed that the FEA notion of a "Business Area" and "Line of Business" describes the same thing as the notion of "Business Capability" in a business capability framework. I also noted that, in the FEA, processes exist under the capabilities. There is no notion, in the framework itself, of processes that tie together capabilities across functional boundaries.
I find this curious, because there is clearly INTENT to create and understand cross-functional processes. Looking at some of the presentations made at various conferences, it certainly appears that the goal is to create an optimized structure for the improvement of cross-functional processes... yet there is nothing in the framework for it.
Studying the FEA, I noticed something else interesting. In the FEA, business processes live at the intersection of the agency and the capability (or sub-function, as the FEA calls it). In other words, if two agencies share a capability (say "loan guarantees"), then each agency will implement that capability using their own processes.
The FEA taxonomy has nothing to say about alignment or reuse at the process level. While processes show up in mappings to technology, they don't show up in the framework.
Without a process taxonomy, there is no mechanism to align processes, or find common elements even further down the list, at the activity level. This is key, because in an ideal state, Enterprise SOA services support business process activities, and can be composed from there into support for processes themselves. Without the activity level, the FEA cannot simplify beyond EAI levels, where entire applications were rationalized.
So my suggestion, to my hard-working colleagues who work on the FEA, is to include, and make useful, a business process hierarchy, separate from the functional (capability) hierarchy that the FEA now describes, and to fill out the process activity element that sits at the junction between an organization and the sub-function (capability).
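To make the suggestion concrete, here is a minimal sketch of what a shared activity taxonomy at that agency/sub-function junction buys you: once each agency's process decomposes into activities drawn from one shared taxonomy, common elements across agencies become visible as candidates for shared services. The agencies, sub-functions, and activities below are invented for illustration.

```python
from collections import Counter

# Hypothetical capability hierarchy: sub-function -> business area.
capabilities = {"Loan Guarantees": "Credit & Insurance"}

# A shared activity taxonomy, separate from the capability hierarchy.
activity_taxonomy = {"Verify applicant identity", "Assess credit risk", "Issue guarantee"}

# Each agency implements the shared sub-function with its own process,
# but every process decomposes into activities from the shared taxonomy.
processes = [
    {"agency": "Agency A", "sub_function": "Loan Guarantees",
     "activities": ["Verify applicant identity", "Assess credit risk", "Issue guarantee"]},
    {"agency": "Agency B", "sub_function": "Loan Guarantees",
     "activities": ["Verify applicant identity", "Issue guarantee"]},
]

# Activities used by more than one agency are candidates for shared
# (SOA) services -- the alignment the FEA taxonomy cannot express today.
counts = Counter(a for p in processes for a in p["activities"])
shared = sorted(a for a, n in counts.items() if n > 1)
print(shared)  # -> ['Issue guarantee', 'Verify applicant identity']
```

Without the shared activity level, the two agencies' processes are opaque blobs and the only unit of rationalization is the whole application, which is exactly the EAI-era limitation described above.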
In any relationship, it is dangerous for one side to "decide" what the other one wants. Marriage advisors say things like "Don't control others or make choices for them." Yet, I'd like to share a story of technologists doing exactly that. Think of this as a case study in IT screw-ups. (Caveat: this project was a couple of years back, before I joined Microsoft IT. I've changed the names.)
Business didn't know what they wanted.
"Business" in this case is a law firm. The attorneys are pretty tight-lipped about their clients, and don't normally share details of their cases. In the past, every attorney got their own "private folders" on the server that he or she must keep files in. Those folders are encrypted and backed up daily. A 'master shared folder' contained templates for contracts, agreements, filings, briefs, etc.
Of course, security is an issue. One of the attorneys had lost a laptop in an airport the previous year, along with some client files. But security is also a hassle: major cases involved creating special folders on the server, shared by more than one attorney, just for that client.
None of this was particularly efficient. They knew that they wanted to improve things, but weren't sure how. Some ideas were to use a content management system, to put in a template-driven document creation system, and to allow electronic filing with local court jurisdictions. They didn't have much of an IT department. Just two 'helpdesk' guys with the ability to set up a PC or fix a network problem. No CIO either. Just the Managing Partner (MP).
To fix the problems, and bring everyone into the 21st century, the MP brought in consultants. He maintained some oversight, but he was first-and-foremost an attorney. He hired a well-recommended project manager and attended oversight meetings every other week.
The newly-minted IT team started documenting requirements in the form of use cases.
The use cases included things like "login to system" and "submit document". The IT team described a system and the business said "OK" and off they went. The system was written in .Net on the Microsoft platform, and used Microsoft Word for creating documents. They brought in Documentum for content management.
A year later, the new system was running. The law firm had spent over $1M for consulting fees, servers, software licenses, and modifications to their network. A new half-rack was running in the "server room" (a small inside room where the IT guys sat). Their energy costs had gone up (electricity, cooling) and they had hired a new guy to keep everything running. Everyone saw a fancy new user interface when they started their computers. What a success!
The managing partner then did something really interesting. He had just finished reading a book on business improvement, and decided to collect a little data. He wanted to show everyone what a great thing they had in their new system. He asked each of the firm's employees for a list of improvements that they had noticed: partners, associates, paralegals, secretaries, and even the receptionist.
He asked: Did the new system improve their lives? What problems were they having before? What problems were they having now? Did they get more freedom? More productivity?
The answer: no.
He was embarrassed, but he had told the partners that he was creating a report on the value of the IT work and so he would.
This is where I came in. He hired our company to put together the report.
Business Results: There were as many hassles as before. Setting up a new client took even longer to do. Partners and associates still stored their files on glorified 'private folders' (they were stored in a database now). There were new policy restrictions on putting files on a laptop, but many of the partners were ignoring them. The amount of time that people spent on the network had gone up, not down.
So what did they do wrong? What did we tell the Managing Partner?
The IT Team had started by describing use cases. They were nice little 'building blocks' of process that the business could compose in any way they wanted. But how did the business compose those activities? In the exact same way as before.
Nothing improved because no one had tried to improve anything. The direction had been "throw technology at problems and they go away," but they don't. You cannot solve a problem by introducing technology by itself. You have to understand the problem first. The technology was not wrong. The systems worked great, but they didn't solve measurable business problems.
The IT team should not have started with low-level use cases. That is an example of IT trying to read the minds of business people. IT was making choices for the business. "you need to do these things." No one asked what the measurable business problems were. No one started by listening.
They should have started with the business processes. How are new clients discovered? What steps or stages do cases go through? What are the things that can happen along the way? How can attorneys share knowledge and make each other more successful?
We explained that business processes describe "where we are and what we do." Therefore, operational improvement comes in the form of process improvements. These are different questions than they had asked: What should we be doing? How should we be doing it? Where should we be? What promises do we need to make to our clients, and how can our technology help us to keep those promises?
Business requirements for an IT solution cannot be finalized until these questions are asked and answered. Writing code before understanding the process requirements is foolish. Not because the code won't work, but because the code won't improve the business. All the unit tests in the world won't prove that the software was the right functionality to create.
Here are the suggestions we gave. I don't know if the law firm actually did any of them or not. (I added one that I didn't know about five years ago, but I believe would be a good approach. I marked it as "new" below).
I hate to say it, but the real mistake was starting in the middle. They started with an IT-centric approach: write use cases and then write code. I love use cases. But they are not 'step one.' Step one is to figure out what needs to be improved. Otherwise, IT is being asked to read minds, or worse, to make decisions for their business partners.
I'm going to suggest a minimal way to gather requirements, one that produces a (minimum) requirements document in an iterative and agile manner.
In the systems space, it is common to write up a "requirements document" that attempts to capture all of the business requirements for a system. Doing so, of course, is a great example of Big Design Up Front (BDUF), and it runs into the YAGNI ("You Aren't Gonna Need It") concept rather forcefully. My gut tells me to minimize these works of fiction, and to produce them in an automated and repeatable way.
But what drives the creation of these things anyway? Let's look at that question.
The problem with a requirements document is that we don't have a good practice, as an industry, for putting ONLY IMPORTANT STUFF into the requirements document. We spend a lot of time collecting, and documenting, requirements, when most of the words in the document are not requirements.
Let's focus on what matters. Requirements should create an agreement between business and IT, describing the "things that technical people need to know" in order to assist in problem solving and empowering business change.
So, what do technical people need to know? I did a little browsing on the idea, focusing on BPM tools (because I'm looking at BPM concerns these days). I ran across Shaji Sethu's site along the way. He shares a mind map of technologies and features for Business Process Management (BPM) that he uses when collecting business and technical requirements. This approach is not dissimilar from the one taken on the Savvion site, where you can find a list of features to consider for BPM tools.
If you follow these ideas, and ask about a long list of features, and ask about various technologies, what will you get? Will you create an environment where the business describes the minimum amount of information needed to share the things that technical people need to know?
Or will you create a situation where the business feels like they are being asked to decide on technical details and features? And if you ask the business to make technical decisions, is that appropriate? Are you allowing the IT department the flexibility that they need? Are you asking the business to become expert at things that they don't, or shouldn't, care about?
Based on the notion of model-driven development, I'd like to automatically generate a simple requirements document out of a modeling tool. That way, we can gather information incrementally and consistently, and put out a document daily, or a couple of times each week.
I've got the tools that do it. The challenge is not so much the tools. It goes back to "what do we include in the requirements document itself?" Do we ask about a long list of features?
I believe that is a path of pain. If you ask a business person about a feature (like "do you need to be able to simulate a business process"), will they say "no?" At best, they will ask about the feature, and will try to construct a business scenario where that feature is useful, and if it sounds appealing, then the feature becomes a requirement. This is a useful way to sell things, and vendors like to encourage this thinking, but it is a poor way to buy.
Analogy: Do you need the seats in your car to automatically adjust to your settings? Think about it. You hop into your car, and your spouse was driving it yesterday, so you press a button and the seat and mirrors automatically adjust to your settings. Cool. But is it a requirement? Do you limit the selection of cars to those models that offer this feature?
Not if the goal is to get to work in an inexpensive way.
It makes more sense to avoid discussions of features until you understand the problems the business needs to solve, and then describe the processes that the business will use to solve them (at least from a high level).
So instead of a large mind map of features and technologies, consider the following map for BPM.
This is not a perfect list. I don't believe in perfect. I believe in "pretty good."
So you start with these business objectives, and you sit with your business and you ask which of these scenarios they care about. It is fairly easy to get a "no" answer to some of these. These map to business problems, not business solutions, and when you are describing requirements, you are describing the problems.
For each of the elements in the tree, have your business user provide a 'valuation.' I like the "thousand dollar" approach. Tell your business user that they have an imaginary amount of money to spend; I like using $1,000. Have them put the money on the leaves of the tree, in proportion to the amount of value that they would get. The business user must spend all of the money, and they cannot divide it evenly.
What you will get is 20 or 30% of the money going to one scenario, and over 50% going to one group. Focus there. Solve the problems for that space. For anything where the business user put 5% or less of the money, drop it.
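The arithmetic of the cut is simple enough to sketch. The scenario names and dollar figures below are invented for illustration; only the rules come from the technique itself: the full $1,000 must be spent, and anything earning 5% or less gets dropped.

```python
# Hypothetical allocations from one business user: scenario -> dollars.
allocations = {
    "Reduce process cycle time": 350,
    "Improve compliance reporting": 280,
    "Automate handoffs": 180,
    "Simulate process changes": 100,
    "Monitor KPIs in real time": 50,
    "Generate process documentation": 40,
}

total = sum(allocations.values())
assert total == 1000, "the business user must spend all of the money"

# Keep scenarios that earned more than 5% of the budget; drop the rest.
keep = {s: v for s, v in allocations.items() if v / total > 0.05}
drop = sorted(set(allocations) - set(keep))

print("focus on:", sorted(keep, key=allocations.get, reverse=True))
print("drop:", drop)
```

In this made-up example, the $50 and $40 scenarios fall at or below the 5% line and drop out, leaving a requirements conversation about four scenarios instead of six, and over 90% of the perceived value intact.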
This map is generic. You can use it again and again. Different business users will give you different values depending on the problems they want to solve.
From there, you can map features, and from there, technologies.
This approach allows you to produce a much smaller 'requirements' document, much more quickly.
I call the diagram above a "scenario map." If you know of other scenario maps running around the Internet, please share them.
And let me know what you think...
[added by Nick: I don't have time to fix the things wrong with this post, but I'll add a couple of 'afterthoughts' that help refine the idea. I don't think there is anything wrong with the core idea, but the places where you can use it are narrower than I imply. (a) This approach works if you know the solution space first, and then tie the business capabilities to the problem space second. That is backwards for most folks, because it implies you know the solution before you have modeled the problem. So I would say that this approach is good for BUY-VS-BUILD DECISIONS, but not for normal IT software development requirements. (b) There is an opportunity for completely missing the boat, even in the buy-vs-build process implied above. You have to select the solution space first. You could, potentially, select the wrong class of solutions, and then proceed to pick the 'best' solution within the wrong space. I need to think this idea through a bit before I suggest that anyone actually use it. YOUR SUGGESTIONS ARE WELCOME.]
I have always taken the advice at face value: the "to be" model matters much more than the "as is" model does. Implicit in that: spend as little time on the "as is" model as you can. Perhaps, even, do the "to be" model first.
Of course, I wouldn't be blogging this point if I didn't run into that bit of advice today. We are modeling the 'as is' process first. And spending a good bit of time on it. Why in the world would we do that?
Because there's a BPM benefit to modeling the 'as is' process, and sometimes we have to earn that benefit before we can wander in the clouds of 'what will come.'
Sometimes we have to be willing to write down what others have not wanted to write down: that the customer doesn't experience a simple process... that our methods are not efficient or effective... that different people use overlapping words in confusing ways... that levels of abstraction create layers of confusion that can be impenetrable for "outsiders" to understand.
Only once the complexities are pointed out, and sometimes only after they have been, can we begin to get people focused on the future.
Sometimes, we have to take the time to consider where we are before we can begin to understand where we are going.