Inside Architecture

Notes on Enterprise Architecture, Business Alignment, Interesting Trends, and anything else that interests me this week...

Can EA Data be independent of the Metamodel?

One thing that I’ve come to appreciate is both the importance, and impermanence, of the Enterprise Architecture metamodel. 

If that last sentence didn’t piss you off, you weren’t listening.

I’ve found two common groups of Enterprise Architects:

  1. Folks who do not understand, or care about, EA metamodels.  This group starts with Zachman aficionados and works up to some practitioners of Balanced Scorecards, Business Process Management, and Business Strategy development (all fields that benefit from, and are necessary to, metamodels, but which were developed entirely without that concept).  To their credit, many folks who have come up from these fields have seen the value of metamodels along the way and moved to the second group.  Others, unfortunately, have not been able to see the holistic value of understanding knowledge through a connected model of well-defined concepts, and remain in the camp of “metamodel doubters.”
  2. Folks who believe that there should be a solid, unchanging metamodel, and that all business and technical metadata should fit within it.  The ranks of this group are growing rapidly, as TOGAF has adopted the concept of metamodels and as groups focused on Business Architecture have brought out research materials and books dedicated to specific metamodels.  Readers of this blog will note that I produced a metamodel of sorts with the EBMM (Enterprise Business Motivation Model) nearly two years ago, with an update to come soon.

Unfortunately, there are flaws with the thinking of both groups, and I’d like to propose a third way…

I’d like to propose that metamodels should be created as part of the “view,” and not part of the “model” itself: data will exist independently of the metamodel, in a manner that allows it to be formed into a metamodel custom-suited to a particular need, at the time of that need.

Kind of hard to imagine, isn’t it?  After all, as Information Scientists, we think in terms of data structures… how data will be created, stored, manipulated, and consumed.  And ALL databases have a data model.  (The relationships between tables, fields, keys, indices, triggers, and constraints are, as concepts, an underlying RDBMS model.)  How, exactly, can we store data in a database without first creating a single model that describes the type of data we intend to store, how it will be stored, and how it will be related?
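To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical element names like "CRM" and "Retail") of EA data held as plain subject–predicate–object facts, with no schema declared up front:

```python
# A minimal sketch of schema-free EA data: plain subject-predicate-object
# triples, stored without declaring any metamodel in advance.
# All element names here are hypothetical examples.

facts = [
    ("CRM", "is_a", "system"),
    ("CRM", "supports", "Order Management"),
    ("Order Management", "is_a", "process"),
    ("Retail", "is_a", "business_unit"),
    ("Retail", "owns", "Order Management"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given (possibly partial) pattern."""
    return [
        (s, p, o) for (s, p, o) in facts
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# No table, column, or class for "system" was ever defined; the
# classification lives in the data itself and is applied at query time.
systems = [s for (s, p, o) in query(predicate="is_a", obj="system")]
```

Triple stores and RDF follow exactly this shape at scale; the point of the sketch is only that nothing about “system” or “process” had to be fixed before the first fact was recorded.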

Yet, we’ve seen the field of “unstructured data” blossom in the past decade with the emergence of search engines like Google and Bing.  These engines have brought ever-increasing sophistication to the notion of “answering human questions” from data that is not, fundamentally, structured into an information model.  That said, the most useful data in unstructured systems is still classifiable into complex types, and that classification allows the usefulness of that data to come through. 

For example, if I go to Google and search for a local department store, I could type “Kohls in Covington WA”.  I will get the results below.  Note that if I go to Bing and issue the same search, I will get nearly identical results.  In both cases, a model is applied.  The word “Kohls” is taken to mean “a department store,” and from that, we can add attributes.  After all, department stores have phone numbers, addresses, can appear on a map, can have items on sale, and, almost as an afterthought, can link to a web site.

The search results illustrate far more than just links to web sites.  The search engines are applying a classification to the otherwise unstructured information.  To add value, the question is understood, and results are produced, based on that classification.  The “result” is not just a web site, and not just “random unstructured data.”  The results are more useful because of this understanding.

[Screenshots: the Bing and Google results for this search, each applying a metamodel to the query]

Imagine that we have a search engine that works for business and information systems structural data instead of web sites.  We can “know” a great deal of information about business motivation, strategy, competition, business goals, initiatives, projects, business processes, IT systems, information stores, software instances, etc., all the way down to servers, network infrastructure, and telephone handsets.  But can we “apply the appropriate metamodel” to the data at the time when it is needed?

In other words, can we answer questions like these?

  • What systems need to be modified in order to improve competitiveness as expressed through the business goals of the Retail unit? 
  • What is the accumulated Return on Investment of the projects that have completed in IT in the past two years?
  • What gaps exist in the initiatives chartered to create a strategic response to the competitive threat posed by the Fabrikam corporation’s new product line?

Can we do it without pre-specifying a metamodel?

Folks in the second camp above will ask an obvious question here: why not catalog data according to a single super-dee-duper, one-size-fits-all metamodel?  After all, once you have the right metamodel, every one of these questions can be understood and answered.

Let’s parse that idea a little… What makes a metamodel “right”?  I would venture that a metamodel is not “right” or “wrong.”  It is simply “useful” for the purpose at hand… or not.  For example, sometimes I care about the distinction between a business process and a business capability.  Other times, I do not.  If my metamodel is static, I must always collect business data according to a single unified taxonomy, or I must always maintain two different taxonomies.  But the world is not so simple.  Sometimes, I need one.  Sometimes, I need two.  The metamodel depends upon the question I’m asking and the problem I need to solve.

In other words, the metamodel itself is a dependent variable.  Only the raw data, the business stakeholders, and the business concerns themselves, are independent.  All the rest is self-organizing and, here’s the problem, changes depending on the situation.  The structure, relationships, and important attributes of any one set of elements is particular to the problem that the stakeholder is solving.

So, I will re-ask my question: can we collect information in a manner, and understand it through a mechanism, that allows us to apply different metamodels to the data depending on the needs of the stakeholders?
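As one illustration of what a “yes” might look like, the sketch below (Python; all element names are invented for illustration) renders the same raw facts under two different view-time metamodels: one that distinguishes processes from capabilities, and one that collapses the distinction because the question at hand does not need it.

```python
# Sketch: the same raw facts rendered under two different view-time
# metamodels. Element names are hypothetical.

facts = [
    ("Order Capture", "is_a", "process"),
    ("Order Fulfilment", "is_a", "capability"),
    ("Billing", "is_a", "process"),
]

# View 1: a metamodel that distinguishes processes from capabilities.
def distinct_view(facts):
    view = {}
    for s, p, o in facts:
        if p == "is_a":
            view.setdefault(o, []).append(s)
    return view

# View 2: a metamodel for a question where the distinction does not
# matter; both types collapse into a single "activity" concept.
def merged_view(facts):
    merge = {"process": "activity", "capability": "activity"}
    view = {}
    for s, p, o in facts:
        if p == "is_a":
            view.setdefault(merge.get(o, o), []).append(s)
    return view
```

Neither view is “the” metamodel; each is generated from the raw data at the moment a stakeholder needs it.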

I think we can.  I think we must. 

  • Good points indeed.

    Minor catch is that there are several distinct questions here:

    - Can EA data be independent of the metamodel?

    - Can we collect information in a way that allows us to apply different metamodels?

    - [from your Twitter intro to this post] Can EA tools be rewritten to allow different metamodels for specific purposes?

    What you've said above does intersect with each of those questions, but often in different ways.

    Perhaps more important, a lot of other folks have already passed along this way (though admittedly not many of them from EA). The main point we could learn from them is the layering of metamodels, to metameta- and even metametameta- and beyond. For example, consider OMG's layering of MOF to UML to UML-based models - MOF is a metametamodel on which the UML metamodel is based. MOF in turn could be described by a metametametamodel such as Object-Role Modelling en.wikipedia.org/.../Object-Role_Modeling or an ontology-language such as RDF or OWL.
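    That MOF-style layering can be sketched in a few lines (Python; a deliberately crude approximation of the four-layer idea, not the actual MOF semantics):

```python
# Sketch of OMG-style meta-layering, using plain dicts. Each layer is
# described entirely in terms of the layer above it.

# M3 (meta-metamodel, MOF-like): the only built-in concept is "a named
# element that can be an instance of another element".
def element(name, instance_of=None):
    return {"name": name, "instance_of": instance_of}

# M2 (metamodel, UML-like): "Class" is defined as an M3 element.
clazz = element("Class")

# M1 (model): "Customer" is an instance of the M2 "Class" concept.
customer = element("Customer", instance_of=clazz)

# M0 (runtime instance): a concrete customer, an instance of "Customer".
alice = element("Alice", instance_of=customer)

def layer(e):
    """Count how many 'instance_of' hops lie above an element."""
    return 0 if e["instance_of"] is None else 1 + layer(e["instance_of"])
```

    The deeper layers change rarely, which is exactly why they make a better foundation for interchange than any single surface-layer metamodel.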

    So to answer your last question, IMHO we need our EA tools to support layered metamodels, where the entities are actually described in terms of a deeper meta[*]model, and displayable/orderable in terms of any appropriate surface-layer metamodels and models.

    Most modern EA toolsets allow us to change (or, more usually, extend) the base metamodel, but still structure the entities in terms of a single surface-layer metamodel - which means that model-interchange between toolsets or even individual instances of toolsets is often a nightmare, because inevitably everyone will configure their surface-layer metamodels in different ways. A few tools, such as the open-source Essential www.enterprise-architecture.org , work directly at the metametamodel layer, building a separate ontology for each project - which sounds like the right move at first, yet all but guarantees incompatibility with anything else. There's also a trade-off of usability versus rigour: an ontology tool such as Essential permits precise rigour, but is very difficult for non-specialists to understand - and the precision itself is often too much of an abstraction from the sheer messiness of the real world...

    One point here is that the deeper we go into the metamodel-stack, the more stable it is likely to be - and hence something upon which we could build a useful interchange-standard. There've been various attempts to do this in the EA space over the years, but they seem to have faded away into nothing - such as Open Group's ADML (Architecture Description Modelling Language), which doesn't appear to have been updated since 2002. Could Microsoft perhaps apply its considerable leverage to get these efforts going again?

    Thanks again for a usefully thought-provoking post, anyway.

  • Agree wholeheartedly with the points you make. Description-logic-based ontologies are better suited than meta-models for capturing enterprise architecture artefacts, in my opinion. A DL-ontology will allow for the representations you seek. Because, as you have rightly pointed out, there cannot be 'the' meta-model. [Any attempt to reach the ultimate meta-model will lead to a pure PJNF representation, totally useless for any practical application.]  The mechanism you seek will be a sort of inferencing engine acting on unstructured data which is annotated with an evolving DL-ontology. This engine will realise the views. Sadly there is no EA methodology as yet which advocates this. Partly because not many EA practitioners have reached your level of maturity and demanded this sort of mechanism.

    I sincerely hope this becomes mainstream thought soon.

  • Hello Tom,

    I am not the only voice within Microsoft that would be thrilled to see my company take up EA tools.  However, our traditional marketplace has been focused on broader tools, and there is not a lot of appetite for taking this space on at this time.  The glass is neither half-empty nor half-full... it is simply the wrong size.

    As to your note on meta-metamodels and the layering of models: I completely agree and said as much in a prior post [ http://bit.ly/eeKuO0 ].  That said, I'm suggesting that there could be a branch at an earlier stage, allowing the types to be defined in the EA tool and the relationships to be defined as part of the view mechanism, allowing specific reports to layer under each model.

    It is a minor distinction in the grand scheme, but a huge innovation in terms of adoption, because you won't need to get "enterprise-wide" agreement on a metamodel before bringing in a tool and attempting to use it.  You need only to produce a "segment" or "perspective" specific model to gain traction.

    Thank you for your careful reading and thoughtful feedback.

    --- Nick

  • I do agree with your third-way approach to meta-models.

    In large part, I think the one-model-to-rule-them-all crowd has conflated different categories and uses of modelling.

    I commented on ontology on your previous post. Good additional points made here.

    If we just take language as the starting point the entities in a model are the nouns, and the relationships are the prepositions and verbs (roughly).

    When the closed grammars and vocabularies of the current crop of meta-models are bound to a computer program the effect is like designing a language that only allows 52 sentences.

    I like to think of the idea of a meta-meta model as a semantic platform. XML is a perfect example of such. Generic XML can be 'applied' to build any number of well formed 'languages' of which OWL DL is one of a very large number.
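    A sketch of that idea (Python standard library only; the EA element names are invented for illustration) - the parser knows nothing about the vocabulary, which arrives entirely with the document:

```python
# Sketch: generic XML as a "semantic platform" - the same parser reads
# any vocabulary; the EA-specific element names below are hypothetical.
import xml.etree.ElementTree as ET

model_xml = """
<model>
  <capability name="Order Fulfilment">
    <supportedBy system="WMS"/>
  </capability>
</model>
"""

root = ET.fromstring(model_xml)
# Nothing in ElementTree knows what a "capability" is; the vocabulary
# is supplied by the document, not baked into the tool.
capabilities = [c.get("name") for c in root.iter("capability")]
```
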

    If we must have an 'industry standard' for modelling, it should be a simple, completely abstract language from which context-specific reusable meta-models can be easily built. This would allow automated translation. Data exchange. System integration. And so on. The context may be an organisation or an entire industry sector. I work in higher education, and while a University shares much with the world of corporations, there are some differences - particularly in the business architecture - that could do with a dedicated vocabulary.

    The tools would then encapsulate the abstract model only. The designer (architect) would create the local model (or reuse an available one), move up the abstraction layers (or is it down?), and populate it with the actual entities found in the described organisation.

    But wait... we already have completely abstract languages from which context-specific (and computable) meta-models can be built. In fact, there are several. Wouldn't it be nice if, say, a big consortium, made up of lots of big companies, universities, and government departments and agencies, considered the possibilities. :)

  • I appreciate that meta-models are extremely useful tools for a number of disciplines including EA, but only as a means of being able to build appropriate models, surely!  By appropriate I mean models that are useful and valuable to the client organisation to identify, aid, enable and drive EA activities from which they'll benefit.  

    What concerns me is the fascination, nay - obsession, that some in EA functions have with meta-models.  The navel-contemplating nature of meta-model design in most cases is not creating value to the organisation employing that person... unless you're a consultancy!

    As for the meeting where someone started expressing concern about their meta-meta-model: I almost lost the will to live.

    Ask yourself this:

    Can you do effective EA without a Meta-model?

    If you need a meta-model can't you find one off the shelf that you can implement with minor tweaks if necessary?

    Is contemplating your navel - sorry - designing your own synthesised, specific, finely honed meta-model - stopping you from doing something valuable like actual EA work of value to your organisation?

    I'm reminded of an excellent paper called "Death by UML Fever" by Alex E. Bell of Boeing, 2004.  Easy to find on the net.  It amusingly, yet very effectively, discusses how implementing UML can go wrong for so many reasons, each of these defined as a 'fever' in one of four groups of 'meta-fever'.  Some of these fevers have clearly transmuted and merged to be deadly to EA hosts too!
