Stuart Kent - Software Modeling and Visualization

May, 2005

  • stuart kent's blog

    Interesting observations on XMI

    • 3 Comments

    I just came across this post about XMI from Steven Kelly over at Metacase. In particular, he quotes some figures about usage of the various XMI versions: he's conducted a web search for XMI files out there on the web. The first thing that struck me was how few there are; the second, how very few (34) use the latest version (2.0), released in 2003.

    Steven also makes the observation that XMI is just an XML document containing sufficient information to describe models. Provided your tool stores its models in XML files and/or provides API access for creating models, it's no big deal to write your own importer in code or using XSLT. And if you want to import an XMI file into a domain-specific tool, you'd have to do this in any case, because it is very likely your model will be full of stereotypes and tagged values which will need special interpretation that would not be provided by an off-the-shelf importer.
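    To make that point concrete: since XMI is just XML, a hand-rolled importer can be a few lines of standard XML parsing. Here's a minimal sketch in Python; the element and attribute names are illustrative of the XMI 2.0 shape, not taken from any particular tool's output, which will vary in its namespaces and nesting.

```python
import xml.etree.ElementTree as ET

# Illustrative XMI 2.0-style fragment; real tools differ in the
# namespaces, element names and nesting they emit.
XMI = """<?xml version="1.0"?>
<xmi:XMI xmlns:xmi="http://www.omg.org/XMI" xmlns:uml="http://www.omg.org/UML">
  <uml:Model name="Rentals">
    <ownedMember xmi:type="uml:Class" name="Car"/>
    <ownedMember xmi:type="uml:Class" name="Rental"/>
  </uml:Model>
</xmi:XMI>"""

def class_names(xmi_text):
    """Pull out the names of the elements typed as uml:Class."""
    root = ET.fromstring(xmi_text)
    return [el.get("name")
            for el in root.iter("ownedMember")
            if el.get("{http://www.omg.org/XMI}type") == "uml:Class"]

print(class_names(XMI))  # ['Car', 'Rental']
```

    From there it's a small step to map what you've extracted onto your own tool's API or file format, interpreting stereotypes and tagged values however your domain demands.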

    In another post, Steven talks about XMI[DI], which also supports diagram interchange. Another fact: a 20-class diagram takes nearly 400KB. That was a bit of a surprise to me. But it's his observation that "Standards are great, but I think they work best when they arise from best practice based on good theory, not when some committee tries to create something without ever having done it in practice" that strikes a chord with me: it's what I've called premature standardization.

  • stuart kent's blog

    Reflections on the spec process

    • 1 Comment

    Back in the days when I was an academic and researcher, I used to teach Software Engineering. There are many interpretations of this term, but the focus in my classes was on turning a set of vague requirements into a tangible, detailed spec from which you could reliably cut code. I didn't go much for teaching the textbook stuff - waterfall versus iterative and all that - but rather encouraged students to try out techniques for themselves to see what works and what doesn't.

    Perhaps unsurprisingly given my background, I preached a modelling approach. We'd start out scripting and playing out scenarios (yes, we would actually role play the scenarios in class). I didn't go in for use case diagrams - never really understood, still don't, how a few ellipses, some stick men and arrows helped - but I guess the scenario scripts could be viewed as textual descriptions of use cases. We'd then turn these scripts into filmstrips. For the uninitiated, these are sequences of object diagrams (snapshots), illustrating how the state of the system being modelled changes as you run through the script. I learnt this technique when teaching Catalysis courses for Desmond D'Souza - indeed, my first assignment was to help Alan Wills, the co-author of the Catalysis book, and now my colleague, teach a week-long course somewhere in the Midlands. The technique is great, and I still swear by it as the way to start constructing an OO model. From the filmstrips, we'd develop an OO analysis model, essentially an OO model of the business processes. This was class diagrams, plus invariant constraints written in English or more formally, plus lists of actions, plus some pre/post specs of these actions. Then would come the job of turning this model into an OO design model for the (non-distributed, self-contained) program written in Java. And the scripts and accompanying filmstrips could be turned into tests and test data.
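    The filmstrip idea can be rendered as plain data: each frame is a snapshot of the objects in the system as the scenario script plays out, and the model's invariants should hold at every frame. The sketch below is my own minimal encoding, not Catalysis notation; the toy DVD-rental scenario and the invariant are invented for illustration.

```python
def invariant(snapshot):
    """Model invariant: every rented dvd is held by exactly one member."""
    rented = sorted(d["id"] for d in snapshot["dvds"]
                    if d["status"] == "rented")
    held = sorted(d for m in snapshot["members"] for d in m["holding"])
    return rented == held

# A two-frame filmstrip for the script: "member m1 rents dvd d1".
filmstrip = [
    # Frame 1: d1 on the shelf, m1 holds nothing.
    {"dvds": [{"id": "d1", "status": "on-shelf"}],
     "members": [{"id": "m1", "holding": []}]},
    # Frame 2: after the 'rent' step of the script.
    {"dvds": [{"id": "d1", "status": "rented"}],
     "members": [{"id": "m1", "holding": ["d1"]}]},
]

assert all(invariant(frame) for frame in filmstrip)
```

    The same frames double as test fixtures, which is exactly the scripts-and-filmstrips-into-tests step described above.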

    Well, that was the theory, anyway. In practice, only a very few students really got it end-to-end, though most picked up enough to still do well in the exam. Reflecting on it now, here are some of my observations:

    1. Doing this kind of detailed modelling is fiendishly difficult without tool support. As soon as you've got a few filmstrips and a few class diagrams, keeping all of them consistent by hand is hard work. Many students would struggle just to understand what it means for all the artefacts to be consistent.
    2. Many students tried to skip the filmstrip stage - just went straight for the class diagram. Very often this ended up with class diagrams which were nothing more than some boxes and lines annotated with domain terminology, scattered randomly in many cases, or so it seemed. Doing the filmstrips meant you could connect the meaning of the diagram to something tangible - you can look at an object in an object diagram, point at something tangible, say a car or dvd (yes, rental and resource allocation systems were a favourite), then say something like 'this object represents that thing'.
    3. Moving from the OO analysis model to the OO design model was always hard. One of the main problems was that both would be expressed using the same notation (class diagrams), but the notation was interpreted differently in each kind of model. The meaning of the notation in the analysis model boiled down to being able to answer the question whether any given object diagram represented a valid instance of the model expressed through the class diagram. For a design model, the interpretation was in terms of what OO program the class diagram visualized. Associations were particularly difficult, especially when a tool (TogetherJ as it happens) was used to form the OO designs - the interpretation the tool gave was as a pair of private attributes in the Java program, not exactly appropriate for an OO analysis model.
    4. Developing scenarios and scripts always delivered value. It helped get into the domain and understand it much better.
    5. You always learnt that you'd got something wrong in the earlier work, as you got more detailed. This continued to happen throughout coding. Given point 1, you had to make the decision as to whether or not to keep the earlier modelling up to date. Nearly always the decision was not to, which meant that coming back to extend, alter, or even document the behaviour of the program was much harder, as you couldn't rely on any of the models you'd invested in initially. There was also a balance to be struck on how much modelling work to do up front, and what should be left to code. In group situations, you often found that non-coders would do the spec work (and these were often the weaker students, so not very good at abstraction or modelling either), then those good at coding would get on with the development, largely ignoring the specs because they weren't much good. So the specifiers would end up in a state of analysis paralysis, the developers would observe this and just plough on, and there would be a total breakdown in communication.
    6. Even if there was not a breakdown of communication, and developers really did implement the spec, the UI rarely got specified. It was left up to the developer to invent it (a failure on my part here, as I never really mentioned UI).
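    The analysis-model reading in point 3 can be phrased as an executable question: is a given object diagram a valid instance of the class diagram? A sketch of that check, under the simplest case of one association multiplicity - the dict encoding of diagrams here is invented for illustration:

```python
def valid_instance(class_diagram, object_diagram):
    """Check every association's multiplicity bounds against the links."""
    for assoc in class_diagram["associations"]:
        lo, hi = assoc["multiplicity"]  # allowed links per source object
        for obj in object_diagram["objects"]:
            if obj["class"] != assoc["source"]:
                continue
            links = [l for l in object_diagram["links"]
                     if l["assoc"] == assoc["name"] and l["from"] == obj["id"]]
            if not (lo <= len(links) <= hi):
                return False
    return True

# Class diagram: a Member holds 0..3 Dvds.
diagram = {"associations": [
    {"name": "holds", "source": "Member", "target": "Dvd",
     "multiplicity": (0, 3)}]}

# Object diagram: one member holding one dvd - a valid instance.
snapshot = {"objects": [{"id": "m1", "class": "Member"},
                        {"id": "d1", "class": "Dvd"}],
            "links": [{"assoc": "holds", "from": "m1", "to": "d1"}]}

assert valid_instance(diagram, snapshot)
```

    Under the design-model reading, the same "holds" association instead dictates program structure - e.g. a private field on the Member class - which is precisely the mismatch the TogetherJ interpretation exposed.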

    I now find myself in a role in which most of my time is spent doing what I was trying to teach, though there are a few differences:

    • It's for real. The deadlines are real, the resources are fixed and limited, you do what brings most value, not what sounds good in theory.
    • The domain isn't business systems, but, well, tools and frameworks to support the development of domain specific languages and tools to support software development.
    • We're not developing a self-contained OO program, but rather code that has to integrate with the various components of Visual Studio. There are also XML config files and text templates for code generation.

    Here are my observations from the experience so far:

    1. I still find the development of scenarios and scripts an essential and valuable part of the spec task.
    2. I don't do any OO modelling, at least not until I get to spec APIs, and then it's OO design models, not OO analysis models, and then only sometimes.
    3. Instead of filmstrips, I develop storyboards. These can be useful even when developing an API - the storyboard takes you through what the developer has to do (which for us is some experience in Visual Studio) in order to build something useful using that API.
    4. The detailed specs are written in English, peppered with UI snapshots and code fragments. Occasionally the odd state diagram gets used. For example, if you are familiar with our DSL Tools, our own code generators are specified in terms of what they get as input (a designer definition and a domain model), the expected behaviour of the designer that gets generated, and a spec of the API that's being targeted. An OO model would be inappropriate here because (a) it's not the most natural way of expressing this behaviour and (b) the code generators are implemented as text templates anyway, not a plain OO program, so an OO model wouldn't really help.
    5. As we get to bootstrap our tools using the tools we're building, more parts of the specs can be expressed as formal models. Again, these are not general purpose OO models, rather they are definitions of languages. So, for example, the specs of the designer definition format and the domain model designer involve the formal definition of a domain model, which is built, as you've probably guessed, using the domain model designer. Developers can then take these, tweak them and generate code from them. As we progress we want to bootstrap more parts of our own tooling using this approach. Managing the links between all the spec artefacts (scenarios, scripts, storyboards, detailed specs) is hard work. We've developed a database to help with the tracking.
    6. The specs document decisions about what is being built - they require input from all disciplines on the team and don't replace communication and discussion between those team members. They should be kept up to date, so when it comes to extending, altering, documenting or further testing the behaviour, the starting point is not an out of date record of decisions. It doesn't matter if they are incomplete before development begins. Indeed, this can save much 'trying to work out in your head' time, when it would be better to work it out in the code.
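    The text-template style of generation mentioned in point 4 can be sketched in a few lines. The template syntax and the shape of the domain model below are stand-ins of my own, not the DSL Tools formats; the point is just the model-in, code-out shape of the generator.

```python
from string import Template

# Stand-in templates producing C#-ish output from a domain model.
CLASS_TEMPLATE = Template(
    "public class $name\n"
    "{\n"
    "$fields"
    "}\n")
FIELD_TEMPLATE = Template("    private $type $name;\n")

def generate(domain_model):
    """Expand one class definition per domain class in the model."""
    out = []
    for cls in domain_model["classes"]:
        fields = "".join(FIELD_TEMPLATE.substitute(f)
                         for f in cls["properties"])
        out.append(CLASS_TEMPLATE.substitute(name=cls["name"],
                                             fields=fields))
    return "\n".join(out)

model = {"classes": [{"name": "Rental",
                      "properties": [{"type": "string", "name": "member"}]}]}

print(generate(model))
```

    Spec-wise, what matters is exactly what's listed above: the input format (the domain model), the expected output, and the API the generated code targets - none of which an OO model of the generator itself would capture naturally.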

    If I were back teaching again, I think I would focus much less on specific notations, and much more on the need to track scenarios through to detailed features, and to have coherent specs of the details that communicate the decisions made. I'd also look forward to the prospect of greater automation and use of code generation and software factory techniques. If you've got the right domain-specific languages, then models expressed in those languages can replace reams of English spec, and code generators driven by those models can replace a lot of hand coding. However, they have to be languages matched to the problem, and I suspect that for most systems there's still going to be old-fashioned spec work to do.

    [edited soon after original post to correct some formatting issues]

  • stuart kent's blog

    Wojtek has started blogging

    • 0 Comments

    I see that the architect of the Guidance Automation Toolkit (GAT), Wojtek Kozaczynski, has started blogging. I worked with Wojtek closely for a couple of months at the inception of GAT (fun it was too), and have been continuing to work with him and his team to merge our text templating technologies - the next version of GAT will contain the new, merged engine, as will the next version of DSL Tools (should be available by the end of the month, and will work with VS2005 Beta2). So soon folks will be able to install both GAT and DSL Tools, and it will be very interesting to see how they get used in combination.

    Anyway, his second post is a pocket history of how GAT came to be and makes for an interesting read.

    My last post was about this toolkit, pointing at a webcast that gave a demo. But at that point there was no download. Well, now there is. Just visit the GAT workshop site.
