Stuart Kent - Software Modeling and Visualization

March, 2005

  • stuart kent's blog

    More on consuming models

    • 1 Comment

    A set of interesting questions was posted to the DSL Tools newsgroup recently, so I've decided to reply to them here. The text from the newsgroup posting appears like this.

    Hello -- I quickly ran through the walkthroughs and worked a little with the beta version of your tools, and they are neat. Having designed a modeling language and built a few models, one thing which I would like to do is 'execute' those models. I want to write a C# plugin for Visual Studio which uses an automatically-generated domain-specific API to query and perhaps modify the models programmatically. Based on what the plugin finds in the models, it can do some other useful work. Let's say I want to do some domain-specific analysis, where there isn't any existing analysis framework which correctly supports my domain. In that case, I might as well roll my own analysis framework as a plug-in which is integrated with VS's DSL tools. What I don't want to do is serialize the models to XML and have my independent tool read in the XML file, create an internal representation of the models in memory, and then do stuff. It's a waste of time. I want to integrate my analysis tool with VS and access my models...directly.

    These are exactly the kinds of scenario we are envisaging. As I discussed in a past entry, creating models is not much use if it's difficult or impossible for other tools to consume them.

    So, my hope is:

    1. that VS is storing the models in memory in some kind of repository where my plugin can get at them quickly, and
    2. that the repository exposes both domain-specific and generic hooks for CRUD operations on my models, and
    3. there is some way for me to write a plug-in which can integrate with VS to create an integrated domain modeling environment. Sort of like using VS as the foundation for my own domain-specific modeling tool.

    Will this be supported? If so, can you publish a walkthrough about this? The models are only worth so much if they're only good for xml/text/code generation --software isn't the only thing which needs to be modeled.

    The models are held in memory (we call it the in-memory store). As well as giving access to CRUD operations, this supports transactional processing and event firing. We also generate domain specific APIs from domain models - indeed, you can see what these APIs look like if you look at e.g. XXXX.dmd.cs generated from the XXXX.dmd using the template XXXX.dmd.mdfomt in a designer solution. These APIs work against the generic framework, thus allowing both generic and domain specific access to model data. However, we still have some work to do to make all this easily available, including making some improvements to the generic APIs and doing some repackaging of code. The goal would be that you'd be able to use the dll generated from a domain model to load models into memory from XML files, access them through generic and domain specific APIs, and then save them back to XML files. We will also be overhauling the XML serialization, so that models will get stored in domain specific, customized XML - see Gareth's posting for some details around this.
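    To make this concrete, here is a sketch (in C#, the language the questioner mentions) of what plugin code against the in-memory store might look like. This is illustrative only: the Store and Transaction types follow the store-plus-transactions shape described above, but the Task class, its members, and GetAllTasks are hypothetical stand-ins for whatever strongly typed API would be generated from your own domain model.

    ```csharp
    // Hypothetical sketch, not the shipped API. Store/Transaction mirror the
    // in-memory store described above; Task stands in for a generated class.
    using System;
    using Microsoft.VisualStudio.Modeling;

    public class OverdueTaskAnalyzer
    {
        public void Run(Store store)
        {
            // Domain-specific, strongly typed read access via the generated API.
            foreach (Task task in Task.GetAllTasks(store))
            {
                if (task.Deadline < DateTime.Now && !task.IsOverdue)
                {
                    // Writes go through a transaction against the in-memory store.
                    using (Transaction tx =
                        store.TransactionManager.BeginTransaction("Flag overdue task"))
                    {
                        task.IsOverdue = true;
                        tx.Commit();  // events fire, so other tools can react
                    }
                }
            }
        }
    }
    ```

    The point of the sketch is the division of labour: generic services (store, transactions, events) come from the framework, while the strongly typed surface (Task, Deadline, IsOverdue) is generated from the domain model.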

    As for VS plugins, these will be supported in some way, for example via the addition of custom menus to your designer, or by writing 'standalone' tools integrated into VS making use of the existing VS extensibility features.

    On the issue of timing, the API work will happen over the next few months, the serialization work after that. We will continue to put out new preview releases as new features are introduced. Walkthroughs, other documentation and samples will be provided with the new features.


    Next Release of DSL Tools

    • 3 Comments

    Now we've got the March release out of the door, I'm sure folks are going to ask soon what's in the next release and when to expect it.

    First the 'when' bit. As Harry Pierson has already indicated, we expect the when to be shortly after VS2005 Beta2 is released, where 'shortly after' = a small number of weeks. At this point we'll be moving from VS2005 Beta1 to VS2005 Beta2.

    Now the 'what'. We're focusing on two feature areas next release (at least that's the plan, usual disclaimers apply):

    • Improvements to the template-based code/text generation framework, including a richer syntax that lets you express more in your templates.
    • Better support for containment hierarchies, through (a) compartment shapes (nearly everyone we've talked to has asked for this) and (b) a richer experience in the explorer and properties grid, including the ability to create elements through the explorer.

    The above should mean that users will be far less restricted than they are at present in the kind of designer they can build.

    We're also making an investment on quality in this cycle, ramping up the automated testing & fixing a whole swathe of bugs.

    And after the next release?

    Well, here are some of the features in the pipeline: richer notations, constraints and validation, a proper treatment of serialization in XML (see this entry from Gareth), better hooks for code customization of generated designers, deployment of designers to other machines, multiple diagrams viewing a model, better hooks for writing your own tools to consume model data (as explained in this post), ...


    Validation in DSL Tools

    • 2 Comments

    In his announcement of the March release of DSL Tools, Gareth mentioned that we now have a designer definition (DD) file validator. This validates the DD file for anything that is not caught by XSD validation, including whether the cross references to the domain model are correct. It also validates those aspects of the domain model which impact the mapping of the designer definition to the domain model. For example, it will check that the XML Root class is mapped to the diagram defined in the DD file. Errors and warnings appear in the Visual Studio errors window whenever you try to generate code from the DD file (i.e. run any of the code generators in the Designer project), and disappear the next time you try once the errors have been fixed.

    You may not have realized this, but the domain model designer also includes some validation. It gets invoked whenever you try to save the file, or you can invoke it from the ValidateAll context menu. Try, for example, giving two classes the same name and then invoking ValidateAll.

    As Gareth indicated, this validation is implemented on top of a validation framework, which we will be leveraging to let users include validation as part of their own designers, or to call it directly through the API - from within a text generation template, for example, as a precondition to code generation. We'd be interested to hear from users what they would do with such features, whether they consider this an important set of features (the customers we have talked with so far do), what kind of authoring experience they would expect or want, and any suggestions for other features in this general area (for example, how else would you like validation to be exposed through the UI of a designer?). You can provide this feedback as comments to this posting, as comments to Gareth's post, through the DSL Tools newsgroup, or as suggestions through the feedback center.
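    As a thought experiment, authoring a rule on top of such a framework might look roughly like the following. Everything here is a guess at the framework's shape, not the shipped API: the [ValidationMethod] attribute, the ValidationContext type and its LogError call are illustrative, and DomainClass stands in for a generated class. The rule mirrors the duplicate-name check mentioned above.

    ```csharp
    // Hypothetical sketch of a custom validation rule; attribute, context type
    // and LogError signature are illustrative, not the shipped API.
    public partial class DomainClass
    {
        [ValidationMethod]
        private void CheckUniqueName(ValidationContext context)
        {
            foreach (DomainClass other in this.Model.Classes)
            {
                if (other != this && other.Name == this.Name)
                {
                    // Reported errors would surface in the Visual Studio errors
                    // window, just like the DD file validator's output.
                    context.LogError(
                        "Two classes have the same name: " + this.Name,
                        "DM0001", this, other);
                }
            }
        }
    }
    ```

    The attraction of an attribute-based scheme like this is that the same rule could be triggered on save, from a ValidateAll menu, or programmatically as a precondition to code generation.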


    March release of DSL tools

    • 2 Comments

    As posted by Gareth.


    Building the right system, faster and more reliably

    • 1 Comment

    I've been pondering what the fundamental problems are that we and others are trying to solve with DSLs, Software Factories, Model Driven Software Development, and the like. I've distilled it down to two key problems:

    1. Automating rote tasks that are tedious and time-consuming to perform, and error-prone when done manually. I.e. How can we build (and evolve) more reliable systems, faster?
    2. Establishing and maintaining the connection between business requirements and the systems built to help meet those requirements. I.e. How do we ensure that the right system is built (and continues to be the right one)?

    DSLs help with the first of these because they let you codify information that would otherwise be scattered and repeated across many development artefacts. The idea is that to change that information you change it in a single domain specific viewpoint or model, and the changes are propagated to all the artefacts that would otherwise need to be changed by hand. Of course the interesting problem here is how the propagation is performed, and one common approach is to regenerate the development artefacts by merging the information in the domain specific model with boilerplate. This works best if you can separate out the generated aspects of artefacts from the hand written aspects, for example by using C# partial classes. In this way you avoid the task of copying boilerplate code and making changes in designated places, and when things change you avoid multiple manual updates.
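    A minimal illustration of the partial-class split (the Customer class and its members are made up for illustration):

    ```csharp
    // Customer.generated.cs -- regenerated from the domain specific model;
    // any hand edits here would be lost on the next regeneration.
    public partial class Customer
    {
        public string Name;        // generated from the model
        public string PostalCode;  // generated from the model
    }

    // Customer.cs -- the hand-written half; it lives in its own file, so it
    // survives regeneration untouched.
    public partial class Customer
    {
        public bool HasValidPostalCode()
        {
            return PostalCode != null && PostalCode.Length == 5;
        }
    }
    ```

    When the model changes, only the generated file is rewritten; the compiler merges the two halves back into one class, so no manual copying or re-editing is needed.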

    If it is not possible to cleanly separate the generated aspects from the hand written ones then more sophisticated synchronization techniques will be required, but I'm not going to go into that now.

    And once you start thinking in this way, you then discover you can have multiple domain specific viewpoints contributing different aspects to your development artefacts. And then you discover that you can relate these viewpoints, synchronizing information between them and generating one from another. You're treading a path towards software factories.

    Domain specific models created to help solve the first problem also tend to be more abstract and provide new perspectives on the system. They hide detail and can reveal connections that are difficult to find by looking directly at the development artefacts, especially when those models are visualized through diagrams. This helps with the second problem: such models provide viewpoints on the system which are often easier to connect to business requirements. One can then go a step further, and build new viewpoints specifically focused on expressing and communicating the business requirements, and set up connections between those viewpoints and viewpoints of the system which can be monitored and synchronized as one or the other changes.

    We see customers already leveraging such techniques in their development processes, codifying their DSLs using XML or UML + stereotypes & tagged values, for example. They also tell us they are having problems with these technologies, and it is those problems that we're trying to address with DSL tools. I'll go into more depth on this, and reveal more of what we're planning to help solve these problems, in future posts.


    Interview with Steve Cook

    • 1 Comment

    No doubt lots of my colleagues will point you at this, including Steve himself.

    But here is a great interview with Steve Cook, giving lots of detailed answers to questions about software factories, DSLs, MDA and UML.
