Stuart Kent - Building developer tools at Microsoft - @sjhkent

  • stuart kent's blog

    DSL Tools beyond VS2008

    • 22 Comments

    I promised a while ago to publish a roadmap for what we're doing with DSL Tools, post VS2008. Now that VS2008 and the VS2008 SDK have just shipped (thanks Gareth for providing a post which points at both announecements) now seems a good time honour that promise.

    There are two main themes in how we expect DSL Tools to develop in the future:

    • Evolve the graphical designer and code generation platform to provide more capabilities, to integrate better with other parts of the VS platform, to componentize it so it is easier to choose which parts you want to use, to better support extensibility and customization, and to improve performance.
    • Generalize the authoring approach pioneered in DSL Tools, to the creation and maintainence other kinds of Visual Studio extension, and integrate the DSL authoring experience with that. These are the tools to build tools that I've used as a tagline for my blog, although, in the context of Visual Studio, VSX Tools might be a better term.

    I mentioned the second theme soon after the DSL Tools team joined the Visual Studio platform team. That is still in our sights and will be part of our roadmap for the VS SDK as a whole, which we'll publicize when it's ready. For the remainder of this post, I’ll focus on the first theme.

    Roadmap for VS codename Rosario

    Below is a list of features we have in the backlog for Rosario. We’re not promising to complete all of them, but expect to complete a significant subset. Some of the features may appear earlier in a service pack for VS2008.

    • DSL extensibility. This enables you to extend a designer that has already been deployed – i.e. without cracking open the source. An extension defines a set of domain class extensions (adding new domain properties, new relationships, new domain classes), which appear on elements of that domain class when the extension is enabled. An extension also defines a root domain class and a scope. You activate an extension on an element of the root domain class, and then all elements in scope also get that extension enabled. Extension activation is dynamic through the UI of the designer (we’ll provide some standard UI, and the base designer author will be at liberty to write their own custom UI). Multiple extensions may be activated on the same element. The root domain class for an extension may be the root of the domain model of the DSL being extended, which then allows an extension to apply to the whole model. Extensions may also define additional validation constraints, in the usual way. I’ll write a separate blog entry on this with more detail a bit later. The authoring experience for extensions will require support for referencing one dsl definition from another, which will also benefit customers who want to build a set of designers that share some parts of their definition (e.g. a common core set of domain classes).
    • Support for databinding of domain model to Winforms. Although it is possible to create a custom editor with a placeholder for you to write your own forms-based editing surface, and it is also possible to add new tool windows to supplement the graphical design surface, there is currently no support for easily binding the form to underlying model elements. This feature involves generating the appropriate APIs from a domain model so that the standard Winforms databinding experience can be used.
    • Nested shapes. Although there is some support for nested shapes, it is harder work than it should be to get them to work. This feature involves improving the overall experience and fixing some bugs in the runtime which currently need to be worked around.
    • Various fit & finish items, for example:
      • Print preview. Enable the print preview menu for designers built with DSL Tools, so that you do not have to print to a file in order to preview the page layout. Page setup already provides some control on how to layout the pages.
      • Search and replace. Enable search and replace to work with designers built with DSL Tools. You should be able to search for a text string and replace that string throughout the model.
      • Sticky tools on the toolbox. Once a tool is sticky, you don't need to keep going back to the tool to use it repeatedly on the design surface.
      • Standard layout options. The underlying framework supports a number of layout options. This feature would surface it through some standard UI in the generated designer.
      • Pan/zoom control. A pan/zoom control provided as part of the standard UI in the generated designer, rather than having to custom code this as you do now.
    • Provide better support for orchestrated code generation. At present we have a button in the solution explorer for executing all templates in a solution. It is possible, with a little custom code, to hook templates up to a menu command (there's an example in our book), and then write code which does things like apply the same template multiple times to a model, generating a new file for each application. One option for improving this (though we're not fully decided yet) is to hook code generation into the build system, using MsBuild to coordinate it, and make it easy to write custom build actions that do the kind of things you can do when you launch code generation off a menu command. 

    Post-Rosario

    The two main investments we’re considering post-Rosario are:

    • WPF-based design surface. We'd like to replace the current graphical design surface with one founded on WPF (WIndows Presentation Foundation). Not only will this open up many new possibilities in the quality of visualization that one can achieve, we also believe it will make it much easier to customize the graphical notation. At the same time, we'd take the opportunity to componentize the architecture, so that it would be possible to bind a graphical design surface built on top of this component to other data sources, such as XML or a database.
    • First class support for cross-referencing between different models and multiple views. This has been requested many times by customers, although it's also interesting to note that many customers have been able to achieve their scenarios by writing custom code.

    Well, that's all for now. Please provide your feedback on this. What do you like? What don't you like? What's on your wish list?

  • stuart kent's blog

    Domain Specific Modelling. Is UML really the best tool for the job?

    • 13 Comments
    This is a reaction to a recent posting by Grady Booch on his  blog (May 21st 2004, “Expansion of the UML“). Before honing in on particular statements, here's a good chunk of the posting to set some context:

     

    "I was delighted to see today's report that Sun has announced support for the UML in their tools. This comes on the heels of Microsoft telegraphing their support for modeling in various public presentations, although the crew in Redmond seem to trying to downplay the UML open standard in lieu of non-UML domain-specific languages (DSL). […] There is no doubt that different domains and different stakeholders are best served by visualizations that best speak their language - the work of Edward Tufte certainly demonstrates that - but there is tremendous value in having a common underlying semantic model for all such stakeholders. Additionally, the UML standard permits different visualizations, so if one follows the path of pure DSLs, you essentially end up having to recreate the UML itself again, which seems a bit silly given the tens of thousand of person-hours already invested in creating the UML as an open, public standard."

     

    Let's start with the statement "There is no doubt that different domains and different stakeholders are best served by visualizations that best speak their language". This seems to imply that a domain specific-language is just about having a different graphical notation - that the semantics that underpin different DSLs is in fact the same - only the notation changes. This view is further reinforced by the statement "but there is tremendous value in having a common underlying semantics model for all such stakeholders".

     

    How can one disagree with this? Well, what if the semantic model excludes the concepts that the stakeholders actually want to express? If you examine how folks use UML, especially those who are trying to practice model driven development, you'll see that exactly the opposite is happening. Model driven development is forcing people to get very precise about the semantics of the models they write (in contrast to the sometimes contradictory, confusing and ambiguous semantics that appears in the UML spec). This precision is embodied in the code generators and model analysis tools that are being written. And, surprise, surprise, there are differences from one domain to another, from one organization to another. Far from there being a common semantics, there are significant and actual differences between the interpretations of models being taken. And how are these semantic differences being exposed notationally? Well, unfortunately, you are rather limited in UML with how you can adapt the notation. About all you can do is decorate your diagrams with stereotypes and tagged values. This leads to (a) ugly diagrams and (b) significant departure from the standard UML semantics for those diagrams (as far as this can be pinned down).

     

    I speak from experience. I once worked on the Enterprise Application Integration UML profile, which has recently been ratified as a standard by the OMG. The game here was to find a way of expressing the concepts you wanted, using UML diagrams decorated with stereotypes and trying to align with semantics of diagrams as best you could. It boiled down to trying to find a way of expressing your models in a UML modelling tool, at first to visualize them, and then, if you were brave enough, to get hold of the XMI and generate code from them. So in the EAI profile, we bastardized class diagrams to define component types (including using classes to define kinds of port), and object diagrams were used to define the insides of composite components, by representing the components and ports as objects, and wires as links. Now, you can hardly say that this follows the "standard" semantics of UML class diagrams and object diagrams.

     

    And this gets to the heart of the matter. People are using UML to express domain-specific models, not because it is the best tool for the job, but because it saves them having to build their own visual modelling tool (which they perceive as something requiring a great deal of specialist expertise). And provided they can get their models into an XML format (XMI), however ugly that is, they can at least access them for code generation and the like. Of course, people can use XML for this as well, provided they don't care about seeing their models through diagrams, and they are prepared to edit XML directly.

     

    So, rather than trying to force everyone to use UML tools, we should be making it easy for people to build their own designers for languages tailored to work within a particular domain or organization's process, not forgetting that these languages will often be graphical and that we'll want to get at and manipulate the models programmatically. And this does not preclude vendors building richer designers to support those (often horizontal) domains where it is perceived that the additional investment in coding is worthwhile. Indeed, Microsoft are releasing some designers in this category with the next release of Visual Studio.

  • stuart kent's blog

    DSL Tools in Visual Studio 2010

    • 11 Comments

    The first public CTP of Visual Studio 10

    Anyone attending or watching the goings on at Microsoft's Professional Developers Conference (PDC) will have heard about the next version of Visual Studio.

    If you were at PDC, you will have received a VPC containing a Community Technology Preview of Visual Studio 10. If you were not, you might know that you can download it at: Visual Studio 2010 and .NET Framework 4.0 CTP.

    What about the DSL Tools?

    If you are interested in the DSL Tools, and try this CTP, you might be disappointed not to find anything new in DSL Tools. This doesn't mean we haven't been busy. In fact we've been developing a number of highly-requested new features, but unfortunately did not get these integrated into this CTP release. We'll share them with customers in our next preview release.

    Also, in the CTP you'll find a suite of new designers from Team Architect, including a set of UML designers. These have all been built using DSL Tools, and some of the new features we're delivering are directly to support this effort.

    The new features

    So what are the new features then? Below is a summary. They've been driven by a combination of supporting internal teams such as Team Architect, and responding to customer feedback. We'll blog more about these features over the coming weeks, including, we hope, publishing some videos demo'ing them as a taster whilst you wait for the next preview.

    Dsl Libraries. Share fragments of DSL definitions between different designers.

    Dsl extensibility. Extend the domain model and behavior for a DSL after it's been deployed, including DSLs shipped by a third party.

    Readonly. Selectively switch off the ability to edit models and model elements in a designer.

    Forms-based UI. Easily bind models to winforms and WPF-based forms UI. IMS now implements the necessary databinding interfaces.

    Modelbus. A new piece of platform to support cross-referencing between models and interaction between designers. This has been one of customers' main requests.

    T4 precompile. Precompile text templates so that they can be deployed for use on machines that do not have VS installed. (This applies to scenarios where text templates take inputs from data sources other than DSL models. We've had requests from internal teams to use text templates in scenarios which don't use models created from DSLs.)

    UI enhancements:

    • Moveable labels on the connectors (moveably can be activated
    • Sticky tools in the toolbox
    • Quick navigation and editing of compartments
    • Copy and paste of selection as BMP and EMF images
    • Copy and paste of elements in the same or in different diagrams
  • stuart kent's blog

    DSL Tools and Oslo

    • 9 Comments

    The Oslo modeling platform was announced at Microsoft's PDC and we've been asked by a number of customers what the relationship is between DSL Tools and Oslo. So I thought it would be worth clearing the air on this. Keith Short from the Oslo team has just posted on this very same question. I haven’t much to add really, except to clarify a couple of things about DSL Tools and VSTS Team Architect.

    As Keith pointed out, some commentators have suggested that DSL Tools is dead. This couldn’t be further from the truth. Keith himself points out that "both products have a lifecycle in front of them". In DSL Tools in Visual Studio 2010 I summarize the new features that we're shipping for DSL Tools in VS 2010, and we'll be providing more details in future posts. In short, the platform has expanded to support forms-based designers and interaction between models and designers. There's also the new suite of designers from Team Architect including a set of UML designers and technology specific DSLs coming in VS 2010. These have been built using DSL Tools. Cameron has blogged about this, and there are now some great videos describing the features, including some new technology for visualizing existing code and artifacts. See this entry from Steve for details.

    The new features in DSL Tools support integration with the designers from team architect, for example with DSLs of your own, using the new modelbus, and we're exploring other ways in which you can enhance and customize those designers without having to taking the step of creating your own DSL. Our T4 text templating technology will also work with these designers for code generation and will allow access to models across the modelbus. You may also be interested in my post Long Time No Blog, UML and DSLs which talks more about the relationship between DSLs and UML.

  • stuart kent's blog

    UML, DSLs and software factories: let those analogies flow...

    • 9 Comments

    I typed this entry a few days ago, but then managed to lose it through a set of circumstances I'm too embarrassed to tell you about. It's always better second time around in any case.

    Anyway, reading this recent post from Simon Johnston prompted a few thoughts that I'd like to share. In summary, Simon likens UML to a craftsman's toolbox, that in the hand of a skilled craftsman can produce fine results. He then contrasts this with the domain specific language approach and software factories, suggesting that developers are all going to be turned into production-line workers - no more craftsman. The argument goes something like this: developers become specialized in a software factory to work on only one aspect of the product through a single, narrowly focussed domain specific language (DSL); they do their work in silos, without any awareness of what the others are doing; this may increase productivity of the individual developers, but lead to a less coherent solution.

    Well, I presume this is a veiled reference to the recent book on Software Factories, written by Jack Greenfield and Keith Short, architects in my product group at Microsoft, and to which I contributed a couple of chapters with Steve Cook. The characterization of sofware factories suggested by Simon is at best an over-simplification of the vision presented in this book.

    I trained as a mathematician. When constructing a proof in mathematics there are two approaches. Go back to the original definitions, the first principles, and work out your proof from there; or build on top of theorems already proven by others. The advantage of the first approach is that all you have to learn is the first principles and then you can set your hand to anything. The problem, is that it will take you a very long to time to prove all but the simplest theorems, and you'll continually be treading over ground you've trod many times before. The problem with the second approach is that you have to learn a lot more, including new notations (dare I say DSLs) and inevitably end up becoming a specialist in a particular branch of the subject; but in that area you'll be a lot more productive. And it is not unknown for different areas of mathematics to combine to prove some of the more sophisticated theorems.

    With software factories we're saying that to become more productive we need to get more domain specific so that we can provide more focused tooling that cuts out the grunt work and let's us get on with the more challenging and exciting parts of the job. As with mathematics, the ability to invent domain specific notations, and, in our case, the automated tools to support them, is critical to this enterprise. And sophisticated factories (that is, most of them) will combine expertise from different domains, both horizontal and vertical, to get the job done, just as different branches of mathematics can combine to tackle tricky problems.

    So our vision of software factories is closer to the desirable situation described  by Simon towards the end of his article, where he talks about the need for a "coherent set of views into the problem". Each DSL looks at the problem, the software system being built or maintained, from a particular perspective. These perspectives need to be combined with the other views to give a complete picture. If developers specialize in one perspective or another, then so be it, but that doesn't mean that they can sit in silos and not communicate with the others in the team. There are always overlaps between views and work done by one will impact the work of another. But, having more specialized tooling should avoid a lot of error-prone grunt work, and will make the team as a whole far more productive as a result.

    So what about UML in all this? To return to Simon's toolbox analogy (and slightly toungue-in-cheek) UML is like having a single hand drill in the toolbox, which we've got to try and use to drill all sizes of hole (for large holes you drill a number of small holes close together), and in all kinds of material; some materials you won't be able to drill into at all. DSLs, on the other hand, is like having a toolbox full of drill bits of all different sizes, each designed to drill into a particular material. And in a software factory, you support your DSLs with integrated tooling, which is like providing the electric hammer-drill: you'll be a lot more productive with these specialist tools, and even do things you couldn't manage before, like drill holes in concrete.

    So I don't see UML as a central part of the software factory/DSL story. I see it first and foremost as a language for (sketching) the design of object-oriented programs - at least this is its history and its primary use to date. Later versions of UML, in particular the upcoming UML 2, have tried to extend its reach by adding to the bag of notations that it includes. At best, this bag is useful inspiration in the development of some DSLs, but I doubt very much that they'll get used exactly as specified in the standard - as far as conformance against the standard can be checked that is...

     

  • stuart kent's blog

    Why we view a domain model as a tree

    • 9 Comments

    In some feedback to my last posting, there was a question about why we visualized a domain model as a tree, with the point that this seemed inappropriate for some domain models. This is an interesting question, and warrants a more public response. So, here goes.

    • The visualization is a tree view of a graph of classes (actually  tree views of two graphs - the inheritance and relationship graphs). In the tool you have a good deal of control over how you want to view the graph as a tree.
    • The tree views certainly enables the expand/collapse and automated layout, both things which we wanted in the tool
    • But, perhaps most importantly, it is the case that many of the behaviours required in a designer and other tools implemented using code generated from the domain model, require one to walk the graph as a tree (and often in different ways). Examples include: deletion behaviour, how a model appears in an explorer, how it gets serialized in XML, how constraint evaluation cascades through a model, and so on. We have also found that although the trees required for each of these can be subtly different, they all seem to be some variation on some common, core tree. So we have chosen to make one tree central in the visualization, and have editing experiences for defining variants from this. At the moment, the only variant supported is the definition of deletion behavior.
  • stuart kent's blog

    Designing notations for use in tools

    • 8 Comments

    Tools make available a whole range of facilities for viewing, navigating and manipulating models through diagrams, which are not available when using paper or a whiteboard.  Unfortunately, these facilities can not always be exploited if they are not taken into account when the notation is designed: there is a difference between designing a notation to be used in a tool and one to be used on paper. 

     

    Some of the facilities available in a tool which are not available on paper:

    • Expand/collapse. The use of expand collapse buttons to show/hide detail inside shapes, descendants in tree layouts of node-line diagrams, and so on.
    • Zoom in/out. The ability to focus in on one area of a diagram revealing all the detail, or zoom out just to see the diagram at various scales.
    • Navigation. The ability to navigate around a diagram, or between diagrams, by clicking on a link, using scroll bars, searching, and so on.
    • (Virtually) No limit on paper size (though limited screen size). Paper is physically limited in size, so one is forced to split a diagram over multiple sheets, there is no choice. In contrast, there can be a very large virtual diagram in a tool, even if one only has a screen sized window to look at it. But the screen is not such a limitation, if combined with expand/collapse, zoom, and great navigation facilities.
    • Automatic construction/layout of diagrams. The ability to construct a diagram just from model information (the user doesn't have to draw any shapes explicitly at all), and/or the ability to layout shapes on a diagram surface. The best diagram is one which is created and (intuitively) laid out completely automatically.
    • Explorer. Provides a tree view on a model, which can be as simple as an alphabetical list of all the elements in the underlying model, possible categorized by their type. The explorer can provide an index into models, which provides a valuable aid to navigation in many cases.
    • Properties window. A grid which allows you to view and edit detailed information about the selected element.
    • Tooltips. Little popups that appear when you hover over symbols in a diagram. They can be used to reveal detailed information that otherwise can remain hidden.
    • Forms. An alternative way of viewing and editing detailed information about the model, selected elements or element.

    To see how designing a notation for use in a tool can lead to different results than if the focus is paper or whiteboard use, let's take a look at two well-known notations. UML class diagrams and UML state machines. The former, I would argue, has not been well designed for use in a tool; whereas the latter benefits from features which can be exploited in a tool.

     

    Class diagrams. A common use of class diagrams is to reverse engineer the diagram from code. This helps to understand and communicate designs reflected in code. Industrial scale programs have hundreds of classes; frameworks even more. Ideally, one would like a tool to create readable and intuitive diagrams automatically from code. However, the design of the notation mitigates against this. Top of my list of problems is the fact that the design of associations, with labels at each end, precludes channeling on lines, where channeling allows many lines to join a node at the same endpoint (inheritance arrows are often channeled to join the superclass shape at the same point). Because labels are placed at the ends of association lines, each line has to touch a class shape at a different end point in order to display the labels. This exacerbates line crossing and often means that class nodes need to be much larger than they ought to be, making diagrams less compact than they ought to be and much harder to achieve a good layout automatically.

     

    State Machines. State machines can get large with many levels of nesting. This problem can be mitigated using zoom facilities, by a control that allows one to expand/collapse the inside of a state or a link that launches the nested state machine in a new window. As transitions can cross state boundaries, using any of these facilities means that you'll need to distinguish between when a transition (represented by an arrow) is sourced/targeted on a state nested inside and currently hidden from view, or on the state whose inside has been collapsed. This distinction requires a new piece of notation. In UML 1.5, the notation of stubbed transitions (transitions sourced or targeted on a small solid bar) was introduced to distinguish between a transition which touched a boundary and one which crossed it. Interestingly, in UML 2 this notation has been replaced by notation for entry/exit points. In this scheme, one can interrupt a transition crossing the boundary of a state by directing it through an entry or exit point drawn on the edge of the state. This is the state machine equivalent of page continuation symbols that one gets with flowcharts. However, it can be used to assist with expand/collapse in a tool: collapsing the inside of a state, leaves the entry/exit point visible, so one can still see the difference between transitions which cross the boundary and those which don't. In one sense, this isn't quite as flexible a solution as the solid bar notation, as, for expand/collapse to work in all cases, it requires all transitions that cross a boundary to be interrupted by an entry/exit point. I guess, a tool could introduce them and remove them as needed, but I must confess that I prefer the stubbed transition notation for the expand/collapse purpose - seems more natural somehow.

     

    To conclude, designing a notation for use in a tool can lead to different decisions than if the focus is on paper or whiteboard use. However, much as we might like to think that a notation will only be used in a tool, there will always be times when we need to see or create it on paper or a whiteboard, and this has to be balanced against the desire to take advantage of what tools can offer. For example, it is always a good idea to incorporate symbols to support 'page continuation', which will make it easier to provide automated assistance for cutting a large virtual diagram into page-sized chunks (and, as we have seen, such symbols can also support other facilities like expand/collapse). And it is always worth considering whether the notation is sketchable. If not, it may be possible to define a simplified version which is. For example, one could provide alternatives for sophisticated shapes or symbols, that look great in tool, but are very hard to use when sketching.

  • stuart kent's blog

    On code generation from models

    • 7 Comments
    In a recent article, Dan Hayward introduced two kinds of approaches to MDA: translationist and elaborationist. In the former approach 100% code is generated from the model; in the latter approach some of the code is generated and then hand finished. He gives examples of tools and companies following each of these approaches.

     

    Underlying Dan's article seemed to be the assumption that models are just used as input to code generation. To be fair, the article was entirely focused on the OMG's view of model driven development, dubbed MDA, which tends to lean that way. My own belief is that there are many useful things you can use models for, other than code generation, but that's the topic of a different post. I'll just focus here on code generation.

     

    So which path to follow? Translationist or elaborationist?

     

    In the translationist approach, the model is really a programming language and the code generator a compiler. Unless you are going to debug the generated (compiled) code, this means that you'll need to develop a complete debugging and testing experience around the so-called modeling language. This, in turn, requires the language to be precisely defined, and to be rich enough to express all aspects of the target system. If the language has graphical elements, then this approach is tantamount to building a visual programming language. The construction of such a language and associated tooling is a major task that requires specialist skills. It will probably be done by a tool vendor in domains where there is enough of a market to warrant the initial investment. Indeed, one doesn't have to look far for examples. There are several companies who have built businesses on the back of this approach to MDA, especially in the domain of real-time, embedded systems. And, for obvious reasons, they have been leading efforts to define a programming language subset of UML, called Executable UML, xUML or xtUML, depending on which company you talk to.

     

    In contrast, the elaborationist approach to code generation does not require the same degree of specialist skill or upfront investment. It can start out small and grow organically. However, there are pitfalls to watch out for. Here are some that I've identified:

    • Be careful to separate generated code from handwritten code so that when you regenerate you do not overwrite the handwritten code. If that is not possible, e.g. because you have to fill in method bodies by hand, then there are mitigation strategies you can use. For example, you can use the source control system and code diff tools to forward-integrate handwritten code from the previous version into the newly generated version.
    • Remember that you will be testing and debugging your handwritten code in the context of the generated code. This means that your developers cannot avoid coming into contact with the generated code. So make the generated code as understandable as possible. Simple generated code that extends well-factored libraries (as opposed to generated code that starts from low-level base classes) can make a big difference.
    • The code generator itself will need testing and debugging, especially in the early stages. It should be written in a form that is accessible to your developers and allows the use of testing and debugging tools.
    • Manage your models like you manage code. Check them into the source control system and validate them as much as you can. How much you can validate the models depends on the tools you're using to represent them. You could just choose to represent the models as plain XML, in which case the definition of your modeling language might be an XSD, so you can validate your models against the XSD. If you choose to represent your models as UML, then it is likely that you'll also be using stereotypes and tagged values to customize your modeling language (see an earlier post). In general, UML tools don't do a good job of validating whether models use stereotypes and tagged values in the intended way, so resort to inspection or build validation checks into your code generator instead.
    • Remember that 'code' is not just C# or Java. Run-time configuration files, build scripts, indeed any artifact that needs to be constructed in order to build and deploy the system, count as code.
    • Remember that the use of code generators is meant to increase productivity. So look for those cases where putting information in a model and generating code will save time and/or increase quality. Typically you'll be building your system on top of a code framework, and your code generator will be designed to take the drudgery out of completing that framework, and prevent human errors that often accompany drudgery. For example, look for cases where you can define a single piece of information in a model, that the generator then injects into many places in the underlying code. Then, instead of changing that piece of information in multiple places, you just change it once in the model and regenerate.
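    To make the last point concrete, here's a minimal, hypothetical sketch in Python (the ideas are language-neutral, whatever your generator is written in): a single piece of information in the model is injected into several generated artifacts, so changing it once in the model and regenerating updates every occurrence. All names here are invented for illustration.

    ```python
    from string import Template

    # Hypothetical model: each piece of information is defined exactly once.
    model = {"service_name": "OrderService", "timeout_secs": 30}

    # Templates for two different artifacts that both need the same information.
    code_template = Template(
        "// GENERATED - do not edit; handwritten code goes in a separate file.\n"
        "class ${service_name}Proxy {\n"
        "    static final int TIMEOUT = ${timeout_secs};\n"
        "}\n"
    )
    config_template = Template(
        "<service name='${service_name}' timeout='${timeout_secs}'/>\n"
    )

    def generate(model):
        # Regeneration overwrites only generated files, never handwritten ones.
        return {
            "ServiceProxy.generated.java": code_template.substitute(model),
            "services.config": config_template.substitute(model),
        }

    artifacts = generate(model)
    ```

    Change `timeout_secs` in the model, rerun `generate`, and both artifacts pick up the new value - no hunting through code for every place the timeout appears.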

    Of course, we have been talking to our customers and partners about their needs in this area. But we're always keen to receive more feedback. If you've been using code generation, then I'd like to hear from you. Has it been successful? What techniques have you been using to write the generators? To write the models? What pitfalls have you encountered? What development tools would have made the job easier?

  • stuart kent's blog

    GAT and recipes

    • 7 Comments

    I've just noticed that a webcast on the Guidance Automation Toolkit (GAT) is now available. This is some emerging technology that should soon be made available in a download. Harry Pierson has a nice description over on his blog.

    GAT and DSL Tools are both key technologies for realising the software factories vision - they tackle different aspects of the problem. What GAT brings to the table is a notion of recipe and recipe spawning. In its simplest form, a recipe is a wizard that gathers information from the user, then does stuff in Visual Studio, based on that information and information in the environment, thereby automating one or more steps of the software development process. A typical example of 'stuff' would be to create a set of new items in the solution, perhaps further configure a project, perhaps add one or more new projects, and so on. All these things that are created would be based on templates, which get filled in with the information supplied in the wizard. But it's not restricted to creating stuff; you can also delete stuff, perform refactoring operations, whatever really, provided you can work out how to do it programmatically. A really neat feature of GAT is the notion of recipe spawning: one thing a recipe can do is create new recipes and attach them to items in the solution. This is crucial to automating guidance, where there are many steps to be performed and often repeated. With GAT, you automate the individual steps as recipes, then use recipe spawning to guide folks to the next steps that need to be performed, by spawning recipes which are revealed in the context of the items created (or manipulated) by the recipe you've just applied. A spawned recipe can be a one-off action, which disappears when done, or can hang around to be repeated as many times as you like.
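    The recipe-spawning idea can be sketched in a few lines of Python. This is not the real GAT API - all the class and recipe names below are invented - just an illustration of how running one step can surface the next steps in context:

    ```python
    # Hypothetical sketch of recipes and recipe spawning (not the real GAT API).

    class Recipe:
        def __init__(self, name, action, spawns=(), one_off=True):
            self.name = name          # shown to the user in context
            self.action = action      # gathers info and "does stuff"
            self.spawns = spawns      # follow-up recipes attached to created items
            self.one_off = one_off    # disappear when done, or stay repeatable

        def run(self, solution):
            self.action(solution)
            # Spawning: attach the next steps to the items just created.
            solution.pending.extend(self.spawns)
            if self.one_off and self in solution.pending:
                solution.pending.remove(self)

    class Solution:
        def __init__(self):
            self.items = []
            self.pending = []   # recipes currently offered to the user

    add_tests = Recipe("add unit tests", lambda s: s.items.append("Tests.csproj"))
    create_project = Recipe("create project",
                            lambda s: s.items.append("App.csproj"),
                            spawns=[add_tests])

    solution = Solution()
    create_project.run(solution)
    # The user is now guided to the next step: 'add unit tests' is pending.
    ```

    After `create_project` runs, the spawned `add_tests` recipe hangs off the new project; running it in turn completes that step and, being one-off, it disappears.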

    If you think of the DSL tools as a factory for building designers, then you can see how GAT and DSL Tools can work together. DSL Tools has a wizard for creating a solution in VS used to build a graphical designer. This is effectively a recipe. One of the things a recipe creates is a domain model, based on whatever language template was chosen when running the wizard. The domain model can be edited using a graphical designer (created solely for the purpose of editing domain models), then code generation templates (another key technology) are used to generate code for some aspects of the designer from the domain model. There's another DSL involved as well, the designer definition, from which other aspects of the code are generated. So here's a little factory involving (so far) one recipe and two DSLs.

    [13 May 2005: Added this to the GAT category in my blog.]

    As a footnote, I should also confess to some involvement with GAT. I spent a little time working with Wojtek and Tom at the inception of GAT, in particular on the notion of recipes and recipe spawning. It's great to see this work come to fruition, and it will be even better when the tools are available for download.

    Update: Corrected the spelling of 'Harry Pierson'. Apologies Harry...

  • stuart kent's blog

    DSL Tools V1 release - latest news

    • 7 Comments

    Having just returned from vacation, I thought I'd update folks about the V1 release of DSL Tools. This will be shipped as part of the Visual Studio 2005 SDK Version 3 in the first part of September.

    We have signed, sealed and delivered our code to the VS SDK team who are now just wrapping up. There will be a significant new documentation drop at the same time, although we will continue to update the documentation until the end of the year.

    Apart from bug fixes, the main feature in this release is a completed DSL Designer for editing .dsl definitions.

    I'll post again as soon as the toolkit has been released.

  • stuart kent's blog

    DSL Tools with Visual Studio 2005 RTM

    • 6 Comments

    Now that Visual Studio 2005 has been released to manufacture (RTM), folks are asking us when to expect a version of DSL Tools that works with the RTM release. Well, you won't have to wait long - it should be available within the next two or three weeks. And we've got two new features lined up for you as well:

    Deployment, where you create a setup project for your designer authoring solution using a new DSL Tools Setup project template, which, when built, will generate a setup.exe and a .msi for installing your designer on another machine on which VS2005 standard or above is installed.

    Validation, where it's now possible to add constraints to your DSL definition which can be validated against models built using your designer. Errors and warnings get posted in the VS errors window. We support various launch points for validation - Open, Save, Validate menu, and these are configurable. All the plumbing is generated for you - all you have to do is write validation methods in partial classes - one per domain class to which you want to attach constraints - which follow a fairly straightforward pattern.
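    The real pattern is written in C# partial classes, but the shape of it - you write one small validation method per domain class, and the generated plumbing discovers the methods and gathers their errors - can be illustrated with an analogous Python sketch. Everything here (class names, the `validate_` naming convention) is hypothetical, not the actual DSL Tools API:

    ```python
    # Analogous sketch: the framework discovers validation methods by a
    # naming convention and posts their errors for you.

    class ValidationContext:
        def __init__(self):
            self.errors = []

        def log_error(self, message, element):
            self.errors.append(f"{element}: {message}")

    class StateElement:
        def __init__(self, name, transitions):
            self.name = name
            self.transitions = transitions

        # One validation method attached to this domain class.
        def validate_has_transition(self, context):
            if not self.transitions:
                context.log_error("state has no outgoing transitions", self.name)

    def validate_all(elements):
        # The "plumbing": call every validate_* method, gather the errors.
        context = ValidationContext()
        for element in elements:
            for attr in dir(element):
                if attr.startswith("validate_"):
                    getattr(element, attr)(context)
        return context.errors

    errors = validate_all([StateElement("Start", ["Next"]), StateElement("End", [])])
    ```

    The author of the DSL writes only the body of `validate_has_transition`; everything else - discovery, invocation, error collection and display - is the framework's job, which is what the generated plumbing gives you in DSL Tools.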

    We're also going to release our first end-to-end sample at about the same time. The sample is a designer for modelling the flow in wizard UIs (it's statechart-like), from which working code is generated.

     

  • stuart kent's blog

    The UML / DSL debate

    • 6 Comments

    There's been some debate between some of my MSFT colleagues (Alan Wills, Steve Cook, Jack Greenfield) and Grady Booch and others over at IBM around UML and DSLs. For those interested, Grady actually posted an article on UML and DSLs back in May last year - seems to have gone unnoticed. It touched on some of the themes that have been under discussion. My first blog entry was a response to this article.

    A particular theme that crops up in the discussion is the issue of tools versus language. Grady seems to want to keep the two separate, whereas we believe that the two are closely linked - a language designed to be used in a tool is going to be different to a language designed to be used on paper.

    I touched on this line of discussion in an article a few months back: http://blogs.msdn.com/stuart_kent/articles/181565.aspx

    An example I used was that of class diagrams, and I pointed out a couple of aspects of class diagrams which may have been designed differently had the original intention been to use them within a tool rather than on paper. Now I can point you at an example notation that addresses some of these issues. It is the notation we use for defining domain models in our DSL Tools. Here's a sample:

    Domain model for UIP Chart Language

    Nodes are classes, lines are relationships or inheritance arrows. Nodes are organized into relationship and inheritance trees, which can be expanded or collapsed, making it easy to navigate and drill into large models (not something you do on paper).

    A relationship line has the role information (would be association end information in UML 2) annotated in the middle of the line rather than at the ends: a role is represented as a triangle or rectangle containing a multiplicity, and the name of a role is annotated using a label (in the diagram above reverse role names have been hidden as they are usually less interesting).

    Annotating role information in the middle of the line not only makes it easier to automatically place labels, it also means that relationship lines can be channelled together, enabling an expandable relationship tree to be automatically constructed and laid out. An example of channelling is given by the diagram below, where you can see all the relationship lines sourced on Page channelled together so that they connect to Page at the same point:

    Domain model for UIP Chart Language

    The point here is that this notation was designed from the start using criteria such as 'must support autolayout' and 'must support easy navigation of large models'. If the criteria had been 'must be easily cut up into page-sized chunks' and 'must be easy to sketch on a whiteboard', then the notation may well have turned out differently.

  • stuart kent's blog

    What I've been working on

    • 6 Comments

    I can now point you at a couple of announcements about the technology I've been working on. Here's the Microsoft Press announcement:

    http://www.microsoft.com/presspass/press/2004/oct04/10-26OOPSLAEcosystemPR.asp

    The exciting aspect of this is that we're going to start making the technology available to the community as soon as we can. Here's the site to watch:

    http://lab.msdn.microsoft.com/vs2005/teamsystem/workshop/

    I'll post to my blog as soon as some content is available - should be before the end of the week.

    We gave a demo of some of the technology during Rick Rashid's keynote at the OOPSLA conference. Unfortunately I expect there'll be some flak about this - watching the keynote, our demo felt a bit like a 'commercial break'. I'll respond to the flak when it arrives.

    So not quite the launch I'd hoped for, but that shouldn't detract from the technology itself. Here's a quick heads up on what we've been doing:

    • A wizard that creates a solution in VS which, when built, installs a graphical designer as a first-class tool hosted in VS. The designer can be customized to the Domain Specific Language (DSL) of your choice.
    • The wizard allows you to choose from a selection of language templates. You can define your own templates based on designers you've already built.
    • Once you've run the wizard, you can edit a couple of XML files to customize the designer. Code is generated from these files to customize the designer. One file allows you to define and customize the graphical notation and other aspects of the designer, like the explorer, properties window and so on. The other file defines the concepts that underpin the language in the form of an object model (metamodel, if you're familiar with that term). We have a graphical designer for editing the object model.
    • You then just build the solution, hit F5 and get a second copy of VS open in which you can use the designer you've just built.

    Our first release, at the end of this week, will be a preview of the object model editor. Previews of the other components should be available by the end of the year. 

  • stuart kent's blog

    Premature standardization

    • 5 Comments

    I used the phrase 'premature standardization' in an earlier post today. I'm rather pleased with it, as it is a crisp expression of something that has vexed me for some time, namely the tendency of standards efforts in the software space to transform themselves into a research effort of the worst kind - one run by a committee. I have certainly observed this first hand, where what seemed to be happening was not standardization of technologies that existed and were proven, but instead paper designs for technology that might be useful in the future. Of course, back then I was an academic researcher, so was quite happy to contribute, in the hope that my ideas would have a better chance of seeing the light of day as part of an industry standard than being buried deep in a research paper. I also valued the exposure to the concentration of clever and experienced people from the industry sector. But now, as someone from that sector developing products and worrying every day about whether those products are going to solve difficult and real problems for our customers, I do wonder about the value of trying to standardize something which hasn't been tried and tested in the field, and in some cases not even prototyped. To my mind, efforts should be made to standardize a technology when:

    • There are two or more competing technologies which are essentially the same in concept, but different in concrete form
    • The technologies are proven to work - there is sufficient evidence that the technologies can deliver the promised benefits
    • There is more value to the customer in working out the differences, than would be gained through the innovation that stems from the technologies competing head-to-head

    Even if all these tests come up positive, it is rarely necessary to standardize all aspects of the technology, just the part which is preventing the competing technologies from interoperating: a square plug really will not fit in a round hole, so my French electrical appliance cannot be used in the UK, unless of course I use an adaptor...

    If we apply the above tests to technologies for the development of DSLs, I'd say that we currently fail at least two of them. Which means that IMHO standardization of metamodelling and model transformation technologies is premature. We need a lot more innovation, a lot more tools, and, above all, many more customer testimonials that this stuff 'does what it says on the tin'.

  • stuart kent's blog

    Walkthroughs for DSL tools available

    • 5 Comments

    Three tutorial documents which 'walk through' various aspects of the December release of our DSL Tools are now available for download in a zip file.

    You can also navigate to them from the DSL Tools December release page.

  • stuart kent's blog

    Answers to questions on the domain model designer and future features

    • 5 Comments

    Here are a number of detailed questions from Kimberly Marcantonio, an active participant in the DSL Tools newsgroup. I thought it would be more useful to the community to publicise the answers on my blog. Indeed, expect me to do this more when the answers are likely to be of interest to those following the DSL Tools. The questions also touch on issues concerning the direction we're taking with this technology. I've tried to be open in my responses, without making firm commitments. I hope soon to be more precise about our plans for new features and their roll-out.

    In what follows, Kimberly's text is in blue, and my responses are in red italic...

    I am currently trying to model the Corba Component Model using this DSL package and have run into the following problems/questions:

    1. Why is it that you can not currently move the classes (drag and drop) around the canvas? This leads to very spread out models that take up a lot of space and do not print well. Is there a better way to print these models to make sure that they fit onto one page?
      We took the decision to automate as much of the layout as possible. You can use the 'Bring Definition Here' and 'Create Root' context menu items when a node in the diagram is selected, to control where definitions of classes appear in the diagram. Our experience is that this gives a reasonable amount of control of diagram layout, without losing the significant advantages of autolayout. I'd be interested to know if anyone has tried using these, and whether this helps with the printing issue? We would like to add facilities for being able to create partial diagrams, perhaps showing relationship structures as the true graphs which they are, but have to balance this against the myriad of other features we need to build (e.g. see comments about constraints and serialization below).
    2. Can you offer further explanation into when to use an embedded relationship, and a Reference relationship? I understand the use of Inheritance, but often do not know when to use the other two. Also I feel as if there should just be a regular connection, for sometimes I feel these two types of connections are not fitting. Is containment the same as embedding?
      Embedding and reference are used to drive behaviours of the underlying tools - or, to be more accurate, will be used to drive the behaviour of the underlying tools. At the moment they drive the deletion behaviour of a designer - the default behaviour is taken from the diagram: deletion is propagated across embedding relationships but not reference relationships, though this can be overridden using the delete behaviour designer in the DMD. This information will also be used to drive the XML serialization format (the approach to serialization is only an interim measure at the moment) and the default layout of the explorer. There are other aspects of behaviour where this kind of information is useful, though I won't go into that here. Also see answers below.
    3. Also could I have more information as to what the XML root is used for? I sometimes feel as if my diagrams have no root, or multiple roots, yet this is not supported.
      The current serialization solution is only an interim measure. XML root is used to indicate which element is used at the top of the tree when a model is serialized into a file, and the kind of element a diagram must map to. Our actual approach to serialization should be richer and more domain-specific than this, and the constraint requiring a diagram to map to the XML root is likely to be relaxed.
    4. Is there anyway to enforce constraints in this modeling language, such as OCL (Object-Constraint Language)?
      Not yet, but constraint validation is in our plans. We'll probably just use .NET languages to write the bodies of constraints initially, as that brings with it IntelliSense and debugging support, but all the plumbing into the designer, reporting of errors etc. will be handled for you.
    5. Is it possible to have more than one .dmd file in a project? If so do you have one .dd file for all of these .dmd files, or many .dd files, one for each .dmd files?
      Yes you can have more than one dmd file per project, and indeed you can generate the code for each one you have (currently you'll need to make copies of the three .mdfomt files in the ObjectModel directory, giving them names that match the name of your .dmd file and editing the <%@ modelFile ... %> line). However, at the moment a .dd file can only refer to one .dmd file. Our plans include the ability to define many designers per .dmd, and have one designer able to view models from multiple dmd's. Exactly how we'll do this (there are a number of design options) is yet to be worked out. Basically, we're in the business of being able to create multiple designers which can provide different perspectives on the same model, as well as being able to break down the definition of a domain model into component parts. At least that's the plan.
    6. If you can have multiple .dmd files can you reference classes on other models?
      Yes, though the mechanism is a bit clunky at the moment. To create a reference to a class in another domain model, create a class with the same name and namespace, and then set its IsLoaded property to be false. The code generated from the domain model will put in the appropriate references, though, thinking about it, I don't think the designer will quite do the right thing (it needs to reference both models and ensure that their definitions are loaded into the store on startup).
    7. Can you show Bi-directional connections?
      All relationships are bidirectional. It's just that we have chosen to overlay the definition of relationships with a tree navigator - the diagram reflects one way of walking a graph of objects of classes defined in the model, connected by links which are instances of relationships in the model, as a tree. This tree is used as the default for behaviours in the designer which require one, such as serialization to XML, viewing a model in the explorer and deletion behaviour. At present, the DMD allows you to override this default for deletion behaviour in the Delete Behavior Designer. In future versions, we hope to provide a means of defining similar overrides to drive XML serialization and the explorer behaviour. Also see answer to (2).
    8. Can you cut across the tree hierarchy?
      If I understand the question correctly, yes. You can define relationships between any two classes, including different nodes in the tree. When you do so, a 'use node' will be created for the target class, as a child of the source class, wherever the definition of the source class appears. You can do this for both embedding and reference relationships. Also see answer to (2).  
    9. Can two classes share the same value property?
      No. We follow standard OO practice in this regard. So the only way to achieve this result is to have a common superclass which defines the value property.
    10. Why is it not possible to cut and paste? This would make it easier to create similar classes
      This is just a feature we have not implemented yet.
    11. If B is a child of A, and C is a child of B, does C have to be a child of A?
      I assume you're not talking about inheritance here, but embedding relationships, and that you are asking whether the definition of C must appear beneath the definition of B, which appears beneath the definition of A. It is possible to have the definition of C appear as a root on the diagram, or anywhere C is referenced - as the target of an embedding or reference relationship, or as a child in an inheritance relationship. Select the node where you want the definition to appear, and choose the 'Bring Definition Here' option, or choose 'Create Root' if you want the definition to appear at the root of the diagram. Details are in the DMD walkthrough. Also see answer to (2).
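    The answer to question 9 - a value property shared via a common superclass - is just standard OO practice, and can be sketched in a couple of lines. The names below (Python, for illustration; the generated code would be C#) are hypothetical:

    ```python
    # Two domain classes both need a Title property, so it is defined once
    # on a common superclass rather than "shared" between them directly.
    class NamedElement:
        def __init__(self, title):
            self.title = title   # the single definition of the value property

    class Page(NamedElement):
        pass

    class Dialog(NamedElement):
        pass

    p, d = Page("Home"), Dialog("Settings")
    ```

    Each instance still gets its own value, but the property is defined in exactly one place.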
  • stuart kent's blog

    Writing a book

    • 5 Comments

    I see that Steve has let the cat out of the bag - we (that is Steve, Alan, Gareth and myself) are writing a book on DSL Tools.

    Let us know if there are particular topics you'd like to see covered.

    We'll try and blog about some of the content as we write it. 

     

  • stuart kent's blog

    More rapid deployment of VS extensions

    • 5 Comments

    Over on Deep's blog I came across the following comment to his post on Tools for Tools.

    I had looked at creating a DSL for one of our projects a while ago, and in the end decided against it because the deployment was too complicated. In particular, here is how I want deployment to work:

    In my normal solution of my project, I add a new DSL project, along with my other C# projects that constitute my end piece of software. In the DSL project I define a new DSL that I can then use right away in the C# projects that are part of the solution as well. That is, I don't want any setup at all, but rather want the DSL to be loaded as for example WinForm custom controls and the designers that go along are loaded from a dll project that is simply part of the solution of the consuming project.

    Why is this so important? First, this would automatically solve the version problem. In my case we would rapidly (or shall I say "agile" ;) change the DSL as the project goes along. With a model where the DSL is installed via a setup program on a machine-wide basis, we run into huge trouble when we want to work with a previous version of our source code (say a branch from the source code repository that corresponds to a previous version). We would need to make sure that our DSL project can always load old versions, i.e. we would need to be super backward compatible. But I don't want that, I want the DSL versioned along with the source code so that I get a predictable match of tools and source code artefacts.

    Also, of course, deployment via setup is just a pain, I really just want my solution file to have all the info about tools (be it DSL, compilers what have you) needed to build it and take care of that.

    Any chance that we might end up with something like that?

    Generalizing this scenario a bit, I think what the commenter is asking for is the following two capabilities:

    1) For VS extensions (in this case DSLs) to be checked in with the rest of the code on a project, so that you can manage the tools used to build your code in the same way that you manage the code itself.

    2) Not to have to go through an expensive process (e.g. install an MSI) every time you want to reconfigure VS with the tools (extensions) needed for a particular project, or when you want to update those tools.

    There are things you can do now to get over these two problems.

    For (1) you can have a separate tools authoring solution that is checked in and used to rebuild the tools. You can do this now, and I don't see any issues around this. Indeed I would argue that it's not a good idea to mix your tools authoring solution and application development solutions, just because I think it's a good idea to keep these two concerns separate. However, you still need a way of getting revisions of tooling quickly out to developers, rolling back to past versions and so on. And making sure that before you build your application code, the right version of the tools is built and installed.

    For (2), if you don't want to go to the trouble of installing an MSI, then, today, you could do the application building in the experimental version of VS (though you'll need to have the VS SDK installed for that). Building the authoring solution will install the tools in the experimental hive, and then you just need to remember to launch VS Exp rather than VS to do your work. So once you've updated your machine from source control, you'd go and build the authoring solution, then go and open the experimental version of VS and continue with your development. You can reset the experimental version of VS anytime to mirror the main VS, when you want to switch projects.

    An alternative is to work in virtual PCs (our team does much of its development this way). Then to roll out a new version of the tools to your team, you build a new VPC with the tools installed and distribute this. The advantage of this approach is that you can set up multiple VPCs for different projects and then it's very easy to switch between them. Of course there is more upfront cost taking this approach. This is a good solution if you are distributing tools to a team of developers, you want to retain some stability in the toolset they are using, and you expect developers to be needing to switch context between projects.

    Ideally, what I'd like to see happen is that when a developer signs up for a Team System project, they get their machine configured automatically with the necessary tools, and, further, that designated members of the team (which could be anyone) can go and make changes and rebuild those tools, which then get pushed out to other members of the team via Team System. When you switch to a different project, VS reconfigures itself with the tools required for that project. This won't be achieved overnight - it requires some platform changes to VS - but it's certainly on the radar.

  • stuart kent's blog

    Back with Team Architect

    • 5 Comments

    As Cameron blogged in Visual Studio Team System 2010 Architecture: Prologue, the DSL Tools team have recently moved back to Team Architect from the Visual Studio Platform team. We’ve been working increasingly closely with Team Architect since they renewed their focus on building VS modeling tools integrated using the DSL Tools. By moving back to that team we’ll be better placed to focus on three goals:

    1. Evolve the platform to deliver a better experience for users of tools developed on that platform, including seamless integration with other tools used for software development (particularly those in Visual Studio), and rich productivity features for example around linking and transforming models. The experiences Team Architect have in mind will place strong demands on the platform; we can work closely with the teams building those experiences to make sure we get the platform right.
    2. Evolve the authoring tools in the platform to make teams building tools using it more productive. This goes for teams inside Microsoft as much as for customers outside.
    3. Provide a seamless experience for customization of the tools that we ship, from lightweight extensions of UML, through patterns of integration and transformations layered on those tools, to the creation of whole new DSLs which can be used in isolation or in harmony with the other tools.

    We expect to make progress on all three goals in VS2010.

    That said, our sojourn with the VS Platform Team was time well spent. We understand a lot better the technologies and the direction of the core VS platform and, just as importantly, know the people in that team a lot better. DSL Tools is layered on top of that platform and, already in VS2010, we’ll be taking advantage of the integration of the Managed Extensibility Framework (MEF) into the heart of VS (see e.g. this post) and the technologies underpinning the Extension Manager.


    What's new for DSL Tools in VS2008 / VS2008 SDK

    • 4 Comments

    VS2008 and VS2008 SDK have just shipped. The announcement about the SDK over on the VSX blog didn't get into much detail about changes in DSL Tools for VS2008, so I thought it would be worth summarizing them here:

    • The runtime is now part of the VS platform. There is no need for a DSL author to redistribute a separate DSL runtime.
    • The authoring experience now works with Vista RANU (Run as Normal User). You don’t have to run VS as an admin in order to develop a DSL.
    • We’ve added a path editor to the DSL Designer. Paths are used largely to express mappings between domain classes / relationships and shapes / connectors. The new editor provides UI that allows you to select valid navigation paths through domain classes and relationships in the domain model.
    • LINQ can be used in code written against the code generated from a domain model, for example when writing validation constraints or text templates for code generation. For those of you familiar with UML / MOF, LINQ gives you OCL-like querying capability (actually it's far more powerful than that) against domain models.
    • We’ve fixed a large number of bugs, in both runtime and authoring.

    Are qualified associations necessary?

    • 4 Comments

    When I'm creating domain models, the ability to place properties on a relationship is proving very useful.

    For example, at the moment I'm remodelling our designer definition format (we generate it from a domain model) and have relationships between Connector and Decorator, and between Shape and Decorator. Decorators have a position, which is modelled as an enumeration (inner-top-right, etc.), but the values of the enumeration are different depending on whether the decorator is on a Shape or on a Connector (inner-top-right is not a meaningful position for a connector). Without properties on relationships, we'd have to subclass Decorator into ShapeDecorator and ConnectorDecorator, and all we'd be adding would be a differently typed Position property in each case. With properties on relationships, we can just attach a differently typed Position property to the relationships from Shape to Decorator and from Connector to Decorator, respectively - no subclassing required.
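    To make this concrete, here is a minimal sketch in Python (the class and enumeration names are hypothetical, chosen for illustration) of the design with properties on relationships: the differently typed Position lives on the link, so a single Decorator class serves both shapes and connectors.

```python
from dataclasses import dataclass
from enum import Enum

class ShapePosition(Enum):
    INNER_TOP_RIGHT = "inner-top-right"
    INNER_TOP_LEFT = "inner-top-left"

class ConnectorPosition(Enum):
    SOURCE = "source"
    TARGET = "target"
    MIDDLE = "middle"

@dataclass
class Decorator:
    name: str

# The relationship itself carries the typed Position property, so there is
# no need for ShapeDecorator / ConnectorDecorator subclasses of Decorator.
@dataclass
class ShapeHasDecorator:
    decorator: Decorator
    position: ShapePosition

@dataclass
class ConnectorHasDecorator:
    decorator: Decorator
    position: ConnectorPosition

link = ShapeHasDecorator(Decorator("icon"), ShapePosition.INNER_TOP_RIGHT)
```

    The alternative design would replace the two position fields with two Decorator subclasses whose only job is to retype one property.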

    UML has associations which are like our relationships. You can attach properties (attributes) to associations via their association classes. UML also has qualified associations, where you can index links of the associations by a property - e.g. an integer or a position. But it seems to me that one could achieve the effect of qualified associations by adding attributes to association classes, as we add properties to relationships. So, in my mind, if you've got association classes, qualified associations are redundant.
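    The equivalence can be sketched in code (Python here, with made-up names): a UML qualified association such as bank[accountNo] -> Account is recovered from an association class simply by storing the qualifier as an ordinary attribute on the link and looking links up by it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bank:
    name: str

@dataclass
class Account:
    balance: int

# Association class for the Bank--Account association; the would-be
# qualifier is just another attribute on the link.
@dataclass
class Holds:
    bank: Bank
    account: Account
    account_no: str

hsbc = Bank("HSBC")
links = [Holds(hsbc, Account(100), "001"),
         Holds(hsbc, Account(250), "002")]

def account_of(bank: Bank, account_no: str) -> Optional[Account]:
    # Qualified lookup, emulated over association-class attributes.
    for link in links:
        if link.bank is bank and link.account_no == account_no:
            return link.account
    return None
```

    What the dedicated qualified-association notation adds is a uniqueness guarantee (with target multiplicity 0..1, at most one account per (bank, account_no) pair), which in the emulation would have to be enforced as a separate constraint.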

    Am I missing something?


    DSL tools samples download now available

    • 4 Comments

    Like buses, you wait a while then two come at once. That's right, just a week after we released the latest version of DSL Tools, we've also released an update to the samples. This download includes:

    1. A complete end-to-end sample of a project in which most of the code is generated from a DSL, and the use of the new validation and deployment features is illustrated. A detailed guide describes the features and their use.
    2. Examples of using custom code to enhance languages developed with the DSL Tools, together with a detailed guide. Features demonstrated include: computed properties, constraints on connections, line routing, shadow & color gradient control, shape constraints, and template-generated code.

    (1) is brand new. (2) is an update of the samples we released a few weeks ago.

    And next? We should get some more documentation up soon, and we're working on integrating the release into the VS SDK. After that, you'll have to wait a while as we do some root and branch work on some of the APIs and replace the .dsldmd and .dsldd formats with a single .dsl format, with its own editor (yes, editing .dsldd files will become a thing of the past). We'll also be supporting two more pieces of new notation - port shapes and swimlanes - and providing richer and more flexible support for persisting models in XML files. Our goal is a release candidate at the end of March.


    More ruminations on DSLs

    • 4 Comments

    A domain specific language is a language that's tuned to describing aspects of the chosen domain. Any language can be domain specific, provided you are able to identify the domain it is specific to and demonstrate that it is tuned to describe aspects of that domain. C# is a language specific to the (rather broad) domain of OO software. It's not a DSL for writing insurance systems, though. You could use it to write the software for an insurance system, but it's not exactly tuned to that domain.


    So what is meant by the term 'domain'? A common way to think about domains is to categorize them according to whether they are horizontal or vertical. Vertical domains include, for example: insurance systems, telephone billing systems, aircraft control systems, and so on. Horizontal domains include, for example, the bands in the classic waterfall method: requirements analysis, specification, design, implementation, deployment. New domains emerge by intersecting verticals and horizontals. So, for example, there is the domain of telephone billing systems implementation, which could have a matching DSL for programming telephone billing systems.


    Domains can be broad or narrow, where broad ones can be further subdivided into narrow ones. So one can talk about the domain of real-time systems, with one sub-domain being aircraft control systems. Or the domain of web-based systems for conducting business over the internet, with a sub-domain being those particular to insurance versus another sub-domain of those dealing in electrical goods, say. And domains may overlap. For example, the domain of airport baggage control systems includes elements of real-time systems (the conveyor belts etc. that help deliver the luggage from the check-in desks to the aircraft) and database systems (to make a record of all the luggage checked in, its weight and who it belongs to, etc.).


    So there are lots of domains. But is it necessary to have a language specific to each of them? Couldn't we just identify a small number of general purpose languages that cover the broad domains, and just use those for the sub-domains as well?


    What we notice in this approach is that users demand general purpose languages that have extensibility mechanisms which allow the base language to be customized to narrower domains. There's always a desire to identify domain specific abstractions, because the right abstractions can help separate out the things that vary between systems in a domain and things that are common between them: you then only have to worry about the things that vary when defining systems in that domain.


    Two extensibility mechanisms in common use today are:

    • class inheritance and delegate methods, which allow one to create OO code frameworks;
    • stereotypes and tagged values in UML which provide primitive mechanisms for attaching additional data to models.

    These mechanisms take you only so far, and do not exactly deliver customized languages that intuitively capture those domain specific abstractions - the problem is that the base language gets in the way. Using OO code frameworks is not exactly easy: it requires you to understand all or most of the mechanisms of the base language; then, although you get clues from the names of classes, methods and properties as to where the extension points are, there is no substitute for good documentation, a raft of samples and an understanding of the framework architecture (patterns used and so on). Stereotypes and tagged values in UML are powerful in that you can decorate a model with virtually any data you like, but that data is generally unstructured and untyped, and often the intended meaning takes you a long way from the meaning of the language as described in the standard. Neither OO framework mechanisms nor UML extensibility mechanisms allow you to customize the concrete notation of the language, though some UML tools allow stereotypes to be identified with bitmaps that can be used to decorate the graphical notation.
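    The first mechanism can be illustrated with a contrived sketch (Python here; the framework and class names are invented): the framework fixes the overall algorithm and exposes extension points, and a user customizes it by subclassing - which already requires understanding which methods the base class calls, and when.

```python
class ReportGenerator:
    """Framework base class: run() fixes the algorithm (a template method)."""

    def run(self, items):
        body = [self.format_item(item) for item in items]
        return "\n".join([self.header(), *body])

    # Extension points; users discover them from names and documentation.
    def header(self):
        return "REPORT"

    def format_item(self, item):
        return str(item)

# A 'domain specific' customization. To write it, the user must know that
# run() calls header() once and format_item() per item - knowledge of the
# framework's mechanics, not of the domain.
class CsvReport(ReportGenerator):
    def header(self):
        return "id,value"

    def format_item(self, item):
        return f"{item[0]},{item[1]}"
```

    Notice that nothing about the customized language's notation changes: the user still reads and writes ordinary base-language code.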


    Instead of defining extensibility mechanisms in the language, why not just open up the tools used to define languages in the first place, either to customize an existing language or create a new one?


    Well, it could be argued that designing languages is hard, and tooling them (especially programming languages) even harder. And the tools used to support the language design process can only be used by experts. That probably is the case for programming languages, but I'm not sure it needs to be the case for (modelling) languages that might target other horizontal domains (e.g. design, requirements analysis, business modelling), where we are less interested in efficient, robust and secure execution of expressions in the language, and more interested in using them for communication, analysis and as input to transformations. Analysis may involve some execution, animation or simulation, but, as these models are not the deployed software, it doesn't have to be as efficient, robust or secure. Other forms of analysis include consistency checking with other models, possibly expressed in other DSLs, taking metrics from a model, and so on. Code generation is an obvious transformation that is performed on models, but equally one might translate models into other (non-code) models.


    It could also be argued that having too many languages is a barrier to communication - too much to learn. I might be persuaded to agree with that statement, but only where the languages involved are targeted at the same domain and express the same concepts differently for no apparent reason (e.g. UML reduced the number of languages for sketching OO designs to one). Though it is worth pointing out that just having one language in a domain can lead to stagnation, and for domains where the languages and technologies are immature, inevitably there will be a plethora of different approaches until natural selection promotes the most viable ones - unless of course this process is interrupted by premature standardization :-). On the other hand, where a language is targeted on a broad domain, and then customized using its own extensibility mechanisms, the result carries a whole new layer of meaning (OO frameworks, stereotypes in UML), or even an entirely different meaning (some advanced uses of stereotypes). In the former case, there is a chance that someone who just understands the base language might be able to understand the extension without help; in the latter case, I'd argue that the use of the base language can actually hinder understanding, as it replaces the meaning of existing notation with something different.


    Finally, whether we like it or not, people and organizations will continue to invent and use their own DSLs. Some of these may never get completed, and will continue to evolve. Just look at the increasing use of XML to define DSLs to help automate the software development process - input to code generators, deployment scripts and so on. Yes, XML is a toolkit for defining DSLs; it's just that certain things are missing: you can't define your own notation, certainly not a graphical one; the one you get is verbose; and validation beyond well-formedness is weak.
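    The weakness of the validation can be shown with a small sketch (a made-up XML mini-DSL, parsed in Python): the parser checks well-formedness only, so a domain-level rule such as "port must be numeric" has to be written by hand.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML-encoded deployment DSL. The document parses without
# complaint even though port="oops" is nonsense in the domain.
doc = """
<deployment>
  <server name="web01" port="oops"/>
  <server name="web02" port="8080"/>
</deployment>
"""
root = ET.fromstring(doc)  # only well-formedness is checked here

# The domain rule must be coded separately:
bad = [server.get("name") for server in root.iter("server")
       if not (server.get("port") or "").isdigit()]
```

    (XML Schema or a DTD would recover some of this, but typed, structured validation still has to be authored alongside the vocabulary itself.)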


    Am I going to tell you what a toolkit for building domain specific modelling languages should look like? Soon I hope, but I've run out of time now. And I'm sure that some folks reading this will give feedback with pointers to their own kits.


    One parting thought. In this entry, I have given the impression that you identify a domain and then define one or more languages to describe it. But perhaps it's the other way round: the language defines the domain…


    Hints and tips for using Powerpoint and Visio for storyboarding

    • 3 Comments

    Here are a few techniques I have found useful for building storyboards or click-throughs using Powerpoint and Visio. If you have further suggestions please add them as comments to this article.


    Making parts appear and disappear

    Use custom animation in PowerPoint: you can change the order in which things appear and disappear, and decide whether each effect happens on mouse click or automatically after the previous effect. Alternatively, copy the slide and add or remove the part on the copy; the change then appears through the slide transition.


    Don't do everything in one slide

    Otherwise the animation will become unmanageable. I tend to have one slide per step in the scenario. Use your common sense to decide on the granularity of steps.


    Use 'callouts' to comment on aspects of the scenario

    A callout is a text box with a line attached that can point at a particular aspect of the graphic. I tend to color my text boxes light yellow (like a yellow post-it note). Remember that callouts can be made to appear and disappear too. Using callouts saves having separate slides of text notes, which avoids breaking up the flow, and the notes are seen in context.


    How to get a mouse pointer to move

    Paste in a bitmap of the pointer. Select the bitmap. Use SlideShow > Custom Animation > Add Effect > Motion Paths to define a path along which the pointer should move.


    Build/get a graphic for your application shell

    A graphic of the application shell can be used as a backdrop for all the other animation. Use Alt-PrintScreen to create a bitmap of the shell for your running application. This works if you're building a plugin for an existing application, or you have an existing application with a similar shell to the one you're storyboarding. Alternatively build a graphic of the shell using the Windows XP template in Visio.



    UML conference

    • 3 Comments

    Two posts in the same day. I guess I'm making up for the two month gap.

    Anyway, Alan Wills and I are giving a tutorial at the UML conference in Lisbon in October. The tutorial is called "How to design and use Domain Specific Modeling Languages" and is on Tuesday 12th October in the afternoon. We promise you not much presentation, interesting exercises and lots of discussion and reflection.
