Stuart Kent - Software Modeling and Visualization

  • stuart kent's blog

    Storyboarding tool integrated with team system

    • 2 Comments

    A while back I posted about how useful I find storyboards in the specification process.

    Now there's a tool integrated with VSTS for doing storyboarding. Check it out at http://www.stpsoft.co.uk/story/.

    Jos Warmer on small models

    • 0 Comments

    Here is a great post from Jos Warmer over on the DSL Tools forum.

    He makes some good arguments. Please add your own opinions to the discussion. We'll definitely take note as we think about the next set of features...

    New dsl definition format

    • 2 Comments

    Those of you who have downloaded the June CTP of DSL Tools will, by now, be aware of quite a few changes. In particular you'll have noticed that we've replaced the old .dd and .dmd formats with a single .dsl format, and provided a (currently unfinished) designer for this format.

    Whilst we're waiting for the documentation to catch up, I thought I would spend a few cycles telling you about the new .dsl format, and provide some insights into some of the thinking behind it.

    So where to start. Well, it's probably a good idea for you to take a look at a .dsl file. Follow one of the walkthroughs in the documentation that comes with the kit to get hold of an example.

    If you open the file in the xml editor, you'll see that it contains definitions for: classes, relationships, types, shapes, connectors, a designer, a diagram, plus sections to define xml serialization and explorer behavior, and a section to define things called connection builders.

    Looking a bit deeper, you'll see that a diagram defines shape maps (various kinds) and connector maps, that is, how shapes and connectors map to classes and relationships for that particular style of diagram.

    You'll also see that, as well as referencing the diagram to be used, a designer also defines one or more toolbox tabs, with tools on them.

    I'm going to focus in this post, and probably a couple after, on one of the key changes we've made, which is to make a clear distinction between how information in the model gets presented on the diagram, and the actions used to create new parts of the model.

    In the old dd format, you defined a shape, a class, and a map from class to shape, then a tool that referred to a shape. In the designer that was generated, using the tool caused an instance of the mapped class to be created together with an instance of the shape to view it. This scheme had the advantage of being simple, but the disadvantage of being very inflexible. Without a lot of custom code, you couldn't define (element) tools that did anything other than create an instance of a class embedded in the object mapped to the diagram with a new shape on the diagram viewing it. A restriction of this scheme was that it forced most classes in a domain model to be directly embedded in the root class.

    In the new format, the shape and connector mappings only govern how a diagram views the model: what shapes get created to view what model elements, and what connectors get created to view links of what relationships. The tools, in conjunction with merge directives and connection builders, govern how model elements and links get created in a model. An element tool refers to a class. It causes an element of that class to be created and merged into the model via the element mapped to the diagram or shape that the tool is dragged onto. Merge directives on the class of that latter element define the mechanics of the merge. A connection tool refers to a connection builder, which defines how the element mapped to the shape at the source drop point of the tool gets connected in the model to the element mapped to the shape at the target drop point of the tool.

    So we now have a definition in the style of the well-known model-view-controller pattern. Tools are the controllers, and are used to add stuff to the model, whose elements are viewed by shapes and connectors on the diagram, as determined by the shape and connector mappings defined for that diagram.

    Combined with the use of paths in the definitions of maps, connection builders and merge directives, plus the ability to flag the need for custom code at various key points, we end up with a very rich and flexible system for defining the interaction between the design surface and the model.
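    To make that view/controller split concrete, here is a minimal Python sketch of the idea: the tool and merge directive govern *creation*, while the shape map only governs *viewing*. All class and element names here are invented for illustration; none of them come from the real DSL Tools API.

```python
# Hypothetical sketch: creation (tools + merge directives) is kept
# separate from presentation (shape maps), as in the new .dsl format.

class Model:
    def __init__(self):
        self.root = {"kind": "Library", "children": []}

class MergeDirective:
    """Defines where a newly created element attaches in the model."""
    def __init__(self, target_kind):
        self.target_kind = target_kind

    def merge(self, target, element):
        assert target["kind"] == self.target_kind
        target["children"].append(element)

class ElementTool:
    """Controller: creates an element and merges it via the merge
    directive of the element the tool is dropped onto."""
    def __init__(self, element_kind, directive):
        self.element_kind = element_kind
        self.directive = directive

    def drop_onto(self, target):
        element = {"kind": self.element_kind, "children": []}
        self.directive.merge(target, element)
        return element

class ShapeMap:
    """View: maps model elements of a given kind to diagram shapes."""
    def __init__(self, element_kind, shape_style):
        self.element_kind = element_kind
        self.shape_style = shape_style

    def shapes_for(self, model):
        return [self.shape_style for e in model.root["children"]
                if e["kind"] == self.element_kind]

model = Model()
tool = ElementTool("Book", MergeDirective(target_kind="Library"))
tool.drop_onto(model.root)                 # creation, governed by the tool
shapes = ShapeMap("Book", "RoundedBox").shapes_for(model)  # viewing, governed by the map
print(shapes)  # ['RoundedBox']
```

    The point of the split is that either side can change independently: a different merge directive re-routes where elements land in the model without touching how they are drawn, and vice versa.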

    In the next post, I'll walk through an example. But for now, I'll point you at a great post by Alan that reveals some of the details of paths, merge directives and connection builders.

    DLinq Designer

    • 1 Comments

    Go to http://msdn.microsoft.com/data/ref/linq/ and download the latest CTP of LINQ. There you'll find a designer for DLINQ that's hosted in Visual Studio. Anything look familiar? Wondering what tools were used to build it? If you haven't guessed by now, here's another hint: http://msdn.microsoft.com/vstudio/DSLTools/

    Sam has started blogging

    • 0 Comments

    Sam Guckenheimer has started blogging - http://blogs.msdn.com/sam/

    Sam is the chief product planner for Visual Studio Team System. He's also just written a book about it.

    June CTP of DSL Tools now available

    • 2 Comments

    I haven't written a blog entry for a while, and the reason is that I've been very busy working on the latest CTP of DSL Tools.

    I'm pleased to announce that it is now available.

    Details are on the DSL Tools web page. When you get to the Visual Studio SDK download page, you'll need to scroll down to find the June 2006 CTP.

    We look forward to receiving your feedback.

    Existing users of DSL Tools, please take note of the Important information about migrating existing designers:

    "To migrate a designer produced with an earlier (though no earlier than November 2005) CTP for Domain-Specific Language Tools to work with this June CTP, you must have both the CTP with which the designer was produced and the June CTP installed on your computer. Therefore, if you have an older designer that you want to migrate, you should NOT uninstall the previous CTP before you install the June release of the Visual Studio SDK, which contains the June CTP of Domain-Specific Language Tools. You can migrate designers from older CTPs to the current one if you have the older CTP on one computer and the current CTP on another computer if you can copy files between the two computers."

    VSTS Work Item Type designer

    • 1 Comments

    Darren just IM'ed me to say that he had put out his first preview of the work item type designer he's been building with DSL Tools.

    See http://blogs.msdn.com/darrenj/archive/2006/03/20/555821.aspx 

    Japanese translation of Software Factories book

    • 0 Comments

    Another belated announcement.

    The Japanese translation of the Software Factories book is now available:

    http://bpstore.nikkeibp.co.jp/nsp/special/0472x/

    Belated announcement of February CTP of DSL Tools

    • 0 Comments

    I've been so immersed in getting V1 DSL Tools out the door, that I see I haven't posted a blog entry yet this year.

    I'm also very late in telling you about the new release of DSL Tools. The main purpose of this release was to integrate DSL Tools into the VS SDK, which is now done. There's also a significant refresh of the help documentation, which is likewise integrated.

    More details from Gareth at: http://blogs.msdn.com/garethj/archive/2006/02/04/AnnounceFebCTP.aspx 

    Seven stages of models

    • 0 Comments

    There aren't really seven, but it makes for a good title (look up "seven stages shakespeare" if you're feeling puzzled at this point).

    Anyway, I've been watching a debate between Harry, Gareth and now Steven, on the process of building models, including how alike or unalike that is to programming. There also seem to be a number of different terms being suggested for the different states of models - terms like complete/incomplete or precise/imprecise. Now I start to worry when I see terms like that without concrete examples to back them up. So I thought I'd weigh in with some comments, and give a concrete example to illustrate what I mean.

    The general point I'd like to make is that I think models and programs do go through different states, like complete or incomplete, but I wouldn't try to use generic terms to describe these states. I think those states are language, dare I say domain, specific, and depend largely on the tools we have for checking and processing expressions in those languages, be they programs or models. So a program goes through states such as 'compiles without error' and 'executes' and 'passes all tests' and 'has zero reported bugs against it' and 'has passed customer acceptance testing'. At least the first three of those states are very specific to programming - we know what it means to compile a program, to execute it, to test it. We'll have to find different terms for different kinds of model.

    And if I was going to use a generic term for the sliding scale against which we can judge the state of a model or program, I would use the term 'fidelity', as it has connotations of trustworthiness, confidence and dependability, which terms like 'precision' and 'completeness' don't really have.

    So now for an example of stages a model might go through, taken from my very recent experience as we are developing Dsl Tools (I apologize for the mild case of navel-gazing here).

    In the last few days I've been developing a new domain model for the new Dsl definition language that will replace the designer definition (.dsldd) file we currently use in Dsl Tools. There are various stages which that model goes through. I don't bother with sketching it on paper, as the graphical design surface is good enough for me to do that directly in the designer. I miss out lots of information initially, and the model is generally not well-formed most of the time. However, I then get to a point where I'm basically happy with the design (or as happy as I can be without actually testing it out) and I'm ready to get it to the next level of fidelity. Let's call this 'Initial Design Complete'.

    For the next step, I start by running a code generation template against the model, which generates an XSD. This usually fails initially, so I then spend time adding missing information to the model and correcting information that is already there. Finally the XSD generation works - I have more faith in my model than I had before. This is the 'XSD generation successful' state.

    Now I've got the XSD, I develop example xml files which represent instances of the domain model I'm building (in this case, particular dsl definitions) and check they validate against the XSD. I find out, in this stage, whether the model I'm building captures the concepts required to define DSLs. Can I construct a definition for a particular Dsl? As these examples are built, I refine the domain model, and regenerate the XSD, until I have a set of examples that I'm happy with. The fidelity of the model is increased and I'm much more confident that it's correct. We might call this the 'All candidate Dsl Definitions can be expressed' state.

    The next stage, which will now mostly be done by developers I work with, is to write the code generators of this new format that generate working designers. We'll write automated tests against those generated designers and exercise them to check that they have the behavior we expected to get from the Dsl definitions provided as input to the code generators. This process is bound to find issues with the original model, which will get updated accordingly. This might be the 'Semantics of language implemented' state, where here the semantics is encoded as a set of designer code generators.
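    The staged checks above can be sketched as a tiny pipeline: a model's fidelity is the last named stage whose check succeeded. The stage names follow the post; the model shape and the checks themselves are invented placeholders.

```python
# Toy rendering of the "fidelity states" idea: run gated checks in order
# and report the last stage that passed. The checks are stand-ins.

def fidelity(model, stages):
    reached = "Sketch"           # nothing checked yet
    for name, check in stages:
        if not check(model):
            break                # stop at the first failing gate
        reached = name
    return reached

model = {"classes": ["Shape", "Connector"], "examples_validated": True}

stages = [
    ("Initial Design Complete", lambda m: len(m["classes"]) > 0),
    ("XSD generation successful", lambda m: "Shape" in m["classes"]),
    ("All candidate Dsl Definitions can be expressed",
     lambda m: m["examples_validated"]),
]

print(fidelity(model, stages))
# All candidate Dsl Definitions can be expressed
```

    The same skeleton works for a program, just with different gates ('compiles without error', 'passes all tests', ...), which is really the point: the stages are domain-specific, the sliding scale is not.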

    So how is this process similar to programming? Well, with programming I'd probably do my initial sketches of the design on a whiteboard or paper. I might use a tool like the new class designer in Visual Studio, which essentially visualizes code, and just ignore compile errors during this stage.

    When I think the design is about right I'll then do the work to get the code to compile. I guess that's a bit like doing the work in the modeling example above to generate the XSD.

    Once I've got the code to compile, I'll try and execute it. I'll build tests and check that the results are what I and the customer expect. This, perhaps, is analogous to me writing out specific examples of dsl definitions in Xml and checking them against the generated XSD.

    We could probably debate for ages how alike generating an XSD from a domain model is to compiling a program, or how alike checking that the domain model is right by building example instances in XML is to executing and testing a program. I don't think it really matters - the general point is that there are various stages of running artefacts through tools, which check and test and generally increase our confidence in the artefact we're developing - they increase the fidelity of the artefact. Exactly what those stages are depends on the domain - what the artefact is, what language is used to express it, and what tools are used to process it.

    Now it would be interesting to look at what some of these states should be for languages in different domains. For example, what are they for languages describing business models...

    [10th February 2006: Fixed some typos and grammatical errors.]

    The ideal tool...

    • 1 Comments

    Just noticed this little nugget over on Steven's blog at MetaCase (just gloss over the blatant marketing):

    "Obviously the ideal tool would be as fast and powerful to use as MetaEdit+, and have its maturity, but also have the extensibility of Microsoft's DSL Tools, without crashing into the customization cliff. It should however run on all platforms like MetaEdit+, but be tightly integrated to Visual Studio, Eclipse, and vi. And be free, with complimentary commercial support. But I digress..."

    And whilst we're using it, we can stare dreamily out of the window and not be surprised as we watch a pig float gently by in the warm summer breeze...

    DSL tools samples download now available

    • 4 Comments

    Like buses, you wait a while then two come at once. That's right, just a week after we released the latest version of DSL Tools, we've also released an update to the samples. This download includes:

    1. A complete end-to-end sample of a project in which most of the code is generated from a DSL, and the use of the new validation and deployment features is illustrated. A detailed guide describes the features and their use.
    2. Examples of using custom code to enhance languages developed with the DSL Tools, together with a detailed guide. Features demonstrated include: computed properties, constraints on connections, line routing, shadow & color gradient control, shape constraints, and template-generated code.

    (1) is brand new. (2) is an update of the samples we released a few weeks ago.

    And next? We should get some more documentation up soon, and we're working on integrating the release into the VS SDK. After that, you'll have to wait a while as we do some root and branch work on some of the APIs and replace the .dsldm and .dsldd with a single .dsl format, with its own editor (yes, editing .dsldd files will become a thing of the past). We'll also be supporting two more pieces of new notation - port shapes and swimlanes - and providing richer and more flexible support for persisting models in XML files. Our goal is a release candidate at the end of March.

    The creator of WiX blogs about the deployment feature in the new release of DSL Tools

    • 1 Comments

    In the latest release of DSL Tools we make use of the WiX toolset to create MSIs. Rob Mensching, the creator of this toolset, has blogged about what we have done. Without WiX, our job would have been much harder in implementing this feature... so thanks, Rob.

    Rob also mentions Grayson Myers in his blog. Grayson is a dev on the DSL Tools team who implemented (and, to be frank, designed) this feature. Rob is right to compliment him on his efforts.

    New release of DSL Tools available

    • 0 Comments

    Our next release of DSL Tools is now available.

    This release works with the final released version of VS2005. Go to the DSL Tools website at http://msdn.microsoft.com/vstudio/teamsystem/workshop/DSLTools/default.aspx to download it.

    This is the release I talked about in my blog entry at http://blogs.msdn.com/stuart_kent/archive/2005/11/02/488101.aspx.

    Automating tedious tasks (2)

    • 0 Comments

    In my last entry I talked about how to go about automating tedious tasks for developers. Of course, developers are not the only people involved in software development. What about automating stuff for everyone else?

    Part of my job involves project management, and recently I've experienced the joy of having tasks automated for me. In particular, as described in an earlier blog entry, I have been able to set up a spreadsheet to connect to the work item tracking system in Visual Studio Team System and have burn down charts created for me (nearly) automatically - it takes me 5 mins per day to update them. (I have talked with the team system guys, and it looks like I can do even better, using some of their built-in reporting facilities - I just don't know how yet.) This has freed up loads of time, which I can now put to good use in the creative aspects of program management - working out what the product needs to do now and in the future, working through scenarios, writing specs, working closely with developers. I get the added benefit of having real-time tracking data at my fingertips, which helps me identify risks and problems earlier.

    One aspect of Team System that I like a lot is its configurability. They realized that a closed-world solution was not going to be acceptable in managing software development projects. So they've made sure that you can get at the data through APIs, that you can design your own work item types (every organization needs to record slightly different information to the next), that the data can easily be exported to the Microsoft Office applications Excel and Project, and that there's linkage to SharePoint for document storage. This makes it much more likely that you're able to customize the product to automate tasks that suit you or your organization's way of working. And so I've found for the work that I have to do.

    Automating tedious tasks

    • 1 Comments

    I've just added a new byline to my blog: automating tedious tasks.

    A complaint often levelled at Software Factories is that it will turn developers into automatons, just pushing buttons to drive the factory to do all the clever stuff. I think the exact opposite is the case. The point of technologies, such as DSLs, which we're building to support the Software Factory vision, is that they are focused on removing those tedious and boring tasks, thereby releasing the developer to focus on the creative and interesting parts.

    For example, you can write some code by hand which uses the same label in a number of places, as part of a property name, part of a class name, to identify a database, in the title of a web page, and so on, and then every time that term changes (which it will) you have to go and make the change by hand in all those places. Wouldn't it be better if we had tools which allowed you to identify that the particular label was used in a number of places, and then propagated the change to the label in all the places it's used whenever you need to change it? Then, instead of looking at an hour or more spent laboriously working through the code changing all uses of the label, you can do that in a couple of seconds and spend those hours trying to work out that tricky algorithm you've been stuck on. 

    DSL Tools supports something like this. We're lowering the bar to defining small domain specific languages from which code can be generated. So the label referred to above will be captured in one place, in an expression of the DSL, and a change to that label will be propagated to all the places it's used by the code generators. Now, in DSL Tools at present, code generation is fairly crude (but still useful). You can generate complete text files - as many as you like. This does mean that if you need to write custom, hand-written code to finish off the generated code, you need to put it in separate files. With the partial class facility of C#, this is not as restricting as it might seem (assuming you're coding in C#, that is). I fully expect developers to be writing and adapting the code generators, as much as I expect them to be writing the custom code in the carefully worked-out plug points. Our challenge, as vendors of a tooling platform, is to enable this way of working at a finer level of granularity and with an even lower bar to entry.
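    The single-sourced-label idea can be sketched in a few lines: the label lives in one place (the "model") and the generators stamp it into every artefact, so a rename is one edit followed by regeneration. The generated file contents below are invented for illustration.

```python
# The label is defined once; generators propagate it everywhere it's used.

label = "Customer"   # the one definition; change it here and regenerate

def generate(label):
    return {
        "entity.cs":  f"public partial class {label} {{ }}",
        "schema.sql": f"CREATE TABLE {label}s (Id INT);",
        "page.html":  f"<title>{label} details</title>",
    }

artefacts = generate(label)
renamed = generate("Client")   # one-line change, propagated to all artefacts
print(renamed["schema.sql"])   # CREATE TABLE Clients (Id INT);
```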

    Using VSTS with Excel for task tracking

    • 1 Comments

    I've had reason recently to use the task tracking features in Visual Studio Team System. We've been tracking our bugs for some time in VSTS, and wanted to move over to tracking tasks there as well - it's so convenient to have it all in one place. The blocker was that we had been using spreadsheets to generate glide path or burn down charts of task effort against time, with the tasks lists being maintained in the spreadsheets. This is not ideal as you can't query the lists very easily, it's yet another environment for team members to work in, and the lists get split over a number of spreadsheets (e.g. by feature crew) to make them manageable.

    I had heard about the Excel integration that VSTS provides and thought I'd give it a go. An hour later, and I had an Excel workbook refreshing a task list in one worksheet by running a predefined query against work items in VSTS, and then, through the magic of Excel formulae (the SUMIF function was particularly useful) I was generating a glide path chart of actual against planned. I also had another spreadsheet giving me a breakdown of remaining work by person. Now as developers close tasks, or update them with completed work, the changes can be reflected in the chart and table at the press of a button. This gives us real-time data on the progress of the project, allowing us to be more agile.
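    For the curious, the SUMIF aggregation amounts to summing remaining work per day over the work-item rows pulled out of the tracking system. Here's the same computation in plain Python, with made-up task data.

```python
# SUMIF(day_range, day, remaining_range), redone as a grouped sum.
from collections import defaultdict

tasks = [
    {"day": "Mon", "remaining": 8},
    {"day": "Mon", "remaining": 4},
    {"day": "Tue", "remaining": 6},
    {"day": "Tue", "remaining": 2},
    {"day": "Wed", "remaining": 3},
]

burn_down = defaultdict(int)
for t in tasks:
    burn_down[t["day"]] += t["remaining"]   # one SUMIF per day

print(dict(burn_down))  # {'Mon': 12, 'Tue': 8, 'Wed': 3}
```

    Plot those totals against the planned glide path and you have the burn down chart.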

    So, congratulations to the VSTS team. This is good stuff.

    Writing a book

    • 5 Comments

    I see that Steve has let the cat out of the bag - we (that is Steve, Alan, Gareth and myself) are writing a book on DSL Tools.

    Let us know if there are particular topics you'd like to see covered.

    We'll try and blog about some of the content as we write it. 

     

    DSL Tools with Visual Studio 2005 RTM

    • 6 Comments

    Now that Visual Studio 2005 has been released to manufacturing (RTM), folks are asking us when to expect a version of DSL Tools that works with the RTM release. Well, you won't have to wait long - it should be available within the next two or three weeks. And we've got two new features lined up for you as well:

    Deployment, where you create a setup project for your designer authoring solution using a new DSL Tools Setup project template, which, when built, will generate a setup.exe and a .msi for installing your designer on another machine on which VS2005 standard or above is installed.

    Validation, where it's now possible to add constraints to your DSL definition which can be validated against models built using your designer. Errors and warnings get posted in the VS errors window. We support various launch points for validation - Open, Save, Validate menu, and these are configurable. All the plumbing is generated for you - all you have to do is write validation methods in partial classes - one per domain class to which you want to attach constraints - which follow a fairly straightforward pattern.
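    Here is a rough Python approximation of that pattern: one validation method per domain class, discovered by a registration convention, with errors collected into a list (standing in for the VS errors window). The registration mechanism is invented for this sketch; the real feature generates the plumbing for you and uses C# partial classes.

```python
# One validation method per domain class; a framework collects the errors.

validators = {}

def validates(class_name):
    """Register a validation method for a domain class (invented mechanism)."""
    def register(fn):
        validators.setdefault(class_name, []).append(fn)
        return fn
    return register

@validates("Wizard")
def wizard_has_pages(element, errors):
    if not element["pages"]:
        errors.append("Wizard must contain at least one page")

def validate(model):
    """The launch point: run every registered check over the model."""
    errors = []
    for element in model:
        for check in validators.get(element["kind"], []):
            check(element, errors)
    return errors

print(validate([{"kind": "Wizard", "pages": []}]))
# ['Wizard must contain at least one page']
```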

    We're also going to release our first end-to-end sample at about the same time. The sample is a designer for modelling the flow in Wizard UIs (it's statechart-like), from which working code is generated.

     

    VS Team System Virtual Labs

    • 0 Comments

    You can now try out DSL Tools and other aspects of Visual Studio Team System, without going to the trouble of installing them, using one of the online virtual labs. See Rob Caron's recent posting for details.

    Are qualified associations necessary?

    • 4 Comments

    When I'm creating domain models, the ability to place properties on a relationship is proving very useful.

    For example, at the moment I'm remodelling our designer definition format (we generate it from a domain model) and have relationships between Connector and Decorator, and between Shape and Decorator. Decorators have a position, which is modelled as an enumeration (inner-top-right, etc.), but the values of the enumeration are different depending on whether the decorator is on a Shape or on a Connector (inner-top-right is not a meaningful position for a connector). Without properties on relationships, we'd have to subclass Decorator to ShapeDecorator and ConnectorDecorator, and all we'd be adding would be a differently typed Position property in each case. With properties on relationships, we can just attach a differently typed Position property to the relationships from Shape to Decorator and from Connector to Decorator, respectively - no subclassing required.

    UML has associations which are like our relationships. You can attach properties (attributes) to associations via their association classes. UML also has qualified associations, where you can index links of the associations by a property - e.g. an integer or a position. But it seems to me that one could achieve the effect of qualified associations by adding attributes to association classes, as we add properties to relationships. So, in my mind, if you've got association classes, qualified associations are redundant.
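    Here's a small sketch of that argument: a "qualified" lookup (find the link indexed by a qualifier value) recovered from an association class that simply carries the qualifier as an ordinary attribute. All names are illustrative.

```python
# An association class between Shape and Decorator, carrying the
# relationship-specific Position property discussed above.

class DecoratorLink:
    def __init__(self, shape, decorator, position):
        self.shape, self.decorator, self.position = shape, decorator, position

links = [
    DecoratorLink("StateShape", "NameDecorator", "inner-top-right"),
    DecoratorLink("StateShape", "IconDecorator", "inner-top-left"),
]

def decorator_at(shape, position):
    """The 'qualified association' lookup, rebuilt from plain attributes."""
    return [l.decorator for l in links
            if l.shape == shape and l.position == position]

print(decorator_at("StateShape", "inner-top-right"))  # ['NameDecorator']
```

    The only thing a built-in qualified association adds on top of this is the uniqueness guarantee (at most one link per qualifier value), which an association class would have to state as a constraint.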

    Am I missing something?

    The 'U' in 'UML'

    • 0 Comments

    Last night the conference banquet was held for the MoDELS conference in Jamaica. The MoDELS conference changed its name this year - it used to be known as the UML conference, and UML is still used in the byline. I'm the general chair for the conference this year, and Microsoft sponsored the banquet. As you know, Microsoft is arguing for domain specific languages, with UML playing a useful role in some circumstances within that approach.

    So I was mildly amused to see the title printed on the menu that interpreted the acronym 'UML' as 'Universal Modeling Language'. I also noticed that alongside the UML logo on the conference program, the acronym is expanded as 'United Modeling Language'. Having experienced OMG politics at first hand during the standardization process for UML 2.0, I find the latter interpretation particularly ironic.

    This reminds me of a game I once played with colleagues in a quiet moment at an OMG meeting. It's surprising how many substitutions for the letter 'U' one can come up with. Kept us going for a good 30 minutes.

    Model Transformation

    • 0 Comments

    I'm in Jamaica at the MoDELS 05 conference.

    Yesterday I attended a workshop on model transformation, where a number of different techniques were presented. The organizers had asked all submitters to apply their technique to a standard example (the object-relational mapping) so it was quite easy to compare the different approaches. There were also some excellent discussions. Here's my distillation of key take-aways:

    1) Most of the approaches modeled the tracing data (i.e. the mapping itself) in some way. Transformations created both the tracing data as well as the target model. Some of the approaches (e.g. triple graph grammars) used the tracing data generated by one application of the transformation as input into the next application.

    2) Rules which use pattern matching were a common theme running through most (though not all) techniques. Placing priorities on rules was one way of controlling the order in which rules are processed. Some combined rules with imperative code, to put control structure around rules or to 'finish off' work done by a rule. Some techniques used constraint solvers to avoid writing any imperative code at all.

    3) There was an interesting discussion about specifying and testing transformations. One could argue that a dedicated model transformation language, if it is any good, is high level enough not to require a separate specification. Even so, it's still necessary to validate that the transformation meets the business need: does it produce the expected results for designated example input models? So we need testing frameworks for delivering input models to a set of rules and checking the output is correct. How do we check the output is correct? It depends on what it is. If the model is executable, you can test its execution to see that it has the desired behavior. If not, you can at least inspect it and check that it is well-formed. One can also write well-formedness constraints on the tracing model (see 1) and check that generated traces are well formed. This then led into a discussion about debugging transformation rules... here again the tracing data may be useful information, especially if the order in which rules have been fired, hence tracing information created, is also kept.

    (An aside: In building DSL Tools we have been faced with the issue of specifying code generators, which are expressed as text templates. An effective way of specifying them, we have found, is to describe the behavior of the generated code for various combinations of inputs.)

    4) Another interesting discussion emerged around the topic of bidirectional mappings and model management. Suppose we have two models where one is not fully generated from the other - they are both edited directly in some way. The goal is then to keep them consistent and help the user bring them back to consistency, and there would probably need to be UI specific to the particular transformation in question to do this. Again, tracing information seems important in this scenario. But now consider a team scenario with multiple models and multiple mappings between them. Different team members make changes to models, checking them into source control. How do you go about keeping all models consistent with one another across the mappings? What are the steps a developer must take when they check in? When dealing with a large code base through source control, you soon learn to use diff and merge tools and also soon learn that there are different levels of consistency: will the code build? does it run all tests? does it meet all scenarios? I think it's a similar situation with models. We need diff and merge tools with models, and, they have to be domain specific, just like the languages. We need to start considering what the different levels of consistency are: are there any unresolved cross references between model elements? are the models well-formed? are the mappings between models consistent?

    5) Finally there was a brief discussion about diagrams and diagram layout. The point being that if you run a transformation to create a new model, then what about creating and laying out the diagram to go with that model (assuming a graphical modeling language here)? And what about getting that layout to be a transformation of the layout of the source model?
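    Points 1 and 2 can be illustrated with a toy rule-based transformation that emits both a target model and the tracing data linking source to target elements. The object-relational flavour of the example (and every name in it) is invented here.

```python
# Pattern-matching rules with priorities; the transformation produces the
# target model and the trace (the mapping itself) side by side.

rules = [
    # (priority, pattern predicate, action producing a target element)
    (1, lambda e: e["kind"] == "Class",
        lambda e: {"kind": "Table", "name": e["name"] + "s"}),
    (2, lambda e: e["kind"] == "Attribute",
        lambda e: {"kind": "Column", "name": e["name"]}),
]

def transform(source):
    target, trace = [], []
    for _, pattern, action in sorted(rules, key=lambda r: r[0]):
        for element in source:          # priority controls rule order
            if pattern(element):
                out = action(element)
                target.append(out)
                trace.append((element["name"], out["name"]))  # tracing data
    return target, trace

source = [{"kind": "Class", "name": "Order"},
          {"kind": "Attribute", "name": "total"}]
target, trace = transform(source)
print(trace)  # [('Order', 'Orders'), ('total', 'total')]
```

    The trace is exactly the artefact point 3 suggests testing: you can write well-formedness checks over it, or feed it back in (as the triple graph grammar approaches do) on the next application of the transformation.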

    Interesting, uh? Well I thought so, at least.

    Windows Workflow Foundation

    • 0 Comments

    If you're interested in Workflow then you'll want to have a look at Windows Workflow Foundation, announced at last week's PDC. Here are some links to get you going:

    The main page: http://msdn.microsoft.com/windowsvista/building/workflow/

    An introductory article.

    Dave Green's blog. Dave is the architect of Windows Workflow.

    September release of DSL Tools

    • 0 Comments

    A new release of DSL Tools is now available. You can download it from:

    http://go.microsoft.com/fwlink/?LinkId=43636

    The readme included in the zip file provides more information. This is hot off the press - the updates to the main DSL Tools site haven't filtered through yet.

    The list of known issues that accompanies this release is at:

    http://lab.msdn.microsoft.com/teamsystem/workshop/DSLTools/knownissues/default.aspx

    This release still works with VS2005 Beta2 - same as the May release. Quoting from the readme, new this release:

    • Numerous bug fixes and resolution of known issues.
    • Replacement of the 'Blank Language' template with the 'Minimal Language' template. This is about the smallest DSL you could create, consisting of: two domain classes, two relationships (an embedding and a reference relationship), one box and one line. In response to feedback received, we have changed the terminology used in this template to be more concrete, less esoteric (e.g. ExampleClass instead of ConceptA). This also makes some of the walkthroughs easier to follow.
    • Three new templates: Class Diagrams, Activity Diagrams and Use Case Diagrams. These provide starting points for the many users who wish to base their DSL on a UML notation, and provide some richer samples of what can be built with DSL tools. These samples are completely generated from DSL definitions (domain model in a dsldm file, and a notation definition in a dsldd file). As we enrich the feature set of DSL Tools we will be enriching these templates and removing some of their current limitations, as well as adding new templates. We'll also be showing customers how designers can be further enriched through code customizations, as we finalize the APIs for version 1.

    As Jochen points out, our next release should be available very soon after the RTM release of VS2005, and will work with that release. Other features planned for that release are:

    • Deployment - you add a setup project to your designer authoring solution, build that project and get delivered an MSI
    • Validation - a framework that makes it easy to write well-formedness constraints, with accompanying error messages, and have them validated from various launch points in the designer.

    [edited to update link to download page, instead of the file itself]

     

     
