As languages 'improve' over time, we see a first principle emerge:
Move responsibility for many of the 'good practices' into the language itself, allowing the language (and therefore the people who use it) to make better and more consistent use of those practices.
With assembler, we realized that we needed a variable location to have a consistent data type, so in comes variable declaration. We also want specific control structures like WHILE and FUNCTION. As we moved up into C and VB and other 3GLs, we started wanting the ability to encapsulate, and then to create objects. OO languages emerged that took objects into account.
Now that application architecture is a requirement of good application design, why is it that the languages don't enforce basic structural patterns like 'layers' and standard call semantics that allow for better use of tracing and instrumentation? Why do we continue to have to 'be careful' when practicing these things?
I think it may be interesting if applications had to declare their architecture. Classes would be required to pick a layer, and the layers would be declared to the system, so that if the developer accidentally broke his or her own rules, and had the U/I call the data access objects directly instead of calling the business objects, for example, then he or she could be warned. (With constructs to allow folks to override these good practices, of course, just as today you can create a static class which gives you, essentially, global variables in an OO language.)
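A minimal sketch of what "declaring architecture" might look like, using Java annotations. All of the names here (the Layer enum, the BelongsTo annotation, the example classes) are invented for illustration; a real implementation would enforce the rule at compile time rather than by an explicit runtime check.

```java
import java.lang.annotation.*;

public class LayerDemo {
    // The application declares its layers, ordered top to bottom.
    enum Layer { UI, BUSINESS, DATA }

    // Every class must pick a layer.
    @Retention(RetentionPolicy.RUNTIME)
    @interface BelongsTo { Layer value(); }

    // A call is allowed only when the caller sits directly above the callee.
    static boolean callAllowed(Class<?> from, Class<?> to) {
        Layer f = from.getAnnotation(BelongsTo.class).value();
        Layer t = to.getAnnotation(BelongsTo.class).value();
        return t.ordinal() - f.ordinal() == 1;
    }

    @BelongsTo(Layer.UI)       static class OrderScreen {}
    @BelongsTo(Layer.BUSINESS) static class OrderService {}
    @BelongsTo(Layer.DATA)     static class OrderDao {}

    public static void main(String[] args) {
        // UI -> Business is fine; UI -> Data skips a layer and gets flagged.
        System.out.println(callAllowed(OrderScreen.class, OrderService.class)); // true
        System.out.println(callAllowed(OrderScreen.class, OrderDao.class));     // false
    }
}
```

The point is not this particular mechanism, but that the layer membership and the legal dependencies become facts the tooling can see and verify, instead of conventions the developer must remember.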
What if an application had to present its responsibilities when asked, in a structured and formal manner? What if it had to tie to a known hierarchy of business capabilities, as owned by the organization, allowing for better maintenance and lifecycle control?
In other words, what would happen if we built into a modern language the ability of the application to support, reflect, and defend the solution architecture?
Maybe, just maybe, it would be time to publish the next seminal paper: "Use of unconstrained objects considered harmful!"
I see Software Factories as pointing in this direction, although the path is a little murky right now.
That would be a recipe for ensuring the languages don't get used.
There are exceptions to most rules, and there will always be cases where following Best Practices (if they don't turn into tomorrow's "What were we thinking?" practices) is simply going to be overkill...database abstraction layers where I pretend I don't know what database technology I'm using in an application that I *know* will only be used for SQL Server, and for a defined window of time, would be one example (and if the customer has no intention of ever using Oracle, for example, is he really paying you to write extra code to cater for the possibility that he might?).
The truth is that not all applications *require* good application design...sometimes an application just needs to be good enough for its intended purpose, and all you'd be adding is time and code bloat.
So to nanny people into a Patterns and Practices Group Knows Best approach would not be helpful.
Why not make these requirements available as template options rather than mandating assumptions about what everyone should be doing all the time?
The last thing we need is development tools which appear to have been traumatised over potty training.
Simple answers to simple questions:
why is it that the languages don't enforce basic structural patterns like 'layers' and standard call semantics that allow for better use of tracing and instrumentation?
Because language designers don't understand or care about application architecture. In their view, application architecture is "Somebody Else's Problem".
Recently I was trying to decide if I would buy a copy of "Evaluating Software Architectures: Methods and Case Studies" by Clements. I'm still not sure; however, I was struck by a bit from an early sample chapter "To be architectural is to be the most abstract depiction of the system that enables reasoning about critical requirements and constrains all subsequent refinements."
If that's the viewpoint of an architecture that is being taken, then there would have to be significant flexibility in describing the architecture that a software system would claim, and this description would have to be presented along with the claim for verification. This is almost certainly plausible when the software system is compiled.
For runtime, much harder, but much nicer. And we've got to use all those extra cycles from the other cores for _something_, right?
With all due respect, you must be writing your code in assembly. After all, who would want 'type safety' or even 'inheritance' since it clearly would get in the way and bloat the code and must come from the mind of a control freak.
Give me a break. Adding a language innovation is not an arm-twisting exercise. If an innovation is valuable, people will use it. You can always ignore it.
Nick, to a greater or lesser extent a variety of "4GL" tools provided the kind of constructs you're talking about, back in the 1990s. Companies like Magic Software, USoft and others provided really nice solutions that focused developers' minds on the domain at hand (principally run-of-the-mill client-server database apps) and allowed them to build and validate apps that could be transparently deployed across heterogeneous client OSs, DBMSs, network protocols, etc.
But who has ever heard of these companies or can remember the tools? Not many people. Why? Because Java came along, and everyone forgot about these tools and platforms. Java gave people an excuse to get down and dirty and play with lower-level code again, which, let's face it, is much more fun than working within constraints imposed by someone else...I'm vastly oversimplifying the market dynamics at the time, but the "coolness factor" certainly played a strong role in the ditching of the 4GL model for the next batch of 3GLs.
I think there's a broader lesson here: that most of the time, people will fight for what they think will best benefit their CV, rather than what's right for their organisation.
Perhaps, but if you notice, the general progression of our industry is not normally to switch languages, but to innovate on a language. C was the root language simply because of its ubiquity and open source nature, the seed planted by Bell Labs in Unix.
I'm not asking for a 4GL. I'm asking for features to add to C# (and Java) that allow the language to perform an additional layer of constraint management. Perhaps it should be in the IDE and not the language. Hmmmm.
There is at least one tool that can be used as a design-rule pattern builder. I am speaking about the VS2005 IDE.
It is already there; look into the Visual Studio 2005 Code Analysis tool. It contains almost everything we need to control "noisy" code produced by some developers, and it already groups its rules into topics.
For example, some of the useful rules are:
CA1806: Do not ignore method results
CA2202: Do not dispose object multiple times
CA1501: Avoid excessive inheritance
CA1502: Avoid excessive complexity
CA1048: Do not declare virtual members in sealed types
CA1040: Avoid empty interfaces, etc.
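As an illustration of the kind of defect these rules catch, CA1806 flags calls whose return value is silently discarded. A minimal sketch in Java (the rule itself targets .NET, but strings are immutable in both languages, so the same mistake arises):

```java
public class IgnoredResult {
    public static void main(String[] args) {
        String s = "hello world";
        s.toUpperCase();            // result discarded: s is unchanged (the pattern CA1806 flags)
        String t = s.toUpperCase(); // fix: capture the returned value
        System.out.println(s); // hello world
        System.out.println(t); // HELLO WORLD
    }
}
```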
No one of these rules individually guarantees against common design/performance errors, but taken together they are a good starting point for a skilled professional. They do not help you properly apply design patterns such as Singleton, Composite, or Visitor, but they really help as a pre-4GL layer. (3.5 GL :)
I think VS2005 really stands one step closer than the others to the next-generation IDE. In my experience, I implemented most of the patterns either as code templates or as internal frameworks used by my company. It works well.
And one last thing: I wrote almost all of these frameworks using generic interfaces, through which I bring the world of abstractions (useless if used without practical background) down to a real base.
I clearly have not explained myself very well. The items you illustrate, while good practices, are not the practices that I wish to constrain by declaring architecture in code! Every single one is non-architectural at the application and systems level.
I'll blog on this topic again, with a better story.
There are several design principles that we use to guide us in designing software – abstraction, information hiding, separation of concerns, modularization, etc. Those are language (and design method) agnostic. Of course, modern languages (especially OO) provide us mechanisms to support those principles – encapsulation, public/private methods, abstract classes, etc.
If I understand you correctly, you are suggesting adding some additional mechanisms to the languages that support/enforce decisions made in design. I think generally it’s a good idea; I’m just not sure what other mechanisms would be useful. In your example with a layering architecture, I can see how namespaces and the using directive can be used to control the uses relationship. Do you have other examples?
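One way the namespace idea could be made concrete is to declare the uses relationship itself as data and check dependencies against it. A hypothetical sketch, with invented namespace names; a real tool would extract the actual dependency graph from the compiled code rather than take it as input:

```java
import java.util.Map;
import java.util.Set;

public class UsesRuleCheck {
    // Declared architecture: each namespace lists the namespaces it may use.
    static final Map<String, Set<String>> allowed = Map.of(
        "app.ui",       Set.of("app.business"),
        "app.business", Set.of("app.data"),
        "app.data",     Set.of());

    // A dependency is legal only if it appears in the declaration.
    static boolean mayUse(String from, String to) {
        return allowed.getOrDefault(from, Set.of()).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(mayUse("app.ui", "app.business")); // true
        System.out.println(mayUse("app.ui", "app.data"));     // false: UI must not touch data directly
    }
}
```

The same table could drive a compiler warning, a build break, or just a report – the interesting part is that the architecture becomes a checkable artifact instead of a diagram on a wiki.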