Repeat History() Until True

You may not remember, but when ASP.NET was in the early stages of development it was called ASP+. I wonder if we'll see history repeating itself so that, when it finally clambers out of beta, Google+ will actually be called Google.NET. Probably not. I guess there are too many hard-up lawyers around at the moment looking for work. But it does seem that, in many spheres of life, we never get the hang of the notion that history repeats itself.

In politics we go through regular cycles of left-leaning (run out of money) and right-leaning (no new taxes, maybe) government. With the environment we can't make up our mind whether we want nuclear (clean but dangerous) or fossil-fuelled (safe but global warming). As for the financial crisis, we're torn between printing money (more debt) and cutting spending (less growth). It's almost like history doesn't teach us anything.

So what about my own little corner of the world: technical guidance and developer documentation? It sometimes seems like the process for this is built around the concept of conveniently ignoring the lessons of history. Here at p&p, our goal is to provide guidance for developers, system architects, administrators, and decision makers using Microsoft technologies - our tag line is, after all, "proven practices for predictable results". But designing projects to achieve this sometimes seems to be history repeating itself. And not always in a good way.

The problem tends to center around how to figure out what guidance users require, and how to go about creating it. My simple take is that you just need to discover exactly what the users need, what you want to guide them towards (the technologies, scope, and technical level that are appropriate), and the format of the guidance (such as book, online, video, hands-on labs, FAQ, and more). From that you can figure out what the TOC looks like and what example code you need, or what frameworks or applications you must build to support it. So, as usual, it's all about scenarios and user stories.

In the majority of cases, this is exactly how we plan and then start work on our projects. But the problem is that it's very easy to start by throwing a bunch of highly skilled developers at a new technology and letting them play for a while. OK, so they need to explore the technology to find out how it works, and figure out what it can do. They go through the process of spiking to discover the capabilities and issues early on so that their findings can help to shape the decision process for designing the guidance and defining the scope.

However, it's very easy for developers to be influenced by the capabilities of the technology rather than the scenarios. As a writer, I ask questions such as "Is this feature useful; and, if so, when, where, and why?", "What kind of application or infrastructure will users have, and how do the technology features fit in with this?", and "Does the technology solve the problems they have, and add value?" In other words, are we looking at the technology from a user requirements point of view, or are we just reveling in the amazing new capabilities it offers?

Here's an example. When you read about Windows Azure Service Bus, it seems like an amazing technology that you could use for almost any communication requirement. It gives you the opportunity to do asynchronous and queued reliable messaging between on-premises and cloud-based applications, and supports an eventing publish/subscribe model. It can use a range of communication and messaging protocols and patterns to provide delivery assurance, can scale to accommodate varying loads, and can even be integrated with on-premises BizTalk Server artifacts.
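
To make that a little more concrete, here is roughly what the queued, reliable messaging part looks like in code. This is only a minimal sketch using the brokered messaging API in the Microsoft.ServiceBus.Messaging client library; the connection string is a placeholder, and the "orders" queue is a made-up name for a queue that is assumed to already exist.

```csharp
// Minimal sketch of queued messaging through the Service Bus brokered API.
// Assumes the Microsoft.ServiceBus.Messaging client library is referenced and
// that a queue named "orders" has already been provisioned in the namespace.
using System;
using Microsoft.ServiceBus.Messaging;

class QueueSketch
{
    static void Main()
    {
        // Placeholder - use the connection string for your own namespace.
        string connectionString = "<your Service Bus connection string>";

        // Sender and receiver both address the same durable queue, so the
        // on-premises application and the cloud-based application never need
        // to be online at the same time.
        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "orders");

        // Send: the message is stored by the Service Bus until it is collected.
        client.Send(new BrokeredMessage("Order #42"));

        // Receive: the default PeekLock mode holds the message until Complete()
        // is called, which is where the delivery assurance comes from.
        BrokeredMessage message = client.Receive();
        if (message != null)
        {
            Console.WriteLine(message.GetBody<string>());
            message.Complete();
        }
    }
}
```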

The Microsoft BizTalk Server Roadmap talks about future integration with Windows Azure but it seems as though you could support many of the BizTalk scenarios with Azure right now using Azure Service Bus, as well as extending BizTalk to integrate with cloud-based applications. But what are the realistic scenarios for real-world users? Will developers try to retrofit Service Bus capabilities into existing applications, or does it make sense only when building new applications? Will they attempt to extend BizTalk applications using Service Bus, or aim to replace part or all of their BizTalk infrastructure with it?

And what Service Bus capabilities are most useful and practical to implement in real-world scenarios and on existing customer infrastructures? Are most developers desperate to implement a distributed publish/subscribe mechanism between on-premises and multiple cloud-based applications, or do they mainly want to implement queuing and reliable message communication? Will it mean completely reorganizing their network and domain to get code that works OK in development spikes to execute on their systems? Is there a danger that experimental development spikes not planned with specific scenarios in mind will end up being completely unrealistic in terms of being applied to today's typical application implementations and infrastructure?
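
To illustrate the distinction those questions hinge on, here is the publish/subscribe side as a similar minimal sketch: one topic, several subscriptions, and each subscriber gets its own copy of every message. Again, the "audit" topic and the "billing" and "shipping" subscriptions are names I've made up, and they are assumed to have been provisioned already.

```csharp
// Minimal sketch of the publish/subscribe (topic and subscription) model.
// Assumes the Microsoft.ServiceBus.Messaging client library and that the
// "audit" topic and its "billing" and "shipping" subscriptions already exist.
using System;
using Microsoft.ServiceBus.Messaging;

class PubSubSketch
{
    static void Main()
    {
        string connectionString = "<your Service Bus connection string>";

        // One publisher sends a message to the topic...
        TopicClient publisher = TopicClient.CreateFromConnectionString(connectionString, "audit");
        publisher.Send(new BrokeredMessage("Order #42 shipped"));

        // ...and every subscription receives its own copy, so several
        // applications (cloud-based or on-premises) can react to one event.
        foreach (string name in new[] { "billing", "shipping" })
        {
            SubscriptionClient subscriber =
                SubscriptionClient.CreateFromConnectionString(connectionString, "audit", name);

            BrokeredMessage copy = subscriber.Receive();
            if (copy != null)
            {
                Console.WriteLine(name + ": " + copy.GetBody<string>());
                copy.Complete();
            }
        }
    }
}
```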

I can see that playing with the technology is one way to find this out, but it's also an easy way to build example code and applications that don't reflect real-world scenarios and requirements. They aren't solutions; they can turn out to be just demonstrations of features and capabilities. This is why we expend a great deal of effort in p&p on vision and scope meetings, customer and advisory group feedback, and contact with product groups to get these kinds of decisions right from the start.

And let's face it, there are several thousand other people in EPX here at Microsoft writing product documentation, each focused on their own corner of the technology map and concentrating on their own specific product features and capabilities. So it's left to our merry little band here in a forgotten corner of the campus, and scattered across the world, to look at it from an outsider's point of view and discover the underlying scenarios that make the technology shine. Finding the real-world scenarios first can help to prevent the dreaded disease of feature creep, and the wild exuberance of over-complexity, from burying the guidance in a morass of unnecessary technological overkill. And that's before I can even start writing it...

And just in case, as a writer, you are feeling this pain, here are some related scenario-oriented diatribes:

  • Fortunately, Microsoft's documentation is pretty good most of the time, which does help, but I do think the focus should be not just on making sure the technology provides what the user wants, but also on making sure the API enables developers to handle the common cases efficiently. A good example that is familiar to me is the DirectX API. Compare the amount of code you need to write to draw a rotating cube in D3D5 and in D3D11, and how much documentation you have to sift through to do it. It has gotten a lot more efficient, and the API does more for the developer. Some might think the API implementation gets bloated, but the outward interface becomes slimmer and easier to use, especially for the common case. That also means the onus is on the API to get the boring details correct, and having those details well tested and implemented by the experts leads to less buggy software.

    Nitpick: I think you mean "until False" in the title ;)

  • Thanks Stephen - you make a very good point. This happens with lots of technologies (in the .NET world you only have to think about System.IO to see this; many common tasks are very simple now, but the old members are still there - there's a quick illustration below). Enterprise Library is another example with which I had to become excruciatingly familiar when working on that project. And I also understand what you mean by the nitpick - I suppose I was intimating the current situation rather than what it should be... though more likely I was just looking for a snappy title.
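
    As a quick illustration of the System.IO point, here is the kind of thing I mean - the newer convenience member alongside the original stream-based members, which are both still in the framework. It's just a throwaway sketch, not taken from any project:

    ```csharp
    using System;
    using System.IO;

    class IoSketch
    {
        static void Main()
        {
            // The newer convenience member: one call for the common case.
            string textNew = File.ReadAllText("readme.txt");

            // The older members are still there (and still useful, for example
            // when streaming large files), but the simple case used to take
            // several lines like these.
            string textOld;
            using (StreamReader reader = new StreamReader("readme.txt"))
            {
                textOld = reader.ReadToEnd();
            }

            Console.WriteLine(textNew == textOld); // True - same content, less code.
        }
    }
    ```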
