The .NET Vision (aka: the big picture): what are we working towards? [Kit George]

In response to the query for input on what to blog about today, David asked the following question:

"I want to know the big picture on things. What is the vision here? The end game?"

This is one of those questions where you'll get a slightly different answer depending on who you talk to, of course. So I'm gonna imagine myself in a couple of different pairs of shoes for a second.

Let's imagine I'm on the CLR performance team. The folks there would point out that our Whidbey focus has been to a) improve startup time and b) improve working set. The former is because we've had a lot of (justified) complaints, so it's obvious we should do something about it. The latter is more interesting from a 'what's the big picture' standpoint. We believe there are significant wins in making NGen'd images with as much shareable memory as possible. In V1.1, the amount of shareable memory in an image was pretty poor (roughly 15-25%). In Whidbey, we've done a boatload of work and pushed that to 75%-85%, a remarkable increase.
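
To make 'working set' a little more concrete, here's a throwaway sketch (mine, not anything we ship) that just prints the process working set before and after pulling in extra framework code. The sharing work is about letting more of the NGen'd pages behind those numbers be backed by the same physical memory across processes. It assumes the Whidbey-era Process.WorkingSet64 property:

    using System;
    using System.Diagnostics;

    class WorkingSetDemo
    {
        static void Main()
        {
            // Working set = the physical pages currently mapped into this process.
            // The more of an NGen'd image that is shareable, the more of those pages
            // can be backed by the same physical memory across processes.
            Console.WriteLine("Working set at startup: {0:N0} bytes",
                Process.GetCurrentProcess().WorkingSet64);

            // Touch some framework code so additional pages get pulled in.
            System.Xml.XmlDocument doc = new System.Xml.XmlDocument();
            doc.LoadXml("<root/>");

            Console.WriteLine("Working set after using System.Xml: {0:N0} bytes",
                Process.GetCurrentProcess().WorkingSet64);
        }
    }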

Now why is this important to us? First and foremost, our vision for .NET is to have performance characteristics similar to unmanaged code. We're not joking. There's work to do, and we know it, but we've got our sights set on a specific target, and we know we can get there. In getting there, one of the key steps is going to be getting as many managed dependencies as we can (that is, as many folks building on top of managed code as possible). This gives us more data, more knowledge about the variety of apps that can be built using our product, and the different performance scenarios we need to be aware of and optimize for. One of the key scenarios we're gonna have to bite off in the near future is how to make sure we're ready for OSes to take dependencies on us. To do that, we have to be meeting their performance goals, of which shareable working set is one of the most critical. And at that point, we'll really be starting to achieve our broader objective, of making .NET the managed coding platform of choice.

In both these cases, we’re asking a fundamental question of customers (external in the first case and, as it turns out, internal in the second): ‘What is it that’s preventing, or limiting, you from using managed code?’ Once we have the answer, we take the feedback and respond by addressing those issues head on, no matter how long-term the solution.

Now let’s step to the side and imagine I’m working on the CLR hosting team. The hosting team’s key customer was of course ‘Yukon’, Microsoft’s next version of SQL Server. The CLR hosting team has a very real customer with rigorous demands of the CLR in terms of capabilities, performance, and above all, reliability. After all, we want to be able to ensure that user managed code in SQL Server can run without jeopardizing the stability of the database server. We’ve done just heaps of testing, and taken zillions of bug fixes with this in mind, to ensure that we can achieve our goal not only for SQL Server but for any host that has similar requirements, and by doing this have a successful relationship with SQL Server. (Note: from a CLR perspective, it’s great to have a key customer like this. But more importantly, we want to make sure that the capability to host the CLR is a very real feature. The absolute best way to do this is to have someone who is depending on delivery of that feature. In other words, having a concrete, demanding customer makes sure we get a good hosting story for everyone else.)

Now the broad objective here, of course, is to overcome boundaries which would have prevented managed code from being the code of choice. If we hadn’t been able to be hosted, then in all likelihood SQL Server would not have been interested in utilizing .NET. In other words, the initial question was ‘what needs to happen for you to utilize and expose .NET?’. And once we had the answer, we simply ripped into the problem and addressed it.

I’ve given two examples, but I would assert to you that wherever you go in the .NET team right now, either within the CLR, or within other logical groups, the underlying objective is the same. We want to build a product that is your primary coding choice. Not an alternative. Not ‘a nice idea’. Your platform of choice. It turns out that right here, right now, we have a lot of big meaty issues to solve and therefore, you’re hearing about some of the broad initiatives (performance, hosting, servicing, etc.). But in two years, the question will be the same: ‘what do YOU need, in order to either a) move to .NET, or b) make .NET the best experience for coding there is’. And we'll respond to that.

And now, let’s come back to the BCL. Is our objective the same? It is entirely aligned with the above goals. We have specific sub-goals. For example, we want to ensure that we avoid duplication throughout the framework, we want to ensure we resolve the top requests from external customers, and we want to support internal teams to ensure their goals are met. But at the end of the day, each of those wraps up into one driving need: to offer you the best coding experience there is. We want it to be productive (a critical focus), we want to enable the writing of performant, reliable code, we want a secure environment, we want competitive parity… we want it all. We want the answer to the (example) question of ‘my boss just asked me what platform we should use to design this competitive website: what should I suggest?’ to be obvious, because across the board, we’re the leader.

I would suggest that http://www.osnews.com/story.php?news_id=9441&page=4 is an interesting read in this space as well: it looks a little broader than I do, and also points out issues along the way.

So the features you see in the BCL are steps in that direction: making this the best programming platform there is. This is why your feedback is so important to us. Because at the end of the day, if you think there’s something about .NET that’s unperformant, unreliable, insecure, missing (‘why do I have to PInvoke to do <x>?’), or otherwise deficient, we want to know.

That’ll be the same for the next version too.

At least.
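
And to put a concrete face on that 'why do I have to PInvoke to do <x>?' complaint, here's a quick sketch of the kind of call people mean (my own illustration, not an official sample): in v1.x there was no managed API for free disk space, so you dropped down to Win32 for it (Whidbey adds System.IO.DriveInfo for exactly this scenario):

    using System;
    using System.Runtime.InteropServices;

    class DiskSpace
    {
        // No managed equivalent in .NET 1.x, so a raw Win32 call it is.
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        [return: MarshalAs(UnmanagedType.Bool)]
        static extern bool GetDiskFreeSpaceEx(
            string lpDirectoryName,
            out ulong lpFreeBytesAvailableToCaller,
            out ulong lpTotalNumberOfBytes,
            out ulong lpTotalNumberOfFreeBytes);

        static void Main()
        {
            ulong freeToCaller, total, totalFree;
            if (GetDiskFreeSpaceEx(@"C:\", out freeToCaller, out total, out totalFree))
                Console.WriteLine("Free space on C: {0:N0} bytes", totalFree);
            else
                Console.WriteLine("Win32 error: " + Marshal.GetLastWin32Error());
        }
    }

Every call like that is exactly the kind of feedback we want to hear about.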

  • Nothing's going to stop me from using managed code. But for those people I've talked to who could use .NET but don't... the primary thing that puts them off is the ~20Mb install of the FX distributable. Yes, they can include it with their installer (but they want to have a ~2-4 Mb installer). Yes, users have probably got it already via Windows Update (but you can't rely on that). Yes, a 20Mb download is not a big deal in these days of broadband (but it's still not a no-brainer).

    (BTW: I don't know what the baseline for broadband is in the US, but here in the UK it's still 576 Kbps. But this is changing over the next few months; the baseline is soon going to be 2 Meg.)

    Anything that can be done to ease the developer's pain of getting the correct, targeted version of the runtime (1.0, 1.1, 2.0, 2.x...) onto end users' machines should be done IMHO. Now I'm not up to speed on this vis-a-vis Whidbey. You may have made it easier already.

    But if not... how about an ultra-small loader of the runtime that's made available as a Critical Update via WU? It's a no-brainer to get it. It detects attempts to install and/or run managed code, and automatically pulls the correct version of the runtime onto the machine just-in-time. JIT installation of the runtime! (A rough sketch of the detection half of that idea follows at the end of this comment.)

    The secondary thing that puts people off using .NET is the ease with which users can decompile their code - their "precious IP" (slight note of sarcasm creeping in). Forget about the Dotfuscator Community Edition (who wants something that's not the best?). The way to deal with this problem once and for all is to include the BEST obfuscator out there (Demeanor?) and to give away the full version of it for free. Ideally you - Microsoft - buy the Demeanor product, develop it going forward, and include it with every version of VS.NET.

    Now folk could pick holes in the above arguments and suggest good and obvious workarounds or alleviating measures, but I'm talking about fundamentals here. I'm talking about the first-hand experience I've had of talking to people to find out what puts them off and keeps them writing in VB6 or whatever. And it's not the devs who are put off, it's their managers.
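
    Roughly, the detection half of that idea: a real bootstrapper would have to be unmanaged (the whole premise is that the CLR may not be on the box yet), but the registry check it would make looks something like this sketch, assuming the 1.x/2.0-era setup keys:

        using System;
        using Microsoft.Win32;

        class RuntimeCheck
        {
            static void Main()
            {
                // The 1.x/2.0-era frameworks register themselves under this key;
                // a bootstrapper would check it before deciding what to download.
                RegistryKey ndp = Registry.LocalMachine.OpenSubKey(
                    @"SOFTWARE\Microsoft\NET Framework Setup\NDP");
                if (ndp == null)
                {
                    Console.WriteLine("No .NET Framework versions registered.");
                    return;
                }
                foreach (string version in ndp.GetSubKeyNames())
                {
                    RegistryKey key = ndp.OpenSubKey(version);
                    object install = key.GetValue("Install");
                    Console.WriteLine("{0}: {1}", version,
                        (install is int && (int)install == 1) ? "installed" : "not marked installed");
                    key.Close();
                }
                ndp.Close();
            }
        }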
  • Of course, I haven't answered the question in the terms in which it was originally couched - which was mainly performance-related. And my suggestions are probably outside of the remit of the BCL team. But what I'm saying is that, with regards to takeup of managed code, the size of the redist. and the reverse compilation issue are what hinder all the people I've spoken to - not performance.

    As a writer of .NET applications, I personally would love to be able to write a beautiful piece of shareware like TextPad, have it be a <2Mb download, and not have to fork out $800 for Demeanor in order to protect my code. But I will (probably) buy Demeanor, because I want protection as much as anyone, although sometimes I think that I'm being a bit silly.
  • Mike Sampson from the VB team has written about a bootstrapper to help install the CLR lazily on your clients' machines.

    You can read about it here on his blog:
    http://blogs.msdn.com/misampso/archive/2004/03/11/88402.aspx
  • You guys are doing a hell of a job! Although the market needs to "protect" its code, this doesn't happen even in the J2EE world. Thus, targeting the performance issue is not only more practical but, from my experience with clients, much more needed than anything else... besides features :)

    Great post!!!
  • We're shipping managed apps. Our biggest complaint is related to startup time of a WinForms app. We have gone back to using splash screens even on a 3 GHz HT Pentium. The redist hasn't been too much of an issue, although we were very disappointed that XP SP2 didn't just install the .NET 1.1 redist. :-) We've also found some bugs on Win98 that don't show up on NT-based systems.
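
    For what it's worth, a minimal sketch of the splash-screen workaround mentioned above (the form names are made up): the splash gets its own UI thread so something paints while the real main form JITs and initializes.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        class Program
        {
            static Form splash;

            [STAThread]
            static void Main()
            {
                // Splash screen on its own UI thread so something paints while the
                // (slow) main form JITs and initializes.
                splash = new Form();
                splash.Text = "Loading...";

                Thread splashThread = new Thread(new ThreadStart(RunSplash));
                splashThread.SetApartmentState(ApartmentState.STA);
                splashThread.IsBackground = true;
                splashThread.Start();

                // Stand-in for the real, slow-to-construct main form.
                MainFormStandIn main = new MainFormStandIn();

                // Tear the splash down once the main UI is ready.
                while (!splash.IsHandleCreated) Thread.Sleep(10);
                splash.Invoke(new MethodInvoker(splash.Close));

                Application.Run(main);
            }

            static void RunSplash()
            {
                Application.Run(splash);
            }
        }

        class MainFormStandIn : Form
        {
            public MainFormStandIn()
            {
                Text = "Main window";
                Thread.Sleep(2000); // simulate expensive startup work
            }
        }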
  • Oh, one other big complaint about managed apps is the lack of support for post-mortem debugging. We send minidump error reports home for analysis on our unmanaged apps. We have never been able to get good minidumps from a managed app. This really bites. The best we can do is send back the exceptions and their respective stack traces, loaded module info, running thread info, etc., but no local var/parameter values.
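
    The "best we can do" approach described above looks roughly like the sketch below (class and file names are made up): hook AppDomain.UnhandledException and write out the exception, its stack trace, and the loaded modules, which is everything except the local variable and parameter values a real minidump would capture.

        using System;
        using System.Diagnostics;
        using System.IO;

        class CrashReporter
        {
            static void Main()
            {
                // Hook unhandled exceptions before anything else runs.
                AppDomain.CurrentDomain.UnhandledException +=
                    new UnhandledExceptionEventHandler(OnUnhandledException);

                throw new InvalidOperationException("demo crash");
            }

            static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                using (StreamWriter report = new StreamWriter("crash-report.txt"))
                {
                    // Exception.ToString() includes the message and the stack trace.
                    report.WriteLine("Exception: {0}", e.ExceptionObject);

                    report.WriteLine();
                    report.WriteLine("Loaded modules:");
                    foreach (ProcessModule module in Process.GetCurrentProcess().Modules)
                    {
                        report.WriteLine("  {0} {1}",
                            module.ModuleName, module.FileVersionInfo.FileVersion);
                    }

                    // Local variable and parameter values are exactly what this approach
                    // cannot recover - hence the request for real managed minidump support.
                }
            }
        }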