March, 2004

  • Eric Gunnerson's Compendium

    Work/Life Balance


    Every year, Microsoft does a widespread poll to determine how people view their work environment, their compensation, etc.

    One of the questions is something like, “Are you satisfied with your work-life balance?”

    That tends to be a sore topic on the PM side. Perhaps a bit of an explanation is in order...

    I've been trying to close down on a number of issues related to the compiler, get ready for a community review with my 4th level manager and his reports, and help firm up our relationship to Longhorn. Here's what it's meant for the last two days.

    Tuesday, I got into the office at 7:30 and worked straight through to 6:30 PM. I went home, watched a bit of TV, and then worked for a couple of hours.

    Today, here was my schedule

    7-8 Bike ride (14.67 miles, 57 minutes) Hilly.
    9-10 Work on a summary email for a topic (yes, I spent a whole hour on a single email)
    10-11 C# PM meeting
    11-12 Meeting to discuss versioning
    12-12:30 Lunch
    12:30-1:00 Prep for a meeting
    1:00 - 1:30 Review meeting on a C# feature that I hope to be able to talk about in a week or so
    1:30 - 3:00 C# Language Design Meeting
    3:00 - 4:00 Write up design meeting notes (didn't get this done in the hour)
    4:00 - 5:30 Compiler bug triage
    5:30 - 6:30 Dinner
    6:30 - 8:00 Carpentry (we're doing some remodelling)
    8:00 - 10:30 Powerpoint slides, email, other issues

    I realize that long hours aren't really a rarity in the tech industry, but even with that amount of time spent, I still have some things piling up.

    That question didn't get a very high rating when I responded to the poll.

  • Particular boats and Funicular goats


    Your task:

    1) Figure out what this reference is

    2) Figure out why it's an appropriate one

  • Using for purposes other than disposable objects...


    Doug asks:

    re: A lock statement with timeout...

    I've done this trick before to deal with common patterns in numerous methods. Usually lock acquisition, but there are some others. Problem is it always feels like a hack since the object isn't really disposable so much as "call-back-at-the-end-of-a-scope-able".


    When we designed the using statement, we decided to name it “using” rather than something more specific to disposing objects so that it could be used for exactly this scenario.
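    For example (a sketch of my own; TraceBlock is a made-up class, not anything in the framework), a type can implement IDisposable purely to run code at the end of a scope:

```csharp
using System;

// IDisposable used for "call back at the end of a scope", not for
// resource cleanup. TraceBlock is an illustrative name only.
class TraceBlock : IDisposable
{
    string name;

    public TraceBlock(string name)
    {
        this.name = name;
        Console.WriteLine("Entering {0}", name);
    }

    public void Dispose()
    {
        Console.WriteLine("Leaving {0}", name);
    }
}

class Example
{
    static void Main()
    {
        using (new TraceBlock("Main"))
        {
            Console.WriteLine("Doing work");
        }
        // Output:
        // Entering Main
        // Doing work
        // Leaving Main
    }
}
```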

  • Enumerators and boxing..


    Jeroen asked:

    While we're on the subject of boxing, why doesn't foreach do the same optimization as using? foreach always seems to box the enumerator struct.

    If you use a struct as an enumerator, you will always box when you go to IEnumerator. Interfaces are reference types, and you have to box to get an interface reference to a struct.

    The fix is to implement the strongly-typed enumerator pattern. Start with the IEnumerable/IEnumerator version and:

    1) Change the type of the Current property in the IEnumerator struct from object to the strong type.
    2) Remove IEnumerator from the implementation list on the struct type.
    3) Remove IEnumerable from the implementation list
    4) Change GetEnumerator() so that it returns the struct type rather than IEnumerator

    The compiler will then deal with the strongly-typed version. If you want to also allow the interface versions, you can implement them specifically.

    Here's some code (sorry about the formatting):

    public class IntegerListExplicit: IEnumerable
    {
        int count = 0;
        int allocated = 10;
        int[] elements = new int[10];

        public IntegerListExplicit() {}

        void Expand()
        {
            if (count == allocated)
            {
                int[] newElements = new int[allocated * 2];
                for (int i = 0; i < count; i++)
                    newElements[i] = elements[i];
                allocated = allocated * 2;
                elements = newElements;
            }
        }

        public int Add(int item)
        {
            Expand();
            elements[count] = item;
            count++;
            return count - 1;
        }

        public int Count { get { return count; } }

        void CheckIndex(int index)
        {
            if (index < 0 || index > count - 1)
                throw(new IndexOutOfRangeException(String.Format("Index {0} out of range", index)));
        }

        public int this[int index]
        {
            get { CheckIndex(index); return elements[index]; }
            set { CheckIndex(index); elements[index] = value; }
        }

        public override string ToString()
        {
            string[] s = new string[count];
            for (int i = 0; i < count; i++)
                s[i] = elements[i].ToString();
            return(String.Join("\n", s));
        }

        IEnumerator IEnumerable.GetEnumerator() { return((IEnumerator) GetEnumerator()); }

        public IntegerListEnumerator GetEnumerator() { return(new IntegerListEnumerator(this)); }
    }

    public class IntegerListEnumerator: IEnumerator
    {
        IntegerListExplicit list;
        int index = -1;

        public IntegerListEnumerator(IntegerListExplicit list) { this.list = list; }

        public bool MoveNext()
        {
            index++;
            if (index == list.Count)
                return false;
            return true;
        }

        object IEnumerator.Current { get { return Current; } }

        public int Current { get { return list[index]; } }

        public void Reset() { index = -1; }
    }


  • A lock statement with timeout...


    Ian Griffiths comes up with an interesting way to use IDisposable and the “using” statement to get a version of lock with timeout.

    I like the approach, but there are two ways to improve it:

    1) Define TimedLock as a struct instead of a class, so that there's no heap allocation involved.

    2) Implement Dispose() with a public implementation rather than a private one. If you do that, the compiler will call Dispose() directly; otherwise, it will box to the IDisposable interface before calling Dispose().
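    To make that concrete, here's a minimal sketch (my own illustration, not Ian's actual code) of what the improved version might look like; the TimedLock name and Lock method are assumptions:

```csharp
using System;
using System.Threading;

// A sketch only: a timed lock as a struct (no heap allocation) with a
// public Dispose(), so the using statement calls Dispose() directly
// instead of boxing to IDisposable first.
struct TimedLock : IDisposable
{
    object target;

    public static TimedLock Lock(object o, TimeSpan timeout)
    {
        if (!Monitor.TryEnter(o, timeout))
            throw new ApplicationException("Failed to acquire lock");

        TimedLock timedLock;
        timedLock.target = o;
        return timedLock;
    }

    public void Dispose()   // public, not an explicit interface implementation
    {
        Monitor.Exit(target);
    }
}
```

    You'd use it as `using (TimedLock.Lock(someObject, TimeSpan.FromSeconds(10))) { ... }`.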


  • C# Featurette #1 - Reference and Value type Constraints


    We've spent a lot of time talking about the major features of the C# language, but there are a number of minor features that we've added that we haven't talked about.

    As we march towards our Beta release (and we make bits available), I'm going to start talking about some of these “little features” (or featurettes). Keep in mind that they are little features (not to be confused with “Little Creatures”, the 1985 Talking Heads album (what a quaint term, that. “Album”. Back in those days, when you bought a piece of music, you got honkin' big piece of vinyl, and you had to turn it over in the middle. A far cry from the ethereal download of today)).

    Our first featurette - reference and value type constraints on generic types.

    I've gotten asked about this capability several times. Basically, there are certain situations where you want to have a generic type where the type argument can only be a reference type or a value type.

    We had discussed this early in the design process, but it wasn't something that the CLR team had time for in their schedule, so we weren't planning on it for this version. But CLR team managed to fit it in, so we decided to add it to the language. But we had to figure out the right way to express it.

    We went through a few ideas. One was to have constraints named “reference-type” and “value-type”, but that seemed very verbose, and not really in the spirit of C# naming. We went through a few sillier options (which have thankfully slipped my mind), and finally settled on our original choice: “class” and “struct”.

    Those names aren't perfect, because what they really mean is “reference type” and “value type”. For example, the class constraint means you can use any reference type - class, interface, or delegate. The struct constraint limits you to structs or enums. So, it's not perfect, but at least it gives the right flavor. Language design is rarely perfect.

    Anyway, the syntax is exactly what you'd expect:

    List<T> where T: class


    Doug asks what this enables. The list that comes to mind is some operations dealing with null, and the use of the as operator.
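    For instance (a sketch; Wrapper is a made-up type for illustration), inside a type with the class constraint you can assign null to a T and use the as operator:

```csharp
using System;

// With "where T : class", the compiler knows T is a reference type,
// so null assignment and "as T" both compile.
class Wrapper<T> where T : class
{
    T item;

    public T Item { get { return item; } }

    public void Clear()
    {
        item = null;            // only legal because T is a reference type
    }

    public T TryConvert(object o)
    {
        return o as T;          // "as T" also requires the class constraint
    }
}
```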

    Thomas and Dave ask: What about “object“? Why not “ref“ and “value“?

    Object wouldn't get you what you want, since everything is derived from object (yes, value types are implemented as being derived from System.ValueType, but to the language they're just derived from object).

    We discussed “ref” and “value”. We didn't like the fact that ref was a shorthand (ie it wasn't “reference”), and neither of those choices had a connotation of “type-ness” to them, while both class and struct carry a strong connotation of “type-ness”.



  • Adding Emptiness to the DateTime class


    I got an interesting email from a customer today, asking for my opinion on how to deal with the concept of “Empty” in relation to DateTime values. They had decided to use the DateTime.MinValue value as an indication that the DateTime was empty.

    The two options they were considering were:

    1) Call static functions to determine whether a DateTime is empty
    2) Just compare the DateTime value to DateTime.MinValue to see if it is empty.

    After a bit of reflection, I decided that I didn't like either approach. The problem is that they're trying to add a new concept of “emptiness” (some might equate this to “null”) to a type without changing the type.

    A better approach is to define a new type that supports the concept of emptiness. In this case, we'll create a struct that encapsulates the DateTime value and lets us deal with empty in a more robust manner.

    Here's the struct I wrote for them (and, no, this is not an indication that I'm now writing classes when people ask me to). 

    public struct EmptyDateTime
    {
        DateTime dateTime;

        public EmptyDateTime(DateTime dateTime) { this.dateTime = dateTime; }

        public bool IsEmpty { get { return dateTime == DateTime.MinValue; } }

        public static explicit operator DateTime(EmptyDateTime emptyDateTime)
        {
            if (emptyDateTime.IsEmpty)
                throw new InvalidOperationException("DateTime is Empty");
            return emptyDateTime.dateTime;
        }

        public static implicit operator EmptyDateTime(DateTime dateTime)
        {
            return new EmptyDateTime(dateTime);
        }

        public static EmptyDateTime Empty
        {
            get { return new EmptyDateTime(DateTime.MinValue); }
        }
    }
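    Here's how the struct might be used: the implicit conversion lets a plain DateTime flow in, and the explicit conversion out throws if the value is empty. (The struct is repeated in the sample so that it compiles on its own.)

```csharp
using System;

// The EmptyDateTime struct from the post, repeated so this sample
// compiles on its own.
public struct EmptyDateTime
{
    DateTime dateTime;
    public EmptyDateTime(DateTime dateTime) { this.dateTime = dateTime; }
    public bool IsEmpty { get { return dateTime == DateTime.MinValue; } }
    public static explicit operator DateTime(EmptyDateTime emptyDateTime)
    {
        if (emptyDateTime.IsEmpty)
            throw new InvalidOperationException("DateTime is Empty");
        return emptyDateTime.dateTime;
    }
    public static implicit operator EmptyDateTime(DateTime dateTime) { return new EmptyDateTime(dateTime); }
    public static EmptyDateTime Empty { get { return new EmptyDateTime(DateTime.MinValue); } }
}

class EmptyDateTimeExample
{
    static void Main()
    {
        EmptyDateTime when = EmptyDateTime.Empty;
        Console.WriteLine(when.IsEmpty);      // True

        when = DateTime.Now;                  // implicit conversion in
        DateTime d = (DateTime)when;          // explicit conversion out
        Console.WriteLine(when.IsEmpty);      // False
    }
}
```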
  • Speed of direct calls vs interfaces vs delegates vs virtuals


    I've gotten a couple of follow-up questions on my column on dynamic dispatch asking why there are differences between direct calls, interface calls, virtual calls,  and delegate calls.

    I'm not Jan or Rico, who know a lot more about these topics than I do (hint hint - ask them through their blog pages if I don't answer your question) but I can give you the big picture.

    Consider the following code:

    interface IProcessor
    {
        void Process();
    }

    class Processor : IProcessor
    {
        public void Process()
        {
        }
    }

    If I write something like:

    Processor p = new Processor();
    p.Process();

    The compiler will emit code that tightly binds to Processor.Process(). In other words, the only function that could be called is that function.

    That means that the JIT can easily inline the function call, eliminating the function call overhead totally. A discussion of when the JIT will and won't inline is beyond the scope of this post, but suffice it to say that functions below a given size will be inlined, subject to some constraints.

    A brief aside: Even though C# is doing a direct call, you'll find that it's using the callvirt (ie virtual call) to do it. It does this because callvirt has a built-in null check, which means you get an exception on the invocation, rather than on the dereference inside the function.

    Anyway, the direct call can easily be inlined, which is very good from a speed perspective.

    But what if I have code like this:

    class D
    {
        public void Dispatch(IProcessor processor)
        {
            processor.Process();
        }
    }

    Processor p = new Processor();
    D d = new D();
    d.Dispatch(p);

    In the calling code, we know that the function could only be Processor.Process(), but in the Dispatch() function, all the compiler knows is that it has an IProcessor reference, which could be pointing to an instance of any type that implements IProcessor.

    There is therefore no way for the JIT to inline this call - the fact that there is a level of indirection in interfaces prevents it. That's the source of the slowdown.

    Virtual functions present a similar case. Like interfaces, there's a level of indirection, and the compiler can't know what type it will really get (ok, perhaps it can, but I'll cover that later).

    Delegates also have a level of indirection. In the first release of .NET, our implementation of delegates wasn't as optimal as it could have been, and had additional overhead beyond the non-inlineable overhead. In Whidbey, that overhead is gone, and my tests (don't trust my tests) show that delegate calls are about as fast as interface calls, which is pretty much what one would expect.

    My guess is that it was schedule pressures in V1 that kept us from providing the optimized version, but it's also possible that we didn't think deeply enough about the problem initially.

    So, back to virtual functions.

    You'd like to be able to inline virtuals, but it's a difficult problem. You could conceivably do a whole-program static analysis and know that a given call didn't have to be virtual, and therefore be able to inline it.

    That is, assuming you knew that the set of types was static, which isn't the case in environments where you can dynamically load code at runtime.

    A JIT by a celestially-named company has an interesting technique to get around the problem of there being indirection in virtuals. It inlines virtual functions that don't require virtual dispatch, and then tracks whether it needs to change that decision later on (using the aptly named “dynamic deoptimization”).

    Inlining virtuals is more important in their environment because all the functions are virtual by default, which means you have a ton of virtual functions that don't need to be. That's less of an issue in .NET because virtual happens less often.

    I think that about covers it, and I got through the whole post without mentioning Java once (Oh, what a giveaway!)

    [Update: Shane comments that you should be able to inline the interface call because you know what type it is called on. In a case like this, that would be possible, but in the general case it would be somewhat difficult. You could have a base pointer to a derived class (for example), which would mean you didn't know the real type, or you could have dynamically loaded code. Even if that weren't the case, the JIT would have to trace from the point where it knew the type down through multiple levels of calls to do the analysis, and there are certainly cases where the JIT couldn't know the true type.]


  • Arrays with non-zero upper and lower bounds...

    In the comments to my post on zero and one based arrays, several people mentioned that they wanted to be able to have collections that ran from 4 to 10, or from 1900 to 2004 for years. Consider the following:
    public class YearClass
    {
        const int StartDate = 1900;
        const int EndDate = 2050;

        int[] arr = new int[EndDate - StartDate + 1];

        public int this[int num]
        {
            get { return arr[num - StartDate]; }
            set { arr[num - StartDate] = value; }
        }
    }

    public class Test
    {
        public static void Main()
        {
            YearClass yc = new YearClass();
            yc[1950] = 5;
        }
    }
    I think that gives you the user model that you want.
  • Timing your C# code


    I've gotten a couple of emails on my recent column telling me that they couldn't replicate my timings.

    My first reaction was a sinking feeling in my stomach that I'd messed up the timings, but then a more rational idea occurred to me.

    They were running from inside the IDE.

    You see, whenever you do an F5 from inside the IDE, the IDE figures that you want to be able to debug your code, and therefore the JIT is put into debug mode, and you don't get the fully optimized path.

    This happens even if you are building in the release configuration.

    IIRC, if you do a CTRL-F5, you don't get this behavior, but it's generally a better idea to do any timings outside the IDE, as the IDE may have other impact on your timings.
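    For reference, here's the sort of timing harness I mean, as a sketch (this uses the Stopwatch class that Whidbey adds; on V1.x you'd use something like Environment.TickCount instead). Build it in the release configuration and run it from a command prompt, not via F5:

```csharp
using System;
using System.Diagnostics;

class TimingExample
{
    static void Main()
    {
        Stopwatch timer = Stopwatch.StartNew();

        // The code being timed - a trivial loop here.
        long sum = 0;
        for (int i = 0; i < 1000000; i++)
            sum += i;

        timer.Stop();
        Console.WriteLine("Elapsed: {0} ms (sum = {1})",
                          timer.ElapsedMilliseconds, sum);
    }
}
```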

  • What's wrong with this code redux...


    Thanks to all for their comments on “what's wrong with this code”.

    I will confess to making a tactical error in presenting the code. I started out showing only a single error, and then I went back and added another one.

    Ones that people commented on:

    Not checking for InnerException to be null

    I didn't intend this one, so +1 for my readers

    Datastore not getting tested in the use

    I hadn't intended this to be a full, usable class, so there's other code not shown that makes this a non-error.

    Error in constructor

    This was the error that I added, which just confused the issue. Whidbey may catch this one - I'm not sure.

    Not rethrowing in the catch

    This was the error I was intending to highlight. The code I wrote swallows all errors that weren't of the proper type.

    There are really two issues with this code. The first is the more obvious one - the fact that I'm dealing with one type of exception, and not rethrowing all the other types.

    The more subtle issue is that the api that I'm calling is flawed. APIs should never force their users to have to depend on the inner exception for everything. If you find your users writing code like that, you haven't given them enough richness in the set of exceptions that you throw.
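    To illustrate with a sketch (all of these names are hypothetical, chosen to match the original sample): rather than handing callers one generic wrapper exception, throw a specific type they can catch directly.

```csharp
using System;

// Hypothetical types for illustration only.
class ProcessFailedException : Exception
{
    public ProcessFailedException(string message, Exception inner)
        : base(message, inner) { }
}

class DiscountProcessor
{
    public void Process(int discount)
    {
        try
        {
            if (discount < 0)
                throw new ArgumentOutOfRangeException("discount");
            // ... real processing would go here ...
        }
        catch (ArgumentOutOfRangeException e)
        {
            // Wrap in a specific, catchable type. Callers write
            // "catch (ProcessFailedException)" instead of catching
            // Exception and poking at InnerException.
            throw new ProcessFailedException("Couldn't apply discount", e);
        }
    }
}
```

    The inner exception is still there for diagnostics, but the caller never has to depend on it to decide what to do.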


  • Who does Microsoft talk to when they have questions?


    In the comments to my “What's coming up in C# beyond Whidbey”, RichB makes the following comment:

    I sometimes wonder who Microsoft ask for these opinions. I suspect it's internal Microsoft developers and Wintellect/DevelopMentor trainers.

    With all due respect, these type of people are not your average C# programmer. In the 3.5 years I've been coding in C#, I've yet to work with anyone who wasn't bothered about E&C.

    This is an interesting topic, and Rich brings up another topic that I'll touch on a bit later.

    There are three sources that we use for this sort of information. The first is a small group of users that we meet with on a regular basis. This group is a rough cross-section of our target users, and there are no internal Microsoft developers on it. IIRC, we have one person who is a trainer.

    The second source is by talking with customers through email, on newsgroups, at user groups, and at conferences.

    A third source of data is the information that we get by watching C# users program in our usability lab. We can examine, for example, whether a specific C# user programs in a way that E&C would be beneficial.

    The feedback we got around E&C has been fairly polarized. There is one group who feels the way Rich does, and really wants E&C. There is another group that actively doesn't want E&C, as they feel that it encourages the wrong kind of programming. And then there's a group in the middle who typically see the value of E&C but don't think it's critical.

    Rich also brings up another interesting point. Given two features of equal utility, we will try to favor the one that can only be done by Microsoft, and not by a third party.

    In the case of refactoring, however, there are two considerations:

    1. Because the C# IDE has very good information about your code, it can do a better job at refactoring, and provide features such as cross-project rename.
    2. Refactoring can provide a large increase in productivity. While it's true that users can buy a third-party add-in, many customers have told us that they expect Microsoft to provide features such as refactoring, and not force them to buy a third-party tool.

    I hope that makes it a bit clearer. We are hoping to do some things to expand the group from which we gather such feedback.

  • zero or one based collection?


    moo asks:

    Zero based collections or 1 based?

    Since programming languages are a bridge between the human concept of a solution and we naturally think the first element is in position 1, why was this not so on the actual language? Why are we made to think like a machine when infact we are not? The infamous "Off by one bug" is there because of the inherant design. So popular its even got a name yet we do nothing to prevent this happening at the language level, as for those who say its easier for the computer, thats why we have smart compilers. Im not a compiler dammit.


    One of my hobbies is rewriting lines in TV and films, and I am therefore impelled to comment that moo's last line should be:

    I'm a programmer, not a compiler, Jim

    To cover this subject, I'm going to have to set the way-back-machine for the 1980s or earlier, back when real men wrote in assembly and our bytes only had 7 bits. (aside - How many of you - and be honest here - have ever heard of machines where the word size wasn't a multiple of 8? They really did exist).

    Anyway, in those ancient times, processors were fairly glacial in their arithmetic speed, though much faster than the early calculators. Saving cycles was very important, so when arrays were first considered, the implementers looked at the code they wrote. A one-based element access such as

    a[x]

    translates to

    address = base_address + (x - 1) * sizeof(x)

    They actually didn't write it that way because they didn't have multiplication in those days, but that gives you the idea. Then somebody noticed that if your array starts with zero, you could write it as:

    address = base_address + x * sizeof(x)

    Thereby saving you a single decrement operation, which was important in those days.

    Therefore early programmers got used to zero-based arrays, and the path was set, and it has stayed that way for many years for the majority of languages.

    But why? Isn't it simple enough to change?

    It's obviously trivially easy to change, and Moore's law has made the efficiency inconsequential in the majority of scenarios. The issue isn't around technological limitations, but rather human ones.

    Understanding how zero-based indexing works is the secret handshake of the programming world. We all started not knowing the secret handshake, but over time we learned and even began to like the secret handshake, and now we don't know any other way to shake hands.

    We're not going to try to change our brain wiring just because some young whippersnapper is having trouble remembering that the first index is zero.

    Or, to put it another way, developers have a huge investment in hardwired things like this, and changing them will not make your customer happy.

    [Update: Jack wrote:

    Why cant we either have 1 based or user definable array bounds?

    The CLR does support this kind of construct (I had a hard time not using the term “travesty“ here...), but there aren't, to my knowledge, any languages that have built-in syntax to do that.

    Which is a very good thing. If you go down that route, rather than having to remember a single rule (C# arrays are zero-based), you have to remember that every array could have arbitrary bounds, so all your loops become:

    for (int index = arr.GetLowerBound(0); index <= arr.GetUpperBound(0); index++)

    If you get this wrong, you get code that works fine for your test cases, but breaks for the people who like 3-based arrays.

    Yuck. Many times, “make it an option“ is the worst choice.
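    For the curious, the CLR construct I mentioned is reachable from C# through the Array class, even though there's no language syntax for it. Note that the result isn't an int[] as far as the language is concerned, so you're stuck calling GetValue() and SetValue():

```csharp
using System;

class NonZeroBoundExample
{
    static void Main()
    {
        // A one-dimensional array of 151 ints indexed from 1900 to 2050.
        Array years = Array.CreateInstance(
            typeof(int),
            new int[] { 151 },     // lengths
            new int[] { 1900 });   // lower bounds

        years.SetValue(5, 1950);

        Console.WriteLine(years.GetValue(1950));      // 5
        Console.WriteLine(years.GetLowerBound(0));    // 1900
        Console.WriteLine(years.GetUpperBound(0));    // 2050
    }
}
```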



    Ferris adds:

    Why bother creating a new language if you are still hugging your legacy pillow at night. Waste of time.

    One of our main goals was to create a language that was comfortable to C/C++ programmers. We could have taken a different tack, and designed a new language from scratch, and perhaps done some new and exciting things.

    But if you look at all the languages out there, you'll find that having a comfortable syntax is well correlated with language success. Even if you've never written C# code before, if you have experience with a C-style language, you'll be able to read C# code.


    You can find another discussion of this issue here.

  • What's wrong with this code?


    TechEd is rapidly approaching, and I'm signed up to do a “C# Best Practices” kind of talk.

    Rather than bore my audience by presenting my views on implementing IDisposable, I'm going to take the “What's wrong with this code?” approach. My goal is to present code examples that show code that's doing something wrong - be it something prohibited by the language, something that's ill-advised, or something with bad performance - and then let the attendees try to figure out what's wrong with the code.

    I have a list of 10 or 15 items already, but I'd like to leverage your experience in this area. If you have a “poor practice”, please post the code, and then leave some blank space before you explain why it's bad, so that others can try to figure them out on their own. I'm especially interested in code that you (or somebody else) wrote where you didn't understand initially what the problem was. In other words, the subtle ones.

    Here's one of my favorites. What's wrong with this code?

    public class Processor
    {
        DataStore dataStore;

        public Processor(DataStore dataStore)
        {
            dataStore = dataStore;
        }

        public void Process(DiscountStructure discStruct)
        {
            try
            {
                // ... processing code elided ...
            }
            catch (Exception e)
            {
                if (e.InnerException.GetType() == typeof(ProcessFailedException))
                    throw new InvalidActionException(e.InnerException);
            }
        }
    }

  • Performance Quiz

    Rico presents a very interesting performance quiz on writing string values.
  • C# and Unit Testing chat on Thursday


    Jim Newkirk, one of the authors of NUnit and lately of the Microsoft PAG group, will be joining the C# team in a chat on C# and Unit Testing this Thursday.

    Jim has a book entitled Test-Driven Development in Microsoft .NET on the way. He's also a co-author of Extreme Programming in Practice.

  • Anson talks about implementing interfaces in the IDE in Whidbey


    Anson talks about implementing interfaces in the IDE in Whidbey.

    Anson is the PM who used to own the C# compiler, and now owns the C# IDE (primarily the things the C# team builds - Intellisense, expansions, etc. - but he watches over the rest of the IDE as well).


  • Commenters: Please ask off-topic questions on my "Suggest a topic" link


    I really appreciate the comments that people leave on my posts, and the comments are often more valuable than the posts themselves.

    However, every day or so, I'll get an off-topic question. For example, in my post talking about what's coming up in Whidbey, I'm mostly talking about when I want to talk about things.

    In the comments, Nicholas asks a perfectly reasonable question: “Why doesn't C# have const reference parameters?”

    But the question isn't something I'd answer in the post, so I'd need to spend some time writing another post on it. I don't have time for that right now, so I'll need to put it off, but I'll probably forget the question by then. When I come back to answer a question, I'll look at my “Suggest a topic” comments first.

    So, if you'd like to increase the chance that your question will get answered, go to my blog's home page, and choose “Suggest a topic” at the bottom of the left column. I'm trying to work through those in order, and when I write the topic, it gets deleted from the list of comments.

  • What's coming up for C# beyond Whidbey?


    Diego asks,

    Now that C# 2.0 is almost here, I'd like to know about features that were left out from this release and planned for the future?

    I can't think of anything that I'd say was “left out”. There are a few things that we've been talking about, but unfortunately, I don't think I can talk about them now.

    The basic problem is one of expectation. When we first start talking about something, we're always in the exploratory phase. That's a long way from the “already implemented” phase, which is when we usually start talking about features.

    [Update: Marshall asks whether E&C was left out, or just postponed. Refactoring rated higher than E&C, but we do understand the utility that E&C brings, and we also have gotten a lot of customer feedback in that area. Oh, and to the best of my knowledge, the VB version of E&C does not support web apps. ]

    [Update: Mike asks about delays in Whidbey. While I generally know when we have slipped our schedules, I don't track when (or whether) we've communicated those slips, so I'm going to avoid those sorts of topics. ]

  • Why do delegate arguments have to match exactly?


    Michael asks,

    Why do delegate arguments have to match exactly, when creating a delegate using Delegate.CreateDelegate?


    delegate void SpecialEventHandler(object sender, SpecialEventArgs e);

    Handler function:

    void SpecialMethod(object sender, EventArgs e)

    Now the following call:

    Delegate.CreateDelegate(typeof(SpecialEventHandler), o, SpecialMethod)

    fails, even though EventArgs is a direct parent of SpecialEventArgs. Shouldn't it be possible to use (safe downcast?) this method as a delegate target?

    This is an interesting question. It should be possible to support this, but we don't support it currently. IIRC, there's a runtime rule that says that a delegate must match exactly. The fancy name for this kind of support is delegate contravariance.

    This could be something that we support in the future.
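    In the meantime, the usual workaround is a wrapper with the exact signature that forwards to the more general method. Here's a sketch using the types from Michael's question (I'm assuming SpecialEventArgs derives from EventArgs; the anonymous-method syntax is the upcoming Whidbey form):

```csharp
using System;

public class SpecialEventArgs : EventArgs { }

public delegate void SpecialEventHandler(object sender, SpecialEventArgs e);

public class Listener
{
    public int CallCount = 0;

    // The general handler from the question: it takes EventArgs,
    // not SpecialEventArgs.
    public void SpecialMethod(object sender, EventArgs e)
    {
        CallCount++;
    }
}

class Program
{
    static void Main()
    {
        Listener o = new Listener();

        // A wrapper whose signature matches the delegate exactly,
        // forwarding to the more general method.
        SpecialEventHandler h =
            delegate(object sender, SpecialEventArgs e) { o.SpecialMethod(sender, e); };

        h(null, new SpecialEventArgs());
        Console.WriteLine(o.CallCount);   // 1
    }
}
```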

  • The virtues of "One Note"


    Derek said that I should write about the virtues of OneNote, which is, in his words, “one sweet app...”

    I've been using it off and on, and while I think it's a nice tool, it has a few drawbacks for the kind of notes I take. For the C# language design notes, for example, we've been using Word for a long time, and it lets me do everything I want - keep the notes numbered and outlined, do tables, and have a big document with all the notes. OneNote doesn't appear to support the more formal sort of note taking that I do, though it is nice for other kinds of note taking.

    So, I guess I'll have to forgo my endorsement, at least for now.

  • Fewer blog posts here, more on the C# FAQ


    I've decided that rather than put FAQ answers here, I'm going to put them on the C# FAQ instead. That will allow us to keep all the good answers together in one place.

    I haven't been linking to them from here, but I could if people would find that useful. If you would, indicate so in the comments.

  • Bruce Eckel - Generics Aren't

    Bruce has an interesting discussion entitled “Generics Aren't”. It's primarily about the new support for generics in Java, but it has a lot of “generic generic” material as well.
  • The VS7 debugger doesn't work. What can I do?


    Min provides a new link to his excellent document on debugging the debugger.

  • Why doesn't C# support default parameters?

    A post in the new C# FAQ Blog. The items will be showing up on the C# dev center in the near future.