• Kirill Osenkov

    How to debug crashes and hangs


    At my job on the C# IDE QA team I've learned some useful things about debugging in Visual Studio, which I'd like to summarize in this post. Although the screenshots were made using Visual Studio 2008 SP1, this pretty much applies to other versions of VS as well.

    Rich debugging support

    When you develop your C# application and hit F5, the target process (your program) gets started, and then the Visual Studio process attaches the debugger to the process where your code is running. This way, you can break into the debugger and VS will provide you with all sorts of rich debugging support - current statement highlighting, call stack, watches, locals, immediate window, Edit-and-Continue and so on.

    More importantly, if your application throws an exception or crashes, the debugger will intercept that and provide you with all the information about the exception.

    As a side note, in Visual Studio there is a way to run your code without attaching the debugger - the shortcut is Ctrl+F5. Try throwing an exception in your code when using F5 and Ctrl+F5 to feel the difference.

    throw null;

    By the way, my favorite way to artificially throw exceptions is throw null; I just love the fact that it throws a NullReferenceException because it can't find the exception object and nevertheless does exactly what I want it to do :)
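    If you're curious, here's a tiny sketch that confirms the behavior (a hypothetical demo, not from the original code):

```csharp
using System;

static class ThrowNullDemo
{
    // Returns the exception that "throw null;" produces at runtime.
    public static Exception Capture()
    {
        try
        {
            // The "exception object" being thrown is null, so the CLR
            // raises a NullReferenceException when it tries to use it.
            throw null;
        }
        catch (Exception e)
        {
            return e;
        }
    }

    static void Main()
    {
        Console.WriteLine(Capture().GetType().Name); // prints "NullReferenceException"
    }
}
```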

    Crashes and the Watson dialog

    What if a program crashes or throws an exception that you don't have source code for? Moreover, you didn't start the program using F5; the operating system launched the process. I remember that before coming to Microsoft, the only thing I could do about an application crashing was to express my disappointment about the fact (usually in Russian). Now I don't feel helpless anymore, because I've learned a couple of tricks. As an example, we'll crash Visual Studio itself and then debug the crash.

    How to crash Visual Studio?

    Viacheslav Ivanov reported an interesting crashing bug in our language service recently. Save all your work and then paste this code in a C# Console Application and change 'object' to 'int':

    using System;

    static class Program
    {
        static void Main()
        {
            ITest<object> test;
            test.Test((object /* and now change the argument type to "int" */ i) => { });
        }
    }

    public interface ITest<T> { }

    public static class Extensions
    {
        public static void Test<T, I>(this ITest<T> test, Action<ITest<I>> action) { }
    }

    What you will see is the Watson dialog:


    Given this chance, I'd highly recommend that everyone click "Send Error Report" if you ever see this dialog for Visual Studio. Many people click "Don't Send", and frankly I don't understand why. There is no personal information being sent, and we don't want your personal information anyway, honest. What we want is a call stack and a minidump, if possible; so if you want us to make the product more stable in the future, you will greatly help us by sending the Error Report. By the way, we usually fix most (if not all) crashes that come in through this dialog, so the chances that we'll fix the crash you report this way are actually pretty high. For example, we've already fixed the bug mentioned above, and it works just fine in current builds.

    Attaching a debugger

    So what can you do if an application crashes or hangs? You can attach the debugger to a running process, even if it has already crashed. The code is still being executed (the main thread of the crashed application is usually pumping messages for the error dialog). You can either choose "Debug" on the Watson dialog, or, what I usually do, start a new instance of Visual Studio and attach to the process manually, without dismissing the Watson dialog.

    Note: if the debuggee process crashes and you attach the debugger after the fact, you'll have to manually break into the debugger by pushing the "Pause" button. If the debugger was already attached at the moment of the crash, it will offer you the choice to break or continue.

    You can attach the Visual Studio debugger to a running process by choosing Tools | Attach To Process (Ctrl+Alt+P):


    You will see the Attach to process dialog:

    The interesting thing to note here is the "Attach to:" selection. Since Visual Studio is a mixed-mode managed/native application, to get call stacks for both the managed and native parts, you'd want to attach to both of them (by clicking Select...):


    If you select both Managed and Native, you'll get richer debugging information about your callstacks - this is recommended.

    Note: if you want to enable mixed-mode debugging (managed+native) for an application that you do have source code for, go to the project properties of the startup project, and on the Debug tab select "Enable unmanaged code debugging". The debugger will then automatically attach in mixed mode.

    Finally, select the process you'd like to attach to from the list (in our example, it will be devenv.exe) and click Attach. Notice that the debugger's own process is not shown in the list - that's why you don't see two devenv.exe entries.

    Remote Debugging

    What many people don't know is that VS can be used to debug a process running on another machine on the network. To do this, you just need to start the Visual Studio Remote Debugging Monitor on the same machine with the process you want to debug:


    The remote debugging monitor will listen for debugger connections on the other machine, and you'll be able to attach the debugger using the Transport and Qualifier fields on the Attach to Process dialog:


    You can find more detailed information about Remote Debugging in MSDN and on the internet.

    Set Enable Just My Code to false

    One very important option in Visual Studio is "Enable Just My Code", which is set to true by default. To be able to see more information on the callstacks instead of just "Non-user code", you need to go to Tools | Options and disable this option:


    I usually do this right after I install Visual Studio, so that I can always debug into "not my code".

    Other interesting options on this page are:

    • Enable .NET Framework source stepping - in case you'd like to step into the .NET framework source code
    • Enable Source Server support
    • Step over properties and operators (Managed only) - won't step in to property getters, operators, etc. - this is new in VS 2008 SP1
    • Require source files to exactly match the original version - in case you don't have the source files for the exact .pdb symbol file, but still have a close version of the source code

    Break on first chance exceptions

    One very useful option in the debugger is the ability to break whenever a first-chance exception is thrown. A first-chance exception is an exception that might be caught by the program itself later in some surrounding catch block. First-chance exceptions are usually non-fatal and are handled (or swallowed) by user code, so they might not even be visible to the end user during normal execution. However, if a first-chance exception is not handled in the code and bubbles up to the CLR/OS, it becomes a crash.
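    To make the distinction concrete, here's a small hypothetical example: the FormatException below is a first-chance exception that never reaches the end user, but a debugger set to break on first-chance exceptions would stop right at the throwing line.

```csharp
using System;

static class FirstChanceDemo
{
    // Parses text, falling back to 0 on bad input.
    public static int ParseOrDefault(string text)
    {
        try
        {
            return int.Parse(text); // throws FormatException on bad input
        }
        catch (FormatException)
        {
            // The exception is handled (swallowed) here, so during normal
            // execution it is invisible. A debugger breaking on first-chance
            // exceptions would still stop at the int.Parse call above,
            // with the original context and problem frame still on the stack.
            return 0;
        }
    }

    static void Main()
    {
        Console.WriteLine(ParseOrDefault("not a number")); // prints "0"
        Console.WriteLine(ParseOrDefault("42"));           // prints "42"
    }
}
```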

    So, to break on first-chance exceptions, you can go to Debug | Exceptions to invoke the Exceptions dialog:


    Here you can put a checkmark on Common Language Runtime Exceptions for the debugger to break every time a managed exception is thrown. This way you will see more hidden exceptions, some of them originating deep in the .NET framework class library. Sometimes there are so many first-chance exceptions that you can become overwhelmed, but they are incredibly useful for getting to the root cause of a problem, because the debugger shows exactly where the exception originated, preserving the original surrounding context. Another great advantage of breaking on first-chance exceptions is that the call stack is not unwound yet and the problem frame is still on the stack.

    Debugging tool windows

    OK, so now that we know how to attach a debugger and how to set the debugger options, let's see what happens after we've attached a debugger. For our exercise, you can open two instances of Visual Studio, attach the second instance's debugger to the first one, and then crash the first one using the lambda-expression crash code above. Instead of the Watson dialog on the debuggee process, you will see the following window in the debugger:


    Now you have the chance to break and see where the exception happened. Continue is useful if the exception is actually a first-chance exception and you'd like to pass it on to user code to handle it. Let's hit Break and examine what tool windows are available under the debugger.

    Processes window

    All the debugger windows are available from menu Debug | Windows. The Processes window shows the list of processes that the debugger is currently attached to. A nice trick is that you can actually attach to multiple processes at the same time. Then you can use this window to switch the "current" debuggee process and all other tool windows will update to show the content of the "current" process.


    Note: on our team, we use this window a lot, because our tests run out-of-process. Our test process starts Visual Studio in a separate process and automates it using DTE, remoting and other technologies. Once there is a failure in a test, it's useful to attach to both the test process and the Visual Studio process under test. The most fun part was when I was debugging a crash in the debugger, and attached the debugger to the debugger process that was attached to some other process. Now if the debugger's debugger crashes on you on the same bug, then you're in trouble ;) I should definitely blog more about the fun debugging stories from my day-to-day job. But I digress.


    As we all know, processes have multiple threads. If you break into a debuggee process, you will most likely end up with a list of threads that were active at the moment you broke in. The main thread is the green one - this is the UI thread of your application (in our example, Visual Studio). The main thread executes the application message loop and pumps the windows messages for the application's UI. If a message box or a dialog is shown, you will see that dialog's message loop on the main thread.


    To switch threads, double-click the thread you're interested in. In most cases you'll be interested in the main thread. But if you start your own threads, give them names so that they are easy to find in the threads list.
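    Naming a thread is a one-liner (a hypothetical sketch; the name is what shows up in the Threads window):

```csharp
using System;
using System.Threading;

static class NamedThreadDemo
{
    static void Main()
    {
        // A named thread is much easier to spot in the Threads window
        // than a bare thread id.
        Thread worker = new Thread(DoWork)
        {
            Name = "Settings loader",
            IsBackground = true
        };
        worker.Start();
        worker.Join();
        Console.WriteLine(worker.Name); // prints "Settings loader"
    }

    static void DoWork()
    {
        // Note: Thread.Name can only be assigned once; you could also set
        // Thread.CurrentThread.Name from inside the thread, as long as
        // no name has been assigned yet.
    }
}
```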

    Call stack

    Every thread has a call stack. The call stack is probably the most important thing you'd like to know about a crash - what sequence of function calls led to the crash? If the program is hanging, you'd like to know which function it is hanging in and how it got there. Oftentimes, just glancing at a call stack tells you immediately what's going on. "Is this call stack familiar?" or "Who is on the call stack?" is probably the question we ask most often during the bug triage process.

    Anyway, here's the call stack window:


    In our example we see that the C# language service module is on the stack (.dlls and .exes are called "modules" in the debugging world).

    However, instead of function names from cslangsvc.dll, we see the addresses of the procedures in memory. This is because the symbols for the cslangsvc.dll module are not loaded. We'll look at how to load symbols in a moment.


    The Modules window shows a list of .dlls and .exes loaded into the debuggee process:


    There are multiple ways to load symbols for a given module. You can right-click on a module for a list of options:


    Symbols for modules are stored in .pdb files and are produced with every debug build of a binary. A .pdb file contains a mapping from the compiled binary back to the original source code, so that the debugger can display rich information (function names, source code locations, etc.) for the binary being debugged. Without symbols, debugging is only possible at the assembly level, using the disassembly and registers windows - you can't map the binary back to the source code.


    A very useful dialog is the Tools | Options | Debugging | Symbols:


    Here you can set the paths where the debugger looks for .pdb files. Normally, the .pdb files will be directly next to the binaries, in which case they are found and loaded automatically. Loading symbols takes some time, so Visual Studio supports caching symbol files to a directory. Also, if you don't want all the symbols for all the binaries loaded (it can take a while), you can check "Search the above locations only when symbols are loaded manually". The "Load symbols from Microsoft symbol servers" option provides symbols for Microsoft products, such as Windows and Visual Studio. You can load symbols from here, or from the Modules or Call Stack windows, by right-clicking a module and choosing Load Symbols From. Since we're debugging Visual Studio itself, the symbols are located on the Microsoft symbol servers:


    When we load public symbols, the following little dialog is shown:


    After we've loaded the symbols for cslangsvc.dll we notice that the Call Stack window is now much more informative:


    Given this call stack, any of our developers can now easily pinpoint the problem. In our example, we see that the problem happens when we try to show the Smart Tag for the Generate Method refactoring:


    As you see, there are more modules for which the symbols haven't been loaded yet. You can load the symbols for those modules by right-clicking their stack frame and choosing Load Symbols From. Loading all the symbols is usually recommended for more detailed information.

    That is why the call stack is so valuable in bug reports. To save the call stack, press Ctrl+A and Ctrl+C in the Call Stack window to copy the whole stack to the clipboard. When you click Send Error Report in the Watson dialog, the call stack is included in the error report.

    As you can also see, a call stack is not very useful without symbols - it's the combination of symbols and call stack that provides rich information about the crash.


    Another very useful piece of information you can provide about a crashed process is a memory dump (heap dump, minidump) - basically a snapshot of the memory state of the crashed process. To create a minidump, go to Debug | Save Dump As. Visual Studio will prompt you for a location on disk and offer a choice between Minidump and Minidump with Heap:


    Minidump with Heap will save more information than just the minidump, but will take a whole lot more disk space (for Visual Studio - possibly hundreds of megabytes - depending on the process working set size during the crash).

    Those are the three components that we usually send to our developers during crash bug reporting:

    1. Call stack
    2. Symbols
    3. Minidump with heap

    In most cases, they are able to understand the problem given this information. A list of repro steps is usually even better, because they can reproduce the problem themselves and enable first-chance exceptions to break early into the problem area.

    Debugging hangs

    If an application hangs, you can attach the debugger to it and hit break to see what thread and what call stack blocks execution. Usually looking at the call stack can give you some clues as to what is hanging the application. Debugging hangs is no different than debugging crashes.

    Microsoft shares customer pain

    Finally, I'd recommend that everyone watch this video:


    In this post, I've talked about some things worth knowing for effective debugging sessions:

    • Turning off "Enable just my code"
    • Attaching the debugger using Tools | Attach To Process
    • Selecting Managed, Native for mixed-mode debugging or Enable unmanaged code debugging
    • Attaching to multiple processes
    • Selecting processes and threads
    • Breaking on first-chance exceptions using the Debug | Exceptions dialog
    • Picking the right thread
    • Loading symbols
    • Viewing the call stack
    • Saving the minidump file

    Do let me know if you have feedback or corrections about this information.

  • Kirill Osenkov

    New CodePlex project: a simple Undo/Redo framework


    I just created a new CodePlex project: http://undo.codeplex.com


    It's a simple framework to add Undo/Redo functionality to your applications, based on the classical Command design pattern. It supports merging actions, nested transactions, delayed execution (execution on top-level transaction commit) and a possibly non-linear undo history (where you can have a choice of multiple actions to redo).

    The status of the project is Stable (released). I might add more stuff to it later, but right now it fully satisfies my needs. It's implemented in C# 3.0 (Visual Studio 2008) and I can build it for both desktop and Silverlight. The release has both binaries.

    Existing Undo/Redo implementations

    I do realize that my project is the reinvention of the wheel at its purest, existing implementations being most notably:

    However, I already have three projects that essentially share the exact same source code, so I decided it would be good to at least extract this code into a reusable component - perhaps someone besides me will find it useful too.

    It's open-source and on CodePlex, so I also have a chance of benefiting from it if someone contributes to it :)


    It all started in 2003 when I first added Undo/Redo support to the application that I was developing at that time. I followed the classical Command design pattern, together with Composite (for nested transactions) and Strategy (for plugging various, possibly non-linear undo buffers).

    Then I needed Undo/Redo for my thesis, so I just took the source code and improved it a little bit. Then I started the Live Geometry project, took the same code and improved it there a little bit, fixing a couple of bugs. Now the mess is over, and I'm finally putting the code in one place :)

    A good example of where this framework is used is the Live Geometry project (http://livegeometry.codeplex.com). It defines several actions such as AddFigureAction, RemoveFigureAction, MoveAction and SetPropertyAction.


    Every action encapsulates a change to your domain model. The process of preparing the action is explicitly separated from executing it. The execution of an action might come at a much later stage after it's been prepared and scheduled.

    Any action implements IAction and essentially provides two methods: one for actually doing the stuff, and another for undoing it.

    /// <summary>
    /// Encapsulates a user action (actually two actions: Do and Undo)
    /// Can be anything.
    /// You can give your implementation any information it needs to be able to
    /// execute and rollback what it needs.
    /// </summary>
    public interface IAction
    {
        /// <summary>
        /// Apply changes encapsulated by this object.
        /// </summary>
        void Execute();

        /// <summary>
        /// Undo changes made by a previous Execute call.
        /// </summary>
        void UnExecute();

        /// <summary>
        /// For most Actions, CanExecute is true when ExecuteCount = 0 (not yet executed)
        /// and false when ExecuteCount = 1 (already executed once)
        /// </summary>
        /// <returns>true if an encapsulated action can be applied</returns>
        bool CanExecute();

        /// <returns>true if an action was already executed and can be undone</returns>
        bool CanUnExecute();

        /// <summary>
        /// Attempts to take a new incoming action and, instead of recording it
        /// as a new action, just modify the current one so that its summary effect is
        /// a combination of both.
        /// </summary>
        /// <param name="followingAction"></param>
        /// <returns>true if the action agreed to merge, false if we want the followingAction
        /// to be tracked separately</returns>
        bool TryToMerge(IAction followingAction);

        /// <summary>
        /// Defines if the action can be merged with the previous one in the Undo buffer
        /// This is useful for long chains of consecutive operations of the same type,
        /// e.g. dragging something or typing some text
        /// </summary>
        bool AllowToMergeWithPrevious { get; set; }
    }

    Both methods share the same data required by the action implementation and are supplied when you create an action instance.


    Your domain model (business objects) will likely have an instance of ActionManager that keeps track of the undo/redo buffer and provides the RecordAction(IAction) method. This method adds an action to the buffer and executes it. And then you have ActionManager.Undo(), ActionManager.Redo(), CanUndo(), CanRedo() and some more stuff.

    As a rule, the thing that works for me is that I generally have two APIs: one that is public and lazy (i.e. it just creates an action and adds it to the buffer), and the other which is internal and eager, that does the actual work. Action implementation just calls into the eager API, while the public API is lazy and creates actions transparently for the consumer.
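    Here's a sketch of that split, using minimal hypothetical stand-ins (the real IAction and ActionManager in the framework are richer than these):

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins, just to show the shape of the lazy/eager split.
public interface IAction
{
    void Execute();
    void UnExecute();
}

public class ActionManager
{
    readonly Stack<IAction> undoBuffer = new Stack<IAction>();

    // Adds the action to the buffer and executes it.
    public void RecordAction(IAction action)
    {
        undoBuffer.Push(action);
        action.Execute();
    }

    public void Undo()
    {
        if (undoBuffer.Count > 0)
        {
            undoBuffer.Pop().UnExecute();
        }
    }
}

public class Drawing
{
    readonly List<string> figures = new List<string>();
    public readonly ActionManager ActionManager = new ActionManager();

    // Public, lazy API: only creates and records an action.
    public void AddFigure(string figure)
    {
        ActionManager.RecordAction(new AddFigureAction(this, figure));
    }

    // Internal, eager API: does the actual work. Only actions call into it.
    internal void AddFigureCore(string figure) { figures.Add(figure); }
    internal void RemoveFigureCore(string figure) { figures.Remove(figure); }

    public int FigureCount { get { return figures.Count; } }
}

public class AddFigureAction : IAction
{
    readonly Drawing drawing;
    readonly string figure;

    public AddFigureAction(Drawing drawing, string figure)
    {
        this.drawing = drawing;
        this.figure = figure;
    }

    public void Execute() { drawing.AddFigureCore(figure); }
    public void UnExecute() { drawing.RemoveFigureCore(figure); }
}
```

    Client code only ever calls drawing.AddFigure("circle") and ActionManager.Undo(); the eager Core methods stay an implementation detail.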


    Right now I only have a SimpleHistory. Instead of having two stacks, I have a state machine, where Undo goes to the previous state and Redo goes to the next state, if available. Each graph edge stores an action (implementation of IAction). As the current state transitions along the graph edge, IAction.Execute or UnExecute is being called, depending on the direction in which we go (there is a logical "left" and "right" in this graph, which kind of represents "future" and "past").



    It's possible for this linked list to become a tree, where you try something out (way1), don't like it, undo, try something else (way2), like it even less, undo, and choose to go back and redo way1. However this is not implemented yet.


    Transactions are groups of actions viewed as a single action (see Composite design pattern).

    Here's a typical usage of a transaction:

    public void Add(IEnumerable<IFigure> figures)
    {
        using (Transaction.Create(ActionManager))
        {
            // ... record an action for each figure here ...
        }
    }
    If an action is recorded while a transaction is open (inside the using statement), it will be added to the transaction and executed only when the top-level transaction commits. This effectively delays all the lazy public API calls in the using statement until the transaction commits. You can specify that the actions are not delayed, but executed immediately - there is a corresponding overload of Transaction.Create specifically for that purpose.

    Note that you can "misuse" this framework for purposes other than Undo/Redo: one prominent example is navigation with back/forward.

    Update: I just posted some samples for the Undo Framework: http://blogs.msdn.com/kirillosenkov/archive/2009/07/02/samples-for-the-undo-framework.aspx

  • Kirill Osenkov

    Remote Desktop: /span across multiple monitors


    I spent some time searching the web about Remote Desktop, fullscreen and multiple monitors, so I decided to write down my findings to avoid having to search for them again.

    /span for multiple monitors

    If you pass /span to mstsc.exe, the target session's desktop becomes one huge rectangle equal to the combined area of your physical monitors. This way the remote desktop window fills all of your screens. The downside of this approach is that both screens are part of a single desktop on the remote machine, so if you maximize a window there, it will span all of your monitors. Also, a centered dialog will show up right on the border between your monitors. There is software on the web to work around that, but I'm fine with keeping my windows restored and sizing them myself. Also, Tile Vertically works just fine in this case.

    Saving the /span option in the .rdp file

    There is a hidden option that isn’t mentioned in the description of the .rdp format:

    span monitors:i:1

    Just add it at the bottom of the file.

    Saving the /f (fullscreen) option in the .rdp file

    screen mode id:i:2

    (By default it’s screen mode id:i:1, which is windowed).


  • Kirill Osenkov

    How to: override static methods


    I know, I know. You can't override static methods. The title was just a trick to provoke your interest :-) In this post, I'll first try to explain why it is impossible to override static methods and then provide two common ways to do it.

    Or rather - two ways to achieve the same effect.

    So, what's the problem?

    There are situations where you wish you could substitute or extend functionality of existing static members - for example, provide different implementations for it and be able to switch implementations at runtime.

    For example, let's consider a static class Log with two static methods:

    public static class Log
    {
        public static void Message(string message) { ... }
        public static void Error(Exception exception) { ... }
    }

    Let's say your code calls Log.Message and Log.Error all over the place and you would like to have different logging behaviors - logging to console and to the Debug/Trace listeners. Moreover, you would like to switch logging at runtime based on selected options.

    Why can't we override static members?

    Really, why? If you think about it, this is just common sense. Overriding ordinary (instance) members uses the virtual dispatch mechanism to separate the contract from the implementation. The contract is known at compile time (the instance member's signature), but the implementation is only known at runtime (the concrete type of the object provides a concrete implementation). You don't know the concrete type of the implementation at compile time.

    This is an important thing to understand: when types inherit from other types, they fulfil a common contract, whereas static types are not bound by any contract (from the pure OOP point of view). There's no technical way in the language to tie two static types together with an "inheritance" contract. If you could "override" the Log method in two different places, how would we know which one we're calling here: Log.Message("what is the implementation?")

    With static members, you call them by explicitly specifying the type on which they are defined. Which means, you directly call the implementation, which, again, is not bound to any contract.

    By the way, that's why static members can't implement interfaces. And that's why virtual dispatch is useless here - all clients directly call the implementation, without any contract.

    Let's get back to our problem

    I won't even mention the "solution" with if/switch:

    public static void Message(string message)
    {
        if (LoggingBehavior == LoggingBehavior.Console)
        {
            Console.WriteLine(message);
        }
        else if (...)
        {
            // ... and so on for every other behavior
        }
    }

    It is so ugly that my eyes start bleeding when I stare at it long enough. Why?

    1. You cannot add a new type of a log at runtime
    2. You cannot "override" the functionality at runtime or, say, wrap it in a decorator
    3. You have to modify every static method to add/change/remove a logging behavior
    4. All the other 10000 reasons why OOP is better than procedural programming

    But! We are so close to the first solution - every more or less experienced developer will cry out here: use the Strategy pattern!

    Solution 1: Strategy + Singleton

    Yes, that's easy. Define a contract to use and it will be automatically separated from the implementation. (Unless you make it sealed. By making a type or a member sealed, you guarantee that no one else can implement this contract.)

    OK, so here's our contract:

    public abstract class Logger
    {
        public abstract void Message(string message);
        public abstract void Error(Exception exception);
    }

    You could make it an interface as well, but with an interface you wouldn't be able to change the contract later without breaking existing clients. I already wrote about choosing abstract class vs. interface.

    You'll only need one instance of a logging behavior, so let's create a singleton:

    public static class Log
    {
        public static Logger Instance { get; set; }
    }

    Correct Singleton implementation is not part of this discussion - there is a whole science of how to correctly implement Singleton in .NET - just use your favorite search engine if you're curious.

    Now we just redirect the static methods to instance methods of the Logger instance and presto:

    public static void Message(string message)
    {
        Instance.Message(message);
    }

    And that's it! All of your code continues to use Log.Message and Log.Error, and if you'd like to change the behavior, just say

    Log.Instance = new DebugWriteLineLogger();

    At runtime! You could even wrap the instance using Decorators, watch it using Observers, broadcast using Composite, etc. etc.
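    As a sketch of that last point, here's a hypothetical decorator that wraps whatever Logger is currently installed (ListLogger and PrefixLogger are made-up example implementations, not part of the framework):

```csharp
using System;
using System.Collections.Generic;

public abstract class Logger
{
    public abstract void Message(string message);
    public abstract void Error(Exception exception);
}

// A concrete implementation that records messages (handy for tests).
public class ListLogger : Logger
{
    public readonly List<string> Lines = new List<string>();
    public override void Message(string message) { Lines.Add(message); }
    public override void Error(Exception exception) { Lines.Add("ERROR: " + exception.Message); }
}

// A decorator: wraps any existing Logger and prefixes each message.
public class PrefixLogger : Logger
{
    readonly Logger inner;
    readonly string prefix;

    public PrefixLogger(Logger inner, string prefix)
    {
        this.inner = inner;
        this.prefix = prefix;
    }

    public override void Message(string message) { inner.Message(prefix + message); }
    public override void Error(Exception exception) { inner.Error(exception); }
}

public static class Log
{
    public static Logger Instance { get; set; }

    // The static facade redirects to the current Logger instance.
    public static void Message(string message) { Instance.Message(message); }
    public static void Error(Exception exception) { Instance.Error(exception); }
}
```

    Installing the decorator is just Log.Instance = new PrefixLogger(Log.Instance, "[UI] "); - every existing Log.Message call is now transparently decorated.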

    Solution 2: delegates

    When I was saying that static types have no means to adhere to a contract, I was not quite correct. Delegates are a more fine-grained kind of contract - one that governs individual methods as opposed to whole types. Let's define two contracts:

    public delegate void MessageLogger(string message);
    public delegate void ErrorLogger(Exception exception);

    Now let's specify that our Log methods define a contract, not an implementation:

    public static class Log
    {
        public static MessageLogger Message { get; set; }
        public static ErrorLogger Error { get; set; }
    }

    We can still call Log.Message(string), but this time we can substitute an implementation at runtime:

    Log.Message = Debug.WriteLine;

    Also, instead of creating explicit delegates MessageLogger and ErrorLogger, we could reuse the System ones: Action<string> and Action<Exception>, which would work just as well.

    Important update: originally I wrote here that you can convert between different delegate types as long as the method signature is the same (e.g. convert MessageLogger to Action<string> and back). THIS IS WRONG. You can assign delegates of type MessageLogger and Action<string> to point to the same method, but you can't assign them to each other. Delegate type inferencing only works when assigning methods to delegates and not delegates to delegates. I'm sorry about the confusion and thanks to Alex for pointing this out.
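    A small hypothetical demo of that distinction:

```csharp
using System;

public delegate void MessageLogger(string message);

public static class DelegateIdentityDemo
{
    public static string Last;

    public static void Write(string message) { Last = message; }

    public static void Run()
    {
        // Both delegate types can point at the same method...
        MessageLogger logger = Write;
        Action<string> action = Write;

        logger("via MessageLogger");
        action("via Action<string>");

        // ...but they remain distinct, unrelated types, so this
        // would NOT compile:
        //     logger = action;
        // You have to wrap the other delegate's target instead:
        logger = new MessageLogger(action.Invoke);
        logger("rewrapped");
    }

    static void Main()
    {
        Run();
        Console.WriteLine(Last); // prints "rewrapped"
    }
}
```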

    Of course, as with the Singleton from Solution 1, we have to make sure we initialize the default implementation before we call it, otherwise we'll get a null reference exception. We always have to be careful about initializing contracts with the default implementation.
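    One simple way to guarantee a default implementation is a static constructor that installs no-op delegates (a sketch, using the delegate-based Log from above):

```csharp
using System;

public delegate void MessageLogger(string message);
public delegate void ErrorLogger(Exception exception);

public static class Log
{
    // The static constructor runs before the first use of the class,
    // so Message and Error are never null.
    static Log()
    {
        Message = delegate { };
        Error = delegate { };
    }

    public static MessageLogger Message { get; set; }
    public static ErrorLogger Error { get; set; }
}
```

    Now Log.Message(...) is safe to call even before anyone has configured logging.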

    Now that'd be funny - what if we want to log the NullReferenceException that occurs because Log.Error itself is null - how in the world are we supposed to do that?

    As you can see, delegates are just as powerful as interfaces when specifying a contract for a single method as opposed to a whole type. I already wrote about this too: Delegates as an alternative to single-method interfaces

    A peculiarity with delegates is that you can't switch all the delegates at once, you have to do it one by one. This can be both an advantage and a disadvantage, depending on your situation. Also, there is no clear guidance as to where to put the implementation methods for delegates - they might end up being scattered all over the code.


    We can use either of the two solutions above to change the logging implementation at runtime, even if the Log class is defined in a different assembly. Moreover, we can switch between the first and the second solution without changing the client source code. We will have to recompile, though, because the switch changes the static methods on Log to static properties and vice versa (it breaks binary compatibility, but not source-level compatibility).

    I'm still not sure whether I like the Singleton or Delegates better. It probably depends on the situation. What do you think?

  • Kirill Osenkov

    Copy Code in HTML format with Visual Studio 2010


    Today Jason announced the Visual Studio 2010 Productivity Power Tools – a set of VS 2010 extensions that we released to complement and enhance the built-in 2010 features. You can either install the Pro Power Tools from here or just go to Tools –> Extension Manager and type "Pro Power Tools" in the search box:


    For the Pro Power Tools, I wrote a feature that copies formatted source code to the clipboard in HTML format. The traditional VS editor only copies the plain text and RTF formats to the clipboard, and there are numerous tools available to convert RTF to HTML. With the Pro Power Tools installed, pressing Ctrl+C (or otherwise copying the code editor selection to the clipboard) also places a formatted, colorized HTML fragment on the clipboard, alongside the plain text and RTF formats.

    Then you can paste the code anywhere HTML is supported, such as Windows Live Writer:


    Press Ctrl+Shift+V (Paste Special), then Alt+K (Keep Formatting) and Enter to insert the colorized code into Live Writer.

    In a future version of the power tools I will introduce customization – you will be able to fully control what HTML gets generated:

    1. Prepend any HTML to the code fragment
    2. Append any HTML to the code fragment
    3. Prepend any HTML before every line of code
    4. Append any HTML after every line of code
    5. Include line numbers and the format string for them
    6. Replace the space with &nbsp;
    7. Replace the carriage return line feed with <br />
    8. Emit <span style="color: blue">void</span> vs. <span class="keyword">void</span>
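    Options 6 and 7 essentially amount to string replacements on the escaped code. Here's a rough sketch of the idea (an illustration only, not the tool's actual implementation):

    ```csharp
    using System;

    class HtmlFormattingSketch
    {
        static void Main()
        {
            string code = "void Foo() { }";

            string html = System.Security.SecurityElement.Escape(code)
                .Replace(" ", "&nbsp;")        // option 6: non-breaking spaces
                .Replace("\r\n", "<br />");    // option 7: explicit line breaks

            Console.WriteLine(html);
            // void&nbsp;Foo()&nbsp;{&nbsp;}
        }
    }
    ```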

    Also, the current version has a bug: Cut (Ctrl+X) doesn't place formatted HTML on the clipboard; only Copy does. I forgot to hook up the Edit.Cut command and never got around to fixing it. I'll fix it for the next release though.

    Hope you like it, and as always, your feedback is appreciated!

    P.S. Here's the sample code formatted using the extension:

            private static string IntersperseLineBreaks(string text)
            {
                text = text.Replace("\n\r", "\n \r");
                return text;
            }
    You can View Source on this page, look for IntersperseLineBreaks and view the HTML that I generate. In a future release you will be able to customize the output of the tool, but for now this is what it defaults to.
  • Kirill Osenkov

    5 min. screencast: Live Geometry overview


    Microsoft sponsored a usability study for my side project Live Geometry, and I have to say, it was awesome. It was a lot of fun watching the participants using the software and I got a ton of great and useful feedback.

    I have to confess, I didn’t realize that it’s not obvious how to use Live Geometry (especially if you’ve never seen it before). Since I was the one who developed the software, I subconsciously assumed that it’s all intuitive and trivial. Well, guess what – it turned out not to be the case. I am not the end user. Things that are obvious to me might not be obvious to others.

    So I developed a plan on how to make things better. There are two ways: improving User Experience and investing in User Education. The former will be a slow and gradual process of me designing the features and the UI, fixing bugs, reworking the UI and thinking through UI details.

    Today I’ll start approaching the task of User Education and present a 5 min. screencast – a brief overview of the Live Geometry software and its possibilities (Hint: double-click the video for fullscreen viewing):

    Get Microsoft Silverlight

    You can also download the .wmv file (15 MB).

    More documentation will follow later, but this should at least give you a quick start and an idea of how things work.

    Any feedback is welcome!

  • Kirill Osenkov

    A list of common HRESULT error codes


    I was looking for error code –2146232797 (hex 0x80131623, which turned out to be what Environment.FailFast throws) and I’ve stumbled upon this treasure:
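    To see why –2146232797 and 0x80131623 are the same thing, recall that an HRESULT packs a failure bit, a facility and an error code into 32 bits. Here's a quick sketch of taking one apart:

    ```csharp
    using System;

    class HResultSketch
    {
        static void Main()
        {
            int hresult = -2146232797;
            uint bits = unchecked((uint)hresult);          // 0x80131623

            bool failed = (bits & 0x80000000) != 0;        // severity bit set = failure
            int facility = (int)((bits >> 16) & 0x1FFF);   // 0x13 = 19 = FACILITY_URT (the CLR)
            int code = (int)(bits & 0xFFFF);               // 0x1623

            Console.WriteLine("0x{0:X8}: failed={1}, facility={2}, code=0x{3:X4}",
                bits, failed, facility, code);
            // 0x80131623: failed=True, facility=19, code=0x1623
        }
    }
    ```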


    Also, here’s a great blog about deciphering an HRESULT:


    And here's another good list from MSDN:


    I sincerely wish that you never, ever need this knowledge...

  • Kirill Osenkov

    Saving images (.bmp, .png, etc) in WPF/Silverlight


    I’ve recently added a new feature to Live Geometry that allows users to save the current drawing as a bitmap or a .png file. Just push the save button and pick the desired image format in the Save dialog:


    Fortunately, both WPF and Silverlight support saving the full visual contents of any visual into a file on disk. However, the approach is somewhat different in each.

    Saving images in WPF

    WPF can save any Visual to an image and it supports several formats out of the box via a concept of Encoders. Here’s a sample for .bmp and .png:

    void SaveToBmp(FrameworkElement visual, string fileName)
    {
        var encoder = new BmpBitmapEncoder();
        SaveUsingEncoder(visual, fileName, encoder);
    }

    void SaveToPng(FrameworkElement visual, string fileName)
    {
        var encoder = new PngBitmapEncoder();
        SaveUsingEncoder(visual, fileName, encoder);
    }

    void SaveUsingEncoder(FrameworkElement visual, string fileName, BitmapEncoder encoder)
    {
        RenderTargetBitmap bitmap = new RenderTargetBitmap(
            (int)visual.ActualWidth,
            (int)visual.ActualHeight,
            96,
            96,
            PixelFormats.Pbgra32);
        bitmap.Render(visual);
        BitmapFrame frame = BitmapFrame.Create(bitmap);
        encoder.Frames.Add(frame);

        using (var stream = File.Create(fileName))
        {
            encoder.Save(stream);
        }
    }

    These types are all in System.Windows.Media.Imaging.

    Saving images in Silverlight 3

    In Silverlight, the encoders don’t come as part of the Silverlight runtime – but fortunately there is a project on CodePlex called ImageTools (http://imagetools.codeplex.com) that provides necessary support. You will need to download the following binaries and add them as references to your Silverlight project:

    • ICSharpCode.SharpZipLib.Silverlight
    • ImageTools
    • ImageTools.IO
    • ImageTools.IO.Png (only if you want .png support)
    • ImageTools.IO.Bmp (only if you want .bmp support)
    • ImageTools.Utils

    After that, you can call the ToImage() extension method on any Canvas:

    void SaveAsPng(Canvas canvas, SaveFileDialog dialog)
    {
        SaveToImage(canvas, dialog, new PngEncoder());
    }

    void SaveAsBmp(Canvas canvas, SaveFileDialog dialog)
    {
        SaveToImage(canvas, dialog, new BmpEncoder());
    }

    void SaveToImage(Canvas canvas, SaveFileDialog dialog, IImageEncoder encoder)
    {
        using (var stream = dialog.OpenFile())
        {
            var image = canvas.ToImage();
            encoder.Encode(image, stream);
        }
    }

    Since you can’t write to disk directly in Silverlight, you can pass a SaveFileDialog and use its stream, or you can obtain a stream elsewhere and pass that. The ToImage() extension method does the dirty work that we had to do ourselves in WPF.

    Big thanks to http://imagetools.codeplex.com for their awesome library and encoders!

  • Kirill Osenkov

    DLR Hosting in Silverlight


    As you probably know, the DLR is the dynamic language runtime that provides a common platform for dynamic languages and scripting in .NET. Its two main languages, IronPython and IronRuby, are available both to develop your programs in and to be hosted in your programs. DLR hosting means that the users of your program can use scripting in any DLR language, for example to automate your program or to programmatically access the domain model of your application.

    I was thinking about adding a capability to plot function graphs like y = cos(x) to my Live Geometry app, so I thought of hosting the DLR in Silverlight to compile and evaluate mathematical expressions.

    Fortunately, DLR readily supports this scenario. And fortunately, Tomáš Matoušek, a developer on our IronRuby Team (part of Visual Studio Managed Languages), sits right around the corner from my office and was kind enough to provide great help when I had questions. Big thanks and kudos to Tomáš!

    So, to host the DLR in Silverlight, here's the file that I added to my project (you can view my full source code here: http://dynamicgeometry.codeplex.com/SourceControl/ListDownloadableCommits.aspx).

    All you need to do is to set up a script runtime, get your language's engine (here we use Python), create a scope for your variables and you're ready to evaluate, execute and compile!

    using System;
    using DynamicGeometry;
    using IronPython.Hosting;
    using Microsoft.Scripting;
    using Microsoft.Scripting.Hosting;
    using Microsoft.Scripting.Silverlight;

    namespace SilverlightDG
    {
        public class DLR : ExpressionCompiler
        {
            ScriptRuntime runtime;
            ScriptEngine engine;
            ScriptScope scope;

            public DLR()
            {
                var setup = new ScriptRuntimeSetup();
                setup.HostType = typeof(BrowserScriptHost);
                // register Python with this runtime, so GetEngine("Python") below can find it
                setup.LanguageSetups.Add(Python.CreateLanguageSetup(null));
                runtime = new ScriptRuntime(setup);
                engine = runtime.GetEngine("Python");
                scope = engine.CreateScope();
            }

            public override Func<double, double> Compile(string expression)
            {
                var source = engine.CreateScriptSourceFromString(
                    string.Format(@"
    from math import *
    def y(x):
        return {0}
    func = y
    ", expression),
                    SourceCodeKind.Statements);
                CompiledCode code = source.Compile();
                code.Execute(scope);
                var func = scope.GetVariable<Func<double, double>>("func");
                return func;
            }
        }
    }

    ExpressionCompiler is my own abstract class that I defined in the DynamicGeometry assembly:

    using System;

    namespace DynamicGeometry
    {
        public abstract class ExpressionCompiler
        {
            public abstract Func<double, double> Compile(string expression);

            public static ExpressionCompiler Singleton { get; set; }
        }
    }

    As you can see, the service I need from the DLR is to implement the Compile method, which compiles an expression down to a callable function delegate that I can then use to evaluate the function at any point.

    Finally, just register the DLR as an implementation for my ExpressionCompiler:

    ExpressionCompiler.Singleton = new DLR();

    And we're ready to go.

    Let's go back to DLR.cs and I'll comment a little more on what's going on. Essentially, to host the DLR you need three things:

    ScriptRuntime runtime;
    ScriptEngine engine;
    ScriptScope scope;

    The runtime is your "world". You load a language-specific engine (like the PythonEngine) into the runtime. To create a runtime with a language, one way is to use:

    var setup = new ScriptRuntimeSetup();
    setup.HostType = typeof(BrowserScriptHost);
    runtime = new ScriptRuntime(setup);

    This works fine in Silverlight, because we use the browser-specific BrowserScriptHost, which does not use the file system. One problem I had was that I initially tried to call:

    runtime = Python.CreateRuntime();

    which didn't work, because it used the default script host (which tries to access the file system) instead of the BrowserScriptHost. Once you have the runtime, you can get the engine and create a scope in that engine:

    engine = runtime.GetEngine("Python");
    scope = engine.CreateScope();

    Now you're ready to do things like:

    var five = engine.Execute("2 + 3", scope);

    You can go back up to the first code example to see how I declared a function in Python and converted it to a C# callable Func<double, double> delegate.

    Finally, here's the working application (which you can also find at http://geometry.osenkov.com). Press the y = f(x) toolbar button, enter sin(x) and press the Plot button:

  • Kirill Osenkov

    &apos; is in XML, in HTML use &#39;


    I just got hit by a very confusing "by design" behavior and it took me a while to figure out what's going on.

    Here is the line of code:

        text = System.Security.SecurityElement.Escape(text);

    This method replaces invalid XML characters in a string with their valid XML equivalent.

    The problem I had: when I escaped some VB code using this method and then pasted it into Windows Live Writer, VB comments (') ended up as the literal text &apos; on the published page, because Live Writer escaped the ampersand once more, producing &amp;apos; in the HTML.

    Well, it turns out XML supports &apos; to denote the apostrophe character '. HTML, however, doesn't officially support &apos;, and hence Live Writer "HTML-escaped" my already "XML-escaped" string.


        text = System.Security.SecurityElement.Escape(text);
        // HTML doesn't support XML's &apos;
        // need to use &#39; instead
        // http://www.w3.org/TR/html4/sgml/entities.html
        // http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2005-October/004973.html
        // http://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references
        // http://fishbowl.pastiche.org/2003/07/01/the_curse_of_apos/
        // http://nedbatchelder.com/blog/200703/random_html_factoid_no_apos.html
        text = text.Replace("&apos;", "&#39;");
  • Kirill Osenkov

    First videos of the structured editor prototype


    Disclaimer: the structured editor work described in my posts is unrelated to my work at Microsoft. Everything shown is my personal research done as part of my MSc thesis during 2004-2007. Also, it’s not ready for real use and does not cover all the features of even C# 1.0. It’s a concept prototype and work in progress.


    As part of my research back in school I was building an experimental structured editor for C#. Now I’ve decided to publish the sources and binaries on CodePlex:


    A detailed discussion of structured editors deserves a separate post, which is coming soon. For now, to give a better idea of how the editor works, I’ve recorded six short videos showing the different features below. If your blog reader doesn’t support iframes, I recommend you view this post in the browser.

    What is a structured editor?

    Most code editors represent programs as text. Structured editors, on the contrary, display the syntax tree of a program directly on screen and allow the user to manipulate the tree directly (think MVC: the program's parse tree is the model, and the editor displays a view of it):

    This way the visual layout better illustrates the structure of the program and allows for atomic operations on the language constructs. A structured editor lets developers avoid syntax errors and concentrate on the meaning of the program instead of formatting.

    1. Introduction

    2. Editing

    You will notice that there is no intellisense or coloring at the expression level – I did build a SharpDevelop add-in that hosts the structured editor control inside the SharpDevelop project system (DOM), but these videos were recorded on a stand-alone version of the structured editor control hosted inside an empty Windows form. Also, the support at the expression level is rudimentary, because so far I’ve concentrated mostly on the declaration level (namespaces, types, members) and the statement level (except for single-line statements such as assignments).


    3. Drag & Drop Support

    4. No Parser Means Less Syntax

    5. Properties

    6. Modifiers


    Structured editing is a topic that has been surrounded by skepticism and controversy for the past 20-30 years. Some argue that directly editing the AST on screen is inflexible and inconvenient, because the constraints of always having a correct program restrict the programmer way too much. Others expect structured editors to be more helpful than text editors, because the user operates atomically and precisely on the language constructs, concentrating on the semantics and not on the syntax.

    In summer 2004, my professor Peter Bachmann initiated a student research project - we started building a structured editor for C#. I took part because I was deeply persuaded that good structured editors can actually be built, and it was challenging to go and find out for myself. After the original project was over, I took over the basic prototype and evolved it further to the state it is today.

    As one of numerous confirmations for my thoughts, in 2004, Wesner Moise wrote:

    ...I see a revolution brewing within the next three years in the way source code is written.

    Text editors are going to go away (for source code, that is)! Don't get me wrong, source code will still be in text files. However, future code editors will parse the code directly from the text file and will display it in a concise, graphical and nicely presented view, with each element in the view representing a parse tree node. ...

    I remember how I agreed with this! After three years, in 2007, the prototype implementation was ready - it became the result of my master's thesis. I still agree with what Wesner was envisioning in 2004 - with one exception. Now I believe that structured editors shouldn't (and can't) be a revolution - fully replacing text editors is a bad thing to do. Instead, structured editors should complement text editors to provide yet another view on the same source tree (internal representation, or AST).

    My conclusions about structured editors

    1. Text-based editors aren’t going away anytime soon.
    2. Structured editors can only succeed by evolution, not revolution – hybrid approach and ability to toggle between text and structure is the way to go.
    3. The main difficulty with structured editors is getting the usability right.
    4. The right approach is solving all the numerous little problems one by one – find a solution for every situation where a text editor is better (from my experience, for any advantage of a text editor, a good structured solution can be found).
    5. You need a graphical framework with auto-layout such as WPF to build a structured editor.
    6. Don’t try to build a universal editor that can edit itself – this approach is too complex. Take a specific language and build a hardcoded editor for it first. Only after you’ve built 2-3 successful editors does it make sense to generalize and build a universal editor or an editor generator.
    7. Compilers shouldn’t be a black box, but expose the data structures as a public API surface.
    8. Syntax trees should be observable so that the editor can databind to them.
    9. A traditional text-based expression editor should be hostable inside a structured editor for single-line statements and expressions.

    As a result of my work, I'm convinced that structured editors actually are, in some situations, more convenient than text editors, and that providing the programmer with two views on the code to choose from would be a benefit. Just like the Visual Studio Class Designer – those who want to use it just use it, and the rest happily continue to use the text editor. All these views should co-exist to provide the programmer with a richer palette of tools to choose from.

    Hence, my first important conclusion. A program's internal representation (the AST) should be observable to allow the MVC architecture - many views on the same internal code model. With MVC, all views will be automatically kept in sync with the model. This is where for example something like WPF data-binding would come in handy.
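    For example, an observable syntax tree node could raise change notifications that any view subscribes to. Here's a minimal hypothetical sketch (made-up names, not the actual editor's code):

    ```csharp
    using System;
    using System.Collections.ObjectModel;

    // A made-up, minimal observable AST node: the model notifies its views.
    class SyntaxNode
    {
        public event EventHandler Changed;

        // ObservableCollection already raises CollectionChanged for children.
        public ObservableCollection<SyntaxNode> Children { get; private set; }

        string text;
        public string Text
        {
            get { return text; }
            set
            {
                text = value;
                var handler = Changed;
                if (handler != null)
                {
                    handler(this, EventArgs.Empty);   // every view refreshes itself
                }
            }
        }

        public SyntaxNode()
        {
            Children = new ObservableCollection<SyntaxNode>();
        }
    }

    class Program
    {
        static void Main()
        {
            var node = new SyntaxNode();
            node.Changed += (sender, e) => Console.WriteLine("view updated");
            node.Text = "Foo";   // triggers the notification
        }
    }
    ```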

    As for the structured editor itself – it is still a work in progress, and I still hope to create a decent complement to text editors. It has to be usable, and there are still a lot of problems to solve before I can say: "Here, this editor is at least as good as the text editor". But I've managed to solve so many challenging problems already that I'm optimistic about the future.

    Current implementation

    The current implementation edits a substantial subset of C# 1.0 - namespaces, types, members (except events), and almost all statements. If you're interested, you can read more at http://www.guilabs.net and www.osenkov.com/diplom - those are two sites I built to tell the world about my efforts. I also accumulate my links about structured editing here: http://delicious.com/KirillOsenkov/StructuredEditors

    More languages

    It turned out that it makes sense to build structured editors not only for C#, but for other languages as well – XML, HTML, Epigram, Nemerle, etc. That is why, from the very beginning, the whole project was split into two parts – the editor framework and the C# editor built on top of it.


    1. http://structurededitor.codeplex.com
    2. http://guilabs.net
    3. http://www.osenkov.com/diplom
    4. http://delicious.com/KirillOsenkov/StructuredEditors

    If you want to know more, or if you want to share your opinion on this, please let me know. I welcome any feedback! Thanks!

  • Kirill Osenkov

    Call Hierarchy Navigation in Visual Studio 2010


    We're currently designing a new IDE feature named Call Hierarchy. Essentially, it allows you to find places where a given method is called, which is similar to how Find All References currently works. However, unlike Find All References, the Call Hierarchy feature provides a deeper understanding of, and more detailed information about, the calls.


    You can invoke the Call Hierarchy toolwindow by right-clicking on a method, property or constructor name in the code editor and choosing View Call Hierarchy from the context menu:


    Tool window

    A toolwindow will appear docked on the bottom of the Visual Studio window:


    You can expand the node for the method to see information about it: incoming calls to the method ("Calls To") and outgoing calls ("Calls From"):


    Here's how it works. A method (or a property, or a constructor) is displayed as a root in the treeview. You can expand the node to get a list of "search categories" - things you want to find. Four search categories are currently supported:

    1. Calls To - "incoming" calls to this member
    2. Calls From - "outgoing" calls mentioned in this member's body
    3. Overrides - available only for abstract or virtual members
    4. Implements - finds places where an interface member is implemented

    When you expand a search node (such as Calls To 'GetCallableMethods'), a solution-wide search is started in the background and the results appear under the Calls To folder. You can click on a result, and the details will appear in the Details list view on the right hand side.

    The Details list view shows all the exact call sites and locations in code where GetCallableMethods is called from GenerateXsdForComplexTypes. We see that the method is being called only once, the line of code is shown, as well as file name and position in the file. Double-clicking on that call site will navigate to it in the code editor.

    The advantage of Call Hierarchy compared to Find All References is that it allows you to explore and drill multiple levels deep into the call graph (find a caller's caller, and so on). Also, Call Hierarchy has a deeper and more fine-grained understanding of the source code – while Find All References just finds the symbols, Call Hierarchy differentiates abstract and virtual methods, interface implementations, actual calls from delegate creation expressions, etc. Also, it works like a scratch pad: you can add any member as another root-level item in the Call Hierarchy tool window and have several members displayed there at once. Finally, the Details Pane gives information about the concrete call sites if a method is called several times in the body of the calling method.


    In the toolbar you can select the scope of the search: the currently opened file only, the current project, or the entire solution.

    The Refresh button refills the treeview in case the original source code was modified.

    If a root node of the treeview is selected, the "Delete Root" button will remove it from the treeview. You can add any member as a new root in the treeview by right-clicking it and choosing the corresponding context menu item:


    or adding it from the source code as described in the beginning.

    Finally, the Toggle Details Pane button shows or hides the details pane.

    Some design issues and implementation details

    Although the feature is already implemented – if you have the Visual Studio 2010 CTP, you can already play with Call Hierarchy – we're still not quite happy with the current UI design, usability and user experience.

    For example, one issue that we're seeing is that it takes 2 mouseclicks and 2 mousemoves to invoke the Find All References search, but it takes 4 mouseclicks and 4 mousemoves to get the callers list for a given method (1 click - menu invocation, 1 click - menu item selection, 1 click - expand the treeview node for the method, 1 click - expand the "Calls To" folder). Although the search itself will be slightly faster than Find All References, the perceived complexity of invoking the feature is something we definitely want to improve. We want this feature to be at least as good and usable as Find All References, but also provide additional benefits, otherwise people will just not use the feature and continue using Find All References.

    I think I'll stop for now and see what kind of feedback you guys might have about this. In the next blog post I plan to share more of our current thinking and what we'd like to change. For now, I'd be really interested to know what you think and if you have any suggestions or ideas. Now it's not too late, and we can change the feature based on your feedback.

  • Kirill Osenkov

    WPF SendKeys or mocking the keyboard in WPF


    This post will only be interesting for the few who test WPF UI in-process (i.e. not through out-of-process UI automation frameworks such as White). With in-process testing, the test lives in the same AppDomain as the code under test and has direct access to all the APIs, such as UIElement.RaiseEvent.

    With out-of-process testing, the test lives in a separate process from the application under test. In this case, the test sends the application automation requests (typically using MSAA or the newer UI Automation library). These requests eventually go through the operating system's window messages. The downside of this approach is that, since it almost fully emulates user behavior, the tests are fragile: a sudden Windows Update dialog popup or a random message box from another application can steal the focus from the test application and confuse the test. That's why in-process tests are generally considered more reliable – they work even if the application is not focused or even not visible (in the case of a unit test).

    Those who have written tests using WinForms before are familiar with the helpful SendKeys class in System.Windows.Forms. It works by going through the operating system's SendInput function and provides good approximation of how the actual user's input would be routed through the system. SendKeys can be used for both in-process and out-of-process testing.

    Another thing that WinForms has is the RaiseKeyEvent methods on Control with KeyEventArgs that easily allow you to mock keyboard events in-process, at the unit-testing level.

    WPF does not provide any alternative to the SendKeys class. It does provide a RaiseEvent method; however, the KeyEventArgs class in WPF was explicitly designed in such a way as to prevent mocking the modifier-key state. We can see that many people are looking to call RaiseEvent from their tests with mock keys and modifiers, but it seems that no solution is currently available. See, for example, this StackOverflow question: http://stackoverflow.com/questions/1263006/wpf-send-keys-redux The official guidance from the WPF team is to go through the operating system and either add a reference to System.Windows.Forms.dll to use SendKeys or use the SendInput API directly to mock the keyboard.

    I've created a little project that works around this limitation and allows you to call RaiseEvent on your WPF controls from your unit tests and fully mock the modifier key states (Ctrl, Alt, Shift). It implements the SendKeys DSL – a little mini-language to encode keystrokes. It does some very dirty reflection magic to work around WPF's protection against mocking modifier key states; however, it works and might be useful to some people. Warning: it is NOT a replacement for the WinForms SendKeys; it's a completely different thing.

    While it is not recommended for unit tests, it can do a decent job for integration tests, where you actually show your application's UI. The chances that you need this library are very low, but if you happen to need it, it will probably come in handy.

    The source is at http://wpfsendkeys.codeplex.com

  • Kirill Osenkov

    Coding productivity: macros, shortcuts and snippets


    Visual Studio macros are a fantastic productivity booster, which is often underestimated. It's so easy to record a macro for your repetitive action and then just play it back. Even better, map a macro to a keyboard shortcut. I'll share a couple of examples.


    If you open up Visual Studio and type in this code:


    How many keystrokes did you need? I've got 10 (including holding the Shift key once). It's because I have this macro mapped to Shift+Enter:

    Sub InsertCurlies()
        DTE.ActiveDocument.Selection.Text = "{"
        DTE.ActiveDocument.Selection.Text = "}"
    End Sub

    So I just typed in void Foo() and hit Shift+Enter to insert a pair of curlies and place the cursor inside. Remarkably, I've noticed that with this macro I now almost never have to hit the curly brace keys on my keyboard. Readers from Germany will especially appreciate this macro, because on German keyboard layouts you have to press Right-Alt and the curly key, which really takes some time to get used to.

    This macro is also useful to convert an auto-property to a usual property: you select the semicolon and hit Shift+Enter:


    Try it out!


    Suppose you have a field which you'd like to convert to an auto-implemented property:


    And when you click the menu item, you get:


    How did I do it? First, here's the macro:

        Sub ConvertFieldToAutoprop()
            ' Select the current field declaration line
            DTE.ActiveDocument.Selection.StartOfLine( _
                vsStartOfLineOptions.vsStartOfLineOptionsFirstText)
            DTE.ActiveDocument.Selection.EndOfLine(True)
            Dim fieldtext As String = DTE.ActiveDocument.Selection.Text
            If fieldtext.StartsWith("protected") _
                Or fieldtext.StartsWith("internal") _
                Or fieldtext.StartsWith("private") Then
                fieldtext = fieldtext.Replace("protected internal", "public")
                fieldtext = fieldtext.Replace("protected", "public")
                fieldtext = fieldtext.Replace("internal", "public")
                fieldtext = fieldtext.Replace("private", "public")
            ElseIf Not fieldtext.StartsWith("public") Then
                fieldtext = "public " + fieldtext
            End If
            fieldtext = fieldtext.Replace(";", " { get; set; }")
            DTE.ActiveDocument.Selection.Text = fieldtext
        End Sub

    And then just add the macro command to the refactor context menu or any other place. This may seem like no big deal, but I recently had to convert fields to auto-properties in 50+ files. I really learned to appreciate this macro.

    gs code snippet

    This is a very little but useful snippet: gs expands to { get; set; }

    <?xml version="1.0" encoding="utf-8" ?>
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
        <CodeSnippet Format="1.0.0">
            <Header>
                <Title>gs</Title>
                <Shortcut>gs</Shortcut>
                <Description>Code snippet for { get; set; }</Description>
                <Author>Microsoft Corporation</Author>
            </Header>
            <Snippet>
                <Code Language="csharp"><![CDATA[{ get; set; }$end$]]></Code>
            </Snippet>
        </CodeSnippet>
    </CodeSnippets>

    Although I usually use the prop snippet to create auto-implemented properties, gs is useful in some cases as well.

    I hope this has inspired you to do some personal usability research experiments and define everyday actions that you can optimize using macros, snippets and shortcuts. I would love to hear about your personal tips and tricks as well.


  • Kirill Osenkov

    DateTime.UtcNow is generally preferable to DateTime.Now


    It seems to be a commonly known and accepted best practice to use DateTime.UtcNow for non-user-facing scenarios such as time interval and timeout measurement.

    I’ve just done an audit of the Roslyn codebase and replaced most DateTime.Now calls with DateTime.UtcNow. I thought it’d be useful to post my changeset description here (although none of it is new – I just summarize some common knowledge readily available in the sources linked below).


    Replacing DateTime.Now with DateTime.UtcNow in most cases.

    We should be using DateTime.Now only in user-facing scenarios. It respects the timezone and Daylight Savings Time (DST), and the user feels comfortable when we show them the string that matches their wall clock time.

    In all other scenarios though DateTime.UtcNow is preferable.

    First, UtcNow is usually a couple of orders of magnitude faster, since Now basically calls UtcNow first and then makes a very expensive call to figure out the time zone and daylight savings time information. Here’s a great chart from Keyvan’s blog (modified to avoid allocations of the Stopwatch object):


    Second, Now can be a big problem because of a sudden 1-hour jump during DST adjustments twice a year. Imagine a waiting loop with a 5-sec timeout that happens to occur exactly at 2am during DST transition. The operation that you expect to timeout after 5 sec will run for 1 hour and 5 seconds instead! That might be a surprise.

    Hence, I'm replacing DateTime.Now with DateTime.UtcNow in most situations, especially polling/timeout/waiting and time interval measurement. Also, everywhere where we persist DateTime (file system/database) we should definitely be using UtcNow because by moving the storage into a different timezone, all sorts of confusion can occur (even "time travel", where a persisted file can appear with a future date). Granted, we don't often fly our storage with a supersonic jet across timezones, but hey.
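    To make the DST hazard concrete, here's a minimal sketch of a polling helper (WaitFor and its parameters are illustrative names, not Roslyn code) that measures the timeout with UtcNow, so a wall-clock jump cannot stretch it:

```csharp
using System;
using System.Threading;

static class Polling
{
    // Waits until condition() returns true or the timeout elapses.
    // Measuring with DateTime.UtcNow means a DST wall-clock jump
    // cannot stretch (or shrink) the timeout the way Now could.
    public static bool WaitFor(Func<bool> condition, TimeSpan timeout)
    {
        DateTime start = DateTime.UtcNow;
        while (DateTime.UtcNow - start < timeout)
        {
            if (condition()) return true;
            Thread.Sleep(10);
        }
        return false;
    }
}
```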

    For precise time interval measurements we should be using System.Diagnostics.Stopwatch, which uses the high resolution timer (QueryPerformanceCounter). The resolution of both DateTime.Now and DateTime.UtcNow is very low – about 10 ms – which is a huge time interval!

    For a cheap timestamp, we should be generally using Environment.TickCount - it's even faster than DateTime.UtcNow.Ticks.
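    A small sketch contrasting the two options above (helper names are illustrative):

```csharp
using System;
using System.Diagnostics;

static class TimingExamples
{
    // Precise interval measurement: Stopwatch wraps the high-resolution
    // timer (QueryPerformanceCounter), unlike the ~10 ms DateTime clock.
    public static TimeSpan Measure(Action work)
    {
        Stopwatch sw = Stopwatch.StartNew();
        work();
        sw.Stop();
        return sw.Elapsed;
    }

    // Cheap timestamp: Environment.TickCount is milliseconds since system
    // start, even faster to read than DateTime.UtcNow.Ticks.
    public static int ElapsedMilliseconds(int startTickCount)
    {
        return Environment.TickCount - startTickCount;
    }
}
```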

    I'm only leaving DateTime.Now usages in test code (where it's never executed), in the compiler Version parsing code (where it's used to generate a minor version) and in a couple of places (primarily test logging and perf test reports) where we want to output a string in a local timezone.


    * http://www.keyvan.ms/the-darkness-behind-datetime-now
    * http://stackoverflow.com/questions/62151/datetime-now-vs-datetime-utcnow
    * http://stackoverflow.com/questions/28637/is-datetime-now-the-best-way-to-measure-a-functions-performance

  • Kirill Osenkov

    Interview question


    Here’s a nice simple interview question:

    In a given .NET string, assume there are line breaks in standard \r\n form (basically Environment.NewLine).

    Write a method that inserts a space between two consecutive line breaks to separate any two line breaks from each other.

    Also, would anyone venture a guess as to what a practical application for such a method might be?
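    For the curious, here's one possible solution sketch (assuming, per the problem statement, that breaks are exactly "\r\n"; there are certainly other approaches):

```csharp
using System;
using System.Text;

static class LineBreakSeparator
{
    // Inserts a space between every pair of consecutive "\r\n"
    // line breaks so that no two breaks end up adjacent.
    public static string SeparateLineBreaks(string text)
    {
        var result = new StringBuilder();
        bool previousWasBreak = false;
        for (int i = 0; i < text.Length; i++)
        {
            bool isBreak = text[i] == '\r' && i + 1 < text.Length && text[i + 1] == '\n';
            if (isBreak)
            {
                if (previousWasBreak) result.Append(' ');
                result.Append("\r\n");
                previousWasBreak = true;
                i++; // skip the '\n'
            }
            else
            {
                result.Append(text[i]);
                previousWasBreak = false;
            }
        }
        return result.ToString();
    }
}
```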

  • Kirill Osenkov

    Some resources about Visual Studio Extensibility


    A couple of readers have posted questions about Visual Studio Extensibility, DTE, your own packages, commands, experimental hive etc. To be frank, I’m not an expert in this field, so instead of trying to answer these questions, I will point to some better resources for VSX (VS Extensibility):

  • Kirill Osenkov

    Live Geometry with Silverlight 2


    I'm happy to announce a project I started on CodePlex: http://codeplex.com/DynamicGeometry

    Live preview at: http://geometry.osenkov.com

    Dynamic geometry

    In a nutshell, it's an interactive designer for ruler-and-compass constructions - it lets you plot points, connect them with lines, construct circles, intersection points, parallel lines - everything you usually do with "plain" geometry. But then the magic part comes in - you can drag the points, and the entire construction is recalculated immediately - this is why they call it dynamic geometry. The way it works is that the program doesn't store the coordinates of figures - instead, it stores the algorithm of their construction. Every time you move a point, the entire construction is rebuilt on the fly from the new coordinates using the algorithm stored in memory - that's why I like calling it "CAD with lazy evaluation".

    The actual program available online

    The application is deployed live at http://geometry.osenkov.com - you'll need Silverlight 2 Beta 2 to view this site. At first, the UI might not seem very intuitive - it takes a while to figure out how to use the toolbar and construction tools - but I hope an interested reader can tackle this challenge. The trick is to know how the toolbox buttons work. Plotting points is easy - just select the point tool and click anywhere. Dragging points is easy too - select the arrow (cursor) tool and drag the points. Now, to construct a segment between two points, you select the segment tool, click (and release) the first point, and then click (and release) the second point. See, you specify that this segment depends on two points - every time you drag one of the points using the drag tool, the segment updates itself accordingly.

    The source code is available online too!

    I used Visual Studio 2008, C# 3.0 and Silverlight 2 Beta 2. I have to say, those are awesome technologies and a pleasure to use (at least for me).

    I split the code into several projects. DynamicGeometry is a class library that provides a framework for defining figures and their interaction. My goal was to allow for extensibility - so that you can just plug in a new kind of a figure, and automatically consume all the centralized goodies - serialization, painting, dependency tracking, transaction system, interaction, etc. Now, the code for almost every figure fits in one screen of text - so I rarely have to scroll when editing these files. SilverlightDG is the Silverlight 2 front-end - UI. DG is the WPF front-end. I try to maintain two projects - WPF and Silverlight - to build from the same sources. So far so good :)

    Implementation details

    The project is still in its early Mickey Mouse stages and only some very basic functionality is there - however, there is already something that is worth noting:

    • an extensible type hierarchy to model geometric figures - all starts with IFigure and FigureBase
    • a State and Strategy pattern implementations to enable interactive mouse input from the user - different toolbox buttons provide different strategies to handle mouse input
    • a pretty decent transaction system, which enables nested transactions, Rollback, delayed execution, record/playback and, of course, unlimited Undo/Redo
    • a dependency tracking mechanism that allows you to express dependencies between figures (e.g. a midpoint depends on two existing points) - I employ topological sorting to order the figure DAG by its dependencies - we first want to recalculate the basic figures, and then their dependents
    • a toolbar that took me many painful hours to design - I highly respect UI designers and think that getting the UI right is more difficult than implementing the rest of the actual functionality. I hope I didn't do too badly, although I'm definitely not a great UI designer.
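    The dependency-ordering idea from the list above can be sketched roughly like this (Figure, Dependencies and Recalculator are illustrative names, not the actual DynamicGeometry API):

```csharp
using System;
using System.Collections.Generic;

class Figure
{
    public string Name;
    public List<Figure> Dependencies = new List<Figure>();
    public Figure(string name) { Name = name; }
}

static class Recalculator
{
    // Orders figures so that every figure comes after everything it
    // depends on: basic figures first, then their dependents.
    public static List<Figure> TopologicalSort(IEnumerable<Figure> figures)
    {
        var sorted = new List<Figure>();
        var visited = new HashSet<Figure>();
        foreach (Figure f in figures)
            Visit(f, visited, sorted);
        return sorted;
    }

    static void Visit(Figure f, HashSet<Figure> visited, List<Figure> sorted)
    {
        if (!visited.Add(f)) return;           // already processed
        foreach (Figure d in f.Dependencies)   // dependencies go first
            Visit(d, visited, sorted);
        sorted.Add(f);
    }
}
```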

    Silverlight 2 Beta 2

    Most probably by this time you already know what Silverlight is all about - if not, www.silverlight.net is a great place to start. I personally believe that Silverlight is our liberation from the old web story and am very optimistic about this technology. While Silverlight 1.0 only allowed JavaScript programmability, Silverlight 2 gives you full experience of .NET and managed languages - C#, VB, IronPython, IronRuby and others (did I forget something?). I think that Silverlight's advantage over Adobe technologies is this rich programming model - the fact that you can use .NET and C# 3.0 for the web client just makes me so excited about this.

    Targeting both Silverlight and WPF from same source

    If you look closer at my source code, you will notice that I have two projects - the WPF one and the Silverlight one - including the same .cs files. Because WPF and Silverlight use different runtimes and base class libraries, the project formats are incompatible - i.e. I can't reference a Silverlight class library from a WPF application. The workaround is relatively easy - just create two separate .csproj files in the same folder that reference the same sources - and you're fine.


    By the way, CodePlex is a quite awesome site for open-source projects - it provides version control (I love TFS and think it's excellent), item tracking, forums, wiki, downloads, stats - all for free and without annoying ads. Codeplex is one of the products that still make me proud to be working at Microsoft.

    DG 1.0 - the inspiration

    The project that I'm describing in this post is actually based on one that I implemented 7-8 years before - here's a blog post about my original dynamic geometry project: http://kirillosenkov.blogspot.com/2007/12/dg-10-free-dynamic-geometry-software.html

    I'd like to reach parity with the first version - however this is going to be tricky, because it has a lot of different goodies - from analytical expressions to the ability to create hyperlinks and compose complex documents. Interestingly enough, I implemented DG 1.0 using VB6 - and though I think VB6 was an awesome environment at that time (I think its Edit-and-Continue is still better than the one we have in Visual Studio 2008), I feel much better and more comfortable using C# 3.0 - it just lets me express what I want about my program in the way that I want.

    I hope some of you have found this interesting. I'd be happy to hear any feedback or advice on this.


  • Kirill Osenkov

    Samples for the Undo Framework


    I just added some samples for the Undo Framework. You can find the samples in the source code or download them from the project website.


    First is a simple Console App sample called MinimalSample. Here’s the full source code:

    using System;
    using GuiLabs.Undo;
    namespace MinimalSample
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Original color");
                SetConsoleColor(ConsoleColor.Green);
                Console.WriteLine("New color");
                actionManager.Undo();
                Console.WriteLine("Old color again");
                using (Transaction.Create(actionManager))
                {
                    SetConsoleColor(ConsoleColor.Red); // you never see Red
                    Console.WriteLine("Still didn't change to Red because of lazy evaluation");
                    SetConsoleColor(ConsoleColor.Blue);
                }
                Console.WriteLine("Changed two colors at once");
                actionManager.Undo();
                Console.WriteLine("Back to original");
                actionManager.Redo();
                Console.WriteLine("Blue again");
            }
            static void SetConsoleColor(ConsoleColor color)
            {
                SetConsoleColorAction action = new SetConsoleColorAction(color);
                actionManager.RecordAction(action);
            }
            static ActionManager actionManager = new ActionManager();
        }
        class SetConsoleColorAction : AbstractAction
        {
            public SetConsoleColorAction(ConsoleColor newColor)
            {
                color = newColor;
            }
            ConsoleColor color;
            ConsoleColor oldColor;
            protected override void ExecuteCore()
            {
                oldColor = Console.ForegroundColor;
                Console.ForegroundColor = color;
            }
            protected override void UnExecuteCore()
            {
                Console.ForegroundColor = oldColor;
            }
        }
    }

    Here we define a new action called SetConsoleColorAction, and override two abstract methods: ExecuteCore() and UnExecuteCore(). In the ExecuteCore(), we change the color to the new color and backup the old color. In UnExecuteCore() we rollback to the backed up old color. We pass the action all the context information it needs (in our case, the desired new color). We rely on the action to backup the old color and store it internally.

    The philosophy is to store only the smallest diff possible. Try to avoid copying the entire world when you can save just the minimal delta between states.

    Next, pay attention to the SetConsoleColor method. It wraps creating the action and calling RecordAction on it. It helps to create an API that abstracts away the action instantiation so that it is transparent for your callers. You don’t want your callers to create actions themselves every time, you just want them to call a simple intuitive API. Also, if for whatever reason you’d like to change or remove the Undo handling in the future, you’re free to do so without breaking the clients.

    Finally, the source code in Main shows how you can intersperse your API calls with calls to Undo and Redo. It also shows using a transaction to group a set of actions into a single “multi-action” (Composite design pattern). You can call your API within the using statement, but the actions’ execution is delayed until you commit the transaction (at the end of the using block). That’s why you don’t see the console color changing to red in the middle of the block. If you undo a transaction, it will undo all the little actions inside it in reverse order.


    The second sample is called WinFormsSample. It shows a Windows Forms dialog that lets you edit properties of a business object:


    You can change the text of both name and age, and the values will be mapped to the business object. You can also “Set Both Properties” which illustrates transactions. Then you can click Undo and it will rollback the state of your object to the previous state. The UI will update accordingly.

    There is a trick in the code to avoid infinite recursion: on textbox text change we update the business object, which fires an event, which updates the textboxes, which would update the business object again, and so on. We use a boolean flag called “reentrancyGuard” that lets the TextChanged events through only if the textbox modification was made by the user, not programmatically. If we updated the textboxes as a result of a business object change, there is no need to update the business object again.
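    The guard can be simulated in a few lines (a console sketch with made-up names - Sync, OnTextChanged - not the actual sample code):

```csharp
using System;

// Simulated two-way sync between a "textbox" and a "model" with a
// reentrancy guard that breaks the echo loop.
class Sync
{
    public string Model = "";
    public string TextBox = "";
    public int ModelUpdates;
    bool reentrancyGuard;

    // Called when the textbox text changes (by the user or by code).
    public void OnTextChanged(string text)
    {
        TextBox = text;
        if (reentrancyGuard) return;   // programmatic change: stop here
        Model = text;                  // user change: push to the model
        ModelUpdates++;
        OnModelChanged();              // the model fires its change event
    }

    void OnModelChanged()
    {
        reentrancyGuard = true;        // suppress the echo
        try { OnTextChanged(Model); }  // programmatic textbox update
        finally { reentrancyGuard = false; }
    }
}
```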

    Note: If this was WPF, I would just use two-way data binding, but I wanted to keep the sample as simple as possible and use only basic concepts.

    Action merging

    Another thing worth mentioning that this sample demonstrates is action merging. As you type in the name in the textbox ‘J’, ‘o’, ‘e’, you don’t want three separate actions to be recorded, so that you don’t have to click undo three times. To enable this, an action can determine if it wants to be merged with the next incoming action. If the next incoming action is similar in type to the last action recorded in the buffer, they merge into a single action that has the original state of the first action and the final state of the new action. This feature is very useful for recording continuous user input such as mouse dragging, typing and other events incoming at a high rate that you want to record as just one change.
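    The merging idea can be illustrated like this (SetNameAction and TryToMergeWith are made-up names for illustration, not the actual Undo Framework API):

```csharp
using System;

// A recorded action absorbs the next incoming action of the same kind,
// keeping its own old state and taking the new action's final state,
// so typing "J", "Jo", "Joe" undoes as a single step.
class SetNameAction
{
    public string OldName;
    public string NewName;

    public SetNameAction(string oldName, string newName)
    {
        OldName = oldName;
        NewName = newName;
    }

    public bool TryToMergeWith(SetNameAction next)
    {
        NewName = next.NewName; // keep our OldName, take their NewName
        return true;
    }
}
```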

    We update the visual state of the Undo and Redo buttons (enabled or disabled) based on whether the actionManager can Undo() or Redo() at the moment. We call the CanUndo() and CanRedo() APIs for this.

    Hopefully this has been helpful and do let me know if you have any questions.

  • Kirill Osenkov

    ColorPicker Control for WPF/Silverlight


    A while back I was looking around for a color picker control for Live Geometry. The ColorPicker from http://silverlightcontrib.codeplex.com was exactly what I was looking for:

    Get Microsoft Silverlight

    (live preview needs Silverlight 3.0)

    Using the control in your code

    I just took the source from CodePlex and embedded it in my project. You need 5 files:


    Alternatively, you can reference the binary, which you can download from the SilverlightContrib CodePlex project site. Note that generic.xaml contains the template for the control, so don’t forget the xaml. The control will work just fine with both WPF and Silverlight, which is really a great thing, especially if you’re multitargeting.

    To include the control in your application, here’s the basic code:

    <sc:ColorPicker SelectedColor="LightGreen" />

    Don’t forget to add an XML namespace:


    How does the gradient work?

    The source code for this control is very good for educational purposes. For instance, I had no idea how they create the big gradient for every possible hue. Well, it’s genius and it’s simple. In generic.xaml:

    <Canvas Canvas.Top="0" Canvas.Left="20">
        <Rectangle x:Name="ColorSample" Width="180" Height="180" Fill="Red"></Rectangle>
        <Rectangle x:Name="WhiteGradient" IsHitTestVisible="False" Width="180" Height="180">
            <Rectangle.Fill>
                <LinearGradientBrush StartPoint="0,0" EndPoint="1,0">
                    <GradientStop Offset="0" Color="#ffffffff"/>
                    <GradientStop Offset="1" Color="#00ffffff"/>
                </LinearGradientBrush>
            </Rectangle.Fill>
        </Rectangle>
        <Rectangle x:Name="BlackGradient" IsHitTestVisible="False" Width="180" Height="180">
            <Rectangle.Fill>
                <LinearGradientBrush StartPoint="0,1" EndPoint="0, 0">
                    <GradientStop Offset="0" Color="#ff000000"/>
                    <GradientStop Offset="1" Color="#00000000"/>
                </LinearGradientBrush>
            </Rectangle.Fill>
        </Rectangle>
        <Canvas x:Name="SampleSelector" IsHitTestVisible="False" Width="10" Height="10" Canvas.Left="100" Canvas.Top="96">
            <Ellipse Width="10" Height="10" StrokeThickness="3" Stroke="#FFFFFFFF"/>
            <Ellipse Width="10" Height="10" StrokeThickness="1" Stroke="#FF000000"/>
        </Canvas>
    </Canvas>

    This canvas contains layers like a cake. The ZIndex of objects is stacked bottom to top, so the solid rectangle with the initially red background is at the bottom.

    Above it, there is a horizontal white gradient fill, completely white on the left and completely transparent on the right.

    Above it, there is a vertical black gradient fill, completely black on the bottom and completely transparent on the top.

    As these gradients overlay, the transparency over the initial solid background creates the desired effect – the actual color is in top-right, where both gradients are 100% transparent. The white spot is in top-left, where the white gradient is most intense and black gradient fades out. Same for the black edge of the gradient.

    Also, it is well worth studying how the template is written – I learned a lot from this sample.

    My fixes

    Since I conveniently borrowed the source code for my project, I did several fixes for my own purposes. Ideally I should contribute the fixes back to SilverlightContrib, but I can never get around to it.

    First of all, I reordered the two StackPanels in the template so that the actual selected color is on top. I also made it collapsible and collapsed by default. You can expand the control by clicking it like a combobox. Unlike a combobox, you have to explicitly click the color area to collapse it again.

    Get Microsoft Silverlight

    I’ve enabled this by adding an Expanded property:

    public bool Expanded
    {
        get { return m_chooserArea.Visibility == Visibility.Visible; }
        set
        {
            var visibility = value ? Visibility.Visible : Visibility.Collapsed;
            if (m_chooserArea.Visibility == visibility)
                return;
            m_chooserArea.Visibility = visibility;
        }
    }

    When clicking on the color area, I just call Expanded = !Expanded to toggle it and it does the magic for me. The default value for this is obviously the default visibility of m_chooserArea, which is specified in the XAML template (Visibility="Visible" sets it to true by default).

    Other fixes are not as interesting. I fixed a division by zero in ColorSpace.ConvertRgbToHsv (they had h = 60 * (g - b) / (max - min); and didn’t check if min == max). There are a couple of other things which I don’t remember off the top of my head. I’d have to view TFS history to remember what those were. If you’re willing to help and incorporate these fixes to the original project, I’ll dig this up, just let me know :)
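    The division-by-zero fix boils down to guarding the achromatic case where max == min (a sketch of the standard RGB-to-hue formula, not the exact SilverlightContrib code):

```csharp
using System;

static class ColorMath
{
    // Standard RGB-to-hue conversion (components in 0..1, hue in degrees)
    // with the max == min (gray) case guarded - the division by zero
    // described above.
    public static double HueFromRgb(double r, double g, double b)
    {
        double max = Math.Max(r, Math.Max(g, b));
        double min = Math.Min(r, Math.Min(g, b));
        if (max == min) return 0; // achromatic: hue is undefined, pick 0
        double h;
        if (max == r) h = 60 * (g - b) / (max - min);
        else if (max == g) h = 60 * (b - r) / (max - min) + 120;
        else h = 60 * (r - g) / (max - min) + 240;
        if (h < 0) h += 360; // reds with more blue than green wrap around
        return h;
    }
}
```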


    Both Sara Ford and I agree that this control deserves two thumbs up:


  • Kirill Osenkov

    Algorithms in C#: shortest path around a polygon (polyline routing)


    Suppose you have to build a road to connect two cities on different sides of a lake. How would you plan the road to make it as short as possible?

    To simplify the problem statement, a lake is sufficiently well modeled by a polygon, and the cities are just two points. The polygon does not have self-intersections and the endpoints are both outside the polygon. If you have Silverlight installed, you can use drag and drop on the points below to experiment:

    Get Microsoft Silverlight

    Solution description

    A shortest path between two points is the segment that connects them. It’s clear that our route consists of segments (if any part of the path were a curve other than a segment, we could straighten it and get a shorter path). Moreover, those segments have their endpoints either at polygon vertices or at the start or end point. Again, if this were not the case, we could make the path shorter by routing via the nearest polygon vertex.

    Armed with this knowledge, let’s consider all possible segments connecting the start point, the end point and the polygon vertices, keeping only those that don’t intersect the polygon. Let’s then construct a graph out of these segments. Now we can use Dijkstra’s algorithm (or any other path-finding algorithm such as A*) to find the shortest route in the graph between the start and end points. Note how a shortest path problem essentially boils down to path finding in a graph - a graph is a very good representation for a lot of situations.

    From the implementation perspective, I used my dynamic geometry library and Silverlight to create a simple demo project that lets you drag the start and end points as well as polygon vertices. You can also drag the polygon and the plane itself. I also added rounded corners to the resulting path and made it avoid polygon vertices to make it look better.

    Here is the source code for the sample. Here’s the main algorithm. It defines a Graph data structure that provides the ShortestPath method, which is the actual implementation of Dijkstra’s algorithm. ConstructGraph takes care of adding to the graph all possible edges that do not intersect our polygon. SegmentIntersectsPolygon does what its name suggests.
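    The Graph/ShortestPath shape described above can be sketched like this (an illustrative minimal Dijkstra over integer node ids, not the actual Live Geometry source):

```csharp
using System;
using System.Collections.Generic;

class Graph
{
    // adjacency list: node -> (neighbor, edge length) pairs
    Dictionary<int, List<KeyValuePair<int, double>>> edges =
        new Dictionary<int, List<KeyValuePair<int, double>>>();

    public void AddEdge(int a, int b, double weight)
    {
        GetEdges(a).Add(new KeyValuePair<int, double>(b, weight));
        GetEdges(b).Add(new KeyValuePair<int, double>(a, weight));
    }

    List<KeyValuePair<int, double>> GetEdges(int node)
    {
        List<KeyValuePair<int, double>> list;
        if (!edges.TryGetValue(node, out list))
        {
            list = new List<KeyValuePair<int, double>>();
            edges[node] = list;
        }
        return list;
    }

    // Dijkstra: returns the length of the shortest path, or infinity
    // if end is unreachable from start.
    public double ShortestPath(int start, int end)
    {
        var distance = new Dictionary<int, double> { { start, 0.0 } };
        var visited = new HashSet<int>();
        while (true)
        {
            // pick the unvisited node with the smallest tentative distance
            int current = -1;
            double best = double.PositiveInfinity;
            foreach (var pair in distance)
                if (!visited.Contains(pair.Key) && pair.Value < best)
                {
                    current = pair.Key;
                    best = pair.Value;
                }
            if (current == -1) return double.PositiveInfinity;
            if (current == end) return best;
            visited.Add(current);
            foreach (var edge in GetEdges(current))
            {
                double candidate = best + edge.Value;
                double old;
                if (!distance.TryGetValue(edge.Key, out old) || candidate < old)
                    distance[edge.Key] = candidate;
            }
        }
    }
}
```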

    I hope to post more about polygon routing in the future and do let me know if you have any questions.

  • Kirill Osenkov

    Seattle CodeCamp


    I attended Seattle CodeCamp this past weekend. I guess one of the advantages of living in Redmond is that all kinds of interesting geek events happen within a 10-minute drive of my home.

    I liked CodeCamp a lot; all of the presentations I attended were interesting. Attendance ranged from 7-8 people in a talk to 50-60. Here are the highlights of the sessions I had a chance to attend.

    Day 1

    • Reflector and Friends: An overview of Lutz Roeder's .Net Reflector and its add-ins - a nice overview by Jason Haley, of the reflector itself as well as its many plugins.
    • Scala - I'd never taken a look at Scala before and I liked this presentation a lot. The speaker, Ted Neward, is a gifted presenter - I can't remember enjoying a technical presentation so much in recent times. Besides, Ted is very competent and has incredibly broad interests. I don't know how he managed to squeeze a pretty decent coverage of Scala features into one hour. Scala itself is an interesting language. I think Scala is to Java what F# and Nemerle are to C#. I saw especially many similarities with Nemerle, ranging from being a strongly typed, functional, OO language to things like the return value of a function being the last expression evaluated (no need to type the return keyword). Also, it has Traits/Mixins (something I've always been interested in), return type covariance (yummy!), and a whole lot of other goodies.
    • F# - the same speaker, Ted Neward, also gave a great overview of F#. The internet is full of details about F#, so I won't try to repeat them here :) Interestingly, the F# talk was highly popular - the room was full (about 50 people).
    • Then there was also a nice general talk about Silverlight and its place as a technology. Made me do a lot of philosophical thinking about browsers as a phenomenon.
    • And last, but not least, Walt Ritscher gave an excellent, deeply technical talk about WPF data binding. It is such a pleasure to listen to a presentation by a clear thinker who has a deep understanding of the subject matter. It all just makes sense and seems so natural. What else can I say - I love WPF and really admire its architecture. Finally Microsoft was the first to build something that the world has never seen before - and implemented it the right way. The more I learn about WPF, the more I say to myself: "yes, yes, that's how it's supposed to be". The talk was densely attended and we all stayed till 6:30 pm (Saturday evening!) although the official end was at 6 pm.

    Day 2

    • LINQ Overview - Charlie Calvert gave a great overview of LINQ and different providers. It went very well and was as always pleasant to watch. Although LINQ gets really a lot of traction on the internet nowadays, Charlie managed to give a fresh, clear and inspiring presentation. I have to say, LINQ is a revolution. I'm proud to be a part of the team who built it (although they built it before I joined so I can't take credit for it :)
    • Static Analysis - this talk was basically what I came for. Wesner Moise was demoing NStatic, software he is building - basically a bug finder that heavily employs artificial intelligence methods to, say, catch NullReferenceExceptions at compile time. I deeply share Wesner's vision that tools need to have a deeper understanding of source code. For example, NStatic uses algebraic methods to infer semantic information about possible variable ranges (it can infer and prove that a variable will have a given value or fall within a given range at runtime). But it's really difficult to explain without actually listening to the presentation. I was very impressed by what NStatic can do and look forward to playing with the released version.
      For me, meeting Wesner and listening to his ideas and NStatic was especially important for other reasons too. In 2004, when I started working on a structured editor, I was inspired by Wesner's post Whidbey May Miss the Next Coding Revolution. Since then I've been reading his blog and have gained a lot of appreciation for his creativity and talent. I myself don't have any significant impact whatsoever (just gathering pebbles on the beach of the ocean of knowledge :), but I have a strong belief that there are a number of people on the planet (Charles Simonyi, Jonathan Edwards, Wesner Moise, Rob Grzywinski, Sergey Dmitriev, Maxim Kizub and some others) who are searching in the right direction. They will know what I'm talking about :) But I digress, pardon me :)
    • LINQ to XML - it's a pity I missed it. Erick Thompson was presenting XLinq, a new great API to work with XML from managed languages. I really like the XLinq API usability.
    • VSX: Extend Your Visual Studio Development Experience - this was the last talk I attended, where the VSX team was demoing considerable improvements to the Visual Studio extensibility story. It is much easier to extend Visual Studio now than it was before. You can use managed code to achieve your goals. It is still not perfect, but it is good to know that a great team is striving towards improving the Visual Studio extensibility experience.
  • Kirill Osenkov

    How I started to really understand generics


    First off, a little explanation for the post title: not that I didn't understand generics before. I'd had a pretty good understanding of how they work, how they are implemented in the CLR, what facilities are provided by the compiler, etc.

    It's just that recently I started seeing some patterns and analogies that I hadn't seen before or just didn't pay enough attention to. This is a little fuzzy feeling but I'll still try and describe these insights, hopefully the reader will understand what I mean. These insights have helped me to think more clearly and have a better understanding of how to use generics in a richer way.

    For a start, let's consider a method called Process that processes command line arguments and modifies the argument list by removing the arguments that it could process:

    public ArgumentList Process(ArgumentList source)

    This design is not very good because it mutates the source argument. I read somewhere that a method should either be pure and return something, or mutate state and return void, but not both at the same time. I think it was in Martin Fowler's Refactoring, but I'm not sure.

    But that's not the thing I wanted to talk about today. This design has another serious flaw: it essentially truncates its return type to be ArgumentList, so by calling this method we basically "seal" the output type to be ArgumentList:

    CustomArgumentList customList = new CustomArgumentList();
    ArgumentList processed = Process(customList);

    Even if we originally had a CustomArgumentList, once we pass it into our method, it "truncates" the type and all we have left on the output is the basic ArgumentList. We lose some type information and cannot enjoy the benefits of static typing and polymorphism at its fullest.

    So what have generics to do with all this? Well, generics allow us to "unseal" the return type of our method and to preserve the full static type information available:

    public T Process<T>(T source)
        where T : ArgumentList

    This way we can call Process on our CustomArgumentList and receive a CustomArgumentList back:

    CustomArgumentList customList = new CustomArgumentList();
    CustomArgumentList processed = Process(customList);

    See, type information is not sealed anymore and can freely "flow" from the input type to the output type. Generics enabled us to use polymorphism and preserve full static typing where it wasn't possible before. It might be obvious for many of you, but it was a wow-effect discovery for me. Note how we used type inference to make a call to Process<T> look exactly like a call to Process.

    Another observation here is that when you're talking about covariance and contravariance of generics, you can divide type usages into two groups: "in" and "out". Thus, we can have return type covariance and input parameter contravariance. Let's notice how "out" type usages essentially truncate the type information and limit ("seal") it to one base type. Generics are there to "unseal" this limitation and to allow for derived types to be returned without losing type information.

    I hope this wasn't too fuzzy and handwavy. It's just these little discoveries make me very excited and I'd like to share the way I see these things.

    Let's consider another insight: what's the difference between:

    1. class MyType<T> : IEquatable<T> and
    2. class MyType<T> where T: IEquatable<T>

    Update: I'd like to explain in a little more detail what's going on here. In the first line, we're declaring that the type MyType<T> implements the IEquatable<T> interface (complies with the IEquatable<T> contract). Simply put, MyType<T> is IEquatable<T> (can be compared with objects of type T). The <T> part reads "of T", which means MyType has a method Equals that accepts a parameter of type T. Hence, the first line imposes the constraint on the type MyType<T> itself.

    The second line, by contrast, imposes the constraint on the type parameter, not on the type itself: the type argument you specify must be a type with an Equals method that accepts a parameter of type T (objects of type T can be compared with other objects of type T). I would imagine the second line is useful more often than the first.

    Also, another line is possible:

    class MyType<T> : IEquatable<MyType<T>>

    This line would mean that MyType can be compared with itself (and, independently of this fact, use some other type T).
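    A minimal sketch contrasting the three declarations (the class names are hypothetical, invented for illustration):

    ```csharp
    using System;

    // 1. The *type itself* promises an Equals(T) method.
    class Comparable1<T> : IEquatable<T>
    {
        public bool Equals(T other) { return false; /* some comparison */ }
    }

    // 2. The *type parameter* must be self-comparable; the class can then
    //    rely on item.Equals(other) being available.
    class Comparable2<T> where T : IEquatable<T>
    {
        public bool AreEqual(T item, T other) { return item.Equals(other); }
    }

    // 3. The type can be compared with itself (independently of T).
    class Comparable3<T> : IEquatable<Comparable3<T>>
    {
        public bool Equals(Comparable3<T> other) { return ReferenceEquals(this, other); }
    }
    ```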

    Thinking about this difference made another aspect of generics clearer for me: with generics, there are always at least two different types involved: the actual type and the generic type parameters.

    Armed with this wisdom, let's move to a third mental exercise (again, it might all be simple and obvious for some readers, in this case I apologize and am very honored to have such readers). Is the following thing valid?

    class MyGenericType<T> : T

    It turns out it's not valid - we cannot inherit from a generic type parameter (Update: in the original post I wrote that this would lead to multiple inheritance - that's not true). This illustrates my previous points - with generics, there are always at least two different types involved, and we shouldn't mix them arbitrarily.

    Finally, to at least partially save this post from being a complete theoretical disaster, I'd like to share some code I wrote recently. Suppose you're implementing the Command design pattern and have various concrete commands: CutCommand, CopyCommand, CloseAllDocumentsCommand, etc. Now the requirement is to support unified exception throwing and catching functionality - every command should throw a generic exception InvalidOperationException<T>, where T is the type of the command. You might find this weird (who needs generic exceptions?) but in my real project at work we have some very clever exception classification logic, which requires that each exception have a different type. Basically, each exception should know what type threw it. Based on this strongly typed exception source information, we do some magic to greatly simplify logging and classify exceptions into buckets based on their root cause. Anyway, here's the source code:

    class Command
    {
        public virtual void Execute() { }
    }

    class InvalidOperationException<T> : InvalidOperationException
        where T : Command
    {
        public InvalidOperationException(string message) : base(message) { }

        // some specific information about
        // the command type T that threw this exception
    }

    static class CommandExtensions
    {
        public static void ThrowInvalidOperationException<TCommand>(
            this TCommand command, string message)
            where TCommand : Command
        {
            throw new InvalidOperationException<TCommand>(message);
        }
    }

    class CopyCommand : Command
    {
        public override void Execute()
        {
            // after something went wrong:
            this.ThrowInvalidOperationException("Something went wrong");
        }
    }

    class CutCommand : Command
    {
        public override void Execute()
        {
            // after something went wrong:
            this.ThrowInvalidOperationException("Something else went wrong");
        }
    }

    Note how two seemingly identical calls to ThrowInvalidOperationException will, in fact, throw different exceptions: the call in CopyCommand throws an InvalidOperationException<CopyCommand>, while the same call in CutCommand throws an InvalidOperationException<CutCommand>.

    Here (attention!) we infer the type of the command from the type of the "this" reference. If you try calling the method without the "this." part, the compiler will fail to resolve the call, because an extension method can only be invoked through an instance.
    So this is a situation where extension methods and generic type inference play nicely together to enable what I think is a very interesting trick. We preserve type information all the way from where an exception is thrown, "remember" the type that threw the exception, and still have full type information available when we catch and process it. A nice example of how generics help type information "flow" from one part of your code to other. This is also in line with my first example where we see how type information "flows" from input to output parameter of a method.
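    On the catching side, the command type is visible right in the catch clause. A small sketch, assuming the Command and InvalidOperationException<T> classes shown above:

    ```csharp
    using System;

    class CatchDemo
    {
        static void Main()
        {
            try
            {
                new CopyCommand().Execute();
            }
            catch (InvalidOperationException<CopyCommand> ex)
            {
                // The catch clause itself encodes which command type failed.
                Console.WriteLine("CopyCommand failed: " + ex.Message);
            }
        }
    }
    ```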

    Note: for those of you who spent some time with Prolog, don't you have a déjà vu of how Prolog allows values to "flow" between predicates? In Prolog every parameter to a predicate (method) can be "in" or "out" - input or output. Prolog infers which parameter is which depending on its usage elsewhere and inside the predicate body. I don't know if this analogy is valid at all, but I feel that C# compiler in a certain way does its type inference in a similar manner.

  • Kirill Osenkov

    Algorithms in C#: Connected Component Labeling


    I have to admit, I'm not very good at interviews. For some reason my mind isn't trained well for sorting linked lists or balancing binary trees on the whiteboard. I have no problem designing 600+ type hierarchies and building complex object-oriented frameworks, but typical interview questions were never an area where I was shining. Maybe it's because I like to think slowly and take my time, maybe for some other reasons :) Anyway, just out of the blue I decided to implement an algorithm in C# with the goal to improve my algorithm skills and to enjoy the power and expressiveness of my most favorite language and platform.

    As it usually goes, I first sat down and spent three hours coding the whole thing, and after that I did some online research, which revealed the algorithm's official name:

    Connected component labeling

    The goal is, given a 2D-matrix of colors, to find all adjacent areas of the same color, similar to the flood-fill algorithm used in Paint.

    Standard (existing) solution - Union-Find

    Online research revealed that there is a standard algorithm for this that heavily employs Union-Find. The union-find approach starts with a disjoint matrix, where every element is its own set, and then iteratively joins (unions) neighboring sets. The secret of Union-Find is twofold:

    • Extremely fast (constant-time) union of two sets
    • Extremely fast (almost constant-time) determining whether two points belong to the same set or not

    I remember when I was at the university, I even knew how to prove that the complexity of Union-Find is bounded by the inverse Ackermann function, which, believe me, grows very slowly. The important thing I carried away for the future is that for all practical purposes you can safely consider it O(1).
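    For reference, a minimal Union-Find sketch with path compression and union by rank, which is what gives the near-constant amortized time per operation (this is not the code from the sample, just a generic illustration):

    ```csharp
    class UnionFind
    {
        private readonly int[] parent;
        private readonly int[] rank;

        public UnionFind(int count)
        {
            parent = new int[count];
            rank = new int[count];
            for (int i = 0; i < count; i++) parent[i] = i; // each element is its own set
        }

        public int Find(int x)
        {
            if (parent[x] != x) parent[x] = Find(parent[x]); // path compression
            return parent[x];
        }

        public void Union(int a, int b)
        {
            int ra = Find(a), rb = Find(b);
            if (ra == rb) return;
            if (rank[ra] < rank[rb]) { int t = ra; ra = rb; rb = t; } // union by rank
            parent[rb] = ra;
            if (rank[ra] == rank[rb]) rank[ra]++;
        }
    }
    ```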

    My solution

    I was somewhat proud to have implemented a different approach. If you have seen this anywhere before, I wouldn't be surprised though, so please bear with me. The idea is to split the "image" into horizontal stripes ("spans") of the same color. Then scan all the rows of the image row-by-row, top-to-bottom, and for each row, for each span in a row, attach the span to the neighbor spans of the previous row. If you're "touching" more than one existing component, join them into one. If you're not touching any pixels of the same color, create a new component. This way, we split all horizontal spans into three generations: 0, 1 and 2 (like the .NET Garbage Collector!). Generation 0 is the current row, Generation 1 is the row above it, and Generation 2 are all the rows above these two. As we finish processing a row, Generation 1 is added to Generation 2 and Generation 0 becomes Generation 1.
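    The row-by-row span idea can be sketched roughly like this (a hypothetical compressed reconstruction, not the published sample code; it counts 4-connected components of equal color):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Span { public int Start, End, Color, Label; }

    static class SpanLabeling
    {
        public static int CountComponents(int[,] image)
        {
            int rows = image.GetLength(0), cols = image.GetLength(1);
            var labels = new Dictionary<int, int>(); // label -> parent (tiny union-find)
            int nextLabel = 0;
            int Find(int l) { while (labels[l] != l) l = labels[l] = labels[labels[l]]; return l; }

            var previous = new List<Span>(); // Generation 1: the row above
            for (int r = 0; r < rows; r++)
            {
                var current = new List<Span>(); // Generation 0: the current row
                for (int c = 0; c < cols; )
                {
                    // Cut the row into maximal same-color spans.
                    int start = c, color = image[r, c];
                    while (c < cols && image[r, c] == color) c++;
                    var span = new Span { Start = start, End = c - 1, Color = color, Label = -1 };

                    // Attach the span to overlapping same-color spans of the previous row,
                    // joining their components if it touches more than one.
                    foreach (var above in previous)
                    {
                        if (above.Color == color && above.Start <= span.End && span.Start <= above.End)
                        {
                            if (span.Label < 0) span.Label = Find(above.Label);
                            else labels[Find(above.Label)] = Find(span.Label);
                        }
                    }

                    // Not touching anything of the same color: start a new component.
                    if (span.Label < 0) { span.Label = nextLabel; labels[nextLabel] = nextLabel; nextLabel++; }
                    current.Add(span);
                }
                previous = current; // Generation 0 becomes Generation 1
            }
            return Enumerable.Range(0, nextLabel).Select(Find).Distinct().Count();
        }
    }
    ```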

    MSDN Code Gallery

    I published the source code at http://code.msdn.microsoft.com/ConnectedComponents. MSDN Code Gallery is an awesome new website where you can publish your code samples and source code. I discovered that I'm not the only one to publish algorithms in C#: see for example, A* path finding.

    Performance challenge

    I'd be very curious to measure the performance of my current code. I know it isn't optimal, because there is at least one place where I can do considerably better. For the profiling wizards among you, I won't reveal where it is for now. For the tuning wizards: if my algorithm takes 100% of the time, how much can you shave off? I myself expect (but don't guarantee) that it can be made around 5-10% faster. Can anyone do better?

    Also, I'd be curious how my algorithm performs compared to the classical union-find implementation. Anyone interested in implementing it to see which does better? (I hope I won't regret posting this when someone shares an implementation that is 100 times faster than mine...)

    Cleaner code challenge

    I dare to hope that my code is more or less clean and more or less well-factored. However I learned one lesson in life - whenever I start thinking that I'm good, life immediately proves otherwise. That's why I welcome any improvement suggestions. I understand that on such Mickey Mouse-size projects it doesn't really matter that much if the code is clean or not, but anyway. The design could be better, the use of .NET and C# could be better, the distribution of responsibilities could be better, my OOP kung-fu could be better. I'd like to use little projects like this to learn to code better.

    Language challenge

    Care to implement it in F# or, say, Nemerle? This is your chance to show the world that C# is not the only .NET language! Of course, non-.NET languages are highly welcome as well. I'd personally be interested in, for example, Haskell, Lisp, Prolog, Scala, Nemerle, Boo, (Iron)Ruby, (Iron)Python, Smalltalk, Comega, SpecSharp, SML.NET and so on.

    Silverlight challenge

    Have you installed Silverlight 2 beta? Need an idea what to implement? ;-)

  • Kirill Osenkov

    Windows Explorer Preview Pane for .vb files


    I love using the Preview Pane in Windows Explorer to quickly preview file contents:


    Usually, to enable this preview for a file extension I just open regedit, go to HKEY_CLASSES_ROOT –> .xxx (where xxx is the extension you’re interested in), and add a string value called PerceivedType with the data “text”: http://blog.wpfwonderland.com/2011/01/15/customize-windows-7-preview-pane-for-xaml-files/
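    As a .reg file, the PerceivedType approach described above looks like this, for a hypothetical .xxx extension (back up the key before merging):

    ```reg
    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\.xxx]
    "PerceivedType"="text"
    ```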

    However for some reason it didn’t work for .vb files. After some web searching, one solution I found that worked for me is to add the preview handler GUID explicitly under a “shellex” key: http://www.howtogeek.com/howto/windows-vista/make-windows-vista-explorer-preview-pane-work-for-more-file-types/


    Windows Registry Editor Version 5.00

