• Kirill Osenkov

    Samples for the Undo Framework

    • 13 Comments

    I just added some samples for the Undo Framework. You can find the samples in the source code or download them from the project website.

    MinimalSample

    First is a simple Console App sample called MinimalSample. Here’s the full source code:

    using System;
    using GuiLabs.Undo;
    
    namespace MinimalSample
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Original color");
    
                SetConsoleColor(ConsoleColor.Green);
                Console.WriteLine("New color");
    
                actionManager.Undo();
                Console.WriteLine("Old color again");
    
                using (Transaction.Create(actionManager))
                {
                    SetConsoleColor(ConsoleColor.Red); // you never see Red
                    Console.WriteLine("Still didn't change to Red because of lazy evaluation");
                    SetConsoleColor(ConsoleColor.Blue);
                }
                Console.WriteLine("Changed two colors at once");
    
                actionManager.Undo();
                Console.WriteLine("Back to original");
    
                actionManager.Redo();
                Console.WriteLine("Blue again");
                Console.ReadKey();
            }
    
            static void SetConsoleColor(ConsoleColor color)
            {
                SetConsoleColorAction action = new SetConsoleColorAction(color);
                actionManager.RecordAction(action);
            }
    
            static ActionManager actionManager = new ActionManager();
        }
    
        class SetConsoleColorAction : AbstractAction
        {
            public SetConsoleColorAction(ConsoleColor newColor)
            {
                color = newColor;
            }
    
            ConsoleColor color;
            ConsoleColor oldColor;
    
            protected override void ExecuteCore()
            {
                oldColor = Console.ForegroundColor;
                Console.ForegroundColor = color;
            }
    
            protected override void UnExecuteCore()
            {
                Console.ForegroundColor = oldColor;
            }
        }
    }

    Here we define a new action called SetConsoleColorAction and override two abstract methods: ExecuteCore() and UnExecuteCore(). In ExecuteCore(), we change the console to the new color and back up the old one. In UnExecuteCore(), we roll back to the backed-up old color. We pass the action all the context information it needs (in our case, the desired new color) and rely on the action to back up the old color and store it internally.

    The philosophy is to store the smallest possible diff: avoid copying the entire world when you can save just the minimal delta between states.

    Next, pay attention to the SetConsoleColor method. It wraps creating the action and calling RecordAction on it. It helps to create an API that abstracts away the action instantiation so that it is transparent for your callers. You don’t want your callers to create actions themselves every time, you just want them to call a simple intuitive API. Also, if for whatever reason you’d like to change or remove the Undo handling in the future, you’re free to do so without breaking the clients.

    Finally, the source code in Main shows how you can intersperse your API calls with calls to Undo and Redo. It also shows using a transaction to group a set of actions into a single “multi-action” (Composite design pattern). You can call your API within the using statement, but the actions’ execution is delayed until you commit the transaction (at the end of the using block). That’s why you don’t see the console color changing to red in the middle of the block. If you undo a transaction, it will undo all the little actions inside it in reverse order.
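
    To make the composite idea concrete, here is a minimal sketch of what such a “multi-action” could look like. This is not the framework’s actual Transaction class, just an illustration built on the same AbstractAction base class used above and the framework’s IAction interface (it assumes using System.Collections.Generic and GuiLabs.Undo):

    // A sketch only -- not the framework's real Transaction implementation.
    class CompositeAction : AbstractAction
    {
        // Child actions recorded while the transaction is open are collected
        // here instead of being executed immediately.
        List<IAction> actions = new List<IAction>();

        public void Add(IAction action)
        {
            actions.Add(action);
        }

        // On commit, execute the children in the order they were recorded.
        protected override void ExecuteCore()
        {
            foreach (var action in actions)
            {
                action.Execute();
            }
        }

        // On undo, roll the children back in reverse order.
        protected override void UnExecuteCore()
        {
            for (int i = actions.Count - 1; i >= 0; i--)
            {
                actions[i].UnExecute();
            }
        }
    }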

    WinFormsSample

    The second sample is called WinFormsSample. It shows a Windows Forms window that lets you edit properties of a business object:

    image

    You can change the text of both name and age, and the values will be mapped to the business object. You can also click “Set Both Properties”, which illustrates transactions. Then you can click Undo and it will roll back your object to its previous state. The UI will update accordingly.

    There is a trick in the code to avoid infinite recursion: a textbox change updates the business object, which fires an event, which updates the textboxes, which would update the business object again, and so on. We use a boolean flag called “reentrancyGuard” that lets the TextChanged handlers do their work only if the textbox was modified by the user, not programmatically. If we update the textboxes as a result of a business object change, there is no need to update the business object again.
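
    Here is a rough sketch of that guard; the control, event handler and business object names (nameTextBox, customer, OnCustomerChanged) are hypothetical and may not match the sample’s code exactly:

    // Fields and handlers of the form class (a sketch, not the sample's exact code).
    bool reentrancyGuard;

    void nameTextBox_TextChanged(object sender, EventArgs e)
    {
        // Ignore changes that we made ourselves while refreshing the UI.
        if (reentrancyGuard)
        {
            return;
        }

        // The user edited the textbox: push the change into the business
        // object (which records an undoable action and raises a changed event).
        customer.SetName(nameTextBox.Text);
    }

    void OnCustomerChanged(object sender, EventArgs e)
    {
        // Refresh the textboxes programmatically without triggering
        // another round-trip back into the business object.
        reentrancyGuard = true;
        try
        {
            nameTextBox.Text = customer.Name;
        }
        finally
        {
            reentrancyGuard = false;
        }
    }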

    Note: If this was WPF, I would just use two-way data binding, but I wanted to keep the sample as simple as possible and use only basic concepts.

    Action merging

    Another thing worth mentioning that this sample demonstrates is action merging. As you type in the name in the textbox ‘J’, ‘o’, ‘e’, you don’t want three separate actions to be recorded, so that you don’t have to click undo three times. To enable this, an action can determine if it wants to be merged with the next incoming action. If the next incoming action is similar in type to the last action recorded in the buffer, they merge into a single action that has the original state of the first action and the final state of the new action. This feature is very useful for recording continuous user input such as mouse dragging, typing and other events incoming at a high rate that you want to record as just one change.
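
    As an illustration, here is roughly what a merging action could look like for the Name textbox. This is a sketch only: it assumes AbstractAction lets you override TryToMerge, that the manager discards an incoming action once it has been merged, and it uses a hypothetical Customer type in place of the sample’s business object:

    // A sketch of action merging -- not the sample's actual implementation.
    class SetNameAction : AbstractAction
    {
        public SetNameAction(Customer customer, string newName)
        {
            this.customer = customer;
            this.newName = newName;
            // Opt in to being merged with the previous action in the buffer.
            AllowToMergeWithPrevious = true;
        }

        Customer customer;
        string newName;
        string oldName;

        protected override void ExecuteCore()
        {
            oldName = customer.Name;
            customer.Name = newName;
        }

        protected override void UnExecuteCore()
        {
            customer.Name = oldName;
        }

        // Absorb the next action if it edits the same property of the same object:
        // keep our original oldName and take over the incoming action's new value.
        public override bool TryToMerge(IAction followingAction)
        {
            var next = followingAction as SetNameAction;
            if (next != null && next.customer == this.customer)
            {
                newName = next.newName;
                customer.Name = newName; // apply the latest value right away
                return true;
            }
            return false;
        }
    }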

    We also update the visual state of the Undo and Redo buttons (enabled or disabled) based on whether the ActionManager can currently Undo() or Redo(). We call the CanUndo() and CanRedo() APIs for this.
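
    In code, that boils down to something like the following (the button names are hypothetical):

    // Call this after every recorded action, undo or redo.
    void UpdateUndoRedoButtons()
    {
        undoButton.Enabled = actionManager.CanUndo();
        redoButton.Enabled = actionManager.CanRedo();
    }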

    Hopefully this has been helpful and do let me know if you have any questions.

  • Kirill Osenkov

    New CodePlex project: a simple Undo/Redo framework

    • 57 Comments

    I just created a new CodePlex project: http://undo.codeplex.com

    What

    It's a simple framework to add Undo/Redo functionality to your applications, based on the classical Command design pattern. It supports merging actions, nested transactions, delayed execution (execution on top-level transaction commit) and possible non-linear undo history (where you can have a choice of multiple actions to redo).

    The status of the project is Stable (released). I might add more stuff to it later, but right now it fully satisfies my needs. It's implemented in C# 3.0 (Visual Studio 2008) and I can build it for both desktop and Silverlight. The release has both binaries.

    Existing Undo/Redo implementations

    I do realize that my project is the reinvention of the wheel at its purest, existing implementations being most notably:

    However, I already have three projects that essentially share the exact same source code, so I decided it would be good to at least extract this code into a reusable component – perhaps not only I but someone else might find it useful too.

    It's open-source and on CodePlex, so I also have a chance of benefiting from it if someone contributes to it :)

    History

    It all started in 2003 when I first added Undo/Redo support to the application that I was developing at that time. I followed the classical Command design pattern, together with Composite (for nested transactions) and Strategy (for plugging various, possibly non-linear undo buffers).

    Then I needed Undo/Redo for my thesis, so I just took the source code and improved it a little bit. Then I started the Live Geometry project, took the same code and improved it there a little bit, fixing a couple of bugs. Now the mess is over, and I'm finally putting the code in one place :)

    A good example of where this framework is used is the Live Geometry project (http://livegeometry.codeplex.com). It defines several actions such as AddFigureAction, RemoveFigureAction, MoveAction and SetPropertyAction.

    Actions

    Every action encapsulates a change to your domain model. The process of preparing the action is explicitly separated from executing it. The execution of an action might come at a much later stage after it's been prepared and scheduled.

    Any action implements IAction and essentially provides two methods: one for actually doing the stuff, and another for undoing it.

    /// <summary>
    /// Encapsulates a user action (actually two actions: Do and Undo)
    /// Can be anything.
    /// You can give your implementation any information it needs to be able to
    /// execute and rollback what it needs.
    /// </summary>
    public interface IAction
    {
        /// <summary>
        /// Apply changes encapsulated by this object.
        /// </summary>
        void Execute();
    
        /// <summary>
        /// Undo changes made by a previous Execute call.
        /// </summary>
        void UnExecute();
    
        /// <summary>
        /// For most Actions, CanExecute is true when ExecuteCount = 0 (not yet executed)
        /// and false when ExecuteCount = 1 (already executed once)
        /// </summary>
        /// <returns>true if an encapsulated action can be applied</returns>
        bool CanExecute();
    
        /// <returns>true if an action was already executed and can be undone</returns>
        bool CanUnExecute();
    
        /// <summary>
        /// Attempts to take a new incoming action and instead of recording that one
        /// as a new action, just modify the current one so that its summary effect is 
        /// a combination of both.
        /// </summary>
        /// <param name="followingAction"></param>
        /// <returns>true if the action agreed to merge, false if we want the followingAction
        /// to be tracked separately</returns>
        bool TryToMerge(IAction followingAction);
    
        /// <summary>
        /// Defines if the action can be merged with the previous one in the Undo buffer
        /// This is useful for long chains of consecutive operations of the same type,
        /// e.g. dragging something or typing some text
        /// </summary>
        bool AllowToMergeWithPrevious { get; set; }
    }

    Both methods share the same data, which is required by the action implementation and supplied when you create an action instance.

    ActionManager

    Your domain model (business objects) will likely have an instance of ActionManager that keeps track of the undo/redo buffer and provides the RecordAction(IAction) method. This method adds an action to the buffer and executes it. And then you have ActionManager.Undo(), ActionManager.Redo(), CanUndo(), CanRedo() and some more stuff.

    As a rule, what works for me is to have two APIs: one that is public and lazy (i.e. it just creates an action and adds it to the buffer), and another that is internal and eager and does the actual work. The action implementation just calls into the eager API, while the public API stays lazy and creates actions transparently for the consumer.
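
    Here is a sketch of that split (not actual Live Geometry code; the Drawing class and its members are illustrative): the public Add is lazy and only records an action, while the internal AddCore/RemoveCore methods are eager and are called only from the action. It assumes using System.Collections.Generic and GuiLabs.Undo:

    public class Drawing
    {
        ActionManager actionManager = new ActionManager();
        List<IFigure> figures = new List<IFigure>();

        // Public and lazy: consumers call this; it only records an action.
        public void Add(IFigure figure)
        {
            actionManager.RecordAction(new AddFigureAction(this, figure));
        }

        // Internal and eager: does the actual work; only actions call these.
        internal void AddCore(IFigure figure)
        {
            figures.Add(figure);
        }

        internal void RemoveCore(IFigure figure)
        {
            figures.Remove(figure);
        }

        class AddFigureAction : AbstractAction
        {
            Drawing drawing;
            IFigure figure;

            public AddFigureAction(Drawing drawing, IFigure figure)
            {
                this.drawing = drawing;
                this.figure = figure;
            }

            protected override void ExecuteCore() { drawing.AddCore(figure); }
            protected override void UnExecuteCore() { drawing.RemoveCore(figure); }
        }
    }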

    History

    Right now I only have a SimpleHistory. Instead of having two stacks, I have a state machine, where Undo goes to the previous state and Redo goes to the next state, if available. Each graph edge stores an action (an implementation of IAction). As the current state transitions along a graph edge, IAction.Execute or UnExecute is called, depending on the direction in which we go (there is a logical "left" and "right" in this graph, which kind of represents "future" and "past").

     

    image

    It's possible for this linked list to become a tree, where you try something out (way1), don't like it, undo, try something else (way2), like it even less, undo, and choose to go back and redo way1. However this is not implemented yet.
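
    To make the idea more tangible, here is a minimal sketch of a purely linear history built this way. It is not the actual SimpleHistory code, just the general shape of it, using the framework's IAction interface:

    // One node per state; the edge leading into a state carries the action
    // that was executed to reach it.
    class HistoryState
    {
        public HistoryState PreviousState;   // toward the "past"
        public HistoryState NextState;       // toward the "future"
        public IAction ActionFromPrevious;   // the edge leading into this state
    }

    class LinearHistory
    {
        HistoryState current = new HistoryState();

        public void Append(IAction action)
        {
            // Executing a new action creates a new state reachable from the
            // current one (and discards any previous "redo" branch).
            var next = new HistoryState { PreviousState = current, ActionFromPrevious = action };
            current.NextState = next;
            action.Execute();
            current = next;
        }

        public bool CanUndo { get { return current.PreviousState != null; } }
        public bool CanRedo { get { return current.NextState != null; } }

        public void Undo()
        {
            // Walk one edge toward the past, un-executing the action on that edge.
            current.ActionFromPrevious.UnExecute();
            current = current.PreviousState;
        }

        public void Redo()
        {
            // Walk one edge toward the future, re-executing its action.
            current = current.NextState;
            current.ActionFromPrevious.Execute();
        }
    }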

    Transactions

    Transactions are groups of actions viewed as a single action (see Composite design pattern).

    Here's a typical usage of a transaction:

    public void Add(IEnumerable<IFigure> figures)
    {
        using (Transaction.Create(ActionManager))
        {
            figures.ForEach(Add);
        }
    }

    If an action is recorded while a transaction is open (inside the using statement), it will be added to the transaction and executed only when the top-level transaction commits. This effectively delays all the lazy public API calls in the using statement until the transaction commits. You can specify that the actions are not delayed, but executed immediately - there is a corresponding overload of Transaction.Create specifically for that purpose.

    Note that you can "misuse" this framework for purposes other than Undo/Redo: one prominent example is navigation with back/forward.

    Update: I just posted some samples for the Undo Framework: http://blogs.msdn.com/kirillosenkov/archive/2009/07/02/samples-for-the-undo-framework.aspx

  • Kirill Osenkov

    Visual Studio 2010 Beta1 + TFS + HTTPS (TF31001): The ServicePointManager does not support proxies with the https scheme.

    • 1 Comments

    This is just a little note to myself and others who might run into this. I was using Visual Studio 2010 and Team Foundation Client to access a CodePlex project over HTTPS (port 443), and got this error message:

    ---------------------------
    Microsoft Visual Studio
    ---------------------------
    Microsoft Visual Studio

    TF31001: Cannot connect to Team Foundation Server at tfs07.codeplex.com. The server returned the following error: The ServicePointManager does not support proxies with the https scheme.
    ---------------------------
    OK   Help  
    ---------------------------

    By the way, did you know that you can press Ctrl+C to copy the contents of a message box dialog to clipboard? (Well, at least in Visual Studio message boxes).

    Anyway, it turns out this is a known bug: https://connect.microsoft.com/VisualStudio/feedback/Workaround.aspx?FeedbackID=453677

    The workaround so far is to create a couple of string values in the registry:

    It seems this problem has to do with the way Visual Studio 2010 connects to your TFS server over HTTPS. The default value for “BypassProxyOnLocal” in Visual Studio 2008 was “False”, but it has been changed to “True” for Visual Studio 2010 Beta 1.

    You can fix this by adding the following registry keys and restarting Visual Studio 2010:

    You need to add a “RequestSettings” key at both of the following locations, containing a string value “BypassProxyOnLocal” set to “False”.

    32bit OS Key Locations:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\TeamFoundationServer\10.0\RequestSettings
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\TeamFoundation\RequestSettings

    64bit key locations:
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\TeamFoundationServer\10.0\RequestSettings
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\TeamFoundation\RequestSettings
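
    If you prefer to script the change instead of using regedit, a small C# sketch like the following should work (run it elevated; on a 64-bit OS substitute the Wow6432Node paths listed above):

    using Microsoft.Win32;

    class FixBypassProxyOnLocal
    {
        static void Main()
        {
            // 32-bit key locations; swap in the Wow6432Node paths on a 64-bit OS.
            string[] keys =
            {
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\TeamFoundationServer\10.0\RequestSettings",
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\TeamFoundation\RequestSettings"
            };

            foreach (var key in keys)
            {
                // Creates the RequestSettings key if needed and writes the string value.
                Registry.SetValue(key, "BypassProxyOnLocal", "False");
            }
        }
    }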

    How to: Change the BypassProxyOnLocal Configuration: http://msdn.microsoft.com/en-us/library/bb909716(loband).aspx

    Update: I just noticed Aaron Block has also blogged about this.

  • Kirill Osenkov

    VS Project, C++ and Editor team blogs

    • 0 Comments

    This is just a quick announcement about some Visual Studio team blogs that might be really worth checking out.

  • Kirill Osenkov

    Algorithms in C#: shortest path around a polygon (polyline routing)

    • 14 Comments

    Suppose you have to build a road to connect two cities on different sides of a lake. How would you plan the road to make it as short as possible?

    To simplify the problem statement, a lake is sufficiently well modeled by a polygon, and the cities are just two points. The polygon does not have self-intersections and the endpoints are both outside the polygon. If you have Silverlight installed, you can use drag and drop on the points below to experiment:


    Solution description

    The shortest path between two points is the segment that connects them. It’s clear that our route consists of segments (if a part of the path were a curve other than a segment, we could straighten it and get a shorter path). Moreover, those segments have their endpoints either at polygon vertices or at the start or end point. Again, if this were not the case, we would be able to make the path shorter by routing via the nearest polygon vertex.

    Armed with this knowledge, let’s consider all possible segments between the start point, the end point and the polygon vertices that don’t intersect the polygon, and construct a graph out of these segments. Now we can use Dijkstra’s algorithm (or any other path-finding algorithm such as A*) to find the shortest route in the graph between the start and end points. Note how the shortest-path problem essentially boils down to path finding in a graph, because a graph is a very good representation for a lot of situations.
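
    For reference, here is a compact sketch of that approach – it is not the actual Live Geometry implementation (linked below), and it assumes the vertices have already been numbered and that only segments that don’t intersect the polygon are added as edges:

    using System.Collections.Generic;
    using System.Linq;

    class Graph
    {
        struct Edge
        {
            public int To;          // index of the neighboring vertex
            public double Length;   // length of the connecting segment
        }

        // Adjacency list: vertex index -> outgoing edges.
        Dictionary<int, List<Edge>> edges = new Dictionary<int, List<Edge>>();

        // Call this only for segments that don't intersect the polygon.
        public void AddEdge(int a, int b, double length)
        {
            GetList(a).Add(new Edge { To = b, Length = length });
            GetList(b).Add(new Edge { To = a, Length = length });
        }

        List<Edge> GetList(int vertex)
        {
            List<Edge> list;
            if (!edges.TryGetValue(vertex, out list))
            {
                edges[vertex] = list = new List<Edge>();
            }
            return list;
        }

        // Classic Dijkstra; a simple linear scan picks the closest unvisited vertex.
        public List<int> ShortestPath(int start, int end)
        {
            var distance = new Dictionary<int, double> { { start, 0.0 } };
            var previous = new Dictionary<int, int>();
            var unvisited = new HashSet<int>(edges.Keys);

            while (unvisited.Count > 0)
            {
                var reachable = unvisited.Where(v => distance.ContainsKey(v)).ToList();
                if (reachable.Count == 0)
                {
                    break; // the end point is not reachable from the start
                }

                int current = reachable.OrderBy(v => distance[v]).First();
                unvisited.Remove(current);
                if (current == end)
                {
                    break;
                }

                // Relax all edges going out of the current vertex.
                foreach (var edge in edges[current])
                {
                    double candidate = distance[current] + edge.Length;
                    if (!distance.ContainsKey(edge.To) || candidate < distance[edge.To])
                    {
                        distance[edge.To] = candidate;
                        previous[edge.To] = current;
                    }
                }
            }

            // Walk back from the end point to reconstruct the route.
            var path = new List<int> { end };
            while (path[path.Count - 1] != start)
            {
                path.Add(previous[path[path.Count - 1]]);
            }
            path.Reverse();
            return path;
        }
    }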

    From the implementation perspective, I used my dynamic geometry library and Silverlight to create a simple demo project that lets you drag the start and end points as well as polygon vertices. You can also drag the polygon and the plane itself. I also added rounded corners to the resulting path and made it avoid polygon vertices to make it look better.

    Here is the source code for the sample. Here’s the main algorithm: it defines a Graph data structure that provides the ShortestPath method, which is the actual implementation of Dijkstra’s algorithm. ConstructGraph takes care of adding to the graph all possible edges that do not intersect our polygon. SegmentIntersectsPolygon does what its name suggests.

    I hope to post more about polygon routing in the future and do let me know if you have any questions.

  • Kirill Osenkov

    yield return and Continuation-Passing Style

    • 5 Comments

    Someone was recently porting some C# code to VB and had a question about how to convert the C# yield return iterator methods to VB (VB currently doesn’t support iterators).

    There were a lot of replies like “use Reflector on a compiled binary and copy-paste the generated state machine code”. The problem with the Reflector approach is that you go one step down the abstraction ladder and lose the high-level intent expressed in the original code. The resulting code will surely be harder to read and maintain.

    Surprisingly, no one mentioned CPS. But before applying continuation-passing style, let’s first look at the nature of yield return. It’s essentially a producer-consumer model: the producer is a state machine whose transitions are triggered by MoveNext calls and whose current state is saved in the Current property. On the consumer side there is almost always a foreach loop with some logic in the body, and this logic only requests the next element (and triggers a state machine transition) after it’s done processing the current element.

    It turns out we can preserve the algorithm encoded in the iterator while avoiding yield return, and thus avoid having the compiler generate the state machine code for us. To achieve this, we pass the logic that used to be in the consumer’s foreach loop (a continuation) directly into the iterator.

    Here’s an example with yield return that we want to convert:

    using System;
    using System.Collections.Generic;
    
    class Node<T>
    {
        public Node<T> Left { get; set; }
        public T Value { get; set; }
        public Node<T> Right { get; set; }
    
        public IEnumerable<T> Traverse()
        {
            if (Left != null)
            {
                foreach (var item in Left.Traverse())
                {
                    yield return item;
                }
            }
            yield return Value;
            if (Right != null)
            {
                foreach (var item in Right.Traverse())
                {
                    yield return item;
                }
            }
        }
    }
    
    class Program
    {
        static void Main(string[] args)
        {
            Node<int> tree = new Node<int>()
            {
                Left = new Node<int>()
                {
                    Value = 1,
                    Right = new Node<int>()
                    {
                        Value = 2
                    }
                },
                Value = 3,
                Right = new Node<int>()
                {
                    Value = 4
                }
            };
    
            foreach (var item in tree.Traverse())
            {
                Console.WriteLine(item);
            }
        }
    }

    Now, the trick is to pass the “continuation”, which is the code that processes the results, directly into the iterator method (using first-class functions AKA delegates):

    public IEnumerable<T> Traverse()
    {
        List<T> result = new List<T>();
        TraverseInner(this, result.Add);
        return result;
    }
    
    void TraverseInner(Node<T> root, Action<T> collector)
    {
        if (root == null) return;
        TraverseInner(root.Left, collector);
        collector(root.Value);
        TraverseInner(root.Right, collector);
    }

    Note how we created an internal helper that actually does the traversing, and how the logic of the traversal is even shorter than in the original method. We don’t use yield return and still maintain a high level of abstraction. Where a yield return used to be, there is now a call to the collector delegate or to the helper method. Otherwise, the control flow is the same.

    The downside of this approach, though, is that we lose laziness: once requested, we eagerly calculate all the results and return them at once. This is the price we pay for losing the state machine that could store intermediate results for us.

    If we remove the limitation of having to return IEnumerable<T>, we can directly consume the helper method without having to write a foreach loop:

    TraverseInner(tree, Console.WriteLine);

    Here we’re passing the “continuation” (which is the Console.WriteLine method) directly inside the iterator. Note how the consumer side became shorter as well because we don’t have to write a foreach loop.

    Note: a while ago I blogged about yield foreach which would allow to get rid of foreach statements in the iterator scenario as well.

    Note 2: I’m guessing it’s possible to get rid of yield return and still keep the laziness, I just need to do more homework on Push LINQ and similar to find a nice solution to this.

  • Kirill Osenkov

    Some resources about Visual Studio Extensibility

    • 1 Comments

    A couple of readers have posted questions about Visual Studio Extensibility, DTE, your own packages, commands, experimental hive etc. To be frank, I’m not an expert in this field, so instead of trying to answer these questions, I will point to some better resources for VSX (VS Extensibility):

  • Kirill Osenkov

    Should Call Hierarchy display compiler-generated member calls?

    • 3 Comments

    [A quick reminder, Call Hierarchy is a new IDE feature in VS 2010]

    In the comments to the previous post, a reader is asking:

    But why are the query operator method calls ignored by the Call Hierarchy?

    var r = from i in new[] { 1, 2, 3 }
            where i > 1
            select i;

    The code snippet above calls Enumerable.Where and Enumerable.Select, and I reckon they should go into Call Hierarchy, which is not the case in the current beta. Any hint on this?

    This is actually a good question. We thought about this issue in the feature design meetings and the reasoning behind the decision not to show the LINQ methods was as follows. The foreach statement also calls GetEnumerator and MoveNext, lock calls Monitor.TryEnter (or something similar??), not to mention other places where compiler-generated code calls members (e.g. the yield return state machine, lambdas, etc). The question is simple: where do we stop? In other words, how deep do we go down the abstraction ladder?

    This also applies to things like calling += on events (which actually calls Delegate.Combine), invoking a delegate (which calls its Invoke method), and so on. We decided that we will only show a member call in Call Hierarchy if the compiler actually sees the member’s symbol in the source code. Find All References also follows this pattern: if you look for Point, you will get two references in Point p = new Point(); and only one reference in var p = new Point(); – since the symbol doesn’t show up in your source code, we don’t mention it. This might actually be misleading; similarly, when you’re looking for calls to a method, we won’t show you places where you create a delegate that points to that method (or method group).

    Do you think this reasoning is OK or would you like us to change the feature’s behavior? If yes, how exactly should it behave? Also keep in mind that changing this behavior at this point will be pretty costly (i.e. I will have to retest everything!). Not even to mention that our dev will have to change the implementation :)

    Thanks!

  • Kirill Osenkov

    Visual Studio 2010 Beta 1 is out!

    • 2 Comments

    In case you missed it (which I don’t believe), today we released Visual Studio 2010 Beta 1 to MSDN subscribers. On Wednesday, it will become available for everyone else to download.

    The build is much more stable and fleshed out than the early CTP preview – I’d say the functionality of Visual Studio 2010 is at least 95% there already. Our next top priority is fixing performance and making VS fast, sleek and snappy. But it already has its new WPF “skin”, the brand-new WPF editor, the historical debugger, the architecture explorer, and of course the new C# 4.0 and VB 10 language and IDE features. Also, for the first time ever, F# is in the box just like C# and VB.

    In any case – go try it out and tell us what you think. If you have anything to say about the new C# Call Hierarchy toolwindow – let me know.

  • Kirill Osenkov

    A simple sample for C# 4.0 ‘dynamic’ feature

    • 11 Comments

    Earlier I posted some code to start Visual Studio using C# 3.0:

    using System;
    using EnvDTE;
    
    class Program
    {
        static void Main(string[] args)
        {
            Type visualStudioType = Type.GetTypeFromProgID("VisualStudio.DTE.9.0");
            DTE dte = Activator.CreateInstance(visualStudioType) as DTE;
            dte.MainWindow.Visible = true;
        }
    }

    Now here’s the code that does the same in C# 4.0:

    using System;
    
    class Program
    {
        static void Main(string[] args)
        {
            Type visualStudioType = Type.GetTypeFromProgID("VisualStudio.DTE.10.0");
            dynamic dte = Activator.CreateInstance(visualStudioType);
            dte.MainWindow.Visible = true;
        }
    }

    At first, it looks the same, but:

    1. Referencing EnvDTE.dll is not required anymore – you don’t need the using EnvDTE directive either – you don’t need to reference anything!
    2. You declare the ‘dte’ variable to be weakly typed with the new dynamic contextual keyword
    3. You don’t have to cast the instance returned by Activator.CreateInstance
    4. You don’t get IntelliSense as you type in the last line
    5. The calls to dte are resolved and dispatched at runtime

    It’s a trade-off, but I still view dynamic as yet another useful tool in the rich C# programmer’s toolbox to choose from.

  • Kirill Osenkov

    Jon Skeet: Planning C# In Depth 2

    • 8 Comments

    Jon is asking advice about how to shape the second edition of his book “C# In Depth”. Jon, I like your current suggestions (in blue) about making tweaks to existing content and adding a new section on C# 4.0.

    One thing I’d also love to see is a summary of the hypothetical C# vNext features that the community has been discussing in the blogosphere and on the forums. I think Jon is the right person to come up with a great summary of those features, the rationale behind them, and possible implementation (syntax).

    Specifically, I’m talking about:

    1. Immutability (e.g. public readonly class) and related matters (initializing auto-props, using tuples as an implementation for anonymous types, making object initializers immutable, tuples in the language etc.). Also statically verifying immutability etc.
    2. yield foreach and allowing anonymous methods to be iterators (like a VB prototype that Paul Vick blogged about)
    3. return type covariance
    4. duck typing, making a type implement an interface without touching its source, and techniques for doing so (TransparentProxy, Reflection.Emit etc.)
    5. member-level var (hopefully we won’t have this)
    6. notnull, code contracts in the language
    7. traits/mixins (e.g. redirecting an interface’s implementation to a field/property that implements that interface)
    8. metaprogramming (syntactic/semantic macros, DSLs in the language, infoof, compile-time reflection, IL rewriting (PostSharp, CCI, Mono.Cecil), AOP, transformations, code generation)
    9. parallel programming, concurrency, etc.
    10. Am I missing something?
  • Kirill Osenkov

    Common Compiler Infrastructure released on CodePlex

    • 2 Comments

    Great news – Herman Venter from Microsoft Research has released CCI (Common Compiler Infrastructure) on CodePlex: http://ccimetadata.codeplex.com. See Herman’s blog post here.

    The Microsoft Research Common Compiler Infrastructure (CCI) is a set of components (libraries) that provide some of the functionality that compilers and related programming tools tend to have in common.

    The metadata components provide functionality for reading, writing and manipulating Microsoft Common Language Runtime (CLR) assemblies and debug files. The functionality provided by these components subsumes the functionality provided by System.Reflection and System.Reflection.Emit.

    It’s interesting to know that FxCop is actually powered by CCI, so if you wondered how FxCop analyzes your assemblies, you can now peek into the source code and even build your own tools that leverage the CCI framework.

    Update:

    Besides the metadata rewriting engine and the IL/PDB/PE framework, CCI also provides ASTs and syntax trees to model source code, IL <-> CodeModel round-tripping, a C# pretty-printer, as well as a SmallBasic compiler as a sample.

    See here:

  • Kirill Osenkov

    Remote Desktop: /span across multiple monitors

    • 24 Comments

    I spent some time searching the web about Remote Desktop, fullscreen and multiple monitors, so I decided to write down my findings to avoid having to search for them again.

    /span for multiple monitors

    If you pass /span to mstsc.exe, the target session’s desktop will become a huge rectangle equal to the combined area of your physical monitors. This way the remote desktop window will fill all of your screens. The downside of this approach is that both screens are part of one desktop on the remote machine, so if you maximize a window there, it will span all of your monitors. Also, a centered dialog will show up right on the border between your monitors. There is software on the web to work around that, but I’m fine with keeping my windows restored and sizing them myself. Tile Vertically also works just fine in this case.

    Saving the /span option in the .rdp file

    There is a hidden option that isn’t mentioned in the description of the .rdp format:

    span monitors:i:1

    Just add it at the bottom of the file.

    Saving the /f (fullscreen) option in the .rdp file

    screen mode id:i:2

    (By default it’s screen mode id:i:1, which is windowed).

    Sources

  • Kirill Osenkov

    DLR Hosting in Silverlight

    • 9 Comments

    As you probably know, DLR is the dynamic language runtime that provides a common platform for dynamic languages and scripting in .NET. Their two main languages, IronPython and IronRuby, are available to develop your programs and also to be hosted in your programs. DLR hosting means that the users of your program can use scripting in any DLR language, for example to automate your program or to programmatically access the domain model of your application.

    I was thinking about adding a capability to plot function graphs like y = cos(x) to my Live Geometry app, so I thought of hosting the DLR in Silverlight to compile and evaluate mathematical expressions.

    Fortunately, DLR readily supports this scenario. And fortunately, Tomáš Matoušek, a developer on our IronRuby Team (part of Visual Studio Managed Languages), sits right around the corner from my office and was kind enough to provide great help when I had questions. Big thanks and kudos to Tomáš!

    So, to host the DLR in Silverlight, here's the file that I added to my project (you can view my full source code here: http://dynamicgeometry.codeplex.com/SourceControl/ListDownloadableCommits.aspx).

    All you need to do is to set up a script runtime, get your language's engine (here we use Python), create a scope for your variables and you're ready to evaluate, execute and compile!

    using System;
    using DynamicGeometry;
    using IronPython.Hosting;
    using Microsoft.Scripting.Hosting;
    using Microsoft.Scripting.Silverlight;
    
    namespace SilverlightDG
    {
        public class DLR : ExpressionCompiler
        {
            ScriptRuntime runtime;
            ScriptEngine engine;
            ScriptScope scope;
    
            public DLR()
            {
                var setup = new ScriptRuntimeSetup();
                setup.HostType = typeof(BrowserScriptHost);
                setup.LanguageSetups.Add(Python.CreateLanguageSetup(null));
    
                runtime = new ScriptRuntime(setup);
                engine = runtime.GetEngine("Python");
                scope = engine.CreateScope();
    
                engine.ImportModule("math");
            }
    
            public override Func<double, double> Compile(string expression)
            {
                var source = engine.CreateScriptSourceFromString(
                    string.Format(@"
    from math import *
    
    def y(x):
        return {0}
    
    func = y
    ", expression),
                    Microsoft.Scripting.SourceCodeKind.File);
    
                CompiledCode code = source.Compile();
                code.Execute(scope);
                var func = scope.GetVariable<Func<double, double>>("func");
                return func;
            }
        }
    }

    ExpressionCompiler is my own abstract class that I defined in the DynamicGeometry assembly:

    using System;
    
    namespace DynamicGeometry
    {
        public abstract class ExpressionCompiler
        {
            public abstract Func<double, double> Compile(string expression);
            public static ExpressionCompiler Singleton { get; set; }
        }
    }

    As you see, the service that I need from the DLR is to implement the Compile method, that compiles an expression down to a callable function delegate, which I can then use to evaluate a function at a point.

    Finally, just register the DLR as an implementation for my ExpressionCompiler:

    ExpressionCompiler.Singleton = new DLR();

    And we're ready to go.

    Let's go back to the DLR.cs and I'll comment a little more on what's going on. Essentially, to host the DLR you'd need 3 things:

    ScriptRuntime runtime;
    ScriptEngine engine;
    ScriptScope scope;

    Runtime is your "world". You load a language-specific engine (like PythonEngine) into the runtime. To create a runtime with a language, one way is to use:

    var setup = new ScriptRuntimeSetup();
    setup.HostType = typeof(BrowserScriptHost);
    setup.LanguageSetups.Add(Python.CreateLanguageSetup(null));
    runtime = new ScriptRuntime(setup);

    This will work fine in Silverlight, because we use a browser-specific BrowserScriptHost, which does not use the file system. One problem that I had is that I was trying to directly call:

    runtime = Python.CreateRuntime();

    Which didn't work because it used the default script host (which tried to access the file system) and not the BrowserScriptHost. After you have the runtime, you can get the engine and create a scope in that engine:

    engine = runtime.GetEngine("Python");
    scope = engine.CreateScope();

    Now you're ready to do things like:

    var five = engine.Execute("2 + 3", scope);

    You can go up to the first code example to see how I declared a function in Python, and converted it to a C# callable Func<double, double> delegate.

    Finally, here's the working application (which you can also find at http://geometry.osenkov.com). Press the y = f(x) toolbar button, enter sin(x) and press the Plot button:

  • Kirill Osenkov

    Visual Studio 2010 Screencast: C# 4.0 Language + IDE + WPF Shell + Editor

    • 13 Comments

    It so happened that I recorded a quick 30-minute video (screencast) showing the new features in the language and the IDE – and I did all this on a recent internal build of Visual Studio 2010, which has the WPF UI enabled. The video is very basic; I don’t go into any details, it’s mainly a quick overview of what the features look like:


    You can also download or view the .wmv file here: http://guilabs.de/video/CSharp4.wmv

    Features covered:

    1. Language (0:00)
      1. Dynamic (0:30)
      2. Named and optional (3:20)
      3. Co/Contravariance (11:10)
      4. NoPIA, omit ref etc. (16:35)
    2. IDE (18:45)
      1. Call Hierarchy (18:50)
      2. Quick Symbol Search (23:00)
      3. Highlight References (25:30)
      4. Crash!! (26:15)
      5. Generate From Usage (26:50)
      6. fix aggressive IntelliSense (consume first, list filtering) (29:50)
  • Kirill Osenkov

    Kirill’s Whitespace Guidelines for C#

    • 16 Comments

    I don’t remember seeing any explicit guidelines on whitespace formatting for C# programs, however it seems that experienced C# developers all format their C# code files in a very similar fashion, as if there are some implicit but widely-accepted rules. In this post, I’ll try to formalize my own rules that I use intuitively when I format C# code. I’ll add more to it as I discover new stuff and correct things based on your feedback.

    No two consecutive empty lines

    Bad:

    static void Main(string[] args)
    {
        Main(null);
    }


    static void Foo()
    {
        Foo();
    }

    No empty line before a closing curly

    Bad:

            Main(null);

        }

    No empty line after an opening curly

    Bad:

    class Program
    {

        static void Main(string[] args)

    One empty line between same level type declarations

    namespace Animals
    {
        class Animal
        {
        }

        class Giraffe : Animal
        {
        }
    }

    One empty line between members of a type

    class Animal
    {
        public Animal()
        {
        }

        public void Eat(object food)
        {
        }

        public string Name { get; set; }
    }

    Whereas it’s OK to group single-line members:

    class Customer
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public string EMail { get; set; }

        public void Notify(string message)
        {
        }
    }

    However every multi-line member must be surrounded by an empty line unless it’s the first or the last member, in which case there shouldn’t be a line between the member and the curly brace.

    One empty line after #region and before #endregion

    Usually a #region should be treated as if it were the first construct of its contents (in this example, a type member):

    class Customer
    {
        #region Public properties

        public string Name { get; set; }
        public int Age { get; set; }
        public string EMail { get; set; }

        #endregion

        public void Notify(string message)
        {
        }
    }

    Within a #region, its contents should be separated from the #region/#endregion directives by a single empty line. Usually #regions contain type members or whole types, less often parts of a method body.

    I think these are the major rules that come into mind for now. If I remember more, I’ll update this post. Also, definitely feel free to contribute any corrections/additions and I’ll update the post too. Thanks!

  • Kirill Osenkov

    A common globalization bug

    • 1 Comments

    I’ve just found and fixed a globalization bug in our test infrastructure where a feature of our testcase management system (resetting a testcase to re-run on a lab machine) just wouldn’t work on a Russian OS. Fortunately, the call stack was easy to investigate: (sorry it’s in Russian - globalization, what can you do…)

    System.InvalidCastException: Приведение строки "2.0" к типу "Double" является недопустимым. ---> System.FormatException: Входная строка имела неверный формат.
       в Microsoft.VisualBasic.CompilerServices.Conversions.ParseDouble(String Value, NumberFormatInfo NumberFormat)
       в Microsoft.VisualBasic.CompilerServices.Conversions.ToDouble(String Value, NumberFormatInfo NumberFormat)
       --- Конец трассировки внутреннего стека исключений ---
       в Microsoft.VisualBasic.CompilerServices.Conversions.ToDouble(String Value, NumberFormatInfo NumberFormat)
       в Microsoft.VisualBasic.CompilerServices.Conversions.ToDouble(String Value)
       в XXXXXXXX.Utilities.DotNetFramework.IsDotNetFramework35HigherInstalled()
       в XXXXXXXX.Result.BulkResetResultsHelper(...

    This essentially says: cannot convert a string “2.0” to double. Here’s the problem line of code (VB):

    version = CDbl(numbers(0) & "." & numbers(1))

    and here’s a fix:

    version = System.Double.Parse(numbers(0) & "." & numbers(1), System.Globalization.CultureInfo.InvariantCulture)

    The original code made the incorrect assumption that the decimal separator in the current culture is the ‘.’ character. However, on German, Russian, Italian and some other OSs the default decimal separator is a ‘,’, not a ‘.’. By default, string operations use the current locale (and on such locales expect a comma as the decimal separator), so if you want to compose a string using a dot and convert it to a double, you have to use InvariantCulture, which uses a dot.
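
    Here is a quick C# illustration of the difference – a small sketch whose output depends on your machine’s regional settings; on a locale whose decimal separator is a comma, the current-culture parse typically fails or yields an unexpected value:

    using System;
    using System.Globalization;

    class Program
    {
        static void Main()
        {
            string text = "2.0";

            // Culture-sensitive: interpreted using the machine's regional settings.
            double value;
            bool parsed = double.TryParse(text, NumberStyles.Float,
                CultureInfo.CurrentCulture, out value);
            Console.WriteLine("Current culture:   " + (parsed ? value.ToString() : "failed"));

            // Culture-invariant: '.' is always the decimal separator.
            Console.WriteLine("Invariant culture: " +
                double.Parse(text, CultureInfo.InvariantCulture));
        }
    }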

    I’ve seen this error quite a lot of times – this is probably the most common globalization bug out there. Keep in mind, it’s the 21st century out there; it’s likely that your software will be used all over the world on all possible combinations of operating systems, languages, locales, encodings, RTL, etc.

    A good read on this topic would be Jeffrey Richter’s CLR via C#, chapter 11 (Chars, Strings, and Working with Text), pages 264-268.

  • Kirill Osenkov

    How to start Visual Studio programmatically

    • 6 Comments

    One of the ways we test Visual Studio is by automating the devenv.exe process using a library called DTE (Design Time Extensibility). To use this library from your .NET application, you’ll need to add a reference to the EnvDTE assembly (which is usually available on the .NET tab of the Add Reference dialog).

    Starting Visual Studio using DTE

    Here's a simple code snippet that starts Visual Studio and displays its main window:

    using System;
    using EnvDTE;
    
    class Program
    {
        static void Main(string[] args)
        {
            Type visualStudioType = Type.GetTypeFromProgID("VisualStudio.DTE.9.0");
            DTE dte = Activator.CreateInstance(visualStudioType) as DTE;
            dte.MainWindow.Visible = true;
        }
    }

    When the VS object is being created, a VS process (devenv.exe) starts in the background. You can make its main window visible using dte.MainWindow.Visible = true;

    Note that when the parent process (your program) ends, VS will close with it as well.

    To get an instance of an already running VS process, you can use the following snippet:

    EnvDTE80.DTE2 dte2 = (EnvDTE80.DTE2)
        System.Runtime.InteropServices.Marshal.GetActiveObject("VisualStudio.DTE.9.0");

    This snippet also demonstrates using DTE2, a newer version of the DTE interface that provides additional functionality.

    DTE interface

    Since DTE is COM based, we need to get the type that represents DTE from a well-known ProgID (“VisualStudio.DTE.9.0” that can be found in the registry). Once we have that type, we create an instance of it using Activator and cast it to the DTE interface. Contrary to what the name suggests, DTE is actually an interface and not a class:

    namespace EnvDTE
    {
        [CoClass(typeof(DTEClass))]
        [Guid("04A72314-32E9-48E2-9B87-A63603454F3E")]
        public interface DTE : _DTE
        {
        }
    } 

    DTE commands

    Now that you have DTE in your hands, you can do a whole lot of stuff, for example, execute a command:

    dte.ExecuteCommand("File.OpenFile", "");

    This one will execute the File.OpenFile command to display the open file dialog. There are plenty more Visual Studio commands that are really useful if you want to automate Visual Studio. You can look up a VS command from the Command Window: View –> Other Windows –> Command Window. Just start typing there and it will offer a completion list:

    image

    Also, you can use the Customize dialog (right-click on any VS menu) to get an idea of what commands are available:

    image

    Finally, you can see what command corresponds to an action if you start recording a macro, then just do an action manually, and then view the source code for that macro. As the macro is being recorded, VS registers all DTE command calls and writes them down in VBA source code. For example, ever wondered what command corresponds to the Rename refactoring? Record it and view the macro source, you’ll find out that there is a Refactor.Rename command.

    Other DTE API

    Apart from DTE.ExecuteCommand, there are a lot of other APIs to control the editor, ActiveDocument, ActiveWindow, Application, Debugger, Documents, ItemOperations, Solution, SourceControl, etc.

    However this deserves a separate post by itself. Who knows, if there is popular demand on how to automate Visual Studio, I might start a series of blog posts about that. However, for now, I’ll just link to MSDN articles on DTE:

  • Kirill Osenkov

    Making the XAML editor fast

    • 0 Comments

    If you use WPF/Silverlight and prefer working with XAML only (i.e. no visual designer), you can significantly, I repeat, significantly speed-up the XAML editor. Check out this tip from Fabrice Marguerie: Life changer XAML tip for Visual Studio

  • Kirill Osenkov

    What's common between C# 4.0 optional parameters, object initializers, the new WPF code editor and the navigation bar comboboxes?

    • 1 Comments

    I found an interesting bug recently which resulted from a pretty weird constellation of the following Visual Studio features:

    1. C# 4.0 optional parameters
    2. object initializer syntax
    3. the VS code editor rewritten from scratch in managed code and WPF
    4. the navigation bar combobox updated to show default values for optional parameters

    Here's the screenshot of the bug:

    image

    If you pasted this code into a recent VS 2010 build, the navigation bar (the two comboboxes above) would grow to accommodate the full text of the Main method. Why?

    Here's what happens:

    1. The code contains a parse error (missing closing parenthesis after new Program())
    2. The IDE parser (which is very resilient) parses the entire Main method body as the object initializer on the default value of Program
    3. Since during the parsing stage we don't apply certain compiler checks yet, the parser assumes that an object creation expression with an object initializer is a valid default value for the optional parameter
    4. it takes the entire text of the parameter (including the default value and the initializer) and passes it to the New Editor for displaying in the navigation bar as part of Main's signature
    5. the New Editor's navbar comboboxes aren't simply textboxes - they are instances of the full-blown WPF New Editor control themselves
    6. since they're so powerful, they have absolutely no problem displaying multiline content
    7. the rest of the WPF layout shifts accordingly to accommodate the growing content

    We hope to fix the bug before VS 2010 Beta 2 (probably not Beta 1, because it’s a low-impact, low-priority issue).

  • Kirill Osenkov

    How to Debug Crashes and Hangs translated into Chinese

    • 0 Comments

    Big thanks to He,YuanHui who has translated my debugging tutorial into Chinese:

    http://www.cnblogs.com/khler/archive/2009/02/08/1386462.html

    Enjoy!

  • Kirill Osenkov

    New Years resolutions v2.0.0.9

    • 3 Comments

    Well, I've been tagged in a chain-letter-blogging game again. This time, Chris "I-like-to-put-ugly-monsters-on-the-frontcovers-of-my-books-to-at-least-partially-distract-readers-from-great-content" Smith tagged me in his New Year's resolutions post. It's February, but I think it's better late than never. So here it goes:

    Make sure VS 2010 rocks!

    I'll try to do my part and make sure that our language service works as expected and the features are pleasant to work with. I'll also try to make sure other teams don't miss obvious bugs (yes, Editor, Shell and Project System, I'm looking at you!) Given the fact that I keep finding crashes in the XAML designer, I'll keep an eye on them as well. Oh and the debugger, of course.

    Learn MEF

    I plan to read the MEF sources and actually play with it. It's the 21st century out there; nowadays you *need* a dependency injection/component framework.

    Learn DLR

    Especially DLR hosting. Would be fun to build an expression evaluator/function graph plotter into my Live Geometry Silverlight app.

    Read more of the product code

    I definitely should read more of the VS source, especially since more and more of it gets rewritten in managed code. We're building a managed compiler and rewriting the language service in managed code, it would be great to follow the API design there. It's super important to get the API surface right. I'll maybe start playing with it early on and build some sample apps/add-ins to see how the API feels.

    Read more blogs

    and catch up on all those starred and flagged items. I plan to read all of Cyrus, all of Wes and all of Eric, for a start. I need to catch up on Jon's early posts as well. Also, I still hope Wes starts blogging again. Same for Dustin (although I do understand how busy Dustin is in his new role...)

    Gym

    Continue ignoring it. I'll at least be honest with myself. I will still exercise regularly. Once in three months is regular, isn't it?

     

    Is there anything else that I've missed? :-P

  • Kirill Osenkov

    ForEach

    • 13 Comments

    In my recent post about coding styles one particular thing provoked the majority of feedback and discussions: the ForEach extension method on IEnumerable<T>. Justin Etheredge has a good post about this method here. StackOverflow.com also has a good question: Why is there not a ForEach extension method on the IEnumerable interface?

    Note: If you’d like this method to be added to .NET 4.0, go vote here: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=279093

    Recently I also went ahead and logged a suggestion against the BCL team to add this method in .NET 4.0, and got some feedback from Melitta Andersen and Justin van Patten. Essentially, the BCL team’s considerations boil down to what we’ve discussed in my recent post:

    • it encourages state mutation
    • it’s hard to set a breakpoint inside the lambda/delegate body (and other debugging issues)
    • it’s not clear where to place this method (it’s not exactly part of LINQ, it’s just a helper on IEnumerable<T>), and it doesn’t allow chaining calls

    Mads Torgersen (representing the language design team) was also reluctant about adding this method because of similar concerns (functional impurity etc). I myself in my previous post was enumerating various downsides of using this method.

    And still I think we should add it.

    My thinking is the following. The ultimate purpose of the BCL is to help avoid code duplication by introducing a common, reusable set of functionality so that people don’t have to reinvent the wheel by writing their own collections, sorters, etc. A lot of people will use ForEach anyway, and if we don’t provide it in the framework, they will have to re-implement it in every new project. Also, by providing the ForEach method out of the box, we’re not forcing anyone to actually go ahead and use it – people will still have the choice and be warned about the downsides of ForEach. It’s just that when they use it anyway (and this happens a lot), they will be able to consume a ready-made one. The usefulness of having it (in my opinion) by far outweighs the downsides of using it inappropriately.

    ForEach looks really good with very simple snippets, such as:

    myStrings.ForEach(Console.WriteLine);

    Some developers forget that you can use this shorter syntax instead of:

    myStrings.ForEach(s => Console.WriteLine(s));

    Another ForEach advantage is that it allows you to extract the body of the loop into a separate place and reuse it by just calling into it.

    Also, given the fact that List<T> already has it, it seems unfair that IEnumerable<T> doesn’t. This is an unnecessary limitation (that’s what I think).

    Chris Tavares says:

    I suspect the reason that this didn't exist before is that you can't use it from VB. VB only supports lambda expressions, while the ForEach method requires a lambda *statement*.

    Well, the good news is that we are introducing statement lambdas in VB 10.0, so this shouldn’t be an issue at all.

    I’m also pretty sure that one can overcome the debugging difficulties of ForEach with tooling support, such as [DebuggerStepThrough] and “Step Into Specific”. Debugging problems are not a language/libraries problem per se; they are a tooling problem, and tooling should always be fixed/improved to satisfy languages and libraries.

    A lot of people are asking for the ForEach extension method – this is probably one of the most wanted pieces of API:

    My questions for you folks are:

    1. What do you think? Should this method be added to BCL?
    2. If yes, where? System.Linq.Enumerable? System.Collections.Generic.Extensions? Anywhere else?

    There are also variations on this extension method:

    • returning IEnumerable<T> to allow the ForEach calls to chain
    • accepting an Action<T, int> where the second parameter is the index of an item
    • Kevin’s Apply method

    If you ask me, only the simplest overload should be added, because Select is a better choice for chaining calls. Also this would encourage people to use the method in the simplest scenarios, where the downsides of doing so are negligible.
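
    For reference, the simplest overload is only a few lines – roughly what most hand-rolled versions look like (it is not part of the BCL today):

    using System;
    using System.Collections.Generic;

    public static class EnumerableExtensions
    {
        // Applies the given action to every element of the sequence, eagerly.
        public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (action == null) throw new ArgumentNullException("action");

            foreach (T item in source)
            {
                action(item);
            }
        }
    }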

    Finally, one argument I have is that googling “foreach extension method” yields 1,590,000 results, which I think is a pretty good indication that the feature is in high demand.

  • Kirill Osenkov

    How to disable optimizations during debugging

    • 3 Comments

    Sooner or later you may run into a situation where you need to evaluate a local variable under debugger and all you get is this:

    "Cannot obtain value of local or argument 'whatever' as it is not available at this instruction pointer, possibly because it has been optimized away'.

    Well, it turns out there are two different tricks to solve such problems:

    1.

    Shawn Burke blogs about How to disable optimizations when debugging Reference Source. In a nutshell, you need to:

    1. Start VS with the Environment Variable COMPLUS_ZapDisable=1
    2. Disable the VS Hosting Process (.vshost.exe) before you start debugging

    2.

    Another tip is from our VB IDE dev Jared Parsons: Disabling JIT optimizations while debugging. Essentially, Jared suggests creating an .ini file with the same name as the application's .exe:

    [.NET Framework Debugging Control]
    GenerateTrackingInfo=1
    AllowOptimize=0

    He also points to the MSDN article http://msdn.microsoft.com/en-us/library/9dd8z24x.aspx (Making an Image Easier to Debug).

    To be frank, this tip didn't work for me for some reason, but I guess it's still worth mentioning.

    Hope this helps!

  • Kirill Osenkov

    Call Hierarchy Navigation in Visual Studio 2010

    • 31 Comments

    We're currently designing a new IDE feature named Call Hierarchy. Essentially, it allows you to find places where a given method is called, which is similar to how Find All References currently works. However, unlike Find All References, the Call Hierarchy feature provides a deeper understanding of and more detailed information about calls.

    Invocation

    You can invoke the Call Hierarchy toolwindow by right-clicking on a method, property or constructor name in the code editor and choosing View Call Hierarchy from the context menu:

    image

    Tool window

    A toolwindow will appear docked on the bottom of the Visual Studio window:

    image

    You can expand the node for the method to see information about it: incoming calls to the method ("Calls To") and outgoing calls ("Calls From"):

    image

    Here's how it works. A method (or a property, or a constructor) is displayed as a root in the treeview. You can expand the node to get a list of "search categories" - things you want to find. Four search categories are currently supported:

    1. Calls To - "incoming" calls to this member
    2. Calls From - "outgoing" calls mentioned in this member's body
    3. Overrides - available only for abstract or virtual members
    4. Implements - finds places where an interface member is implemented

    When you expand a search node (such as Calls To 'GetCallableMethods'), a solution-wide search is started in the background and the results appear under the Calls To folder. You can click on a result, and the details will appear in the Details list view on the right hand side.

    The Details list view shows all the exact call sites and locations in code where GetCallableMethods is called from GenerateXsdForComplexTypes. We see that the method is being called only once, the line of code is shown, as well as file name and position in the file. Double-clicking on that call site will navigate to it in the code editor.

    The advantage of Call Hierarchy compared to Find All References is that it allows you to explore and drill down multiple levels into the call graph (find a caller's caller, etc.). Call Hierarchy also has a deeper, more fine-grained understanding of the source code: while Find All References just finds the symbols, Call Hierarchy differentiates abstract and virtual methods, interface implementations, actual calls from delegate creation expressions, and so on. It also works like a scratch pad: you can add any member as another root-level item in the Call Hierarchy tool window and have several members displayed there at once. Finally, the Details Pane gives information about the concrete call sites if a method is called several times in the body of the calling method.

    Toolbar

    In the toolbar you can select the scope of the search: the currently opened file only, the current project, or the entire solution.

    The Refresh button refills the treeview in case the original source code was modified.

    If a root node of the treeview is selected, the "Delete Root" button will remove it from the treeview. You can add any member as a new root in the treeview by right-clicking on it in the context menu:

    image

    or adding it from the source code as described in the beginning.

    Finally, the Toggle Details Pane button shows or hides the details pane.

    Some design issues and implementation details

    Although the feature is already implemented (if you have the Visual Studio 2010 CTP, you can already play with Call Hierarchy), we're still not quite happy with the current UI design, usability and user experience.

    For example, one issue that we're seeing is that it takes 2 mouseclicks and 2 mousemoves to invoke the Find All References search, but it takes 4 mouseclicks and 4 mousemoves to get the callers list for a given method (1 click - menu invocation, 1 click - menu item selection, 1 click - expand the treeview node for the method, 1 click - expand the "Calls To" folder). Although the search itself will be slightly faster than Find All References, the perceived complexity of invoking the feature is something we definitely want to improve. We want this feature to be at least as good and usable as Find All References, but also provide additional benefits, otherwise people will just not use the feature and continue using Find All References.

    I think I'll stop for now and see what kind of feedback you guys might have about this. In the next blog post I plan to share more of our current thinking and what we'd like to change. For now, I'd be really interested to know what you think and if you have any suggestions or ideas. Now it's not too late, and we can change the feature based on your feedback.
