Colin Thomsen's Microsoft Blog

I'm a developer working on the code profiler that ships with Visual Studio 2010 Premium and Ultimate editions. At a previous company I worked on computer vision software for face and gaze tracking.

Posts
  • Sysinternals is Live

    I use a bunch of Sysinternals tools for diagnosing problems while developing. My two favorites are:

    • Process Explorer, a more fully-featured version of Task Manager that can report environment variables for running processes, show loaded DLLs and even display callstacks. It can also tell you which process is currently accessing a certain file or DLL, which is useful if you're trying to delete a file and getting a 'file is in use and cannot be deleted' error.
    • Process Monitor, which can record all accesses to files, disks and the registry. Very useful for diagnosing complicated scenarios with multi-process development.

    Recently the Sysinternals tools have been hosted on a new live site (live.sysinternals.com) that can be accessed via the web or as a file share. Now I can easily run a Sysinternals tool and be sure that it is the newest version:

    [Screenshot: running DbgView from the live site]

    I can also update my own local cache of useful tools by periodically copying from the file share.
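
    As a rough sketch, launching a tool straight off the live share from a small C# program could look like this (assuming the standard \\live.sysinternals.com\tools UNC path):

    using System.Diagnostics;

    class LaunchFromLiveShare
    {
        static void Main()
        {
            // Start Process Explorer directly from the Sysinternals live share,
            // so whatever runs is always the latest published version.
            Process.Start(@"\\live.sysinternals.com\tools\procexp.exe");
        }
    }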

  • Performance: Inserting Marks Using Code

    Ian previously covered using the VS 2008 Data Collection Control to choose when to collect data. The Data Collection Control can also be used to insert marks into the performance report, but sometimes it is convenient to modify the source code to do this automatically.

    Consider a typical application (PeopleTrax) where I am interested in gathering profiler data only between when a button is clicked and when data is displayed. The application is shown below.

    [Screenshot: PeopleTrax before clicking 'Get People']

    After the 'Get People' button is clicked, the data is displayed after just over 6 seconds. This seems a little excessive, so I want to focus my performance investigation on this area.

    [Screenshot: PeopleTrax after the data is displayed]

    To filter the data so that it only shows information collected between those two points, I could use the Data Collection Control, but maybe I'm planning to run a complicated scenario and don't want to have to remember to insert the marks manually. Instead, it is possible to modify the original code to request that the profiler insert marks at the required locations.

    The Profiler API is available for managed code in an assembly that can be added directly to the project from \Program Files\Microsoft Visual Studio 9.0\Team Tools\Performance Tools.

    [Screenshot: adding the Profiler API reference to the project]

    After adding the reference it shows up in the 'References' list for the PeopleTrax project.

    [Screenshot: the new reference in the PeopleTrax 'References' list]

    I can then use functions in the Profiler API to control the profiler. This might include starting or stopping data collection or, in this case, inserting marks into the data stream. This is easily achieved as shown below.

    [Screenshot: source code with the profiler mark calls added]
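
    As a rough sketch of what that change can look like (the form, handler and helper names here are made up for illustration; the DataCollection class comes from the Microsoft.VisualStudio.Profiler namespace in that assembly):

    using System;
    using Microsoft.VisualStudio.Profiler;

    public partial class PeopleTraxForm
    {
        // Illustrative handler for the 'Get People' button
        private void getPeopleButton_Click(object sender, EventArgs e)
        {
            // Mark the start of the interesting region in the profiler data stream
            DataCollection.CommentMarkProfile(1, "GetPeople clicked");

            PopulatePeopleList();   // hypothetical method that fetches and displays the data

            // Mark the end of the region once the data has been displayed
            DataCollection.CommentMarkProfile(2, "People list displayed");
        }

        private void PopulatePeopleList()
        {
            // ... the real work happens here ...
        }
    }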

    I can then profile the application, and when I open the Performance Report and switch to Marks View I can see that the marks have been correctly inserted. The time elapsed between the marks is about 6.5 seconds, which corresponds with the measurement already displayed in the PeopleTrax UI.

    [Screenshot: Marks View showing the inserted marks]

    I can use the marks to filter the report to only show profiling data for the time between the two inserted marks and then start my performance investigation.

    [Screenshot: filtering the report on the inserted marks]

  • Tech-Ed 2008 Demos

    Last year my boss took a trip to sunny Orlando to present at Tech-Ed and to offer help and suggestions in the Technical Learning Center (TLC). This year I'm lucky enough to be attending with a couple of other folks (Habib and Tim), and since I'm not an official speaker I'll be spending most of my time hanging out at the Application Lifecycle Management (ALM) demo station for Visual Studio 2008 Team System, Development Edition.

    We've prepared a few demos covering things like:

    • Profiling using Instrumentation Mode on a Virtual PC image.
    • Collecting Allocation and Object Lifetime information.
    • Analyzing Performance Reports.
    • Using Code Analysis to improve your code.
    • Enabling Code Analysis Check-In Policies.

    We're also looking forward to discussing your specific scenarios, so if you're at Tech-Ed and interested in diagnostic tools or solving performance problems, we'd love to chat with you.

  • Tech-Ed 2008 Wrap-up

    Quite a few people at Tech-Ed wanted to know more about the various components of Visual Studio Team System (which we sometimes refer to as SKUs). For example, how does Team Foundation Server (TFS) fit in with the client SKUs? What are the differences between Visual Studio Team System Development Edition and Visual Studio Team System Test Edition? What is Visual Studio Team Suite?

    These are very valid questions when you're considering which SKU to buy, and since I'm not a sales or marketing guy I'll defer to some nice diagrams and comparisons already available on the web.

    While we were demoing at Tech-Ed we gave out trial versions of Visual Studio 2008 Team Suite with some detailed tutorials (called Hands-On Labs), and they were very popular. If you'd like to try out the Virtual PC image as well, you can download it here.

    Some of my colleagues at Microsoft were interviewed for a panel discussion about various aspects of Team System which is worth watching to see where Team System is heading (Visual Studio Team System Panel - Meet the Team).

    I also managed to catch a few sessions at Tech-Ed, and one of the more interesting talks was about the Visual Studio 2008 Tip of the Day. Each day for more than 230 days now, Sara Ford has been posting blog entries with tips for Visual Studio.

    UPDATE: I forgot to mention an interesting series on Channel 9 that I found out about while at Tech Ed. This Week On Channel 9 covers some of the highlights from Channel 9 blogs, articles and videos. The focus is on the developer, which works well for me. The current episode talks about PDC, Pex, Build Bunnies, UltraCam and the Live Agents SDK.

  • Visual Studio 2008, Beta 2 (now with some of my code)

    Today we released Beta 2 of VS2008. This is the first public release from Microsoft that contains a nontrivial amount of code that I wrote (even though I haven't written too much code just yet). I had barely synced up the source tree and only fixed a couple of bugs when we released Beta 1, but now I've found my feet and am contributing more.

    The major release announcements have focused on the flashier (and admittedly very cool) aspects of the Beta, like LINQ and some of the HTML editing and JavaScript debugging features. However, we Profiler folks have also been toiling away adding new features and fixing bugs. Look out for things like the following (some of these already featured in Beta 1, but they just keep getting better):

    • A promotion to the new Developer menu
      [Screenshot: the Developer menu]
    • Hot path - find the critical path (or paths) through your call trees
      [Screenshot: hot path in a call tree]
    • Noise reduction - trim and/or fold your call trees so that they are easier to examine. See the screenshot above for a folding example.
    • Comparison reports - compare subsequent profiler runs to determine whether code changes are improving performance
      [Screenshot: a comparison report]
    • x64 OS support - profile on x64 Vista or Windows Server 2003

    If you can, please download it and let us know what you think. If you don't have the time, at least take a look at the overview video showing some of the major features. You should also check out Ian's entry about controlling data collection while profiling. Hopefully I'll have time to go through some of the new profiler-specific features soon.

  • C# For C++ Devs: ~A() doesn't act like a destructor

    In C++, memory allocated with the 'new' keyword must be deallocated using 'delete' or it is not deallocated until the application finishes. A call to delete results in a call to the destructor for that class. Objects that are allocated on the stack are destroyed automatically when they go out of scope, and their destructor is called at that point.

    Sometimes this 'deterministic' memory allocation/deallocation behavior is exploited by developers using scoped objects on the stack to acquire and then automatically release resources even in the presence of exceptions (this pattern is known as Resource Acquisition Is Initialization - RAII).

    Here is a C++ class designed to be used in the RAII pattern:

    class A
    {
    public:
        A()
        {
            // Acquire a resource (e.g. a mutex or a file)
        }

        ~A()
        {
            // Release the resource
        }
    };

    The class is then used as follows:

    void f()
    {
        {
            A raii;
            // do some stuff, maybe even throw exceptions
        }
        // raii has gone out of scope, so the destructor has been called.
        // If an exception was thrown, A still went out of scope and the
        // destructor was still called.
    }

    C# is a language with automatic garbage collection, which means that developers allocate memory but in most cases don't need to worry about when that memory is deallocated. There is no way to explicitly call the destructor; it is called whenever the garbage collector decides it is necessary to clean up the object, which is known as finalizing it. In most cases classes should not implement a destructor.

    In C#, it is possible to get somewhat deterministic cleanup (at least for unmanaged resources like files) by implementing the IDisposable interface and adding a Dispose() method. That method acts much more like a C++ destructor than the C# class destructor does. The dispose pattern is described pretty well for C# in the MSDN help for IDisposable; a minimal sketch of it follows the notes below.

    Things to note:

    • The C# destructor will only (and can only) be called as part of garbage collection, when the object is finalized.
    • Dispose() may be called explicitly from code.
    • If Dispose() is called before the finalizer runs, finalization is suppressed using GC.SuppressFinalize(this).
    • You must be careful not to reference any other managed objects if Dispose is called from the destructor (this is achieved in the example by using an extra Dispose() function that takes a bool parameter).
    • It isn't covered in the code, but if you have member variables that implement IDisposable, your class should also implement IDisposable.
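
    Putting those notes together, a minimal sketch of the pattern looks something like this (the IntPtr field is just a stand-in for whatever unmanaged resource class A actually wraps):

    using System;

    class A : IDisposable
    {
        private IntPtr _handle;   // stand-in for an unmanaged resource
        private bool _disposed;

        public A()
        {
            // Acquire the resource here
        }

        // Called explicitly (or via a 'using' block) by user code
        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);   // already cleaned up, no need to finalize
        }

        // The destructor acts as the finalizer
        ~A()
        {
            Dispose(false);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (_disposed)
                return;

            if (disposing)
            {
                // Safe to touch other managed objects here
            }

            // Release the unmanaged resource in both paths
            _handle = IntPtr.Zero;
            _disposed = true;
        }
    }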

    Working with unmanaged resources is clearly much more work than working with managed resources.

    To implement the same RAII pattern from above in C#, assuming you have set up your class A to implement IDisposable, wrap the code in a 'using' statement to ensure Dispose() is called at the end of the block:

    using (A raii = new A())
    {
        // Do some stuff...
    }

    This is safe in the presence of exceptions in the same way that the C++ scoped class pattern was above.

  • C# for C++ Devs: Structs vs Classes

    I'm from a C++ background and now I'm working quite a bit more with C# so I'm learning new things all the time. One thing that baffled me recently was caused by the difference between structs and classes in C#.

    In C++ the only difference between a struct and a class is the default member accessibility. For example, in the code below A::f() is private, whereas B::f() is public.

    class A
    {
        void f();   // private by default
    };

    struct B
    {
        void f();   // public by default
    };

    That's the only difference: structs can have member functions, and classes can contain only data members if you want. In C#, things are different, as I found out recently.

    In C#, structs are value types, whereas classes are reference types. In practice this means that anywhere you pass a struct to a function as a parameter, or return one, you are copying it by value.

    The confusing piece of code for me was equivalent to the following:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    struct Animal
    {
        public int Spots;
    }

    class Program
    {
        static void Main(string[] args)
        {
            List<Animal> allAnimals = new List<Animal>();
            allAnimals.Add(new Animal());
            allAnimals.Add(new Animal());

            foreach (Animal animal in allAnimals)
            {
                animal.Spots = 5;
            }

            Debug.WriteLine(String.Format("First animal spots: {0}", allAnimals[0].Spots));
        }
    }

    When I compiled the code above I got the error:

    error CS1654: Cannot modify members of 'animal' because it is a 'foreach iteration variable'

    How strange, I thought. OK, maybe in a foreach loop you can't modify public members. Let's try calling a function instead: 

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    struct Animal
    {
        public void setSpots(int NewSpots)
        {
            Spots = NewSpots;
        }

        public int Spots;
    }

    class Program
    {
        static void Main(string[] args)
        {
            List<Animal> allAnimals = new List<Animal>();
            allAnimals.Add(new Animal());
            allAnimals.Add(new Animal());

            foreach (Animal animal in allAnimals)
            {
                animal.setSpots(5);
            }

            Debug.WriteLine(String.Format("First animal spots: {0}", allAnimals[0].Spots));
        }
    }

    So the compile error went away, but the message printed out was:

    First animal spots: 0

    I was expecting 5 here. After reading a little bit about structs and classes in C#, the penny dropped: each iteration through allAnimals was getting a copy of the animal, and setSpots was being called on that copy. If I changed the definition of Animal to a class instead of a struct, I could use the original code.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class Animal
    {
        public int Spots;
    }

    class Program
    {
        static void Main(string[] args)
        {
            List<Animal> allAnimals = new List<Animal>();
            allAnimals.Add(new Animal());
            allAnimals.Add(new Animal());

            foreach (Animal animal in allAnimals)
            {
                animal.Spots = 5;
            }

            Debug.WriteLine(String.Format("First animal spots: {0}", allAnimals[0].Spots));
        }
    }

    Incidentally, struct members in C# also do not default to public accessibility the way they do in C++; like class members, they are private unless you say otherwise.
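
    A tiny sketch of that difference:

    struct B
    {
        int x;          // private by default in C# (would be public by default in C++)
        public int y;   // must be marked public explicitly
    }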

  • Learning to Profile

    I went to a meeting with Rico the other day and he showed us a few approaches he uses when solving performance issues. He is a performance engineer with many years of experience so it really was a case of watch and learn. This got me thinking about how people can best learn to use performance tools.

    One starting point in this process is to consider my own experience learning a more mature dynamic code analysis tool - the debugger. Think back to the first time you ever started up a program running under a debugger. What was the first thing you did? My first debugging experience went something like this:

    • Set a breakpoint at the beginning of main() - this was C/C++ after all.
    • Run the code in the debugger. Hey, it stopped. Cool.
    • Step through a few lines of code and inspect the values of some local variables.
    • Sit back and think that's pretty cool - maybe I can get away with fewer printfs to work out what's going on with my program.

    That's pretty much it. Gradually I learnt more and more about things like:

    • The difference between Step In, Step Over, Step Out, Run to Cursor
    • The value of different types of breakpoints like conditional breakpoints, data breakpoints etc.
    • The value of the Watch window. I'm still surprised by how much you can customize the output to make it easier to find issues.
    • The various other windows - threads, memory, etc. etc.
    • Etc.

    It took a long time to discover some of these features. It took even longer to use them almost automatically while debugging.

    Obviously the learning curve depends a lot upon the tool you use. Visual Studio tries to be more intuitive and easy to use than something like WinDbg, which is a command-line tool. Even with the ease of use of the visual debugger, you still need to know the typical debugging pattern (using breakpoints) before you can use the tool effectively.

    Fewer people have used code profilers than debuggers and the tools are still less mature than their debugger equivalents, so it is harder for new programmers to profile their code than to debug it. In an ideal world we might have a 'fix my code' button or at the very least a 'highlight problem code lines' feature, but for now we need to develop patterns that developers can use to do this themselves.

    What features would make profiling easier for you? Are we missing a fundamental concept (the equivalent of 'set breakpoint' in debugging land) that would make things so much easier?
