Colin Thomsen's Microsoft Blog

I'm a developer working on the code profiler that ships with Visual Studio 2010 Premium and Ultimate editions. At a previous company I worked on computer vision software for face and gaze tracking.

Posts
  • Microsoft Blogs I Read

    • 1 Comment

    There are a lot of Microsoft bloggers, literally thousands of them. When I first joined Microsoft I wasn't sure who to read. I've gradually built up a list based on interesting product and feature announcements and people I've met. Here they are:

    Profiling

    • Our Team Blog
    • IanWho's Blog. Written by a fellow dev on the profiler team, Ian has probably written the most about profiling across the team.
    • joc's bLog. Written by my boss's boss.
    • mgoldin's blog. Written by a senior dev on my team. Find out about the differences between the various types of samples, among other things.
    • My Code Does What?!. A relatively new blog about profiling by another fellow dev.
    • scarroll's Blog. Written by my boss.

    Technical

    • bharry's WebLog. Written by a Technical Fellow with a huge amount of experience and a big focus on TFS.
    • Greggm's Weblog. Written by a senior dev on the Debugger team. Has many advanced debugger tips.
    • Mark Russinovich. Mark wrote some cool Sysinternals tools and now blogs fascinating posts about his investigations into problems he finds every day just using his PC.
    • Rico Mariani's Performance Tidbits. Written by a senior Microsoftie who has been here for a long time. Gives tips for analyzing performance and provides guidelines to use in writing .NET code.
    • ScottGu's Blog. Find out about LINQ, ASP.NET AJAX etc. etc. This blog has many examples including screenshots and source code.
    • Somasegar's WebLog. As the corporate VP of DevDiv, Soma covers a lot of Visual Studio features and other developer tools.

    That's just some of the Microsoft blogs I read. Are there other 'must-reads' that I'm missing?

  • Tip: VS2008 – Finding and Setting Properties (Right-Click)

    • 0 Comments

    The Visual Studio Profiler has many properties and options and this tip shows you where to find most of them. Future posts may cover some of the specific properties in more detail.

    Performance Session:

    session_properties

    Select an existing Performance Session in the Performance Explorer to see its properties in the Properties Window. If the Properties Window is hidden, press ‘F4’ or go to ‘View->Properties Window’.

    Performance Report:

    report_properties

    Select a Performance Report in the Performance Explorer to view many properties including Collection, ETW, General, Machine Information, Performance Counters, Process, Thread and Version Information.

    Performance Session Properties (and Options):

    session_properties_1

    To adjust Performance Session properties:

    1. Right-click on the Performance Session (Performance1 in this example).
    2. Select ‘Properties’.

    Properties for Performance1 are shown below. There are different categories of properties on the left (e.g. General, Launch, Sampling, …).

    session_properties_2

    Performance Targets:

    target_properties_1

    To adjust Performance Target properties:

    1. Right-click on the Target (ConsoleApplication3 in this example).
    2. Select ‘Properties’.

    Adjust the properties for the Performance Target as required. These properties do not often need to be changed, with the possible exception of the Instrumentation property ‘Exclude small functions from instrumentation’.

    target_properties_2

    Tools –> Options –> Performance Tools:

    Some global options can be configured using the Visual Studio Options dialog, which is accessed via:

    Tools –> Options –> Performance Tools

    tools_options

    That’s all the properties I can think of, but I’m probably still missing some. The most important takeaway from this tip is that right-clicking with the mouse is often the way to access important contextual information.

  • Performance: Inserting Marks Using Code

    • 0 Comments

    Ian previously covered using the VS 2008 Data Collection Control to choose when to collect data. The Data Collection Control can also be used to insert marks into the performance report, but sometimes it is convenient to modify the source code to do this automatically.

    Consider a typical application (PeopleTrax) where I am interested in gathering profiler data only between when a button is clicked and when data is displayed. The application is shown below.

    pre_click

    After the 'Get People' button is clicked, data is displayed after just over 6 seconds. This seems a little excessive so I want to focus my performance investigation in this area.

    post_click

    To filter the data so that it only shows information collected between those two points, I could use the Data Collection Control, but maybe I'm planning to run a complicated scenario and don't want to have to remember to insert the marks manually. Instead, it is possible to modify the original code so that the profiler inserts marks at the required locations automatically.

    The Profiler API is available for managed code in an assembly that can be added directly to the project from \Program Files\Microsoft Visual Studio 9.0\Team Tools\Performance Tools.

    add_ref_zoom_border

    After adding the reference it shows up in the 'References' list for the PeopleTrax project.

    add_ref_done_zoom_border

    I can then use functions in the Profiler API to control the profiler. This might include starting or stopping data collection or, in this case, inserting marks into the datastream. This is easily achieved as shown below.

    mark_in_code
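
    The screenshot shows the actual PeopleTrax code. As a rough sketch of the same idea (the handler and method names here are illustrative, not the real PeopleTrax source), the marks can be inserted with the DataCollection class:

    using Microsoft.VisualStudio.Profiler;

    // Inside the form class: bracket the interesting region with profiler marks.
    private void getPeopleButton_Click(object sender, EventArgs e)
    {
        // Insert mark 1 (with a comment) into the datastream before the work starts.
        DataCollection.CommentMarkProfile(1, "GetPeople started");

        PopulatePeopleList();  // the roughly 6-second operation under investigation

        // Insert mark 2 once the data has been displayed.
        DataCollection.CommentMarkProfile(2, "GetPeople finished");
    }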

    I can then profile the application, and when I open the Performance Report and switch to the Marks View I see that the marks have been correctly inserted. I can also see that the time elapsed between the marks is about 6.5 seconds, which corresponds with the measurement already displayed in the PeopleTrax UI.

    marks_view_zoom_border

    I can use the marks to filter the report to only show profiling data for the time between the two inserted marks and then start my performance investigation.

    filter_on_marks_border

  • C# For C++ Devs: ~A() doesn't act like a destructor

    • 0 Comments

    In C++, memory allocated with the 'new' keyword must be deallocated using 'delete' or it is not reclaimed until the application exits. A call to delete results in a call to the destructor for that class. Objects allocated on the stack are automatically destroyed when they go out of scope, and their destructors are called at that point.

    Sometimes this 'deterministic' memory allocation/deallocation behavior is exploited by developers using scoped objects on the stack to acquire and then automatically release resources even in the presence of exceptions (this pattern is known as Resource Acquisition Is Initialization - RAII).

    Here is a C++ class designed to be used in the RAII pattern:

    class A
    {
    public:
       A()
       {
          // Acquire a resource (e.g. mutex or file)
       }

       ~A()
       {
          // Release the resource
       }
    };

    The class is then used as follows:

    void f()
    {
       {
          A raii;
          // do some stuff, maybe even throw exceptions
       }
       // raii has gone out of scope, so the destructor has been called.
       // Even if an exception was thrown, raii still went out of scope
       // and the destructor was still called.
    }

    C# is a language with automatic garbage collection, which means that developers allocate memory but in most cases don't need to worry about when that memory is deallocated. There is no way to call the destructor explicitly. It is called whenever the garbage collector decides it is necessary to clean up the object, which is known as finalizing the object. In most cases classes should not implement a destructor.

    In C#, it is possible to get somewhat deterministic cleanup (at least for unmanaged resources like file handles) by implementing the IDisposable interface and adding a Dispose() method. That method acts much more like a C++ destructor than the C# destructor does. The dispose pattern is described pretty well for C# in the MSDN help for IDisposable.

    Things to note:

    • The C# destructor will only (and can only) be called by the garbage collector during finalization.
    • Dispose() may be called from code.
    • If Dispose() is called before the Finalizer runs, finalization is suppressed with GC.SuppressFinalize(this).
    • You must be careful not to reference any managed objects if Dispose is called from the destructor (this is achieved in the example by using an extra Dispose() function that takes a bool parameter).
    • It isn't covered in the code, but if you have member variables that implement IDisposable, your class should also implement IDisposable.

    Working with unmanaged resources is clearly much more work than working with managed resources.
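
    To make the bullet points above concrete, here is a minimal sketch along the lines of the MSDN IDisposable example (the unmanagedResource field is a stand-in for whatever native handle the class owns):

    using System;

    class A : IDisposable
    {
        private IntPtr unmanagedResource;  // stand-in for a native handle
        private bool disposed = false;

        public void Dispose()
        {
            Dispose(true);
            // Dispose was called explicitly, so finalization is not needed.
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (!disposed)
            {
                if (disposing)
                {
                    // Called from Dispose(): safe to release managed objects here.
                }
                // Release unmanaged resources on both paths.
                disposed = true;
            }
        }

        // The C# destructor (Finalizer) runs only if Dispose() was never called.
        ~A()
        {
            Dispose(false);
        }
    }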

    To implement the same RAII pattern from above in C#, assuming you have set up your class A to implement IDisposable, use the 'using' statement to ensure Dispose() is called at the end of the block:

    using (A raii = new A())
    {
       // Do some stuff...
    }

    This is safe in the presence of exceptions in the same way that the C++ scoped class pattern was above.

  • Tip: Fixing VSPerfASPNetCmd metabase errors

    • 0 Comments

    VSPerfASPNetCmd is a new Visual Studio 2010 tool that helps you profile ASP.NET websites from the command line. Recently I noticed an error message that didn’t cover one common situation, so I thought I’d write about it. Here’s an example:

    > VSPerfASPNetCmd.exe http://localhost
    Microsoft (R) VSPerf ASP.NET Command, Version 10.0.0.0
    Copyright (C) Microsoft Corporation. All rights reserved.

    Error
    VSP 7008: ASP.net exception: "The website metabase contains unexpected information or you do not have permission to access the metabase.  You must be a member of the Administrators group on the local computer to access the IIS metabase. Therefore, you cannot create or open a local IIS Web site.  If you have Read, Write, and Modify Permissions for the folder where the files are located, you can create a file system web site that points to the folder in order to proceed."

    The information in the error is correct and it is worth checking to make sure that you are running from an elevated command prompt, but it does miss a common configuration issue. In order to query for information from the IIS metabase, certain IIS components need to be installed.

    To check this in Windows 7:

    1. Open ‘Control Panel\Programs\Programs and Features’ (or run ‘appwiz.cpl’).
    2. Choose ‘Turn Windows features on or off’.
    3. In the ‘Internet Information Services’ section, make sure that the following options are selected. (A command-line alternative is sketched below.)

    IIS Configuration Options

    The non-default options include:

    • IIS 6 Scripting Tools
    • IIS 6 WMI Compatibility
    • IIS Metabase and IIS 6 configuration compatibility
    • ASP.NET
    • Windows Authentication
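
    If you prefer to script this, DISM from an elevated prompt can enable the same features. This is only a sketch; the feature names below are my best guess for Windows 7, so confirm them first with ‘dism /online /get-features’:

    > dism /online /enable-feature /featurename:IIS-Metabase
    > dism /online /enable-feature /featurename:IIS-WMICompatibility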

  • PDC 2008 - See the Sessions

    • 0 Comments

    This year if you didn't get a chance to go to the Professional Developers Conference (PDC), there is still a wealth of information available to you. The most valuable resource, I think, is the videos of all the PDC sessions. Here are a few of the sessions that I've viewed and found most interesting:

    • Improving .NET Application Performance and Scalability, starring my boss Steve Carroll and Ed Glass. This session covers a bunch of new Visual Studio 2010 Profiler features.
    • Visual Studio Debugger Tips & Tricks, with speaker John Cunningham, a Microsoft development manager (and Steve's boss), covering features in Visual Studio 2008, 2008 SP1 and features to look forward to in Visual Studio 2010. Note to self: 'if you ever ship without symbols, I would fire you'.
    • Microsoft Visual Studio Team System: Software Diagnostics and Quality for Services, featuring Habib and Justin, who are also folks from the diagnostics team. The most exciting demo from this talk shows off the cool new Historical Debugging feature. It also covers the new Test Impact Analysis feature, which can tell you which tests you should run after changing your code.
    • Framework Design Guidelines, by the guys who wrote the book of the same name, Krzysztof Cwalina and Brad Abrams. If you write managed code this is a must-see session.

    If you'd like to try some of the Visual Studio 2010 features for yourself, you can download the newest CTP here.

  • The Honeymoon Is Over

    • 1 Comment

    I've been here at Microsoft for more than 6 months so I guess you could say that I've passed through the Honeymoon Phase. By now the initial joy and excitement should be starting to wear off and I should be settling into a monotonous routine.

    Well I'm happy to say that it hasn't happened so far. I'm still learning a lot, including things like:

    • Shipping big products is fun. We get to think about cool new ideas and some of them we implement and some of them get implemented by other smart folks.
    • Shipping big products is hard. We have to worry about things like localization, corner case scenarios and crashes that smaller products just don't need to consider. All of this takes time and there can be periods of time where you're fixing strings or working in high-contrast mode.
    • Our debugging tools are cool. For most of the bugs I need to fix my primary tool is Visual Studio. It is a good sign that even working with less stable dogfood versions is better than using another tool.
    • Bug/Feature Triage is important. We have so many people using our products that all kinds of bugs are reported, from serious (crashes) to suggestions (please improve this feature by...). If we did everything that was asked of us, we would never have a stable version to release. Triage is much more lenient in the early stages of development, though. As a release approaches, we go through stages:
      • Code review - any change you make must pass a code review. The reviewer might say 'hey, why are we fixing this bug!' and it may not be accepted.
      • Tell mode - closer to a release our team leads will go along to a meeting (called a shiproom meeting) and they will say "hey, we're fixing these bugs". If a lead goes along and says "we changed the font size from 9 to 10 points" without a good reason there might be some raised eyebrows.
      • Ask mode - even closer to release, before a bug is submitted, it has to go to the shiproom and be approved. Usually there are only certain classes of bugs that will be approved (blocking bugs, localization bugs, etc.). It is important that this 'bug bar' is known so that developers/leads know whether to attempt to fix a bug or not.

        All of this means that the number of bugs we fix drops as we get closer to a release, which means the product has time to stabilize and be thoroughly tested. At the same time, more minor bugs get a chance to be fixed early in the release cycle.
    • Company Meetings are exciting. There was a lot of shouting, collective back-slapping and cool demos. It was amazing that 1/3 of a baseball stadium was all from the same company.
    • Seattle summers are great. There is so much talk about how rainy Seattle is, but over summer the weather is warm but not really hot and it doesn't rain all that much. Daylight hours are long and it is perfect for getting out and about.

    I also like hearing about new features and products and being able to try them out before they're distributed to customers. Let's see how the next 6 months go.

  • VS2010: Attaching the Profiler to a Managed Application

    • 0 Comments

    Before Visual Studio 2010, in order to attach the profiler to a managed application, certain environment variables had to be set using vsperfclrenv.cmd. An example profiling session might look like this:

    • vsperfclrenv /sampleon
    • [Start managed application from the same command window]
    • vsperfcmd /start:sample /output:myapp.vsp /attach:[pid]
    • [Close application]

    If the environment variables were not correctly set, when attempting to attach you would see this message:
    old_attach_warning

    The profiling environment for ConsoleApplication2 is not set up correctly. Use vsperfclrenv.cmd to setup environment variables. Continue anyway?

    The generated report would typically look something like the report below. The warning at the bottom of the page indicates the problem and the report itself would typically not be useful since no managed modules or functions would be resolved correctly.

    old_attach_badreport  Report with ‘CLRStubOrUnknownAddress’ and ‘Unknown Frame(s)’ and the warning ‘It appears that the file was collected without properly setting the environment variables with VSPerfCLREnv.cmd. Symbols for managed binaries may not resolve’.

    Fortunately the Common Language Runtime (CLR) team provided us with a new capability to attach to an already running managed application without setting any environment variables. For more detailed information take a look at David Broman’s post.

    Caveats:

    • We only support attach without environment variables for basic sampling. It will not work for Allocation or Object Lifetime data collection, and Instrumentation attach is not possible. Concurrency (resource contention) attach is supported.
    • The new attach mechanism only works for CLR V4-based runtimes.
    • The new attach mechanism will work if your application has multiple runtimes (i.e. V2 and V4 SxS), but as noted above, you can only attach to the V4 runtime. I’ll write another post about the profiler and Side by Side (SxS).
    • The old environment-variable-based attach still works, so you can still use that if you prefer.

    The new procedure for attaching the profiler to a managed application in Visual Studio 2010 goes like this:

    • Launch your app (if it isn’t already running)
    • Attach to it, either from the command line or from the UI (see the example below).
    • When you’re finished, detach or close the app to generate a report.
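
    For example, a command-line session under the new scheme might look like this (mirroring the VS2008 steps above, minus the vsperfclrenv step):

    • vsperfcmd /start:sample /output:myapp.vsp /attach:[pid]
    • [Exercise the application]
    • vsperfcmd /detach
    • vsperfcmd /shutdown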

    new_attach_report

    If you want to diagnose any issues with attach, the CLR V4 runtime provides diagnostic information via the Event Log (view with Event Viewer) and the profiler also displays information there:

    new_attach_eventlog

    Event Log: ‘Loading profiler. Running CLR: v4.0.21202. Using ‘Profile First’ strategy’

    There are two .NET Runtime messages regarding the attach, the first indicating that an attach was requested and the second that the attach succeeded. The VSPERF message describes which CLR is being profiled.

  • VS2010: Profiler Guidance (rules) Part 1

    • 0 Comments

    The new guidance feature in the VS2010 profiler will look familiar to people who have used the static code analysis tools in previous versions. However, instead of statically analyzing your code, the profiler runs it and analyzes the results to provide guidance to fix some common performance issues.

    Probably the best way to introduce this feature is via an example. Let’s assume you have written a simple application as follows:

    using System;

    namespace TestApplication
    {
        class Program
        {
            static void Main(string[] args)
            {
                BadStringConcat();
            }

            private static void BadStringConcat()
            {
                string s = "Base ";
                for (int i = 0; i < 10000; ++i)
                {
                    s += i.ToString();
                }
            }
        }
    }

    If you profile this application in Sampling Mode you’ll see an Action Link called ‘View Guidance’ on the Summary Page:

    Action Links View Guidance

    Clicking on this link brings up the Error List, which is where you would also see things like compiler errors and static code analysis warnings:

    Error List String.Concat
    DA0001: System.String.Concat(.*) = 96.00; Consider using StringBuilder for string concatenations.

    As you can see there is one warning, DA0001, about excessive usage of String.Concat. The number 96.00 is the percentage of inclusive samples that fall in this function.

    Double-clicking on the warning in the Error List switches to the Function Details View. Navigating up one level of callers, we see that BadStringConcat is calling Concat (96% of inclusive samples) and doing some work itself (4%). The String.Concat call is not a direct call, but looking at the Function Code View you can see that the ‘+=’ on a string is what triggers it.

    Function Details Concat

    The DA0001 rule suggests fixing the problem by changing the string concatenation to use a StringBuilder; a quick sketch of that fix is shown below. For the rest of this post, I’ll cover some of the other aspects of rules.
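
    This sketch is mine rather than from the profiler documentation (GoodStringConcat is an illustrative name):

    using System.Text;

    private static void GoodStringConcat()
    {
        // Append to a single buffer instead of allocating a new string per iteration.
        StringBuilder sb = new StringBuilder("Base ");
        for (int i = 0; i < 10000; ++i)
        {
            sb.Append(i);
        }
        string s = sb.ToString();
    }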

    One of the more important questions is: what do you do if you wish to turn off a given rule (or even all rules)? The answer is to open the ‘Tools/Options’ dialog and, in the Performance section, navigate to the new ‘Rules’ subsection:

    rules_options

    Here you can see that I’ve started searching by typing ‘st’ in the search box at the top. This dialog can be used to turn off rules (by clicking on the checkboxes on the left), or to change rule categories to ‘Warning’, ‘Information’ or ‘Error’. The only effect is to change how the rule is displayed in the Error List.

    If you have a situation where you are sharing a profiler report (VSP file) with team members, sometimes it might be useful to let them know that a warning is not important or has already been considered. In this case you can right-click on the Error List and choose ‘Suppress Message’.

    errorlist_suppress

    The rule warning is crossed out and you can choose to save the VSP file so that the next time it is opened, the suppression is shown:

    errorlist_suppressed

    That’s it for now. I plan on covering a little more about rules in a future post, including more details about the rules themselves, how you can tweak thresholds and even write your own rules.

  • VS2010: Using the keyboard to profile an application (Alt-F2 shortcut)

    • 0 Comments

    In announcing the Visual Studio Beta 2 profiler features, Chris mentioned that we have a new option on the Debug menu called ‘Start Performance Analysis’ which has the Alt-F2 keyboard shortcut. This makes it easier than ever to start profiling your application. The new menu item has the following behavior:

    • You must have a Visual Studio Solution open in order to enable it.
    • If you have a solution open, but do not have a launchable current performance session, Start Performance Analysis launches the Performance Wizard.
    • If you have a solution open and have a launchable current performance session, Start Performance Analysis starts profiling.

    Let’s use this new functionality to profile an application that I prepared earlier.

    1. Open the solution with ‘Alt-F, J, Enter’:
       1_open_project
    2. Start Performance Analysis with ‘Alt-F2’, which brings up the wizard:
       2_alt-f2_wizard
    3. Press ‘Enter’ to choose the default ‘CPU Sampling’ profiling method and move to the target selection page:
       3_enter_next
    4. Press ‘Enter’ to select the only launchable project in the solution and move to the final wizard page:
       4_enter_finish
    5. Press ‘Enter’ to finish the wizard and start profiling:
       5_profiling
    6. The report will open when profiling finishes:
       6_report

    If you wish to profile again, pressing Alt-F2 will start profiling with the Performance Session that was created in step 4.
