Colin Thomsen's Microsoft Blog

I'm a developer working on the code profiler that ships with Visual Studio 2010 Premium and Ultimate editions. At a previous company I worked on computer vision software for face and gaze tracking.


    There's a Profiler in Visual Studio?


    I've just started working on the profiler team in Visual Studio Team System. Many people are surprised to hear that Visual Studio actually has a profiler, and that it has had one since Visual Studio 2005. To use the profiler you need either Visual Studio Team System for Developers or Visual Studio Team Suite.

    So where exactly is the profiler? Well, in VS2005 it is buried in the 'Tools' menu under 'Performance Tools'. Fortunately, the profiler is being promoted in the next release of Visual Studio to a new top-level 'Developer' menu. You can see it in Figure 1.1 on my colleague Ian's blog.

    I must admit that I hadn't used the VS2005 profiler extensively before joining this team, so I went in search of more information about it. Here are a few of the things I found:

    • Videos - (Part 1, Part 2)
      Ian stars in videos from 2005 that show a semi-real example of how you might use a profiler to improve performance. He discusses what is probably the most confusing thing for new users - what exactly is the difference between sampling and instrumentation? 
    • Profiler Blog
      Includes posts from other members of the team.
    • Ian's Blog
      Interest from Ian's blog articles led to the videos above. Be sure to check out some of the new features that are part of the next VS Beta.
    • Tech Notes
      More detailed articles about profiling and other aspects of VS2005.
    • MSDN Forum
      Here you can either ask questions (and developers do actually monitor and answer) or look at past answers.

    That's it for now.


    C# for C++ Devs: Generics vs Templates for Primitive Types


    I was trying to write some type-generic (almost) code in C# using a pattern that I commonly use in C++. A very simple version of what I was trying to do looks something like:

    class B
    {
    };

    template<typename T>
    int convert(T value)
    {
        return (int)value;
    }

    int main(int argc, char* argv[])
    {
        convert(1.0);

        B b;
        // The line below would fail with "error C2440: 'type cast' : cannot convert from 'B' to 'int'"
        // convert(b);

        return 0;
    }
    In C++ this compiles and runs just fine, as long as you don't uncomment the call to convert for the class B. With templates, code is only generated for a type when it is needed at compile time, and as long as the substituted type supports the required operations (in this case, a cast to int), everything is just fine.

    The equivalent C# program using generics looks like this:

    class Program
    {
        static int convert<T>(T value)
        {
            return (int)value;
        }

        static void Main(string[] args)
        {
            convert(1.0);
        }
    }
    Unfortunately it doesn't compile. I get an error: Cannot convert type 'T' to 'int'

    This is due to the way generics are handled in C#. Instead of having code generated for different types at compile time, generics are resolved to the particular type at runtime (by the JIT, in fact). This means that the convert function shown above must compile for all possible types up front, and it can't, since an arbitrary type T cannot always be converted to an integer.

    It is possible to use constraints if you are not using a primitive type, but there doesn't seem to be a nice way to support 'generic across primitive types' in C#. Am I missing something?


    Tools of the Trade


    I've been thinking about what some of the most important tools are for me while coding. Here's a few:

    • Good IDE - syntax highlighting, integrated builds, source control integration, search facility, debugger and profiler built-in. I use VSTS.
    • Source control/bug tracking system. I use TFS (typically a dogfood version of TFS).
    • Windows Task Manager.
      I use task manager to:
      - View CPU usage
      - Kill processes
      - Start a new explorer.exe if I ever kill explorer.exe. Do this from the Applications tab.
    • Process Explorer.
      I use Process Explorer like task manager, but it can also:
      - Find what process has a handle (e.g. a file) open. This can be handy if you want to delete a file but a process has it locked.
      - Find out what DLLs a process has loaded
      - List the environment variables for a running process
    • Process Monitor
      I use Process Monitor to record file, registry and process activity. This is very useful when debugging issues in complex programs like VSTS which have a lot of registry interactions.
    • DebugView
      Display debugging output from programs without having to attach a debugger. This is very useful if you want to run your program outside a debugger and still want to see all those debug prints.
    • Media Player. I like to listen to music while I code.
    • Outlook. It is somewhat sad that I spend a fair percentage of my day reading emails, scheduling or checking up on meetings and writing notes in an Outlook journal, but I do and so I have Outlook open all the time.
    • Internet Explorer. I need to use MSDN a lot and do web searches. I also read RSS feeds of relevant blogs with IE.
    • Regedit.
    • Remote Desktop. I work on different machines pretty regularly and Remote Desktop makes switching between machines easy.

    Quick Tip: VS2008 - Compare Reports Quickly


    While investigating a performance problem you may need to collect many Performance Reports and compare them. You can use the Performance Explorer to quickly compare two reports by:

    1. Selecting two reports.
    2. Right-clicking and choosing 'Compare Performance Reports...'

    The oldest report will be used for the 'Baseline' report and the other report will be used for the 'Comparison' report, as shown below:



    Visual Studio 2010


    This week Microsoft is starting to talk about Visual Studio 2010. One of the best resources I've seen has a bunch of videos from various Microsoft folks about some of the new features and the high-level goals of the next release of Visual Studio. Take a look at the Channel 9 site.

    There isn't much about profiling just yet, although some of the other features being developed by the diagnostics team, including 'Eliminate No-Repro Bugs' and 'Test Impact Analysis', are covered. For more details about those features, you may also want to keep an eye on John's blog.

    We'll also start talking about some of our new profiling features on the main profiler blog. If you're going to PDC, be sure to check out my boss Steve Carroll's (and Ed's) session:

    You'll get to see how some of our new profiler features make finding and solving performance problems easier.


    Why Performance Matters


    Everybody likes to think that what they're working on is important, so here is why I think performance (and measuring performance) matters, and why it matters now more than ever.

    In the past chip manufacturers like Intel and AMD did a lot of the performance work for us software guys by consistently delivering faster chips that often made performance issues just disappear. This has meant that many developers treat performance issues as secondary concerns that will get better all by themselves if they wait long enough. Unfortunately free lunches don't last forever as Herb Sutter discusses.

    Chip manufacturers can no longer keep increasing the clock speeds of their chips to boost performance, so instead they are reducing clock speeds and increasing the number of cores to take computing to new levels of energy-efficient performance. The result, which is discussed on one of Intel's blogs, is that some applications will actually run more slowly on newer multicore hardware. This will surely shock some software consumers when they upgrade to a better machine and find it running software slower than before.

    Clearly we need to change our applications to allow them to take advantage of multiple cores. Unfortunately this introduces a lot of complexity and there are many competing opinions about how we should do this. Some seasoned developers are pretty negative about using multithreaded development in any application. Some academics suggest that we use different language constructs altogether to avoid the inherent nondeterminism associated with developing using threads.

    Consider also that the types of applications we are developing are changing. Sure, traditional rich-client applications are still very popular, but there is also demand for lightweight web-based clients that communicate with a central server (or servers). The software running on those servers must cater for many users and will have very strict performance requirements.

    So how does all this fit in with performance measurement? Well now developers have to write concurrent applications that are difficult to understand and develop. They write web-delivered applications that must respond promptly to many concurrent users and it is unlikely that just upgrading hardware is going to fix any performance problems that crop up. The free lunch is basically over, so now we have to pay. One way to minimize the cost of our lunches is to be able to 'debug' or resolve dynamic software issues using a profiler just like you would use a debugger to fix issues with program correctness.

    One of the main benefits of using a profiler instead of manually inspecting the code is that it avoids the 'gut feel' approach to performance optimization that is common. For example, a developer sees a loop like:

    for (int i=0; i < some_vector.size(); ++i)

    So they decide to optimize by making a temporary so that the size() function doesn't get called for every iteration of the loop:

    const int some_vector_length = some_vector.size();
    for (int i = 0; i < some_vector_length; ++i)

    The number of lines of code has now increased by 1. If the length of the vector is always small, it is unlikely this buys much in the way of performance. Even worse, a developer may start to do things like loop unrolling when the real cause of the performance problems is something they don't notice. As the complexity of the code goes up the maintenance costs increase. If a profiler were used, it would be much easier to isolate the cause of a performance problem without wasting time optimizing code that is barely impacting the performance of an application.
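    To make the comparison concrete, here is a complete, compilable version of the two variants above. The surrounding functions and the summing loop body are my own additions for illustration; the original fragments showed only the loop headers:

    ```cpp
    #include <cstddef>
    #include <vector>

    // Variant 1: calls size() on every iteration of the loop.
    int sum_calling_size(const std::vector<int>& some_vector)
    {
        int total = 0;
        for (std::size_t i = 0; i < some_vector.size(); ++i)
            total += some_vector[i];
        return total;
    }

    // Variant 2: the 'optimized' form, with size() hoisted into a temporary.
    int sum_hoisted_size(const std::vector<int>& some_vector)
    {
        int total = 0;
        const std::size_t some_vector_length = some_vector.size();
        for (std::size_t i = 0; i < some_vector_length; ++i)
            total += some_vector[i];
        return total;
    }
    ```

    Both variants produce identical results, and for small vectors the timing difference is rarely measurable, which is exactly the point: a profiler, rather than gut feel, should decide whether the extra line is worth it.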

    Before I get too carried away, I should clarify that performance matters, but only if the performance is poor. For example, if you're working on a User Interface (UI), according to Jakob Nielsen if the response time to a user action is less than 0.1 seconds, the user will feel that the system is reacting immediately to their action. If you're working on a computer game the performance requirement might be that the frame rate must be at least 30 Hz. In both of these cases the user will notice if the performance requirement isn't met, but they will probably not notice or care about performance if the performance requirement is met.

    If you haven't used a profiler before, go and try out a Community Technology Preview (CTP) of Orcas which will be the next version of Visual Studio. For the full experience you should avoid using the VPC images which have reduced profiler functionality. Some day, maybe soon if not already, you'll have to fix a performance problem with your code and using a profiler might help.


    Basic Profiler Scenarios


    This post was going to cover some basic scenarios discussing the differences between sampling and instrumentation and when you would choose to switch methods, but then I found there is already something like that in MSDN. If you haven't already, go and take a look. See if you can improve the performance of the PeopleTrax app.

    Instead I'll discuss sampling and instrumentation from a user's perspective. There are already many definitions of sampling vs instrumentation so I won't repeat them.

    For some background reading on the sampling aspect, take a look at David Gray's post. There are a few things that he hasn't covered in that post. The main question I had was should I use sampling or instrumentation?

    A generic answer to that would be:

    • If you know your performance problem is CPU-related (i.e. you see the CPU is running at or near 100% in task manager) then you should probably start with sampling.
    • If you suspect your problem may be related to resource contention (e.g. locks, network, disk etc), instrumentation would be a better starting point.

    Sometimes you may not be sure what type of performance issue you are facing or you may be trying to resolve several types of issues. Read on for more details.


    Why use sampling instead of instrumentation?

    Sampling is lighter weight than instrumentation (see below for reasons why instrumentation is more resource intensive) and you don't need to change your executable/binaries to use sampling.

    What events do you sample with?

    By default the profiler samples using clock cycles. This should be familiar to most users because clock cycles relate to the commonly quoted frequency of a machine. For example, 1 GHz is 1 billion clock cycles per second. With the default profiler setting for clock cycles (one sample every 10,000,000 cycles), that works out to 100 samples every second on a 1 GHz machine.
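    The arithmetic above can be checked directly. The 1 GHz machine is just an assumed example; the 10,000,000-cycle interval is the default sampling interval described in this post:

    ```cpp
    #include <cstdint>
    #include <iostream>

    int main()
    {
        // Assumed example machine: 1 GHz = 1,000,000,000 cycles per second.
        const std::uint64_t cycles_per_second = 1000000000;
        // Default sampling interval: one sample every 10,000,000 clock cycles.
        const std::uint64_t cycles_per_sample = 10000000;

        const std::uint64_t samples_per_second = cycles_per_second / cycles_per_sample;
        std::cout << samples_per_second << " samples per second\n"; // prints 100
        return 0;
    }
    ```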

    Alternatively, you could choose to sample using Page Faults, which might occur frequently if you are allocating/deallocating memory a lot. You could also choose to profile using system calls or some lower level counter.

    How many samples is enough to accurately represent my program profile?

    This is not a simple question to answer. By default we only sample every 10,000,000 clock cycles, which might seem like a long time between samples. In that time, your problematic code might block waiting on a lock or some other construct, and the thread it is running in might be pre-empted, allowing another thread to run. When the next sample is taken the other thread could still be running, which means the problematic code is not included in the sample.

    The risk of missing key data is inherent in any sample-based data collection. In statistics, the approach is to minimize the risk of missing key information by making the number of samples large enough relative to the general population. For example, if you have a demographic that includes 10,000 people, taking only 1 sample is unlikely to be representative, while taking a sample of 1,000 people might be. There are more links about this on Wikipedia.
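    A toy simulation makes the point. All numbers here are made up for illustration: imagine a piece of code that is 'active' 1% of the time, and see how often random samples land on it:

    ```cpp
    #include <iostream>
    #include <random>

    // Fraction of samples that observe a region active fraction `p` of the time.
    double sampled_fraction(int num_samples, double p, unsigned seed)
    {
        std::mt19937 rng(seed);
        std::bernoulli_distribution active(p);
        int hits = 0;
        for (int i = 0; i < num_samples; ++i)
            if (active(rng))
                ++hits;
        return static_cast<double>(hits) / num_samples;
    }

    int main()
    {
        // With very few samples the 1%-active region is often missed entirely;
        // with many samples the estimate settles close to the true 0.01.
        std::cout << "10 samples:      " << sampled_fraction(10, 0.01, 1) << '\n';
        std::cout << "100,000 samples: " << sampled_fraction(100000, 0.01, 1) << '\n';
        return 0;
    }
    ```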

    Won't this slow down my app?

    No, not really. When a sample is taken the current thread is suspended (other application threads continue to run) so that the current call stack can be collected. When the stack walk is finished, execution returns to the application thread. Sampling should have a limited effect on most applications.

    Sounds good, why use instrumentation?

    See below.


    Why use instrumentation?

    As discussed above, sampling doesn't always give you the whole picture. If you really want to know what is going on with a program the most complete way is to keep track of every single call to every function.

    How does instrumentation work (briefly)?

    Unlike sampling, with instrumentation the profiler changes the binary by inserting special pieces of code called probes at the start and end of each function. This process is called 'instrumenting the binary' and it works by taking a binary (DLL or EXE) along with its PDB and producing a new 'instrumented binary'. By comparing a counter at the end of the function with one at the start, it is easy to determine how long the function took to execute.
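    The enter/exit probe pattern can be sketched roughly as below. To be clear, this is not the profiler's actual probe code; it is a hand-written illustration of the idea, using a wall-clock timer in place of the real counters, and it ignores details the real probes handle such as nested and recursive calls and subtracting probe overhead:

    ```cpp
    #include <chrono>
    #include <cstdint>
    #include <iostream>

    // Hypothetical stand-ins for the probes inserted by instrumentation.
    static std::uint64_t g_elapsed_ns = 0;
    static std::chrono::steady_clock::time_point g_enter_time;

    void probe_enter() { g_enter_time = std::chrono::steady_clock::now(); }

    void probe_exit()
    {
        const auto end = std::chrono::steady_clock::now();
        g_elapsed_ns +=
            std::chrono::duration_cast<std::chrono::nanoseconds>(end - g_enter_time).count();
    }

    // An 'instrumented' function: probes bracket the original body.
    int do_work(int n)
    {
        probe_enter();
        int total = 0;
        for (int i = 0; i < n; ++i)
            total += i;
        probe_exit();
        return total;
    }

    int main()
    {
        const int result = do_work(1000);
        std::cout << "result=" << result << ", elapsed ns=" << g_elapsed_ns << '\n';
        return 0;
    }
    ```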

    What if I call other people's code?

    Usually you don't have access to the PDB files for other people's code which means you can't instrument it. Fortunately as part of the instrumentation process the profiler inserts special probes around each call to an external function so that you can track these calls (although not any functions that they might call).

    Why not just use Instrumentation all the time?

    Computers execute a lot of instructions in 10,000,000 clock cycles, so instrumentation can generate a LOT of data compared with sampling. Calling the probe functions from an application thread can also degrade performance more than sampling would.


    Tech-Ed 2007


    Tech-Ed 2007 is starting tomorrow and the Profiler Team is sending a few people to sunny Orlando for the event. This is great news for me because my boss, Steve Carroll, is away for the week (just kidding Steve), but it is really great news for folks at Tech Ed because he'll be there presenting with Marc Popkin-Paine:

    DEV313 - Improving Code Performance with Microsoft Visual Studio Team System [N210 E]

    June 7, 9:45 AM - 11:00 AM

    I believe they'll be demoing a few new Orcas features and giving a pretty good introduction to profiling. If you didn't know Visual Studio Team System has a profiler, or you don't think performance is important, you should definitely check this out.

    If you're not lucky enough to be able to make it to Orlando this year, be sure to take a look at Virtual Tech Ed, which will include webcasts and other content from some of the sessions. One that jumps out at me is MSDN Webcast: A Lap around Microsoft Visual Studio Code Name "Orcas" (Level 200).

    UPDATE: Steve is already helping people at Tech Ed. If you're there and you're interested in performance go and have a chat with him in the Technical Learning Center.


    Developer Dogfooding at Microsoft


    I hadn't heard the term dogfooding used much before I started here, but it has already been explained so take a look here. The basic idea is that if you're not happy using your product (i.e. eating your own dogfood) then why should you expect your customers to be? Working at Microsoft gives you incredible scope to dogfood a wide variety of products.

    As a Microsoft employee, I should be using Internet Explorer, Vista, Office and so on, and I am. This doesn't mean I can't also run alternative products, especially when a Microsoft product doesn't provide the functionality I need.

    As a Microsoft developer, I should be using Team Foundation Server for bug tracking and source control. I should be developing Visual Studio using Visual Studio. I should be profiling my code using the VSTS profiling tools. Fortunately, I am, although not exclusively, and the same probably isn't true in some other parts of the company.

    The main reason I think this is a good idea is because we get to feel any of the pain that customers do. We have extra incentive to fix any problems instead of ignoring them. We often catch problems early on before customers even see them.

    I'll admit it, the process can be painful. The pain typically increases as you get closer to the bleeding edge of technology. For example, my Visual Studio dogfooding experience involves running the latest build of VSTS while developing. There are issues which delay my development, but facing these issues every day helps me drive improvements to the product. Imagine if your source control system went down - you'd want it fixed pretty quickly and that's just what we want from our TFS dogfood server.

    Here are a few things that I think need to happen for successful dogfooding:

    • The process must not be voluntary. As an individual dev I must use a pre-release version of TFS. As a Microsoft employee my computer is automatically updated with the latest updates before they are pushed out to customers. There isn't a choice.
    • There must be a feedback mechanism. If things are broken it must be easy to report this and critical breaks must be fixed quickly.
    • Things must actually get better. Limit the audience for really unstable dogfooding. For example, we don't make devs outside the VS team build their own VS from last night's source. They get a 'Last Known Good' build of a release that has had extra testing carried out on it.

    If you're an application developer, are you using your own alpha/beta software before it is released to the public?


    Free Microsoft Software (including VSTS) for IEEE Computer Society Student Members


    I'm a member of IEEE and IEEE Computer Society and recently received requests to renew my subscriptions. I was pleasantly surprised to see that in 2008 student members of IEEE Computer Society can receive free software including Vista, Project, Visio and most interesting for me, Visual Studio Team System.

    Both subscriptions come with monthly journals which are pretty technical and provide glimpses into various potential future career paths. 

    UPDATE: If you're a student in the US, Australia or a few other places you can also get Office Ultimate 2007 for up to 90% off the retail price.
