The official source of product insight from the Visual Studio Engineering Team
After the beta release, we published several posts on the performance improvements in Visual Studio 11. Since then, we have made a number of additional enhancements, continuing our quest to improve the performance of Visual Studio. The performance work done between Beta and the upcoming RC was substantial and covered many aspects of the product, including XAML (the compiler, loading documents, and the design surface), C++, TLM, debugging, and more. We plan to blog about a few examples. In this post we’ll cover typing responsiveness, and in the next post we’ll cover the Toolbox, both of which are areas with a broad impact on Visual Studio performance. We are excited for you to experience Visual Studio 11’s performance improvements in the upcoming RC release. Without further ado, I would like to introduce Eric Knox from the Visual Studio Pro Team to describe the work done to improve typing responsiveness.
As previous posts on Visual Studio performance have stated, we spent a great deal of time looking across the breadth of Visual Studio for areas to improve. In this post, I’ll describe how we approached measuring and improving the typing and editing part of the “edit-compile-debug” loop, due in large part to the UserVoice suggestion to make typing and scrolling more responsive. I’ll share the techniques we’ve developed for assessing Visual Studio’s responsiveness and some of the results we’ve achieved with the measurement style we developed.
As you most likely know, Windows applications are message-based, meaning that all interaction, whether it’s a typed character, a mouse move, or a request to paint, is sent to the application’s message loop. If an application spends more time handling a given message than it takes the user or Windows to generate the next one, it can start falling behind and become “unresponsive.”
Since responsiveness is all about whether or not VS is keeping up with the incoming message stream, it is fairly straightforward engineering work to figure out how long any given message takes to process. During Visual Studio 2010 SP1, we added some hooks into the product so we could tell when processing a single message took too long. We released an extension called PerfWatson that recognized unresponsiveness and generated anonymous reports that we collected and aggregated. You can read this earlier blog post about how we used PerfWatson data during Visual Studio 11. One of the most interesting things to know about PerfWatson for today’s post is that it collects a mini heap dump, which allows us to figure out what Visual Studio code was running at a particular point during the delay.
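To make the idea concrete, here’s a minimal Python sketch of a message loop that times each handler and records a report when one exceeds a responsiveness threshold. The message names, threshold, and handler shapes are invented for illustration; this is the moral equivalent of PerfWatson noticing a hang, not the actual implementation.

```python
import time

# Hypothetical threshold -- illustration only, not PerfWatson's real value.
HANG_THRESHOLD_MS = 2000

def run_message_loop(messages, handlers, threshold_ms=HANG_THRESHOLD_MS):
    """Dispatch each message, timing how long its handler runs.

    Returns a list of (message, elapsed_ms) reports for any message whose
    handler exceeded the threshold -- the point at which a tool like
    PerfWatson would collect a dump.
    """
    reports = []
    for msg in messages:
        start = time.perf_counter()
        handlers[msg]()  # process one message, like DispatchMessage would
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > threshold_ms:
            reports.append((msg, elapsed_ms))
    return reports
```

If every handler finishes well under the threshold, the report list stays empty and the app feels responsive; a single slow handler is enough to produce a report.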
PerfWatson has been an extremely valuable tool for understanding VS hangs, but the threshold we set for it and the dump files it collects are designed for general application responsiveness, not for interactions that must be immediate such as typing a character and seeing it appear on the screen. Collecting a dump is fast, but it’s not fast enough to repeat on a per-keystroke basis. We therefore built a more fine-grained measurement system we call Immediate Delay Tracker (IDT).
Both PerfWatson and IDT are aimed at assessing responsiveness, but they achieve that result in different ways. PerfWatson can tell what code was running at a point in time during a single message’s processing by programmatically translating the mini heap dump it collects into a single call stack. Collecting that dump and translating it into a call stack is a modest amount of work, both in terms of how much data it collects and how much processing power it takes. IDT, however, is based on Event Tracing for Windows (ETW), which doesn’t take a lot of processing power either but does collect a lot more information than a single call stack. ETW enables Windows to record events and other data important to knowing exactly what was happening during the profiled time period in a low-overhead manner, saving that data into an output trace file (an ETL file). One of the main things ETW gives us for analyzing CPU-bound work is a sampled profile of what a computer was doing while we collected a trace. Instead of a single stack like PerfWatson, we can get a call stack every millisecond and then analyze the collection of stacks in aggregate rather than at just a single point in time. When a particular function is present across a majority of samples, it likely spent a lot of time processing and is a great place to optimize or even eliminate where possible.
The other thing to know about ETW is that while it is designed to be low overhead when collecting a trace, it’s also designed to have as negligible an impact as possible when ETW is not actively collecting a trace. That enables VS engineers to decide what kinds of events would be interesting to record in an ETL file without negatively influencing VS running on customer machines where ETW is unlikely to be collecting a trace. Having the product give an indication of exactly what event is occurring, combined with a sampled profile, allows us to pinpoint the exact periods of time we need to focus on when trying to make a particular scenario faster.
IDT is the tool that combines the ability to turn on ETW tracing with awareness of some key events, giving us visibility into high-frequency, short-duration scenarios. The in-product events IDT is aware of are numerous and include things like the time to process an individual keystroke, how long it took a menu to open, and even how long it took a language service to color the code on the screen. With the ETW setup and registration in place, IDT knows exactly when these kinds of immediate operations happened and how long they lasted, and it gives us a report of these “instant” operations in aggregate.
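The aggregation step can be sketched in a few lines of Python. The event names and report shape below are invented for illustration; the point is turning a stream of (event, duration) pairs into a compact per-event summary.

```python
from collections import defaultdict

def summarize_events(events):
    """Aggregate (event_name, duration_ms) pairs into a per-event report.

    Event names like 'Keystroke' or 'MenuOpen' stand in for the kinds of
    immediate operations IDT tracks.
    """
    stats = defaultdict(list)
    for name, ms in events:
        stats[name].append(ms)
    return {
        name: {
            "count": len(ds),
            "max_ms": max(ds),
            "mean_ms": sum(ds) / len(ds),
        }
        for name, ds in stats.items()
    }
```

A report like this answers at a glance whether any “instant” operation had outliers worth investigating.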
Knowing how long a set of “instant” operations took is a great step, and having that data enables us to evaluate a developer scenario and say whether actions such as keystroke processing met our expectation or not. But when investigating a scenario that did not meet our expectation, it was difficult to look at the numerous, very short windows of time when VS was processing a message to figure out what VS was doing in aggregate. To help make sense of the data within the trace, the team built another tool on top of the IDT infrastructure that we call RaceTrack.
RaceTrack’s main job is to analyze ETL files in a way that allows us to focus on just the time period during the specific events we care about, such as keystroke processing. On top of narrowing down to a single specific event, RaceTrack also has the ability to merge multiple sampled profile periods together so that the code paths which show up most often across multiple occurrences rise to the top of our visibility, exactly like a sampled profile stitches together individual samples over time to show which functions spent a significant amount of time processing.
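The stack-merging idea can be sketched as follows. Each sample is a call stack captured during one event window; counting how many samples each frame appears in surfaces the functions most likely to be the bottleneck. The frame names are made up for illustration, and this is a simplification of what RaceTrack actually does with ETL data.

```python
from collections import Counter

def aggregate_stacks(sampled_stacks):
    """Merge call-stack samples from many event windows.

    Each sample is a tuple of frames (outermost first). Returns
    (frame, fraction_of_samples_containing_it) pairs, hottest first.
    """
    frame_counts = Counter()
    for stack in sampled_stacks:
        # Count presence per sample, not per frame occurrence.
        frame_counts.update(set(stack))
    total = len(sampled_stacks)
    return sorted(((f, n / total) for f, n in frame_counts.items()),
                  key=lambda kv: -kv[1])
```

Frames that appear in nearly every sample across many keystrokes rise to the top, which is exactly the signal that tells you where the combined delay is being spent.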
To use a concrete example, we used RaceTrack to collect an ETL file while some automation code opened a very large C# solution and typed a few thousand characters of valid code into a file. With RaceTrack, we can see things like which character in the input stream had the longest delay, or we can aggregate all the delays together to see what our biggest bottleneck was across all keystrokes. In this screenshot, you can see RaceTrack showing a number of “Input Delay” events at the top which represent characters typed, along with their duration and even what time of day they happened. The bottom pane shows that I’m looking at an aggregated stack of two events, breaking down what percentage of the combined delays were spent in the call stacks on the right.
For C#, it turned out that we were doing some aggressive calculations to figure out what to put in the completion list in order to help provide the right experience of knowing what you could legally type next. With this kind of focused data, we were able to rethink parts of what we were doing during each keystroke, optimizing some of our common code paths as well as eliminating some that we didn’t need to be doing at all.
Once we had that kind of tool and data in front of us, we were able to more effectively reason about what each language service was doing and make improvements. Here are the results of just three of our language services, expressed as a percentage of keystrokes above a given millisecond threshold. For the purpose of VS reacting instantly, lower is better:
[Table: for each of the three language services, the percentage of keystrokes above 50ms, 100ms, and 200ms in Visual Studio 2010 versus Visual Studio 11]
And here’s an alternate view that more graphically represents how many characters were processed in less than 50ms (the gray of the pie chart) and how the spread of characters above 50ms falls into buckets that we found meaningful (the bar chart). Notice that each language has the same scale in its before and after bar charts, but the scale between languages differs.
The main caveat from this data is that each language is using a large, representative solution, but this one test is by no means an exhaustive measurement of all typing in those languages. Your mileage may vary compared to what we’ve measured on our moderately-powered machines where we do regular measurements. If your local results don’t mesh with these numbers, you can find out how to help us diagnose problems toward the bottom of this post in the “Next Steps” section.
Having given the caveat, the main takeaways from this data are:
In addition to repeatedly running a single test per language on a daily basis, both to quantify improvements and to prevent regressions, we’ve also deployed an IDT-based monitoring service to internal users within Microsoft in order to get a more exhaustive measurement. This tool runs in the background, watching how long keystrokes and a few other high-frequency, short-duration actions take within VS, counting the number of occurrences within buckets, and then phoning that data home to a central server.
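The bucketing the service does can be sketched like this. The bucket edges below are chosen to match the 50/100/200ms thresholds discussed in this post; the real service’s buckets may differ.

```python
import bisect

# Assumed bucket edges in ms, mirroring the thresholds in the post.
BUCKET_EDGES_MS = [50, 100, 200]

def bucket_delays(delays_ms, edges=BUCKET_EDGES_MS):
    """Count keystroke delays into histogram buckets.

    Returns counts for [<=50ms, 51-100ms, 101-200ms, >200ms]: a compact
    summary that a background monitor can cheaply phone home instead of
    shipping every raw timing.
    """
    counts = [0] * (len(edges) + 1)
    for ms in delays_ms:
        counts[bisect.bisect_left(edges, ms)] += 1
    return counts
```

Shipping four counters per user per day is far cheaper than shipping raw timings, while still showing the shape of the distribution across the whole population.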
It turns out that the real world follows the trends of what we see above (with C++ performing better than C#), but not surprisingly, our testing lab doesn’t represent the real world exactly. When large numbers of developers use VS regularly across a spread of hardware configurations and usage patterns, we can see a small percentage of data points that I didn’t expect. For example, the first time I ran the report over the collected data, I was shocked to see that some keystrokes took longer than 30 seconds! It turned out that the folks who logged those values worked on the TFS team, and v1 of our service didn’t recognize when modal checkout dialogs came up between the beginning of a keystroke and the end of it.
For full transparency, here’s some data specifically from Beta:
The main takeaways from these charts are:
Using this data, we were able to track down some C# users having responsiveness issues. One thing that we found was an issue we fixed in the Beta Update to improve C# responsiveness in large solutions. The issue was a bug where we weren’t differentiating an empty dictionary of async-related extension methods from having no async-related extension methods in the solution. That caused us to search for those extension methods on every keystroke. We missed it because that’s a fast operation when the entire solution either fits within our symbol cache or has a smaller number of extension methods. These users had solutions with lots of non-async-related extension methods in a solution that didn’t fit within our cache. And while we haven’t yet collected enough data to regenerate charts with that fix in place, the qualitative assessment from some of the hardest-hit folks was that C# typing was noticeably more responsive after the Beta Update.
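To make that bug concrete, here’s a minimal Python sketch of the empty-versus-missing confusion. The class and method names are invented for illustration; the actual fix was in our C# language service, but the failure mode is the same in any language.

```python
class ExtensionMethodCache:
    """Hypothetical reconstruction of the bug described above."""

    def __init__(self, solution_search):
        self._search = solution_search  # expensive full-solution search
        self._cached = None             # None means "not computed yet"
        self.search_count = 0

    def get_buggy(self):
        # BUG: an empty result is falsy, so a solution with *no* matching
        # extension methods re-runs the search on every keystroke.
        if not self._cached:
            self.search_count += 1
            self._cached = self._search()
        return self._cached

    def get_fixed(self):
        # FIX: only None means "not computed"; an empty result is a valid,
        # cacheable answer, so the search runs at most once.
        if self._cached is None:
            self.search_count += 1
            self._cached = self._search()
        return self._cached
```

With the buggy check, every keystroke pays for the search whenever the answer happens to be empty; with the fix, the empty answer is cached like any other.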
The combination of putting the right telemetry into the product, along with tools that can read and process that data in a meaningful fashion helped us tremendously during the development of Visual Studio 11.
For situations when you want to report a responsiveness problem happening on your machine, we’ve come up with the Microsoft Visual Studio 11 Feedback Tool which can record ETL files. The Feedback Tool knows how to package up the traces along with some other info and send them back to Visual Studio as part of a problem report submission (via Connect) which will make the job of gathering diagnosable information much easier compared to Visual Studio 2010. On our end, once we receive your problem reports about responsiveness in scenarios like typing, we’ll open the ETL files using RaceTrack to analyze what was happening on your machine during actions that should be quick in order to try and resolve your issues.
Altogether, I hope that sharing some of our tools, methodologies and even dirty-laundry stories gives you a better understanding of how we’re working to make sure that Visual Studio 11 is as solid as it can be.
Eric Knox – Development Manager, Visual Studio Pro Team
Short Bio: Eric has been at Microsoft for 18 years, working on various parts of Visual Studio for the last 14 years. His current role is the development manager in charge of C#, VB, F# and the Visual Studio editor.
In the next post we will cover the changes the team made to improve the Toolbox and their effects throughout the product. Please let me know where you feel we still need improvement, as well as where you see noticeable performance improvements. I appreciate your continued support of Visual Studio.
Thanks, Larry Sullivan, Director of Engineering
If you are slower than a 4.77 MHz IBM PC running WordStar, you are not doing very well, are you?
You can hide GCs but it's a pay me now or pay me later thing. Something's going to pay.
It all may be nice. But with a bland UI as the front and no XP support (Both .net 4.5 and C++), what does it matter?
Those who read a lot of technical blogs are Enterprise Developers. These are the same people who will not be able to use VS 11 because of your Metro/Win RT single mindedness.
There is just no point in talking about all these "improvements" you make, when you refuse to make the "deal breaker" fixes.
Also, I love how you choose to ignore the MOUNTAIN of negative comments from the last two posts...
Just go on pretending we love it... Maybe if you wish enough it will come true.
Also, I am a Windows developer, but I sincerely hope you fail miserably. That ought to teach you that treating us like slaves won't get you far.
Hopefully I'll be laughing at you morons one year from now.
I'm happy with a 7-point-something % increase in responsiveness, though I doubt it'll be consciously noticeable. I've enjoyed working with the beta so far and am looking forward to trying out the RC.
Thanks for the update.
@Vaccanoll: not everyone is as intransigent as you and the rest of the crowd crying about how the colors have changed. I haven't found myself unable "to use VS 11 because of [the] Metro/WinRT single mindedness" at all.
I'm sure by now the VS team is sick of hearing the whining about the UI; your type has been quite vocal.
I for one am thrilled that the editor will be faster. Constantly typing faster than 2010 could handle was really annoying. This is a very welcome improvement.
Any chance the TFS checkout is async so it doesn't stop me from typing while it checks out the file?
Sounds like a pretty sizable improvement.
Out of curiosity is there any telemetry for comparison to VS2008? We are just now in the process of moving to VS2010, so most of our current development takes place in the 2008 version of the compiler.
I was aware that the 2010 editor was less responsive, but I'm curious to know by how much and further if Dev11 is on par with 2008 or faster at this point.
Thanks for sharing this information.
I like the approach of converting general (performance) feedback into specific measurable aspects and actually measuring the relevant data from test cases and real-world installations.
With this approach the optimization is guided by facts, not ideology.
The goal of this type of optimization is not to reach the absolute achievable minimum by pouring unlimited effort into it.
I don't want to miss features like syntax highlighting, some online code analysis, intellisense, code style/formatting checks.
I would like to see real-time background compilation and unit test execution and other features.
All these features improve my development productivity!
Clearly, performance problems reduce my development productivity, and they are very annoying.
Therefore the goal should be that typing and editing are perceived as responsive most of the time, with exceptions being very rare.
What is the upper threshold for key typing delays where humans (developers) still perceive typing as responsive?
For me it is not important how this goal is achieved (e.g. managed/unmanaged, mfc/wpf, Assembler/C/C++/C#/xyz, ...).
I expect the relevant developers to do their best to reach the goal. No more, but also no less.
Just a few comments on the topic of colors:
1. Performance is for me more important than a VS color schema
2. It would be nice to stick with the performance topic of this post
3. The RC will have more colors
PS: If someone likes the performance of WordStar and doesn't miss the features of VS, I suggest using Notepad and compiling with msbuild.
(notepad is also free!)
While the blog post is interesting and it's obviously good that these optimisations are being made, the fact that lag in typing response was ever allowed to become an issue in the first place is very disappointing. It's also worrying when it comes to trusting Microsoft's priorities and/or their continuing ability to develop fit-for-purpose tools.
Thanks for sharing the information. I'm pleased to see that performance is, at last, important. Who can work with an IDE that freezes for a few seconds every minute?
What you are doing to improve responsiveness seems to be very complicated. Have you considered dropping WPF, and writing the editor in C++ instead? ;)
4.77 MHz running WordStar is to an i7 running VS like Montezuma's revenge compared to Percocet constipation. Never mind that an i7 is nearly 1000 times faster by the clock, and 100 times faster by IPC; this is 30 years later, and it's slower.
Here is my own feedback:
1. C#-like IntelliSense in HTML: nah, it's not good. VS copied WebMatrix, but both frustrate users who like to work in VS. Notepad++ already does a good job.
2. You made VS dark, which is very good. My idea behind dark is to use the High Contrast themes in Windows, but many features get disabled when I use the High Contrast #1 theme in VS; for example, Ctrl+K, Ctrl+D does not work.
Good news for a change, although, as mentioned, other posts on this blog spoil the fun. No support for XP, no support for developing desktop apps in Express, etc, make sure we will never have a chance to check if the improvements you made to the editor even work for our code.
There have been many fantastic improvements in VS11. Kudos for that.
BUT a huge number of us won't be able take advantage of them for the years to come. XP support and C#/C++ Express are crucial.