The official source of product insight from the Visual Studio Engineering Team
A great way to get fast builds on a multiprocessor computer is to exploit as much parallelism in your build as possible. If you have C++ projects, there are two different kinds of parallelism you can configure.
Project-level parallel build, which is controlled by MSBuild, is set at the solution level in Visual Studio. (Visual Studio actually stores the value per user, per computer, which may not always be what you want: you may want different values for different solutions, and the UI doesn’t allow that.) By default, Visual Studio picks the number of processors on your machine. Experiment with slightly higher and lower numbers to see what gives the best speed for your particular code. Some people like to dial it down a little so that they can do other work while a build goes on.
This dial is just the same as in VS2008, although under the covers MSBuild is now taking over some of the work from Visual Studio.
If you’re building C++ or C++/CLI, there’s another place you can get build parallelism. The CL compiler supports the /MP switch, which tells it to build subsets of its inputs concurrently with separate instances of itself. The default number of buckets is, again, the number of CPUs, but you can also specify a number, like /MP5. Again, this was available before, so I’m going to just remind you where the value is and what it looks like in the MSBuild-format project file.
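For example (the file names here are invented for illustration), this compiles four files with up to four concurrent CL instances:

```
cl /MP4 /c a.cpp b.cpp c.cpp d.cpp
```

Note that the count is appended directly to the switch, with no space.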
Go to your project’s property pages, and to the C/C++, General page. For now I suggest that you select All Configurations and All Platforms. You can be more selective later if you want.
As usual, you can see what’s in the project file by unloading it, then right-clicking the node in Solution Explorer and choosing Edit:
Here’s what it looks like in the project file. Yes, it’s inside a configuration- and platform-specific block, but Visual Studio put the same value in all of them.
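The fragment looks roughly like this (the Debug|Win32 condition is just one of the blocks; each configuration and platform gets the same value):

```xml
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
  <ClCompile>
    <MultiProcessorCompilation>true</MultiProcessorCompilation>
  </ClCompile>
</ItemDefinitionGroup>
```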
Notice that it’s in an “ItemDefinitionGroup”. That MSBuild tag simply indicates it defines a “template” for items of a particular type. In this case, all items of type “ClCompile” will automatically have metadata MultiProcessorCompilation with value true unless they explicitly choose a different value.
By the way, MSBuild Items, in case you’re wondering, are just files, usually. Their subelements, if any, are the metadata. Here’s what some look like. Notice they’re in an “ItemGroup”:
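For example, a couple of source files in a typical C++ project look something like this (file names invented for illustration):

```xml
<ItemGroup>
  <ClCompile Include="stdafx.cpp">
    <!-- This sub-element is metadata on the stdafx.cpp item -->
    <PrecompiledHeader>Create</PrecompiledHeader>
  </ClCompile>
  <ClCompile Include="main.cpp" />
</ItemGroup>
```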
Because this is metadata, at an extreme I could actually set this on a per-file basis. In that case, MSBuild buckets together all the inputs that have a common value. You would need to disable /MP for particular files that use #import, for example, because #import is not supported with /MP. (Other features not supported with /MP are /Gm, which is incremental compilation, and a few other switches documented here.)
Note it’s under the “ItemGroup” because these are actual items:
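A per-file override looks like this (uses_import.cpp is a made-up name for a file that uses #import); the metadata on the item overrides the default from the ItemDefinitionGroup:

```xml
<ItemGroup>
  <ClCompile Include="uses_import.cpp">
    <!-- Opt this one file out of /MP -->
    <MultiProcessorCompilation>false</MultiProcessorCompilation>
  </ClCompile>
</ItemGroup>
```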
Back to multiprocessor CL. If you want to tell CL explicitly how many parallel compiles to do, Visual Studio lets you do that too; as with /MP itself, it’s exposed as a global setting:
Under the covers, VS passes this on by setting a property (a global property – it's not persisted) named CL_MPCount. That means it won't have any effect when building outside of VS.
If you want to choose a value at a finer-grained level, you can’t use the UI: it’s not exposed in the property pages or the command-line preview. You have to go into the project file editor and type it. It’s a different piece of metadata on the ClCompile items, named “ProcessorNumber”. It can be any number from 1 upwards, and its value is appended to /MP. If you don’t also have <MultiProcessorCompilation> set to true, it is ignored.
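Typed by hand, it looks something like this:

```xml
<ItemDefinitionGroup>
  <ClCompile>
    <MultiProcessorCompilation>true</MultiProcessorCompilation>
    <!-- Appends "4" to /MP, giving /MP4; ignored without MultiProcessorCompilation -->
    <ProcessorNumber>4</ProcessorNumber>
  </ClCompile>
</ItemDefinitionGroup>
```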
The squiggle here is a minor bug – ignore it.
The /MP settings come from the project files, so they work exactly the same on the command line. That’s part of the whole point of MSBuild, right, the same build on the command line as in Visual Studio? But the global parallelism setting that you set in Tools, Options does not affect the command line. You must pass it yourself to the msbuild.exe command with the /m switch. Again, the value is optional, and if you don’t supply one it uses the number of CPUs. However, unlike Visual Studio, out of the box, without /m supplied, msbuild.exe uses 1 CPU. That might change in the future.
To choose the number appended to /MP, you can set an environment variable, or pass a property, named CL_MPCount, just like Visual Studio does.
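For example, a command-line build might look like this (the solution name and counts are invented):

```
set CL_MPCount=4
msbuild MySolution.sln /m:4
```

Because MSBuild pulls environment variables in as properties, setting the variable and passing /p:CL_MPCount=4 have the same effect.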
Probably you’ll want to use /MP on more than one of your projects, and you don’t want to edit each individually. The Visual Studio solution to this kind of problem is property sheets. They don’t have any special connection to multiprocessor build, but it’s an opportunity for me to give a quick refresher using this as an example. First open the “Property Manager” from the View menu. Its exact location will vary depending on the settings you’re using; here’s where it is if you have C++ settings:
Right click on a project and choose “Add New Property Sheet”:
I gave mine the name “MultiprocCpp.props”. You’ll see it gets added to all configurations of this project. Right-click on it, and you’ll see the same property pages that the project has, but this time you’re editing the property sheet. Again, set “Multi-processor Compilation” to “Yes”. Close the property pages, select the property sheet in the Property Manager, and hit Save.
Now I can open up that new MultiprocCpp.props file in the editor, and I see this:
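The new .props file is small; it contains approximately this (a sketch of what Visual Studio generates):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <ClCompile>
      <MultiProcessorCompilation>true</MultiProcessorCompilation>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
```

Notice there is no configuration or platform condition here: the sheet applies to whatever configurations import it.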
(Again, ignore the squiggle.)
Looking in the project file, you can see the property sheet pulled in to each configuration, using an “Import” tag. Think of that just like a #include in C++:
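Each configuration block gains an Import, roughly like this:

```xml
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
  <Import Project="MultiprocCpp.props" />
</ImportGroup>
```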
So now we have the definition we put in the project file before, but in a reusable form. Given that, I can put it into all the projects I want in one shot, by multi-selecting in the Property Manager and choosing Add Existing Property Sheet:
Now all your projects compile with /MP!
In some circumstances, you might want to go beyond what you can easily do in the Property Manager. For example you might want to bulk-remove a property sheet, or put a property sheet in each project once outside of all the configurations. Fortunately MSBuild 4.0 has a powerful and complete object model over its files that you can use to do this kind of work in a few lines of code. More on that in a future blog post, but for now, if you want to take a look, point the Object Browser at Microsoft.Build.dll.
Before I leave property sheets, it’s worth mentioning that you can do this kind of common-importing in your own ways, if you don’t mind losing some of the UI support. For example, in the build of VS itself, we pull in a common set of properties at the top of every project, like this example from the project that builds msenv.dll (which contains much of the VS shell)
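I can’t reproduce the real file here, but the shape is an Import right at the top of the project body; the property and path below are placeholders for illustration only:

```xml
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Hypothetical common property sheet pulled in by every project in the tree -->
  <Import Project="$(BuildRoot)\Common.Build.props" />
  <!-- ...the rest of the project... -->
</Project>
```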
Within that we define all kinds of global settings, and import yet others. I’ll talk about this kind of structure in a future blog post about the organization of large build trees.
Usually the problem is getting enough parallelism to exploit all your machine’s cores. But the reverse problem is possible, and although it’s a nice problem to have, it needs fixing because it will cause your machine to thrash. Here’s what task manager might look like when this is happening:
In this case, on a box with 8 CPUs, I enabled /MP on all my projects in the solution and then built it with msbuild.exe /m (I didn’t need to use the command line to have this problem; the same could happen in Visual Studio). If dependencies don’t prevent it, MSBuild will kick off 8 projects at once, and in each of those CL will run 8 instances of itself at once, so we could have up to 64 copies of CL all fighting over my cores and my disk. Not a recipe for performance.
You can expect that one day the system will auto-tune itself here, but for now, if you have this problem, you’ll need to do some manual adjustment. Here are some ideas:
Reduce /m:4 to /m:3, for example, or use a property sheet to change /MP to /MP2, say. Easy, but a blunt instrument: if there are points elsewhere in your build where there is a lot of project parallelism but not much CL parallelism, or vice versa, you probably just slowed them down.
A project that compiles at a relatively parallelized point in the build is not such a good candidate for /MP, for example. You might adjust by configuration as well. Retail configuration can be much slower to build because the compiler’s optimizing more: that might make it interesting to enable /MP for Retail and not Debug.
In your team, you might have a range of hardware. Perhaps your developers have 2-CPU machines, but your nightly build is on an 8-CPU beast. Yet both need to build the same set of sources, and you don’t want any box to be either slow or thrashing. In this case, you can use environment variables, and Conditions on the MSBuild tags. Almost all MSBuild tags can have Conditions.
Here’s an example below. When a property “MultiprocCLCount” (which I just invented) has a value, and it’s greater than 0, /MP is enabled with that value.
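A sketch of what that might look like (remember, MultiprocCLCount is a property name I made up; any name will do):

```xml
<ItemDefinitionGroup Condition="'$(MultiprocCLCount)' != '' and '$(MultiprocCLCount)' &gt; '0'">
  <ClCompile>
    <MultiProcessorCompilation>true</MultiProcessorCompilation>
    <!-- Use the property's value as the /MP count -->
    <ProcessorNumber>$(MultiprocCLCount)</ProcessorNumber>
  </ClCompile>
</ItemDefinitionGroup>
```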
MSBuild pulls in all environment variables as its initial properties when it starts up. So on my fast machine, I set an environment variable MultiprocCLCount=8, and on my developer boxes, I set MultiprocCLCount=2.
The build machine’s script could also parameterize the /m switch going to MSBuild.exe, like /m:%MultiprocMSBuildCount%
Two other properties might be useful in exotic conditions: $(NUMBER_OF_PROCESSORS) is the number of logical cores on the box – this just comes from the environment variable. $(MSBuildNodeCount) is the value that was passed to /m on msbuild.exe or, within VS, the value from Tools > Options for project parallelism.
That’s it. I hope while walking you through /m and /MP I’ve also given you an overview of some MSBuild features and how much flexibility they give you to configure your build process.
Optimizing your build speed is a huge topic so look for more blogging on this subject from me.
Dan – Developer Lead, MSBuild
@Marcus "Our builds are 5-45 secs incremental, or 10-20 mins from clean (argh)."
Is this VS2008 or VS2010? Do you have numbers for each to compare? Just curious.
@Jonathan "The main thing running that might affect this is an antivirus service (as developers we'd like to get rid of it, but it is well and truly mandated by the powers that be). "
Build will never be fully robust with indexing/AV/antispyware running, as they try to take a read lock on a file after it is modified and before it is read. For example, the current default VC build process can embed a manifest in the output binary after linking it, which causes a read after a write and thus random AV conflicts. I always recommend excluding the relevant processes or the source tree from AV scanning, if you somehow can.
"The new incremental build mechanism is definitely something we will look forward to investigating. If it works well, then it could make parallel builds on a single box through VS be the best way to do nearly all incremental builds."
Let me know how it goes. email@example.com. The VS2010 RC is a free download.
@rhino I will pass on your comment about PCH sharing.
One final comment about incremental build perf: in VS2008 (?SP1) a native project referencing a managed project would not rebuild unless the managed assembly's public interface actually changed. VS2010 is missing this: we're looking into whether we can put out an extension to fill that gap. Until then, you'll see extra builds in that case.
I'm working on pulling our VC++ project from VS2005 to VS2010 right now. It's 20 projects, but only 7 of those matter as far as build time goes; the other 13 are tiny.
In VS05, we used Incredibuild; we pretty much had no choice, given 2005's lack of parallel build support for C++.
But the VS2010-compatible version of Incredibuild isn't out yet, plus we just got a batch of Xeon 5500-based developer workstations (2 sockets * 4 cores plus HyperThreading = 16 effective CPUs) which look hefty enough to do a purely local build, so I'm giving it a whirl.
The problem I'm hitting is that our DLLs are in dependency order: A < B < C < D < E < F. Incredibuild is smart enough to realize that it can start compiling (but not linking) B's cpp files even before it finishes linking A, but MSBuild isn't that smart. Incredibuild/VS2005 can build in 6 minutes (disconnected from the coordinator -- Incredibuild is scheduling, but all work is being done locally on this one machine) while MSBuild/VS2010 takes 7 and a half. It's frustrating watching Process Explorer -- that 7.5 minutes is, roughly, 45s of watching all 16 processors scream along compiling, followed by 20s of a single processor linking, repeated 7 times.
If I could get MSBuild to do the right thing, I believe it would be faster than Incredibuild -- Incredibuild actually slows down the links (because it generates extra intermediate PDB files) and it restarts the compiler for each file rather than using the same compile instance for multiple CPP files as CL.exe /MP does.
@Dick, re compiling
"Incredibuild is smart enough to realize that it can start compiling (but not linking) B's cpp files even before it finishes linking A, but MSBuild isn't that smart."
This is a very interesting point. I will try to see what customization it would take to make this work.
@Dan, I'd greatly appreciate it. I started looking at it myself the other day, but I'm a complete novice at MSBuild. Even some tips to point me in the right direction would be appreciated.
Dirk, I haven't forgotten this, and hopefully can look into it soon. When I figure it out, I'll make a new blog post.
Thanks for the update.
> The bottleneck now is building PCH's. Each project has its own PCH, and because of the way Visual C++ works, it is not possible to reuse PCH's across projects. This has been reported since at least VS2005 - I guess it will never be fixed...
I spoke to the compiler team, and they said: "I am not aware of any re-use problems with PCHs as long as they’re on the same machine. They are known to be non-portable across machines and this is by design. Projects are not interesting to the compiler so I’m mystified by the customer comment. More details would be interesting."
So if you want to send me a mail at firstname.lastname@example.org, I can link you up with them to figure out what's going on.
har, "make -j 4" rules ;)
Sharing PCH files between projects definitely doesn't work... or at least it's non-obvious how to do it. I have 4 DLLs and each one has the same precompiled header. Unfortunately it has to be compiled once for each DLL which can be a big pain.
It would be great if we could get the behavior like Dirk says (wrt A > B > C > D) as we have a similar scenario.
On a related note:
Is there a way to parallelize a custom build rule in 2010? We compile shaders as a custom step and I see no reason why the rest of the DLL couldn't be compiled while this is happening.
Are there any other tools than Incredibuild that allow building a solution on different machines? On a quad-core CPU our solution builds in 40 minutes. We are looking for ways to reduce compilation time. Incredibuild is really expensive.
It appears that /MP only works for one source folder in a project at a time. Suggestions?
Full problem description here: stackoverflow.com/.../msvc10-mp-builds-not-multicore-across-folders-in-a-project