For most large projects, the time spent in the link phase can be a significant portion of an application's overall build time. You can quickly determine this by adding the /time flag to the linker command line; the 'Final: Total time' line reports the total time spent in the link phase. There are essentially two primary scenarios to consider when looking at link time. The first is the developer iteration scenario, where the primary objective is to make the cycle from a code change to seeing that change running on the console as quick as possible. Developers are often willing to trade off executable size and code quality in order to reduce iteration times. The other scenario is producing a final build in preparation for release. Here, the amount of time the link takes is secondary to ensuring that an optimal binary is produced, both in size and in performance.
These two scenarios require the linker to be configured quite differently. This post describes a set of best practices that will help you get the most out of the Visual C++ linker, both when iterating during development and when producing a final release build. I will be covering this over a couple of posts, with this one covering the developer iteration scenario in some detail.
The key to optimal linker performance in the developer iteration scenario is to link the application incrementally. When linking incrementally, the linker directly updates the binaries produced on the previous link rather than building them from scratch. This approach is much faster because the linker is only updating the part of the existing binary that was impacted by the code changes rather than having to recreate the binary from its constituent objects and libraries from the ground up. In addition to incrementally updating the binary, the linker incrementally updates the corresponding PDB as well.
To enable the ability to add code to an existing binary on subsequent links, the linker inserts extra padding into a binary as it's being built. As a result, a binary built with incremental linking enabled will be larger than a binary built without incremental linking. In the developer iteration scenario, the additional size is generally accepted as a fair tradeoff for faster link times. However, larger binaries will take longer to deploy on remote hosts so you'll want to verify whether this tradeoff is acceptable in your particular scenario.
Even when the linker is properly configured to link incrementally, there are unfortunately still several factors that will force it to fall back to a full link (we are working on improving this). The remainder of this section describes the set of switches you'll use to turn on incremental linking and provides a set of guidelines to maximize the chance that incremental linking will succeed.
Incremental linking is turned on by passing the /INCREMENTAL switch on the linker command line. If you're building from within Visual Studio, /INCREMENTAL can be turned on using the Enable Incremental Linking property (Configuration Properties > Linker > General).
/INCREMENTAL is on by default in the Debug configuration for projects created using Visual Studio, and off by default for the Release and Profile configurations. Note also that /INCREMENTAL is implied if you have specified /DEBUG.
There are two switches you can use to get diagnostic information about the incremental linking process. The /verbose:incr switch will print various diagnostic messages you can use to determine when the linker had to abandon incremental linking and fall back to a full link. For example, one of the conditions that will cause the linker to fall back to a full link is the modification of a library that the binary being linked depends on (see Linking .libs below). If /verbose:incr is turned on, and a library has been changed, the following message will be displayed:
LINK : library changed; performing full link
If an incremental link is performed successfully, /verbose:incr produces no output.
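These diagnostics are easy to scan for mechanically. As a hypothetical illustration (the pattern below covers only the messages quoted in this post, not everything /verbose:incr can print), a build log could be checked like this:

```python
import re

# Matches lines such as "LINK : library changed; performing full link".
# The pattern is based on the messages quoted in this post; it is not an
# exhaustive list of the linker's diagnostics.
FULL_LINK_RE = re.compile(r"^LINK\s*:.*performing full link.*$", re.MULTILINE)

def full_link_reasons(build_log):
    """Return the 'performing full link' diagnostics found in a build log.

    An empty list suggests the link was incremental (or that /verbose:incr
    was not enabled, in which case the log carries no such messages).
    """
    return FULL_LINK_RE.findall(build_log)
```

Wiring a check like this into a build script makes it easier to notice when iteration builds silently stop linking incrementally.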
The other diagnostic switch, which I mentioned earlier, is /time. Among other things, /time displays information about each phase of the link. If you see phrases such as IncrPass in the link output when /time is specified, the title has been linked incrementally; the absence of such phrases means the linker performed a full link. Here's an example of the full output from /time on an incremental link:
Linker: IncrPass2: Interval #1, time = 0.04710s [C:\temp\IncrLink\Durango\Debug\IncrLink.exe]
Linker: Wait PDB close Total time = 0.02389s PB: 9494528 [C:\temp\IncrLink\Durango\Debug\IncrLink.exe]
Linker: IncrPass2: Interval #2, time = 0.11271s [C:\temp\IncrLink\Durango\Debug\IncrLink.exe]
Linker: Final Total time = 0.15984s < 632942532369 - 632942948644 > PB: 5312512 [C:\temp\IncrLink\Durango\Debug\IncrLink.exe]
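That output can also be checked programmatically. The sketch below assumes the 'IncrPass' and 'Final Total time' phrasing shown above, which may vary between linker versions:

```python
import re

def parse_link_time(time_output):
    """Parse linker /time output (as shown above).

    Returns (was_incremental, total_seconds); total_seconds is None if no
    'Final ... Total time' line was found. The 'IncrPass' heuristic and the
    regex below assume the output format quoted in this post.
    """
    was_incremental = "IncrPass" in time_output
    match = re.search(r"Final:?\s+Total time\s*=?\s*([0-9.]+)s", time_output)
    total_seconds = float(match.group(1)) if match else None
    return was_incremental, total_seconds
```

Tracking these numbers over time is a cheap way to spot when a change to your build silently disables incremental linking.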
To summarize, the three recommended linker switches to use when linking incrementally are:

1. /INCREMENTAL
2. /verbose:incr
3. /time
It's also worth noting that there may be cases where you can eliminate the /DEBUG option, which causes the linker to generate a PDB file. The time the linker spends producing the .pdb file has been shown to be a significant portion of overall link time, so if you have scenarios where the debug information will not be used, excluding the /DEBUG flag will reduce your link time by skipping .pdb generation.
Even with all the recommended switches defined, there are still several factors that could cause the linker to do a full link instead of an incremental link. This section describes those factors and how to prevent them from occurring.
Visual C++ ships with a 32-bit linker and a 64-bit linker. The 64-bit linker should be used if at all possible. Incremental linking is much more likely to succeed with the 64-bit linker, primarily because of the increased address space. The larger address space is important for two reasons. First, the 64-bit linker can map many more objects and libraries into memory than the 32-bit linker can (running out of address space is one reason incremental linking fails more often with the 32-bit linker).
The second reason the increased address space is important for incremental linking relates to the loading of linker data structures. When linking incrementally, the linker saves some of its internal data structures to an .ilk file. On subsequent links, the linker tries to load the contents of that file into the same memory location as in the previous run. If the file can't be loaded at the same location, the incremental link will fail. The 64-bit address space makes it much more likely that the linker can load the contents of the .ilk file at the desired address.
To verify that the 64-bit linker is being used, add /Bv to the compiler (not linker) command line. The following line in your build output confirms that the 64-bit linker is being used:
C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\BIN\amd64\link.exe: Version 11.00.65501.17015
Note that the version number in the above line may change between versions of Visual Studio.
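Since the /Bv line embeds the linker's path, a build script can sniff the host bitness from the directory containing link.exe. This is a heuristic based on the toolset layout shown above (an 'amd64' or 'amd64_*' directory means an x64-hosted tool), so treat it as a sketch rather than an official check:

```python
from pathlib import PureWindowsPath

def linker_host_is_64bit(bv_line):
    r"""Guess the linker's host bitness from a /Bv output line.

    Heuristic: an x64-hosted linker lives in an 'amd64' or 'amd64_*'
    directory (e.g. ...\VC\BIN\amd64\link.exe), while x86-hosted tools
    live in 'bin' or a cross directory such as 'x86_amd64'.
    """
    exe_path = PureWindowsPath(bv_line.split(": Version")[0].strip())
    host_dir = exe_path.parent.name.lower()
    return host_dir == "amd64" or host_dir.startswith("amd64_")
```

Run against the two sample lines in this post, it reports the first as 64-bit hosted and the second (x86_amd64, the x86-hosted cross linker) as not.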
The linker provides various switches to enable optimizations at link time. Using any of these switches will disable incremental linking. Specifically, avoid using /opt:ref, /opt:icf, /order, and /LTCG (link-time code generation) in the developer iteration scenario. If you use one of these switches while /INCREMENTAL is on, you'll see output like the following when you build:
LINK : warning LNK4075: ignoring '/INCREMENTAL' due to '/OPT:REF' specification
The /opt:icf and /opt:ref linker optimizations remove identical and unreferenced COMDATs. The compiler can only optimize away data or a function if it can prove that the data or function will never be referenced. Unless /LTCG is enabled, the compiler's visibility is limited to a single module (.obj), so for data and functions that have global scope, the compiler never knows whether other modules will use them. As a result, the compiler can never optimize them away.
In contrast, the linker has a good view of all the modules that will be linked together, so it is in a good position to optimize away unused global data and unreferenced functions. However, the linker manipulates the binary at the section level, so if unreferenced data or functions are mixed with other data or functions in a section, the linker won't be able to extract and remove them. To equip the linker to remove unused global data and functions, each global data member or function is placed in a separate section. These sections are called COMDATs. These optimizations require the linker to collect and analyze reference information across all input modules, which makes them impractical when linking incrementally.
The /order switch can be used to specify the order in which to lay out certain COMDATs. Because specifying this switch can require large-scale changes to the binary's layout, it disables incremental linking.
Link-time code generation (/LTCG) causes the linker to do whole-program optimization. One common example of an optimization enabled by /LTCG is the inlining of functions across modules. As with many of the other linker optimizations, incremental linking is disabled when /LTCG is turned on because the linker must analyze references across multiple input files. Turning off link-time code generation requires changes to both the linker and the compiler command lines: /LTCG must be removed from the linker command line and /GL must be removed from the compiler command line.
The linker's ability to link incrementally will be significantly hampered if your title links in libraries (.lib files). The most significant impact of using libraries, as far as incremental linking is concerned, is that any change made to any library will cause the linker to abandon incremental linking and do a full link.
The reason a change to a library disables incremental linking has to do with how the linker resolves the symbols that a given binary references. When an .obj is linked in, all symbols in the .obj file are copied into the binary the linker is building. But when a .lib is linked in, only the symbols that the binary references from the library are linked in.
If a library is changed, there is the possibility that a symbol which was previously resolved from that library may now come from another library. In addition, the linker always tries to resolve symbols starting with the library that referenced the symbol. So if a reference moves from one lib to another, there is the possibility that several other references must move as well. When faced with the possibility that so much may have changed, the linker abandons the incremental link.
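To make the first-match behavior concrete, here is a toy model of link-order symbol resolution (an illustration only, not how the real linker is implemented):

```python
def resolve_symbol(symbol, libraries):
    """Toy model: the first library in link order that defines the symbol
    provides it. This mirrors the ordering effect described above while
    ignoring the linker's real, more involved resolution rules.

    `libraries` is an ordered list of (name, set_of_defined_symbols).
    """
    for name, defined in libraries:
        if symbol in defined:
            return name
    return None  # unresolved external
```

With libraries [A, B] both defining a symbol, A provides it; change or reorder the libraries and the provider can move, which is exactly the churn that makes the linker give up and relink fully.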
It is also possible that a change to a library may not impact symbol lookup at all. While it's technically possible for the linker to do extensive analysis to determine what has changed and what the impact is, there is a tradeoff between the time spent trying to determine whether the incremental link can be preserved and just starting over with a full link. Having said that, if you do change .libs on a constant basis, we do provide a way to link incrementally in Visual Studio: enable the 'Use Library Dependency Inputs' project property.
Changing the set of options passed to the linker will always cause a full link, even if the new set of switches is fully compatible with incremental linking. Likewise, changing the set of objects and libraries that are linked together to form the binary will always cause a full link. If you have /verbose:incr on, you'll see messages like the following when you change the set of link inputs:

LINK : object file added; performing full link
The linker requires several artifacts from the previous build in order to link incrementally. In particular, you must preserve:

1. the binary produced by the previous link
2. the .pdb from the previous link
3. the .ilk file from the previous link
The binary and the .pdb from the previous build are required because without them there is nothing for the linker to update incrementally. The .ilk file is needed because it contains state the linker saved from the previous build: when linking incrementally, the linker writes a copy of some of its internal data structures to an .ilk file, which you'll find in your build output, and it must have access to that state in order to do the next incremental link.
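A pre-flight check before an iteration build can confirm these artifacts are still in place. The helper below is hypothetical and assumes the default naming, where the .pdb and .ilk sit next to the output binary with the same base name:

```python
from pathlib import Path

def incremental_artifacts_present(binary_path):
    """Report which previous-build artifacts needed for incremental linking
    exist on disk: the binary itself, its .pdb, and its .ilk.

    Assumes default output naming (foo.exe -> foo.pdb and foo.ilk alongside).
    """
    binary = Path(binary_path)
    required = (binary, binary.with_suffix(".pdb"), binary.with_suffix(".ilk"))
    return {path.name: path.exists() for path in required}
```

If any entry comes back False, the next link will be a full one regardless of the switches you pass.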
When a link begins, the linker will open the .ilk file and attempt to load it at the same address it was loaded at during the previous link. If the .ilk file can't be found, or if it can't be loaded at the required address, the linker will fall back to a full link. The '/verbose:incr' switch can help you detect cases in which a full link was done because one of the outputs of the previous build could not be found. For example, if the .pdb is deleted you'll see the following in the build output:
LINK : program database C:\temp\abc.pdb missing; performing full link
While we here at Microsoft work towards improving linker performance, the do's and don'ts above should help you get better link throughput today. In a follow-up post I will get into tips for improving link performance in build-lab and production-release scenarios, so stay tuned! Lastly, if you would like us to blog about other linker-related scenarios, or are just curious and have a few more questions about linker performance, please feel free to reach out to me. I will do my best to answer them.
There is some good advice here. Thanks for writing this up.
I tried /Bv and it looks like my project is using the 32-bit linker:
1> C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_amd64\link.exe: Version 11.00.60610.1
You forgot to show us the option to set in the Solution to switch to using the 64-bit linker/tools. Can you show us that so I can move to using the better linker?
Thanks, here is how you do it:
1. Once the project is opened up in Visual Studio, right click on the project in solution explorer and unload the project.
2. Once the project has been unloaded right click on the project in solution explorer and select 'edit *.vcxproj'
3. In the .vcxproj file look for the following nodes:
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
Please note that 'Debug|x64' here means this is the node for the Debug configuration when building the x64 target. If you have a custom build configuration, the name will change accordingly.
To enable using the x64 host toolset, add the following property:

<UseNativeEnvironment>true</UseNativeEnvironment>

Once this property is added, the node will look like this:

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
  <UseNativeEnvironment>true</UseNativeEnvironment>
</PropertyGroup>
Repeat this step for any other '*|x64' nodes.
Once finished, reload the project and build again. This time /Bv should report that you are using the 64-bit toolset (i.e. 64-bit linker and compiler).
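If there are many projects to touch, the same edit can be scripted. This is a sketch using Python's xml.etree; the UseNativeEnvironment property name is taken from this thread, and the script only touches '*|x64' PropertyGroup nodes with Label="Configuration":

```python
import xml.etree.ElementTree as ET

MSBUILD_NS = "http://schemas.microsoft.com/developer/msbuild/2003"

def add_use_native_environment(vcxproj_text):
    """Return vcxproj XML with <UseNativeEnvironment>true</UseNativeEnvironment>
    added to every '*|x64' Configuration PropertyGroup that lacks it."""
    ET.register_namespace("", MSBUILD_NS)  # keep the default MSBuild namespace
    root = ET.fromstring(vcxproj_text)
    for group in root.findall(f"{{{MSBUILD_NS}}}PropertyGroup"):
        condition = group.get("Condition") or ""
        if group.get("Label") != "Configuration" or "|x64'" not in condition:
            continue
        if group.find(f"{{{MSBUILD_NS}}}UseNativeEnvironment") is None:
            prop = ET.SubElement(group, f"{{{MSBUILD_NS}}}UseNativeEnvironment")
            prop.text = "true"
    return ET.tostring(root, encoding="unicode")
```

As with any script that rewrites project files, run it on a copy first; ElementTree reserializes the file and may normalize whitespace and attribute order.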
Thank you Ankit!
Editing the project did indeed switch to using the x64 native compiler.
Also FYI for others, adding UseNativeEnvironment does not seem to be honored by VS 2012 but is honored in VS 2013.
We use .libs heavily and with previous versions of Visual Studio have not found a way to use incremental linking successfully. In particular, if we set 'Use Library Dependency Inputs' to 'Yes', it seems that all object files that make up the libraries are passed as input object files to the linker. The linker then treats these object files as input that has to be included in the resulting executable file. This is different from what happens with .libs, where parts of the library that resolve references will be included but the rest will be ignored.
This breaks many linking strategies, for instance defining the same symbol in two libraries so that those executables linking with libraries A and B in that order will pick up the symbol from library A, but other executables linking only with B will pick it up from B. Using 'Use Library Dependency Inputs' in the first case results in a failed link due to multiply defined symbols.
Is there any improvement in this area in Visual Studio 2013 ?
If 'UseNativeEnvironment' is only supported on VS2013, is there any way to get VS2010 and VS2012 to use the 64-bit toolchain from the IDE? There's some information on MSDN about doing 64-bit toolchain builds from the command line, but I've never been able to get the IDE to honour these settings/options.
Do you have any suggestions Ankit?
Your explanation in the comments of how to select the 64-bit linker suggests that this only works for 64-bit targets -- is this true? That is, can we use the 64-bit linker when linking 32-bit code?
Also, can you confirm that the 64-bit linker can only be selected in VS 2013?
Finally, any plans to expose this in the IDE, or make it the default? Most developers won't see this information.
Bruce, you may be interested to learn that VC 2013 shipped x64-hosted, x86/ARM-targeting cross-toolchains (compilers and linkers). We apparently didn't add dedicated shortcuts for them, but you can request them with:
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" amd64_x86
or amd64_arm for ARM. As I'm not a compiler dev, I don't know what happens if you try to mix-and-match x86-hosted compilation with x64-hosted linking (especially with LTCG and/or PCHes).
Will the follow-up blog post include any mention of the impacts of linking large-ish static libraries? For instance, is one one hundred meg static lib better than five twenty meg libs? Does the answer differ if LTCG is involved?
Can the build timings be directly fed into a TFS database? We'd use that to see the build/link times for our build definitions over time. This would need to break down build timings into solution, subproject, link and compile.
@Bruce, VS2013 introduces x64-based cross compilers. Having said that, when using VS2012 you can take advantage of the 64-bit toolset: if you set _IsNativeEnvironment to true, the x64 tools will be used when building from the command line and the IDE.
We want to make the 64-bit tools the default moving forward when building x86/x64 targets. This will apply to both the IDE and the CLI.
@Ralph, that is an interesting idea. Can you provide more details on the kind of project and application that you are building? :)