allen@msft

random bits, rants and raves

Posts
  • Important timeout tweaks for controller agent

    Depending on your network, you may run into test run failures associated with agent/controller communication. This primarily happens due to network latency, high data transfer between the controller and agent, or high CPU utilization on the agent. In such cases you can tweak some of the timeouts to improve the resiliency of the system.

    Increase the agent timeout settings:

    [Controller machine] In QTController.exe.config (located in <drive letter>:\Program Files (x86)\Microsoft Visual Studio <Visual Studio Version>\Common7\IDE\):

        <add key="AgentConnectionTimeoutInSeconds" value="120"/>

        <add key=" AgentConnectionTimeoutInSeconds" value="120"/>

        <add key="AgentSyncTimeoutInSeconds " value="300"/>

    Triple these values.

    [Agent machine] In QTAgentService.exe.config (located in <drive letter>:\Program Files (x86)\Microsoft Visual Studio <Visual Studio Version>\Common7\IDE\):

        Set ControllerConnectionPeriodInSeconds to 90 (tripling this one as well).
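
    For illustration, the corresponding entry in the appSettings section of QTAgentService.exe.config would look something like this (the value shown is the tripled setting from above):

        <add key="ControllerConnectionPeriodInSeconds" value="90"/>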

    Restart the services.

    Apart from this, increasing the bucket size will also help reduce controller/agent communication.

  • Getting started with Fakes using Visual Studio

    These are a good set of references for anyone looking to get started with Fakes:

    http://www.peterprovost.org/blog/2012/04/15/visual-studio-11-fakes-part-1/

    http://www.peterprovost.org/blog/2012/04/25/visual-studio-11-fakes-part-2/

    http://www.peterprovost.org/blog/2012/11/29/visual-studio-2012-fakes-part-3/

  • Test controller/agent usage recommendations

    Over the years many people have asked us what the best way is to configure and use the test controller and agent for remote test execution. Here is a summary of the best practices that we advocate.

    Setup and Settings

    If you are on the latest VS 2012 Update 1+ or on VS 2013, you can skip steps 1 and 2.

    1. Ensure that Visual Studio 2010 SP1 is installed on all machines
      • Dev10 SP1 - http://www.microsoft.com/en-us/download/details.aspx?id=23691
    2. Install the latest patch available
      • http://support.microsoft.com/kb/2801364
    3. Controller setup
      • Open "<VSINSTALLDIR>\Common7\IDE\QTControllerService.exe.config" and add the following entry under the appSettings section

      <add key="ControllerJobSpooling" value="false"/>

      • Restart the controller
    4. Agent setup
      • Install the latest patch available
      • Open "<VSINSTALLDIR>\Common7\IDE\QTAgentService.exe.config" and add the following

      <add key="RestartTestExecutionProcessForEachRun" value="true"/>

      • Restart the agent
    5. Client setup
      • Install the latest patch available
      • Open "<VSINSTALLDIR>\common7\ide\mstest.exe.config" and add the following key

      <add key="DeleteTestDeploymentFiles" value="yes" />

    6. In your test settings, keep the bucket size at 1000. This results in a test run going to a smaller set of agents.

      <Execution>

          <Buckets size="1000"/>

    7. Set test run timeouts to ensure no runs go beyond acceptable limits.

      <Execution>

          <Timeouts runTimeout="7200000" />

      7200000 = 2 hrs, specified in milliseconds

    8. Run your tests in parallel if your agents are on a multi-processor system (see the consolidated sketch after this list).

      <Execution parallelTestCount="0">

      http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx
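
      Putting the test settings tweaks together, the Execution section of your .testsettings file would look roughly like the sketch below (illustrative only; include whichever of the elements and attributes above apply to your run, and keep the rest of your existing settings as they are):

      <Execution parallelTestCount="0">
          <Buckets size="1000"/>
          <Timeouts runTimeout="7200000"/>
      </Execution>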

    Other Recommendations

    1. Test Controller keeps all results for a run in memory. It is therefore necessary to split test runs into logical groups. We recommend you split tests into assemblies so that each test DLL has <=5000 tests, and/or run the test controller in 64-bit mode
      1. testcontrollerconfig.exe configure <… configuration parameters… > /platform64

    2. When any component of the test rig (any of the agents or the controller) faces an issue (network disconnect, deployment failure, test crash), the current test run is aborted. The probability of a test run getting aborted due to an agent failure goes down when a run is sent to a smaller set of agents, so keep a run limited to the minimum number of test assemblies.
    3. To scale out, use multiple such rigs and break up your test runs amongst these rigs as required.
  • Getting the status of your test agents

    Users routinely want to check the status of the test agents against which they are scheduling their remote test execution. The existing way of doing this was to fire up Visual Studio and open the “Manage Test Controllers” dialog.

    With Visual Studio 2013 you can now check the status of your agents on the controller itself using the command line.

    Syntax-
    TestControllerConfig.exe status [/testController:<testControllerUri>]

    testController              URI of the test controller. Default is localhost:6901
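
    For example, to query a remote controller (the machine name below is just a placeholder):

    TestControllerConfig.exe status /testController:MyControllerMachine:6901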

    Sample output

    Microsoft (R) Visual Studio Test Controller Configuration Tool
    Version 12.0…. for Microsoft Visual Studio 2013
    Copyright (c) Microsoft Corporation.  All rights reserved.

    Total number of agents         : 1

    Status of agents :
    Agent name                     : vstfs:///…
    Agent status                   : Disconnected


    Summary of all agents :
    Ready                          : 0
    Running tests                  : 0
    Offline                        : 0
    Deploying build                : 0
    Disconnected                   : 1

    The agent statuses correspond to the existing known states listed here.

  • Test Controller 2012 Update 3+

    New features introduced

    1. Visual Studio Test Controller 2012 Update 3+ now supports backward compatibility with TFS servers, so you can connect your 2013 test controller to TFS 2012+.
    2. The test controller now works with hosted builds that have server drop locations.
  • Empty .coverage file with profiler related errors in the event logs

    If you find yourself with an empty .coverage file and see errors similar to the below in your event logs, you most probably have a corrupt install.

    (info) .NET Runtime version 4.0.30319.17929 - The profiler has requested that the CLR instance not load the profiler into this process. Profiler CLSID: '{b19f184a-cc62-4137-9a6f-af0f91730165}'. Process ID (decimal): 12624. Message ID: [0x2516].

    (Error) TraceLog Profiler failed in initialization due to a lack of instrumentation methods, process vstest.executionengine.x86.exe

     

    Check

    a) Environment variable VS110COMNTOOLS is set to <vsinstalldir>\common7\tools

    b) Regkey HKLM\SOFTWARE\Microsoft\VisualStudio\11.0\InstallDir is set to your <vsinstalldir>\Common7\IDE\

    c) covrun32.dll and covrun64.dll exist in "<vsinstalldir>\Team Tools\Dynamic Code Coverage"
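
    A quick way to run these checks from a command prompt (a sketch only; substitute <vsinstalldir> with your actual install path, and note that on a 64-bit OS the registry key may live under HKLM\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\11.0 instead):

    rem (a) should print <vsinstalldir>\common7\tools
    echo %VS110COMNTOOLS%
    rem (b) should print <vsinstalldir>\Common7\IDE\
    reg query "HKLM\SOFTWARE\Microsoft\VisualStudio\11.0" /v InstallDir
    rem (c) both covrun32.dll and covrun64.dll should be listed
    dir "<vsinstalldir>\Team Tools\Dynamic Code Coverage\covrun*.dll"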

  • Troubleshooting missing data in Code Coverage Results

    The Code Coverage tool in Visual Studio 11 instruments native and managed binaries (DLLs/EXEs) whenever they are loaded at runtime, provided they meet certain criteria. Code coverage information is collected for these binaries.

    At the end of the Code Coverage run you see code coverage within the Code Coverage results window. The total code coverage as well as code coverage for each binary are reported.

     

    However, in some cases you may end up with a .coverage file that shows 0% coverage in the Code Coverage Results window, with an error similar to “Empty results generated: No binaries were instrumented …”

    This means code coverage results were not obtained for any binary.

     

    In this article we will list the most common reasons for this problem and provide resolutions for each.

    1. No tests were executed
    2. PDBs (symbol files) are unavailable/missing
    3. Using an instrumented or optimized binary
    4. Code executed is not managed (.NET) or native (cpp) code
    5. A custom .runsettings file with exclusions is being used

     

    Reason

    No tests were executed

    Analysis

    Check your output window – Select “Tests” in the “Show Output from:” dropdown to see if there are any warnings or errors logged.

    Explanation

    The Dev11 Code Coverage engine is dynamic in nature. What this means is that instrumentation occurs only for those binaries that are loaded into memory.

    However, if none of the tests are executed, there is nothing available for Code Coverage to report.

    Resolution

    Verify that the tests run fine, without Code Coverage turned on, by clicking “Run All”. Fix any issues you find here before using “Analyze Code Coverage”.

     

    Reason

    PDBs (symbol files) are unavailable/missing

    Analysis

    Check that all the modules within the solution have their associated PDBs available alongside the binary

    Explanation

    The Dev11 Code Coverage engine requires that every module have its associated PDB available and accessible during execution. If the PDB is unavailable, we skip the module and do not provide any data for it.

    Note: PDBs have a direct association with a DLL via the build. The DLL and PDB should be from the same build for the PDB to be recognized as valid.

     

    If the PDBs are not installed alongside the binaries, but are present on some share, configure the code coverage settings to specify the path to pick the PDBs from. See “Customizing Code Coverage in Visual Studio 11”.

    Resolution

    Install the PDBs alongside the binaries, or customize the code coverage settings by specifying the path to pick the PDBs from.
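
    For example, a symbol path can be supplied in the CodeCoverage section of a .runsettings file roughly as below (the share path is a placeholder; see the “Customizing Code Coverage in Visual Studio 11” article referenced above for the full schema):

    <CodeCoverage>
      <SymbolSearchPaths>
        <Path>\\mybuildshare\symbols</Path>
      </SymbolSearchPaths>
    </CodeCoverage>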

     

    Reason

    Using an instrumented or optimized binary

    Analysis

    Check if the binary has undergone any form of advanced optimization (like the BBT optimization, Profile Guided Optimization, etc.) or has been instrumented by a profiling tool like vsinstr.exe.

    Explanation

    If a binary has already been instrumented or optimized by another profiling tool, e.g. vsinstr.exe, we skip the binary and do not include it as part of the coverage results.

     

    Code coverage cannot be obtained for such binaries.

     

    Resolution

    Use the non-instrumented / non-optimized version of the binary.

     

    In fact, it is recommended to turn off all optimizations and run code coverage with a non-optimized (CHK) build to get the best results.

     

    Reason

    Code executed is not managed (.NET) or native (cpp) code

    Analysis

    Verify whether your project has any .NET or C++ unit tests being run.

    Explanation

    Code Coverage in Visual Studio 11 is available only for .NET and native (C++) code. If you are working with a language that does not fall into either of these categories, Code Coverage will not be available for code written in it.

     

    This can typically happen if you are developing tests with a third party unit test adapter extension.

    Resolution

    None available

      

    Reason

    A custom .runsettings file with exclusions is being used

    Analysis

    Verify whether you are using a .runsettings file and have specified any exclusion rules that leave your DLLs out of instrumentation.

    Explanation

    You can run your unit tests with a custom .runsettings file with the Code Coverage configuration specified in the DataCollectors node. In the Code Coverage configuration we allow exclusion of DLLs based on name, company name, public key token, etc. You might have, by mistake, excluded all your DLLs, or missed including some of them here.

    Resolution

    Fix your .runsettings to have the minimal set of exclusions that do not exclude your DLLs, or explicitly include the DLLs you want coverage for.
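
    As a rough illustration, the Code Coverage data collector configuration in a .runsettings file follows the shape below (the module names are placeholders; the Include/Exclude patterns are what typically need fixing):

    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="Code Coverage">
            <Configuration>
              <CodeCoverage>
                <ModulePaths>
                  <Include>
                    <ModulePath>.*MyProduct.*\.dll$</ModulePath>
                  </Include>
                  <Exclude>
                    <ModulePath>.*ThirdParty.*\.dll$</ModulePath>
                  </Exclude>
                </ModulePaths>
              </CodeCoverage>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>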

     

     

    Further Analysis

    To know why a specific DLL was not included by code coverage, you can further analyze the generated .coverage file using our command line utility, CodeCoverage.exe:

    cd “<VSInstallDir>\Team Tools\Dynamic Code Coverage Tools”

    CodeCoverage.exe analyze /include_skipped_modules my.coverage > analysis.xml

    my.coverage is the .coverage file generated for your test run, found in the TestResults folder.

     

    Here /include_skipped_modules will provide information about each and every module that was inspected by the Code Coverage engine, along with a reason why it was skipped. These reason codes are to be interpreted as below:

     

    Reason                         Resolution

    no_symbols                     See “PDBs (symbol files) are unavailable/missing”

    public_key_token_is_excluded
    path_is_excluded
    company_name_is_excluded       See “A custom .runsettings file with exclusions is being used”

    optimized_or_instrumented      See “Using an instrumented or optimized binary”

    nothing_instrumented           No code in the DLL could be instrumented.
                                   See “Code executed is not managed (.NET) or native (C++) code”

    instrumentation_failure        The engine itself failed due to some internal error. Please try again.

     

    If the output file is empty, see “No tests were executed”.

  • Lab Management is now available

    Visual Studio 2010 Lab Management is now available for download.

     

    So how do you get started?

    Install Visual Studio 2010 Team Foundation Server 10.0 and SCVMM Admin Console on a machine.

    Set up a SCVMM Server on a machine.

    Install the Visual Studio 2010 Ultimate SKU to get the Microsoft Test Manager client, and the agent SKUs. Find the detailed steps at the MSDN download page.

    Then download and apply the patch on all your client and server machines (available here). This has all the right fixes, making it sturdy enough to be used in your live environments.

     

    Soma is excited, and so is Brian, and so should you be. Try it out and send us your feedback while we keep working hard on the next release.

  • Using the TFS - Best Practice Analyzer (BPA) for Lab Management

    As announced earlier the TFS Best Practice Analyzer (BPA) has been released for RC. You can install those bits from here.

    One of the new and useful features in the RC release is the ability to exclusively run the Lab Management checks.

    [Screenshot: scan types selection]

    This new option will reduce your scan times when debugging lab related issues.

    We are looking for some early feedback on the existing functionality and suggestions moving forward. So download the bits and give it a whirl.

    Note: The prerequisite for the lab scan is the VMM Admin Console 2008 R2.

  • Lab Management Dictionary

    This is an attempt to shed some light on the variety of nouns introduced by Lab Management, as is common with every new product. The list below primarily contains terms that you would encounter while accessing the SDK exposed by Lab Management, i.e. it targets developers.

    SCVMM : System Center Virtual Machine Manager aka VMM

    Host Group [HG] : A logical entity which groups a set of hosts. Host groups can be hierarchical in nature, and hosts cannot be shared across disjoint host groups.

    Library Server : A host which exposes UNC shares that would be used as resources to be managed by the SCVMM server.

    Library Share [LS] : A subset of the UNC paths on library servers that are explicitly added to the SCVMM server.

    Integration Components : A set of drivers installed on virtual machines to provide better integration with the host housing the virtual machine.

    Virtual Network : A virtual network allows you to configure various network topologies for virtual machines and hosts. Virtual networks can be external, internal, or private.

    Network Location : Each physical network adapter has a network location associated with it, e.g. xyz.com. This is something new introduced in Vista.

    TeamProjectCollectionHostGroup [TPCHG] / TeamProjectCollectionLibraryShare [TPCLS] : Logical lab entities at the TeamProjectCollection level which map onto related SCVMM host groups and library shares. You would want a 1-1 mapping of a library share or host group on SCVMM to a TPCLS or TPCHG. It is not recommended to have multiple TPCLS/TPCHG objects mapping to the same LS/HG objects on SCVMM.

    For a given project collection you can add multiple such TPCHGs and TPCLSs

    TeamProjectHostGroup [TPHG] / TeamProjectLibraryShare [TPLS] : Logical entities at the TeamProject level which map onto TPCHG and TPCLS objects. Think of these objects as being contained within their corresponding TPC objects.

    For a given project you can add multiple such TPHGs and TPLSs


    Bird's View : The above objects are essentially a hierarchy of logical objects, one contained within another most of the time, as shown here:

    MyCollection {TeamProjectCollection}

        MyTPCHG1 {TeamProjectCollectionHostGroup -> internally maps to HG1 on SCVMM}

        MyTPCLS1 {TeamProjectCollectionLibraryShare -> internally maps to LS1 on SCVMM}

        MyProject {TeamProject}

            MyTPHG1 {TeamProjectHostGroup -> internally mapping to MyTPCHG1 and therefore to HG1 on SCVMM}

            MyTPLS1 {TeamProjectLibraryShare -> internally mapping to MyTPCLS1 and therefore to LS1 on SCVMM}

     
