Buck Hodges

Visual Studio ALM (VSALM, formerly VSTS) - Team Foundation Service/Server (TFS) - MSDN

Posts
  • Buck Hodges

    Authentication in web services with HttpWebRequest

    • 20 Comments

    Hatteras has three tiers: client, middle, and data.  The middle tier is an ASP.NET web service on a Windows 2003 Server running IIS 6.  When the client (we use C# for both it and the middle tier) connects to the middle tier, it must authenticate with IIS 6.  Depending upon the IIS configuration, that may be negotiate, NTLM, Kerberos, basic, or digest authentication.  Here's a page on Internet Authentication in .NET.

    NOTE:  The code below uses the .NET 2.0 framework (Visual Studio 2005).

    In order to authenticate, the client must have credentials that the server recognizes as valid.  For Windows Integrated Authentication (comprising NTLM and Kerberos) using the current logged-on user, the client can use CredentialCache.DefaultCredentials.  Here's how it looks in code.

    using System;
    using System.IO;
    using System.Net;
    
    class Creds
    {
        public static void Main(string[] args)
        {
            Uri uri = new Uri(args[0]);
    
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            req.Credentials = CredentialCache.DefaultCredentials;
    
            // Get the response.
            using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
            {
                StreamReader sr = new StreamReader(res.GetResponseStream());
                Console.WriteLine(sr.ReadToEnd());
            }
        }
    }

    You can find that same type of sample code in MSDN.  However, it gets more interesting if you want to use basic or digest authentication or use credentials other than the current logged-on user.

    One interesting fact is that the HttpWebRequest.Credentials property is of type ICredentials, but it only uses instances of NetworkCredential and CredentialCache.  If you implement ICredentials on your own class that is not one of those two classes, you can assign it to the Credentials property, but HttpWebRequest will silently ignore it.

    To go further, we need to look at the CredentialCache class itself.  This class is used to hold a set of credentials that are associated with hosts and authentication types.  It has two static properties, DefaultCredentials (which we used above) and DefaultNetworkCredentials, each of which returns the "authentication credentials for the current security context in which the application is running," which means the logged-on user in our case.

    The difference is very subtle.  The documentation for DefaultCredentials says, "The ICredentials instance returned by DefaultCredentials cannot be used to view the user name, password, or domain of the current security context."  The instance returned by DefaultNetworkCredentials, being new in .NET 2.0 and of type NetworkCredential, would presumably let you get the user name and domain, but it didn't work for me when I tried it with the following code (UserName returned an empty string).

    Console.WriteLine("User name: " + CredentialCache.DefaultNetworkCredentials.UserName);

    The NetworkCredential class implements both the ICredentials (NetworkCredential GetCredential(Uri uri, String authType)) and ICredentialsByHost (NetworkCredential GetCredential(String host, int port, String authType)) interfaces.  The ICredentialsByHost interface is new in .NET 2.0.
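    To illustrate those two interfaces concretely (a small sketch not tied to any server; the "me"/"foo"/"DOMAIN" values are placeholders), note that a plain NetworkCredential simply returns itself from both GetCredential overloads, regardless of the URI, host, port, or authentication type you pass:

    ```csharp
    using System;
    using System.Net;

    class CredDemo
    {
        public static void Main()
        {
            NetworkCredential cred = new NetworkCredential("me", "foo", "DOMAIN");

            // ICredentials: look up a credential by URI and authentication type.
            NetworkCredential byUri = cred.GetCredential(new Uri("http://localhost"), "Basic");

            // ICredentialsByHost (new in .NET 2.0): look up by host, port, and type.
            NetworkCredential byHost =
                ((ICredentialsByHost)cred).GetCredential("localhost", 80, "Basic");

            // A standalone NetworkCredential returns itself in both cases.
            Console.WriteLine(byUri.UserName);  // me
            Console.WriteLine(byHost.Domain);   // DOMAIN
        }
    }
    ```

    The per-host/per-scheme lookup behavior only becomes interesting when the credentials live in a CredentialCache, as shown next.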

    The CredentialCache class has methods that let you add, get, and remove credentials for particular hosts and authentication types.  Using this class, we can manually construct what setting req.Credentials = CredentialCache.DefaultCredentials accomplished in the original example.

            CredentialCache credCache = new CredentialCache();
            credCache.Add(new Uri("http://localhost"), "Negotiate",
                          CredentialCache.DefaultNetworkCredentials);
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            req.Credentials = credCache;

    The authentication type can also be explicitly specified as "NTLM" and "Kerberos" in separate calls to Add().  This page on authentication schemes explains using Negotiate as follows.

    Negotiates with the client to determine the authentication scheme. If both client and server support Kerberos, it is used; otherwise NTLM is used.
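    For example, here is a sketch of populating the cache with the explicit schemes instead of "Negotiate" (same localhost URI as above).  The cache hands back the matching entry for a given URI prefix and scheme, and null when nothing matches:

    ```csharp
    using System;
    using System.Net;

    class CacheDemo
    {
        public static void Main()
        {
            CredentialCache credCache = new CredentialCache();

            // Register the default credentials for each scheme in separate calls
            // rather than a single "Negotiate" entry.
            credCache.Add(new Uri("http://localhost"), "NTLM",
                          CredentialCache.DefaultNetworkCredentials);
            credCache.Add(new Uri("http://localhost"), "Kerberos",
                          CredentialCache.DefaultNetworkCredentials);

            // Lookup matches on URI prefix and authentication type.
            NetworkCredential ntlm =
                credCache.GetCredential(new Uri("http://localhost/service"), "NTLM");
            NetworkCredential basic =
                credCache.GetCredential(new Uri("http://localhost/service"), "Basic");

            Console.WriteLine(ntlm != null);   // True: NTLM entry was added
            Console.WriteLine(basic == null);  // True: no Basic entry exists
        }
    }
    ```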

    Let's say you want to work with basic or digest authentication.  The documentation for CredentialCache.DefaultCredentials and CredentialCache.DefaultNetworkCredentials says that neither will work with basic or digest.  If we add basic to the credential cache, we get a runtime exception.

            credCache.Add(new Uri("http://localhost"), "Basic",
                          CredentialCache.DefaultNetworkCredentials);

    The exception is thrown by the Add() method.

    Unhandled Exception: System.ArgumentException: Default credentials cannot be supplied for the Basic authentication scheme.
    Parameter name: authType
    at System.Net.CredentialCache.Add(Uri uriPrefix, String authType, NetworkCredential cred)

    So, in order to use basic or digest, we must create a NetworkCredential object explicitly, which is also what we need to do in order to authenticate as some identity other than the logged-on user.  We then add it to the CredentialCache as follows.

            credCache.Add(new Uri("http://localhost"), "Basic" /* or "Digest" */,
                          new NetworkCredential("me", "foo", "DOMAIN"));

    Basic authentication sends the password across the wire in plain text.  That's okay for a secure connection, such as one using SSL, and for situations where you don't need much security.  Digest authentication hashes the password along with other data from the server before sending a response over the wire.  It's a significant step up from basic.

    Now we need to have the user name and password to create the NetworkCredential object.  There are two parts to this.  First is prompting the user for the name and password.  The second is storing the information.  For prompting there is the Windows dialog that pops up any time you go to a web site that requires authentication.  That dialog includes a "Remember my password" checkbox.  I don't yet know what the managed API is for that.

    To store and retrieve the information, there is the new managed DPAPI explained by Shawn Farkas in several blog postings.
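    A minimal sketch of the managed DPAPI (System.Security.Cryptography.ProtectedData, new in .NET 2.0; the password string "foo" here is just a placeholder) looks like this:

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class DpapiDemo
    {
        public static void Main()
        {
            // Encrypt the password so that only the current user can decrypt it.
            byte[] plain = Encoding.UTF8.GetBytes("foo");
            byte[] cipher = ProtectedData.Protect(plain, null,
                                                  DataProtectionScope.CurrentUser);

            // ...store cipher somewhere (file, registry), read it back later...

            byte[] roundTrip = ProtectedData.Unprotect(cipher, null,
                                                       DataProtectionScope.CurrentUser);
            Console.WriteLine(Encoding.UTF8.GetString(roundTrip)); // foo
        }
    }
    ```

    The second parameter is optional additional entropy; passing a secret value there means both the user's DPAPI key and that value are required to decrypt.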

    [Update 3:44pm]  The Windows dialog used when IE prompts for name and password is created by the CredUIPromptForCredentials() function.  CredUIConfirmCredentials() is used to save credentials that authenticated successfully, if desired.  Duncan Mackenzie's MSDN article Using Credential Management in Windows XP and Windows Server 2003 explains how to use it from .NET.

    [UPDATE 4/10/2006]  I updated the MSDN links that were broken.

  • Buck Hodges

    Visual Studio setup projects (vdproj) will not ship with future versions of VS

    • 51 Comments

    [UPDATE 04/18/14] The Visual Studio team has released an extension to VS 2013 to address the feedback on this, which has been loud and clear for a long time now: Visual Studio Installer Projects Extension.

    [UPDATE 11/6/12] Fixed broken links.

    At the user group meeting last night, someone asked about the future of WiX.  There was some confusion over the future of WiX since at one point there was a plan to ship it, but then that changed.  Rob Mensching, who leads the WiX project, is a developer on Visual Studio, and Visual Studio continues to contribute to the WiX project.  We use WiX to create the installation packages for VS and a bunch of other stuff.

    The Visual Studio setup projects will not ship again – VS 2010 was the last release with support for it.  So, you’ll want to make plans to migrate to something else.  Of course, I’d suggest looking into WiX, and there are other options as well.  The MSDN page Choosing a Windows Installer Deployment Tool contains a table showing a comparison of VS setup projects, WiX, and InstallShield Limited Edition.

    Caution

    Future versions of Visual Studio will not include the Visual Studio Installer project templates. To preserve existing customer investments in Visual Studio Installer projects, Microsoft will continue to support the Visual Studio Installer projects that shipped with Visual Studio 2010 per the product life-cycle strategy. For more information, see Expanded Microsoft Support Lifecycle Policy for Business & Development Products.

  • Buck Hodges

    Power Toy: tfpt.exe

    • 23 Comments

    [UPDATE 8/9/2007]  I fixed the broken link to the power tools page. 

    [UPDATE 9/8/2006]  TFPT is now available in its own small download: http://go.microsoft.com/?linkid=5422499!  You no longer need to download the VS SDK.  You can find more information about the September '06 release here.

    Back at the start of October, I wrote about the tfpt.exe power toy.  The beta 3 version has been released with the October SDK release.  In the future, we plan to have a better vehicle for delivering it.

    Here's the documentation, minus the screenshots, in case you are trying to decide whether to download the SDK.  The documentation is included in the SDK release as a Word document, including screenshots of the various dialogs (yes, most commands have a GUI, but you can still use the commands from scripts by specifying the /noprompt option).

    Review

    The only command not documented is the review command, which is very handy for doing code reviews.  When you run "tfpt review" you get a dialog with a list of your pending changes that you can check off as you diff or view each one.

    I hope you find these useful.  Please leave a comment, and let us know what you think.

    Team Foundation PowerToys

    Introduction

    The Team Foundation PowerToys (TFPT) application provides extra functionality for use with the Team Foundation version control system. The Team Foundation PowerToys application is not supported by Microsoft.

    Five separate operations are supported by the TFPT application: unshelve, rollback, online, getcs, and uu. They are all invoked at the command line using the tfpt.exe application. Some of the TFPT commands have graphical interfaces.

    Unshelve (Unshelve + Merge)

    The unshelve operation supported by tf.exe does not allow shelved changes and local changes to be merged together. TFPT’s more advanced unshelve operation allows this to occur under certain circumstances.

    If an item in the local workspace has a pending change that is an edit, and the user uses TFPT to unshelve a change from a shelveset, and that shelved change is also an edit, then the changes can be merged with a three-way merge.

    In all other cases where changes exist both in the local workspace and in the shelveset, the user can choose between the local and shelved changes, but no combination of the changes can be made. To invoke the TFPT unshelve tool, execute

    tfpt unshelve

    at the command line. This will invoke the graphical interface for the TFPT unshelve tool:

    Running TFPT Unshelve on a Specified Shelveset

    To skip this dialog, you can specify the shelveset name and owner on the command line, with

    tfpt unshelve shelvesetname;shelvesetowner

    If you are the owner of the shelveset, then specifying the shelveset owner is optional.

    Selecting Individual Items Within a Shelveset for Unshelving

    If you specify a shelveset on the command line as in “Running TFPT Unshelve on a Specified Shelveset,” or if you select a shelveset in the window above and click Details, you are presented with the Shelveset Details window, where you can select individual changes within a shelveset to unshelve.

    You can check and uncheck the boxes beside individual items to mark or unmark them for unshelving. Click the Unshelve button to proceed.

    Unshelve Conflicts

    When you press the Unshelve button, all changes in the shelveset for which there is no conflicting local change will be unshelved. You can see the status of this unshelve process in the Command Prompt window from which you started the TFPT unshelve tool.

    There may, however, be conflicts which must be resolved for the unshelve to proceed. If any conflicts are encountered, the conflicts window is displayed:

    Edit-Edit Conflicts

    To resolve an edit-edit conflict, select the conflict in the list view and click the Resolve button. The Resolve Unshelve Conflict window appears.

    For edit-edit conflicts, there are three possible conflict resolutions. 

    • Taking the local change abandons the unshelve operation for this particular change (it would be as if the change had not been selected for unshelving).
    • Taking the shelved change first undoes the local change, and then unshelves the shelved change. This results in the local change being completely lost.
    • Clicking the Merge button first attempts to auto-merge the two changes together, and if it cannot do so without conflict, attempts to invoke a pre-configured third-party merge tool to merge the changes together. The local change is not lost by choosing to merge – if the merge fails, the local change remains.

    The Auto-Merge All Button

    The Auto-Merge All button is enabled when there are edit-edit conflicts remaining that are unresolved. Clicking the button goes through the edit-edit conflicts and attempts to auto-merge the changes together. For each conflict, if the merge succeeds, then the conflict is resolved. If not, then the conflict is marked as “Requires Manual Merge.” In order to resolve conflicts marked as “Requires Manual Merge,” you must select the conflict and click the Resolve… button. Clicking the Merge button will then start the configured third-party merge tool. If no third-party merge tool is configured, then the conflict must be resolved by selecting to take the local change or take the shelved change.

    Generic Conflicts

    Any other conflict (a local delete with a shelved edit, for example) is a generic conflict that cannot be merged.

    There is no merge option for generic conflicts. You must choose between keeping the local change and taking the shelved change.

    Aborting the Unshelve Process

    Because the unshelving process makes changes to the local workspace, and because the potential exists to discover a problem halfway through the unshelve process, the TFPT Unshelve application makes a backup of the local workspace before starting its execution if there are pending local changes. This backup is stored as a shelveset on the server. In the event of an abort, all local pending changes are undone and the backup shelveset is unshelved to the local workspace. This restores the workspace to the state it was in before the unshelve application was run.

    The backup shelveset is named by adding _backup and then a number to the name of the shelveset that was unshelved. For example, if the shelveset TestSet were unshelved, the backup shelveset would be named TestSet_backup1. Up to 9 backup shelvesets can exist for each shelveset.

    With the backup shelveset, changes made during an unshelve operation can be undone after the unshelve is completed but before the changes are checked in, by undoing all changes in the workspace and then unshelving the backup shelveset:

    tf undo * /r

    tf unshelve TestSet_backup1

    Rollback

    Sometimes it may be necessary to undo a checkin of a changeset. This operation is not directly supported by Team Foundation, but with the TFPT rollback tool you can pend changes which attempt to undo any changes made in a specified changeset.

    Not all changes can be rolled back, but in most scenarios the TFPT rollback command works. In any event, the user is able to review the changes that TFPT pends before checking them in.

    To invoke the TFPT rollback tool, execute

    tfpt rollback

    at the command line. This will invoke the graphical user interface (GUI) for the TFPT rollback tool. Please note that there must not be any changes in the local workspace for the rollback tool to run. Additionally, a prompt will be displayed to request permission to execute a get operation to bring the local workspace up to the latest version.

    The Find Changesets window is presented when the TFPT rollback tool is started. The changeset to be rolled back can be selected from the Find Changesets window.

    Specifying the Changeset on the Command Line

    The Find Changesets window can be skipped by supplying the /changeset:changesetnum command line parameter, as in the following example:

    tfpt rollback /changeset:3

    Once the changeset is selected, either by using the Find Changesets window or specifying a changeset using a command-line parameter, the Roll Back Changeset window is displayed.

    Each change is listed with the type of change that will be counteracted by a rollback change.

    To roll back a:               The tool pends a:

    Add, Undelete, or Branch      Delete
    Rename                        Rename
    Delete                        Undelete
    Edit                          Edit

    Unchecking a change in the Roll Back Changeset window marks it as a change not to be rolled back. There are cases involving rolling back deletes which may result in unchecked items being rolled back. If this occurs, clear warnings to indicate this are displayed in the command prompt window. If this is unsatisfactory, undo the changes pended by the rollback tool.

    When the changes to roll back have been checked appropriately, pressing the Roll Back button starts the rollback. If no failures or merge situations are encountered, then the changes should be pended and the user returned to the command prompt:

    Merge scenarios can arise when a rollback is attempted on a particular edit change to an item that occurred in-between two other edit changes. There are two possible edit rollback scenarios: 

    1. An edit is being rolled back on an item, and the edit to roll back is the latest change to the content of the item. 

    This is the most common case. Most rollbacks are performed on changesets that were just checked in. If the edit was just checked in, it is unlikely that another user has edited it in the intervening time.

    To roll back this change, an edit is pended on the item, and the content of the item is reverted to the content from before the changeset to roll back. 

    2. An edit is being rolled back on an item, and the edit to roll back is not the latest change to the content of the item. 

    This is a three-way merge scenario, with the version to roll back as the base, and the latest version and the previous version as branches. If there are no conflicts, then the changes from the change to roll back (and only the change to roll back) are extracted from the item, preserving the changes that came after the change to roll back. 

    In the event of a merge scenario, the merge window is displayed:

    To resolve a merge scenario, select the item and click the Merge button. An auto-merge will first be attempted, and if it fails, the third-party merge tool (if configured) will be invoked to resolve the merge. If no third-party merge tool is configured, and the auto-merge fails, then the item cannot be rolled back:

    The Auto-Merge All button attempts an auto-merge on each of the items in the merge list, but does not attempt to invoke the third-party merge tool.

    Failures

    Any changes which fail to roll back will also be displayed in the same window.

    Online

    With Team Foundation, a server connection is necessary to check files in or out, to delete files, to rename files, etc. The TFPT online tool makes it easier to work without a server connection for a period of time by providing functionality that informs the server about changes made in the local workspace.

    Non-checked-out files in the local workspace are by default read-only. The user is expected to check out the file with the tf checkout command before editing the file.

    When working offline with the intent to sync up later by using the TFPT online tool, users must adhere to a strict workflow: 

    • Users without a server connection manually remove the read-only flag from files they want to edit. Non-checked-out files in the local workspace are by default read-only, and when a server connection is available the user must check out the file with the tf checkout command before editing it. When working offline, use the DOS command “attrib -r” instead.
    • Users without a server connection add and delete files they want to add and delete. If not checked out, files selected for deletion will be read-only and must be marked as writable with “attrib -r” before deleting. Files which are added are new and will not be read-only.
    • Users must not rename files while offline, as the TFPT online tool cannot distinguish a rename from a deletion at the old name paired with an add at the new name.
    • When connectivity is re-acquired, users run the TFPT online tool, which scans the directory structure and detects which files have been added, edited, and deleted. The TFPT online tool pends changes on these files to inform the server what has happened.  

    To invoke the TFPT online tool, execute 

    tfpt online

    at the command line. The online tool will begin to scan your workspace for writable files and will determine what changes should be pended on the server.

    By default, the TFPT online tool does not detect deleted files in your local workspace, because to detect deleted files the tool must transfer significantly more data from the server. To enable the detection of deleted files, pass the /deletes command line option.

    When the online tool has determined what changes to pend, the Online window is displayed.

    Individual changes may be deselected here if they are not desired. When the Pend Changes button is pressed, the changes are actually pended in the workspace.

    Important Note: If a file is edited while offline (by marking the file writable and editing it), and the TFPT online tool pends an edit change on it, a subsequent undo will result in the changes to the file being lost. It is therefore not a good idea to try pending a set of changes to go online, decide to discard them (by doing an undo), and then try again, as the changes will be lost in the undo. Instead, make liberal use of the /preview command line option (see below), and pend changes only once.

    Preview Mode

    The Online window displayed above is a graphical preview of the changes that will be pended to bring the workspace online, but a command-line version of this functionality is also available. By passing the /preview and /noprompt options on the command line, a textual representation of the changes that the TFPT online tool thinks should be pended can be displayed.

    tfpt online /noprompt /preview

    Inclusions

    The TFPT online tool by default operates on every file in the workspace. Its focus can be more directed (and its speed improved) by including only certain files and folders in the set of items to inspect for changes. Filespecs (such as *.c, or folder/subfolder) may be passed on the command line to limit the scope of the operation, as in the following example:

    tfpt online *.c folder\subfolder

    This command instructs the online tool to process all files with the .c extension in the current folder, as well as all files in the folder\subfolder folder. No recursion is specified. With the /r (or /recursive) option, all files matching *.c in the current folder and below, as well as all files in the folder\subfolder folder and below will be checked. To process only the current folder and below, use

    tfpt online . /r

    Exclusions

    Many build systems create log files and/or object files in the same directory as source code which is checked in. It may become necessary to filter out these files to prevent changes from being pended on them. This can be achieved through the /exclude:filespec1,filespec2,… option.

    With the /exclude option, certain filemasks may be filtered out, and any directory name specified will not be entered by the TFPT online tool. For example, there may be a need to filter out log files and any files in object directories named “obj”.

    tfpt online /exclude:*.log,obj

    This will skip any file matching *.log, and any file or directory named obj.

    GetCS (Get Changeset)

    The TFPT GetCS tool gets all the items listed in a changeset at that changeset version.

    This is useful in the event that a coworker has checked in a change which you need to have in your workspace, but you cannot bring your entire workspace up to the latest version. You can use the TFPT GetCS tool to get just the items affected by his changeset, without having to inspect the changeset, determine the files listed in it, and manually list those files to a tf.exe get command.

    There is no graphical user interface (GUI) for the TFPT GetCS tool. To invoke the TFPT GetCS tool, execute

    tfpt getcs /changeset:changesetnum

    at the command line, where changesetnum is the number of the changeset to get.

    UU (Undo Unchanged)

    The TFPT UU tool removes pending edits from files which have not actually been edited.

    This is useful in the event that you check out fifteen files for edit, but only actually make changes to three of them. You can back out your edits on the other twelve files by running the TFPT UU tool, which compares hashes of the files in the local workspace to hashes the server has to determine whether or not the file has actually been edited.

    There is no graphical user interface (GUI) for the TFPT UU tool. To invoke the TFPT UU tool, execute

    tfpt uu

    at the command line. You can also pass the /changeset:changesetnum argument to compare the files in the workspace to a different version.

    Help

    Help for each TFPT tool, as well as all its command-line switches, is available at the command line by running

    tfpt help

    or for a particular command, with

    tfpt help <command>

    or

    tfpt <command> /?

  • Buck Hodges

    TFS 2008: A basic guide to Team Build 2008

    • 33 Comments

    Patrick Carnahan, a developer on Team Build, put together the following guide to the basic, as well as a few advanced, features of Team Build in TFS 2008.  It's a great way to get started with continuous integration and other features in TFS 2008.

    Team Build – Continuous Integration

    One of the new and most compelling features of Team Foundation Build is the out-of-the-box support for continuous integration and scheduling. A few in-house approaches have been built around the TFS SOAP event mechanism, most likely set to listen for check-in events and evaluate whether or not a build should be performed. The disadvantage of that approach is the limited speed and accuracy with which build decisions can be made.

    For all of the following steps, it is assumed that the ‘Edit Build Definition’ dialog is currently active. To activate this dialog, expand the ‘Builds’ node of the team project to which your V1 build type belongs (or to which you want to add a new V2 build definition) and click on ‘Edit Build Definition’ as shown below.

    Setting up the workspace

    Setting up a workspace is pretty important to the continuous integration build, since this is how the server determines when to automatically start builds on your behalf. Although the default workspace mapping is $/<Team Project Name> -> $(SourceDir), something more specific should be used in practice. For instance, in the Orcas PU branch you should use (at a minimum) the mapping $/Orcas/PU/TSADT -> $(SourceDir).

    What is this $(SourceDir) variable? Well, in V1 the build directory was specified per build type, meaning that the build type was built on the same directory no matter what machine the build was performed on. In V2 we have separated out the idea of a build machine into a first-class citizen called a Build Agent; this is where the build directory is stored. The variable $(SourceDir) is actually a well-understood environment variable by Team Build, and is expanded to: <BuildAgent.BuildDirectory>\Sources (more on the Build Directory and environment variables later). Typically you will want to make use of $(SourceDir) and keep everything relative to it, but there is no restriction that forces this upon you.

    Just so we’re on the same page with the workspace dialog, a picture has been included below. Those of you familiar with version control workspaces should feel right at home!

    Already have a workspace ready to go? Simply select the ‘Copy Existing Workspace’ button to search for existing workspaces to use as a template. The local paths will be made relative to $(SourceDir) automatically!

    Defining a Trigger

    The trigger defines how the server should automatically start builds for a given build definition.


    The first option should be self-explanatory, and keeps the build system behaving just like V1 (no automatic builds). The other options are as follows.

    • The 'Build each check-in' option queues a new build for each changeset that includes one or more items which are mapped in a build definition's workspace.
    • 'Accumulate check-ins', otherwise known as 'Rolling Build', will keep track of any checkins which need to be built but will not start another build inside of the defined quiet period. This option is a good way to control the number of builds if continuous integration is desired but you want a maximum number of builds per day (e.g. 12 builds per day, which would require a quiet period of 120 minutes) or your builds take much longer than the typical time between checkins.
    • Standard scheduled builds will only occur if checkins were made since the previous build. Don't worry about this rule affecting the first build, however, because the system will ensure that the first build is started right on time.
    • Scheduled builds can optionally be scheduled even if nothing changes. This is useful when builds are dependent on more than what is in version control.

    Drop Management

    One of the side effects of a continuous integration system is that a greater number of builds will be created. In order to manage the builds and drops created through CI we have introduced a feature in Team Build called drop management.

    Drop management is enabled through a concept we call Retention Policy in Team Build, which allows you to define how many builds to retain for a particular build outcome. For instance, you may only want to keep the last 5 successful builds and only one of any other kind. Through retention policy you can define this by setting the appropriate values in the dialog shown above and the server will manage the automatic deletion of builds for you.

    What if you don’t want a build to be susceptible to automatic deletion? We have an option available on individual builds if you would like to retain a build indefinitely (which happens to be exactly what the menu option is called). To do this go to the ‘Build Explorer’ in Visual Studio 2008 (available by double clicking a node under the Builds node in the Team Explorer window), right click on the build, then select ‘Retain Indefinitely’. Once this option has been toggled on you will see a padlock icon next to the build.

    If you decide that the build is no longer useful, simply toggle the option off for the build and let drop management take care of the build automatically.

    Advanced Topics

    1. Automated UI Tests

    Automated UI tests cannot be run in a build agent running as a Windows service, since the service is not interactive, meaning that it cannot interact with the desktop. So, we have provided the ability to run an interactive instance of the service out of the box!

    The first thing you will need to do is create a new build agent on the server. To do this you should right click on the ‘Builds’ node for your team project, then click ‘Manage Build Agents’. Once this dialog comes up, click the ‘Add’ button which will bring you to the dialog below.

    The display name can be anything descriptive you want. For instance, if the machine name is ‘BuildMachine02’ you may choose to name the build agent ‘BuildMachine02 (Interactive)’ so you can easily differentiate between the interactive and non-interactive agents. The computer name should be the computer name of the machine on which the build service resides, and the port should be changed to 9192 since it is the default interactive port. When changing the port you may see a dialog with the message below, which may be safely disregarded in this case since you’ll be using the default interactive port.

    TF226000: After you change the port that is used by Team Foundation Build, you must also update the build service on the build computer to use the new port. For more information, see http://go.microsoft.com/fwlink/?LinkId=83362 .

    To run the build service in interactive mode, start a command prompt on the build computer in the directory %PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies (if you do not want to run the build agent as yourself, be sure to spawn the command prompt as the correct user using the ‘runas’ command). Once you’re in that directory as the appropriate user, just run ‘TFSBuildService.exe’, which will output something similar to the following:

    C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies>TFSBuildService.exe

    Press the Escape key (Esc) to exit...

    Once you see that prompt, your interactive agent is ready to go. You’ll need to leave that command running. Any time you need to run automated UI tests, be sure to target the correct agent!

    2. Build Directory Customization

    In TFS 2005, you could specify the root build directory to use in the build type definition file, TfsBuild.proj. During the build, the root build directory would automatically be expanded to:

    <Root Build Directory>\<Team Project Name>\<Build Type Name>

    This was not configurable; it was done this way to ensure uniqueness across build types being built on the same machine. The sources, binaries, and build type were then brought into subfolders named ‘Sources’, ‘Binaries’, and ‘BuildType’, respectively. Since the resulting paths could get quite long, this led to quite a few unavoidable path name length errors.

    In TFS 2008 we have made it easier to customize the build directory on the agent through the use of two well-known environment variables:

    $(BuildDefinitionPath), which expands to <TeamProjectName>\<DefinitionName>

    $(BuildDefinitionId), which is the unique identifier by which the definition may be referenced (an Int32)

    The build directory is no longer automatically guaranteed to be unique by the system unless one of these two variables is used. This approach has some great advantages: in particular, since $(BuildDefinitionId) is merely an Int32, using it significantly reduces the length of paths at the build location!

    There is another method by which this path may be reduced even more if the source control enlistment requires it. If you take a look at the file %PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\TfsBuildService.exe.config on the build computer, you will see some settings which may be of interest to you.

    <add key="SourcesSubdirectory" value="Sources" />

    <add key="BinariesSubdirectory" value="Binaries" />

    <add key="TestResultsSubdirectory" value="TestResults" />

    These control the names of the subdirectories that will house the sources, binaries, and test results. If you need an even shorter path, you could name the sources subdirectory ‘s’!

    NOTE: Be sure to restart the Team Build service (the Windows service or the interactive service, as needed) after making changes to this file in order for the changes to take effect!

    3. Port Configuration

    For Orcas we have changed the communication method for the build agent (the build agent is the service installed and running on the build computer). In TFS 2005 the Team Build Server communicated with the build machines via .NET Remoting. In TFS 2008 we changed this to use SOAP+XML over HTTP, just like the other Team Foundation services. In order to do this, we have switched to using Windows Communication Foundation self-hosting functionality to expose the service end-point without requiring Internet Information Services (IIS) on the build computer. There is a new series of steps which must be followed in order to change the port for an existing TFS 2008 build agent.

    1. Disable the build agent on the server using the 'Manage Build Agents' option in Visual Studio 2008 described above. This will keep the server from starting new builds on the machine, but will let existing builds finish. This way you do not accidentally stop an in-progress build from finishing.
    2. Once there are no builds running, issue a 'Stop' command to the build Windows service, either using the Windows Services control panel or from the command line using the "net stop vstfbuild" command.
    3. Navigate a command prompt to the directory %PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies.
    4. Open the file TFSBuildService.exe.config, and look for a child element in the <appSettings> section with the key="port" (case sensitive!). Change the value of that key to the desired port and remember the value for the next step.
    5. In the same directory there will be an executable named wcfhttpconfig.exe. This will ensure that the appropriate URL ACLs are in place to allow the service account to listen for incoming HTTP requests. The command you should issue is: 'wcfhttpconfig reserve <domain>\<user name> <port number>'. You must run this command as a local administrator.
    6. Now issue a 'Start' command to the Windows service.
    7. Change the status of the build agent back to 'Enabled' using the 'Build Agent Properties' dialog and click OK.

    You can find the official docs on MSDN at How to: Configure an Interactive Port for Team Foundation Build.

    NOTE: Changing the port of the Interactive Service is exactly the same as the instructions above with one exception: the appSettings key you will need to modify is ‘InteractivePort’.
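    For reference, the relevant entries in TFSBuildService.exe.config look something like the fragment below. The 9192 interactive default matches the port mentioned earlier; 9191 is shown as an assumed value for the service port, so check your own file rather than copying these values:

```xml
<appSettings>
  <!-- Port the build agent's endpoint listens on when running
       as the Windows service (assumed value shown). -->
  <add key="port" value="9191" />
  <!-- Port used when TFSBuildService.exe is run interactively. -->
  <add key="InteractivePort" value="9192" />
</appSettings>
```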

    [UPDATED 9/07/07]  I've added links to the official docs on changing the port.

  • Buck Hodges

    TFS 2010 server licensing: It's included in MSDN subscriptions

    • 76 Comments

    [UPDATE 2/10/2010]  You can now get the official Visual Studio 2010 Licensing whitepaper, which also covers TFS, Lab, and IntelliTrace. That is the best resource for understanding the licensing.

    Another big piece of news with the release of VS and TFS 2010 betas yesterday is the changes to TFS licensing for 2010 that make it even more affordable.  Here are the comments from Doug Seven, our licensing guru in marketing, on Soma's beta 2 announcement post.

    Team Foundation Server 2010 will be included in the MSDN subscription that comes with Visual Studio 2010 Professional, Premium, Ultimate, and Test Elements. This copy of Team Foundation Server is licensed for unlimited development and test use (as is all MSDN software) and licensed for one production deployment. These MSDN subscriptions also include one CAL.

    Team Foundation Server has three installation choices - Basic, Advanced and Custom.  You will be able to install this either on your client machine (very similar to client side SCM such as VSS) or on a server machine just like TFS 2008.

    Team Foundation Server will also be available in retail for around $500 USD and will include a license term allowing up to five (5) named users without CALs to use Team Foundation Server. To grow to more than five users, you will need to have CALs for additional users beyond five users. This enables small teams of five or fewer to get up and running on Team Foundation Server for as little as $500 USD.

    Of course having Visual Studio 2010 with MSDN means you can get Team Foundation Server up and running at no additional cost.

    You can also hear more in an interview with Doug Seven conducted by three MVPs: The Ultimate Announcement Show.

    I'm not a licensing expert, so I can't answer detailed questions about licensing.  I did want to make sure everyone sees this.  It's a really exciting change.

    [UPDATE 10/20/09]  I wanted to add a clarification from Doug around the CALs and SQL.  There is a licensing whitepaper in the works that should be out soon.

    Retail TFS does not come with 5-CALs. It has a EULA exception allowing up to 5 users without CALs. The primary difference is that CALs can be used to access multiple TFS instances. A EULA exception cannot. In other words, buying two TFS retail licenses does NOT give me rights for 10-users on one instance of TFS. It gives me rights to two instances with 5-users each. To add more than 5 users, you must have CALs for all additional users.

    TFS also still includes a SQL Server license for use with TFS.  In other words, you can't use the SQL license included with TFS to do anything other than to support TFS.

  • Buck Hodges

    How to handle "The path X is already mapped in workspace Y"

    • 22 Comments

    This has come up before on the forums, but I don't think I've ever posted about it here.  Today I saw a reference to the TFS Workspace Gotcha! post in the Team System Rocks VSTS Links roundup.  There's a command to deal with the problem, but it's not obvious.

    Here's the post.

    I have been working with a team that has recently migrated a TFS source project from a trial TFS to a full production server (different server).  They disconnected their solution from source control (which removes all the SCC stuff from the .sln files) and then tried to add it to the new TFS.

    However, they were getting weird errors.

    I suggested that they might not have their workspace mapped correctly to the new TFS project.

    When they tried to map the workspace, they got the following error:

    The Path <local path> is already mapped in workspace <machine name [old tfs server]>

    Hmm, I thought we removed all the source stuff?

    Turns out that the workspace mappings are stored in a file called VersionControl.config, under the user's local settings/application data directory.

    I could find no way (other than manually editing the aforementioned file) to remove the workspace mapping from that local folder to the other TFS server (which is no longer in use).

    Anyway, once that was done, all was good in the world.

    Thanks go out to Kevin Jones for his excellent post on Workspaces in TFS.

    While deleting versioncontrol.config will clear the cache, we've actually got a way to do it from the command line.  The command "tf workspaces /remove:*" clears out all of the cached workspaces (it only affects the cache file).  You can also specify "/s:http://oldserver:8080" to clear out only the workspaces that were on the old server. The MSDN documentation for the workspaces command is at http://msdn2.microsoft.com/en-us/library/54dkh0y3.aspx.

    The reason that he hit this is due to switching servers. Every server has a unique identifier, which is a GUID. Each local path can only be mapped in a single workspace, and the versioncontrol.config cache file is the file that the client uses to determine what server to use when given a local path (e.g., tf edit c:\projects\BigApp\foo.cs).

    He originally had a workspace on the old server that used the local path he wanted to use with the new server. Let's say that's c:\projects. When he tried to create the new workspace on the new server (GUID2) that he also wanted to map to c:\projects, the client saw that the old server (GUID1) was already using that local path. Since the IDs for the servers did not match, the client complained that c:\projects is already mapped to the old workspace on the old server.

    The client uses the server's ID as the real name of a server.  The reason is that the name could change, either because an admin changed the server's name or because a user needs to access the server through a different name depending on the connection (intranet vs. internet).  The only "name" guaranteed not to change is the ID.  So, when presented with a different network name, the client requests the ID from the server and compares it to the IDs listed in the versioncontrol.config cache file.  If the ID matches one of them, the client simply updates the existing entry to have the new name, so that subsequent uses of the new name do not incur an extra request to check the ID.  If the ID does not match any of them, then the server is different than any of the servers in the cache file.
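    The reconciliation logic just described can be modeled roughly like this (ServerCacheSketch is a hypothetical illustration, not the actual client code):

```csharp
using System;
using System.Collections.Generic;

// Rough model of the versioncontrol.config lookup described above:
// servers are keyed by their GUID, and the network name is just a
// mutable alias for that ID.
class ServerCacheSketch
{
    private readonly Dictionary<Guid, string> nameById =
        new Dictionary<Guid, string>();

    // Called with the GUID a server reported for a given network name.
    // Returns true if this is a server we already knew (possibly under
    // a different name), false if it is genuinely new.
    public bool Reconcile(Guid serverId, string networkName)
    {
        if (nameById.ContainsKey(serverId))
        {
            // Same server, possibly renamed: update the alias so later
            // lookups by the new name skip the extra ID request.
            nameById[serverId] = networkName;
            return true;
        }
        nameById.Add(serverId, networkName);
        return false;
    }
}
```

    In this model, the "already mapped" error corresponds to two different GUIDs both claiming the same local path.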

    The error message looks like it's showing the machine, but it's actually showing the workspace name.  The reason for the confusion there is that the version control integration in VS creates a default workspace that has the same name as the machine.

    The problem will not occur if you upgrade the same server (i.e., you don't create a new server), as an upgraded server still has the same ID.

    Though /remove isn't mentioned (part 2 does mention the error message at the end), you can check out Mickey Gousset's workspace articles for more information on workspaces and managing them.


  • Buck Hodges

    More on branching and merging vs. sharing and pinning

    • 12 Comments

    Branching and merging in TFS provide a more robust way to accomplish what sharing and pinning are often used for in VSS.  In TFS, you would branch a directory (source), using the "branch source target" command, to the desired location (target).  Then when there are changes in the source that you need in the target, you would use the "merge source target" command to propagate the changes.  The merge command remembers which changes have already been brought over to the target, so it brings the target up to date with the source by merging only the changes that have not yet been merged.

    The merge command will pend edits, branches for files added, deletes, etc. in the target (in TFS, commands pend changes and then the checkin command submits all changes in a single atomic operation).  After running the merge command, you would then build the software and test it.  Additional changes may need to be made in the target, such as when you change a function or method signature.  When everything is ready, you would check in the merged changes, along with any other changes you needed to make, in a single atomic operation.

    By doing this, you are able to build and verify each branch before checking in the changes it merges in from the source.

    There is no penalty in TFS with respect to having multiple branches of the same file.  In the SQL database, they all reference the same content.  Only when a branched file changes does new content get added.

    With respect to merging binary files, there is nothing to do except run merge followed by checkin if the binary files in the target do not change.  If binary files in the target have changed, the merge command will produce conflicts for those files, and you would have to pick which to keep (source or target -- there is no 3-way content merge option, of course) when resolving the conflict.

    If you want to have every project always use the same file and immediately see new versions as soon as they are checked in, putting the file in a common location in the TFS repository, such as $/public/bin/dlls, is the way to go.  That does mean that everything that references it should use relative paths.  And since there is no isolation, everything that depends on the file must be checked prior to checkin, or you run the risk of silently breaking other projects (i.e., it requires manual coordination).

  • Buck Hodges

    Team System Web Access 2008 SP1 CTP and Work Item Web Access 2008 CTP are now available

    • 27 Comments

    Hakan has announced the availability of the new TSWA community technology preview (CTP) in his post, What's New in TSWA 2008 SP1.  Personally, I would say this release is beta quality or better, so don't let the CTP designation scare you too much.

    Also released is the first CTP release of what we are calling Work Item Web Access (WIWA).  You may recall that we published a spec for it recently, referring to it as a "bug submission portal."  WIWA provides you with the ability to have folks create work items and view work items they have created without needing a client access license (CAL) for 2008.  This was a new condition that was added to the TFS 2008 license agreement.  Hakan has more details in his post on WIWA.

    Both the CTP of TSWA and the CTP of WIWA have the same requirements as the previous release of TSWA 2008 (e.g., you must have Team Explorer 2008 installed as a prerequisite).

    This release of TSWA has some really great new features.

    • Single instance with multiple languages
    • Support for specifying field values in the URL for creating new work items (works in both TSWA and WIWA)
    • Share ad-hoc work item queries
    • Shelveset viewer
    • Improved search support

    I want to call out two features in particular that I really like.

    Support for specifying field values in the URL for creating new work items (works in both TSWA and WIWA)

    How often have you wanted users or testers to file bugs and needed to have them fill in certain fields with particular values so that the work item shows up in the correct area?  We now support providing field values in the new work item URL.  Here's the example that Hakan provided.

    http://<server>/wi.aspx?pname=MyProject&wit=Bug&[Title]=Bug Bash&[AssignedTo]=Hakan Eskici&[Iteration Path]=MyProject\Iteration2&[FoundIn]=9.0.30304

    This will open a new work item editor window with the following initial values:

    • Team Project = MyProject
    • Work Item Type = Bug
    • Title = Bug Bash
    • Assigned To = Hakan Eskici
    • Iteration Path = MyProject\Iteration2
    • Found in Build = 9.0.30304

    Now you can start sending your users and testers a link with all of this already filled in!
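    If you're generating these links programmatically, it's worth percent-encoding the field values (browsers will often do this for you, as in the raw example above). This little builder is a convenience sketch; WorkItemLinkBuilder is a hypothetical helper, not part of TSWA:

```csharp
using System;
using System.Collections.Generic;

static class WorkItemLinkBuilder
{
    // Builds a TSWA "new work item" URL like the example above.
    // Field names go in square brackets; values are URL-encoded.
    public static string Build(string server, string project, string type,
                               IDictionary<string, string> fields)
    {
        string url = string.Format("http://{0}/wi.aspx?pname={1}&wit={2}",
            server, Uri.EscapeDataString(project), Uri.EscapeDataString(type));
        foreach (KeyValuePair<string, string> f in fields)
        {
            url += string.Format("&[{0}]={1}",
                f.Key, Uri.EscapeDataString(f.Value));
        }
        return url;
    }
}
```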

    Improved search support

    Have you ever wanted to search for bugs assigned to someone in particular or in a particular area without writing a query?  In the past, you could only search the Title and Description fields in a work item, which I described here.  Now you can enter the following into the search box in TSWA to find any bug assigned to me that also has the word "exception" in the Title or Description.

    exception a="Buck Hodges"

    The core fields have shortcuts.  Any field can be used by specifying the reference name for the field.  Here's the equivalent without using the shortcut.

    exception System.AssignedTo="Buck Hodges"

    Here are the shortcuts for the core fields.

    • A: Assigned To
    • C: Created By
    • S: State
    • T: Work Item Type

    You can use TFS macros, such as @me, in search.  For example, find all work items containing "watson" in the Title or Description that are assigned to me that are in the Resolved state and are work items of type Bug.

    watson a=@me s=Resolved t=Bug
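    To give a feel for how the shortcuts relate to field reference names, here is a tiny illustrative expander (not TSWA's actual parser; SearchShortcuts is hypothetical):

```csharp
using System;
using System.Collections.Generic;

static class SearchShortcuts
{
    // The four core-field shortcuts listed above, mapped to their
    // work item field reference names.
    static readonly Dictionary<string, string> Map =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        { "a", "System.AssignedTo" },
        { "c", "System.CreatedBy" },
        { "s", "System.State" },
        { "t", "System.WorkItemType" },
    };

    // Expands "a=@me" to "System.AssignedTo=@me"; leaves full reference
    // names and plain search words untouched.
    public static string Expand(string clause)
    {
        int i = clause.IndexOfAny(new[] { '=', ':' });
        if (i <= 0) return clause;
        string field = clause.Substring(0, i);
        string expanded;
        return Map.TryGetValue(field, out expanded)
            ? expanded + clause.Substring(i)
            : clause;
    }
}
```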

    Now, if you really want to do something cool, there are the "contains" and "not" operations.  The "=" operator matches exact phrases, whereas the ":" operator is used for "contains" clauses.  The following search looks for bugs assigned to Active (i.e., not assigned to any particular person yet) where the word "repro" is contained in the History field.

    a=Active History:repro

    This example illustrates the difference between the two operators.  The first example finds all work items where the Title is exactly "Bug Bash" with no other words or characters in it.  The second example, which uses the contains operator (colon) rather than the exact match operator (equals), finds all bugs where the Title contains the phrase "Bug Bash" along with any other words or characters.

    • Title="Bug Bash"
    • Title:"Bug Bash"

    Personally, I find myself almost always using the contains operator.

    Finally, you need to be able to exclude certain things from your search.  For that, there is the not operator, which is represented by the hyphen ("-").  The following example finds all work items with "watson" in the Title or Description fields that are not assigned to me and that are not closed.

    watson -a=@me -s=closed

    The not operator only works with field references, so you can’t use the following to find all work items containing "watson" but not containing "repro" in the Title and Description fields.

    watson -repro

    However, you can accomplish this by specifying the Title field explicitly with the not operator.

    watson -Title:repro

    Please send us your feedback on both the new features and Work Item Web Access!

  • Buck Hodges

    Team Foundation Version Control client API example (RTM version)

    • 24 Comments

    [Update 3/10/2012] If you are using TFS 2010 or newer, there is an updated version control client API example.

    [Update 6/13/06] While the documentation is not what it needs to be, you can find reference-style documentation on a significant amount of the API in the VS SDK (current release is April): http://blogs.msdn.com/buckh/archive/2005/12/09/502179.aspx.

    I've updated this sample a few times before.  This is a really simple example that uses the version control API.  It shows how to create a workspace, pend changes, check in those changes, and hook up some important event listeners.  This sample doesn't do anything useful, but it should get you going.

    You have to supply a Team Project as an argument.  Note that it deletes everything under the specified Team Project, so don't use this on a Team Project or server you care about.

    The only real difference in this version is that it uses the TeamFoundationServer constructor (in beta 3, you were forced to use the factory class).

    You'll need to reference the following dlls to compile this example.

    System.dll
    Microsoft.TeamFoundation.VersionControl.Client.dll
    Microsoft.TeamFoundation.Client.dll

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Text;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    namespace BasicSccExample
    {
        class Example
        {
            static void Main(string[] args)
            {
                // Verify that we have the arguments we require.
                if (args.Length < 2)
                {
                    String appName = Path.GetFileName(Process.GetCurrentProcess().MainModule.FileName);
                    Console.Error.WriteLine("Usage: {0} tfsServerName tfsTeamProjectPath", appName);
                    Console.Error.WriteLine("Example: {0} http://tfsserver:8080 $/MyProject", appName);
                    Environment.Exit(1);
                }

                // Get a reference to our Team Foundation Server.
                String tfsName = args[0];
                TeamFoundationServer tfs = new TeamFoundationServer(tfsName);

                // Get a reference to Version Control.
                VersionControlServer versionControl = (VersionControlServer) tfs.GetService(typeof(VersionControlServer));

                // Listen for the Source Control events.
                versionControl.NonFatalError += Example.OnNonFatalError;
                versionControl.Getting += Example.OnGetting;
                versionControl.BeforeCheckinPendingChange += Example.OnBeforeCheckinPendingChange;
                versionControl.NewPendingChange += Example.OnNewPendingChange;

                // Create a workspace.
                Workspace workspace = versionControl.CreateWorkspace("BasicSccExample", versionControl.AuthenticatedUser);

                try
                {
                    // Create a mapping using the Team Project supplied on the command line.
                    workspace.Map(args[1], @"c:\temp\BasicSccExample");

                    // Get the files from the repository.
                    workspace.Get();

                    // Create a file.
                    String topDir = Path.Combine(workspace.Folders[0].LocalItem, "sub");
                    Directory.CreateDirectory(topDir);
                    String fileName = Path.Combine(topDir, "basic.cs");
                    using (StreamWriter sw = new StreamWriter(fileName))
                    {
                        sw.WriteLine("revision 1 of basic.cs");
                    }

                    // Now add everything.
                    workspace.PendAdd(topDir, true);

                    // Show our pending changes.
                    PendingChange[] pendingChanges = workspace.GetPendingChanges();
                    Console.WriteLine("Your current pending changes:");
                    foreach (PendingChange pendingChange in pendingChanges)
                    {
                        Console.WriteLine("  path: " + pendingChange.LocalItem +
                                          ", change: " + PendingChange.GetLocalizedStringForChangeType(pendingChange.ChangeType));
                    }

                    // Checkin the items we added.
                    int changesetNumber = workspace.CheckIn(pendingChanges, "Sample changes");
                    Console.WriteLine("Checked in changeset " + changesetNumber);

                    // Checkout and modify the file.
                    workspace.PendEdit(fileName);
                    using (StreamWriter sw = new StreamWriter(fileName))
                    {
                        sw.WriteLine("revision 2 of basic.cs");
                    }

                    // Get the pending change and check in the new revision.
                    pendingChanges = workspace.GetPendingChanges();
                    changesetNumber = workspace.CheckIn(pendingChanges, "Modified basic.cs");
                    Console.WriteLine("Checked in changeset " + changesetNumber);
                }
                finally
                {
                    // Delete all of the items under the test project.
                    workspace.PendDelete(args[1], RecursionType.Full);
                    PendingChange[] pendingChanges = workspace.GetPendingChanges();
                    if (pendingChanges.Length > 0)
                    {
                        workspace.CheckIn(pendingChanges, "Clean up!");
                    }

                    // Delete the workspace.
                    workspace.Delete();
                }
            }

            internal static void OnNonFatalError(Object sender, ExceptionEventArgs e)
            {
                if (e.Exception != null)
                {
                    Console.Error.WriteLine("Non-fatal exception: " + e.Exception.Message);
                }
                else
                {
                    Console.Error.WriteLine("Non-fatal failure: " + e.Failure.Message);
                }
            }

            internal static void OnGetting(Object sender, GettingEventArgs e)
            {
                Console.WriteLine("Getting: " + e.TargetLocalItem + ", status: " + e.Status);
            }

            internal static void OnBeforeCheckinPendingChange(Object sender, ProcessingChangeEventArgs e)
            {
                Console.WriteLine("Checking in " + e.PendingChange.LocalItem);
            }

            internal static void OnNewPendingChange(Object sender, PendingChangeEventArgs e)
            {
                Console.WriteLine("Pending " + PendingChange.GetLocalizedStringForChangeType(e.PendingChange.ChangeType) +
                                  " on " + e.PendingChange.LocalItem);
            }
        }
    }

  • Buck Hodges

    How to run tests in a build without test metadata files and test lists (.vsmdi files)

    • 46 Comments

    [UPDATE 6/16/2010]  The VSTS 2008 release added support for test containers (/testcontainer) in the product, and the 2010 release added support for test categories.  This post now only applies to TFS 2005.

    Since the beginning, running tests in Team Build (or MSBuild in general) has meant having to use .vsmdi files to specify the tests to run.  Tons of people have complained about it: it's a burden to create and edit the files, either VSTS for Testers or the full suite is required in the 2005 release, and merging changes to the file is painful when multiple developers are updating it.  While mstest.exe has a /testcontainer option for simply specifying a DLL or load/web test file, the test tools task used with MSBuild does not expose an equivalent property.

    Attached to this post is a zip archive containing the files needed to run tests without metadata.  There are three files.

    1. TestToolsTask.doc contains the instructions for installing the new task DLL and modifying the TfsBuild.proj files to use it.
    2. Microsoft.TeamFoundation.Build.targets is a replacement for the v1 file by the same name that resides on the build machine.
    3. Microsoft.TeamFoundation.PowerToy.QualityTools.dll contains the new TestToolsTask class that extends the TestToolsTask class that shipped in v1 and exposes a TestContainers property that is the equivalent of mstest.exe's /testcontainer option.

    After you read the instructions (please, read the instructions), you'll be able to run all of your unit tests by simply specifying the DLLs or even specifying a file name pattern in TfsBuild.proj.  Here are a couple of examples.  The TestToolsTask.doc file explains how they work, including what %2a means.

    <TestContainerInOutput Include="HelloWorldTest.dll" />
    <TestContainerInOutput Include="%2a%2a\%2aTest.dll" />
    <TestContainer Include="$(SolutionRoot)\TestProject\WebTest1.webtest" />
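    The %2a sequences above are the hexadecimal escape MSBuild uses for the reserved wildcard character '*' (the TestToolsTask.doc covers this in detail). The escape scheme matches URI percent-encoding, so a quick sketch can show the effective pattern (EscapeDemo is just an illustration):

```csharp
using System;

static class EscapeDemo
{
    // %2a is the %XX hex escape for '*'. MSBuild reserves wildcard
    // characters in item specs, so they are escaped this way; the
    // scheme coincides with URI percent-encoding, which lets
    // Uri.UnescapeDataString reveal the effective pattern.
    public static string Unescape(string itemSpec)
    {
        return Uri.UnescapeDataString(itemSpec);
    }

    static void Main()
    {
        // "%2a%2a\%2aTest.dll" means: any *Test.dll in any subdirectory.
        Console.WriteLine(Unescape(@"%2a%2a\%2aTest.dll"));
    }
}
```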

    This new task will be included in future releases of the Team Foundation Power Toys (soon to be called Power Tools).  The TestToolsTask in the shipping product will support test containers and Team Build will support using it in the next release of the product.

    I started with code and documentation that someone else wrote and finished it off.  Thanks to Clark Sell for digging up an old email thread about the original work and Tom Marsh, development lead in VSTS Test, for pointing me to the code and doc.  Thanks to Aaron Hallberg and Patrick Carnahan, Team Build developers, for their help.  Thanks to Brian Harry for letting me post it.

    Please post your feedback in the comments to this post.  We'd like to know if this hits the mark, or if there's something else we need to support.

    [UPDATE 4/26/2007] New features: Test categories and test names

    Pierre Greborio, a developer over in MSTV, has contributed a great new feature: test categories.  Those of you who have used NUnit are probably familiar with the Category attribute.  Test categories allow you to execute specific groups of unit tests.  Unlike the test container feature, the test category feature will not be in Orcas.

    While the details are discussed in the TestToolsTask.doc included in the zip file attached to this blog post, here's the quick version.

    1. Add the Category attribute to your unit test method.
    2. Specify the category to run in your tfsbuild.proj file.

    The other feature that's new in this release is the support for test names.  This is equivalent to the mstest.exe /test command line switch.  The name that's specified is implicitly a wildcard match.  Specifying "Blue" as the test name, for example, will execute every test method that has Blue in the name, including DarkBlue and BlueLight.

    Pierre did his testing on Vista.  Because the dll is not signed, he ran into some trust issues.  If you hit a similar problem, he recommends this forum post for how to get it to work.

    [UPDATE 11/9/2006] Bug fix

    I've updated the attachment with a new zip archive file.  Unfortunately, the task I posted originally didn't result in builds being marked as Failed when the tests had errors.  I missed that in my testing.  The reason for the problem is that the v1 Team Build logger looks for the task name, TestToolsTask, and the power toy task was originally called TestToolsTaskEx.  With this new release, the task has the same name as the original v1 task, so that builds will be marked as failed when the tests fail.

    If you downloaded the original release, you simply need to copy the Microsoft.TeamFoundation.Build.targets and Microsoft.TeamFoundation.PowerToys.Tasks.QualityTools.dll files from the zip to get the fix (see the Word doc for the paths).

    Thanks to Thomas for pointing out this bug!


  • Buck Hodges

    Visual Studio 2012 features enabled by using a TFS 2012 server

    • 14 Comments

While many of the features in Visual Studio 2012 are available when connected to a TFS 2008 or 2010 server, some features are only available when using a TFS 2012 server.  Also, the web experience in TFS 2012 has been rebuilt entirely from the ground up, and the result is a fast, fluid experience that includes new capabilities tailored to Scrum.

    Once you’ve upgraded your server to TFS 2012, installed your first TFS server, or started using the Team Foundation Service Preview, here are some of the features you are now able to use.  The goal here isn’t to be exhaustive but to get you started.

    Version Control

    • Async checkout – We have added a new TFS 2012 feature so that VS 2012 will do checkouts in the background for server workspaces.  That eliminates the pause when you start typing and VS checks out the file.  Turning it on turns off checkout locks, but you can still use checkin locks.  You can find out how to turn it on here.
    • Merge on Unshelve – Shelvesets can now be unshelved into a workspace even if there are local changes on files in the shelveset.  Conflicts will be created for any items modified both locally and in the shelveset, and you will resolve them as you would any other conflict.
    • Local workspaces – Local workspaces allow many operations to be done offline (add, edit, rename, delete, undo, diff) and are recommended only for workspaces with fewer than 50,000 files.  Local workspaces are now the default with TFS 2012, but you can control that if you want server workspaces to be the default (setting is in the dialog shown here).

    Alerts editor – Replacing the power tool, there’s now an even better experience built into the web UI for managing your email notifications.  You will be able to see and manage all of your alerts in one place.  To get to it, go to a project, click on your name in the upper right, and choose Manage Alerts.  Note that you can only get to it if your administrator has configured your server to send email.

    Retry build – If your build fails for reasons unrelated to your changes, you can now right-click and retry it.

    My Work – This is a new feature of Team Explorer that allows you to suspend and resume work quickly and easily.  This feature is only available in Premium or Ultimate.

    Code Review – You will be able to use the new code review experience.  You can start a code review from the Pending Changes page in Team Explorer or from the History window by right clicking on a changeset.  Your list of code reviews is shown in the My Work window.  This feature also requires Premium or Ultimate.

    Agile/Scrum – We’ve added a first-class experience for Scrum in TFS 2012 in our web UI.  To use these, teams will need to be created.  You can learn more here.

    • Team – We now have teams in TFS!
    • Task Board
    • Product Backlog and Sprint Planning – these require Premium or Ultimate (see this post about enabling via the licensing feature)
    • Team Home Page with a burn down chart and customizable tiles based on team favorites.  To add tiles to a team’s home page, create team favorites in work item tracking, build and version control.

    Feedback – You’ll be able to use this feature to request feedback on your work, and users will be able to use the Microsoft Feedback Client (including accessing it from Microsoft Test Manager).

    Storyboarding – This is available in VS Premium or Ultimate and in Test Pro, and TFS 2012 adds a Storyboard tab to work items such as User Story.  You can learn more here and find additional shapes here.

    You can also use the new unit testing features to run tests via NUnit, xUnit, QUnit, and more in your TFS build (aka team build).

    Microsoft Test Manager – There are a few features that are only available when using MTM 2012 with TFS 2012.

    • Inline Reports/Test Results (Testing Center -> Plan Activity -> Results)


    • Exploratory feature (Testing Center->Test Activity->Do Exploratory Testing, Testing Center->Test Activity->View Exploratory Test Sessions)


    • Support for Standard Environments (Lab Center->Lab Activity->Environments)


    For a more detailed list of the features in ALM for 2012, see the ALM 2012 section or download the product guide.  There’s also a list of what’s new in VS 2012.

    [UPDATE 6/20/2012]  I’ve added more details around MTM requested by Neno and supplied by Ravi.  I’ve also fixed the broken product guide link (thanks, Mayur).

    [UPDATE 6/7/2012]  TFS Express does not include any of the Agile features.  It is really focused on source control, build, and bug tracking.  You can read more about it here.

    Follow me at twitter.com/tfsbuck

  • Buck Hodges

    How to diff Word documents

    • 11 Comments

    John Lawrence responded to someone's question about how to diff MS Word documents, and I thought the answer would help a few more folks.

    Team Foundation enables you to specify different diff and merge tools for different file types.

    We don’t have a built-in compare tool for .doc files, but I did a quick search and found quite a few available for download (e.g. http://www.documentlocator.com/download/difftools.htm). I don’t know if we have any specific recommendations.

    I installed DiffDoc (because it was free… http://www.softinterface.com/MD/MD.htm) and configured it to be my diff tool for .doc files:

    1. Navigate to Tools->Options->Source Control->Visual Studio Team Foundation
    2. Click the Configure User Tools button
    3. In the dialog that pops up, click Add
    4. Enter ‘.doc’ (no apostrophes) in the Extension field to indicate you’re adding a comparison tool for .doc files
    5. For the Command field, click “…” and navigate to your diff tool exe (in my case “C:\Program Files\Softinterface, Inc\DiffDoc\DiffDoc.exe”)
    6. Then enter the command line parameters to drive your specific diff tool. %1 expands to be the original file, %2 expands to the new one. In this case, enter "/M%1 /S%2" in the Arguments text box (without the quotation marks).

    That should do it – next time you try to view a diff of .doc files, it will launch this tool and you should be able to compare them.

  • Buck Hodges

    Timeouts on the HttpWebRequest (and thus SOAP proxies)

    • 4 Comments

    Last summer I wrote several posts on using HttpWebRequest and SOAP proxies.  Recently I was working on code that needed to handle requests that could take a really long time for the server to process.  The client would give up and close the network connection long before the request completed.

    The code had tried to control that using the HttpWebRequest.Timeout property.  That helped, but it didn't solve the problem.  Without changing the Timeout property, the client gave up after 100 seconds, which is the default value for that property as stated in the docs.  The Timeout property was set to infinite, but the client gave up after five minutes.  Below is the doc for the Timeout property.

    Return Value

    The number of milliseconds to wait before the request times out. The default is 100000 milliseconds (100 seconds).

    Remarks

    Timeout is the number of milliseconds that a synchronous request made with the GetResponse method waits for a response, and the GetRequestStream method waits for a stream. If the resource is not returned within the time-out period, the request throws a WebException with the Status property set to Timeout.

    The Timeout property has no effect on asynchronous requests made with the BeginGetResponse or BeginGetRequestStream methods.

    Caution In the case of asynchronous requests, it is the responsibility of the client application to implement its own timeout mechanism. Refer to the example in the BeginGetResponse method.

    To specify the amount of time to wait before a read or write operation times out, use the ReadWriteTimeout property.

    At the end of that, notice that they mention another timeout value, the HttpWebRequest.ReadWriteTimeout property.  Here's the doc for that.

    Return Value

    The number of milliseconds before the writing or reading times out. Its default value is 300000 milliseconds (5 minutes).

    Remarks

    The ReadWriteTimeout is used when writing to the stream returned by GetRequestStream or reading from the stream returned by GetResponseStream.

    Specifically, the ReadWriteTimeout property controls the time-out for the Read method, which is used to read the stream returned by the GetResponseStream method, and for the Write method, which is used to write to the stream returned by GetRequestStream method.

    To specify the amount of time to wait for the request to complete, use the Timeout property.

    Well, there's the five minute timeout.  If you want to wait for a long request, which is greater than five minutes for the HttpWebRequest, you need to set both properties.  Using the information in the post from the summer, you can add code to your override of GetWebRequest() in your SOAP proxy to set the timeout to an hour, for example, using request.Timeout = 3600 * 1000 and request.ReadWriteTimeout = 3600 * 1000.
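    For instance, the GetWebRequest() override might look like the sketch below.  The proxy class name is hypothetical; since wsdl.exe generates proxies as partial classes in .NET 2.0, you can extend yours the same way.

    ```csharp
    using System;
    using System.Net;

    // MyServiceProxy is a stand-in for your wsdl.exe-generated SOAP proxy class.
    public partial class MyServiceProxy
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);

            // Raise both the overall request timeout and the stream
            // read/write timeout to one hour for long-running calls.
            request.Timeout = 3600 * 1000;
            request.ReadWriteTimeout = 3600 * 1000;

            return request;
        }
    }
    ```
    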

    You may also notice that the Timeout property doesn't apply to asynchronous calls that use the Begin/End call pattern.  For those, only the ReadWriteTimeout applies.
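    For asynchronous calls, a client-side timeout can be implemented along the lines of the MSDN BeginGetResponse example: register a wait on the async handle and abort the request if the timer fires first.  Here's a minimal sketch of that pattern (one hour, as before).

    ```csharp
    using System;
    using System.Net;
    using System.Threading;

    class AsyncTimeoutSketch
    {
        static void Main(string[] args)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(args[0]);

            // Timeout is ignored for the Begin/End pattern, so we enforce
            // our own deadline below.
            IAsyncResult result = request.BeginGetResponse(OnResponse, request);

            // Abort the request if it hasn't completed within an hour.
            ThreadPool.RegisterWaitForSingleObject(result.AsyncWaitHandle,
                delegate(object state, bool timedOut)
                {
                    if (timedOut)
                    {
                        ((HttpWebRequest)state).Abort();
                    }
                },
                request, 3600 * 1000, true);
        }

        static void OnResponse(IAsyncResult ar)
        {
            HttpWebRequest request = (HttpWebRequest)ar.AsyncState;
            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(ar))
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
            catch (WebException e)
            {
                // Abort() causes EndGetResponse to throw with Status == RequestCanceled.
                Console.Error.WriteLine(e.Status);
            }
        }
    }
    ```

    Note that ReadWriteTimeout still governs the stream reads in the async case, so set it as well if the response itself takes a long time to stream.
    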

  • Buck Hodges

    How to change the computer name and update the owner name for a workspace

    • 11 Comments

    As part of the information about a workspace, the version control server records the name of the computer that hosts a workspace and the user that owns the workspace.  If you need to change your computer name or your user name, how do you tell the server to update the workspace information?

    The command line, tf.exe, provides two options, /updateComputerName and /updateUserName, on the workspaces command to address this issue.  Note that in version 1 you must use the command line for this.  Also, you may notice that the documentation on MSDN shows that these options do not accept values, but that's a documentation error that will be corrected in the near future.

    To update the computer name for a workspace, you'll need to run the following command.

    tf workspaces /updateComputerName:OldComputerName /s:http://Tfs_server:8080

    OldComputerName should be replaced with the name your computer had previously (more precisely, it should be what the server currently has recorded).  Tfs_server should be replaced with the name of your server.

    When you execute that command, tf.exe removes the cached workspace entries that use the old computer name, calls the server to tell it the current computer name, and gets the current list of workspaces owned by you on the current computer.

    Similarly, you'll need to run the following command if your user name changes (for example, from corpdomain\eharris to corpdomain\esmith).

    tf workspaces /updateUserName:OldUserName /s:http://Tfs_server:8080

    OldUserName should be replaced with your user name prior to changing it.

    As with updating the computer name, tf.exe uses the old user name to clear out the workspace entries where the owner is the old user name, tells the server to update your user name, and gets the current list of workspaces owned by your current user name.

    The server actually stores the Windows SID (security identifier) for your account, so the update call to the server simply tells the server to update its cache with the current user name for that SID.  That also means that if you get a new user ID rather than just changing the name, you won't be able to update the workspace ownership this way.  In that case, you'll have to create a new workspace.

    You can find the complete command line reference documentation at http://msdn2.microsoft.com/en-us/library/cc31bk2e(vs.80).aspx.

    To use /updateComputerName and /updateUserName, you must use the RC or newer release of TFS.

  • Buck Hodges

    Data tier load with Team Foundation beta

    • 20 Comments

    Did you install your beta data tier in Virtual PC or Virtual Server and see a high CPU load while it's running?  Even on real hardware, you may notice some load when nothing would appear to be going on.  Someone mentioned on an internal mailing list that the data tier CPU load for a combined app and data tier installed in Virtual Server was quite high, averaging about 50-70% with most of that time being used by SQL Analysis Services (msmdsrv.exe).

    Well, here's the answer (I didn't write the question or the answer, but I hope people find it useful).

    The warehouse was designed to run processing every hour. For demo purposes the period was changed to 2 minutes in beta 2. On a weak system or a virtual machine you will see this behavior.

    Change the run interval on the app tier as follows.

    1. Stop TFSServerScheduler using 'net stop TFSServerScheduler'.
    2. Go to http://localhost:8080/Warehouse/warehousecontroller.asmx using a browser on the app tier.  Click on ChangeSetting and enter the following values and then press the 'Invoke' button (3600 seconds = run once per hour).
      1. settingID: RunIntervalSeconds
      2. newValue: 3600
    3. Restart TFSServerScheduler using 'net start TFSServerScheduler'.

    Note: It is important to restart TFSServerScheduler, as the interval is cached and will not take effect until the next run.

    You can also manually kick off the data warehouse.  Here are the steps to do so:

    1. Go to http://localhost:8080/Warehouse/warehousecontroller.asmx using a browser on the app tier.
    2. Click the ‘Run’ link.
    3. Press the ‘Invoke’ button.

     This will trigger a refresh of the reports.

    [Update]  Thanks to Mike for pointing out that the original instructions were a little rough.  I've updated them.

    [Update 2] Added msmdsrv.exe to the text to (hopefully) make it easier for folks to find the post when they notice that the Yukon April CTP Analysis Services process is consuming a lot of CPU time.

  • Buck Hodges

    CVS compared with Team Foundation Version Control

    • 15 Comments

    In the Team Foundation forum a question was asked regarding CVS compared to Team Foundation Version Control.  Here's what I wrote.

    Of course, Team Foundation is much more than version control since we have integrated work item tracking, reporting, etc.

    I'm biased, of course, and I'm probably leaving stuff out.  I've used CVS in the past, but I wasn't exactly a power user.  Okay, here goes.

    TFS has atomic checkins.  When you check in, it all succeeds or fails.  No changes are committed unless the whole thing succeeds.  CVS does not have this.

    TFS has shelving, and CVS does not.  Shelving is a great way to juggle sets of changes as well as backup changes without checking them in.

    TFS has a SQL database for the server, and CVS does not.  This means that administering it uses the same tools familiar to database admins (in beta 2, there is ADAM data not in SQL, but ADAM has since been removed -- ignore this comment if it doesn't mean anything to you).

    CVS and TFS both support parallel development.  Everyone checks out and modifies files and resolves conflicts before checking in.  There's no need to grab an exclusive lock.

    CVS does a better job with "offline" mode than TFS in version 1.  It's inherent to CVS because you just modify files and then sync up with the repository before checking in.  When you pend an edit in TFS, you must be able to talk to the server to do it.  We'll be doing a lot more for offline support in the next version.

    TFS branches in path space, which means that branch is a lot like copy.  CVS branches are different.  Branches in TFS can have different permissions (main vs. release branches).

    TFS has checkin policies, and CVS does not.

    TFS supports rename/move, but CVS does not.

    The TFS command line, h.exe, has really nice dialogs for commands like checkin, shelve, and resolving conflicts (there's also a /noprompt option to suppress dialogs).  CVS doesn't have that.

    TFS comes with a full GUI, either VS 2005 or the Team Foundation Client (to be renamed Team Explorer).  CVS does not have a GUI itself, though there are several available.

    TFS uses its SQL server for history queries.  CVS doesn't have that support.

    TFS uses SOAP over HTTP (or HTTPS) to communicate with the server.  You don't need to open other ports or tunnel through ssh for remote access as you would for CVS.

    TFS stores files compressed as reverse deltas (each earlier version is stored as a diff against the later one).  CVS can't do that.

    CVS and TFS both have branch and merge capabilities, but the changesets used by TFS make it easier to manage.

  • Buck Hodges

    Why doesn't Team Foundation get the latest version of a file on checkout?

    • 33 Comments

    I've seen this question come up a few times.  Doug Neumann, our PM, wrote a nice explanation in the Team Foundation forum (http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=70231).

    It turns out that this is by design, so let me explain the reasoning behind it.  When you perform a get operation to populate your workspace with a set of files, you are setting yourself up with a consistent snapshot from source control.  Typically, the configuration of source on your system represents a point in time snapshot of files from the repository that are known to work together, and therefore is buildable and testable.

    As a developer working in a workspace, you are isolated from the changes being made by other developers.  You control when you want to accept changes from other developers by performing a get operation as appropriate.  Ideally when you do this, you'll update the entire configuration of source, and not just one or two files.  Why?  Because changes in one file typically depend on corresponding changes to other files, and you need to ensure that you've still got a consistent snapshot of source that is buildable and testable.

    This is why the checkout operation doesn't perform a get latest on the files being checked out.  Updating that one file being checked out would violate the consistent snapshot philosophy and could result in a configuration of source that isn't buildable and testable.  As an alternative, Team Foundation forces users to perform the get latest operation at some point before they checkin their changes.  That's why if you attempt to checkin your changes, and you don't have the latest copy, you'll be prompted with the resolve conflicts dialog.

  • Buck Hodges

    Displaying the labels on a file, including label comments

    • 14 Comments

    Unfortunately, there's not a fast, efficient way to see the list of labels in the system with the full comment without also seeing a list of all of the files included in a label.  You also can't efficiently answer the question, "What labels involve foo.cs?"  While this won't be changed for v1, you can certainly do it using code.  I mentioned on the TFS forum that I'd try to put together a piece of code to do this.  The result is the code below.

    The code to do this is really simple, but I ended up adding more to it than I originally intended.  All that's really necessary here is to call QueryLabels() to get the information we need.

    Let's look at the QueryLabels() call in a little detail, since it is the heart of the app.  Here is the method declaration from the VersionControlServer class.

    public VersionControlLabel[] QueryLabels(String labelName,
                                             String labelScope,
                                             String owner,
                                             bool includeItems,
                                             String filterItem,
                                             VersionSpec versionFilterItem)

    By convention, methods that begin with "Query" in the source control API allow you to pass null to mean "give me everything."  In the code below, I don't want to filter by labelName or owner, so I set those to null to include everything.

    If the user specified a server path for the scope (a scope is always a server path, never a local path), we'll use it, and otherwise we'll use the root ($/).  The scope of a label is, effectively, the part of the tree that owns that label name.  In other words, by specifying the label scope, $/A and $/B can each have a separate label named Foo, but no second label named Foo can be created within either scope.  For this program, setting the scope simply narrows the part of the tree it will include in the output.  For example, running this with a scope of $/A would show only one label called Foo, but running it with $/ as the scope (or omitting the scope) would result in two Foo labels being printed.

    D:\ws1>tree
    Folder PATH listing for volume Dev
    Volume serial number is 0006EE50 185C:793F
    D:.
    ├───A
    └───B

    D:\ws1>tf label Foo@$/testproj/A A
    Created label
    Foo@$/testproj/A

    D:\ws1>tf label Foo@$/testproj/B B
    Created label
    Foo@$/testproj/B

    D:\ws1>d:\LabelHistory\LabelHistory\bin\Debug\LabelHistory.exe
    Foo (10/25/2005 4:00 PM)
    Foo (10/25/2005 4:00 PM)

    The most important parameter here is actually includeItems.  By setting this parameter to false, we'll get the label metadata without getting the list of files and folders that are in the label.  This saves both a ton of bandwidth as well as load on the server for any query involving real-world labels that include many thousands of files.

    The remaining parameters are filterItem and versionFilterItem.  The filterItem parameter allows you to specify a server or local path whereby the query results will only include labels involving that file or folder.  It allows you to answer the question, "What labels have been applied to file foo.cs?"  The versionFilterItem is used to specify what version of the item had the specified path.  It's an unfortunate complexity that's due to the fact that we support rename (e.g., A was called Z at changeset 12, F at changeset 45, and A at changeset 100 and beyond).  Before your eyes glaze over (they haven't already, right?), I just set that parameter to latest.

    Here's an example of using the program with the tree mentioned earlier.  I modified the Foo label on A to have a comment, so it has a later modification time.

    D:\ws1>tf label Foo@$/testproj/A /comment:"This is the first label I created."
    Updated label Foo@$/testproj/A

    D:\ws1>d:\LabelHistory\LabelHistory\bin\Debug\LabelHistory.exe
    Foo (10/25/2005 4:05 PM)
       Comment: This is the first label I created.
    Foo (10/25/2005 4:00 PM)

    Then I added a file under A, called a.txt, and modified the label to include it.  Running the app on A\a.txt, we see that it is only involved in one of the two labels in the system.

    D:\ws1>tf label Foo@$/testproj/A A\a.txt
    Updated label
    Foo@$/testproj/A

    D:\ws1>d:\LabelHistory\LabelHistory\bin\Debug\LabelHistory.exe A\a.txt
    Foo (10/25/2005 4:56 PM)
       Comment: This is the first label I created.

    To build it, you can create a Windows console app in Visual Studio, drop this code into it, and add the following references to the VS project.

    Microsoft.TeamFoundation.Client
    Microsoft.TeamFoundation.Common
    Microsoft.TeamFoundation.VersionControl.Client
    Microsoft.TeamFoundation.VersionControl.Common

    using System;
    using System.IO;
    using Microsoft.TeamFoundation;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;
    using Microsoft.TeamFoundation.VersionControl.Common;
    
    namespace LabelHistory
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Check and get the arguments.
                String path, scope;
                VersionControlServer sourceControl;
                GetPathAndScope(args, out path, out scope, out sourceControl);
    
                // Retrieve and print the label history for the file.
                VersionControlLabel[] labels = null;
                try
                {
                    // The first three arguments here are null because we do not
                    // want to filter by label name, scope, or owner.
                    // Since we don't need the server to send back the items in
                // the label, we get much better performance by omitting
                    // those through setting the fourth parameter to false.
                    labels = sourceControl.QueryLabels(null, scope, null, false, 
                                                       path, VersionSpec.Latest);
                }
                catch (TeamFoundationServerException e)
                {
                    // We couldn't contact the server, the item wasn't found,
                    // or there was some other problem reported by the server,
                    // so we stop here.
                    Console.Error.WriteLine(e.Message);
                    Environment.Exit(1);
                }
    
                if (labels.Length == 0)
                {
                    Console.WriteLine("There are no labels for " + path);
                }
                else
                {
                    foreach (VersionControlLabel label in labels)
                    {
                        // Display the label's name and when it was last modified.
                        Console.WriteLine("{0} ({1})", label.Name,
                                          label.LastModifiedDate.ToString("g"));
    
                        // For labels that actually have comments, display it.
                        if (label.Comment.Length > 0)
                        {
                            Console.WriteLine("   Comment: " + label.Comment);
                        }
                    }
                }
            }
    
            private static void GetPathAndScope(String[] args,
                                                out String path, out String scope,
                                                out VersionControlServer sourceControl)
            {
                // This little app takes either no args or a file path and optionally a scope.
                if (args.Length > 2 || 
                    args.Length == 1 && args[0] == "/?")
                {
                    Console.WriteLine("Usage: labelhist");
                    Console.WriteLine("       labelhist path [label scope]");
                    Console.WriteLine();
                    Console.WriteLine("With no arguments, all label names and comments are displayed.");
                    Console.WriteLine("If a path is specified, only the labels containing that path");
                    Console.WriteLine("are displayed.");
                    Console.WriteLine("If a scope is supplied, only labels at or below that scope will");
                    Console.WriteLine("be displayed.");
                    Console.WriteLine();
                    Console.WriteLine("Examples: labelhist c:\\projects\\secret\\notes.txt");
                    Console.WriteLine("          labelhist $/secret/notes.txt");
                    Console.WriteLine("          labelhist c:\\projects\\secret\\notes.txt $/secret");
                    Environment.Exit(1);
                }
    
                // Figure out the server based on either the argument or the
                // current directory.
                WorkspaceInfo wsInfo = null;
                if (args.Length < 1)
                {
                    path = null;
                }
                else
                {
                    path = args[0];
                    try
                    {
                        if (!VersionControlPath.IsServerItem(path))
                        {
                            wsInfo = Workstation.Current.GetLocalWorkspaceInfo(path);
                        }
                    }
                    catch (Exception e)
                    {
                        // The user provided a bad path argument.
                        Console.Error.WriteLine(e.Message);
                        Environment.Exit(1);
                    }
                }
    
                if (wsInfo == null)
                {
                    wsInfo = Workstation.Current.GetLocalWorkspaceInfo(Environment.CurrentDirectory);
                }
    
                // Stop if we couldn't figure out the server.
                if (wsInfo == null)
                {
                    Console.Error.WriteLine("Unable to determine the server.");
                    Environment.Exit(1);
                }
    
                TeamFoundationServer tfs =
                    TeamFoundationServerFactory.GetServer(wsInfo.ServerName);
                                                          // RTM: wsInfo.ServerUri.AbsoluteUri);
                sourceControl = (VersionControlServer)tfs.GetService(typeof(VersionControlServer));
    
                // Pick up the label scope, if supplied.
                scope = VersionControlPath.RootFolder;
                if (args.Length == 2)
                {
                    // The scope must be a server path, so we convert it here if
                    // the user specified a local path.
                    if (!VersionControlPath.IsServerItem(args[1]))
                    {
                        Workspace workspace = wsInfo.GetWorkspace(tfs);
                        scope = workspace.GetServerItemForLocalItem(args[1]);
                    }
                    else
                    {
                        scope = args[1];
                    }
                }
            }
        }
    }

    [Update 10/26] I added Microsoft.TeamFoundation.Common to the list of assemblies to reference.

    [Update 7/12/06]  Jeff Atwood posted a VS solution containing this code and a binary.  You can find it at the end of http://blogs.vertigosoftware.com/teamsystem/archive/2006/07/07/Listing_all_Labels_attached_to_a_file_or_folder.aspx.

  • Buck Hodges

    Team Foundation Beta 3 has been released!

    • 23 Comments

    Today we signed off on Team Foundation Beta 3!  If you used beta 2, beta 3 is a vast improvement.  Beta 3 should hopefully show up on MSDN in about two days.  You may remember that beta 3 has the go-live license and will be supported for migration to the final release version 1, which means this is the last time you have to start from scratch.

    With beta 3, single-server installation is once again supported!  I know many people didn't install the July CTP because of the lack of a single-server installation.  With each public release, installation has gotten easier and more reliable, and this is the best installation thus far.

    I wrote about what changed between beta 2 and the July CTP.  That's still a good summary.  Between the July CTP and beta 3, we fixed a lot of bugs, further improved performance across the product, improved the handling of authenticating with the server using different credentials (e.g., you're not on the domain), improved installation, and more.

    If you have distributed teams, be sure to try out the source control proxy server.  It's one of the features we have to support distributed development.

    While you are waiting on TFS to show up, you'll want to make sure you already have Visual Studio 2005 Team Suite Release Candidate (build 50727.26) and SQL Server 2005 Standard (or Enterprise) Edition September CTP (build 1314.06, which uses the matching 2.0.50727.26 .NET Framework).

    TFS beta 3 is build 50727.19.  The minor number differs from the VS RC's minor number because TFS beta 3 was built in a different branch.  The major build number stopped changing at 50727 (the July 27, 2005 build) for all of Visual Studio, and only the minor number changes now.

    Here's a list of my recent posts that are particularly relevant to beta 3.

    This one will need to be updated (URLs changed):  TFS Source Control administration web service.

    [Update 9/22/05]  Updated links and SQL info.

  • Buck Hodges

    TFS 2008: Build agent configuration options

    • 15 Comments

    While some of the build agent properties are available in the VS GUI, buried in the tfsbuildservice.exe.config file are a number of options that control key aspects of the build agent and the build.  This file existed in TFS 2005, but it had fewer options.  While you don't have to change anything for the build agent to work in the normal case, there are options here that will help you get more out of the product.  In future releases, these types of options will be exposed in better ways (e.g., GUI).

    Everything described below applies to TFS 2008 Beta 2 through TFS 2008 RTM.  Many of these settings were also in Beta 1, for those of you that are experimenting with that release (if the setting doesn't appear in the tfsbuildservice.exe.config on your build computer, then it was added after Beta 1).  Some of these features were described earlier but with much less detail.

    Any time you make a change to the tfsbuildservice.exe.config file, you must restart the Visual Studio Team Foundation Build service (from the Windows Start menu, choose Run and execute services.msc, select the service and click Restart).

    Building independent projects simultaneously

    The msbuild included with the .NET Framework version 3.5 has the capability of using multiple processes to build projects in your solution that are independent of one another.  In TFS Build, the default is to continue to use only one process for maximum backwards compatibility.  To make use of this new feature,  you'll want to set the MaxProcesses setting to a different number.  One general rule of thumb is to set it to twice the number of processors or cores in your computer.  Finding the optimum value, though, will require experimentation with your own builds since that depends on the I/O and CPU characteristics of your builds.  Aaron wrote about this recently.
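    As a sketch, following the rule of thumb above, a dual-core build machine might start with a value of 4 in the appSettings section of tfsbuildservice.exe.config (the optimum number for your builds may well differ):

    ```xml
    <!-- Sketch: 2 cores x 2 = 4 concurrent MSBuild processes -->
    <add key="MaxProcesses" value="4" />
    ```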

    Running GUI tests as part of your builds

    A Windows service doesn't have access to a desktop, and thus the standard configuration of the build agent cannot run unit tests that involve GUIs.  You can do that, however, if you log onto the build computer as the user account that you want running your builds and simply run tfsbuildservice.exe from a Visual Studio 2008 Command Prompt.  Leave the logon session up, and tfsbuildservice will run indefinitely.  We call this running the build agent "interactively" for lack of a better term.  In the config file below, you will find the InteractivePort setting.  That port number, which defaults to 9192, is used by the interactive build agent.  When you configure the build agent in the Visual Studio GUI, simply change the default port of 9191 in the Build Agent Properties dialog to be 9192 instead.  Now you can run GUI tests as part of your build!

    See How to: Configure an Interactive Port for Team Foundation Build for the steps.

    Requiring an authenticated user connection to the build agent

    In TFS 2005, the communication between the TFS application tier (AT) and the build agent used .NET Remoting.  With the advent of Windows Communication Foundation (WCF) in .NET Framework version 3.0, we were able to change to using SOAP web services like the rest of TFS without requiring that IIS be installed on the build agent.  In TFS 2008, we default to requiring that every connection to the build agent be authenticated via NTLM (the AuthenticationScheme setting).  However, any valid Windows account can still connect.  To further secure your build agent, you can now also specify the service account under which the TFS AT is running as the only user allowed to connect to the build agent.  That is the AuthorizedUser setting.
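    As a sketch, locking the agent down to a single account would look like this in tfsbuildservice.exe.config (the account name here is purely an example; substitute the account your TFS AT actually runs as):

    ```xml
    <!-- Example only: CONTOSO\TFSSERVICE is a hypothetical AT service account -->
    <add key="AuthenticationScheme" value="Ntlm" />
    <add key="AuthorizedUser" value="CONTOSO\TFSSERVICE" />
    ```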

    Using HTTPS, possibly with X.509 certificates

    TFS 2008 supports X.509 certificates.  As part of that work, we added the capability to use HTTPS to secure the web service communication channel between the application tier and the build agent.  To require that HTTPS be used to connect to the build agent, simply set the RequireSecureChannel setting to true.  You will also need to set the checkbox in the Build Agent Properties dialog for the build agent in Visual Studio (Build -> Manage Build Agents, then choose Edit to modify an existing agent or New to create a new one).  Additionally, you can require that the communication to the build agent use X.509 certificates by setting RequireClientCertificate to true.
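    In config-file terms, requiring HTTPS with client certificates comes down to these two settings (remember to mirror the change in the Build Agent Properties dialog as described above):

    ```xml
    <add key="RequireSecureChannel" value="true" />
    <add key="RequireClientCertificate" value="true" />
    ```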

    Extranet support

    We've still got a ways to go to provide true extranet or internet support, but we've made some improvements in this area.  When the TFS AT calls the build agent, it sends the build agent its URL.  That URL comes from the AT's web.config file; its value was set when the TFS AT was installed.  It typically uses the short name of the server.  For example, it's typically http://mytfsserver:8080 rather than http://mytfsserver.somedomain.corp.company.com:8080.  If your build agent needs to contact that server using either the fully qualified domain name (FQDN) or perhaps an entirely different name, such as http://secretserver.extranet.company.com:7070, you can set the ServerAccessUrl setting to that other URL.  The AllowedTeamServer setting must still match the URL sent by the AT, so it would typically hold the short form.
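    Using the example names above (both URLs are illustrative), the pair of settings would look like this:

    ```xml
    <!-- Must match the URL the AT sends (typically the short name) -->
    <add key="AllowedTeamServer" value="http://mytfsserver:8080" />
    <!-- The URL the build agent actually uses to reach the AT -->
    <add key="ServerAccessUrl" value="http://secretserver.extranet.company.com:7070" />
    ```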

    The AllowedTeamServer setting also exists in TFS 2005, but you've likely never seen or even heard of it unless you needed to change which TFS AT the build agent will listen to.  That's because when that key is not set, the URL of the first AT that successfully communicates with the build agent is stored in the registry (HKCU for the account under which the build agent is running).  Its role and behavior have not changed in TFS 2008.

    At the risk of getting lost in minutiae, I want to explain one more related aspect.  Feel free to skip this part if it's not clear.  To determine whether the AT that is connecting to the build agent is the correct (allowed) server, the build agent compares the value in AllowedTeamServer or the HKCU registry (the .config file setting takes precedence) to the server URL sent by the AT.  If the two match, it continues processing the request.  If they do not match, the build agent tries to determine whether it's really the right AT but with a different name.  To do that, it requests the GUID from both the server specified by the URL that was sent with the request and the server specified by its configured server URL.  If the two match, it continues.  If they don't, it rejects the request.  In TFS 2005, the build agent would call the domain name server (DNS) to get the IP addresses to do the comparison.  The change to using each server's GUID is an incremental improvement that will help folks who've had problems with this in the past.

    Oh, and don't forget that if your build agent is separated from your TFS AT by a low-bandwidth connection, you can set the build agent to use the version control proxy.

    Possible support for future versions of msbuild

    While we don't know what the future of msbuild holds, we do know that the TFS 2005 build agent cannot run the new msbuild in .NET Framework 3.5 because the code gets the CLR runtime directory to determine the location of msbuild.exe.  Those who've paid close attention to the difference between the CLR and the framework will know that the CLR version in Visual Studio 2008 remains 2.0, while the framework is obviously 3.5.  The result is that the TFS 2005 build agent will only execute the 2.0 version of msbuild.  To attempt to be a bit more future proof (no guarantees, of course), we've included a setting that allows you to specify the path to msbuild.exe.  This means that if the .NET Framework 4.0, for example, contains a new version of msbuild (3.0, by the way, did not), you would be able to specify the path in the MSBuildPath setting to have the TFS 2008 build agent use it.  There's no way to know for sure at this point whether it would work, of course, but at least you'll have the option.
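    As a purely hypothetical sketch (no such framework version exists as of this writing, and the path is invented), pointing the agent at a future msbuild would look like:

    ```xml
    <!-- Hypothetical: directory of a future .NET Framework's MSBuild.exe -->
    <add key="MSBuildPath" value="C:\Windows\Microsoft.NET\Framework\v4.0\" />
    ```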

    Separate log files per project

    Have you ever wanted to have a separate build log file per project?  Well, we know some of you do, because it's come up on the MSDN Build Automation forum a number of times.  If you set LogFilePerProject to true, you will get that.

    Better support for source files with really long paths

    A number of users ran into problems with the TFS 2005 build agent where they had trouble building their applications because the paths to the source files were really long.  The TFS 2005 build agent would create a build directory path that consisted of the build directory set via the GUI dialog or the tfsbuild.proj file, the team project name, the build type name, and then the word "Sources."  The result was that users were left with substantially less than the Windows path limit of 260 characters to work with (yes, Windows supports 32K-character paths, but .NET doesn't).

    In TFS 2008, you have options to deal with this.  Aaron provides the details here.  The short version is that you can set the root of the build directory in the Build Agent Properties dialog.  If it's still not short enough, you can use the SourcesSubdirectory setting to shorten the word "Sources" to "s" if you want.  So you can end up with something as short as C:\b\s, if you need.
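    For example, with the build directory root set to C:\b in the Build Agent Properties dialog, this setting yields sources under C:\b\s:

    ```xml
    <add key="SourcesSubdirectory" value="s" />
    ```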

    Conclusion

    We've provided a number of new options for the build agent in TFS 2008.  While it's far from ideal that you need to edit a .config file and restart the Windows service for the changes to take effect, we've at least gotten support in there to address some of these issues.  In future releases, we'll be working to make these and other options easier to work with.

    I've copied the contents of %ProgramFiles%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\tfsbuildservice.exe.config below for reference (minus the tracing information at the top of the file).  As you can see, we tried to document the settings to make it easier to know how to change them.  Hopefully, there's enough here to get you going.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration> 
    <appSettings>
    <!-- port
    This is the port that is used by the Team Foundation Server
    Application Tier to connect to agents hosted by this executable
    when it is run as a windows service. This value has to be the 
    same as the value specified for the agent(s) in the Application 
    Tier.
    -->
    <add key="port" value="9191" /> 
    <!-- InteractivePort
    This is the port that is used by the Team Foundation Server
    Application Tier to connect to agents hosted by this executable
    when it is run as a command-line application. This value has to
    be the same as the value specified for the agent(s) in the 
    Application Tier.
    -->
    <add key="InteractivePort" value="9192" /> 
    <!-- AuthenticationScheme
    This string controls what type of authentication will be accepted for
    incoming connections. The following values are supported:
    Anonymous, Digest, Negotiate, Ntlm
    When specifying Negotiate, the Build Service Account must satisfy one
    of the following conditions in order for Kerberos authentication to
    work:
    - If on a workgroup, it must be NT AUTHORITY\Local Service
    - If on a domain, it must be NT AUTHORITY\Network Service or the account must have a valid SPN
    The Basic authentication scheme is not supported.
    -->
    <add key="AuthenticationScheme" value="Ntlm" /> 
    <!-- AuthorizedUser
    This key provides a mechanism to restrict all access to the agent service
    to a single account. If this value is set then a transport authentication
    scheme of Basic, Digest, Negotiate, or Ntlm must be used.
    -->
    <add key="AuthorizedUser" value="" />
    <!-- RequireSecureChannel
    This boolean value controls whether or not transport-layer security 
    should be used for the exposed service. Normally, HTTP is used for
    communications which may not be desirable for a machine exposed on
    the internet. Set this value to true to expose the service using
    HTTPS instead. This value has to be the same as the value specified 
    for the agent(s) in the Application Tier.
    -->
    <add key="RequireSecureChannel" value="false" />
    <!-- RequireClientCertificate
    This boolean controls whether or not a client certificate should be
    required when using a secure channel.
    -->
    <add key="RequireClientCertificate" value="false" />
    <!-- AllowedTeamServer
    This is the Team Foundation Server Application Tier that can connect
    to this build machine. This value should be the URL for the AT,
    such as http://myserver:8080.
    This value overrides the setting in HKCU.
    -->
    <add key="AllowedTeamServer" value="" />
    <!-- ServerAccessUrl
    This only needs to be set when the URL required to communicate to
    the Team Foundation Server Application Tier (AT) is different than the
    one specified in AllowedTeamServer.
    The most common case would be when the AT and the build
    agent are separated by the internet. For example, AllowedTeamServer
    may need to be http://myserver:8080, but the build agent may need to use
    http://boundaryserver.corp.company.com:80 to connect to the AT.
    -->
    <add key="ServerAccessUrl" value="" />
    <!-- BuildOnFatPartitions
    As a part of the build process, access controls are set on the build
    directory to secure it against unauthorized access. By default, only
    NTFS partitions are allowed as FAT partitions do not support access
    controls. To override this and allow building on FAT partitions set 
    this value to true.
    -->
    <add key="BuildOnFatPartitions" value="false" />
    <!-- DoNotDownloadBuildType
    Set this flag to true if you want to use the build type definition
    existing on the local machine instead of downloading the definition
    from the server. The local path used will be the same as the location
    where the build type would have been downloaded.
    -->
    <add key="DoNotDownloadBuildType" value="false" />
    <!-- MSBuildPath
    Set this value to the full path to the directory of MSBuild.exe to use
    a location other than the default. This should only be needed if a new
    version of the .NET Framework is installed.
    -->
    <add key="MSBuildPath" value="" />
    <!-- MaxProcesses
    Set this value to the maximum number of processes MSBuild.exe should
    use for builds started by agents hosted by this executable.
    -->
    <add key="MaxProcesses" value="1" />
    <!-- LogFilePerProject
    Set this value to true if you would like Team Build to generate errors
    and warning log files for each project, rather than just for each platform
    configuration combination.
    -->
    <add key="LogFilePerProject" value="false" />
    <!-- SourcesSubdirectory
    Set this value to the desired sources subdirectory for the build agents
    hosted by this executable. 
    -->
    <add key="SourcesSubdirectory" value="Sources" />
    <!-- BinariesSubdirectory
    Set this value to the desired binaries subdirectory for the build agents
    hosted by this executable.
    -->
    <add key="BinariesSubdirectory" value="Binaries" />
    <!-- TestResultsSubdirectory
    Set this value to the desired test results subdirectory for the build agents
    hosted by this executable.
    -->
    <add key="TestResultsSubdirectory" value="TestResults" />
    </appSettings>
    </configuration>

    [UPDATE 9/07/07] I've added a link to the official doc for changing the port.

  • Buck Hodges

    Team Foundation Version Control Command Line Summary document

    • 2 Comments

    Command line summary for tf.exe.  This was put together by Rob Caron.

    You may also want to see the full Team Foundation Version Control Command Line Reference as well.

    This is pre-release documentation and is subject to change in future releases.

    Command

    Usage

    Add

    tf Add itemspec [/lock:(none|checkin|checkout)] [/type:filetype] [/noprompt] [/recursive]

    Branch

    tf Branch olditem newitem [/version:versionspec] [/noget] [/lock] [/noprompt] [/recursive]

    Branches

    tf branches [/s:servername] itemspec

    Changeset

    tf Changeset [/comment:comment|@commentfile] /s:servername [/notes:("NoteFieldName"="NoteFieldValue"|@notefile)] [/noprompt] ([/latest]|changesetnumber)

    Checkin

    tf checkin [/author:authorname] [/comment:("comment"|@commentfile)] [/noprompt] [/notes:("Note Name"="note text"|@notefile)] [/override:reason|@reason] [/recursive] [filespec ...]

    Checkout

    tf checkout [/lock:(none|checkin|checkout)] [/recursive] [/type:encoding] itemspec

    Delete

    tf delete [/lock:(none|checkin|checkout)] [/recursive] itemspec

    Difference

    tf difference itemspec [/version:versionspec] [/type:filetype] [/format:(visual|unix|ss)] [/ignorespace] [/ignoreeol] [/ignorecase] [/recursive] [/options:"options"]

     

    tf difference itemspec itemspec2 [/type:filetype] [/format:(visual|unix|ss)] [/ignorespace] [/ignoreeol] [/ignorecase] [/recursive] [/options:"options"]

     

    tf difference [/shelveset:shelvesetname;[shelvesetowner]] shelveset_itemspec [/server:serverURL] [/type:filetype] [/format:(visual|unix|ss)] [/ignorespace] [/ignoreeol] [/ignorecase] [/recursive] [/options:"options"]

    Dir

    tf dir [/s:servername] itemspec [/version:versionspec] [/recursive] [/folders] [/deleted]

    Get

    tf get itemspec [/version:versionspec] [/all] [/overwrite] [/force] [/preview] [/recursive] [/noprompt]

    Help

    tf help commandname

    History

    tf history [/s:servername] itemspec [/version:versionspec] [/stopafter:number] [/recursive] [/user:username] [/format:(brief|detailed)] [/slotmode]

    Label

    Option Set 1:

    tf label [/s:servername]  labelname@scope [/owner:ownername] itemspec [/version:versionspec] [/comment:("comment"|@commentfile)] [/child:(replace|merge)] [/recursive]

     

    Option Set 2:

    tf label [/s:servername] [/delete]  labelname@scope [/owner:ownername] itemspec [/version:versionspec] [/recursive]

    Labels

    tf labels [/owner:ownername] [/format:(brief|detailed)] [/s:servername] [labelname]

    Lock

    tf lock itemspec /lock:(none|checkout|checkin) [/workspace:workspacename] [/server:serverURL] [/recursive] [/noprompt]

    Merge

    tf merge  [/recursive] [/force] [/candidate] [/discard] [/version:versionspec] [/lock:none|checkin|checkout] [/preview] [/baseless] [/nosummary] source destination

    Merges

    tf merges [/s:servername] [source] destination [/recursive]

    Permission

    tf permission [/allow:(* |perm1[,perm2,]] [/deny:(* |perm1[,perm2,])] [/remove:(* |perm1[,perm2,])] [/inherit:yes|no] [/user:username1[,username2,]] [/recursive] [/group:groupname1[,groupname2,]] [/server:servername] itemspec

    Properties

    tf properties [/recursive] itemspec

    Rename

    tf rename [/lock:(none|checkout|checkin)] olditem newitem

    Resolve

    tf Resolve itemspec [/auto:(AcceptMerge|AcceptTheirs|AcceptYours)] [/preview] [(/overridetype:overridetype | /converttotype:converttype)] [/recursive]

    Shelve

    tf shelve [/move] [/replace] [/comment:(@commentfile|"comment")] [/recursive] shelvesetname[;owner] filespec

     

    tf shelve /delete [/server:serverURL] shelvesetname[;owner]

    Shelvesets

    tf shelvesets [/owner:ownername] [/format:(brief|detailed)] [/server:serverURL] shelvesetname

    Status

    tf status itemspec [/s:servername] ([/workspace:workspacename[;workspaceowner]] | [/shelveset:shelvesetname[;shelvesetowner]]) [/format:(brief|detailed)] [/recursive] [/user:(*|username)]

    Undelete

    tf undelete [/noget] [/lock:(none|checkin|checkout)] [/newname:name] [/recursive] itemspec[;deletionID]

    Undo

    tf undo [/workspace:workspacename[;workspaceowner]] [/s:servername] [/recursive] itemspec

    Unlabel

    tf unlabel [/s:servername] [/recursive] labelname itemspec

    Unshelve

    tf unshelve [/move] [shelvesetname[;username]] itemspec

    View

    tf view [/s:servername] [/console] [/noprompt] itemspec [/version:versionspec]

    WorkFold

    tf workfold localfolder

     

    tf workfold [/workspace: workspacename]

     

    tf workfold [/s:servername] [/workspace: workspacename] repositoryfolder

     

    tf workfold [/map] [/s:servername] [/workspace: workspacename] repositoryfolder|localfolder

     

    tf workfold /unmap [/s:servername] [/workspace: workspacename] [/recursive] (repositoryfolder|localfolder)

     

    tf workfold /cloak (repositoryfolder|localfolder) [/workspace: workspacename] [/s:servername]

     

    tf workfold /decloak (repositoryfolder|localfolder) [/workspace: workspacename] [/s:servername]

    Workspace

    Option Set #1--Create New Workspace:

    tf workspace /new [/noprompt] [/template:workspacename[;workspaceowner]]

    [/computer:computername] [/comment:("comment"|@commentfile)] [/s:servername]

     

    Option Set #2--Delete Workspace:

    tf workspace /delete [/s:servername] workspacename[;workspaceowner]

     

    Option Set #3--Edit Existing Workspace:

    tf workspace [/s:servername] [/comment:comment] [/newname:workspacename] workspacename[;workspaceowner]

    Workspaces

    tf workspaces [/owner:ownername] [/computer:computername] [/s:servername] [/format:(brief|detailed)] [/updateUserName:oldUserName] [/updateComputerName:oldComputerName] workspacename

     

    tf workspaces /remove:(*|workspace1[,workspace2,...]) /server:(*|server)

  • Buck Hodges

    VSTS 2005 and 2008: Building Database Projects with Team Build

    • 13 Comments

    Jon Liperi, a tester on Team Build, has put together the post below that explains a number of the issues around using Visual Studio Team Edition for Database Professionals (DBPro) with TFS Build.  Jon previously worked on the DBPro team, so he knows his way around it quite well.  Here are the issues that he covers.

    • Issue #1: Team Build service account does not have the required SQL Server permissions or cannot connect to SQL Server
    • Issue #2: Values for TargetDatabase, TargetConnectionString, or DefaultDataPath are missing or incorrect
    • Issue #3: The New Build Definition dialog does not provide a “Default” configuration option
    • Issue #4: The Deploy target is not invoked when built via Team Build
    • Issue #5: Database Unit Tests cannot find database project files, data generation plans, or the database instance(s) to be used for running tests when run via Team Build

    This information applies to both the 2005 (8.0) and the 2008 (9.0) versions of VSTS and TFS.

    Building Database Projects with Team Build by Jon Liperi

    Recently, we have seen more questions about building database projects with Team Build. It is absolutely possible to build these project types with Team Build. However, you’ll need to have VSTE for Database Professionals (aka DBPro) and SQL Server installed on the build agent. There are also several known issues. In this blog post, I’ll describe these issues and their workarounds. Please let me know if there are additional issues you encounter or if the work-around steps need a bit of correction by commenting on this blog entry or posting on the Team Build forum:

    http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=481&SiteID=1

    To start, I’ll point you to a few links specific to database projects, including some existing documentation around Team Build integration.

    Visual Studio Team System Database Edition (MSDN documentation)
    http://msdn2.microsoft.com/en-us/library/aa833253(VS.90).aspx

    How to: Deploy Changes using Team Foundation Build (MSDN documentation)
    http://msdn2.microsoft.com/en-us/library/aa833289(VS.90).aspx

    Visual Studio Team System - Database Professionals Forum
    http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=725&SiteID=1

    Issue #1: Team Build service account does not have the required SQL Server permissions or cannot connect to SQL Server

    Database projects create a scratch database on the local SQL Server instance when building. Therefore, the build will fail if the Team Build service account does not have the appropriate permissions on the SQL Server instance running on the build machine. You may see an error message similar to the one below in the build log:

    The "SqlBuildTask" task failed unexpectedly.

    Microsoft.VisualStudio.TeamSystem.Data.Common.Exceptions.DesignDatabaseFailedException: You have insufficient permissions to create the database project. For more information, see the product documentation.

    To resolve this issue, grant the required SQL Server permissions to the Team Build service account. In Orcas, the Team Build service runs by default as NT AUTHORITY\NETWORK SERVICE. A quick way to fix this is to create a SQL login for the service account that has sysadmin privileges. However, if you want to grant only the minimal permissions, detailed SQL Server permissions for DBPro are described here:

    http://msdn2.microsoft.com/en-us/library/aa833413(vs.90).aspx

    Additionally, you may need to ensure that a necessary registry key, which points to the local SQL Server instance to use for build, exists on the build machine. To configure the build account’s HKCU hive to point to the correct instance, you can either:

    1. Start the Visual Studio IDE as that user. Set the instance name by opening Tools | Options and navigating to Database Tools | Design-Time Validation Database.
    2. Run the following command from a command prompt.
      (Note: You may need to replace 9.0 with 8.0 depending on the version of DBPro you are using.)

      runas /user:<Team Build service account> "REG ADD HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\DBPro\DialogPage\Microsoft.VisualStudio.TeamSystem.Data.DBProject.Settings.DataConnectionOptionsSettings /v DefaultSqlServerName /d <instance name>"

      The value for <instance name> is just the name of the instance. For example, if the instance is “.\SQLEXPRESS”, replace <instance name> with “SQLEXPRESS”. If it is an unnamed instance, enter an empty string.

    Issue #2: Values for TargetDatabase, TargetConnectionString, or DefaultDataPath are missing or incorrect

    Database projects store any non-default values for the TargetDatabase, TargetConnectionString, and DefaultDataPath properties in a <ProjectName>.dbproj.user file, which is not checked into version control (as the values may be different for each user). Therefore, these values are missing when building via Team Build, resulting in the following error message:

    TSD257: The value for $(DefaultDataPath) is not set, please set it through the build property page.

    To build successfully from Team Build, you must either:

    1. Copy these properties from the <ProjectName>.dbproj.user file and add them to the <ProjectName>.dbproj file for the configuration that you want to build.
    2. Pass these properties as MSBuild command line arguments by entering them in the Queue Build dialog (Orcas only) or by storing them in the TFSBuild.rsp file. For example:
      /p:DefaultDataPath=<path>;TargetDatabase=<databaseName>;AlwaysCreateNewDatabase=true;TargetConnectionString="<connection string>"

    [After this was posted, Peter Moresi, a developer for Microsoft's AdECN Exchange, sent us the following alternative that may better fit your needs.]

    There is also a third alternative.  Using the first solution of copying the properties into the .dbproj file may cause problems when checking out and checking in the project file, depending on how your team works. The alternative is to create, prior to compilation in the build process, a .user file that includes the required settings. This solution is very similar to the solution to Issue #5.

    1. Create a new file, called $(MSBuildProjectFile).teambuild.user, in the database project that contains the settings for TargetConnectionString, TargetDatabase, and DefaultDataPath. You may also want to include AlwaysCreateNewDatabase so you can explicitly manage this setting for the build.  You'll need to use the .user file generated for your project and add the second PropertyGroup shown below (the one conditioned on your configuration) as the contents for this new file.

      <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <FileGroupsFileNames>
            <FileGroupFileNameList Version="1" xmlns="">
              <AllFileNames>
                <FileGroupFileName>
                  <FileGroupName>PRIMARY</FileGroupName>
                  <FileGroupFileName>903b7a26-677c-46d9-85ee-3ada32272e76</FileGroupFileName>
                </FileGroupFileName>
              </AllFileNames>
            </FileGroupFileNameList>
          </FileGroupsFileNames>
          <DesignDBName>MyDatabase_DB_35eb0dc1-5552-427d-9071-c8874464e107</DesignDBName>
        </PropertyGroup>
        <PropertyGroup Condition=" '$(Configuration)' == 'Default' ">
          <AlwaysCreateNewDatabase>True</AlwaysCreateNewDatabase>
          <DefaultDataPath>c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\</DefaultDataPath>
          <TargetConnectionString>Data Source=localhost;Integrated Security=True;Pooling=False</TargetConnectionString>
          <TargetDatabase>MyDatabase</TargetDatabase>
        </PropertyGroup>
      </Project>

    2. Create the file Microsoft.VisualStudio.TeamSystem.Data.TeamBuildTasks.targets as described in Issue #5, Step 4, later in this post.  Then add the following statements to the end of that file (just before the closing </Project> tag).

      <PropertyGroup>
        <BeforeBuildTeamBuildTargets>RenameTeamBuildUserFile</BeforeBuildTeamBuildTargets>
      </PropertyGroup>
      <ItemGroup>
        <__TeamBuildUserFile Include="$(MSBuildProjectFile).user"/>
      </ItemGroup>
      <Target Name="RenameTeamBuildUserFile">
        <CreateItem Include="$(MSBuildProjectFile).teambuild.user">
          <Output ItemName="TeamBuildUserFile" TaskParameter="Include" />
        </CreateItem>
        <Copy SourceFiles="@(TeamBuildUserFile)" DestinationFiles="@(__TeamBuildUserFile)" />
      </Target>

    3. Modify the .dbproj file and add these elements at the end (just before the closing </Project> tag).  You'll need to change v9.0 to v8.0 if you are using the 2005 (v8.0) release rather than the 2008 (v9.0) release.

      <Import Condition="'$(TeamBuildConstants)' != ''" Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.VisualStudio.TeamSystem.Data.TeamBuildTasks.targets" />
      <Target Name="BeforeBuild" DependsOnTargets="$(BeforeBuildTeamBuildTargets)"></Target>

    Issue #3: The New Build Definition dialog does not provide a “Default” configuration option

    Database projects do not define Debug or Release configurations; they define only “Default”. When creating a new build definition, you may notice that the only listed options are “Release” and “Debug”. To work around this, you can either:

    1. Manually type in “Default” in the dialog.
    2. Ensure the “Release” and “Debug” solution-level configurations are set to build the “Default” configuration of the database project.

    Issue #4: The Deploy target is not invoked when built via Team Build

    The default target executed by MSBuild is Build, so database projects are not deployed by default when built via Team Build. To invoke the Deploy target from Team Build, you must do one of the following:

    1. Ensure your solution configuration is set to invoke both the Build and Deploy targets on the database project.
    2. Override AfterDropBuild to explicitly invoke the Deploy target. Instructions for overriding Team Build targets can be found at:
      http://msdn2.microsoft.com/en-us/library/aa337604(VS.90).aspx
    3. Modify the TFSBuild.proj file to list the individual projects to build and their targets instead of listing the entire solution. For example:
      <SolutionToBuild Include="foo.dbproj">
        <Targets>Build;Deploy</Targets>
      </SolutionToBuild>
    4. Edit the .dbproj file to change its DefaultTargets to Build;Deploy. For example:
      <Project DefaultTargets="Build;Deploy"...>
      This change applies outside of Team Build as well, but it has the desired effect.
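
    For option 2, the following is a minimal sketch of an AfterDropBuild override you could add to TFSBuild.proj to invoke the Deploy target on the database project directly. The project path and configuration name here are hypothetical; adjust them to match your solution.

```xml
<!-- Sketch only: add to TFSBuild.proj after the import of
     Microsoft.TeamFoundation.Build.targets.  The project path and
     configuration name below are hypothetical examples. -->
<Target Name="AfterDropBuild">
  <MSBuild Projects="$(SolutionRoot)\MyDatabase\MyDatabase.dbproj"
           Targets="Deploy"
           Properties="Configuration=Default" />
</Target>
```

    This keeps the deployment step out of the solution configuration, so desktop builds of the same solution are unaffected.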

    Issue #5: Database Unit Tests cannot find database project files, data generation plans, or the database instance(s) to be used for running tests when run via Team Build

    Database unit tests provide the ability to deploy a database project and/or run a data generation plan prior to running tests. Database unit test projects store the location of the referenced .dbproj and .dgen files as relative paths in the app.config file.

    When building a database project using Team Build, the output and source files are stored in a different directory structure on the build machine. The test files are located in a TestResults folder while the source files are located in a Sources folder. When the unit tests are run from the TestResults folder, the relative path to the referenced .dbproj and/or .dgen files in the <assemblyname>.config file is no longer correct. This causes the tests to fail with one of the following messages:

    Database deployment failed. Path '<path>\Database1.dbproj' is not a valid path to a database project file.

    Data Generation failed. Path '<path>\Data Generation Plans\DataGenerationPlan1.dgen' is not a valid path to a data generation plan file.

    Additionally, the app.config file also stores connection string information that points to the SQL Server instance to be used for running tests. If the connection strings are not valid when the tests are run from the build machine during a Team Build, the unit tests will fail with the following message:

    An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.

    (Note that you may want the connection string to change depending on whether the tests are running locally or through Team Build. For example, you may want tests to use your local SQL Server instance during development but a remote shared instance when running via Team Build. The solution to this issue can also be used to achieve that result.)
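
    To make the path problem concrete, here is a small Python sketch of how the same relative path resolves from the Team Build test-run folder, using pure path arithmetic. The folder names below are illustrative only, not the exact layout Team Build produces.

```python
from pathlib import PureWindowsPath

def resolve(base: PureWindowsPath, rel: str) -> PureWindowsPath:
    """Join a base directory with a relative path and collapse the '..'
    segments.  Pure path arithmetic; nothing touches the file system."""
    parts = []
    for part in (base / rel).parts:
        if part == "..":
            parts.pop()  # climb one level up
        else:
            parts.append(part)
    return PureWindowsPath(*parts)

# Illustrative build-machine layout: tests run from a folder nested three
# levels under TestResults, while sources live under a sibling Sources folder.
test_run_dir = PureWindowsPath(r"C:\bw\1\TestResults\Run\Out")

# The original app.config path climbs three levels but misses the sources:
before = r"..\..\..\Database1\Database1.dbproj"
print(resolve(test_run_dir, before))
# C:\bw\1\Database1\Database1.dbproj  (no such folder on the build machine)

# The corrected path inserts Sources plus the solution folder:
after = r"..\..\..\Sources\Database1\Database1\Database1.dbproj"
print(resolve(test_run_dir, after))
# C:\bw\1\Sources\Database1\Database1\Database1.dbproj
```

    Inserting the Sources folder and the solution-name subfolder into the stored relative paths is exactly the adjustment the app.TeamBuild.config steps below make.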

    To fix this issue, manually create an app.TeamBuild.config file that contains the correct path locations and connection strings to be used when builds run via Team Build. A post-build target then copies this file over <AssemblyName>.config, replacing the .config file that contains the incorrect values.

    1. Create a file named app.TeamBuild.config by copying the existing app.config in your unit test project. Add it to your unit test project in version control.
    2. In the app.TeamBuild.config file, change the relative path to the .dbproj and .dgen files by adding a folder level for the Sources folder and a subfolder with the same name as the solution. For example,

      Before:
      "..\..\..\Database1\Data Generation Plans\DataGenerationPlan1.dgen"

      After:
      "..\..\..\Sources\Database1\Database1\Data Generation Plans\DataGenerationPlan1.dgen"

      Before:
      "..\..\..\Database1\Database1.dbproj"

      After:
      "..\..\..\Sources\Database1\Database1\Database1.dbproj"

      Additionally, you can modify the connection strings in the app.TeamBuild.config to the strings that should be used from the Team Build machine. Check the changes back into version control.
    3. Check out the unit test project file and add these lines to the end of the file, just before the closing </Project> tag. Check the changes back into version control.
      (Note: You may need to replace v9.0 with v8.0 depending on the version of DBPro you are using.)

      <Import Condition="'$(TeamBuildConstants)' != ''" Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.VisualStudio.TeamSystem.Data.TeamBuildTasks.targets"/>

      <Target Name="AfterBuild" DependsOnTargets="$(AfterBuildTeamBuildTargets)">
      </Target>

    4. On the Team Build machine, create a file with the code below and name it Microsoft.VisualStudio.TeamSystem.Data.TeamBuildTasks.targets. Save the file in the folder %ProgramFiles%\MSBuild\Microsoft\VisualStudio\v9.0\TeamData (the same folder the Import elements above reference).
      (Note: You may need to replace v9.0 with v8.0 depending on the version of DBPro you are using.)

      <?xml version="1.0" encoding="utf-8"?>

      <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

      <UsingTask TaskName="DataGeneratorTask" AssemblyName="Microsoft.VisualStudio.TeamSystem.Data.Tasks, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>

      <PropertyGroup>
        <AfterBuildTeamBuildTargets>RenameTeamBuildConfig</AfterBuildTeamBuildTargets>
      </PropertyGroup>

      <ItemGroup>
        <__TeamBuildConfig Include="$(OutDir)$(TargetFileName).config"/>
      </ItemGroup>

      <Target Name="RenameTeamBuildConfig">
        <CreateItem Include="app.teambuild.config">
          <Output ItemName="TeamBuildAppConfig" TaskParameter="Include" />
        </CreateItem>
        <Copy SourceFiles="@(TeamBuildAppConfig)" DestinationFiles="@(__TeamBuildConfig)" />
      </Target>

      <Target Name="DataGen">
        <DataGeneratorTask
         ConnectionString="$(ConnectionString)"
         SourceFile="$(SourceFile)"
         PurgeTablesBeforePopulate="$(PurgeTablesBeforePopulate)"
         Verbose="$(Verbose)" />
      </Target>

      </Project>

    [UPDATE 8/29/08] I've added another alternative to solving issue #2 courtesy of Peter Moresi.

  • Buck Hodges

    Migrating from SourceSafe to Team Foundation Server

    • 35 Comments

    We plan to provide migration tools for users switching to TFS.  A VSS user asked in the newsgroup about migrating VSS labels, revision history, sharing, and pinning.

    The goal is to migrate all data (projects, files, and folders, with associated metadata) reliably and with minimal information loss, while preserving user information and appropriate permissions.  There are some features in VSS that do not translate to TFS.  The following is a quick overview of the preliminary plan.

    • Users and groups in TFS are Windows accounts (it uses standard NTLM authentication).  SourceSafe identities will be migrated to Windows accounts. 
    • Labels and revision history will be preserved.  With regard to revision history, TFS supports add, delete, rename/move, etc.  It does not support destroy/purge in V1.
    • TFS does not have the equivalent of sharing, so the current plan is for the migration tool to handle sharing by copying.  Each copy will have its own history of the common changes made after the file was shared.  VSS branching is migrated in a similar fashion.
    • Pinning and unpinning are also not available in TFS.  The current plan is that for any item currently pinned, the user will be given the option to either ignore the pinning or to assign a label to the versions that are pinned and lock them.
    • TFS does not support the VSS archive and restore features.

    That is a very quick summary of the plan, which may change.  We welcome your feedback.

  • Buck Hodges

    TFS and VS 2012 Update 1 now available

    • 18 Comments

    [Update 2/1/13] A fix for the issues is now available.

    [Update 1/14/13] See this post for the latest on issues with attaching a collection.

    As announced on Soma’s blog, Update 1 for Visual Studio and Team Foundation Server 2012 is now available. Over at the ALM blog, you can find more details on what’s new. For those using the Team Foundation Service at tfs.visualstudio.com, you are already familiar with the new features, as we update the service every three weeks.

    In addition to fixing bugs that were discovered after RTM, here are the new features in Team Foundation Server in Visual Studio 2012 Update 1, which I’ve copied here from the ALM blog.

    Note: The version control warehouse still has a 260-character limit, and you need to apply this update to both your Team Foundation Server and your Visual Studio client.

    Build

    One caveat I want to mention is with upgrading your build computers. After you install TFS Update 1 on your build computers to update them, you will need to go through the configuration again, including choosing the collection, setting the service account, etc. If you have settings you want to preserve, be sure to look them up in the TFS Administration Console on your build computer and write them down for use after you upgrade your build computer. We hope to have this fixed for Update 2.

    Power Tools

    The Team Foundation Server Power Tools will be updated for Update 1 as well. There were significant changes to the APIs in the server DLLs (the .NET assemblies, not the web services), so tools like the backup/restore tool for the server had to be updated. The updated 2012 power tools should be available later this week, barring any last-minute issues.

    Server Installation

    I’ve seen some questions about how to install Team Foundation Server 2012 Update 1. Because it is a server and we need to do things like modify the database as part of the installation, we designed TFS 2012 to use the regular installer to install the updates. You do not need to uninstall anything. Just run the installer, and it will take care of updating your TFS. After the installer completes, it launches the upgrade wizard.

    So, when you go to http://www.microsoft.com/visualstudio/eng/downloads you’ll see two choices for TFS 2012: Install Now and Download Now.

    [screenshot]

    • Install Now will use the web installer and will download what is needed in order to install TFS 2012 Update 1
    • Download Now will download an entire layout, which you can use to install on a machine that doesn’t have internet access, or if you need to update more than one server and want to avoid repeated downloads by the web installer.

    After you run either one and the installation phase is complete, you will see the upgrade wizard.

    [screenshot]

     

    You then must confirm that you have a backup.

    [screenshot]

    Then select your database by setting the SQL Server Instance if the default isn’t correct and then using List Available Databases to see a list of all of the configuration DBs (usually there is just one).

    [screenshot]

    At that point it becomes just a matter of clicking the Next button a few times, watching the upgrade run (okay, only for a small DB – you may want to grab a bite to eat while it runs for a bigger DB), and you’re done!

    [screenshot]

    [screenshot]

    [screenshot]

     

    Note: If you had a browser open with the web UI for TFS 2012, you may get something garbled after the upgrade if you click on a link that doesn’t do a page refresh. The reason is that it’s a mix of the old and new web code. Just click Refresh in your browser to fix that.

    After clicking a link after upgrade (browser was open with the TFS web interface prior to upgrade):

    [screenshot]

    After clicking Refresh in the browser:

    [screenshot]

     

    Enjoy!

    Follow me on Twitter at twitter.com/tfsbuck

  • Buck Hodges

    How to delete a team project from Team Foundation Service (tfs.visualstudio.com)

    • 57 Comments

    [UPDATE 9/13/13] You can now use the web UI to delete a team project.

    [UPDATE 5/14/13] Updated the URLs and version of VS (used to say preview)

    The question came up as to how to delete a team project in the Team Foundation Service (TFService).  When I first tried it, it didn’t work.  Then I realized it’s the one case where you have to explicitly specify the collection name.  It’s surprising because in hosted TFS each account has only one collection.  You cannot create multiple collections currently as you can with on-premise TFS (this will change at some point in the future).  Incidentally, you cannot delete a collection right now either.

    You must have installed the Visual Studio 2012 RTM or newer build to do this (you can also use the standalone Team Explorer 2012).  Even with the patch to support hosting, the 2010 version of tfsdeleteproject.exe will not work.

    If you leave off the collection, here’s the error you will see when trying to delete the team project called Testing.

    C:\project>tfsdeleteproject /collection:https://buckh-test2.visualstudio.com Testing
    Team Foundation services are not available from server https://buckh-test2.visualstudio.com.
    Technical information (for administrator):
      HTTP code 404: Not Found

    With DefaultCollection added to your hosting account’s URL, you will get the standard experience with tfsdeleteproject and successfully delete the team project.

    C:\project>tfsdeleteproject /collection:https://buckh-test2.visualstudio.com/DefaultCollection Testing

    Warning: Deleting a team project is an irrecoverable operation. All version control, work item tracking and Team Foundation build data will be destroyed from the system. The only way to recover this data is by restoring a stored backup of the databases. Are you sure you want to delete the team project and all of its data (Y/N)?y

    Deleting from Build ...
    Done
    Deleting from Version Control ...
    Done
    Deleting from Work Item Tracking ...
    Done
    Deleting from TestManagement ...
    Done
    Deleting from LabManagement ...
    Done
    Deleting from ProjectServer ...
    Done
    Warning. Did not find Report Server service.
    Warning. Did not find SharePoint site service.
    Deleting from Team Foundation Core ...
    Done

    This is the error you will get when using tfsdeleteproject 2010, even with the patch for hosting access.

    C:\Program Files\Microsoft Visual Studio 10.0\VC>tfsdeleteproject /collection:https://buckh-test2.visualstudio.com/DefaultCollection Testing2

    Warning: Deleting a team project is an irrecoverable operation. All version control, work item tracking and Team Foundation build data will be destroyed from the system. The only way to recover this data is by restoring a stored backup of the databases. Are you sure you want to delete the team project and all of its data (Y/N)?y

    TF200040: You cannot delete a team project with your version of Team Explorer. Contact your system administrator to determine how to upgrade your Team Explorer client to the version compatible with Team Foundation Server.
