Buck Hodges

Visual Studio Online, Team Foundation Server, MSDN

Posts
  • Buck Hodges

    TFS 2010 server licensing: It's included in MSDN subscriptions

    • 76 Comments

    [UPDATE 2/10/2010]  You can now get the official Visual Studio 2010 Licensing whitepaper, which also covers TFS, Lab, and IntelliTrace. That is the best resource for understanding the licensing.

    Another big piece of news with the release of VS and TFS 2010 betas yesterday is the changes to TFS licensing for 2010 that make it even more affordable.  Here are the comments from Doug Seven, our licensing guru in marketing, on Soma's beta 2 announcement post.

    Team Foundation Server 2010 will be included in the MSDN subscription that comes with Visual Studio 2010 Professional, Premium, Ultimate, and Test Elements. This copy of Team Foundation Server is licensed for unlimited development and test use (as is all MSDN software) and licensed for one production deployment. These MSDN subscriptions also include one CAL.

    Team Foundation Server has three installation choices - Basic, Advanced, and Custom.  You will be able to install this either on your client machine (very similar to client-side SCM such as VSS) or on a server machine just like TFS 2008.

    Team Foundation Server will also be available in retail for around $500 USD and will include a license term allowing up to five (5) named users without CALs to use Team Foundation Server. To grow to more than five users, you will need to have CALs for additional users beyond five users. This enables small teams of five or fewer to get up and running on Team Foundation Server for as little as $500 USD.

    Of course having Visual Studio 2010 with MSDN means you can get Team Foundation Server up and running at no additional cost.

    You can also hear more in an interview with Doug Seven conducted by three MVPs: The Ultimate Announcement Show.

    I'm not a licensing expert, so I can't answer detailed questions about licensing.  I did want to make sure everyone sees this.  It's a really exciting change.

    [UPDATE 10/20/09]  I wanted to add a clarification from Doug around the CALs and SQL.  There is a licensing whitepaper in the works that should be out soon.

    Retail TFS does not come with 5-CALs. It has a EULA exception allowing up to 5 users without CALs. The primary difference is that CALs can be used to access multiple TFS instances. A EULA exception cannot. In other words, buying two TFS retail licenses does NOT give me rights for 10-users on one instance of TFS. It gives me rights to two instances with 5-users each. To add more than 5 users, you must have CALs for all additional users.

    TFS also still includes a SQL Server license for use with TFS.  In other words, you can't use the SQL license included with TFS to do anything other than to support TFS.

  • Buck Hodges

    Keyword expansion in TFS

    • 63 Comments

    Periodically, the topic of keyword expansion comes up, which TFS (at least through 2008) does not support.  At one point during the v1 product cycle, it was a planned feature and was partially implemented.  However, there are lots of challenges to getting it right in TFS version control, and it wasn't worth the cost to finish the feature.  As a result, we ripped it out, and TFS does not support keyword expansion.

    Since it's not supported in the product and not likely to be supported any time soon, folks gravitate toward the idea of using checkin policies to implement keyword expansion.  The idea is appealing since the checkin policy will be called prior to checkin, of course, which would seem to provide the perfect opportunity to do keyword expansion.

    Personally, I’m not fond of trying to do keyword expansion as a checkin policy.  There are a number of checkin policy issues to deal with right away, because any checkin policy that performs keyword expansion is going to modify file contents.

    • Checkin policies get called repeatedly.  Every time the user clicks on the checkin policy channel in the pending changes tool window in Visual Studio, for instance, the checkin policies are evaluated.
    • Whatever a policy does must be done really quickly.  Otherwise, you are going to make VS painful to use.  The checkin policy evaluation isn't done on a background thread, and it wouldn't really help anyway since you wouldn't want to have to wait for some long policy evaluation before the checkin process started.
    • Checkin policies can be evaluated at any time.  The user may or may not actually be checking in at the point that the checkin policies are evaluated.  You even have the option of evaluating checkin policies prior to shelving.
    • For applications using the version control API, checkin policies are only evaluated if the application chooses to evaluate them (see How to validate check-in policies, evaluate check-in notes, and check for conflicts).  Some folks may read this and think it's a hole in checkin policy enforcement.  However, since checkin policy evaluation is done on the client, you can't rely on it being done (i.e., clients can lie and call the web service directly anyway).  The other reason has to do with performance.  For an application like Visual Studio, it controls when checkin policies are evaluated, and by the time that it calls the version control API to checkin, there's no need to evaluate them yet again.  Some day there may be server-side checkins, but they don't exist yet (as of TFS 2008).
    • You've got to get your checkin policy onto all of the client computers that are used to check in.  The deployment story for checkin policies is probably the single biggest hole in the checkin policy feature in the product (the second biggest hole is the lack of built-in support for scoping the policy to something less than an entire team project, though there is a power tool checkin wrapper policy to do that now).  Any computer without the checkin policy assembly on it and properly listed in the registry is not going to do keyword expansion.

    If you read that and still want to do it, you would need to pend an edit on each file that does not already have the edit bit set (for example, non-edit branch, merge, rename, and undelete) and is not a delete (can’t edit a pending delete).  I’m pretty sure that VS and the command line will have problems with changes being pended during a checkin policy evaluation, because they’ve already queried for the pending changes and won’t re-query after the checkin policy evaluation.  This would result in edits not being uploaded.  This pretty much makes pending edits on files via the checkin policy impractical.

    Alternatively, you could do keyword expansion only for changes where the edit bit is already set in the pending change.  That's sort of the "least evil solution."  You would just use the checkin policy for keyword expansion in files that already have pending edits (i.e., check to see that the Edit bit is set in the pending change’s ChangeType).

    Some of the files with pending edits may not have actually been changed (e.g., you pended edits on all of the files in a directory as a convenience because you knew you would be editing at least half of them via a script).  When the server detects that a file that's being checked in with only a pending edit hasn't been modified, it doesn't commit a change for that file (i.e., create a new version of that file).  You can read a bit about that in the post, VC API: CheckIn() may return 0.  To detect for yourself whether this is the case, you can compute the MD5 hash of the file content and compare that to the HashValue property of the PendingChange class.  If the two are equal, then the file didn't change.  For those of you doing government work, you'll want to watch out for FIPS enforcement.  When that's turned on in Windows, MD5 hashes are unavailable because the MD5CryptoServiceProvider class in .NET throws when you try to create one.  In that environment, the hash values are empty arrays.
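
    If you go down that route, the two checks described above are easy to combine.  Below is a minimal sketch in C#, assuming a helper that your checkin policy calls for each pending change; the class and method names are illustrative, not part of the product API.

    using System.IO;
    using System.Security.Cryptography;
    using Microsoft.TeamFoundation.VersionControl.Client;

    internal static class KeywordExpansionChecks
    {
        // Returns true only for pending changes that already have the Edit bit
        // set and whose local content actually differs from the server version.
        internal static bool ShouldExpand(PendingChange change)
        {
            // Only touch files that already have a pending edit.
            if ((change.ChangeType & ChangeType.Edit) != ChangeType.Edit)
            {
                return false;
            }

            // An empty server hash (e.g., FIPS enforcement is on, so MD5 is
            // unavailable) means we can't tell whether the content changed.
            byte[] serverHash = change.HashValue;
            if (serverHash == null || serverHash.Length == 0)
            {
                return true;
            }

            // Hash the local content.  Equal hashes mean the file wasn't really
            // modified, and the server won't commit a new version of it anyway.
            byte[] localHash;
            using (FileStream stream = File.OpenRead(change.LocalItem))
            {
                localHash = new MD5CryptoServiceProvider().ComputeHash(stream);
            }

            if (localHash.Length != serverHash.Length)
            {
                return true;
            }
            for (int i = 0; i < localHash.Length; i++)
            {
                if (localHash[i] != serverHash[i])
                {
                    return true;
                }
            }
            return false;
        }
    }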

    But wait, there's more!  You would also have to make sure that you read and write the file in the correct encoding (e.g., reading in DBCS as ASCII or Unicode would be bad – for example, Japanese or Chinese DBCS files).  There are probably more encoding issues to contend with.  One thing that's probably on your side, though, is that if you do read in the file in the wrong encoding, you won't likely find the markers indicating that the file needs keyword expansion.  To avoid randomly finding the tags when you know you don't want to, you'd likely want to skip all binary files that your expansion logic doesn't know how to handle (e.g., you could conceivably handle keyword expansion in JPEG file headers, but that doesn't seem too likely).

    The other thing to consider is how keyword expansion interacts with branching and merging.  Imagine putting the date in every file in a keyword expansion system.  It’s going to be a merge conflict every time you merge branches.  The same is true for log (history) information.  You would need to write a tool to handle the content conflicts; otherwise, merging large branches would be a real bear.

    Even with all of that, you are not going to get one of the things that often comes up: the ability to record the changeset number in the keyword expansion comment.  (This would have been true even if we had put the feature in the product, because expansion was all going to be done on the client prior to checking in.)

    So, to sum it all up, you are going to need to consider the following.

    1. Your checkin policy will get evaluated multiple times and often not when someone is actually checking in.
    2. You'll want to only modify the files that already have pending edits, as pending new changes from within a checkin policy may lead to new content changes being uploaded with the rest of the checkin.
    3. You'll want to test out how it's going to interact when merging files from one branch to another.  What's it like to deal with the conflicts your keyword expansion introduces?
    4. There will be checkins where keyword expansion doesn't happen for whatever reason, so it's not completely reliable.  This is in addition to the fact that changes that do not already involve an edit won't have keyword expansion at all.

    If we had done keyword expansion as a feature, what would have been different (other than the fact that you wouldn't have to think about all of this :-) )?  We wouldn't have the limitation of 1.  Just like in 2, we'd have to think hard about whether every rename, branch, undelete, and merge should have an edit pended also.  At best it would have been an option, and it wouldn't have been the default.  Regarding the branch merging issue, doing something on the server would be a performance hit that may be excessive with large branches (keep in mind that the server stores the content as compressed and doesn't uncompress it -- the client takes care of compressing and decompressing content), so the client would need to have some logic to help make it bearable (e.g., before performing a three-way merge, collapse all of the expanded keywords).

    Our conclusion was that the feature was too expensive to implement and test relative to the value provided and other features could be implemented and tested with a similar effort (e.g., making history better).  Folks either strongly agree (can't live without it) or disagree (don't care about it at all) with that conclusion.  There's rarely anyone on the fence.  The feedback we've received to this point indicates that we've made the right tradeoff for the vast majority of our customers (and potential customers).

  • Buck Hodges

    How to delete a team project from Team Foundation Service (tfs.visualstudio.com)

    • 57 Comments

    [UPDATE 9/13/13] You can now use the web UI to delete a team project.

    [UPDATE 5/14/13] Updated the URLs and version of VS (used to say preview)

    The question came up as to how to delete a team project in the Team Foundation Service (TFService).  When I first tried it, it didn’t work.  Then I realized it’s the one case where you have to explicitly specify the collection name.  It’s surprising because in hosted TFS each account has only one collection.  You cannot currently create multiple collections as you can with on-premises TFS (this will change at some point in the future).  Incidentally, you cannot delete a collection right now either.

    You must have installed the Visual Studio 2012 RTM or newer build to do this (you can also use the standalone Team Explorer 2012).  Even with the patch to support hosting, the 2010 version of tfsdeleteproject.exe will not work.

    If you leave off the collection, here’s the error you’ll see.  In this example, I’m trying to delete the team project called Testing.

    C:\project>tfsdeleteproject /collection:https://buckh-test2.visualstudio.com Testing
    Team Foundation services are not available from server https://buckh-test2.visualstudio.com.
    Technical information (for administrator):
      HTTP code 404: Not Found

    With DefaultCollection added to your hosting account’s URL, you will get the standard experience with tfsdeleteproject and successfully delete the team project.

    C:\project>tfsdeleteproject /collection:https://buckh-test2.visualstudio.com/DefaultCollection Testing

    Warning: Deleting a team project is an irrecoverable operation. All version control, work item tracking and Team Foundation build data will be destroyed from the system. The only way to recover this data is by restoring a stored backup of the databases. Are you sure you want to delete the team project and all of its data (Y/N)?y

    Deleting from Build ...
    Done
    Deleting from Version Control ...
    Done
    Deleting from Work Item Tracking ...
    Done
    Deleting from TestManagement ...
    Done
    Deleting from LabManagement ...
    Done
    Deleting from ProjectServer ...
    Done
    Warning. Did not find Report Server service.
    Warning. Did not find SharePoint site service.
    Deleting from Team Foundation Core ...
    Done

    This is the error you will get when using tfsdeleteproject 2010, even with the patch for hosting access.

    C:\Program Files\Microsoft Visual Studio 10.0\VC>tfsdeleteproject /collection:https://buckh-test2.visualstudio.com/DefaultCollection Testing2

    Warning: Deleting a team project is an irrecoverable operation. All version control, work item tracking and Team Foundation build data will be destroyed from the system. The only way to recover this data is by restoring a stored backup of the databases. Are you sure you want to delete the team project and all of its data (Y/N)?y

    TF200040: You cannot delete a team project with your version of Team Explorer. Contact your system administrator to determine how to upgrade your Team Explorer client to the version compatible with Team Foundation Server.

  • Buck Hodges

    Visual Studio setup projects (vdproj) will not ship with future versions of VS

    • 51 Comments

    [UPDATE 04/18/14] The Visual Studio team has released an extension to VS 2013 to address the feedback on this, which has been loud and clear for a long time now: Visual Studio Installer Projects Extension.

    [UPDATE 11/6/12] Fixed broken links.

    At the user group meeting last night, someone asked about the future of WiX.  There was some confusion over the future of WiX since at one point there was a plan to ship it, but then that changed.  Rob Mensching, who leads the WiX project, is a developer on Visual Studio, and Visual Studio continues to contribute to the WiX project.  We use WiX to create the installation packages for VS and a bunch of other stuff.

    The Visual Studio setup projects will not ship again – VS 2010 was the last release with support for it.  So, you’ll want to make plans to migrate to something else.  Of course, I’d suggest looking into WiX, and there are other options as well.  The MSDN page Choosing a Windows Installer Deployment Tool contains a table showing a comparison of VS setup projects, WiX, and InstallShield Limited Edition.

    Caution

    Future versions of Visual Studio will not include the Visual Studio Installer project templates. To preserve existing customer investments in Visual Studio Installer projects, Microsoft will continue to support the Visual Studio Installer projects that shipped with Visual Studio 2010 per the product life-cycle strategy. For more information, see Expanded Microsoft Support Lifecycle Policy for Business & Development Products.

  • Buck Hodges

    How to run tests in a build without test metadata files and test lists (.vsmdi files)

    • 46 Comments

    [UPDATE 6/16/2010]  The VSTS 2008 release added support for test containers (/testcontainer) in the product, and the 2010 release added support for test categories.  This post now only applies to TFS 2005.

    Since the beginning, running tests in Team Build (or MSBuild in general) has meant having to use .vsmdi files to specify the tests to run.  Tons of people have complained about it: the files are a burden to create and edit, either VSTS for Testers or the full suite is required in the 2005 release, and merging changes to the file is painful when multiple developers are updating it.  While mstest.exe has a /testcontainer option for simply specifying a DLL or load/web test file, the test tools task used with MSBuild does not expose an equivalent property.

    Attached to this post is a zip archive containing the files needed to run tests without metadata.  There are three files.

    1. TestToolsTask.doc contains the instructions for installing the new task DLL and modifying the TfsBuild.proj files to use it.
    2. Microsoft.TeamFoundation.Build.targets is a replacement for the v1 file by the same name that resides on the build machine.
    3. Microsoft.TeamFoundation.PowerToy.QualityTools.dll contains the new TestToolsTask class that extends the TestToolsTask class that shipped in v1 and exposes a TestContainers property that is the equivalent of mstest.exe's /testcontainer option.

    After you read the instructions (please, read the instructions), you'll be able to run all of your unit tests by simply specifying the DLLs or even specifying a file name pattern in TfsBuild.proj.  Here are a couple of examples.  The TestToolsTask.doc file explains how they work, including what %2a means (it's the MSBuild escape sequence for the * wildcard character).

    <TestContainerInOutput Include="HelloWorldTest.dll" />
    <TestContainerInOutput Include="%2a%2a\%2aTest.dll" />
    <TestContainer Include="$(SolutionRoot)\TestProject\WebTest1.webtest" />

    This new task will be included in future releases of the Team Foundation Power Toys (soon to be called Power Tools).  The TestToolsTask in the shipping product will support test containers and Team Build will support using it in the next release of the product.

    I started with code and documentation that someone else wrote and finished it off.  Thanks to Clark Sell for digging up an old email thread about the original work and Tom Marsh, development lead in VSTS Test, for pointing me to the code and doc.  Thanks to Aaron Hallberg and Patrick Carnahan, Team Build developers, for their help.  Thanks to Brian Harry for letting me post it.

    Please post your feedback in the comments to this post.  We'd like to know if this hits the mark, or if there's something else we need to support.

    [UPDATE 4/26/2007] New features: Test categories and test names

    Pierre Greborio, a developer over in MSTV, has contributed a great new feature: test categories.  Those of you who have used NUnit are probably familiar with the Category attribute.  Test categories allow you to execute specific groups of unit tests.  Unlike the test container feature, the test category feature will not be in Orcas.

    While the details are discussed in the TestToolsTask.doc included in the zip file attached to this blog post, here's the quick version.

    1. Add the Category attribute to your unit test method (see the sketch after this list).
    2. Specify the category to run in your tfsbuild.proj file.
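
    To make step 1 concrete, here's a minimal sketch.  [TestMethod] is the standard MSTest attribute; the exact namespace and signature of the Category attribute are defined by the power tool assembly, so treat this as illustrative and see TestToolsTask.doc for the real details.

    [TestMethod]
    [Category("Nightly")]  // "Nightly" is an illustrative category name
    public void CustomerRoundTripTest()
    {
        // ... test body ...
    }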

    The other feature that's new in this release is the support for test names.  This is equivalent to the mstest.exe /test command line switch.  The name that's specified is implicitly a wildcard match.  Specifying "Blue" as the test name, for example, will execute every test method that has Blue in the name, including DarkBlue and BlueLight.

    Pierre did his testing on Vista.  Because the dll is not signed, he ran into some trust issues.  If you hit a similar problem, he recommends this forum post for how to get it to work.

    [UPDATE 11/9/2006] Bug fix

    I've updated the attachment with a new zip archive file.  Unfortunately, the task I posted originally didn't result in builds being marked as Failed when the tests had errors.  I missed that in my testing.  The reason for the problem is that the v1 Team Build logger looks for the task name, TestToolsTask, and the power toy task was originally called TestToolsTaskEx.  With this new release, the task has the same name as the original v1 task, so that builds will be marked as failed when the tests fail.

    If you downloaded the original release, you simply need to copy the Microsoft.TeamFoundation.Build.targets and Microsoft.TeamFoundation.PowerToys.Tasks.QualityTools.dll files from the zip to get the fix (see the Word doc for the paths).

    Thanks to Thomas for pointing out this bug!


  • Buck Hodges

    Preview of the build notification tray applet power tool for TFS 2008

    • 38 Comments

    [UPDATE 12/21/07]  The build notification tool has now become part of the TFS Power Tools for TFS 2008!  It has new features and quite a few fixes (not to mention that it's a signed binary), so I've removed the attachment from this post.

    We would have loved to include in TFS 2008 a build notification tray applet along the lines of CCTray for CruiseControl.  However, we didn't have the time in the schedule to do it.  As a result, we're going to be releasing one as a power tool.

    You may remember seeing the spec for this on Jim Lamb's blog.  Swaha Miller, a developer on Team Build, implemented this tool, and I've attached the binary to this post to provide a preview and get your feedback.  Disclaimer: Please note that this is not official software, has bugs, may burn up your computer, etc.  In other words, you accept full responsibility for it if you choose to run it.

    When you run it, you'll see a balloon tip in your system tray (I have my taskbar docked to the right-hand side of my screen).  The applet automatically configures itself to run when you log into your computer.  Don't worry, though.  You get the option of removing that if you shut down the applet.

    Start up balloon

    When you click on the balloon, you'll be able to select which build definitions you would like to monitor.  The list of servers is retrieved from the registry location where Team Explorer stores them.  If you've never used Team Explorer before, there won't be any servers listed.

    Here I'm going to monitor the HelloworldTest builds in the VSTS V2 Plans team project.  You can monitor as many builds as you like and on multiple servers, but I'm just monitoring one build.  I've chosen to be notified when a build is started and finished, regardless of who kicked it off.  Note that you can filter the build definitions if you have a lot to deal with.

    Configure Build Notifications

    It turns out that the last time this build executed, it was successful.  You'll notice the tray applet's icon has a green circle with a check mark in it.

    Last build was good

    Let's kick off a new build and see what happens.  Here's the notification that the build is starting.  The Stop Build link on the "toast" window allows you to stop the build, if you don't want it.  For those of you paying really close attention, you'll notice that this is the .3 build.  I missed capturing a screen shot earlier.

    Build started notification

    Meanwhile, the tray applet's icon changes to show a green triangle "playing" icon, indicating a build is in progress.

    Build is in progress

    When the build completes, you can see that I've broken the build.  By clicking on the popup window, you can view the build details in a web browser.  If you click the little triangle in the upper right corner, you'll get a menu with other options.  In this case, it turns out that the drop location that I specified didn't exist.

    Build failed notification

    Now the applet's icon shows a red circle with an 'X' in it, indicating that the last build is broken.

    Last build was broken

    If you want to learn more about this build, you can double click on the tray applet's icon to pop up the following window.  If you right click on the build, you'll get options to view the details in a web browser, delete it, etc.

    Current Build Status

    I fixed the drop share problem and ran the build again.

    Build partially succeeded notification

    As you can see, the build was only partially successful.  What went wrong?  Well, it's something many of you have experienced.  The compilation succeeded, but the test failed because Visual Studio Team System for Testers isn't installed on the build machine!  We have plans to make installing the unit test framework on your build server much easier in the release after TFS 2008.

    We hope you enjoy using this build notification tray app.  Please let us know what you like and dislike and what features you would like to see in the next version by posting your comments here.

    Enjoy!

  • Buck Hodges

    Team Foundation Version Control client API example for TFS 2010 and newer

    • 35 Comments

    Over six years ago, I posted a sample on how to use the version control API.  The API changed in TFS 2010, but I hadn’t updated the sample.  Here is a version that works with 2010 and newer and is a little less aggressive on clean up in the finally block.

    This is a really simple example that uses the version control API.  It shows how to create a workspace, pend changes, check in those changes, and hook up some important event listeners.  This sample doesn't do anything useful, but it should get you going.

    You have to supply a Team Project as an argument.

    The only real difference in this version is that it connects using the TfsTeamProjectCollection class instead of the TeamFoundationServer class used in the original sample (back in beta 3, you were forced to use the factory class).

    You'll need to add references to the following TFS assemblies to compile this example.

    Microsoft.TeamFoundation.VersionControl.Client.dll
    Microsoft.TeamFoundation.Client.dll

    Code Snippet
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Text;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    namespace BasicSccExample
    {
        class Example
        {
            static void Main(string[] args)
            {
                // Verify that we have the arguments we require.
                if (args.Length < 2)
                {
                    String appName = Path.GetFileName(Process.GetCurrentProcess().MainModule.FileName);
                    Console.Error.WriteLine("Usage: {0} collectionURL teamProjectPath", appName);
                    Console.Error.WriteLine("Example: {0} http://tfsserver:8080/tfs/DefaultCollection $/MyProject", appName);
                    Environment.Exit(1);
                }

                // Get a reference to our Team Foundation Server.
                TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(args[0]));

                // Get a reference to Version Control.
                VersionControlServer versionControl = tpc.GetService<VersionControlServer>();

                // Listen for the Source Control events.
                versionControl.NonFatalError += Example.OnNonFatalError;
                versionControl.Getting += Example.OnGetting;
                versionControl.BeforeCheckinPendingChange += Example.OnBeforeCheckinPendingChange;
                versionControl.NewPendingChange += Example.OnNewPendingChange;

                // Create a workspace.
                Workspace workspace = versionControl.CreateWorkspace("BasicSccExample", versionControl.AuthorizedUser);

                String topDir = null;

                try
                {
                    String localDir = @"c:\temp\BasicSccExample";
                    Console.WriteLine("\r\n--- Create a mapping: {0} -> {1}", args[1], localDir);
                    workspace.Map(args[1], localDir);

                    Console.WriteLine("\r\n--- Get the files from the repository.\r\n");
                    workspace.Get();

                    Console.WriteLine("\r\n--- Create a file.");
                    topDir = Path.Combine(workspace.Folders[0].LocalItem, "sub");
                    Directory.CreateDirectory(topDir);
                    String fileName = Path.Combine(topDir, "basic.cs");
                    using (StreamWriter sw = new StreamWriter(fileName))
                    {
                        sw.WriteLine("revision 1 of basic.cs");
                    }

                    Console.WriteLine("\r\n--- Now add everything.\r\n");
                    workspace.PendAdd(topDir, true);

                    Console.WriteLine("\r\n--- Show our pending changes.\r\n");
                    PendingChange[] pendingChanges = workspace.GetPendingChanges();
                    Console.WriteLine("  Your current pending changes:");
                    foreach (PendingChange pendingChange in pendingChanges)
                    {
                        Console.WriteLine("    path: " + pendingChange.LocalItem +
                                          ", change: " + PendingChange.GetLocalizedStringForChangeType(pendingChange.ChangeType));
                    }

                    Console.WriteLine("\r\n--- Checkin the items we added.\r\n");
                    int changesetNumber = workspace.CheckIn(pendingChanges, "Sample changes");
                    Console.WriteLine("  Checked in changeset " + changesetNumber);

                    Console.WriteLine("\r\n--- Checkout and modify the file.\r\n");
                    workspace.PendEdit(fileName);
                    using (StreamWriter sw = new StreamWriter(fileName))
                    {
                        sw.WriteLine("revision 2 of basic.cs");
                    }

                    Console.WriteLine("\r\n--- Get the pending change and check in the new revision.\r\n");
                    pendingChanges = workspace.GetPendingChanges();
                    changesetNumber = workspace.CheckIn(pendingChanges, "Modified basic.cs");
                    Console.WriteLine("  Checked in changeset " + changesetNumber);
                }
                finally
                {
                    if (topDir != null)
                    {
                        Console.WriteLine("\r\n--- Delete all of the items under the test project.\r\n");
                        workspace.PendDelete(topDir, RecursionType.Full);
                        PendingChange[] pendingChanges = workspace.GetPendingChanges();
                        if (pendingChanges.Length > 0)
                        {
                            workspace.CheckIn(pendingChanges, "Clean up!");
                        }

                        Console.WriteLine("\r\n--- Delete the workspace.");
                        workspace.Delete();
                    }
                }
            }

            internal static void OnNonFatalError(Object sender, ExceptionEventArgs e)
            {
                if (e.Exception != null)
                {
                    Console.Error.WriteLine("  Non-fatal exception: " + e.Exception.Message);
                }
                else
                {
                    Console.Error.WriteLine("  Non-fatal failure: " + e.Failure.Message);
                }
            }

            internal static void OnGetting(Object sender, GettingEventArgs e)
            {
                Console.WriteLine("  Getting: " + e.TargetLocalItem + ", status: " + e.Status);
            }

            internal static void OnBeforeCheckinPendingChange(Object sender, ProcessingChangeEventArgs e)
            {
                Console.WriteLine("  Checking in " + e.PendingChange.LocalItem);
            }

            internal static void OnNewPendingChange(Object sender, PendingChangeEventArgs e)
            {
                Console.WriteLine("  Pending " + PendingChange.GetLocalizedStringForChangeType(e.PendingChange.ChangeType) +
                                  " on " + e.PendingChange.LocalItem);
            }
        }
    }

  • Buck Hodges

    Migrating from SourceSafe to Team Foundation Server

    • 35 Comments

    We plan to provide migration tools for users switching to TFS.  A VSS user asked in the newsgroup about migrating VSS labels, revision history, sharing, and pinning.

    The goal is to migrate all data (projects, files, and folders, along with the associated metadata) reliably and with minimal information loss, while preserving user information and appropriate permissions.  There are some features in VSS that do not translate to TFS.  The following is a quick overview of the preliminary plan.

    • Users and groups in TFS are Windows accounts (it uses standard NTLM authentication).  SourceSafe identities will be migrated to Windows accounts. 
    • Labels and revision history will be preserved.  With regard to revision history, TFS supports add, delete, rename/move, etc.  It does not support destroy/purge in V1.
    • TFS does not have the equivalent of sharing, so the current plan is that the migration tool will handle that by copying.  Each copy will have its own history of the common changes made after the file was shared.  VSS branching is migrated in a similar fashion.
    • Pinning and unpinning are also not available in TFS.  The current plan is that for any item currently pinned, the user will be given the option to either ignore the pinning or to assign a label to the versions that are pinned and lock them.
    • TFS does not support the VSS archive and restore features.

    That is a very quick summary of the plan, which may change.  We welcome your feedback.

  • Buck Hodges

    Why doesn't Team Foundation get the latest version of a file on checkout?

    • 33 Comments

    I've seen this question come up a few times.  Doug Neumann, our PM, wrote a nice explanation in the Team Foundation forum (http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=70231).

    It turns out that this is by design, so let me explain the reasoning behind it.  When you perform a get operation to populate your workspace with a set of files, you are setting yourself up with a consistent snapshot from source control.  Typically, the configuration of source on your system represents a point in time snapshot of files from the repository that are known to work together, and therefore is buildable and testable.

    As a developer working in a workspace, you are isolated from the changes being made by other developers.  You control when you want to accept changes from other developers by performing a get operation as appropriate.  Ideally when you do this, you'll update the entire configuration of source, and not just one or two files.  Why?  Because changes in one file typically depend on corresponding changes to other files, and you need to ensure that you've still got a consistent snapshot of source that is buildable and testable.

    This is why the checkout operation doesn't perform a get latest on the files being checked out.  Updating that one file being checked out would violate the consistent snapshot philosophy and could result in a configuration of source that isn't buildable and testable.  As an alternative, Team Foundation forces users to perform the get latest operation at some point before they checkin their changes.  That's why if you attempt to checkin your changes, and you don't have the latest copy, you'll be prompted with the resolve conflicts dialog.
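
    In terms of the version control client API (the same API shown in the client sample elsewhere on this page), the workflow described above looks roughly like the following sketch; the workspace variable and the file path are illustrative.

    // Accept other developers' changes at a time you choose, updating the
    // entire configuration of source rather than one or two files.
    workspace.Get();

    // Checkout does NOT fetch the latest version; it only pends an edit on
    // the version already in your workspace snapshot.
    workspace.PendEdit(@"c:\src\MyProject\Program.cs");

    // ... build, test, and make your changes ...

    // At checkin time you must have the latest version of each file you
    // changed; otherwise you'll be resolving conflicts before the checkin
    // is committed.
    workspace.CheckIn(workspace.GetPendingChanges(), "My change");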

  • Buck Hodges

    TFS 2008: A basic guide to Team Build 2008

    • 33 Comments

    Patrick Carnahan, a developer on Team Build, put together the following guide to the basic, as well as a few advanced, features of Team Build in TFS 2008.  It's a great way to get started with continuous integration and other features in TFS 2008.

    Team Build – Continuous Integration

    One of the new and most compelling features of Team Foundation Build is the out-of-the-box support for continuous integration and scheduling. A few in-house approaches have been built around the TFS SOAP event mechanism, typically listening for check-in events and evaluating whether or not a build should be performed. The disadvantages of this approach are the speed and accuracy with which build decisions can be made.

    For all of the following steps, it is assumed that the ‘Edit Build Definition’ dialog is currently active. To activate this dialog, expand the ‘Builds’ node of the team project to which your V1 build type belongs (or to which you want to add a new V2 build definition) and click on ‘Edit Build Definition’ as shown below.

    Setting up the workspace

    Setting up a workspace is pretty important to the continuous integration build, since this is how the server determines when to automatically start builds on your behalf. Although the default workspace mapping is $/<Team Project Name> -> $(SourceDir), something more specific should be used in practice. For instance, in the Orcas PU branch you should use (at a minimum) the mapping $/Orcas/PU/TSADT -> $(SourceDir).

    What is this $(SourceDir) variable? Well, in V1 the build directory was specified per build type, meaning that the build type was built on the same directory no matter what machine the build was performed on. In V2 we have separated out the idea of a build machine into a first-class citizen called a Build Agent; this is where the build directory is stored. The variable $(SourceDir) is actually a well-understood environment variable by Team Build, and is expanded to: <BuildAgent.BuildDirectory>\Sources (more on the Build Directory and environment variables later). Typically you will want to make use of $(SourceDir) and keep everything relative to it, but there is no restriction that forces this upon you.

    Just so we’re on the same page with the workspace dialog, a picture has been included below. Those of you familiar with version control workspaces should feel right at home!

    Already have a workspace ready to go? Simply select the ‘Copy Existing Workspace’ button to search for existing workspaces to use as a template. The local paths will be made relative to $(SourceDir) automatically!

    Defining a Trigger

    The trigger defines how the server should automatically start builds for a given build definition.

    Trigger options

    The first option should be self-explanatory, and keeps the build system behaving just like V1 (no automatic builds). The other options are as follows.

    • The 'Build each check-in' option queues a new build for each changeset that includes one or more items which are mapped in a build definition's workspace.
    • 'Accumulate check-ins', otherwise known as 'Rolling Build', will keep track of any checkins which need to be built but will not start another build inside of the defined quiet period. This option is a good way to control the number of builds if continuous integration is desired but you want a maximum number of builds per day (e.g. 12 builds per day, which would require a quiet period of 120 minutes) or your builds take much longer than the typical time between checkins.
    • Standard scheduled builds will only occur if checkins were made since the previous build. Don't worry about this rule affecting the first build, however, because the system will ensure that the first build is started right on time.
    • Scheduled builds can optionally be scheduled even if nothing changes. This is useful when builds are dependent on more than what is in version control.

    Drop Management

    One of the side effects of a continuous integration system is that a greater number of builds will be created. In order to manage the builds and drops created through CI we have introduced a feature in Team Build called drop management.

    Drop management is enabled through a concept we call Retention Policy in Team Build, which allows you to define how many builds to retain for a particular build outcome. For instance, you may only want to keep the last 5 successful builds and only one of any other kind. Through retention policy you can define this by setting the appropriate values in the dialog shown above and the server will manage the automatic deletion of builds for you.

    What if you don’t want a build to be susceptible to automatic deletion? We have an option available on individual builds if you would like to retain a build indefinitely (it just so turns out that this is what the menu option is called). To do this go to the ‘Build Explorer’ in Visual Studio 2008 (available by double clicking a node under the Builds node in the Team Explorer window), right click on the build, then select ‘Retain Indefinitely’. Once this option has been toggled on you will see a padlock icon next to the build.

    If you decide that the build is no longer useful, simply toggle the option off for the build and let drop management take care of the build automatically.

    Advanced Topics

    1. Automated UI Tests

    Automated UI tests cannot be run in the build agent when it runs as a Windows service, since a service is not interactive, meaning that it cannot interact with the desktop. So, we have provided the ability to run an interactive instance of the service out-of-the-box!

    The first thing you will need to do is create a new build agent on the server. To do this you should right click on the ‘Builds’ node for your team project, then click ‘Manage Build Agents’. Once this dialog comes up, click the ‘Add’ button which will bring you to the dialog below.

    The display name can be anything descriptive you want. For instance, if the machine name is ‘BuildMachine02’ you may choose to name the build agent ‘BuildMachine02 (Interactive)’ so you can easily differentiate between the interactive and non-interactive agents. The computer name should be the computer name of the machine on which the build service resides, and the port should be changed to 9192 since it is the default interactive port. When changing the port you may see a dialog with the message below, which may be safely disregarded in this case since you’ll be using the default interactive port.

    TF226000: After you change the port that is used by Team Foundation Build, you must also update the build service on the build computer to use the new port. For more information, see http://go.microsoft.com/fwlink/?LinkId=83362 .

    In order to run the build service in interactive mode, just start a command prompt on the build computer in the directory %PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies (if you do not want to run the build agent service as yourself, be sure to spawn the command prompt as the correct user using the ‘runas’ command). Once you’re in the directory as the appropriate user, all you need to do is type ‘TFSBuildService.exe’, which will output something similar to the following:

    C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies>TFSBuildService.exe

    Press the Escape key (Esc) to exit...

    Once you see that prompt your interactive agent is ready to go. You’ll need to leave that command running. Be sure that any time you need to run automated UI tests that you target the correct agent!

    2. Build Directory Customization

    In TFS 2005, you were able to specify the root build directory which should be used in the build type definition file TfsBuild.proj. During the build, the root build directory would automatically get expanded to:

    <Root Build Directory>\<Team Project Name>\<Build Type Name>

    This was not configurable and was done this way to ensure uniqueness across build types being built on the same machine. The sources, binaries, and build type were then brought into sub folders named ‘Sources’, ‘Binaries’, and ‘BuildType’, respectively. Since the names could get quite long, there were quite a few issues with path name length errors which were unavoidable.

    In TFS 2008 we have made it easier to customize the build directory on the agent through the use of two well-known environment variables.

    $(BuildDefinitionPath), which expands to <TeamProjectName>\<DefinitionName>

    $(BuildDefinitionId), which is the unique identifier by which the definition may be referenced (an Int32)

    The build directory is no longer guaranteed to be unique by the system automatically unless one of these two variables is used. This approach has some great advantages, especially since $(BuildDefinitionId) is merely an Int32 and will definitely reduce the length of paths at the build location!

    There is another method by which this path may be reduced even more if the source control enlistment requires it. If you take a look at the file %PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\TfsBuildService.exe.config on the build computer, you will see some settings which may be of interest to you.

    <add key="SourcesSubdirectory" value="Sources" />

    <add key="BinariesSubdirectory" value="Binaries" />

    <add key="TestResultsSubdirectory" value="TestResults" />

    These control the name of the sub-directories which will house the Sources, Binaries, and TestResults. If you need an even shorter path, you could name the sources sub directory ‘s’!

    NOTE: Be sure to restart the team build service (the Windows service or the interactive service, as needed) after making changes to this file in order for the changes to take effect!

    3. Port Configuration

    For Orcas we have changed the communication method for the build agent (the build agent is the service installed and running on the build computer). In TFS 2005 the Team Build Server communicated with the build machines via .NET Remoting. In TFS 2008 we changed this to use SOAP+XML over HTTP, just like the other Team Foundation services. In order to do this, we have switched to using Windows Communication Foundation self-hosting functionality to expose the service end-point without requiring Internet Information Services (IIS) on the build computer. There is a new series of steps which must be followed in order to change the port for an existing TFS 2008 build agent.

    1. Disable the build agent on the server using the 'Manage Build Agents' option in Visual Studio 2008 described above. This will keep the server from starting new builds on the machine, but will let existing builds finish. This way you do not accidentally stop an in-progress build from finishing.
    2. Once there are no builds running, issue a 'Stop' command to the build Windows service, either using the Windows Services control panel or from the command line using the "net stop vstfbuild" command.
    3. Navigate a command prompt to the directory %PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies.
    4. Open the file TFSBuildService.exe.config, and look for a child element in the <appSettings> section with the key="port" (case sensitive!). Change the value of that key to the desired port and remember the value for the next step.
    5. In the same directory there will be an executable named wcfhttpconfig.exe. This will ensure that the appropriate URL ACLs are in place to allow the service account to listen for incoming HTTP requests. The command you should issue is: 'wcfhttpconfig reserve <domain>\<user name> <port number>' (see the example after this list). You must run this command as a local administrator.
    6. Now issue a 'Start' command to the Windows service.
    7. Change the status of the build agent back to 'Enabled' using the 'Build Agent Properties' dialog and click ok.
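
    For example, if the build service account were CONTOSO\tfsbuild and the new port were 9191 (both values are illustrative), the command in step 5 would be:

    wcfhttpconfig reserve CONTOSO\tfsbuild 9191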

    You can find the official docs on MSDN at How to: Configure an Interactive Port for Team Foundation Build.

    NOTE: Changing the port of the Interactive Service is exactly the same as the instructions above with one exception: the appSettings key you will need to modify is ‘InteractivePort’.

    [UPDATED 9/07/07]  I've added links to the official docs on changing the port.

  • Buck Hodges

    TFS 2008: How to check in without triggering a build when using continuous integration

    • 29 Comments

    If part of your build process is to check in a file, such as an updated version file, you wouldn't want that checkin to kick off another build.  You'd be stuck in an infinite loop.

    To prevent that problem, simply put the string ***NO_CI*** in the checkin comment.  The code that examines a changeset to determine whether to kick off a new build will skip any changeset with that string in the comment.
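
    For instance, if your build checks in an updated version file through the version control client API, a sketch like the following avoids re-triggering the build (the workspace variable and the file path are illustrative):

    // Check in the build-generated file with the ***NO_CI*** marker so that
    // Team Build skips this changeset when deciding whether to start a build.
    workspace.PendEdit(@"c:\build\Sources\AssemblyVersion.cs");
    workspace.CheckIn(workspace.GetPendingChanges(), "Updated version file ***NO_CI***");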

    [Update 07/02/2008]  If you are making a checkin as part of the build process, you can use $(NoCICheckinComment).  That property is set at run time when the build agent starts msbuild.  I had forgotten about it until a reader pointed this out.

  • Buck Hodges

    Permission error with creating a team project from VS 2010 on TFS 2012

    • 28 Comments

    You must use Visual Studio Team Explorer 2012 (included in all Visual Studio editions, or available as a separate download) to create a team project on a TFS 2012 server.  If you use VS 2010, you will get an error about not having permission.  The error message is very misleading, because it’s not a problem with your permissions.

    ---------------------------
    Microsoft Visual Studio
    ---------------------------
    TF30172: You do not have permission to create a new team project.
    ---------------------------
    OK   Help  
    ---------------------------

  • Buck Hodges

    Team System Web Access 2008 SP1 CTP and Work Item Web Access 2008 CTP are now available

    • 27 Comments

    Hakan has announced the availability of the new TSWA community technology preview (CTP) in his post, What's New in TSWA 2008 SP1.  Personally, I would say this release is beta quality or better, so don't let the CTP designation scare you too much.

    Also released is the first CTP release of what we are calling Work Item Web Access (WIWA).  You may recall that we published a spec for it recently, referring to it as a "bug submission portal."  WIWA provides you with the ability to have folks create work items and view work items they have created without needing a client access license (CAL) for 2008.  This was a new condition that was added to the TFS 2008 license agreement.  Hakan has more details in his post on WIWA.

    Both the CTP of TSWA and the CTP of WIWA have the same requirements as the previous release of TSWA 2008 (e.g., you must have Team Explorer 2008 installed as a prerequisite).

    This release of TSWA has some really great new features.

    • Single instance with multiple languages
    • Support for specifying field values in the URL for creating new work items (works in both TSWA and WIWA)
    • Share ad-hoc work item queries
    • Shelveset viewer
    • Improved search support

    I want to call out two features in particular that I really like.

    Support for specifying field values in the URL for creating new work items (works in both TSWA and WIWA)

    How often have you wanted users or testers to file bugs and needed to have them fill in certain fields with particular values so that the work item shows up in the correct area?  We now support providing field values in the new work item URL.  Here's the example that Hakan provided.

    http://<server>/wi.aspx?pname=MyProject&wit=Bug&[Title]=Bug Bash&[AssignedTo]=Hakan Eskici&[Iteration Path]=MyProject\Iteration2&[FoundIn]=9.0.30304

    This will open a new work item editor window with the following initial values:

    • Team Project = MyProject
    • Work Item Type = Bug
    • Title = Bug Bash
    • Assigned To = Hakan Eskici
    • Iteration Path = MyProject\Iteration2
    • Found in Build = 9.0.30304

    Now you can start sending your users and testers a link with all of this already filled in!

    Improved search support

    Have you ever wanted to search for bugs assigned to someone in particular or in a particular area without writing a query?  In the past, you could only search the Title and Description fields in a work item, which I described here.  Now you can enter the following into the search box in TSWA to find any bug assigned to me that also has the word "exception" in the Title or Description.

    exception a="Buck Hodges"

    The core fields have shortcuts.  Any field can be used by specifying the reference name for the field.  Here's the equivalent without using the shortcut.

    exception System.AssignedTo="Buck Hodges"

    Here are the shortcuts for the core fields.

    • A: Assigned To
    • C: Created By
    • S: State
    • T: Work Item Type

    You can use TFS macros, such as @me, in search.  For example, find all work items containing "watson" in the Title or Description that are assigned to me that are in the Resolved state and are work items of type Bug.

    watson a=@me s=Resolved t=Bug

    Now, if you really want to do something cool, there are the "contains" and "not" operations.  The "=" operator matches exact phrases, whereas the ":" operator is used for "contains" clauses.  The following search looks for bugs assigned to Active (i.e., not assigned to any particular person yet) where the word "repro" is contained in the History field.

    a=Active History:repro

    This example illustrates the difference between the two operators.  The first example finds all work items where the Title is exactly "Bug Bash" with no other words or characters in it.  The second example, which uses the contains operator (colon) rather than the exact match operator (equals), finds all bugs where the Title contains the phrase "Bug Bash" along with any other words or characters.

    • Title="Bug Bash"
    • Title:"Bug Bash"

    Personally, I find myself almost always using the contains operator.

    Finally, you need to be able to exclude certain things from your search.  For that, there is the not operator, which is represented by the hyphen ("-").  The following example finds all work items with "watson" in the Title or Description fields that are not assigned to me and that are not closed.

    watson -a=@me -s=closed

    The not operator only works with field references, so you can’t use the following to find all work items containing "watson" but not containing "repro" in the Title and Description fields.

    watson -repro

    However, you can accomplish this by specifying the Title field explicitly with the not operator.

    watson -Title:repro

    Please send us your feedback on both the new features and Work Item Web Access!

  • Buck Hodges

    Visual Studio and Team Explorer 2013 no longer require IE 10 for installation

    • 26 Comments

    When Visual Studio 2013 and Team Explorer 2013 were originally released, the installation process required that Internet Explorer 10 or newer was installed. Today we released updated installers that no longer require IE 10.

    You will get a warning at the beginning of your installation that looks like the screen shot below. For VS 2013 there is a KB article titled Visual Studio 2013: Known issues when IE10 is not installed that describes what the limitations are if you don’t have IE 10 or newer installed (the “some features” link in the dialog takes you to that KB article). The good news is that there aren’t many things that require IE 10.

    TE 2013 will work as you expect without IE 10. There are no limitations for Team Explorer when IE 10 is not installed.

    The updated installers are available from the Visual Studio Download page and from the MSDN subscriber downloads. [Update Nov. 13th: Due to a problem with the update to subscriber downloads, the new bits are only available from the Visual Studio Download page. This will be fixed in the next few days.]

[screenshot: the installation warning dialog]

    Follow me on Twitter at http://twitter.com/tfsbuck

  • Buck Hodges

    Creating a new server from an old one: Beware of the InstanceId

    • 26 Comments

    [UPDATE 8/23/14] The MSDN topic Move Team Foundation Server has the information about cloning TFS 2013. Today, that info is in the Q&A section at the bottom of that page.

    [This post contains instructions for TFS 2005/2008 and TFS 2010, which is in a separate section below]

    Grant Holliday wrote a post called, TFS InstanceId, ServerMap.xml and havoc.  In it he describes his experience with backing up a production server and restoring it to a test environment.  The problem he ran into is that every server has a unique ID called an InstanceId, which is just a GUID.  That GUID is the real name of a server, as all other names can change for any number of reasons, from renaming a server to internet vs. intranet access.

    Grant references the MSDN forum post where Keith Hill ran into the same issue.  By far, the common case is restoring the same server, perhaps after a hardware failure.  In that case, you absolutely don't want to change the InstanceId.

However, if you want a copy of your server to experiment with or you're trying to split a server by cloning it and deleting what you don't want (Keith's scenario), the situation is a little different.  We've done this a number of times internally where we take a backup of the dogfood server and restore it to a pre-production test machine.  In such a case, we'd change the InstanceId in the pre-production test machine so that there's no confusion with the real server (it's important that the ID in the test server be changed -- not the other way around :-).

    Unfortunately, I don't believe that we have released a tool that will change the InstanceId.  I've sent email to a few folks to make sure I'm not simply forgetting something.  I'll post a follow up if there's anything available to change it, as that's what's needed when "splitting" a server.

    For now, the best approach for creating a test server is to make sure that the machines that access the test server aren't also used to access the real server.  If in testing you need to switch between the two, you'll need to delete the cache directory, as Grant noted.  To do that, delete the directory %userprofile%\Local Settings\Application Data\Microsoft\Team Foundation\1.0\Cache (%userprofile% is typically c:\Documents and Settings\<your login>\).  That will delete the ServerMap.xml file, which caches server name and InstanceId pairs, the work item tracking cache that's partitioned by InstanceId, and the version control cache file VersionControl.config that contains a list of workspaces per server InstanceId.
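
If you'd rather script that cleanup, a single command should do it (this assumes the default profile location shown above; close Visual Studio first):

rd /s /q "%userprofile%\Local Settings\Application Data\Microsoft\Team Foundation\1.0\Cache"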

    TFS 2005 and TFS 2008 instructions

    The forum thread that James Manning references below shows how to change the InstanceId using the instanceinfo.exe tool.  Here are Dan Kershaw's instructions from that thread.

    It's really important to make sure that you change the InstanceId in the correct data tier, such as the test server or the new separate server created by restoring the databases mentioned above.  You do not want to change the InstanceId when just restoring a production server that folks are using (if you do, you'll have to go clear every user's local cache directory).

    I double checked with one of our devs and there is a way to "restamp" the cloned machine using a shipping command-line tool called InstanceInfo.exe (which can be found under the TFS install directory in the Tools folder on the Application Tier machine - along with the other server command line tools, like TFsAdminUtil).  You should restamp the server after following the other "move" steps.

    After making this change it should be safe to connect a client to both the original server and the cloned server.  Here are the instructions (please replace the variables with your own settings).

    If you are doing this with TFS 2005, you'll need to remove TfsWarehouse from the list.

Rem Clear the instance info

"%TFSInstallDir%\Tools\InstanceInfo.exe" stamp /setup /install /rollback /d TFSWorkItemTracking,TFSBuild,TFSVersionControl,TFSIntegration,TfsWarehouse /s <<your new data tier>>

Rem Re-stamp it with a new instance id

"%TFSInstallDir%\Tools\InstanceInfo.exe" stamp /d TFSWorkItemTracking,TFSBuild,TFSVersionControl,TFSIntegration,TfsWarehouse /s <<your new data tier>>

    TFS 2010 instructions

In TFS 2010 the concept is the same; there's just more to change.  The IDs in the configuration database and each of the collection databases must be changed.  Fortunately, a single command will handle all of that: changeServerID.  Here are the steps that you need to follow.

    1. Open a cmd window as admin on the AT
    2. Change to the directory: “%programfiles%\Microsoft Team Foundation Server 2010\Tools” and run the following commands.
    3. iisreset /stop
    4. tfsconfig changeserverid /sqlinstance:<dataTierName> /databasename:Tfs_Configuration
    5. tfsconfig registerdb /sqlinstance:<dataTierName> /databaseName:Tfs_Configuration
    6. iisreset /start
    7. net start tfsjobagent

SharePoint and Reporting Services configuration data remains intact, so you will need to go into the administration console and disable them.  Once you open the Team Foundation Server Administration Console, you'll see a node for each in the tree.

    TFS 2013 instructions

    Follow the commands that are shown in Q&A on the MSDN page for moving a server: http://msdn.microsoft.com/en-us/library/ms404869.aspx.

    [UPDATE 1/21/09]  I've added the warehouse DB to the list.  Thanks for the comment, Wendell!

    [UPDATE 2/8/2009]  If you have used build prior to creating a clone of the server, watch out for the TF214007: No build was found with the URI problem Mac Noland ran into.

    [UPDATE 7/17/2009]  I've added a note about dealing with TFS 2005 (remove TfsWarehouse from the command line).  Thanks for the comment, Lim!

    [UPDATE 1/22/2010]  I've now added instructions for TFS 2010.


  • Buck Hodges

    Team Foundation Version Control client API example (RTM version)

    • 24 Comments

    [Update 3/10/2012] If you are using TFS 2010 or newer, there is an updated version control client API example.

    [Update 6/13/06] While the documentation is not what it needs to be, you can find reference-style documentation on a significant amount of the API in the VS SDK (current release is April): http://blogs.msdn.com/buckh/archive/2005/12/09/502179.aspx.

    I've updated this sample a few times before.  This is a really simple example that uses the version control API.  It shows how to create a workspace, pend changes, check in those changes, and hook up some important event listeners.  This sample doesn't do anything useful, but it should get you going.

    You have to supply a Team Project as an argument.  Note that it deletes everything under the specified Team Project, so don't use this on a Team Project or server you care about.

    The only real difference in this version is that it uses the TeamFoundationServer constructor (in beta 3, you were forced to use the factory class).

    You'll need to reference the following dlls to compile this example.

    System.dll
    Microsoft.TeamFoundation.VersionControl.Client.dll
    Microsoft.TeamFoundation.Client.dll

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Text;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    namespace BasicSccExample
    {
        class Example
        {
            static void Main(string[] args)
            {
                // Verify that we have the arguments we require.
                if (args.Length < 2)
                {
                    String appName = Path.GetFileName(Process.GetCurrentProcess().MainModule.FileName);
                    Console.Error.WriteLine("Usage: {0} tfsServerName tfsTeamProjectPath", appName);
                    Console.Error.WriteLine("Example: {0} http://tfsserver:8080 $/MyProject", appName);
                    Environment.Exit(1);
                }

                // Get a reference to our Team Foundation Server.
                String tfsName = args[0];
                TeamFoundationServer tfs = new TeamFoundationServer(tfsName);

                // Get a reference to Version Control.
                VersionControlServer versionControl = (VersionControlServer) tfs.GetService(typeof(VersionControlServer));

                // Listen for the Source Control events.
                versionControl.NonFatalError += Example.OnNonFatalError;
                versionControl.Getting += Example.OnGetting;
                versionControl.BeforeCheckinPendingChange += Example.OnBeforeCheckinPendingChange;
                versionControl.NewPendingChange += Example.OnNewPendingChange;

                // Create a workspace.
                Workspace workspace = versionControl.CreateWorkspace("BasicSccExample", versionControl.AuthenticatedUser);

                try
                {
                    // Create a mapping using the Team Project supplied on the command line.
                    workspace.Map(args[1], @"c:\temp\BasicSccExample");

                    // Get the files from the repository.
                    workspace.Get();

                    // Create a file.
                    String topDir = Path.Combine(workspace.Folders[0].LocalItem, "sub");
                    Directory.CreateDirectory(topDir);
                    String fileName = Path.Combine(topDir, "basic.cs");
                    using (StreamWriter sw = new StreamWriter(fileName))
                    {
                        sw.WriteLine("revision 1 of basic.cs");
                    }

                    // Now add everything.
                    workspace.PendAdd(topDir, true);

                    // Show our pending changes.
                    PendingChange[] pendingChanges = workspace.GetPendingChanges();
                    Console.WriteLine("Your current pending changes:");
                    foreach (PendingChange pendingChange in pendingChanges)
                    {
                        Console.WriteLine("  path: " + pendingChange.LocalItem +
                                          ", change: " + PendingChange.GetLocalizedStringForChangeType(pendingChange.ChangeType));
                    }

                    // Checkin the items we added.
                    int changesetNumber = workspace.CheckIn(pendingChanges, "Sample changes");
                    Console.WriteLine("Checked in changeset " + changesetNumber);

                    // Checkout and modify the file.
                    workspace.PendEdit(fileName);
                    using (StreamWriter sw = new StreamWriter(fileName))
                    {
                        sw.WriteLine("revision 2 of basic.cs");
                    }

                    // Get the pending change and check in the new revision.
                    pendingChanges = workspace.GetPendingChanges();
                    changesetNumber = workspace.CheckIn(pendingChanges, "Modified basic.cs");
                    Console.WriteLine("Checked in changeset " + changesetNumber);
                }
                finally
                {
                    // Delete all of the items under the test project.
                    workspace.PendDelete(args[1], RecursionType.Full);
                    PendingChange[] pendingChanges = workspace.GetPendingChanges();
                    if (pendingChanges.Length > 0)
                    {
                        workspace.CheckIn(pendingChanges, "Clean up!");
                    }

                    // Delete the workspace.
                    workspace.Delete();
                }
            }

            internal static void OnNonFatalError(Object sender, ExceptionEventArgs e)
            {
                if (e.Exception != null)
                {
                    Console.Error.WriteLine("Non-fatal exception: " + e.Exception.Message);
                }
                else
                {
                    Console.Error.WriteLine("Non-fatal failure: " + e.Failure.Message);
                }
            }

            internal static void OnGetting(Object sender, GettingEventArgs e)
            {
                Console.WriteLine("Getting: " + e.TargetLocalItem + ", status: " + e.Status);
            }

            internal static void OnBeforeCheckinPendingChange(Object sender, ProcessingChangeEventArgs e)
            {
                Console.WriteLine("Checking in " + e.PendingChange.LocalItem);
            }

            internal static void OnNewPendingChange(Object sender, PendingChangeEventArgs e)
            {
                Console.WriteLine("Pending " + PendingChange.GetLocalizedStringForChangeType(e.PendingChange.ChangeType) +
                                  " on " + e.PendingChange.LocalItem);
            }
        }
    }

  • Buck Hodges

    How to handle "The path X is already mapped in workspace Y"

    • 24 Comments

This has come up before on the forums, but I don't think I've ever posted about it here.  I saw a reference to the TFS Workspace Gotcha! post in today's Team System Rocks VSTS Links roundup.  There's a command to deal with the problem, but it's not obvious.

    Here's the post.

I have been working with a team that has recently migrated a TFS source project from a trial TFS to a full production server (different server).  They disconnected their solution from source control (removes all the SCC stuff from the .sln files) and then tried to add to the new TFS.

    However, they were getting weird errors.

    I suggested that they might not have their workspace mapped correctly to the new TFS project.

    When they tried to map the workspace, they got the following error:

    The Path <local path> is already mapped in workspace <machine name [old tfs server]>

    Hmm, I thought we removed all the source stuff?

    Turns out that the workspace mappings are stored in a file called:

VersionControl.config under the user's local settings/application data directory.

I could find no way (other than manually editing the aforementioned file) to remove the workspace mapping from that local folder to the other TFS server (which is no longer in use).

    Anyway, once that was done, all was good in the world.

    Thanks go out to Kevin Jones for his excellent post on Workspaces in TFS.

    While deleting versioncontrol.config will clear the cache, we've actually got a way to do it from the command line.  The command "tf workspaces /remove:*" clears out all of the cached workspaces (it only affects the cache file).  You can also specify "/s:http://oldserver:8080" to clear out only the workspaces that were on the old server. The MSDN documentation for the workspaces command is at http://msdn2.microsoft.com/en-us/library/54dkh0y3.aspx.
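
For example, to clear the whole cache, or only the entries for the old server (the URL here is a placeholder for your old server's name):

tf workspaces /remove:*

tf workspaces /remove:* /s:http://oldserver:8080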

    The reason that he hit this is due to switching servers. Every server has a unique identifier, which is a GUID. Each local path can only be mapped in a single workspace, and the versioncontrol.config cache file is the file that the client uses to determine what server to use when given a local path (e.g., tf edit c:\projects\BigApp\foo.cs).

    He originally had a workspace on the old server that used the local path he wanted to use with the new server. Let's say that's c:\projects. When he tried to create the new workspace on the new server (GUID2) that he also wanted to map to c:\projects, the client saw that the old server (GUID1) was already using that local path. Since the IDs for the servers did not match, the client complained that c:\projects is already mapped to the old workspace on the old server.

    The client uses the server's ID as the real name of a server.  The reason is that the name could change, either because an admin changed the server's name or because a user needs to access the server through a different name depending on the connection (intranet vs. internet).  The only "name" guaranteed not to change is the ID.  So, when presented with a different network name, the client requests the ID from the server and compares it to the IDs listed in the versioncontrol.config cache file.  If the ID matches one of them, the client simply updates the existing entry to have the new name, so that subsequent uses of the new name do not incur an extra request to check the ID.  If the ID does not match any of them, then the server is different than any of the servers in the cache file.

    The error message looks like it's showing the machine, but it's actually showing the workspace name.  The reason for the confusion there is that the version control integration in VS creates a default workspace that has the same name as the machine.

    The problem will not occur if you upgrade the same server (i.e., you don't create a new server), as an upgraded server still has the same ID.

    Though /remove isn't mentioned (part 2 does mention the error message at the end), you can check out Mickey Gousset's workspace articles for more information on workspaces and managing them.


  • Buck Hodges

    Internal error loading the Changeset Comments checkin policy

    • 24 Comments

    [Update 11/26/12] You can get the fix by installing Update 1 (or newer) for Visual Studio 2012: http://www.microsoft.com/visualstudio/eng/downloads.

    Some customers, after starting to use Visual Studio 2012 with their existing TFS deployment, have been receiving check-in policy errors having to do with the Changeset Comments policy. The errors look like:

    Internal error in Changeset Comments Policy. Error loading the Changeset Comments Policy policy (The policy assembly 'Microsoft.TeamFoundation.PowerTools.CheckinPolicies.ChangesetComments, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not registered.). Installation instructions: To install this policy, follow the instructions in CheckForComments.cs.

    The version number may vary slightly, but for this particular problem, it's always going to start with an 8 or a 9.

    Cause

    With VS 2005 through 2010, to get the Changeset Comments policy, you had to download and install the Team Foundation Power Tools. With VS 2012, the policy is included in the box and requires no additional download. This problem is a bug that was introduced as a part of moving that check-in policy into the product.

    For this particular bug, only users using Visual Studio 2012 will be affected. If you have other users in your organization connecting to the same TFS server with VS 2005, 2008, or 2010, then the Changeset Comments policy should be working fine for them.

    Workaround

    There is also a simple workaround that you can put in place immediately, as long as you have administrative permissions on your team project. Using a Visual Studio 2010 or 2012 client, navigate to the Team Project Settings for the Team Project that has the Changeset Comments policy configured. Remove the check-in policy from the Team Project, and then immediately re-add it. The fact that you performed this step from a Visual Studio 2010 or 2012 client will re-register the policy on the server as the "10.0.0.0" version, which fixes the problem. Now any client (VS 2005 through VS 2012) will be able to load the policy successfully.

    Fix

    We are including a fix for this problem in the final version of Visual Studio 2012 Update 1. You can read more about Update 1 in Brian's blog post, but the currently available preview release of that update doesn't include this fix.

    We apologize for the inconvenience!

    Follow me on Twitter at twitter.com/tfsbuck

  • Buck Hodges

    Team Foundation Server 2012 Update 2 supports 2010 Build Agents and Controllers

    • 23 Comments

    [UPDATE 9/26/13] TFS 2013 will include support for both TFS 2010 build agents and controllers and also TFS 2012 build agents and controllers.

    One of the changes we made in TFS 2012 for Update 2 was to support TFS 2010 build agents and controllers. This provides several major benefits. The first is the obvious one of not having to upgrade your build agents and controllers when you upgrade your server to 2012. The second is that you don’t have to alter your custom build WF activities to be built with 2012. The third is that you will still be able to use Windows XP and Windows Server 2003 to run build agents and controllers – OSes that are not supported with the TFS 2012 release. This feature is available in the currently released CTP 4 of Update 2, and the final version of Update 2 will be available shortly.

    Martin Hinshelwood, a Microsoft MVP, has written an extensive blog post about this feature.

    Visual Studio 2012 Update 2 supports 2010 Build Servers

    Did you know that Visual Studio 2012 Update 2 supports 2010 Build Servers? Being able to connect TF Build 2010 Controllers to TFS 2012 is opening up upgrade paths for customers that are currently blocked from upgrading to TFS 2012.

    more…

    Enjoy!

    Follow me on Twitter at twitter.com/tfsbuck

  • Buck Hodges

    How to build without having the timestamp change on every file in the build's workspace

    • 23 Comments

A question came up a couple of times recently about an issue with the timestamps on the files involved in a build always being the current time.  The issue is that folks have customized their deployment process to deploy only files where the timestamps are newer.  Folks then ask for an option to have get set each file's timestamp to the time the file was checked in rather than the time it was downloaded.  I don't think that's the best answer in this case (and if you aren't doing a clean build, it's not an answer at all, since that will lead to botched builds).

The biggest culprit here is likely that the default behavior for Team Build is to create a new workspace and get all of the files for every build.  We've left that default in place for TFS 2008, but we may change it in the future.

    You can change Team Build to get only the files that have changed since the last build by having it not create a new workspace each time and doing an incremental get.  Aaron wrote a post about how to do incremental gets in TFS 2005.  It's a simple matter of setting IncrementalGet to true in TFS 2008.
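
In TFS 2008 that property goes in the build definition's TFSBuild.proj.  Here's a minimal sketch (exactly where you place the PropertyGroup in your project file may vary):

<!-- Reuse the build workspace and get only the files that changed -->
<PropertyGroup>
    <IncrementalGet>true</IncrementalGet>
</PropertyGroup>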

    After making the appropriate change, you'll want to switch to incremental builds if you need to have the binaries updated only when the corresponding source files change (there are links for that in the posts referenced earlier).

  • Buck Hodges

    Team Foundation Beta 3 has been released!

    • 23 Comments

    Today we signed off on Team Foundation Beta 3!  If you used beta 2, beta 3 is a vast improvement.  Beta 3 should hopefully show up on MSDN in about two days.  You may remember that beta 3 has the go-live license and will be supported for migration to the final release version 1, which means this is the last time you have to start from scratch.

    With beta 3, single-server installation is once again supported!  I know many people didn't install the July CTP because of the lack of a single-server installation.  With each public release, installation has gotten easier and more reliable, and this is the best installation thus far.

I wrote about what changed between beta 2 and the July CTP.  That's still a good summary.  Between the July CTP and beta 3, we fixed a lot of bugs, further improved performance across the product, improved the handling of authenticating with the server using different credentials (e.g., you're not on the domain), improved installation, and more.

    If you have distributed teams, be sure to try out the source control proxy server.  It's one of the features we have to support distributed development.

While you are waiting on TFS to show up, you'll want to make sure you already have Visual Studio 2005 Team Suite Release Candidate (build 50727.26) and SQL Server 2005 Standard (or Enterprise) Edition September CTP (build 1314.06, which uses the matching 2.0.50727.26 .NET framework).

    TFS beta 3 is build 50727.19.  The reason the minor number is different than the VS RC minor number is due to the fact that TFS beta 3 was built in a different branch.  The major build number stopped changing at 50727 (July 27, 2005 build) for all of Visual Studio, and only the minor number changes now.

    Here's a list of my recent posts that are particularly relevant to beta 3.

    This one will need to be updated (URLs changed):  TFS Source Control administration web service.

    [Update 9/22/05]  Updated links and SQL info.

  • Buck Hodges

    Power Toy: tfpt.exe

    • 23 Comments

    [UPDATE 8/9/2007]  I fixed the broken link to the power tools page. 

    [UPDATE 9/8/2006]  TFPT is now available in its own small download: http://go.microsoft.com/?linkid=5422499!  You no longer need to download the VS SDK.  You can find more information about the September '06 release here.

    Back at the start of October, I wrote about the tfpt.exe power toy.  The beta 3 version has been released with the October SDK release.  In the future, we plan to have a better vehicle for delivering it.

    Here's the documentation, minus the screenshots, in case you are trying to decide whether to download the SDK.  The documentation is included in the SDK release as a Word document, including screenshots of the various dialogs (yes, most commands have a GUI, but you can still use the commands from scripts by specifying the /noprompt option).

Review:  The only command not documented is the review command, which is very handy for doing code reviews.  When you run "tfpt review" you get a dialog with a list of your pending changes that you can check off as you diff or view each one.

    I hope you find these useful.  Please leave a comment, and let us know what you think.

    Team Foundation PowerToys

    Introduction

    The Team Foundation PowerToys (TFPT) application provides extra functionality for use with the Team Foundation version control system. The Team Foundation PowerToys application is not supported by Microsoft.

    Five separate operations are supported by the TFPT application: unshelve, rollback, online, getcs, and uu. They are all invoked at the command line using the tfpt.exe application. Some of the TFPT commands have graphical interfaces.

    Unshelve (Unshelve + Merge)

    The unshelve operation supported by tf.exe does not allow shelved changes and local changes to be merged together. TFPT’s more advanced unshelve operation allows this to occur under certain circumstances.

    If an item in the local workspace has a pending change that is an edit, and the user uses TFPT to unshelve a change from a shelveset, and that shelved change is also an edit, then the changes can be merged with a three-way merge.

    In all other cases where changes exist both in the local workspace and in the shelveset, the user can choose between the local and shelved changes, but no combination of the changes can be made. To invoke the TFPT unshelve tool, execute

    tfpt unshelve

    at the command line. This will invoke the graphical interface for the TFPT unshelve tool:

    Running TFPT Unshelve on a Specified Shelveset

    To skip this dialog, you can specify the shelveset name and owner on the command line, with

    tfpt unshelve shelvesetname;shelvesetowner

    If you are the owner of the shelveset, then specifying the shelveset owner is optional.

    Selecting Individual Items Within a Shelveset for Unshelving

    If you specify a shelveset on the command line as in “Running TFPT Unshelve on a Specified Shelveset,” or if you select a shelveset in the window above and click Details, you are presented with the Shelveset Details window, where you can select individual changes within a shelveset to unshelve.

    You can check and uncheck the boxes beside individual items to mark or unmark them for unshelving. Click the Unshelve button to proceed.

    Unshelve Conflicts

    When you press the Unshelve button, all changes in the shelveset for which there is no conflicting local change will be unshelved. You can see the status of this unshelve process in the Command Prompt window from which you started the TFPT unshelve tool.

    There may, however, be conflicts which must be resolved for the unshelve to proceed. If any conflicts are encountered, the conflicts window is displayed:

    Edit-Edit Conflicts

    To resolve an edit-edit conflict, select the conflict in the list view and click the Resolve button. The Resolve Unshelve Conflict window appears.

    For edit-edit conflicts, there are three possible conflict resolutions. 

    • Taking the local change abandons the unshelve operation for this particular change (it would be as if the change had not been selected for unshelving).
    • Taking the shelved change first undoes the local change, and then unshelves the shelved change. This results in the local change being completely lost.
    • Clicking the Merge button first attempts to auto-merge the two changes together, and if it cannot do so without conflict, attempts to invoke a pre-configured third-party merge tool to merge the changes together. The local change is not lost by choosing to merge – if the merge fails, the local change remains.

    The Auto-Merge All Button

    The Auto-Merge All button is enabled when there are edit-edit conflicts remaining that are unresolved. Clicking the button goes through the edit-edit conflicts and attempts to auto-merge the changes together. For each conflict, if the merge succeeds, then the conflict is resolved. If not, then the conflict is marked as “Requires Manual Merge.” In order to resolve conflicts marked as “Requires Manual Merge,” you must select the conflict and click the Resolve… button. Clicking the Merge button will then start the configured third-party merge tool. If no third-party merge tool is configured, then the conflict must be resolved by selecting to take the local change or take the shelved change.

    Generic Conflicts

    Any other conflict (a local delete with a shelved edit, for example) is a generic conflict that cannot be merged.

    There is no merge option for generic conflicts. You must choose between keeping the local change and taking the shelved change.

    Aborting the Unshelve Process

    Because the unshelving process makes changes to the local workspace, and because the potential exists to discover a problem halfway through the unshelve process, the TFPT Unshelve application makes a backup of the local workspace before starting its execution if there are pending local changes. This backup is stored as a shelveset on the server. In the event of an abort, all local pending changes are undone and the backup shelveset is unshelved to the local workspace. This restores the workspace to the state it was in before the unshelve application was run.

    The backup shelveset is named by adding _backup and then a number to the name of the shelveset that was unshelved. For example, if the shelveset TestSet were unshelved, the backup shelveset would be named TestSet_backup1. Up to 9 backup shelvesets can exist for each shelveset.

    With the backup shelveset, changes made during an unshelve operation can be undone after the unshelve is completed but before the changes are checked in, by undoing all changes in the workspace and then unshelving the backup shelveset:

    tf undo * /r

    tf unshelve TestSet_backup1

    Rollback

Sometimes it may be necessary to undo the checkin of a changeset. This operation is not directly supported by Team Foundation, but with the TFPT rollback tool you can pend changes which attempt to undo any changes made in a specified changeset.

    Not all changes can be rolled back, but in most scenarios the TFPT rollback command works. In any event, the user is able to review the changes that TFPT pends before checking them in.

    To invoke the TFPT rollback tool, execute

    tfpt rollback

    at the command line. This will invoke the graphical user interface (GUI) for the TFPT rollback tool. Please note that there must not be any changes in the local workspace for the rollback tool to run. Additionally, a prompt will be displayed to request permission to execute a get operation to bring the local workspace up to the latest version.

    The Find Changesets window is presented when the TFPT rollback tool is started. The changeset to be rolled back can be selected from the Find Changesets window.

    Specifying the Changeset on the Command Line

    The Find Changesets window can be skipped by supplying the /changeset:changesetnum command line parameter, as in the following example:

    tfpt rollback /changeset:3

    Once the changeset is selected, either by using the Find Changesets window or specifying a changeset using a command-line parameter, the Roll Back Changeset window is displayed.

    Each change is listed with the type of change that will be counteracted by a rollback change.

• To roll back an Add, Undelete, or Branch, the tool pends a Delete.
• To roll back a Rename, the tool pends a Rename.
• To roll back a Delete, the tool pends an Undelete.
• To roll back an Edit, the tool pends an Edit.

    Unchecking a change in the Roll Back Changeset window marks it as a change not to be rolled back. There are cases involving rolling back deletes which may result in unchecked items being rolled back. If this occurs, clear warnings to indicate this are displayed in the command prompt window. If this is unsatisfactory, undo the changes pended by the rollback tool.

    When the changes to roll back have been checked appropriately, pressing the Roll Back button starts the rollback. If no failures or merge situations are encountered, then the changes should be pended and the user returned to the command prompt:

    Merge scenarios can arise when a rollback is attempted on a particular edit change to an item that occurred in-between two other edit changes. There are two possible edit rollback scenarios: 

    1. An edit is being rolled back on an item, and the edit to roll back is the latest change to the content of the item. 

    This is the most common case. Most rollbacks are performed on changesets that were just checked in. If the edit was just checked in, it is unlikely that another user has edited it in the intervening time.

    To roll back this change, an edit is pended on the item, and the content of the item is reverted to the content from before the changeset to roll back. 

2. An edit is being rolled back on an item, and the edit to roll back is not the latest change to the content of the item. 

    This is a three-way merge scenario, with the version to roll back as the base, and the latest version and the previous version as branches. If there are no conflicts, then the changes from the change to roll back (and only the change to roll back) are extracted from the item, preserving the changes that came after the change to roll back. 

    In the event of a merge scenario, the merge window is displayed:

    To resolve a merge scenario, select the item and click the Merge button. An auto-merge will first be attempted, and if it fails, the third-party merge tool (if configured) will be invoked to resolve the merge. If no third-party merge tool is configured, and the auto-merge fails, then the item cannot be rolled back:

    The Auto-Merge All button attempts an auto-merge on each of the items in the merge list, but does not attempt to invoke the third-party merge tool.

    Failures

    Any changes which fail to roll back will also be displayed in the same window.

    Online

    With Team Foundation, a server connection is necessary to check files in or out, to delete files, to rename files, etc. The TFPT online tool makes it easier to work without a server connection for a period of time by providing functionality that informs the server about changes made in the local workspace.

Non-checked-out files in the local workspace are by default read-only. The user is expected to check out the file with the tf checkout command before editing the file.

    When working offline with the intent to sync up later by using the TFPT online tool, users must adhere to a strict workflow: 

• Users without a server connection manually remove the read-only flag from files they want to edit.  Non-checked-out files in the local workspace are by default read-only, and when a server connection is available the user must check out the file with the tf checkout command before editing the file.  When working offline, the DOS command "attrib -r" should be used.
• Users without a server connection add and delete files they want to add and delete.  If not checked out, files selected for deletion will be read-only and must be marked as writable with "attrib -r" before deleting.  Files which are added are new and will not be read-only.
    • Users must not rename files while offline, as the TFPT online tool cannot distinguish a rename from a deletion at the old name paired with an add at the new name.
    • When connectivity is re-acquired, users run the TFPT online tool, which scans the directory structure and detects which files have been added, edited, and deleted. The TFPT online tool pends changes on these files to inform the server what has happened.  

    To invoke the TFPT online tool, execute 

    tfpt online

    at the command line. The online tool will begin to scan your workspace for writable files and will determine what changes should be pended on the server.

    By default, the TFPT online tool does not detect deleted files in your local workspace, because to detect deleted files the tool must transfer significantly more data from the server. To enable the detection of deleted files, pass the /deletes command line option.

    When the online tool has determined what changes to pend, the Online window is displayed.

    Individual changes may be deselected here if they are not desired. When the Pend Changes button is pressed, the changes are actually pended in the workspace.

    Important Note: If a file is edited while offline (by marking the file writable and editing it), and the TFPT online tool pends an edit change on it, a subsequent undo will result in the changes to the file being lost. It is therefore not a good idea to try pending a set of changes to go online, decide to discard them (by doing an undo), and then try again, as the changes will be lost in the undo. Instead, make liberal use of the /preview command line option (see below), and pend changes only once.

    Preview Mode

    The Online window displayed above is a graphical preview of the changes that will be pended to bring the workspace online, but a command-line version of this functionality is also available. By passing the /preview and /noprompt options on the command line, a textual representation of the changes that the TFPT online tool thinks should be pended can be displayed.

    tfpt online /noprompt /preview
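
As a concrete sketch of the round trip described above (the file name is a placeholder):

rem While offline: make the file writable, then edit it
attrib -r src\foo.cs

rem Back online: preview what would be pended, then pend for real
tfpt online /noprompt /preview
tfpt online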

    Inclusions

    The TFPT online tool by default operates on every file in the workspace. Its focus can be more directed (and its speed improved) by including only certain files and folders in the set of items to inspect for changes. Filespecs (such as *.c, or folder/subfolder) may be passed on the command line to limit the scope of the operation, as in the following example:

    tfpt online *.c folder\subfolder

    This command instructs the online tool to process all files with the .c extension in the current folder, as well as all files in the folder\subfolder folder. No recursion is specified. With the /r (or /recursive) option, all files matching *.c in the current folder and below, as well as all files in the folder\subfolder folder and below will be checked. To process only the current folder and below, use

    tfpt online . /r

    Exclusions

    Many build systems create log files and/or object files in the same directory as source code which is checked in. It may become necessary to filter out these files to prevent changes from being pended on them. This can be achieved through the /exclude:filespec1,filespec2,… option.

    With the /exclude option, certain filemasks may be filtered out, and any directory name specified will not be entered by the TFPT online tool. For example, there may be a need to filter out log files and any files in object directories named “obj”.

    tfpt online /exclude:*.log,obj

    This will skip any file matching *.log, and any file or directory named obj.

    GetCS (Get Changeset)

    The TFPT GetCS tool gets all the items listed in a changeset at that changeset version.

    This is useful in the event that a coworker has checked in a change which you need to have in your workspace, but you cannot bring your entire workspace up to the latest version. You can use the TFPT GetCS tool to get just the items affected by his changeset, without having to inspect the changeset, determine the files listed in it, and manually list those files to a tf.exe get command.

    There is no graphical user interface (GUI) for the TFPT GetCS tool. To invoke the TFPT GetCS tool, execute

    tfpt getcs /changeset:changesetnum

    at the command line, where changesetnum is the number of the changeset to get.

    UU (Undo Unchanged)

    The TFPT UU tool removes pending edits from files which have not actually been edited.

    This is useful in the event that you check out fifteen files for edit, but only actually make changes to three of them. You can back out your edits on the other twelve files by running the TFPT UU tool, which compares hashes of the files in the local workspace to hashes the server has to determine whether or not the file has actually been edited.

    There is no graphical user interface (GUI) for the TFPT UU tool. To invoke the TFPT UU tool, execute

    tfpt uu

    at the command line. You can also pass the /changeset:changesetnum argument to compare the files in the workspace to a different version.

    Help

    Help for each TFPT tool, as well as all its command-line switches, is available at the command line by running

    tfpt help

    or for a particular command, with

    tfpt help <command>

    or

    tfpt <command> /?

  • Buck Hodges

    VSTS pricing

    • 22 Comments

    There have been questions about pricing in the newsgroups.  Here is what Raju Malhotra had to say about it.  As I understand it, many of the pricing details have yet to be worked out and are far from being set in stone.

    We will share the specific pricing details with you as soon as they are finalized but here is what we know. Hope this helps.

MSDN Universal customers will have an option to get any one of the VS Team Architect, VS Team Developer or VS Team Test products as part of their subscription without paying anything extra as long as their subscription is current at the release time. Of course, all other benefits of their subscription like monthly shipments, access to subscriber download site, MS servers for dev/test purposes, etc. will continue as usual. They will also be able to migrate to the full suite (including all three of the above products) at an additional price to be announced later. In general, the pricing for VS Team System products will be competitive with the lifecycle tools market.

    We will also have upgrade paths available in retail. We are currently working on the upgrade eligibility and specific pricing for that.

    Please stay tuned for more details.

    Raju Malhotra
    Product Manager,
    Visual Studio Team System

  • Buck Hodges

    Authentication in web services with HttpWebRequest

    • 20 Comments

    Hatteras has three tiers: client, middle, and data.  The middle tier is an ASP.NET web service on a Windows 2003 Server running IIS 6.  When the client (we use C# for both it and the middle tier) connects to the middle tier, it must authenticate with IIS 6.  Depending upon the IIS configuration, that may be negotiate, NTLM, Kerberos, basic, or digest authentication.  Here's a page on Internet Authentication in .NET.

    NOTE:  The code below uses the .NET 2.0 framework (Visual Studio 2005).

    In order to authenticate, the client must have credentials that the server recognizes as valid.  For Windows Integrated Authentication (comprising NTLM and Kerberos) using the current logged-on user, the client can use CredentialCache.DefaultCredentials.  Here's how it looks in code.

    using System;
    using System.IO;
    using System.Net;
    
    class Creds
    {
        public static void Main(string[] args)
        {
            Uri uri = new Uri(args[0]);
    
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            req.Credentials = CredentialCache.DefaultCredentials;
    
            // Get the response.
            using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
            {
                StreamReader sr = new StreamReader(res.GetResponseStream());
                Console.WriteLine(sr.ReadToEnd());
            }
        }
    }

    You can find that same type of sample code in MSDN.  However, it gets more interesting if you want to use basic or digest authentication or use credentials other than the current logged-on user.

    One interesting fact is that the HttpWebRequest.Credentials property is of type ICredentials, but it only uses instances of NetworkCredential and CredentialCache.  If you implement ICredentials on your own class that is not one of those two classes, you can assign it to the Credentials property, but HttpWebRequest will silently ignore it.

    To go further, we need to look at the CredentialCache class itself.  This class is used to hold a set of credentials that are associated with hosts and authentication types.  It has two static properties, one of which we used above, that are the "authentication credentials for the current security context in which the application is running," which means the logged-on user in our case.

    The difference is very subtle.  The documentation for DefaultCredentials says, "The ICredentials instance returned by DefaultCredentials cannot be used to view the user name, password, or domain of the current security context."  The instance returned by DefaultNetworkCredentials, being new in .NET 2.0 and of type NetworkCredential, would presumably let you get the user name and domain, but it didn't work for me when I tried it with the following code (UserName returned an empty string).

    Console.WriteLine("User name: " + CredentialCache.DefaultNetworkCredentials.UserName);

    The NetworkCredential class implements both the ICredentials (NetworkCredential GetCredential(Uri uri, String authType)) and ICredentialsByHost (NetworkCredential GetCredential(String host, int port, String authType)) interfaces.  The ICredentialsByHost interface is new in .NET 2.0.

    The CredentialCache class has methods that let you add, get, and remove credentials for particular hosts and authentication types.  Using this class, we can manually construct what setting req.Credentials = CredentialCache.DefaultCredentials accomplished in the original example.

            CredentialCache credCache = new CredentialCache();
            credCache.Add(new Uri("http://localhost"), "Negotiate",
                          CredentialCache.DefaultNetworkCredentials);
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
            req.Credentials = credCache;

    The authentication type can also be explicitly specified as "NTLM" and "Kerberos" in separate calls to Add().  This page on authentication schemes explains using Negotiate as follows.

    Negotiates with the client to determine the authentication scheme. If both client and server support Kerberos, it is used; otherwise NTLM is used.

Let's say you want to work with basic or digest authentication.  The documentation for CredentialCache.DefaultCredentials and CredentialCache.DefaultNetworkCredentials says that neither will work with basic or digest.  If we add basic to the credentials cache, we get a runtime exception.

            credCache.Add(new Uri("http://localhost"), "Basic",
                          CredentialCache.DefaultNetworkCredentials);

    The exception is thrown by the Add() method.

    Unhandled Exception: System.ArgumentException: Default credentials cannot be supplied for the Basic authentication scheme.
    Parameter name: authType
    at System.Net.CredentialCache.Add(Uri uriPrefix, String authType, NetworkCredential cred)

    So, in order to use basic or digest, we must create a NetworkCredential object, which is also what we need to do in order to authenticate as some identity other than the logged-on user.  To do that, we create NetworkCredential object and add it to the CredentialCache as follows.

            credCache.Add(new Uri("http://localhost"), "Basic" /* or "Digest" */,
                          new NetworkCredential("me", "foo", "DOMAIN"));

    Basic authentication sends the password across the wire in plain text.  That's okay for a secure connection, such as one using SSL, and for situations where you don't need much security.  Digest authentication hashes the password along with other data from the server before sending a response over the wire.  It's a significant step up from basic.
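
Putting the pieces together, here's a minimal sketch of a request that authenticates with basic authentication as an explicit user, building on the same using directives as the first example (the URL, user name, password, and domain are placeholders).

Uri uri = new Uri("http://localhost/service.asmx");

// Associate the explicit credentials with this URI for basic authentication.
CredentialCache credCache = new CredentialCache();
credCache.Add(uri, "Basic", new NetworkCredential("me", "foo", "DOMAIN"));

HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
req.Credentials = credCache;

// Get the response.
using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
{
    StreamReader sr = new StreamReader(res.GetResponseStream());
    Console.WriteLine(sr.ReadToEnd());
}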

    Now we need to have the user name and password to create the NetworkCredential object.  There are two parts to this.  First is prompting the user for the name and password.  The second is storing the information.  For prompting there is the Windows dialog that pops up any time you go to a web site that requires authentication.  That dialog includes a "Remember my password" checkbox.  I don't yet know what the managed API is for that.

    To store and retrieve the information, there is the new managed DPAPI explained by Shawn Farkas in several blog postings.

    [Update 3:44pm]  The Windows dialog used when IE prompts for name and password is created by the CredUIPromptForCredentials() function.  CredUIConfirmCredentials() is used to save credentials that authenticated successfully, if desired.  Duncan Mackenzie's MSDN article Using Credential Management in Windows XP and Windows Server 2003 explains how to use it from .NET.

    [UPDATE 4/10/2006]  I updated the MSDN links that were broken.

  • Buck Hodges

    Data tier load with Team Foundation beta

    • 20 Comments

Did you install your beta data tier in Virtual PC or Virtual Server and see a high CPU load while it's running?  Even on real hardware, you may notice some load when nothing would appear to be going on.  Someone mentioned on an internal mailing list that the data tier CPU load for a combined app and data tier installed in Virtual Server was quite high, averaging about 50-70% with most of that time being used by SQL Analysis Services (msmdsrv.exe).

    Well, here's the answer (I didn't write the question or the answer, but I hope people find it useful).

    The warehouse was designed to run processing every hour. For demo purposes the period was changed to 2 minutes in beta 2. On a weak system or a virtual machine you will see this behavior.

    Change the run interval on the app tier as follows.

    1. Stop TFSServerScheduler using 'net stop TFSServerScheduler'.
    2. Go to http://localhost:8080/Warehouse/warehousecontroller.asmx using a browser on the app tier.  Click on ChangeSetting and enter the following values and then press the 'Invoke' button (3600 seconds = run once per hour).
      1. settingID: RunIntervalSeconds
      2. newValue: 3600
    3. Restart TFSServerScheduler using 'net start TFSServerScheduler'.

    Note: It is important to restart TFSServerScheduler, as the interval is cached and will not take effect until the next run.

    You can also manually kick off the data warehouse.  Here are the steps to do so:

    1. Go to http://localhost:8080/Warehouse/warehousecontroller.asmx using a browser on the app tier.
    2. Click the ‘Run’ link.
    3. Press the ‘Invoke’ button.

     This will trigger a refresh of the reports.

[Update]  Thanks to Mike for pointing out that the original instructions were a little rough.  I've updated them.

    [Update 2] Added msmdsrv.exe to the text to (hopefully) make it easier for folks to find the post when they notice that the Yukon April CTP Analysis Services process is consuming a lot of CPU time.
