Buck Hodges

Visual Studio Online, Team Foundation Server, MSDN

Posts
  • Buck Hodges

    Why are there two Source Control entries on the File menu?

    • 0 Comments

    If you add the Code Analysis check-in policy to your Team Project, you'll see two entries for Source Control on the File menu in VS, as shown below.

    All of the standard menu items for TFS source control are under the second Source Control entry.  The issue is cosmetic and is related to a change we made in the TFS source control provider used in VS.  This should be fixed by the time version 1 is released.

    On a related note, Eric Lee has a post regarding an issue with solutions that were bound to source control in TFS beta 2.  If you have a solution that was originally bound to source control in beta 2, you can remove the bindings by editing the .sln file, as he describes.  Alternatively, if you open your solution without editing it, you should get a dialog stating that the "associated source control plug-in is not installed or could not be initialized."  Choosing to "permanently remove source control association bindings" will also remove the bindings from the solution file.

    After the solution file no longer has the old bindings, you can then re-bind the solution using File -> Source Control -> Change Source Control if it's already checked in or right-click on the solution and choose Add to Source Control to put it in TFS beta 3.

  • Buck Hodges

    The 10,000th Changeset

    • 1 Comments

    Back on July 13, I wrote about the dates of changeset "milestones" in our dogfood system.  Today we hit changeset 10,000.  The time between 9000 and 10000 was longer than usual because of check-in restrictions in place leading up to beta 3 (bug fixes had to be individually reviewed and approved in ask mode).

    Here's the full list.  The rate of changesets being created should increase as we continue to add more teams to the system.

    Changeset 10000 was checked in on September 26
    Changeset 9000 was checked in on September 2
    Changeset 8000 was checked in on August 16
    Changeset 7000 was checked in on July 30
    Changeset 6000 was checked in on July 15
    Changeset 5000 was checked in on June 29
    Changeset 4000 was checked in on June 13
    Changeset 3000 was checked in on May 20
    Changeset 2000 was checked in on April 26
    Changeset 1000 was checked in on March 15
    Changeset 1 (creation of the server - that upgrade started with a fresh server installation) was checked in on December 10, 2004

  • Buck Hodges

    Active Directory is no longer required with Team Foundation Beta 3

    • 2 Comments

    There have been a lot of requests from folks wanting to use TFS without Active Directory.  Since several questions and answers in comments on Jeff Beehler's blog dealt with this, I thought it was worth mentioning in a post that TFS beta 3 supports workgroup installations, and Active Directory is not required in that configuration.

    [Update 09/22/05]  Doug's comment below addresses another long-standing issue: "Additionally, beta 3 now supports Windows 2000 Active Directories as well as the previously supported Windows Server 2003 version."

  • Buck Hodges

    Team Foundation Beta 3 has been released!

    • 23 Comments

    Today we signed off on Team Foundation Beta 3!  If you used beta 2, beta 3 is a vast improvement.  Beta 3 should hopefully show up on MSDN in about two days.  You may remember that beta 3 has the go-live license and will be supported for migration to the final release version 1, which means this is the last time you have to start from scratch.

    With beta 3, single-server installation is once again supported!  I know many people didn't install the July CTP because of the lack of a single-server installation.  With each public release, installation has gotten easier and more reliable, and this is the best installation thus far.

    I wrote about what changed between beta 2 and the July CTP.  That's still a good summary.  Between the July CTP and beta 3, we fixed a lot of bugs, further improved performance across the product, improved the handling of authenticating with the server using different credentials (e.g., when you're not on the domain), improved installation, and more.

    If you have distributed teams, be sure to try out the source control proxy server.  It's one of the features we have to support distributed development.

    While you are waiting on TFS to show up, you'll want to make sure you already have Visual Studio 2005 Team Suite Release Candidate (build 50727.26) and SQL Server 2005 Standard (or Enterprise) Edition September CTP (build 1314.06, which uses the matching 2.0.50727.26 .NET Framework).

    TFS beta 3 is build 50727.19.  The minor build number differs from the VS RC minor number because TFS beta 3 was built in a different branch.  The major build number stopped changing at 50727 (the July 27, 2005 build) for all of Visual Studio, and only the minor number changes now.

    Here's a list of my recent posts that are particularly relevant to beta 3.

    This one will need to be updated (URLs changed):  TFS Source Control administration web service.

    [Update 9/22/05]  Updated links and SQL info.

  • Buck Hodges

    Working remotely with TFS Version Control

    • 11 Comments

    Here in North Carolina, we have a 10Mb connection to the outside world (it was a 3Mb connection until a few months ago), so we know what it's like to use TFS remotely.  In addition to minimizing the number of requests from the client to the server, TFS Version Control has several features that help when the connection to the server is much slower than the local network: compressed file transfers, compressed SOAP responses, and a caching proxy server.
     
    Every file that we upload as part of checking in or shelving is compressed using GZip.  If a file is larger after compression, usually because it is already in a compressed format (e.g., ZIP or JPEG) or is encrypted, it is uploaded uncompressed.  When files are downloaded, they are transferred in the same compressed form in which they were uploaded.
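
    To make the compress-and-compare rule concrete, here's a minimal C# sketch of the idea.  This is an illustration only, not the product code, and it assumes the file fits comfortably in memory.

    using System.IO;
    using System.IO.Compression;

    static class UploadSketch
    {
        // Compress the file with GZip; if that makes it bigger (already-compressed
        // or encrypted content), fall back to the raw bytes.
        static byte[] PrepareUpload(string path, out bool compressed)
        {
            byte[] raw = File.ReadAllBytes(path);

            using (MemoryStream buffer = new MemoryStream())
            {
                using (GZipStream gzip = new GZipStream(buffer, CompressionMode.Compress, true))
                {
                    gzip.Write(raw, 0, raw.Length);
                }

                compressed = buffer.Length < raw.Length;
                return compressed ? buffer.ToArray() : raw;
            }
        }
    }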
     
    Communication between the client and server in TFS uses HTTP for everything, and it uses SOAP for everything other than file uploads and downloads.  We support IIS 6 compression of the SOAP responses (requires IIS settings for asmx pages -- I expect this is in the installation guide, but I haven't made time to go check), so the communication with the remote server is compressed.  Since SOAP is XML, responses compress very well.
     
    The caching proxy in version 1 caches versions of files, and it does not handle any other parts of the client-server communications.  When a user configures a client to use a proxy server (in VS, Tools -> Options, click on Source Control and then Team Foundation Source Control to get to the proxy settings), the client still calls the server for the list of files to download (e.g., diff, get, view, undo) and calls the proxy server to download the file contents.
     
    The first user to download a file must wait until the file is transferred over the remote link, but subsequent downloads are served from the proxy's local disk copy.  To avoid that lag for the first user, you could set up a workspace dedicated to populating the cache: schedule a Windows task to run "tf get" in it every so often (say, every 30 minutes), or hook into the check-in event from the mid-tier and kick off a get in that workspace (see Continuous Integration Demo Code from Doug Neumann's TLN301 PDC Talk for an example of hooking into the check-in event generated by the app tier).  A sketch of such a cache-warming job follows.
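
    Here's a minimal sketch of the scheduled-task approach, assuming tf.exe is on the PATH and c:\proxy-warmup is a hypothetical local folder mapped in the cache-warming workspace; Task Scheduler would run this every 30 minutes or so.

    using System;
    using System.Diagnostics;

    class WarmProxyCache
    {
        static void Main()
        {
            // Run a get in the dedicated workspace so the proxy downloads
            // (and therefore caches) the latest version of every file.
            ProcessStartInfo psi = new ProcessStartInfo("tf.exe", "get");
            psi.WorkingDirectory = @"c:\proxy-warmup";   // hypothetical workspace root
            psi.UseShellExecute = false;

            using (Process p = Process.Start(psi))
            {
                p.WaitForExit();
                Console.WriteLine("tf get exited with code " + p.ExitCode);
            }
        }
    }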
     
    The amount of disk space used by the proxy is configurable, and it removes the least recently used files first when it needs to reclaim disk space.  With large hard drives being cheap, it's not hard to have a caching proxy server cache every version of every file in the server and not have to delete anything.
     
    Here in North Carolina, we have a proxy server set up that is on our 100Mb network, so downloads are local while the rest of the communication is still over the WAN link.  We also take advantage of the IIS compression for the SOAP responses (it's turned on for our dogfood server).
     
    I wrote back in March that our experience with using TFS had been good, and it's gotten significantly better since then due to features like the proxy, compressed SOAP responses, and the performance optimizations that have been made.
     
    [Update 10/01/05]  When you install the server, SOAP response compression is automatically configured.  You don't need to make any changes to take advantage of it.
  • Buck Hodges

    Outlook macro to create work item and changeset hyperlinks

    • 3 Comments

    Ed Hintz, a dev lead on the TFS Version Control team, sends a lot of email with TFS work item numbers.  Work items can be viewed (not edited) in a web browser using a URL constructed from the server name and the work item number.  Our dogfood system is a single server, so the server is always the same.  So, Ed wrote a macro for Outlook that will convert the current selection to a hyperlink to the work item.

    Here's the code for the macro, which he bound to Alt+L.  The hyperlinks generated will work for beta 3, but you would need to tweak the hyperlink for earlier releases.

    Sub LinkToWorkItem()
    '
    ' Convert the current selection to a work item hyperlink
    '
        ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:= _
            "http://TFserver:8080/WorkItemTracking/Workitem.aspx?artifactMoniker=" _
            & Selection.Text, _
            SubAddress:="", ScreenTip:="", TextToDisplay:=Selection.Text
    End Sub

    To do the same thing for changesets, use the following.

    Sub LinkToChangeset()
    '
    ' Convert the current selection to a changeset hyperlink
    '
        ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:= _
            "http://TFserver:8080/VersionControl/VersionControl/Changeset.aspx?artifactMoniker=" _
            & Selection.Text & "&webView=true", _
            SubAddress:="", ScreenTip:="", TextToDisplay:=Selection.Text
    End Sub

    If you have never created a macro in Outlook, create a new mail message, click in the body (otherwise, the Macros menu is disabled), go to Tools -> Macro -> Macros, enter the name "LinkToWorkItem" in the dialog's text box, and click Create.  In the Visual Basic Editor that opens up, paste the body of the LinkToWorkItem macro into the subroutine, save, and close the VB Editor.  To associate it with Alt+L, go to Tools -> Customize, click the Keyboard button, scroll down to Macros in the Categories list box, click in the "Press new shortcut key" text box, and press Alt+L.  Now click Assign, Close, and Close.  In the new mail message you have up, type a number, select it, and hit Alt+L to test it.

    Now when you want to turn a plain number into a work item or changeset link, highlight the number and run the macro via the shortcut you've assigned.

    [Update 10/03/05]  Updated code comment.

  • Buck Hodges

    Continuous Integration Demo Code from Doug Neumann's TLN301 PDC Talk

    • 7 Comments

    Doug Neumann's TLN301 presentation, VSTS: Behind the Scenes of Visual Studio 2005 Team Foundation Server (slides), featured a demonstration of how to use the server's check-in event notification to kick off a build for a continuous integration build system using Team Build.  A number of people asked for it, so we've decided to post it here.

    Doug's demonstration used the July CTP, but since beta 3 will hopefully be released this week, the code has been modified to run on beta 3.  You'll need to create a web service and put the following code into it.  Here are the assemblies you'll need to reference.

    Microsoft.TeamFoundation.Build.Client.dll
    Microsoft.TeamFoundation.Build.Common.dll
    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.VersionControl.Client.dll
    Microsoft.TeamFoundation.VersionControl.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll

    The code is intentionally simplified to meet the needs of a demo (e.g., a real continuous integration system would need more intelligence in handling check-in events that come in while the current build is running, etc.), but it's a good example of how to hook into the Team Foundation server's events and build useful extensions.

    Once you've built and deployed your web service using VS 2005, you'll need to add a subscription for your web service.  The check-in event is sent to your continuous integration web service via SOAP.  The following command, with changes as necessary for the name of your machine (here, I use localhost), must be run on the application tier to add an event subscription for your service.

    bissubscribe /eventType CheckinEvent /userId mydomain\myusername /address http://localhost:8080/ContinuousBuild/Service.asmx /deliveryType Soap /domain localhost

    Now when you check code into your server, your continuous integration service will kick off a build.

    The Team Build team plans to post a more elaborate continuous integration example.

    [UPDATE 2/20/06]  I updated the version numbers in the SoapDocumentMethod attribute to work with RC and RTM releases.

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Collections.Specialized;
    using System.ComponentModel;
    using System.Configuration;
    using System.Diagnostics;
    using System.Globalization;
    using System.IO;
    using System.Reflection;
    using System.Threading;
    using System.Web;
    using System.Web.Services;
    using System.Web.Services.Protocols;
    using System.Xml;
    using System.Xml.Serialization;
    using Microsoft.Win32;
    using Microsoft.TeamFoundation.Build.Common;
    using Proxy = Microsoft.TeamFoundation.Build.Proxy;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Common;
    
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class Service : System.Web.Services.WebService
    {
        public Service ()
        {
        }
    
        [SoapDocumentMethod("http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Notification/03/Notify", 
                            RequestNamespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Notification/03")]
        [WebMethod]
        public void Notify(string eventXml)     // Do not change the name of this argument.
        {
            ThreadPool.QueueUserWorkItem(CallBuild, eventXml);
        }
    
        public void CallBuild(object state)
        {
            string eventXml = (string)state;
    
            // De-serializing the event
            XmlDocument Xmldoc = new XmlDocument();
            Xmldoc.LoadXml(eventXml);
    
            string teamProject = Xmldoc.DocumentElement["TeamProject"].InnerText;
            string owner = Xmldoc.DocumentElement["Owner"].InnerText;
    
            // NOTE: hard-code info for demo
            string teamFoundationServer = "http://localhost:8080";
            string buildType = "Continuous Integration Build";
            string buildMachine = "localhost";
            string buildDirectoryPath = "c:\\builds";
    
            Proxy.BuildController controller = Proxy.BuildProxyUtilities.GetBuildControllerProxy(teamFoundationServer);
            Proxy.BuildStore store = Proxy.BuildProxyUtilities.GetBuildStoreProxy(teamFoundationServer);
            Proxy.BuildParameters buildParams = new Proxy.BuildParameters();
            buildParams.TeamFoundationServer = teamFoundationServer;
            buildParams.TeamProject = teamProject;
            buildParams.BuildType = buildType;
            buildParams.BuildDirectory = buildDirectoryPath;
            buildParams.BuildMachine = buildMachine;
    
            string buildUri = controller.StartBuild(buildParams);
    
            // wait until the build completes
            BuildConstants.BuildStatusIconID status;
            bool buildComplete = false;
            do
            {
                // Poll the build status; pause between checks so this loop
                // doesn't hammer the build store while the build is running.
                Thread.Sleep(5000);

                Proxy.BuildData bd = store.GetBuildDetails(buildUri);
                status = (BuildConstants.BuildStatusIconID)bd.BuildStatusId;
                buildComplete = (status == BuildConstants.BuildStatusIconID.BuildSucceeded ||
                    status == BuildConstants.BuildStatusIconID.BuildFailed ||
                    status == BuildConstants.BuildStatusIconID.BuildStopped);
            } while (!buildComplete);
    
            if (status == BuildConstants.BuildStatusIconID.BuildFailed)
            {
                // create a workitem for the developer who checked in
                CreateWorkItem(teamFoundationServer, teamProject, owner);
            }
        }
    
        public void CreateWorkItem(string server, string projectName, string owner)
        {
            TeamFoundationServer tfs = TeamFoundationServerFactory.GetServer(server);
            WorkItemStore store = (WorkItemStore) tfs.GetService(typeof(WorkItemStore));
    
            WorkItemTypeCollection workItemTypes = store.Projects[projectName].WorkItemTypes;
    
            // Enter the work item as a bug
            WorkItemType wit = workItemTypes["bug"];
            WorkItem workItem = new WorkItem(wit);
    
            workItem.Title = "The changes submitted have caused a build break - please investigate";
            string[] ownerSplit = owner.Split('\\');
            owner = ownerSplit[ownerSplit.GetLength(0) - 1];
            workItem.Fields["System.AssignedTo"].Value = owner;
            workItem.Fields["Microsoft.VSTS.Common.Priority"].Value = 1;
            workItem.Save();
        }
    }
    
  • Buck Hodges

    Doug Neumann on Channel 9

    • 1 Comments

    Doug Neumann, one of our esteemed Program Managers, is in a short video on Channel 9 talking about our office, version control, the proxy, etc.  He and Mario, our other version control PM, are at PDC this week.

    Doug Neumann - Source Code control guy on VSTS team

  • Buck Hodges

    How wide is a string when displayed in the console window?

    • 3 Comments

    Not too long ago, I had to fix the command line output for tf.exe, the Team Foundation Version Control command line app, so that output would be formatted properly for console windows (cmd.exe) using double-byte code pages.

    The code originally computed the output display width as the length of the string.  However, that's not correct when the code is running on a Japanese Windows system, for example.  Whereas English letters all take up one position in the console window, double-byte characters are twice as wide and take two positions.  The original code produced really bad looking output.

    To make sure that I got the right solution, I asked Aldo, an internationalization PM.  The solution was to use the number of bytes that would result from converting the string to a byte array using the console window's code page.  Each English letter results in a byte, and each Japanese character results in two bytes.

    For single-byte code pages, double-byte characters take up only one position, because the console can't display them and mangles them.  The .NET System.Text.Encoding class handles converting strings to and from byte arrays.  Of course, I didn't need an actual byte array; fortunately, the Encoding class has a GetByteCount() method that gives me the information I need.

    The end result is the following simple method that calculates the display width of a string in the console's encoding.

        static int CalculateConsoleWidth(String text)
        {
            Encoding encoding = Console.Out.Encoding;

            if (encoding.IsSingleByte)
            {
                return text.Length;
            }
            else
            {
                return encoding.GetByteCount(text);
            }
        }
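
    As a quick illustration of how you'd use that width, here's a sketch that pads a column so the next column lines up even on double-byte code pages.  PadToWidth is just a hypothetical helper for this example, not something in tf.exe.

    using System;
    using System.Text;

    class ConsoleWidthDemo
    {
        // Pad text with spaces so it occupies exactly 'width' positions in the
        // console window, using the byte count for double-byte code pages.
        static string PadToWidth(string text, int width)
        {
            Encoding encoding = Console.Out.Encoding;
            int displayWidth = encoding.IsSingleByte ? text.Length : encoding.GetByteCount(text);
            return text + new string(' ', Math.Max(0, width - displayWidth));
        }

        static void Main()
        {
            Console.WriteLine(PadToWidth("File", 20) + "Change");
            Console.WriteLine(PadToWidth("foo.cs", 20) + "edit");
        }
    }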

  • Buck Hodges

    How TFS Version Control determines a file's encoding

    • 3 Comments

    TFS Version Control will automatically detect a file's encoding based upon the following.

    • First, a file with a Unicode byte order mark (BOM) is added as that particular type (UTF-8, UTF-16 big endian, UTF-16 little endian, etc.).
    • If a file doesn't have a BOM, we check for an unprintable ASCII character in the first 1 kilobyte of the file.  If there is no unprintable ASCII character in there, the encoding is set to the current code page being used, which is Windows-1252 on US English Windows systems.
    • If an unprintable character is detected, the file is detected as being binary.  The unprintable ASCII characters detected are in the range of 0 - 0x1F and 0x7F excluding 0x9 (TAB), 0xA (LF), 0xC (FF), 0xD (CR), and 0x1A (^Z).

    The only exception to the foregoing is PDF files.  Those are always detected as binary because they are so common and can be all text in the first 1 kilobyte with binary streams later in the file.  The detection is based on the signature, "%PDF-", that always appears at the start of a PDF file.
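
    For illustration only (this is not the product code), the detection heuristic above might be sketched in C# roughly like this:

    using System;
    using System.IO;
    using System.Text;

    class EncodingSniffer
    {
        // Rough sketch of the heuristic described above: the "%PDF-" signature is
        // always binary, a Unicode BOM means text of that encoding, and otherwise a
        // scan of the first 1 KB for unprintable characters decides text vs. binary.
        static bool LooksBinary(string path)
        {
            byte[] head = new byte[1024];
            int read;
            using (FileStream fs = File.OpenRead(path))
            {
                read = fs.Read(head, 0, head.Length);
            }

            // PDF files are always treated as binary.
            if (read >= 5 && Encoding.ASCII.GetString(head, 0, 5) == "%PDF-")
                return true;

            // UTF-8 or UTF-16 BOM: the file is text of that particular encoding.
            if (read >= 3 && head[0] == 0xEF && head[1] == 0xBB && head[2] == 0xBF) return false;
            if (read >= 2 && ((head[0] == 0xFF && head[1] == 0xFE) || (head[0] == 0xFE && head[1] == 0xFF))) return false;

            for (int i = 0; i < read; i++)
            {
                byte b = head[i];
                bool unprintable = (b < 0x20 || b == 0x7F)
                    && b != 0x09 && b != 0x0A && b != 0x0C && b != 0x0D && b != 0x1A;
                if (unprintable)
                    return true;    // treat as binary
            }
            return false;           // text in the current code page
        }
    }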

    So, if you take a file that is in the euc-jp encoding and add it to source control on a US English Windows system, it will be added as Windows-1252 unless you specify a different encoding with the /type parameter on the add command (e.g., "tf add /type:euc-jp file.txt").  If the file is already in source control, use the edit command's /type option to change the encoding.

    Within Visual Studio 2005, you can change a committed file's encoding by navigating to it using Source Control Explorer (View -> Other Windows -> Source Control Explorer), right-clicking on the file, and choosing Properties.  On the General tab, click on the Set Encoding button and choose the encoding or click on the Detect button and have Version Control detect the encoding using the process described above.

    Because changing the encoding requires pending a change on the file, you must have the file in your workspace.  Files and folders shown in grey text (rather than black) in Source Control Explorer are either cloaked, not mapped in your workspace, or not yet retrieved into the workspace (the server keeps track of which files your workspace "has").

    Unfortunately, TFS does not support changing the encoding of a pending add.  If you need to do that, you will have to undo the pending add, and then re-add the file using the command line and specify the /type option.

    [UPDATE 6/8/2012] The TFS client (Visual Studio/Team Explorer) in 2012 has changed how this is done for file merges (e.g., when resolving conflicts).

    1. VS 2012 reads the entirety of the source, target, and base files during automerge.  This helps in the case of a UTF-8 file without a BOM where the first non-ASCII character is after the first 1024 bytes of the file.  We detect a file that does not have a BOM as UTF-8 if:

    • There is at least one non-ASCII character (Unicode codepoint > 127)
    • There are no byte sequences in the file that are invalid in UTF-8. If you read http://en.wikipedia.org/wiki/Utf-8, you will see that all code points > 127 in UTF-8 must be represented as multibyte sequences following a very specific pattern. It would be unlikely for a meaningful non-UTF-8 file with non-ASCII characters to meet these criteria.
    • If one of the above criteria is not met, we use the “fallback” encoding.

    2. In VS 2012, the fallback encoding is the server encoding. This allows you to override our heuristic in the scenario where:

    • you have a file that you would like to be encoded as UTF-8 without a BOM
    • but it does not have any non-ASCII characters yet.

    In previous versions we would fall back to the default system code page (e.g. Windows-1252). In VS 2012, if you set the server encoding to UTF-8, automerge will use UTF-8 instead.
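
    A minimal sketch of that rule (an illustration, not the actual VS 2012 code) follows; it assumes the file's bytes are already in memory.

    using System.Text;

    class Utf8Probe
    {
        // Detect a BOM-less file as UTF-8 when it contains at least one non-ASCII
        // byte and strict UTF-8 decoding succeeds; otherwise the caller falls back
        // (to the server encoding in VS 2012).
        static bool LooksLikeUtf8(byte[] bytes)
        {
            bool hasNonAscii = false;
            foreach (byte b in bytes)
            {
                if (b > 127) { hasNonAscii = true; break; }
            }
            if (!hasNonAscii)
                return false;

            try
            {
                // throwOnInvalidBytes: true makes decoding fail on invalid sequences.
                new UTF8Encoding(false, true).GetString(bytes);
                return true;
            }
            catch (DecoderFallbackException)
            {
                return false;
            }
        }
    }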

  • Buck Hodges

    Team Foundation Version Control client API example

    • 8 Comments

    [Update 3/16/2006]  I have decided to remove this version to reduce confusion.  Please see http://blogs.msdn.com/buckh/archive/2006/03/15/552288.aspx for the RTM v1 example.

  • Buck Hodges

    Hold the shift key and double click

    • 3 Comments

    I've learned two shortcuts recently that I didn't know existed in TFS source control integration in VS.  These may or may not work based on the build you are using.

    Double clicking the Source Control node in the Team Explorer will bring up the Source Control Explorer window.  What I learned recently is that holding the shift key and double clicking Source Control will bring up the Pending Changes window.

    The other shortcut is more useful.  You may already know that double clicking a file in the checkin window will bring it up in the editor.  If you hold the shift key while double clicking a pending edit, it will launch your diff viewer so you can see what's changed.  That's kind of handy.

  • Buck Hodges

    Look for TFS Beta 3 in September, RTM in Q1 '06

    • 2 Comments

    Beta 3 for Team Foundation will be released in September along with Visual Studio 2005 Release Candidate 1.  One of the most significant points with beta 3 is that it will have a go-live license that includes data migration to RTM.  The details are in Soma's blog post for today: Visual Studio 2005 update.

    Here's a copy of the part that's TFS-specific.

    Simultaneously with Visual Studio 2005 RC1 [in September], we will also release Beta 3 of Visual Studio 2005 Team Foundation Server (TFS). TFS is Microsoft’s server-based product for team collaboration and is part of the Visual Studio 2005 wave of products. 

    TFS Beta 3 will include a Go Live license along with technical support for Premier customers, enabling organizations to begin deployment of their Team System collaboration tools immediately.  The Go Live license will also enable us to solicit an additional round of feedback from customers prior to shipping.  We have received fantastic feedback from customers, partners, and our own internal use so far.  Over the next several months, I want to encourage you to exercise TFS under your real-world conditions and send us feedback via the MSDN Product Feedback Center.

    At Visual Studio 2005 launch, we will continue supporting TFS Beta 3 with the rest of Visual Studio 2005.    Further, all data within TFS Beta 3 will migrate seamlessly and in-place to the final version of TFS.  We will ship TFS in the first quarter of 2006.

  • Buck Hodges

    Why doesn't Team Foundation get the latest version of a file on checkout?

    • 33 Comments

    I've seen this question come up a few times.  Doug Neumann, our PM, wrote a nice explanation in the Team Foundation forum (http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=70231).

    It turns out that this is by design, so let me explain the reasoning behind it.  When you perform a get operation to populate your workspace with a set of files, you are setting yourself up with a consistent snapshot from source control.  Typically, the configuration of source on your system represents a point in time snapshot of files from the repository that are known to work together, and therefore is buildable and testable.

    As a developer working in a workspace, you are isolated from the changes being made by other developers.  You control when you want to accept changes from other developers by performing a get operation as appropriate.  Ideally when you do this, you'll update the entire configuration of source, and not just one or two files.  Why?  Because changes in one file typically depend on corresponding changes to other files, and you need to ensure that you've still got a consistent snapshot of source that is buildable and testable.

    This is why the checkout operation doesn't perform a get latest on the files being checked out.  Updating just the one file being checked out would violate the consistent-snapshot philosophy and could result in a configuration of source that isn't buildable and testable.  As an alternative, Team Foundation forces users to perform a get latest at some point before they check in their changes.  That's why, if you attempt to check in your changes and you don't have the latest copy, you'll be prompted with the resolve conflicts dialog.

  • Buck Hodges

    Dogfood statistics update

    • 0 Comments

    John Lawrence posted the latest set of dogfood system statistics.  The deltas are from May I think, since it says checkins are up 4,500.  The June/early-July statistics are here.  One thing to note is that we now have more than 600,000 files and folders in the system.  Hopefully, we'll hit 1 million in the not too distant future.

    Note that you can get some of these same statistics for your own system, such as number of files and workspaces, using the QueryServerRequests web service call that I wrote about here.  Jeff Lucovsky recently wrote a post on command logging and tbl_Command.  You'll need that if you want to keep track of the number of gets issued, etc.

    Anyone want to take a guess on what day we'll hit the 10,000th changeset?  You can check out my changeset milestones post for some data.  We're at about 8,300 right now.

  • Buck Hodges

    Upgraded

    • 0 Comments

    The dogfood system is now based on the July CTP (it's nearly the same).  Surprisingly, we had to fix the NIC setting on the production dogfood server.  This time we knew where to look when we saw terrible data transfer rates from the server.  Rather than being explicitly set to half duplex, it was set to auto.  However, it fell back to a half duplex setting automatically.  Changing the setting fixed the problem.

  • Buck Hodges

    July CTP for Team Foundation is headed your way

    • 4 Comments

    Rob Caron just posted that the July CTP is making its way through MSDN now.  As Rob points out, be sure to check out the Known Issues and Readme pages.

    One of the things that held up the release was we spent Monday trying to figure out why the communication between the source control mid-tier and client was terrible.  Downloads were taking forever, as were large web service responses.  We thought it might be a bug in our code or a bug in the framework.  At the end of a long day, it turned out that the duplex settings for the network cards in the test machines were wrong, causing dropped packets.  Evidently the image used to set up the machines was wrong.

    I've written about some of the July CTP changes before.  Try it out!

  • Buck Hodges

    Dogfood statistics update

    • 2 Comments

    Here's an update on what's changed since June 1 (John's last update was for May).

    Users

    • With assigned work items: 408 (down 26)
    • Version control users: 370 (up 51)

    Work items

    • Work items: 39,156 (up over 5,900)
    • CSS nodes: 1,831 (up 120)
    • Versions: 298,879 (up 56,000)
    • Attached files: 9,629  (up 1,600)
    • Queries: 3,213 (up 600)

    Version control

    • Files: 305,154 (up 34,000)
    • Folders: 33,440 (up 3,000)
    • Workspaces: 750 (up 140)
    • Shelvesets: 1,547 (up 300)
    • Checkins: 5,663 (up 2,200)
    • Pending changes: 8,391 (up 3,500)

    Past 7 days

    • Users: 407
    • Gets: 32,111 (up 10,000)
    • Downloads: 2.5M (down 1.1M)
    • Checkins: 415 (up 126)
    • Uploads: 17,380 (down 52,000)
    • Shelves: 474 (up 100)
  • Buck Hodges

    Alert settings

    • 0 Comments

    Occasionally, the question of how to configure the alert email SMTP server and from address after installation comes up in the Team Foundation forum.  Alerts are emails that are sent when a check-in occurs in a Team Project you specify or when a work item assigned to you changes.  Users can sign up for alerts by going to the Team -> Alerts menu in VS 2005 after connecting to a Team Foundation Server, or by right-clicking on a Team Project in Team Explorer and choosing Alerts.

    To set the SMTP server and the from address, edit the file "%ProgramFiles%\Microsoft Visual Studio 2005 Enterprise Server\BISIISDIR\bisserver\web.config" on the application tier.  In the appSettings section, you should see settings for the SMTP server and email address; change the values or add the settings.

    <appSettings>
    ...
    <add key="emailNotificationFromAddress" value="tfsaccount@yourdomain." />
    <add key="smtpServer" value="yoursmtphost" />
    ...
    </appSettings>

  • Buck Hodges

    Changeset milestones in dogfood

    • 2 Comments

    John Lawrence has posted dogfood usage stats in the past.  I took a look at the changeset "milestones" today.  The rate of changesets being created has certainly increased as more users have been added to the system.

    To get the latest changeset, you can run h changeset /latest or h changeset /latest /i if you don't want the gui.  For any given changeset, you can specify the number, such as h changeset 4000 /i.  You could also use the history command, h history $/ /r /i, to see them all (use /stopafter to limit the number of changesets returned by history, such as /stopafter:10 to see only the last 10 changesets).

    Latest at the moment is changeset 5771
    Changeset 5000 was checked in on June 29
    Changeset 4000 was checked in on June 13
    Changeset 3000 was checked in on May 20
    Changeset 2000 was checked in on April 26
    Changeset 1000 was checked in on March 15
    Changeset 1 (creation of the server - that upgrade started with a fresh server installation) was checked in on December 10, 2004

  • Buck Hodges

    Info on the upcoming July CTP

    • 16 Comments

    We're still ironing out a few more wrinkles in preparation for the dogfood upgrade, but I wanted to let folks know that it's going to require two server machines to install.  With beta 2 we had a setup that would allow you to install it all on a single machine, which was great for getting a system set up for evaluation.  Unfortunately, I was told that the July CTP won't be able to do that.  You'll need separate machines for the application tier and data tier.  You can still install a client (VS 2005) on the application tier box.  You could also run the application tier in a virtual PC that runs on the machine with SQL Server to reduce the physical machine requirements to one (if you use Virtual PC, make sure you give it enough memory).

    The July CTP has a lot of good changes in it.  We've cleaned up the names across most everything.  The assembly names have been changed and no longer contain code names.  Likewise, the namespaces have been cleaned up.  Some of the executables, such as h.exe, still need to be renamed.  Overall, you'll find there's been a lot of clean up.

    Also in the July CTP we've tried to get in all of the protocol changes.  Some of those changes were name changes.  Others were significant overhauls of particular web methods.  Going forward, we want to minimize the protocol changes, partly because we'll be using the live development client binaries with the dogfood server.  The code developers build on their machines will be what's used to get work done, such as check out files and change work items.

    Other changes are a little less noticeable.  We've cleaned up the database names and cleaned up the schemas.  We've updated the names of the web services.  Active Directory Application Mode (AD/AM) is gone, so backing up the server is little more than backing up the SQL Server databases.

    As you might imagine, there have also been a lot of bugs fixed in the product over the four months or so since the code was frozen for beta 2.  Some of the bugs fixed have been reported in the Visual Studio Team Foundation MSDN forum.

    After the cut for beta 2, there were a number of design change requests (DCR) that were approved and implemented.  These are effectively features that either needed to be added or improved significantly based on either customer feedback or our own use of the product.  For source control, these DCR's included re-working the conflict resolution experience (resolve dialog), improved administration features, support for files larger than 4 GB, support for uncompressed files (e.g., files, such as JPEG files, that grow rather than shrink with GZip won't be stored compressed), and a caching proxy server.

    The caching proxy server was the biggest source control DCR and provides support for geographically distributed teams by caching the files that are downloaded from the server.  After the first request for a version of a file, all subsequent requests can be fulfilled locally using the local caching server.  The caching proxy stores each version of each file that is requested, and its ability to cache files is limited only by the amount of disk space that it is given.  This DCR was driven by customer feedback (when something comes up often enough, it gets addressed).  It also is something we're looking forward to using with the new dogfood upgrade because we (the source control team) are in North Carolina.

    Speaking of making things faster, the most important changes in the July CTP are the overall performance improvements.  We've made a lot of significant performance improvements across all of Team Foundation Server when compared to beta 2.  Work item tracking bulk updates are faster, as well as integration with Excel and Project.  The analysis and reporting services use less memory and run faster.  The warehouse code is more efficient (you hopefully shouldn't need to change the warehouse interval any more).

    One area that has improved greatly is the performance of the TFS source control integration in VS 2005.  We spent quite a bit of time making it faster by changing how and how often it calls the server, as well as tuning the server for some of the calls that it uses most often.  While we're not completely done with performance improvements, I think you'll find the difference to be significant and worth the time to upgrade if you are using beta 2.

    On a separate note, be sure to check out the slides from Doug Neumann's presentation on TFS source control from TechEd 2005: DEV 466 Enterprise-Class Source Control.  Doug includes topics like promotion modeling and the aforementioned caching proxy server in his presentation.  He's no longer the only source control PM, as Mario Rodriguez joined the team recently (he used to work on the XBOX team).

  • Buck Hodges

    TFS Source Control administration web service

    • 7 Comments

    Have you ever wondered how many files are in your server?  Or workspaces?  Have you wanted to stop a request that was being processed?  Using the administration web service and matching web page for TFS Source Control, you can do exactly that.

    For a beta 2 server, bring up http://localhost:8080/scc/application/admin.asmx in Internet Explorer (either run IE on the application tier or replace localhost with the name of the server).  For the upcoming mid-July CTP server, bring up http://localhost:8080/SourceControl/Application/Administration.asmx (yep, longer name and more capital letters).

    Since we're close to the upcoming mid-July CTP, I'm going to use it in the examples.

    When you bring up the web page, you'll see links for the web methods that are available.  You can run these in Internet Explorer, or you can write code in VS 2005 to call the web service methods from an application.  Here is what you'll see.

    Admin

    Team Foundation VersionControl Admin web service

    The following operations are supported. For a formal definition, please review the Service Description.

    If you click on QueryRepositoryInformation and then click the Invoke button, you'll see how many files, workspaces, and so forth are on the server.  For my current development server, I see the following.

    <?xml version="1.0" encoding="utf-8" ?> 
    <AdminRepositoryInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" UserCount="1" GroupCount="6" WorkspaceCount="2" ShelvesetCount="1" FileCount="2487" FolderCount="27" MaxChangesetID="3" PendingChangeCount="12" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/VersionControl/Admin/02" />

    From this you can see that I'm the only user (UserCount = 1).  There are six groups defined (GroupCount = 6).  I have created two workspaces (WorkspaceCount = 2) and one shelveset (ShelvesetCount = 1).  On a beta 2 server, the workspace and shelveset counts are lumped together in the workspace count.  There are currently 2487 files in 27 directories checked into the server (FileCount = 2487 and FolderCount = 27).  I've only checked in twice since the server was created (MaxChangesetID = 3, and the first changeset is always the creation of $/ when the server is installed).

    The last number is the number of pending changes.  It says that there are 12 pending changes on the server (PendingChangeCount = 12).  However, six of those pending changes are in one of my workspaces, and the other six are in the one shelveset I've created.  So, as your users make use of shelving, you'll see your pending change count climb quite a bit.

    You can create a quick shortcut to the repository info page by using the URL you see in IE when you are looking at the XML data.  In this case, it's http://localhost:8080/SourceControl/Application/Administration.asmx/QueryRepositoryInformation.  That gets you the result of clicking the Invoke button without having to click it.
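
    If you'd rather fetch that information from code than from the browser, here's a hedged sketch that simply downloads the QueryRepositoryInformation URL shown above and reads a few of its attributes.  It assumes the account running it has permission to use the admin web service.

    using System;
    using System.Net;
    using System.Xml;

    class AdminInfo
    {
        static void Main()
        {
            // The same URL used for the browser shortcut above (mid-July CTP names).
            string url = "http://localhost:8080/SourceControl/Application/Administration.asmx/QueryRepositoryInformation";

            using (WebClient client = new WebClient())
            {
                client.UseDefaultCredentials = true;

                XmlDocument doc = new XmlDocument();
                doc.LoadXml(client.DownloadString(url));

                XmlElement info = doc.DocumentElement;
                Console.WriteLine("Files:      " + info.GetAttribute("FileCount"));
                Console.WriteLine("Folders:    " + info.GetAttribute("FolderCount"));
                Console.WriteLine("Workspaces: " + info.GetAttribute("WorkspaceCount"));
            }
        }
    }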

    Clicking on QueryServerInformation will tell you how long your server has been running, the unique ID of the server, and a few other details.

    The ChangeServerState operation allows you to pause or stop the server.  When you click on it, you'll see a newServerState parameter and a comment parameter.  The newServerState values are shown in the SOAP detail below the parameter boxes.  The following lines from that description show what you can enter.

          <newServerState>Stopped or Starting or Running or Paused or Unknown</newServerState>
          <comment>string</comment>

    So, if you enter Paused for newServerState (be sure to type it with the exact casing you see, or you'll get an error about the value being unknown) and type in some text for a comment, the server will be paused when you click Invoke.  If it succeeds, you'll see a blank page in IE: we're just using the ASP.NET feature that lets you invoke the web methods interactively, and this method, unlike QueryServerInformation, returns no response.

    When I paused my server, I put in a comment of "For this demo."  Now, if you'll go back to QueryServerInformation, you'll see that your server is indeed paused (ServerState = Paused and Comment is whatever you typed).

    <?xml version="1.0" encoding="utf-8" ?>
      <AdminServerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" StartTime="2005-06-29T20:38:33.0581917Z" Uptime="06:49:58.2633623" ServerState="Paused" Comment="For this demo." DateLastStateChange="2005-06-30T03:22:18.3Z" ServerLogState="None" LogComment="Administrative logging not enabled" LogDateLastStateChange="2005-06-29T20:34:06.053Z" RepositoryName="buckhHatteras" RepositoryId="97732cf6-ff08-455c-b0e5-b77ae02844fa" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/VersionControl/Admin/02" />

    If you paused your server, now's a good time to go back to ChangeServerState and set newServerState to Running.

    The QueryServerRequests page will show you the currently active server requests.  When I invoke QueryServerRequests, I see the following.

    <?xml version="1.0" encoding="utf-8" ?>
    <ArrayOfAnyType xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/VersionControl/Admin/02">
      <anyType xsi:type="AdminServerRequest" User="NORTHAMERICA\buckh" ServerProcessID="5978" StartTime="2005-06-30T03:36:11.5461948Z" StopTime="0001-01-01T00:00:00" InProgress="true" ExecutionTime="00:00:00.7343844" WebMethod="Get" RemoteComputerName="192.168.20.105" RemoteComputer="192.168.120.105" RemotePort="2519" />
      <anyType xsi:type="AdminServerRequest" User="NORTHAMERICA\buckh" ServerProcessID="5979" StartTime="2005-06-30T03:36:12.264954Z" StopTime="0001-01-01T00:00:00" InProgress="true" ExecutionTime="00:00:00.0156252" WebMethod="QueryServerRequests" RemoteComputerName="127.0.0.1" RemoteComputer="127.0.0.1" RemotePort="2464" />
      </ArrayOfAnyType>

    I ran "h get" from a command prompt immediately before running QueryServerRequests.  So, there are two requests active.  The second one, WebMethod = QueryServerRequests, is my own requests to see the other requests.  You'll always see at least this entry.  The first one is the result of my execution of "h get" from the command prompt.  The client has called the Get web service on the server to get the latest files into my workspace.

    Earlier I mentioned being able to stop active server requests.  The data returned by QueryServerRequests includes a ServerProcessID.  In the data above, you can see that my Get web service request has ServerProcessID = 5978.

    If you go back to the main Admin web page and click on KillProcess, you can use this ServerProcessID you obtained from QueryServerRequests.  KillProcess takes two parameters, serverProcessID and comment.  For the serverProcessID, enter the ServerProcessID from the QueryServerRequests page for the request you want to stop.  You can also enter a comment to describe why you are doing it.  When you execute the method, the specified request will immediately be stopped, and an error will be returned to the caller.

    The OptimizeDatabase operation removes unreferenced content from the server.  Unreferenced content is data that was uploaded to the server but is not referenced by any shelved change or committed file version.  You don't need to run this, as it's run periodically (once per week, by default I believe).

    With QueryServerInformation and the other methods, you have more information about what's going on with your server and the ability to change it.

  • Buck Hodges

    Please send error reports when prompted

    • 1 Comments

    A couple of months ago, Jeff Lucovsky wrote about filing watson reports for Team Foundation.  When you log in as an administrator on the application tier for TFS, you may be greeted with a dialog asking if you would like to report errors to Microsoft.  Likewise, if Visual Studio crashes, you may be asked to report the error to Microsoft.  Please do this.  Specific developers and testers for each of the components of TFS are responsible for making sure that the errors filed through the Microsoft error reporting service (aka Watson reports) are filed as bugs to be fixed.  Each error report contains information about the state of the web service when the error occurred, such as the stack trace and the information passed in the HTTP request, to allow us to diagnose the problem.

    If you look at the event log on the application tier (you can run Event Viewer by entering "eventvwr" in the Run dialog, reached from the Windows Start button via Run...), you will see events logged for errors that are reported using Microsoft error reporting (they'll be reported to us only if you allow them to be sent in the aforementioned dialog).  If you look at the Application log type in Event Viewer, you can spot the events corresponding to reportable errors by looking for events where the Source column is Tfs and the Event column is 9000.  The actual error event usually appears just before it (or it may be separated by a couple of other events).

    You may not run into these errors, but if you do (remember, it's beta software), please consider reporting the errors so that we can fix them.

  • Buck Hodges

    Jeff Beehler's blogging

    • 0 Comments

    Tonight I stumbled across Jeff Beehler's new blog.  So far he's written about our effort to upgrade our dogfood system to new code (we've been using a system based on the beta 2 release), which we'll release as a mid-July CTP.

  • Buck Hodges

    Using baseless merge with converted VSS shared folders

    • 2 Comments

    In this forum thread, http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=18817, a question was asked about using baseless merge with converted VSS shared folders.

    [UPDATE Aug. 22, 2006]  The GUI will not show the relatives of baseless merges, only explicit branch relationships.

    [UPDATE Feb. 6, 2006]  I'm happy to say that baseless merges DO record history in the RC and RTM releases of TFS.  I wouldn't normally go back and edit a post that's this old, but since people will find it using search engines, I want it to reflect the final v1 behavior.

    [UPDATE June 27]  Unfortunately, it doesn't work this way currently.  I was informed this morning by the dev responsible for merge that merge history is no longer recorded for baseless merges.  This means there's currently no way to "re-connect" a branch source and target after import using merge.

    Given that, the only choice is to delete the target directory, check in the deletion, branch the source of the share to the target, and check it in.  With that, you'll have the source and target properly related for merging.

    Hopefully, the baseless merge will record the history before we ship (we're discussing it).

    [UPDATE July 15]  Baseless merge will not record merge history in version 1.

    The VSSConverter doesn't convert shared folders as branches.  Thus, there's no way to merge changes between the "source" of the converted VSS share and the "target" of the share.  The source and the target have no branch or merge history that connects them.

    This is where the baseless merge comes in.  The purpose of the baseless merge is to establish a merge history between the source and destination.  A baseless merge creates merge history where there was none before.  In a normal dev scenario where you aren't converting from VSS, branching is what establishes the relationship to allow merging changes between the source and target.

    After the conversion is complete, perform a baseless merge between the source and the target of each converted VSS share.  Check in the baseless merge.  Now the source and the target are related through merge history.  As the source (or target) changes, the changes can be propagated using the regular (not baseless) merge (command line or the GUI merge wizard).

    So, you only need to do the baseless merge once per shared source and target pair to establish the merge history.

    You can't perform a baseless merge from the GUI.  You must use the command line.  The command sequence looks like the following.

    tf merge /baseless shared-source shared-target

    tf checkin

    After checking in the baseless merge, the source and target are related as though the target had been branched from the source, though the GUI merge wizard will not show that (it queries only for literal branch relationships).

    You can find the command line documentation at http://blogs.msdn.com/buckh/articles/category/9194.aspx.
