Steve Lange @ Work

Steve Lange's thoughts on application lifecycle management, Visual Studio, and Team Foundation Server

  • Steve Lange @ Work

    My 2 cents on Areas and Iterations in Team Foundation Server


    There’s not a huge amount of best-practice info out there regarding areas and iterations.  One interesting place to look is a blog post that describes how the Visual Studio team uses them.


    So here are my 2 cents (you can see how much that's worth these days!) on Areas and Iterations. 



    To me, areas are ways of tagging or organizing objects within a Team Project.  Typically, areas are used to define logical, physical, or functional boundaries.  It’s a way to slice and dice a normally large project effort into more manageable, reportable, and easily identifiable pieces. 


    For example, let’s say we have a tiered web application managed in a single TFS project called “MySite”.  There are 3 major components to this app:  the web site, a web service, and a database.  If this is a decent-sized application, you might have 1,200 tasks in the system for this project.  But how do you know to which component a given task belongs?  What if I only wanted to see tasks for the web service piece?  Areas are a convenient way to handle this.  Set up areas like this:



       \Web Site

       \Web Service

       \Database


    Now you can specify an area of assignment for each task (work item), making it easy to effectively filter what you want to look at/work on.  You can use areas in both queries and reports as well.
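    To make the filtering idea concrete, here’s a small sketch in plain Python (this is not the TFS API; the task data and the helper function are made up for illustration) of how a hierarchical area path filter narrows a flat list of work items down to one component’s tasks.  TFS work item queries offer a similar “Under” operator for area paths.

```python
# Illustrative sketch (not the TFS API): filtering a flat task list
# by a hierarchical area path, the way an "Under" query clause would.

def under_area(item_area, filter_area):
    """True if item_area equals filter_area or sits beneath it in the tree."""
    return item_area == filter_area or item_area.startswith(filter_area + "\\")

tasks = [
    {"id": 101, "title": "Fix login page CSS",   "area": "MySite\\Web Site"},
    {"id": 102, "title": "Add GetOrders method", "area": "MySite\\Web Service"},
    {"id": 103, "title": "Harden auth endpoint", "area": "MySite\\Web Service\\Security"},
]

# "Only show me tasks for the web service piece" from the example above.
web_service_tasks = [t for t in tasks if under_area(t["area"], "MySite\\Web Service")]
print([t["id"] for t in web_service_tasks])  # → [102, 103]
```

    Note that the filter picks up child areas too, which is why subdividing areas (as shown next) doesn’t break existing queries against the parent node.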


    You may optionally want to further dissect those major components to be even more specific:



       \Web Site

          \Layout & Design

             \Contact Us

       \Web Service

    One final aspect of Areas to consider is security.  You can set security options on each Area node which can dictate not only who can change the areas, but also who can view or edit work items in a particular Area.



    So if you think of Areas as slicing and dicing by “space”, think of Iterations as slicing and dicing by “time”.  Iterations are like “phases” of a lifecycle, which can dissect the timeline of a project effort into more manageable time-based pieces. 


    So going back to the “MySite” example, say the project management team wants to split the entire project into 3 cycles, Phase 1, Phase 2, and Phase 3.  Thus, your Iterations can mirror that:



       \Phase 1

       \Phase 2

       \Phase 3


    These Iterations can be phases within the entire life of a project, or phases within a given release of a project.  So if “MySite” is going to have multiple releases over time, my Iterations might look like this:



       \Release 1.0

          \Phase 1

          \Phase 2

          \Phase 3

       \Release 2.0

          \Phase 1

          \Phase 2

          \Phase 3


    Now you have categorization options for both space and time (now if only we had a continuum…) for your project, allowing you to assign your tasks or other work items not only to the appropriate functional area (Area), but also to the phase (time cycle) of the project.
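    The space/time pairing above can be sketched as a simple pivot.  This is plain Python with invented data, not the TFS API; it just shows that once every work item carries both an area and an iteration, each (area, iteration) pair becomes a reportable “cell” of the grid.

```python
# Illustrative sketch (not the TFS API): pivoting work items by
# "space" (area) and "time" (iteration) at once.
from collections import defaultdict

work_items = [
    {"id": 1, "area": "MySite\\Web Site",    "iteration": "MySite\\Release 1.0\\Phase 1"},
    {"id": 2, "area": "MySite\\Web Service", "iteration": "MySite\\Release 1.0\\Phase 1"},
    {"id": 3, "area": "MySite\\Web Site",    "iteration": "MySite\\Release 1.0\\Phase 2"},
]

by_cell = defaultdict(list)
for wi in work_items:
    by_cell[(wi["area"], wi["iteration"])].append(wi["id"])

# One cell of the grid: Web Site work planned for Release 1.0, Phase 1.
print(by_cell[("MySite\\Web Site", "MySite\\Release 1.0\\Phase 1")])  # → [1]
```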

  • Steve Lange @ Work

    VS/TFS 2012 Tidbits: Requesting a Code Review on Code Already Checked in


    As the Visual Studio family of products (Visual Studio, TFS, Test Professional) nears its 2012 release, I thought I’d bring some short hits – tidbits, if you will – to my blog. Some of these are pretty obvious (well-documented, or much-discussed), but some may be less obvious than you’d think. Either way, it’s always good to make sure the word is getting out there. Hope you enjoy!

    Requesting a Code Review on Code Already Checked in

    There’s been great hype about the new built-in code review capabilities in TFS 2012, and for good reason. The process is easy, effective, and most of all, audited.


    But did you know that “My Work” is not the only place from which you can kick off a code review?  You can also request a review on code that’s already been checked in. Go to the file in Source Control Explorer, then view its history. In the History window, right-click on the changeset/revision and select “Request Review”.


    This will load up the New Code Review form in Team Explorer:


    Notice that it not only brings in the files from the changeset (5 of them, in this example), but also any work items that were related to this changeset as well.  The check-in comments are used to populate the title of the code review, as well as the optional description.

    Off ya go!

  • Steve Lange @ Work

    Panels vs. Context: A Tale of Two Visual Studios and a Practical Explanation of the Value of CodeLens


    If you have Visual Studio 2013 Ultimate, you know CodeLens is amazing.  If you don’t know what CodeLens is, I hope this helps.  I have a lot of customers who ask me about CodeLens, what it is, and how valuable I think it is for an organization.  Here’s my response.

    It’s really a tale of two Visual Studios, if you think about it.

    A Visual Studio Full of Panels

    Let’s say you’re looking at a code file, specifically a method.  Your Visual Studio environment may look like this:


    I’m looking at the second Create method (the one that takes a Customer).  If I want to know where this method may be referenced, I can “Find All References”, either by selecting it from the context menu, or using Shift + F12. Now I have this:


    Great!  Now, if I decide to change this code, will it still work?  Will my tests still work?  In order to figure that out, I need to open my Test Explorer window.


    Which gives me a slightly more cluttered VS environment:


    (Now I can see my tests, but I still need to try and identify which tests actually exercise my method.)

    Another great point of context to have is knowing if I’m looking at the latest version of my code.  I’d hate to make changes to an out-of-date version and grant myself a merge condition.  So next I need to see the history of the file.


    Cluttering my environment even more (because I don’t want to take my eyes off my code, I need to snap it somewhere else), I get this:


    Okay, time out. 

    Yes, this looks pretty cluttered, but I can organize my panels better, right?  I can move some panels to a second monitor if I want, right?  Right on both counts.  By doing so, I can get a multi-faceted view of the code I’m looking at.  However, what if I start looking at another method, or another file?  The “context” of those other panels doesn’t follow what I’m doing.  Therefore, if I open the EmployeesController.cs file, my “views” are out of sync!


    That’s not fun.

    A Visual Studio Full of Context

    So all of the above illustrates the main benefit of something like CodeLens.  CodeLens inserts easy, powerful, at-a-glance context for the code you’re looking at.  If it’s not turned on, do so in Options:


    While you’re there, look at all the information it’s going to give you!

    Once you’ve enabled CodeLens, let’s reset to the top of our scenario and see what we have:


    Notice an “overlay” line of text above each method.  That’s CodeLens in action. Each piece of information is called a CodeLens Indicator, and provides specific contextual information about the code you’re looking at.  Let’s look more closely.




    References shows you exactly that – references to this method.  Click on that indicator and you can see and do some terrific things:


    It shows you the references to this method, where those references are, and even allows you to display those references on a Code Map:




    As you can imagine, this indicator shows you the tests for this method, which is extremely helpful in understanding the viability of a code change.  It lets you view those tests, interrogate them, and run them.


    As an example, if I double-click the failing test, it will open the test for me.  In that file, CodeLens will inform me of the error:


    Dramatic pause: This CodeLens indicator is tremendously valuable in a TDD (Test-Driven Development) workflow. Imagine sitting your test file and code file side by side, turning on “Run Tests After Build”, and using the CodeLens indicator to get immediate feedback about your progress.



    This indicator gives you very similar information to the next one, but lists the authors of this method for at-a-glance context.  Note that the latest author is the one noted in the CodeLens overlay.  Clicking on this indicator provides several options, which I’ll explain in the next section.




    The Changes indicator tells you about the history of the file as it exists in TFS, specifically Changesets.  First, the overlay tells you how many recent changes there are to this method in the current working branch.  Second, if you click on the indicator you’ll see there are several valuable actions you can take right from that context:


    What are we looking at?

    • Recent check-in history of the file, including Changeset ID, Branch, Changeset comments, Changeset author, and Date/time.
    • Status of my file compared to history (notice the blue “Local Version” tag telling me that my code is 1 version behind current).
    • Branch icons tell me where each change came from (current/parent/child/peer branch, farther branch, or merge from parent/child/unrelated (baseless)).

    Right-clicking on a version of the file gives you additional options:


    • I can compare my working/local version against the selected version
    • I can open the full details of the Changeset
    • I can track the Changeset visually
    • I can get a specific version of the file
    • I can even email the author of that version of the file
    • (Not shown) If I’m using Lync, I can also collaborate with the author via IM, video, etc.

    This makes it a heck of a lot easier to understand the churn or velocity of this code.

    Incoming Changes


    The Incoming Changes indicator was added in 2013 Update 2, and gives you a heads up about changes occurring in other branches by other developers.  Clicking on it gives you information like:


    Selecting the Changeset gives you the same options as the Authors and Changes indicators.

    This indicator has a strong moral for anyone who’s ever been burned by having to merge a bunch of stuff as part of a forward or reverse integration exercise:  If you see an incoming change, check in first!

    Work Items (Bugs, Work Items, Code Reviews)


    I’m lumping these last indicators together because they are effectively filtered views of the same larger content: work items.  Each of these indicators gives you information about work items linked to the code in TFS.



    Knowing if/when there were code reviews performed, tasks or bugs linked, etc., provides fantastic insight about how the code came to be.  It answers the “how” and “why” of the code’s current incarnation.


    A couple final notes:

    • The indicators are cached so they don’t put unnecessary load on your machine.  As such, they are scheduled to refresh at specific intervals.  If you don’t want to wait, you can refresh the indicators yourself by right-clicking the indicators and choosing “Refresh CodeLens Team Indicators”.


    • There is an additional CodeLens indicator in the Visual Studio Gallery – the Code Health Indicator. It gives method maintainability numbers so you can see how your changes are affecting the overall maintainability of your code.
    • You can dock the CodeLens indicators as well – just know that if they dock, they act like other panels and will be static.  This means you’ll have to refresh them manually (this probably applies most to the References indicator).
    • If you want to adjust the font colors and sizes (perhaps to save screen real estate), you can do so in Tools –> Options –> Fonts and Colors.  Choose “Show settings for” and set it to “CodeLens”.


    I hope you find this helpful!

  • Steve Lange @ Work

    Visual Studio Online (VSO): Owning Multiple VSO Accounts


    NOTE: This is not official guidance, and it may not even be an officially supported “feature” in the near future (my guess is that it’s not a directly intended capability).  It’s simply a short-term workaround that assisted a few of my customers, and I thought I’d share it.

    Update: You can now create multiple VSO instances under the same account directly from the website (see comments).  Enjoy!

    Visual Studio Online by default only allows a Microsoft account to create a single VSO account.  When you create a VSO account, the system records who the owner (creator) is, and the next time that user comes back, they cannot create additional VSO accounts.

    I have customers who currently maintain several VSO accounts (for various reasons), and have done so by creating multiple Microsoft accounts, one for each VSO account.

    With the May 7th date of ending the “Early Adopter” program for VSO, I have customers in this situation asking what to do about this moving forward.

    There’s a slightly indirect, but perfectly doable way around this: a way to let a single Microsoft account “own” multiple VSO accounts.  You need two (2) Microsoft accounts, but only two (and the second one only for VSO account creation purposes).

    For this example, I’m going to create 2 sample Microsoft accounts, one primary and one secondary (both of which I’ll delete after this post – I hate stale/dummy accounts!), and show you how to create three (3) VSO accounts owned by the primary Microsoft account.


    First, I create the primary Microsoft account:

    Here’s this account’s profile page:


    Note that this account neither owns any VSO accounts nor is a member of any VSO accounts.

    Using this Microsoft account, I create a new VSO account:

    Next, I sign out of Microsoft and create my secondary Microsoft account:

    And the resulting profile page:


    Like the first account, this account neither owns any VSO accounts nor is a member of any VSO accounts.

    Using this secondary account, I create a second VSO account:

    Next, while still logged in as the secondary account, I go to the Users page.


    Once there, I click “Add” and add the primary account (the first account I created) to this VSO account as a user, assigning a “Basic” license.


    Now that the primary account is recognized as a user, I set that account to be the owner of this VSO account.  (The steps below are also described here.)

    I click on the "gear” icon at the top-right, which takes me to the Admin area.


    Next, I click on the Settings tab, and for Account Owner, select my primary account from the drop-down list, and click the Save button.


    WARNING: If you follow my steps to the letter, you may have the unintended consequence of removing the secondary Microsoft account’s access from all the VSO accounts.  If this is truly a “dummy” account, then it’s probably no big deal.  But if you’re using a Microsoft account you wish to keep using in VSO, you’ll want to make sure you add that account back as a valid member of a group in the VSO account.  In this walkthrough, I added the secondary account back into the VSO account as an administrator.

    So let’s see what’s happened.  Sign out, and then sign in as the primary Microsoft account.  Here’s the updated profile page for the primary account:


    Notice that now this account “owns” both VSO accounts (primary and secondary).  Cool?

    Now let’s own a third VSO account.  Sign out, then back in as the secondary Microsoft account.  Here’s the secondary account’s profile page:


    This should be expected now, because this account no longer owns the secondary VSO account.

    I click the link to “Create a free account now”, and create a third VSO account:

    Like before, I go to the Users page, add the primary Microsoft account as a valid (Basic) user, then specify in the Administrators area that I want my primary Microsoft account to be the owner (and per the above note/warning, I add the secondary account back in). Be sure to click the “Save” button throughout!



    Once that’s all set, I sign out, then back in as the primary Microsoft account:


    Now my primary Microsoft account owns three (3) VSO accounts. Sweet!

    See the pattern?

    • As a Microsoft account that doesn’t own a VSO account, create a VSO account.
    • Transfer ownership of that account to the Microsoft account you actually want to own the VSO account.
    • Sign back in as the “dummy” Microsoft account, rinse and repeat as needed.

    As an added FYI, if you have an Azure Subscription (not the same thing as Azure MSDN Benefits, by the way), you can link your Azure account to each of the VSO accounts you own, and distribute your Azure resources (users, build minutes, load testing, etc.) across each of them.

    Here’s a big disclaimer: I’m still not clear if this is intended behavior, mainly because there’s no obvious link to create additional VSO accounts while logged in as a Microsoft account that already owns one.

    Hey, but for now, this works!

  • Steve Lange @ Work

    It’s Official: VS 2010 Branding & Pricing


    Microsoft just announced final branding and pricing for the Visual Studio 2010 lineup!  Here’s what it looks like (you can call this either the stadium or Lego view):



    There are three minor changes to product names, listed below:

    Old Name                                           New Name
    Microsoft Visual Studio Test Elements 2010         Microsoft Visual Studio Test Professional 2010
    Microsoft Visual Studio Team Lab Management 2010   Microsoft Visual Studio Lab Management 2010
    Microsoft Test and Lab Manager*                    Microsoft Test Manager 2010*

    * Not available as a separate product for purchase.


    Below is the suggested pricing (USD) for each of the 2010 products.

                                                                           With 1-yr MSDN Subscription
    Product                                                  Buy      Upgrade        Buy        Renew
    Visual Studio 2010 Ultimate                              -        -              $11,899    $3,799
    Visual Studio 2010 Premium                               -        -              $5,469     $2,299
    Visual Studio 2010 Professional                          $799     $549           $1,199     $799
    Visual Studio Test Professional 2010                     -        -              $2,169     $899
    Visual Studio Team Foundation Server 2010                $499     $399           -          -
    Visual Studio Team Foundation Server 2010 CAL            $499     -              -          -
    Visual Studio Load Test Virtual User Pack 2010
      (1000 Virtual Users)                                   $4,499   -              -          -

    * Subscription contents vary by purchased product.

    A couple things to note:

    • TFS 2010 and a TFS 2010 CAL are included with every MSDN subscription
    • The above prices are suggested list prices.  Companies buying development tools licenses usually go through volume licensing, which usually results in lower prices.

    Not sure what product has what?

    Visual Studio 2010 lineup - from the Rangers 2010 Quick Reference Guide

    Here’s another angle:

    Visual Studio 2010 lineup 

    For more details on each feature, you can view a matrix here.

  • Steve Lange @ Work

    Requirements Management in TFS: Part 2 (of 4): TFS Out of the Box


    This is Part 2 of the series, Requirements Management in TFS.  For a brief background, please see Part 1: Overview.

    In this part, we'll discuss a couple of the primary ways to support requirements in Team Foundation Server: the Team Portal and Work Item Tracking.

    Team Portal (SharePoint)

    Team Explorer's Documents folder

    If you use some kind of document format for authoring and tracking your specifications (Word, PDF, etc.), you may already be using something like a SharePoint site to store them.  Or some kind of repository, even if it's a network share somewhere.  Team Foundation Server creates SharePoint-based (specifically Windows SharePoint Services) web portals automatically when you create a new project.  These portals provide an easy, web-based way for interested parties to "check in" on a project's status, participate in discussions, post announcements, view process guidance documentation, and submit supporting documents and files to a document library.

    It's the document library that provides a natural fit for bringing your specifications a little more in line with the rest of the application lifecycle.  The document library allows analysts to remain in their comfort application (i.e. Word), but submit changes to requirements to a location that is much more accessible to those that will consume those requirements (architects, developers, testers, etc.).  Team Foundation Server users can access the document library from within Visual Studio Team System, TFS Web Access or other interfaces, thereby allowing them to more readily react to new changes and provide feedback as necessary. 

    Document Library in TFS Portal

    And now that the requirements documents are in the document library and managed (indirectly) by TFS, you can easily leverage the linking capabilities of TFS to add traceability between your specifications and source code, defects, tasks, and test results.  (How do you link work items to items in the document library?  Work items can be linked to hyperlinks, so you can simply link to the URL of the specific file in the document library in SharePoint.)  This adds some real tangibility to your reporting in that you can now view reports from TFS that, for example, show all development tasks and their related requirements spec, changed source code, and validated test results.

    Bug linked to a changeset and work item

    Easy, right?  Well, yes and no.  There are some definite drawbacks to this approach (I'm leaving it up to you to decide if the good outweighs the bad), the primary being that you still don't have any more granular control over individual requirements changes than you did before.  Changes are still tracked, and linked, at the document level.  This can be challenging if you need to track changes to individual requirements (change tracking in Word will only take you so far) for auditing and compliance reasons.

    Benefits:
    • Analysts remain in their comfort application (Word, etc.)
    • SharePoint is a "natural" extension in Word/Office applications
    • Requirement specs are more easily consumed by other roles in the lifecycle.
    • Provides a basic mechanism to enable traceability and better "cross-artifact" reporting.

    Drawbacks:
    • Lack of item-level granularity.
    • Document-level linking only (can't link to an individual requirement inside a specification document).
    • Document workflow is managed by SharePoint, whereas workflow for other lifecycle artifacts is managed by TFS.

    Work Item Tracking

    Requirements Queries

    Team Foundation Server's work item tracking feature is a major moving part of the system.  Work Item Tracking (WIT) provides a robust yet flexible way to track any item of record throughout a lifecycle.  Some work item types provided with TFS include: bug, task, risk, scenario, or (to the point of this article) requirement.  Work items are managed in the TFS database alongside code, builds, test results, etc., and provide a proper level of granularity for controlling change and traceability.

    In the previous example, using the SharePoint project portal lacked the ability to control changes to individual requirements and didn't allow linking to those individual elements.  Leveraging WIT in TFS addresses both of these shortcomings.  You can create and customize your own types of work items, allowing teams to have complete control over what types of work items are used, their fields, workflow, and even UI.  Say, for example, your team typically leverages three types of requirements: Business, Functional, and Non-Functional.  TFS allows you to create custom work item types that represent each of these categories of requirements.
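    Work item types are defined in XML and imported into the server (the witimport/witadmin command-line tools handle this).  Below is a heavily abbreviated, hypothetical sketch of what a custom "Functional Requirement" type might look like.  The custom field refname, allowed values, and states here are invented for illustration, and required sections such as FORM and the workflow TRANSITIONS are omitted for brevity.

```xml
<WITD application="Work item type editor" version="1.0">
  <WORKITEMTYPE name="Functional Requirement">
    <DESCRIPTION>Illustrative custom type for functional requirements.</DESCRIPTION>
    <FIELDS>
      <FIELD name="Title" refname="System.Title" type="String" />
      <!-- Hypothetical custom field; refname and values are examples only -->
      <FIELD name="Requirement Priority" refname="MyCompany.Requirement.Priority" type="String">
        <ALLOWEDVALUES>
          <LISTITEM value="Must" />
          <LISTITEM value="Should" />
          <LISTITEM value="Could" />
        </ALLOWEDVALUES>
      </FIELD>
    </FIELDS>
    <WORKFLOW>
      <STATES>
        <STATE value="Proposed" />
        <STATE value="Active" />
        <STATE value="Closed" />
      </STATES>
      <!-- TRANSITIONS omitted for brevity -->
    </WORKFLOW>
  </WORKITEMTYPE>
</WITD>
```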

    Now that your requirements are managed as work items in TFS, you can take advantage of all the benefits of the work item tracking system (see benefits below).

    Requirements work items are accessed in the exact same manner as any other work item:

    Since work items are primarily accessed by way of queries in Team Explorer, teams can easily filter which requirements are displayed and accessed at certain points.

    Reporting gets a considerable leg up using the work item tracking approach.

    The biggest challenge with this approach (in my opinion) is the shift in mindset.  In case you didn't notice, I haven't mentioned using Word in this section.  WIT gets more granular than Word does for managing item-level changes, and there is not currently a Microsoft-provided integration to Word from TFS.  There is often considerable resistance to change in that, "without the document, what will we do?"

    Benefits:
    • All changes are recorded and audited
    • Links can be created between individual requirements and other work items (any type), source code, test results, and hyperlinks
    • Workflow is enforced and controlled in the same manner as all other work item types
    • Supporting information (screenshots, documents, UML diagrams, etc.) can be attached
    • Reporting can be much more granular (showing requirement implementation rates, impact analysis, scope creep)

    Drawbacks:
    • Change of interface may meet resistance (i.e. no more Word!)
    • Customization work is most likely involved (creating custom work item types, fields, & workflow)


    Getting Into the Code

    And lastly, if you're really into it, you can tap the Team Foundation Server SDK to get really creative.  For example, you can write a custom lightweight interface for business analysts to enter and track requirements work items in TFS.  Or create a custom report (although you might be better off creating it via the server's reporting mechanism, SQL Server Reporting Services).  I have a little app that creates a "coverage analysis" spreadsheet in Excel that shows coverage (i.e. links) between work item types (for example, I can see if there are any business requirements that have no corresponding functional requirements).
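    At its core, that coverage analysis reduces to a simple set computation over work item links.  Here's a rough sketch in plain Python (the data shapes and IDs are made up; a real version would pull work items and links through the TFS SDK rather than hard-coding them):

```python
# Illustrative sketch of a coverage analysis: find business requirements
# that no functional requirement traces back to. Hypothetical data below.

business = {10: "Customers can pay online", 11: "Site is branded per locale"}
functional = {20: "Integrate payment gateway", 21: "Validate card numbers"}

# (functional id, business id) trace links between the two requirement types
links = [(20, 10), (21, 10)]

covered = {biz_id for _, biz_id in links}
uncovered = {bid: title for bid, title in business.items() if bid not in covered}
print(uncovered)  # → {11: 'Site is branded per locale'}
```

    A spreadsheet version just lays the same result out as a grid, one row per business requirement.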

    Next:  TFS - Partner Integrations


  • Steve Lange @ Work

    Requirements Management in TFS: Part 1 (of 4): Overview


    There are several schools of thought on how to "do RM" ranging from the very lightweight (whiteboards, sticky notes, cocktail napkins, etc) to the robust (formal elicitation, authoring, validation and management of specifications).  Chances are your organization falls somewhere in between. 

    Past dictates future (thanks, Dr. Phil), and the same applies in how teams approach requirements management.  Basically, if you're used to managing your requirements using Word documents (and believe me, you're in the majority), most likely that's what you figure to do when starting a new project.

    The historically mainstream ways to manage requirements (Word, Excel, email) can be efficient (people know the tools, and again, it's how it's been done before) and satisfactory.  But with the application development projects of today becoming increasingly complex and distributed (both architecture and project teams), this process becomes more difficult to manage.  Throw in your regulation/compliance package-of-the-day and you quickly realize you need more.  Key capabilities around collaboration, audit/history, and traceability rise to the top of the priority list.

    As a result, the idea of managing requirements as individual elements (rather than parts of a larger specification document) is becoming increasingly popular.

    I hear this quite often:  "How does Team System support Requirements Management?"  Visual Studio Team System, or more specifically Team Foundation Server, possesses the plumbing needed for the above-mentioned capabilities out of the box as part of its inherent architecture.  TFS provides work item tracking to allow items (bugs, tasks, or in this case, requirements) to be treated as individually managed objects with their own workflows, attributes, and traceability.  However, while the infrastructure is there, TFS wasn't designed specifically to support a requirements management process. 

    But if you are looking at Team Foundation Server to manage your development process, I would suggest that you take a peek at how it can be used to support your business analysts from a requirements management perspective as well.  Again, although it's not specifically targeted at business analysts (it is on the radar, however; see Team System Futures), many of the capabilities of TFS can help support a productive RM process.

    This series will take a look at a few different ways that TFS can support requirements management.  In Part 2 I'll show a couple of ways to do this using TFS "natively" (without any add-ins/plug-ins); and in Part 3 I'll briefly discuss some 3rd party solutions that support requirements more directly yet still integrate with Team System.  And we'll close the loop in Part 4 with a summary.

    Next:  TFS - Out of the Box


  • Steve Lange @ Work

    Requirements Management in TFS: Part 4 (of 4): Summary


    Every organization approaches the concept of "requirements" differently.  Factors include general history, skill set, complexity, and agility.  Many development organizations are adopting Team Foundation Server to help improve team communication & collaboration, project control & visibility, and generally a more integrated experience across the various actors in the application lifecycle. 

    The more pervasive TFS becomes in an organization, the more I'm asked about managing requirements within the confines of Team System.  Some shops want to know how to integrate more RM-specific applications into the platform, while others want to leverage TFS as much as possible and wait until Microsoft releases a requirements management solution (I know, I know, Word is the most widely-used requirements tool in the world - but I think you know what I mean by now!).

    If you're trying to choose which path to take (TFS-only or a partner integration), here are a few basic considerations:


    TFS Only

    Benefits:
    • Affordability (only a TFS CAL is required)
    • Full integration with the rest of the application lifecycle (existing infrastructure is leveraged for reporting & visibility)
    • Consistent capture & storage mechanism for all project artifacts

    Drawbacks:
    • Lack of specific focus on the analyst role
    • Interface may be a bit "heavy" and counter-intuitive for analysts

    Partner Integrations

    Benefits:
    • Can immediately provide requirements-specific capabilities (rich text, use case diagramming, etc.)
    • Many can trace/link to work items in TFS, providing end-to-end visibility

    Drawbacks:
    • Cost (most partner tools require their own licenses, and each user still requires a TFS CAL from Microsoft; maintenance costs may be a factor as well)
    • An additional skill set is required for the partner tool

    Some requirements-related resources (other links can be found in the various parts of this series):

    Well, I hope you at least found this series worth the time it took you to read it.  I welcome any comments and feedback, as this topic is always shifting in perception, intention, and schools of thought.


  • Steve Lange @ Work

    Team Foundation Server vs. Team Foundation Service


    You’ve probably found a few comparisons on the interwebs comparing the “traditional”, on-premise TFS with the new cloud-hosted Team Foundation Service.  I get asked about this a lot – as a result, I thought I’d share the slide deck I used to drive this conversation.  Please let me know if you have any questions!


    Basically, TF Service is a nice way to get up and running quickly, without worrying about infrastructure, backups, etc.  What you lose is some customization, lab management, and SSRS reporting.

    Happy developing!

  • Steve Lange @ Work

    VS 2012 ALM Tidbits: The Feedback Client’s Two Modes


    As the Visual Studio family of products (Visual Studio, TFS, Test Professional) nears its 2012 release, I thought I’d bring some short hits – tidbits, if you will – to my blog. Some of these are pretty obvious (well-documented, or much-discussed), but some may be less obvious than you’d think. Either way, it’s always good to make sure the word is getting out there. Hope you enjoy!

    The Feedback Client’s Two Modes

    One of the “new” new features of TFS 2012 is the addition of the Microsoft Feedback Client (download) for collecting feedback from stakeholders, end users, etc. This tool integrates with TFS to provide a mechanism to engage those stakeholders and more seamlessly include their insights in the lifecycle.

    It’s important to know that this new tool provides a mechanism for collecting feedback in two distinct manners, voluntary and requested. The rest of this post will walk through each of these “modes”.

    Regardless of the mode used to provide feedback, this feedback gets stored in TFS as a work item (of type Feedback Response) which then gets all the benefits of being a work item (auditing, assignment, linking, reporting, etc.). As you can imagine, this is a much more effective way of tracking feedback than email, lists, and forms. We’ll talk about that (plus licensing) toward the end of this post.

    Voluntary Feedback Mode

    This mode is used naturally by a stakeholder (I’m using the term “stakeholder” to mean anyone who may want a say in how a product evolves) to provide unsolicited feedback about a product or application.  This means that if a stakeholder is using an application and thinks of an idea to improve it (or maybe even wants to report a problem), they can fire up the Feedback Client and capture annotated screenshots, video or audio recordings, and notes.  

    Voluntary feedback

    In this screenshot, I provide “voluntary” feedback that I should be more prominently featured on Bing. Yes, I’m that way.. ;)

    This is an incredibly light and easy way for a stakeholder to feel like they have a say/vote in the direction of an application.

    Requested Feedback Mode

    As the name implies, this kind of feedback is given in response to a request for feedback from another user. Requesting feedback begins in Team Web Access on a project’s home page, by clicking on the “Request feedback” link under Activities. 

    Request feedback

    The requestor fills out the Request Feedback form:

    Request feedback form

    Which sends the following email to all included stakeholders (yes, you can send a single request to multiple recipients, as well as request multiple items of feedback in a single request):

    Feedback request email

    When the stakeholder clicks the link in the email, the Feedback Client will launch and walk the stakeholder through the process.

    Requested feedback in Feedback Client

    Once the feedback is submitted, everything shoots back into TFS and is automatically linked to the Feedback Request work item.

    Response linked to Request

    Looking at the feedback response in my example:

    Feedback Response work item

    Okay, Now What?

    Now that you have feedback in TFS, what do you do with it?

    Several things, actually.  First, leverage the linking capabilities of work items to associate feedback with the appropriate task, backlog item, bug, or whatever. In my example, I linked my feedback request to a PBI:


    This provides an even more cohesive story for “covering” the PBI.  Now not only can you see from a PBI all the tasks, storyboards, bugs, etc. related to it, but you also have a way to track “sign-off”, or at least unofficial support from stakeholders, about the “doneness” of the backlog item.

    Also, you may want to create a few shared queries to help you better view and track feedback.

    Feedback queries

    In this example, I created 4 queries to help me manage feedback (again, just an example):

    • All Feedback – Flat list showing all feedback responses (voluntary or requested).
    • Feedback Requests & Responses – Direct links query showing all feedback requests and any associated responses.
    • Feedback without PBI – Flat list showing all feedback requests and responses that are not associated with a Product Backlog Item.
    • Unsolicited Feedback – Flat list showing all voluntary feedback.
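    For reference, a query like “All Feedback” can also be expressed directly in WIQL (the work item query language behind TFS queries).  A minimal sketch, assuming the out-of-box Feedback Response type name:

```sql
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.WorkItemType] = 'Feedback Response'
ORDER BY [System.CreatedDate] DESC
```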

    Lastly, if stakeholder feedback is important to you, add one of your feedback queries as a Team Favorite, which will make it show up on your team’s home page.

    Team Favorites


    A quick note on licensing:

    • To provide feedback (i.e. use the Microsoft Feedback Client), there is no licensing requirement at all. The Feedback Client tool is free to download, and there is no TFS CAL requirement to use it.
    • To request feedback (i.e. solicit feedback from others), you need to be part of one of the following licensing groups: Visual Studio Premium, Visual Studio Ultimate, or Visual Studio Test Professional.


    There’s plenty of documentation on stakeholder feedback, but something that can fall through the cracks is the fact that there are indeed two modes of using this capability.

    Hope this helps!

  • Steve Lange @ Work

    The “Ultimate” Event: Visual Studio 2010 & Team Foundation Server 2010



    Join us for a sneak peek of Microsoft® Visual Studio® 2010, which will be a landmark release of the premier development toolset for Windows®, Web and Cloud development.
    The Ultimate Event is your exclusive opportunity to hear about Visual Studio 2010 from experts before the product is launched this year. Microsoft has made significant investments in and improvements to the modeling and testing/QA tools in Visual Studio 2010. At this event you’ll get a comprehensive overview of Visual Studio 2010 and Team Foundation Server 2010, the Application Lifecycle Management (ALM) core of Visual Studio. We’ll present enhancements in version control, reporting, project management, and build management. 
    Spend the day with us to learn how to take software development to the next level with Visual Studio 2010!


    Time Topic
    8:30 AM-9:00 AM Registration, Welcome
    9:00 AM-10:30 AM Lap Around VS 2010
    10:45 AM-12:00 PM Agile Management with TFS
    12:00 PM-12:30 PM Lunch
    12:30 PM-1:45 PM No More "No Repro"
    2:00 PM-3:15 PM Architecture for Everyone

    I hope to see you there!


    Date Location Event ID


    Bellevue, WA



    San Diego, CA



    Los Angeles, CA



    Mountain View, CA



    Irvine, CA



    Phoenix, AZ



    Salt Lake City, UT



    Portland, OR



    Denver, CO



    San Francisco, CA


  • Steve Lange @ Work

    FREE EVENT: “A Day in the Life of Scrum” with Team System


    *** UPDATED 4/30/2009 ***

    • The Denver date has changed to June 4th
    • The Phoenix date has been updated!  The new date is June 2nd, and the updated registration information is below. 

    If you attended the Agile & Scrum Essentials event series last fall, then you’ve been expecting this second round!  And if you missed it, now you can catch up!

    Please join Microsoft and Neudesic for a day in the life of Scrum with Visual Studio Team System 2008 and Team Foundation Server!  Agile methods are a set of development processes intended to create software in a lighter, faster, more people-centric way. Many development teams have adopted "agile" methodologies to manage change and to improve software quality. These methodologies promote continuous integration as a practice to build and test software products incrementally as new features are included, bugs are fixed, and code is refactored.

    If you missed the first series of Agile & Scrum Essentials last fall; here’s your chance to attend the follow-on event where we’ll briefly revisit the basics of Agile and Scrum and provide a walkthrough of how to configure Visual Studio Team System 2008 and Team Foundation Server for Scrum. Participants will be familiarized with how key artifacts are managed within this popular process template for enacting Scrum in organizations.   

    Join us for this interactive event as we explore a “day in the life of a Sprint,” that will give you a practical perspective of how Scrum teams leverage Visual Studio Team System for end to end management of the planning, execution and control of Scrum projects. The day will end with an overview of what’s coming in Visual Studio Team System 2010!

    Please register today for the event nearest you!

    3/19/2009 Mountain View, CA Click here to register with invitation code: 38B820
    6/4/2009 Denver, CO Click here to register with invitation code: 02B7F8
    6/2/2009 Phoenix, AZ Click here to register with invitation code: 4DEAA2
    4/2/2009 Bellevue, WA Click here to register with invitation code: 46F263
    4/7/2009 Salt Lake City, UT Click here to register with invitation code: FF5466
    4/9/2009 Portland, OR Click here to register with invitation code: ED7794
    4/14/2009 San Diego, CA Click here to register with invitation code: 1A8639
    4/15/2009 Irvine, CA Click here to register with invitation code: E4995A
    4/16/2009 Los Angeles, CA Click here to register with invitation code: A61EB4

    You can also call 1.877.MSEVENT (1.877.673.8368) and provide the appropriate invitation code to register.

    I will be at the Denver, Phoenix, and Salt Lake City venues and hope to see you there!

    Did I mention this event is FREE?

    microsoft logo   neudesic logo

  • Steve Lange @ Work

    Running Code Metrics as Part of a TFS 2010 Build – The Poor Man’s Way


    Code Metrics, not to be confused with code analysis, has always been tough (read: impossible) to run as part of a build in Team Foundation Server.  Previously, the only way to run code metrics was inside Visual Studio itself.

    In January, Microsoft released the Visual Studio Code Metrics PowerTool, a command line utility that calculates code metrics for your managed code and saves the results to an XML file (Cameron Skinner explains in detail on his blog). The code metrics calculated are the standard ones you’d see inside Visual Studio (explanations of metric values):

    • Maintainability Index
    • Cyclomatic Complexity
    • Depth of Inheritance
    • Class Coupling
    • Lines Of Code (LOC)

    Basically, the power tool adds a Metrics.exe file to an existing Visual Studio 2010 Ultimate or Visual Studio 2010 Premium or Team Foundation Server 2010 installation.
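    The power tool is command-line driven.  On a default install, the invocation looks roughly like this (the install path below is typical but may differ on your machine; the /f and /o switches, which specify the target assembly and the output file, are the same ones used in the build arguments later in this post):

```
"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\Metrics.exe" /f:MyApp.dll /o:MetricsResult.xml
```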

    So what does this mean?  It means that you can now start running code metrics as part of your builds in TFS.  How?  Well, since this post is titled “The Poor Man’s Way”, I’ll show you the quick and dirty (read: it works but is not elegant) way to do it.

    As a note, Jakob Ehn describes a much more elegant way to do it, including a custom build activity, the ability to fail a build based on threshold, and better parameterization.  I really like how flexible it is!  Below is my humble, quick & dirty way.

    The steps below will add a sequence (containing individual activities) to the build process workflow that will run just prior to copying binaries to the drop folder.  (These steps are based on modifying the default build process template, DefaultTemplate.xaml.)

    1. Open the build process template you want to edit (it may be simpler to create a new template, based on the default template, to work with).
    2. Expand the activity “Run On Agent”
    3. Expand the activity “Try, Compile, Test and Associate Changesets and Work items”
      1. Click on “Variables”, find BuildDirectory, and set its scope to “Run On Agent”
    4. In the “Finally” area, expand “Revert Workspace and Copy Files to Drop Location”
    5. From the toolbox (Control Flow tab), drag a new Sequence onto the designer, just under/after the “Revert Workspace for Shelveset Builds”. (Adding a sequence will allow you to better manage/visualize the activities related to code metrics generation).
      1. In the Properties pane, set the DisplayName to “Run Code Metrics”
    6. From the toolbox (Team Foundation Build Activities), drag a WriteBuildMessage activity into the “Run Code Metrics” sequence.
      1. In the Properties pane
        1. set DisplayName to Beginning Code Metrics
        2. set Importance to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.Normal (or adjust to .High if needed)
        3. set Message to “Beginning Code Metrics: “ & BinariesDirectory
    7. From the toolbox, drag an InvokeProcess activity into the sequence below the “Beginning Code Metrics” activity (this activity will actually execute code metrics generation).
      1. In the Properties pane
        1. set DisplayName to Execute Code Metrics
        2. set FileName to """<path to Metrics.exe on the build machine>"""
        3. set Arguments to "/f:""" & BinariesDirectory & "\<name of assembly>"" /o:""" & BinariesDirectory & "\MetricsResult.xml"""  (you can also omit the assembly name to run metrics against all assemblies found)
        4. set WorkingDirectory to BinariesDirectory
    8. (optional) From the toolbox, drag another InvokeProcess activity below “Execute Code Metrics”.  (This activity will copy the XSD file to the binaries directory.)
      1. In the Properties pane
        1. set DisplayName to Copy Metrics XSD file
        2. set FileName to "xcopy"
        3. set Arguments to """<path to MetricsReport.xsd>"" """ & BinariesDirectory & """"
    9. Save the XAML file and check it in to TFS.

    The sequence you just added should look like this (boxed in red):

    Workflow after adding code metrics sequence

    You basically have a sequence called “Run Code Metrics” which first spits out a message to notify the build that code metrics are beginning.

    Next, you actually execute the Metrics.exe executable via the InvokeProcess activity, which dumps the results (XML) file in the Binaries directory (this makes it simpler to eventually copy into the drop folder).

    The “Copy Metrics XSD file” activity is another InvokeProcess activity which brings along the appropriate XSD file with the metrics result file.  This is optional of course.

    After you run a build using this updated template, your drop folder should have something like this:

    Drop folder after running build with new template

    Pay no attention to the actual binaries – it’s the presence of MetricsReport.xsd and MetricsResults.xml that matter.

    Pretty cool, but there’s one annoyance here!  The metrics results are still in XML, and aren’t nearly as readable as the results pane inside Visual Studio:

    MetricsResults.xml on top, Code Metrics Results window in VS on bottom

    Don’t get me wrong – this is a huge first step toward a fully-baked out-of-VS code metrics generator.  The actual report generation formatting will surely be improved in future iterations.

    I decided to take one additional step and write a simple parser and report generator to take the XML results and turn them into something prettier, like HTML.

    Before I dive into code, this is the part where I remind you that I’m not (nor have I ever been) a developer by trade, so the code in this blog is purely for functional example purposes.  ;)

    I created a relatively simple console application to read in a results XML file, parse it, and spit out a formatted HTML file (using a style sheet to give some control over formatting).

    I’m posting the full example code to this post, but below are the highlights:

    I first created some application settings to specify the thresholds for Low and Moderate metric values (anything above ModerateThreshold is considered “good”).

    Settings to specify Low and Moderate metric thresholds
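    To illustrate the intent of those settings, here’s a hedged sketch of the threshold logic (names and default values are mine, not the actual application settings; they loosely mirror Visual Studio’s maintainability-index bands):

```python
# Sketch of the threshold logic: values below LOW_THRESHOLD are "low",
# values below MODERATE_THRESHOLD are "moderate", anything at or above
# MODERATE_THRESHOLD is considered "good".  These constants stand in for
# the LowThreshold/ModerateThreshold application settings.
LOW_THRESHOLD = 10
MODERATE_THRESHOLD = 20

def css_class_for(value, low=LOW_THRESHOLD, moderate=MODERATE_THRESHOLD):
    """Pick the CSS class used to color a metric value in the HTML report."""
    if value < low:
        return "low"
    if value < moderate:
        return "moderate"
    return "good"
```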

    I created a class called MetricsParser, with properties to capture the results XML file path, the path to output the report, and a path to a CSS file to use for styling.

    To store individual line item results, I also created a struct called ResultEntry:

    struct ResultEntry
    {
        public string Scope { get; set; }
        public string Project { get; set; }
        public string Namespace { get; set; }
        public string Type { get; set; }
        public string Member { get; set; }
        public Dictionary<string, string> Metrics { get; set; }
    }

    I then added:

    private List<ResultEntry> entries = new List<ResultEntry>();

    which captures each code metrics line item.

    If you look at the results XML file, you can see that in general the format cascades itself, capturing scope, project, namespace, type, then member.  Each level has its own metrics.  So I wrote a few methods which effectively recurse through all the elements in the XML file until a complete list of ResultEntry objects is built.

    private void ParseModule(XElement item)
    {
        string modulename = item.Attribute("Name").Value.ToString();
        ResultEntry entry = new ResultEntry
        {
            Scope = "Project",
            Project = modulename,
            Namespace = "",
            Type = "",
            Member = ""
        };

        List<XElement> metrics = (from el in item.Descendants("Metrics").First().Descendants("Metric")
                                  select el).ToList<XElement>();
        entry.Metrics = GetMetricsDictionary(metrics);
        entries.Add(entry);

        List<XElement> namespaces = (from el in item.Descendants("Namespace")
                                     select el).ToList<XElement>();
        foreach (XElement ns in namespaces)
            ParseNamespace(ns, modulename);
    }

    Bada-bing, now we have all our results parsed.  Next, to dump them to an HTML file.

    I simply used HtmlTextWriter to build the HTML, then wrote it to a file.  If a valid CSS file was provided, the CSS was embedded directly into the HTML header:

    #region Include CSS if available
    string cssText = GetCssContent(CssFile);
    if (cssText != string.Empty)

    After that, I looped through my ResultEntry objects, inserting them into an HTML table, applying CSS along the way.  At the end, the HTML report is saved to disk, ideally in the build’s binaries folder.  This then allows the report to be copied along with the binaries to the TFS drop location.

    Code Metrics Results HTML Report

    You’ll notice that this layout looks much like the code metrics in Visual Studio if exported to Excel.

    So again, not the most sophisticated solution, but one that a pseudo-coder like me could figure out.  You can expand on this and build all of this into a custom build activity which would be much more portable.

    Here is the code for MetricsParser:

    Again, I recommend looking at Jakob’s solution as well.  He puts a much more analytical spin on build-driven code metrics by allowing you to specify your own thresholds to help pass or fail a build.  My solution is all about getting a pretty picture.

    Happy developing!

  • Steve Lange @ Work

    Visual Studio 2012 Launch Roadshow!


    Visual Studio 2012 Launch Roadshow

    If you’re not heading to Seattle for the Visual Studio 2012 Launch Event on September 12th, don’t worry: We’re coming to you!

    Be our guest and attend in person to experience all of the incredible new capabilities of Visual Studio 2012 first hand.


    For those of you who have attended the events so far and are looking for the slides/content, look no further! Everything is here:

    I’ll be there, will you?

    Discover how Visual Studio 2012 allows you to collaborate better and be more agile. See how it helps you turn big ideas into more compelling apps. Experience how it integrates best practices that accelerate development and deployment.  You’ll enjoy several sessions which will take Visual Studio, Team Foundation Server, and Test Professional through their paces to show off what’s possible with this incredible release!

    Register today for a city near you (dates and locations listed below), we hope to see you there!

    Cities & Dates


    Denver, CO


    Lehi, UT


    Tempe, AZ


    San Diego, CA


    Irvine, CA


    Mountain View, CA


    San Francisco, CA


    Portland, OR


    Boise, ID


    Registration/check-in begins at 8:30.  The event runs from 9:00AM to 4:00PM.

  • Steve Lange @ Work

    VS/TFS 2012 Tidbits: TFS Feedback Management Behind the Scenes


    In this post, I’m going to take a closer look at how the new feedback capabilities in TFS 2012 work under the covers.

    In general, TFS users can request “feedback” from stakeholders about particular elements/areas of an application, covering either specific or general scenarios.  TFS manages the feedback process using work items and email notifications.  But how does this work, really?  Let’s find out!

    You initiate a feedback request from your project (or team) dashboard by clicking on “Request Feedback”:


    In this example, I’m soliciting feedback for Bing searches.  And I’ve asked for two items of feedback (called “feedback items”, see Step 3 in the screenshot).

    So what happens when I actually click send?

    Well, as far as the stakeholder is concerned, they just get an email with some simple instructions to get started:


    But on the TFS side, TFS creates two work items (of type Feedback Request), one for each feedback item specified in the previous request (see #142 and #143):


    Let’s look at #142 (Search for ‘Steve Lange’) and see what’s stored.

    The Title field obviously houses the name of the feedback item.  On the Details tab, the Description field holds the specific instructions that correspond with the feedback item.



    On the Application tab you’ll find the details from step 2 on the request form (“Tell the Stakeholders How to Access the Application”).


    And lastly, you’ll notice that the feedback items (Feedback Request work items) are linked together to form the larger, composite request.


    So how can we see to whom this request was sent?  Look in the comments of the history for the work item (at the bottom of the screenshot):


    The benefit of this approach is that it allows feedback items to be tracked individually.  If you submit a feedback request, but ask stakeholders to check out features that align with different backlog items/requirements/whatever, this method provides more specific tracking.

    Reuse or Making Corrections

    Another less obvious, but equally nice, benefit of managing feedback requests this way is that you can make changes to a request without creating an entirely new one.  This is possible because the link inserted in the email sent to the stakeholders references the work item IDs of the feedback request items, rather than embedding all the instructions in the URL (or statically somewhere else).

    So if I make a mistake, I don’t have to create a brand new request; instead I can update the Title, Description, and other applicable fields on the Application tab and have the stakeholder simply reuse the previously-sent link.

    In this example, the URL provided in the email looks like this (see bolded area):


    By updating either of the work items specified in the URL, the feedback session will subsequently be updated.

    More on the feedback URL

    You can modify the URL and send it directly to someone without filling out the feedback request form.  For instance, if I requested feedback on work items 142 and 143 from certain people but also wanted feedback on work item 143 from an additional person, I can augment the URL and send it to that one-off person.  In this example it would look like:


    Here’s the basic breakdown of the URL:


    <TFS_URL> The URL of your TFS instance
    <Project_Name> The name of your Team Project
    work_item_ids Comma-separated list (the comma may appear URL-encoded as %2C) of work item IDs.  These IDs need to correspond to Feedback Request work items or the Feedback Client will throw an error.
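    To make that structure concrete, here’s a small sketch (in Python purely for illustration; the path pieces other than the ID list are intentionally left as “...”, since the full URL appears only in the screenshot above):

```python
from urllib.parse import quote

def feedback_link(tfs_url, project_name, work_item_ids):
    """Assemble the ID portion of a feedback link from its parts."""
    # Work item IDs are comma-separated; the comma often shows up
    # URL-encoded as %2C in the actual email link.
    ids = "%2C".join(str(i) for i in work_item_ids)
    # "..." stands in for the elided path segments between the project
    # name and the ID list.
    return f"{tfs_url}/{quote(project_name)}/...{ids}"
```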

    Moving On

    So let’s say I walk through the process of providing feedback using the Microsoft Feedback Client.  To summarize, here’s the feedback I provide:



    Once I submit my feedback through the Feedback Client, a work item (of type Feedback Response) is created for each individual reply.  (See #144 and #145 below)


    Again, this allows teams to track individual responses to discrete items.  So if a stakeholder skips feedback for a particular item, it doesn’t interfere with feedback on other items.

    Revisiting and Providing More Feedback

    Here’s one last great feature.  As the stakeholder, let’s say I want to review or amend feedback I’ve already provided, or submit additional feedback.  I’m covered!  If I simply click the link provided in the original feedback request email, then upon entering the “Provide” step of my feedback session, TFS is smart enough to see that I’ve already provided feedback, and it inserts the feedback details I provided earlier into the Feedback Client.  So now I can make changes to existing feedback, or enter more information. 

    For example, here’s the Feedback Response work item created from my first submitted feedback:


    If I click on the URL in the original email sent to the stakeholder, the Feedback Client runs again, and I can instantly see the feedback I’d previously supplied:


    (Look familiar?)

    Really, all it’s doing is looking at the Feedback Request work item and checking whether I’ve already submitted a Feedback Response item.  If I have, it pulls the content of the Stakeholder Comments field and sends it over to the Feedback Client for the stakeholder to make further edits.  Sweet!


    The basic thing to remember here is that the Feedback Management process in TFS uses TFS work items to manage the storage and workflow of providing feedback.  Think of the Feedback Client as a very lightweight TFS client.  Changes you make in the Feedback Client either create or update Feedback Response work items in TFS.  Direct changes made in TFS to the work items are reflected in the Feedback Client when those work items are accessed.

    I hope this helps better explain how feedback actually works in TFS 2012.  It’s a terrific and easy way to engage stakeholders and get feedback at various points in the development lifecycle.  And it’s a flexible implementation as well, providing mechanisms for reuse and more granular tracking.


  • Steve Lange @ Work

    “Fake” a TFS Build (or Create a Build That Doesn’t Do Anything)


    Team Foundation Server’s build system serves as the “heartbeat” for your development lifecycle.  It automatically creates relationships between code changes, work items, reports, and test plans.

    But once in a while I’m asked, “What if we don’t use TFS Build for building our application, but we still want to have builds in TFS so we can track and associate work?”  Besides the biased question of “Why NOT use TFS Build, then?!”, there is sometimes the need to leverage the benefit of having empty/fake builds in TFS that don’t do anything more than create a build number/entry in TFS.

    There are a couple scenarios where this makes some sense, but the most common one I hear is this:

    Without builds in TFS, it’s near impossible (or at least very inconvenient) to tie test plans (accurately) to the rest of the lifecycle.

    Luckily, TFS 2010’s build system is incredibly flexible: flexible enough to allow us to “fake” builds without actually performing build actions (get source, compile, label, etc.).  It’s surprisingly simple, actually; and it doesn’t require writing any code.

    In my example (which I’ll detail below), I define a build which doesn’t do much more than craft a build number and spit out some basic information to the build log.

    First, create a new build process template, based on the default process template, using the steps described in this MSDN article.

    Once you have the process template created and registered in TFS, open the new template (.xaml file) in Visual Studio.  It will look (collapsed) something like this:

    Collapsed default build process template

    Here’s where it gets fun.  Inside the outermost sequence, delete every sequence or activity except for “Get the Build”.

    Drag an UpdateBuildNumber activity from the toolbox into the sequence, after “Get the Build”.

    (optional) Rename “Get the build” to “Get Build Details” so there’s no implication that an actual build will take place.

    Now expand the Arguments section (at the bottom of the XAML Designer window).  Delete all arguments except for BuildNumberFormat, Metadata, and SupportedReasons.

    At the bottom of the now-shorter list, use “Create Argument” and create the following arguments:

    Name Direction Argument type Default value
    MajorBuildName In String  
    MinorBuildName In String  
    Comment In String  
    IncludeBuildDetails In Boolean True

    “MajorBuildName” and “MinorBuildName” will be used to help manually name each build.  “Comment” will be used to capture any notes or comments the builder wants to include for a given build.  “IncludeBuildDetails” will be used to determine whether additional summary information about the build is written to the build log.

    To provide users with means to set values to these arguments, create parameters in Metadata.  Click the ellipsis (…) in the Default value column for Metadata.  This will bring up the Process Parameters Metadata editor dialog.  Add each of the following parameters:

    Parameter Name Display Name Category Required View this parameter when
    MajorBuildName Major Build Name Manual Build Details Checked Always show the parameter
    MinorBuildName Minor Build Name Manual Build Details Unchecked Only when queuing a build
    Comment Comment Manual Build Details Unchecked Only when queuing a build
    IncludeBuildDetails Include Build Details Summary Manual Build Details Unchecked Always show the parameter

    Process Parameter Metadata editor

    A couple notes about setting the above parameters:

    • The “parameter name” should match the name of the like-named argument.
    • Use the exact same category name for each parameter, unless you want to see different groupings.  Also, check for any leading or trailing whitespace, as the category field is not trimmed when saved.
    • Feel free to add descriptions if you like, as they may help other users understand what to do.
    • Leave the “Editor” field blank for each parameter.

    Your dialog should now look something like the one at right.

    Next, open the expression editor for the Value property of the BuildNumberFormat argument and edit the value to read: “$(BuildDefinitionName)_$(Date:yyyyMMdd)_$(BuildID)”. Including the BuildID will help ensure that there is always a unique build number.
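    To see what that format yields, a quick sketch (in Python for illustration; the definition name and ID are made-up sample values, and TFS substitutes the real ones at build time):

```python
from datetime import date

def format_build_number(definition_name, build_id, build_date):
    """Mirror the BuildNumberFormat expression: name_yyyyMMdd_id."""
    return f"{definition_name}_{build_date:%Y%m%d}_{build_id}"

# format_build_number("FakeBuild", 57, date(2011, 3, 15))
#   -> "FakeBuild_20110315_57"
```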

    Now click “Variables” (next to Arguments) and create a new variable named ManualBuildName of type String, scoped to the Sequence, and enter the following as the Default:

    If(String.IsNullOrEmpty(MinorBuildName), MajorBuildName, MajorBuildName & “.” & MinorBuildName)

    This variable will be used to provide a manual build name using the supplied MajorBuildName and MinorBuildName arguments.
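    The expression’s behavior, sketched in Python (the function name and sample values are mine, for illustration only):

```python
def manual_build_name(major_build_name, minor_build_name=""):
    """Mirror the workflow expression: Major alone, or Major.Minor."""
    if not minor_build_name:
        return major_build_name
    return f"{major_build_name}.{minor_build_name}"

# manual_build_name("Sprint2")        -> "Sprint2"
# manual_build_name("Sprint2", "RC1") -> "Sprint2.RC1"
```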

    Now we have all the variables, arguments, and parameters all ready to go.  Let’s put them into action in the workflow!

    Drag a WriteBuildMessage activity into the main sequence, before Get Build Details, with these settings:

    • Display name: “Write Build Comment”
    • Importance: Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High
    • Message: “Comment for this build: “ & Comment

    Next, add an “If” activity below “Get Build Details” to evaluate when to include additional details in the build log, with the following properties:

    • Display name: “Include Build Details If Chosen”
    • Condition: IncludeBuildDetails

    In the “Then” side of the “If” activity, add a WriteBuildMessage activity for each piece of information you may want to include in the build log.  In my example, I included 3 activities:

    • Display name: “Write Team Project Name”; Importance: Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High; Message: "Team Project: " & BuildDetail.TeamProject
    • Display name: “Write Requested for”; Importance: Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High; Message: "Requested for: " & BuildDetail.RequestedFor
    • Display name: “Write Build reason”; Importance: Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High; Message: "Build Reason: " & BuildDetail.Reason.ToString()

    Your “If” activity will look like this:

    If activity for showing build details

    The last thing to do is to add an UpdateBuildNumber activity as the last element in the main sequence, with the following properties:

    • Display name: “Set Build Number”
    • BuildNumberFormat: BuildNumberFormat & "-" & ManualBuildName

    This last activity will actually create the build number which will be stored back into TFS.  Your completed workflow should look like this:

    Completed fake build process
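Before checking the template in, it can help to sanity-check the naming logic it implements. Here is a hedged Python approximation of what the workflow will produce (illustration only, not code that runs inside TFS; the function names are mine, but the logic mirrors the ManualBuildName expression and the BuildNumberFormat concatenation above):

```python
# Python sketch of the build-number logic assembled in the workflow above.
# Approximation for illustration only; TFS evaluates these as VB expressions.
from datetime import date

def manual_build_name(major_build_name, minor_build_name):
    # Mirrors: If(String.IsNullOrEmpty(MinorBuildName), MajorBuildName,
    #             MajorBuildName & "." & MinorBuildName)
    if not minor_build_name:
        return major_build_name
    return major_build_name + "." + minor_build_name

def build_number(build_definition_name, build_id, major_build_name, minor_build_name):
    # Expands "$(BuildDefinitionName)_$(Date:yyyyMMdd)_$(BuildID)" and appends
    # "-" & ManualBuildName, as the "Set Build Number" activity does.
    fmt = "{0}_{1}_{2}".format(build_definition_name,
                               date.today().strftime("%Y%m%d"),
                               build_id)
    return fmt + "-" + manual_build_name(major_build_name, minor_build_name)
```

So a definition named “FakeBuild” with MajorBuildName 2 and MinorBuildName 1 would yield something like FakeBuild_yyyyMMdd_42-2.1.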

    Now go back to Source Control Explorer and check this template back into TFS.

    Go create a new build definition, opting to use your new template on the process tab.  You’ll notice that your options are dramatically simplified:

    Process tab on build definition editor

    Specify a value for Major Build Name and save your new definition. 

    Queue the build and you’ll see the following on the Parameters tab:

    Parameters tab while queuing a build

    Enter some basic information and click “Queue” to run the (fake) build.

    What you end up with is a build that completes in just a couple seconds, does pretty much nothing, but includes your specified information in the build log:

    Build log after fake build

    Pretty sweet!

    And just to be clear, my example adds more “noise” into the build than you may find necessary, with additional build information, comments, etc.  You could streamline the build even more by removing the “Include Build Details If Chosen” activity (and all its sub-activities).

    Given the overall flexibility TFS 2010 has with incorporating Windows Workflow into the build system, there are undoubtedly other ways to accomplish variations of this type of build template.  But I had fun with this one and thought I should share.  I’ve posted my sample template’s xaml file on SkyDrive here:

    I’m all ears for feedback!

  • Steve Lange @ Work

    One-Click Check-in on Southwest Airlines with your Windows Mobile Phone


    NOTE:  A year or two after this posting, Southwest updated their mobile site, so this no longer works.

    Okay, I just had to share this nugget of a time-saver (If you know about it already, then this won't seem very original..).  I got this tip from a colleague of mine, so I'm not taking credit here, but rather just passing it along.

    If you haven't flown Southwest Airlines before, it's open seating: first-come, first-served, based upon passengers' order of check-in.  That means that if you check in first, you board first. 

    First 60 to check in get an A boarding pass (numbered 1-60).
    Second 60 to check in get a B boarding pass (numbered 1-60).
    Everyone else gets a C boarding pass (numbered 1-60).

    You can check-in 24 hours before departure.  So what do you do if you want an "A" boarding pass but aren't at your computer to check-in online?

    Southwest Airlines has a mobile website which allows you to check-in via your phone and then print your boarding pass at the airport.  So that saves you some time.  You go to the site with your Windows Mobile phone, enter your first name, last name, and confirmation number, and you're all set.

    Check-in page on SWA's mobile site

    Fill in your information, and (assuming you're within the 24-hour check-in window) you'll arrive here:


    Click "Check In All", and you'll be checked into your flight:


    Then just either print your boarding pass later on your printer, or do it at a kiosk at the airport.

    But wait, there's more..

    But what if you don't have the confirmation handy, say, while you're driving in your car? 

    You can link to the check-in page's submission directly by embedding your name and confirmation number in the below URL:

    Following the link directly will take you to the "Checkin Availability" page where all you need to do is click the "Check In All"  button.
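To illustrate the idea, here is a hedged Python sketch of building such a direct link. The endpoint and parameter names below are placeholders I made up; the actual mobile-site URL and its query-string fields are not shown in this post:

```python
# Hypothetical sketch only: BASE_URL and the parameter names are placeholders,
# NOT Southwest's actual mobile check-in endpoint or field names.
from urllib.parse import urlencode

BASE_URL = "http://mobile.example.com/checkin"

def checkin_link(first_name, last_name, confirmation_number):
    # Embed the passenger's name and confirmation number as query parameters,
    # producing a one-click link you can store in a calendar entry or task.
    params = {
        "firstName": first_name,
        "lastName": last_name,
        "confirmationNumber": confirmation_number,
    }
    return BASE_URL + "?" + urlencode(params)
```

Saving the output of a call like checkin_link("Steve", "Lange", "ABC123") somewhere handy gives you the one-click experience described here.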

    What I do is save the "template" URL as part of my Outlook Contact entry for Southwest.  When I book a flight and add the flight to my calendar, I put the completed URL in the calendar entry, then set a 1 day (24 hour) reminder for the flight. 

    When I get the reminder, I simply open the calendar entry, click the link, and check-in.  It takes less than 30 seconds.

    Another way to store the completed URL is to create an Outlook task ("Check in for tomorrow's flight") with the URL, with a reminder or due date set for 24 hours before the flight.

    And since my Windows Mobile device automatically syncs with Exchange, my calendar and task entries, including their reminders, are readily accessible from my phone.

    Lastly, I also use TripIt to organize and share my travel itineraries with family and friends.  I can add the direct check-in URL to my itinerary and access it on my mobile phone via TripIt's mobile site.

  • Steve Lange @ Work

    A Mock Business Plan for the New Microsoft Stores


    “Coming Soon, to a Mall Near You”

    So if you haven’t checked your favorite news site already, Microsoft has announced plans to open retail stores (for real – we’ve even hired a VP to do it).

    Initial, knee-jerk thoughts vary greatly, from the “what are they thinking” to “hey, that could work”..

    I’m mixed on this one.  The Microsoftie in me thinks this is a bold but needed step to start correcting the negative perception of Microsoft products in the eyes of the consumer (Vista sucks, right?  Microsoft’s evil, right?).  The amateur economist in me can’t help but be wary of venturing into retail when that industry is hurting so badly.

    Signage?

    Since the state of the economy has been discussed to an almost numbing degree, let’s look at the possible positive (and humorous, of course) scenarios surrounding the “Microsoft Store”.

    First of all, what will it be called?  Should we follow Apple’s suit and just call it the “Microsoft Store”?  (Actually, if we’re really following Apple we wouldn’t have a name, just the Windows or Vista logo.)  Here are a few thoughts:

    • Microsoft Store
    • MSStore
    • Windows
    • The Mojave Store
    • Hotfix

    The nay-sayers are wondering what the heck will actually be sold in the store.  It’s not like we can “sell” Windows Live, SkyDrive, or Photosynth.  Well, it sounds like the store will be stocked with new computers (Dell, HP, etc.) loaded with Vista (actually, probably Windows 7 by the time the stores are fully operational), software packages (i.e. Office), Xboxes and Zunes.  All the typical stuff, right?  Ahh, not so fast.  A real hidden bonus for this retail idea is the opportunity to showcase a lot of physical products (i.e. hardware, what you can touch) that the typical consumer may not know about.  Let’s look at some of the possibilities (including some obvious ones):

    • Xbox:  Duh!  Have plenty of Xboxes to sell, and have several set up, networked together and online.  Also showcase how users can watch Netflix movies, and connect to Media Center PC’s.
    • Zune:  Another “duh”, right?  The Zune, right out of the gate, unfortunately had to bear a “this product is crappy” moniker simply because of the Microsoft logo on it.  If you haven’t actually played with one before, here’s your opportunity!
    • Gaming Products:  Huh?  That’s right.  Did you know that Microsoft cranks out some killer accessories to boost your gaming experience?  Like the Sidewinder mouse & keyboard, and Reclusa keyboard.
    • Communications Hardware:  There are some really great available webcams and headsets.  I have a LifeCam NX-6000 for my laptop and it works terrific given its form factor.
    • Mice & Keyboards:  Beyond just the standard ones, try the wireless presenter mouse or Explorer Mini-Mouse.
    • Cell Phones:  Unless you’re a corporate guy/gal, you may not really know that Microsoft provides an OS for smartphones/PDAs called Windows Mobile.  Why not use a storefront to showcase some of the cooler phones running Windows Mobile?
    • Surface:  Sure, no one will really be able to actually buy one, but putting a Surface machine or two in a store will bring people in the door, GUARANTEED.  Encourage folks to put their phones on it and display pictures, view YouTube videos, play games, etc.  Put it smack-dab in the middle of the store.
    • Mediaroom:  Microsoft Mediaroom isn’t a light investment either, but it provides a “whoah, that’s cool” factor which will bring people in the door (“butts in seats”, as we presenters call it).

    Now, what should the PC’s in the store have on them?  Okay, okay – BESIDES Windows and Office.  Here’s a short list of software & services that should be readily available for any shopper who saddles up to a machine, including what the “Microsoft Guru” should be ready to show:

    • Live Products (Live Writer, Live Photo Gallery, Live Messenger, Live Mesh, etc.):  Have some sample LiveIDs already set up so shoppers can browse the various Live services, such as Spaces, SkyDrive, and Photos.  Show how the different services work together (example: use Live Writer to post to a blog on Spaces, pulling pictures from Live Photo Gallery or even Facebook).  Demonstrate how you can use Live Mesh to easily push photos from your PC in Colorado to Grandma in California.
    • PhotoSynth:  Seriously, this is a killer app if you like to take pictures.  Show it off with existing collections, or take a battery of pictures of the store and watch it work.
    • Windows Home Server:  Why not?  Show how WHS can automatically back up all the computers in your house, and restore them from crashes in just a few clicks.  On the more fun side, demonstrate how to serve up websites & photo albums.
    • Media Center:  Show how you can record TV right to your PC, and access/broadcast those shows in other areas of your house.
    • AutoCollage:  Take eight pictures of the store, and show how easily you can drop them into a collage.
    • Songsmith:  Create a song on the fly.

    There are several more, but this is a good start, I think.

    Take a page from the Apple folks and surround all the set up PC’s with complementary products, such as Windows Mobile phones, Zunes, digital picture frames, etc.

    Now of course, you’ll want to stock the shelves with all the software we offer, including OS’s, Office, Streets & Trips, OneCare, etc.

    Lastly, there should be an “ask the expert” station where you can discuss any Microsoft-related product issue with (presumably) an expert.  There shouldn’t just be sales-oriented people in the store, but rather technical support types that can put a smile on their faces.  The store employees will need a thick skin, as there will undoubtedly be anti-Microsoft folks (justified or not) walking in for the sake of whining & moaning.  (As a former tech support guy, I assure you they’re out there.)

    These “gurus” should hold regularly-scheduled workshops:  “Get the most out of your photos”, “How to back up my PC”, “Tell me about Internet Explorer”.. those kinds of things.

    So we’ve covered signage, inventory, and personnel.  What about store layout?  I have no idea what this will actually look like, but here’s a rough thought:

    Okay, I got carried away.. possibility for MS Store Layout? (by Visio)

    The key to getting people in the store will be to move the rows of stocked software (boring to look at) to the back and bring the cool stuff to the front, i.e. Xbox and Surface.  If a shopper walking by glances inside and sees some people on a couch having a blast playing video games, and a small crowd of people going nuts on a Surface, that person will have a hard time not venturing inside to check it out.

    Okay, so I’ve gone a little overboard here.  I had a little time on my hands and found myself getting surprisingly excited by this concept.  To start changing perception, Microsoft needs to be tangible and approachable.  This could be a great start!

  • Steve Lange @ Work

    I told you it was coming! “Team System Big Event”


    I mentioned this before, but couldn’t give away the details until now.  We’re covering all aspects of Application Lifecycle Management, and I hope to see you there.  We plan to pack each venue, so tell your friends!  There will be presenters from both Microsoft and the development community, including Team System MVPs, so you’re bound to be entertained and learn something in the process!

    Be sure to register below, and download the attached PDF to distribute as you see fit!

    How do you take an idea from conception to completion? How can you truly do more with less?

    Please join us for this FREE unique, invitation-only event to discover how both product and processes help your organization succeed in today’s environment. We will explore how Team System assists teams across the board to be successful in today’s tough times. This “break through” event will not only provide you with best practices around development and testing, but will demonstrate key capabilities of both Visual Studio Team System 2008 and the upcoming 2010 release. It’s a day that promises to have something for everyone!

    Team System Big Event



    • Test Driven Development: Improving .NET Application Performance & Scalability
      • This session will demonstrate how to leverage Test Driven Development in Team System. We’ll highlight both writing unit tests up front as well as creating test stubs for existing code.
    • "It Works on My Machine!" Closing the Loop Between Development & Testing
      • In this session, we will examine the traditional barriers between the developer and tester; and how Team System can help remove those walls.
    • Treating Databases as First-Class Citizens in Development
      • Team System Database Edition elevates database development to the same level as code development. See how Database Edition enables database change management, automation, comparison, and deployment.
    • Architecture without Big Design Up Front
      • Microsoft Visual Studio Team System 2010 Architecture Edition introduces new UML designers; use cases, activity diagrams, and sequence diagrams that can visualize existing code; layering to enforce dependency rules; and physical designers to visualize, analyze, and refactor your software. See how VSTS extends UML logical views into physical views of your code. Learn how to create relationships from these views to work items and project metrics, how to extend these designers, and how to programmatically transform models into patterns for other domains and disciplines.
    • Development Best Practices & How Microsoft Helps
      • Sometimes development teams get too bogged down with the details. Take a deep breath, step back, and re-acquaint yourself with a review of current development best practice trends, including continuous integration, automation, and requirements analysis; and see how Microsoft tools map to those practices.
    • "Bang for Your Buck" Getting the Most out of Team Foundation Server
      • Today’s IT budgets are forcing teams to do as much as they can with as little as possible. Why not leverage Team Foundation Server to its full potential? In this session we’ll highlight some capabilities of TFS that you may or may not already know about to help you maximize productivity.

    Welcome: 8:00 AM

    Seminar: 8:30-5:00 PM


    Denver, CO April 22, 2009 Click here to register with invitation code: DD1A7F
    Mountain View, CA April 28, 2009 Click here to register with invitation code: 80D459
    Irvine, CA April 30, 2009 Click here to register with invitation code: A86389
    Portland, OR May 5, 2009 Click here to register with invitation code: 2DC0A9
    Phoenix, AZ May 7, 2009 Click here to register with invitation code: 90BC47

    To Register by Phone – Call 1.877.MSEVENT (1.877.673.8368) with invitation code.

    PS - This event is Free!

  • Steve Lange @ Work

    Microsoft Test Manager – Working Across Projects: Adding an Existing Test Case vs. Cloning/Copying


    You’ve probably noticed that, for the most part, managing multiple test plans and test cases is more convenient when they’re part of the same Team Project in TFS/VSO.  For one thing, area paths and iteration paths just work much more cleanly.

    If you’re working across multiple Team Projects, though, the story changes slightly.  Things can still work just fine, but you need to be even more aware of the differences between working with existing test cases, and copying them.

    Let’s look at the differences.

    Using an Existing Test Case (by reference)

    When adding test cases that exist in another TFS/VSO project entirely (not just another test plan in the same project), you are basically creating a reference back to the original test case, not a copy.  Because it is a reference, when the test case is opened it is opened as it resides in the source test plan/project, and you can’t modify the area or iteration fields to represent the target project; those fields remain scoped to the project in which the test case resides.

    Consider this example:

    I have two projects: Project A and Project B. I created a test plan in Project A called “Master Test Plan”, and inside that plan created a test case named “Test Case from Master Test Plan in Project A” (just to make it easy to reference). It has an ID of 13.


    In Project B, I create a test plan named “Project B Test Plan”. In this plan, I want to reference the test case from Project A. So I select “Add existing” from the toolbar, and query VSO for the test case.


    I select the test case and click “Add test cases”, which adds it to my plan in Project B, as seen below:


    If I open the test case, I am only able to set the iteration path (and the area path, for that matter) to a value within the scope of the project from which the test case was referenced.


    Because we are referencing the test case, any changes made here will be reflected back in the Master Test Plan in Project A.  We are working with a single instance of the test case; we’re just referencing it from a different location.

    Copying a Test Case across Projects

    If you wish to have a discrete copy of a test case, test cases, or test suites across projects (to a test plan that resides in a different project), you can perform a “clone” across projects via the command line (tcm.exe).

    The TCM tool contains various options to control what gets copied, and new values to set (i.e. area and iteration).  (It can also be used to run automated tests.)

    For this example, I’ll copy/clone my test cases (in the root test suite) of Project A to a newly-created project, Project C (in which I have a test plan named “Project C Test Plan”).  Since this is a basic example, all that exists is that single test case.

    Here is the command line I will run:

    tcm suites /clone
        /teamproject:"Project A"
        /destinationteamproject:"Project C"
        /suiteid:1
        /destinationsuiteid:3
        /overridefield:"Iteration Path"="Project C\Release 1\Sprint 1"
        /overridefield:"Area Path"="Project C"
    When I execute this for the current scenario, I’m telling the tool to copy the test suite (with ID: 1, the root test suite in my “Master Test Plan”) from Project A to Project C (to the suite with ID: 3, the root test suite in Project C’s test plan, “Project C Test Plan”).  I’m also instructing the tool to set the Iteration Path of the copied test cases to “Project C\Release 1\Sprint 1”, and the Area Path to “Project C”.

    After running this, if I look at my test plan in Project C, I see this:


    I can see the copied test suite (“Master Test Plan”), and the test case it contains. If I open that test case, note the new values of Area and Iteration:


    Also note the new ID (14) as a sign that this is an actual copy of the original test case (ID 13).

    Because I’ve actually made a copy of this test case (not a reference, as in the first scenario), any changes I make to this test case will NOT affect the original from Project A.  To illustrate this, I modified the title of the test case in Project C to reflect that it has been changed.  Compare that change (top) with the original test case back in Project A (bottom):



    To help with traceability, the command line tool created a link between the two so users can see where the test case came from, and gain context as to why it’s there.
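The two behaviors above work much like object references versus deep copies in code. Here is a rough Python analogy (illustration only; the IDs and titles mirror the example in this post, and the list-of-dicts model is mine):

```python
# Rough analogy: "Add existing" shares one instance; tcm /clone makes a copy.
import copy

test_case = {"id": 13, "title": "Test Case from Master Test Plan in Project A"}

project_a_plan = [test_case]                  # original location
project_b_plan = [test_case]                  # "Add existing": same instance, referenced
project_c_plan = [copy.deepcopy(test_case)]   # tcm /clone: a discrete copy
project_c_plan[0]["id"] = 14                  # the clone gets its own work item ID

# Editing via the reference changes the original too...
project_b_plan[0]["title"] = "Edited via reference"
assert project_a_plan[0]["title"] == "Edited via reference"

# ...but editing the clone leaves the original untouched.
project_c_plan[0]["title"] = "Changed in Project C"
assert project_a_plan[0]["id"] == 13
assert project_a_plan[0]["title"] == "Edited via reference"
```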


    Additional notes

    Thanks for reading!

  • Steve Lange @ Work

    Creating a Data-Driven Web Test against a Web Service


    Okay, I'm sure some of you will tell me, "Yeah, I know this already!", but I've been asked this several times.  So in addition to pointing you to the MSDN documentation, I thought I'd give my own example.

    The more mainstream recommendation for testing a web service is to use a unit test.  Code up the unit test, add a reference to the service, call the web service, and assert the results.  You can then take the unit test and run it under load via a load test.

    However, what if you want a more visual test?  Well, you can use a web test to record interaction with a web service.  This is actually documented in the MSDN Library here, but below is my simple example.

    Here's what we're going to do:

    1. Create the web service
    2. Create the web test
    3. Execute the web test (to make sure it works)
    4. Create the test data data source
    5. Bind it to the web test
    6. Run the test again

    First, we create a web service.  In my example, it's the sample "Hello, World" service and I've created one additional method called "HelloToPerson":

    <WebMethod()> _
        Public Function HelloToPerson(ByVal person As String) As String
            Return "Hello, " & person
        End Function

    As you can see, the method will simply say hello to the passed person's name.

    Now, let's create a web test to exercise this web method (Test->New Test, select Web Test), creating a test project in the process if you don't already have one in your solution.  I named my web test WebServiceTest.webtest.

    As soon as Internet Explorer opens with the web test recorder in the left pane, click the "Stop" button in the recorder.  This will return you to Visual Studio's web test editor with an empty test.

    Web test with no requests

    Now launch Internet Explorer, go to your web service (.asmx), and select the method to test (again, in this example it's "HelloToPerson").  Examine the SOAP 1.1 message.  In my example, the message looks like this:

    POST /Service1.asmx HTTP/1.1
    Host: localhost
    Content-Type: text/xml; charset=utf-8
    Content-Length: length
    SOAPAction: "http://tempuri.org/HelloToPerson"

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <HelloToPerson xmlns="http://tempuri.org/">
          <person>string</person>
        </HelloToPerson>
      </soap:Body>
    </soap:Envelope>
    We'll need to refer back to this information (I color-coded a couple of sections for later reference).

    Right-click on the root node (WebServiceTest in my example) and select "Add Web Service Request."

    Add a Web service Request

    In the URL property of the new request, enter the URL of the web service (by default this value is populated with http://localhost/).

    Specifying the correct URL for the web service

    Now, let's make sure we use a SOAP request.  Right-click the request and select "Add Header".

    Adding a header to the request

    Enter "SOAPAction" in the name field.  In the value field, enter the value of SOAPAction in the message from your web service (color-coded in blue).

    Adding the SOAPAction header to the request

    Next, select the String Body node

    • In the Content Type property, specify "text/xml"
    • In the String Body property, copy/paste the XML portion of the SOAP 1.1 message of your web service method (color-coded in red).  Be sure to replace any parameters with actual values you want to test (in this example, my parameter is "person", so I enter "Steve" instead of "string")

    Entering the XML portion of the SOAP message, specifying a real value for the 'person' parameter

    The properties dialog for the String Body node

    Now, right-click on the web service request and select "Add URL QueryString Parameter."

    Adding a URL QueryString Parameter

    In the QueryString Parameter node, specify "op" as the name and the name of your method as the value.  In this example, it's "HelloToPerson".

    Viewing the added QueryString Parameter

    Finally, let's run the test and see the results!

    Viewing the test results

    As you can see, the test passed, and the "Web Browser" panel shows the returned SOAP envelope with the correct results.

    Now for some more fun.  Let's make this a data-driven test so we can pass different values to the web method.

    We'll create a simple data source so that we can pass several names to this method (very helpful so we don't have to record multiple tests against the same method).  You can use a database, XML file, or CSV (text) file as a data source.  In my example, I'm going to use an XML file:

    <?xml version="1.0" encoding="utf-8" ?>

    Save this file as "Names.xml" in your test project. 
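The listing above was truncated. Based on the binding syntax used later in this walkthrough ({{Names_DataSource.Name.Name_Text}}, i.e. a table named "Name" with a column named "Name_Text"), a minimal Names.xml might look like the sketch below; the specific names are just sample values consistent with the runs shown later:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<Names>
  <Name>
    <Name_Text>Steve</Name_Text>
  </Name>
  <Name>
    <Name_Text>Mickey</Name_Text>
  </Name>
</Names>
```

Each repeated <Name> element becomes one row of test data.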

    To make this data source available to the web test, right click on the web test and select "Add Data Source" (you can also click the corresponding toolbar button).

    Adding a data source

    Provide a name for the data source (for me, it's "Names_DataSource") and select XML file for the data source type.

    Selecting the data source type

    Next, provide the path to the XML file, then select the data table containing your test data.  You'll know if you select it correctly since you'll get a preview of your data.

    Selecting the XML file

    Check the boxes next to the data tables you want to be available for your tests.  In my example, I only have one ("Names").


    Click Finish (if you're asked to include any files in your test project, just click yes to the prompts).

    Now your XML data is available to bind to your web test.

    Data source is now available to your test.

    Finally, let's put this data source to work.  We want to bind the name values in the data source to the "person" parameter for my web service call.  If you recall, that value is specified in the String Body property.  So we inject the following syntax (using the values appropriate for this example) into the String Body property:

    {{DataSourceName.TableName.ColumnName}}, so for my example, I use {{Names_DataSource.Name.Name_Text}}
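For context, after binding, the String Body contains the data-binding placeholder in place of the literal name. Roughly (with the method's xmlns value elided here, as in the message shown earlier):

```xml
<HelloToPerson xmlns="...">
  <person>{{Names_DataSource.Name.Name_Text}}</person>
</HelloToPerson>
```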


    Now we just need to tell the web test to execute once for each value in my data source.  We can do this two ways:

    If you will mostly just run this test in a single pass (not iterating through the data source), you can simply run your test and use "Edit Run Settings" to adjust (on a one-off basis) your iteration settings.

    Editing test run settings

    Again, note that doing it this way will affect only the current test run (i.e. the next run made), and will not be saved.

    If you want to specify that you want to use the data source by default, you need to open the LocalTestRun.testrunconfig file in your Solution Items folder.

     Finding the .testrunconfig file

    Opening the .testrunconfig file will give you the below dialog.  Select Web Test on the left, then click the radio button to specify "One run per data source row."  Click Apply then Close.


    Now for the beautiful part.  Go back to your web test and run it again.  This time instead of a single run, it will automatically execute a test run for each row in your data source. 

    Viewing test results with multiple runs

    Notice results for each run, including pass/fail information, and the resulting SOAP envelope with the appropriate method result in each (I've highlighted the second run to show that "Mickey" was used in this run).

    Happy Testing! 

  • Steve Lange @ Work

    VS/TFS 2012 Tidbits: Merging Changes by Work Item


    As the Visual Studio family of products (Visual Studio, TFS, Test Professional) nears its 2012 release, I thought I’d bring some short hits – tidbits, if you will – to my blog. Some of these are pretty obvious (well-documented, or much-discussed), but some may be less obvious than you’d think. Either way, it’s always good to make sure the word is getting out there. Hope you enjoy!

    Merging Changes by Work Item

    This is something that existed in VS 2010, but it wasn’t talked about as much.  While it’s pretty straightforward to track changes merged across branches by changeset, sometimes it’s even more effective to track merges by work item (i.e. show me where changes associated with a work item have been merged/pushed to other branches).

    Let’s catch up. Consider the relatively simple branch hierarchy below:


    A work item has been assigned to Julia, Task #80.


    Julia makes some code changes, and checks in against (linking to) the work item (Task #80).

    She checks in 2 individual changes to create links to 2 discrete changesets from the task.

    Now, it’s easy to go ahead and track an individual changeset by selecting the option from the History window.


    That’s all well and good, but if I didn’t know the exact changeset ID (#17), or if there was more than one changeset associated with the task, this tracking process becomes less effective.

    What Julia can do is right-click on the work item and select “Track Work Item”.    (Note that this option will be disabled if there are no changesets linked to the work item.)


    She can also click the “Track Work Item” button at the top of the work item form:


    I get a much clearer picture now of all the work and where it’s been applied, and the “Tracking” visualization will now include all changesets (in my case, 2 changesets) in the window.

    Now I know exactly what changes to merge.  I merge them, and now I can see that the entire work item has been merged to Main from Dev (i.e. both changesets were merged).


    And just as effectively, I can see these changes in the Timeline Tracking view:


    So that’s it! Tracking by work item is pretty easy to do, and paints a much clearer picture of how a change, from a work item perspective, can be (or has been) applied across branches.

    Again, I know this isn’t exactly a new feature, but there are a lot of people out there who are looking for ways to “merge by work item” and aren’t aware of this feature.

  • Steve Lange @ Work

    Thoughts on TFS Project Collections


    New to TFS 2010, Team Project Collections (TPCs) provide an additional layer of project organization/abstraction above the Team Project level (see the MSDN article, “Organizing Your Server with Project Collections”).

    I’ve been asked numerous times over the past couple of months about the intention of project collections, their flexibility and limitations.  Below are simply my educated thoughts on the subject.  Please do your due diligence before deciding how you wish (or wish not) to implement project collections in your environment.

    You can use collections to more tightly couple related projects, break up the administrative process, or to dramatically increase scale.  But the primary design goal behind introducing project collections is around isolation (of code, projects, or groups within an organization) in a way that provides all the benefits of TFS, scoped to a defined level within a single instance of TFS.  You’re effectively partitioning TFS.

     Basic project collection structure


    If you have ever used TFS 2005 or 2008, think of it this way.  A project collection effectively compartmentalizes all the capabilities you’ve grown to love in a single TFS 2005/2008 instance:

    Project collection compartmentalization

    I won’t go into how you create/edit/delete project collections.  Just know that you can.  (BTW – for those of you upgrading from an earlier version of TFS, your existing projects will go into a single default project collection, named “Default Collection”.  Original, right?)

    Consider this (over-simplified) example.  I have 4 projects in my server, currently in a single (“default”) collection:

    Single collection view

    Say Project A and Project B are used by “Division A” in my company, and Agile1 and Sample Win App are used by “Division B”.  Project A and Project B share some code and leverage the same user base.  The assets in each division’s projects are in no way related to the other division’s.  Consequently, I’d love to take advantage of project collections and separate our divisions’ stuff.

    I build out my collections using the TFS Administration Console to look like this:

    Viewing my project collections in the admin console

    Once that’s done, I can ultimately end up with such a structure that my desired projects are contained in their respective organization’s collection:

    Division A’s stuff:

    Division A's collection

    Division B’s stuff:

    Division B's collection

    Now each division’s stuff is effectively compartmentalized.  No shared process templates, no shared user base, and no shared database (which means one division’s screw-up won’t affect another division’s work).

    Okay, so I lied a little – I earlier said I wouldn’t go into detail about how to CRUD collections.  But I will mention one thing here, which will add context to the above scenario.  In the above, I had a single collection that I effectively wanted to split into two collections (i.e. go from “Default Collection” to “Division A” and “Division B”).  This is surprisingly easy to do (more complicated than drag & drop, but not ridiculous either).  The documentation for splitting a collection lists 15 main steps to accomplish this, but basically what you’re doing is cloning a collection and then deleting what you don’t want.
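    As a very rough sketch, once you’ve backed up and restored a copy of the collection database (the bulk of those 15 steps), the split boils down to attaching the copy as a new collection and deleting the projects that don’t belong in each one. Server, database, collection, and project names below are hypothetical, and you should follow the official documented procedure rather than this from-memory outline:

    ```shell
    # Attach a restored copy of the collection database as a new collection
    # (the /clone option gives the copy a new identity so it can coexist
    # with the original on the same instance).
    TFSConfig Collection /attach /collectiondb:"SQLSERVER;TFS_DivisionB" /clone

    # Remove the projects that don't belong in each resulting collection.
    TFSDeleteProject /collection:http://tfsserver:8080/tfs/DivisionA Agile1
    TFSDeleteProject /collection:http://tfsserver:8080/tfs/DivisionB "Project A"
    ```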

    See?  I told you it would be a simple example.  But if you expand this to think of a TFS environment with 100 projects (instead of my puny four), you get the point.

    This all sounds pretty cool, right?  It. Is. Very. Cool.  Project collections can be used for various purposes in your TFS deployment (consolidating related development efforts, scaling the SQL backend, mapping TFS hierarchy to organization hierarchy, etc.).  However, with flexibility comes complexity.  If you had fun sitting around a conference table debating how to structure your TFS 2005/2008 project hierarchy (perhaps consulting our branching guidance document or patterns & practices?), project collections add a new element to consider for 2010.  Below I’ve outlined some of the main considerations for you and your team to think about before taking advantage of project collections in TFS 2010.

    For Systems Administrators:  Pros & Cons


    Pros:

    • Flexibility to back up and restore collections individually.  This can reduce downtime, as restoring one collection will not impact users of other collections.
    • Since each collection is contained in its own database, these databases can be moved around a SQL infrastructure to increase scale and load balancing.
    • Could help consolidate IT resources.  If your organization currently leverages several TFS instances simply to isolate environments between departments, collections can allow the same TFS instance to be used while still providing this isolation.


    Cons:

    • Again, with flexibility comes complexity.  Since project collections use their own databases, each one must be backed up (and restored) individually.  Also, other admin tasks such as permissions and build controller configuration grow proportionately as additional collections are created.
    • Users and permissions need to be administered separately for each project collection (this may also be a project admin consideration).
    • There are more databases to restore in the event a full disaster recovery is needed.

    For Project Administrators:  Pros & Cons


    Pros:

    • Organizational hierarchy.  If your organization has multiple divisions/departments, you can break up your TFS project structure to reflect that organizational view.  This makes it much easier for users to identify (or constrain via permissions) which projects belong to their department.
    • Projects grouped in the same project collection can leverage similar reports (“dashboards”), work item types, etc.  They can also inherit source code from other grouped projects.


    Cons:

    • In Visual Studio, you can only connect to one collection at a time.  While it’s relatively trivial to connect to a different collection, you can’t view team projects from different project collections together in Team Explorer.
    • Relationship-specific operations you enjoy across team projects cannot span project collections.  This means there are several things you cannot do across collection boundaries, such as:
       • Branch/merge source code (you can do this cross-project, but not cross-collection)
       • Query work items (i.e. you can’t build a query that shows you all bugs across multiple collections)
       • Link items (i.e. you can’t link a changeset in one collection to a task in another collection)
    • Process templates are customized and applied at the project collection level, not the TFS server level.
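    That one-collection-at-a-time scoping shows up in the command-line tools too: every tf.exe invocation targets a single collection URL, so working across divisions means separate calls (the URLs below are hypothetical):

    ```shell
    # Each collection gets its own URL under the same TFS instance;
    # a workspace listing (like any tf.exe operation) is scoped to
    # exactly one collection per call.
    tf workspaces /collection:http://tfsserver:8080/tfs/DivisionA
    tf workspaces /collection:http://tfsserver:8080/tfs/DivisionB
    ```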

    What does it boil down to?

    It’s really about your need for isolation.  Do you ideally want to isolate by application/system, organization, or something else?  Do you foresee a need to share code, work items, or other assets across projects?  It’s a fun little decision tree:

     Basic, over-simplified decision tree
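    The gist of that decision tree can be sketched as a tiny function.  This is my own over-simplified encoding of the trade-off described above, not official guidance:

    ```python
    def choose_collections(must_share_assets: bool, need_isolation: bool) -> str:
        """Over-simplified project collection decision:
        projects that must share code, work item links, or queries have to
        live in the same collection (those operations can't span collection
        boundaries); groups needing hard isolation (separate database, user
        base, process templates) get their own collection."""
        if must_share_assets:
            return "same collection"   # cross-collection sharing isn't supported
        if need_isolation:
            return "separate collections"
        return "either works"

    print(choose_collections(must_share_assets=True, need_isolation=False))
    # -> same collection
    print(choose_collections(must_share_assets=False, need_isolation=True))
    # -> separate collections
    ```

    Note that sharing wins over isolation: if two projects must share assets, they simply can’t be split, whatever the org chart says.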

    So that’s it!  The devil is always hiding in the details, so do your own research and use your own discretion when deciding how to adopt project collections into your TFS deployment.  I anticipate more guidance on this topic to come out as TFS 2010 installations propagate throughout the world.

    For more resources and practical guidance on using Team Foundation Server, see the TFS team’s blog on MSDN.

    I hope this helps you somewhat!  And thanks for reading!


    Ordered Tests in TFS Build


    In an earlier article I discussed how to use an Ordered Test to control the execution order of Coded UI Tests (the same can be applied to other test types as well).  I received a few follow-up questions about how to do this in TFS Build so tests run in a particular order as part of a build.

    Here’s one way that’s remarkably easy.

    In my example, I have a project called JustTesting, which contains just a test project with 3 unit tests (which will always pass, BTW).


    I put those tests into an ordered test:


    In Solution Items, I open up my JustTesting.vsmdi file, create a new test list (called Ordered Tests), and add my ordered test to it.


    Once that’s done, I check everything into TFS (my Team Project’s name is “Sample CMMI”).

    Next, I set up a build definition (in Team Explorer, right-click Builds, and select “New Build Definition”).  Set whatever options you want (name, trigger, workspace, build defaults) but stop at “Process”.

    In the section named “2. Basic”, you’ll see that by default the Automated Tests field is set to (something like): “Run tests in assemblies matching **\*test*.dll using settings from $/Sample CMMI/JustTesting/Local.testsettings”. 


    Click on the ellipsis on the right of that to open the Automated Tests dialog:


    Remove the entry you see (or leave it if you wish to include that test definition), and then click “Add”.

    In the Add/Edit Test dialog, select the option for “Test metadata file (.vsmdi)”.  Use the browse button to find and select your desired .vsmdi file.  In my example, JustTesting.vsmdi.

    Uncheck “Run all tests in this VSMDI file”, then check the box next to your test list containing the ordered test.  In my example, the test list is called “Ordered Tests”.  Your dialog should look something like this:


    Click OK and your Automated Tests dialog should look like this:


    Click OK again, then save your build definition.

    Queue a new build using this definition.  Once complete, look at the build report to see your test results.
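    If you’d rather queue the build from a command prompt than from Team Explorer, TfsBuild.exe (which ships with Team Explorer) can do it.  The collection URL and build definition name below are hypothetical placeholders for yours:

    ```shell
    # Queue a build of the definition created above; the test list runs
    # as part of the build just as it would from Team Explorer.
    TfsBuild start http://tfsserver:8080/tfs/DefaultCollection "Sample CMMI" "Ordered Test Build"
    ```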



    It’s a few steps, but nothing ridiculous.  And I didn’t have to hack any XML files or do any custom coding.

    Hope this helps!


    Thank you, Denver! Goodnight!


    Thanks to the roughly 100 of you who attended the Denver VS.Net User Group.  While I’m sure you all showed up primarily for the free food and door prizes, I appreciate the level of interaction during my presentation last night (“Team Foundation Server: Today & Tomorrow”). 

    As promised, here is the presentation I used last night (posted on SkyDrive):

    Please send me feedback or any other questions you might have!
