Steve Lange @ Work

Steve Lange's thoughts on application lifecycle management, Visual Studio, and Team Foundation Server

  • Steve Lange @ Work

    Requirements Management in TFS: Part 1 (of 4): Overview


    There are several schools of thought on how to "do RM" ranging from the very lightweight (whiteboards, sticky notes, cocktail napkins, etc) to the robust (formal elicitation, authoring, validation and management of specifications).  Chances are your organization falls somewhere in between. 

    Past dictates future (thanks, Dr. Phil), and the same applies in how teams approach requirements management.  Basically, if you're used to managing your requirements using Word documents (and believe me, you're in the majority), most likely that's what you figure to do when starting a new project.

    The historically mainstream ways to manage requirements (Word, Excel, email) can be efficient (people know the tools, and again, it's how it's been done before) and satisfactory.  But with the application development projects of today becoming increasingly complex and distributed (both architecture and project teams), this process becomes more difficult to manage.  Throw in your regulation/compliance package-of-the-day and you quickly realize you need more.  Key capabilities around collaboration, audit/history, and traceability rise to the top of the priority list.

    As a result, the idea of managing requirements as individual elements (rather than parts of a larger specification document) is becoming increasingly popular.

    I hear this quite often:  "How does Team System support Requirements Management?"  Visual Studio Team System, or more specifically Team Foundation Server, possesses the plumbing needed for the above-mentioned capabilities out of the box as part of its inherent architecture.  TFS provides work item tracking to allow items (bugs, tasks, or in this case, requirements) to be treated as individually managed objects with their own workflows, attributes, and traceability.  However, while the infrastructure is there, TFS wasn't designed specifically to support a requirements management process. 

    But if you are looking at Team Foundation Server to manage your development process, I would suggest that you take a peek at how it can be used to support your business analysts from a requirements management perspective as well.  Again, although it's not specifically targeted at business analysts (it is on the radar; see Team System Futures), many of the capabilities of TFS can help support a productive RM process.

    This series will take a look at a few different ways that TFS can support requirements management.  In Part 2 I'll show a couple of ways to do this using TFS "natively" (without any add-ins/plug-ins); and in Part 3 I'll briefly discuss some 3rd party solutions that support requirements more directly yet still integrate with Team System.  And we'll close the loop in Part 4 with a summary.

    Next:  TFS - Out of the Box



    Requirements Management in TFS: Part 2 (of 4): TFS Out of the Box


    This is Part 2 of the series, Requirements Management in TFS.  For a brief background, please see Part 1: Overview.

    In this part, we'll discuss a couple of the primary ways to support requirements in Team Foundation Server: the Team Portal and Work Item Tracking.

    Team Portal (SharePoint)

    Team Explorer's Documents folder

    If you use some kind of document format for authoring and tracking your specifications (Word, PDF, etc.), you may already be using something like a SharePoint site to store them.  Or some kind of repository, even if it's just a network share somewhere.  Team Foundation Server automatically creates SharePoint-based (specifically Windows SharePoint Services) web portals when you create a new project.  These portals provide an easy, web-based way for interested parties to "check in" on a project's status, participate in discussions, post announcements, view process guidance documentation, and submit supporting documents and files to a document library.

    It's the document library that provides a natural fit for bringing your specifications a little more in line with the rest of the application lifecycle.  The document library allows analysts to remain in their comfort application (i.e. Word), but submit changes to requirements to a location that is much more accessible to those that will consume those requirements (architects, developers, testers, etc.).  Team Foundation Server users can access the document library from within Visual Studio Team System, TFS Web Access or other interfaces, thereby allowing them to more readily react to new changes and provide feedback as necessary. 

    Document Library in TFS Portal

    And now that the requirements documents are in the document library and managed (indirectly) by TFS, you can easily leverage the linking capabilities of TFS to add traceability between your specifications and source code, defects, tasks, and test results.  (How do you link work items to items in the document library?  Work items can be linked to hyperlinks, so you can simply link to the URL of the specific file in the SharePoint document library.)  This adds some real tangibility to your reporting, in that you can now view reports from TFS that, for example, show all development tasks along with their related requirements spec, changed source code, and validated test results.

    Bug linked to a changeset and work item

    Easy, right?  Well, yes and no.  There are some definite drawbacks to this approach (I'm leaving it up to you to decide if the good outweighs the bad), the primary being that you still don't have any more granular control over individual requirements changes than you did before.  Changes are still tracked, and linked, at the document level.  This can be challenging if you need to track changes to individual requirements (change tracking in Word will only take you so far) for auditing and compliance reasons.

    Benefits:
    • Analysts remain in their comfort application (Word, etc.)
    • SharePoint is a "natural" extension in Word/Office applications
    • Requirement specs are more easily consumed by other roles in the lifecycle
    • Provides a basic mechanism to enable traceability and better "cross-artifact" reporting

    Drawbacks:
    • Lack of item-level granularity
    • Document-level linking only (can't link to an individual requirement inside a specification document)
    • Document workflow is managed by SharePoint, whereas workflow for other lifecycle artifacts is managed by TFS

    Work Item Tracking

    Requirements Queries

    Team Foundation Server's work item tracking feature is a major moving part of the system.  Work Item Tracking (WIT) provides a robust yet flexible way to track any item of record throughout a lifecycle.  Some work item types provided with TFS include: bug, task, risk, scenario, and (to the point of this article) requirement.  Work items are managed in the TFS database alongside code, builds, test results, etc., and provide a proper level of granularity for controlling change and traceability.

    In the previous example, the SharePoint project portal didn't provide control over changes to individual requirements, nor did it allow linking to those individual elements.  Leveraging WIT in TFS addresses both of these shortcomings.  You can create and customize your own types of work items, giving teams complete control over which work item types are used, along with their fields, workflow, and even UI.  Say, for example, your team typically leverages three types of requirements: Business, Functional, and Non-Functional.  TFS allows you to create custom work item types that represent each of these categories of requirements.
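    As a rough sketch of what that customization looks like (trimmed heavily; the type name, custom field refname, and workflow states here are invented for illustration), a custom work item type is defined in the WITD XML format:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <WITD application="Work item type editor" version="1.0">
      <WORKITEMTYPE name="Functional Requirement">
        <DESCRIPTION>Tracks a single functional requirement as its own item of record.</DESCRIPTION>
        <FIELDS>
          <FIELD name="Title" refname="System.Title" type="String" reportable="dimension" />
          <!-- Hypothetical custom field; the refname is invented for this sketch -->
          <FIELD name="Requirement Priority" refname="MyCompany.Requirement.Priority" type="Integer" reportable="dimension" />
        </FIELDS>
        <WORKFLOW>
          <STATES>
            <STATE value="Proposed" />
            <STATE value="Approved" />
            <STATE value="Implemented" />
          </STATES>
          <TRANSITIONS>
            <TRANSITION from="" to="Proposed">
              <REASONS><DEFAULTREASON value="New requirement" /></REASONS>
            </TRANSITION>
            <TRANSITION from="Proposed" to="Approved">
              <REASONS><DEFAULTREASON value="Accepted by stakeholders" /></REASONS>
            </TRANSITION>
            <TRANSITION from="Approved" to="Implemented">
              <REASONS><DEFAULTREASON value="Verified in build" /></REASONS>
            </TRANSITION>
          </TRANSITIONS>
        </WORKFLOW>
      </WORKITEMTYPE>
    </WITD>
    ```

    A definition like this is typically imported with the witimport command-line tool (or baked into your process template), after which "Functional Requirement" shows up as a creatable work item type in Team Explorer.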

    Now that your requirements are managed as work items in TFS, you can take advantage of all the benefits of the work item tracking system (see benefits below).

    Requirements work items are accessed in the exact same manner as any other work item:

    Since work items are primarily accessed by way of queries in Team Explorer, teams can easily filter which requirements are displayed and accessed at certain points.
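    For instance, a query that pulls back just one category of requirements might look roughly like this in WIQL, the work item query language (the custom type name "Functional Requirement" is hypothetical):

    ```sql
    SELECT [System.Id], [System.Title], [System.State]
    FROM WorkItems
    WHERE [System.TeamProject] = @project
      AND [System.WorkItemType] = 'Functional Requirement'
    ORDER BY [System.Id]
    ```

    Saved as a team query, this gives analysts a one-click view of their requirements, further filterable by state, area, iteration, or any other field.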

    Reporting gets a considerable leg up using the work item tracking approach.

    The biggest challenge with this approach (in my opinion) is the shift in mindset.  In case you didn't notice, I haven't mentioned using Word in this section.  WIT gets more granular than Word does for managing item-level changes, and there is not currently a Microsoft-provided integration to Word from TFS.  There is often considerable resistance to change in that, "without the document, what will we do?"

    Benefits:
    • All changes are recorded and auditable
    • Links can be created between individual requirements and other work items (of any type), source code, test results, and hyperlinks
    • Workflow is enforced and controlled in the same manner as for all other work item types
    • Supporting information (screenshots, documents, UML diagrams, etc.) can be attached
    • Reporting can be much more granular (showing requirement implementation rates, impact analysis, scope creep)

    Drawbacks:
    • Change of interface may meet resistance (i.e., no more Word!)
    • Customization work is most likely involved (creating custom work item types, fields, and workflow)


    Getting Into the Code

    And lastly, if you're really into it, you can tap the Team Foundation Server SDK to get really creative.  For example, you can write a custom lightweight interface for business analysts to enter and track requirements work items in TFS.  Or create a custom report (although you might be better off building it via the server's reporting mechanism, SQL Server Reporting Services).  I have a little app that creates a "coverage analysis" spreadsheet in Excel showing coverage (i.e. links) between work item types (for example, I can see if there are any business requirements that have no corresponding functional requirements).
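    That coverage app isn't published anywhere, but the core logic is simple enough to sketch.  The Python below models it with in-memory data instead of real TFS SDK calls (the work item IDs, types, and links are all made up): for each business requirement, check whether at least one functional requirement is linked to it.

    ```python
    # Sketch of a requirements "coverage analysis": find business requirements
    # that have no linked functional requirement. Data is in-memory for
    # illustration; a real version would read work items and links via the TFS SDK.

    work_items = {
        101: {"type": "Business Requirement", "title": "Customers can pay online"},
        102: {"type": "Business Requirement", "title": "Orders ship within 2 days"},
        201: {"type": "Functional Requirement", "title": "Integrate payment gateway"},
    }

    # (source_id, target_id) pairs representing work item links
    links = [(101, 201)]

    def uncovered_business_requirements(items, links):
        """Return IDs of business requirements with no linked functional requirement."""
        covered = set()
        for a, b in links:
            for parent, child in ((a, b), (b, a)):  # treat links as bidirectional
                if (items.get(parent, {}).get("type") == "Business Requirement"
                        and items.get(child, {}).get("type") == "Functional Requirement"):
                    covered.add(parent)
        return sorted(
            wi_id for wi_id, wi in items.items()
            if wi["type"] == "Business Requirement" and wi_id not in covered
        )

    print(uncovered_business_requirements(work_items, links))  # [102]
    ```

    A real implementation would walk the work item store and each item's link collection through the TFS object model, then write the results out to an Excel worksheet.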

    Next:  TFS - Partner Integrations



    My 2 cents on Areas and Iterations in Team Foundation Server


    There’s not a huge amount of best-practice info out there regarding areas and iterations.  One interesting place to look is a blog post that describes how the Visual Studio team uses them.


    So here are my 2 cents (you can see how much that's worth these days!) on Areas and Iterations. 



    To me, areas are ways of tagging or organizing objects within a Team Project.  Typically, areas are used to define either logical, physical, or functional boundaries.  It’s a way to slice and dice a normally large project effort into more manageable, reportable, and easily identifiable pieces. 


    For example, let’s say we have a tiered web application managed in a single TFS project called “MySite”.  There are 3 major components to this app:  the web site, a web service, and a database.  If this is a decent-sized application, you might have 1,200 tasks in the system for this project.  But how do you know to which component a given task belongs?  What if I only wanted to see tasks for the web service piece?  Areas are a convenient way to handle this.  Set up areas like this:



       \Web Site

       \Web Service

       \Database



    Now you can specify an area of assignment for each task (work item), making it easy to effectively filter what you want to look at/work on.  You can use areas in both queries and reports as well.
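    Under the hood there's nothing magic about this filtering: an area path is just a backslash-delimited string, and an "under this area" filter is a prefix match.  A quick Python sketch with made-up tasks:

    ```python
    # Filter work items by area path prefix, mimicking "Under" filters on Area Path.

    tasks = [
        {"id": 1, "title": "Design home page",    "area": r"MySite\Web Site"},
        {"id": 2, "title": "Expose order lookup", "area": r"MySite\Web Service"},
        {"id": 3, "title": "Add customers table", "area": r"MySite\Database"},
    ]

    def tasks_under(tasks, area_prefix):
        """Return tasks whose area path equals the prefix or falls under it."""
        return [t for t in tasks
                if t["area"] == area_prefix
                or t["area"].startswith(area_prefix + "\\")]

    print([t["id"] for t in tasks_under(tasks, r"MySite\Web Service")])  # [2]
    ```

    Querying on the root path (here, "MySite") returns everything, which is exactly how the area tree lets you zoom in and out of a project.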


    You may optionally want to further dissect those major components to be even more specific:



       \Web Site

          \Layout & Design

             \Contact Us

       \Web Service

    One final aspect of Areas to consider is security.  You can set security options on each Area node which can dictate not only who can change the areas, but also who can view or edit work items in a particular Area.



    So if you think of Areas as slicing and dicing by “space”, think of Iterations as slicing and dicing by “time”.  Iterations are like “phases” of a lifecycle, which can dissect the timeline of a project effort into more manageable time-based pieces. 


    So going back to the “MySite” example, say the project management team wants to split the entire project into 3 cycles, Phase 1, Phase 2, and Phase 3.  Thus, your Iterations can mirror that:



       \Phase 1

       \Phase 2

       \Phase 3


    These Iterations can be phases within the entire life of a project, or phases within a given release of a project.  So if “MySite” is going to have multiple releases over time, my Iterations might look like this:



       \Release 1.0

          \Phase 1

          \Phase 2

          \Phase 3

       \Release 2.0

          \Phase 1

          \Phase 2

          \Phase 3


    Now you have categorization options for both space and time (now if only we had a continuum..) for your project, allowing you to assign your tasks or other work items not only to the appropriate functional area (Area), but also to the phase (time cycle) of the project.
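    As a toy illustration of slicing by both dimensions at once (the work items below are invented), grouping items by iteration and area is all a basic status rollup really does:

    ```python
    from collections import defaultdict

    # Group work items by (iteration, area) -- "time" and "space" respectively.

    work_items = [
        {"id": 1, "area": r"MySite\Web Site",    "iteration": r"MySite\Release 1.0\Phase 1"},
        {"id": 2, "area": r"MySite\Web Service", "iteration": r"MySite\Release 1.0\Phase 1"},
        {"id": 3, "area": r"MySite\Web Site",    "iteration": r"MySite\Release 1.0\Phase 2"},
    ]

    def group_by_iteration_and_area(items):
        """Bucket work item IDs by their (iteration, area) pair."""
        buckets = defaultdict(list)
        for wi in items:
            buckets[(wi["iteration"], wi["area"])].append(wi["id"])
        return dict(buckets)

    groups = group_by_iteration_and_area(work_items)
    print(groups[(r"MySite\Release 1.0\Phase 1", r"MySite\Web Site")])  # [1]
    ```

    TFS reports give you this kind of rollup for free; the point is just that Area and Iteration are the two pivot fields that make it possible.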


    VS/TFS 2012 Tidbits: Requesting a Code Review on Code Already Checked in


    As the Visual Studio family of products (Visual Studio, TFS, Test Professional) nears its 2012 release, I thought I’d bring some short hits – tidbits, if you will – to my blog. Some of these are pretty obvious (well-documented, or much-discussed), but some may be less obvious than you’d think. Either way, it’s always good to make sure the word is getting out there. Hope you enjoy!

    Requesting a Code Review on Code Already Checked in

    There’s been great hype about the new built-in code review capabilities in TFS 2012, and for good reason. The process is easy, effective, and most of all, audited.


    But did you know that “My Work” is not the only place from which you can kick off a code review?  You can also do a review on code that’s already been checked in. Go to the file in Source Control Explorer, then view its history. In the History window, right-click on the changeset/revision and select “Request Review”.


    This will load up the New Code Review form in Team Explorer:


    Notice that it not only brings in the files from the changeset (5 of them, in this example), but also any work items that were related to this changeset as well.  The check-in comments are used to populate the title of the code review, as well as the optional description.

    Off ya go!


    Requirements Management in TFS: Part 3 (of 4): Integrations


    In Part 2, I discussed how you can begin to manage requirements using the built-in facilities of Team Foundation Server.  While hopefully you can see how the infrastructure for a great requirements management solution already exists in TFS, the interface and client-side functionality isn't there.

    Enter Microsoft's amazing partner ecosystem.  Several technology partners have provided integrations (or at least interfaces) to help fill the requirements management gap.  If your organization needs a more requirements-specific solution for your RM practice (and you don't want to wait for Rosario), you might want to take a peek at the partner integrations below.  They are listed in no particular order, and I have pasted abstracts from each product's respective web site along with my personal comments (based on my exposure to the tools as well as comments from my peers and customers).  Also, I'm sure there are a few others, and I'll try to add more as they are brought to my attention:

    CaliberRM by Borland Software

    Abstract: Borland® CaliberRM™ is an enterprise software requirements management tool that facilitates collaboration, impact analysis and communication, enabling software teams to deliver on key project milestones with greater accuracy and predictability. CaliberRM also helps small, large and distributed organizations ensure that applications meet end users’ needs by allowing analysts, developers, testers and other project stakeholders to capture and communicate the users' voice throughout the application lifecycle.

    About CaliberRM for Visual Studio Team System:  CaliberRM for Visual Studio Team System allows teams to manage requirements throughout the software delivery process.  By integrating Microsoft Visual Studio Team System and Borland CaliberRM, you enable the free flow of requirements between business analysts, developers, testers, and business stakeholders. Software developers are able to respond rapidly to requirements authored by analysts using CaliberRM, through traces from requirements to tests and work items such as Change Requests and Tasks.

    CaliberRM is a client-server application that focuses on requirements management.  Its server is an object-oriented database (OODB) that stores requirements artifacts as uniquely identified objects in its data store.  It supports rich text, document generation (think mail merge on steroids), requirement hierarchies, glossaries, and traceability. 

    TeamSpec by Personify Design

    Abstract: Personify Design TeamSpec™ provides a rich project requirement management experience directly inside Microsoft Word. By making Team Foundation Server (TFS) project artifacts such as Scenarios, QOS Requirements, Risks, Issues, Bugs, Tasks, among others, first class citizens inside Microsoft Word, TeamSpec enables Application Lifecycle contributions by the Business Analyst, Project Manager, and Executive roles.


    MindManager by Mindjet

    Abstract: Use MindManager to create software requirements documents and turn those requirements into work items in Microsoft Visual Studio Team System.  The requirements map then becomes a bi-directional link to the work items.

    MindManager Pro 7 enables companies and individuals to work smarter, think creatively and save time by revolutionizing the way they visually capture and manage information.

    With MindManager 7, you will:

    • Align organizational strategy and objectives by visually conveying information in a single, centralized and coherent view.
    • Empower people to accelerate business processes by enhancing strategic thinking, facilitating quicker project planning and increasing team productivity.
    • Engage and excite employees by engaging people in stimulating real-time interactions during process planning.
    • Bring better products and services to market faster by enforcing best practices and making existing plans, processes and ideas accessible.
    • Stay ahead of the competition and foster innovation by increasing team interactions during the early stages of strategic planning.
    • Win new business faster and improve business relationships by quickly capturing relevant information and improving communication with clients.


    RavenFlow by Raven

    Abstract: RAVEN is an automated collaborative solution for detecting requirements errors early. It enables enterprises to elicit, specify, analyze, and validate requirements. RAVEN produces functional specifications, both graphical and textual, that everyone can understand.

    RAVEN automatically generates visual models of requirements, making errors easily visible to all stakeholders. Common requirements errors, such as ambiguous, conflicting, or missing requirements, can be detected and corrected early, reducing software costs and development time while increasing software quality.


    stpBA StoryBoarding by stpSoft

    Abstract: stpBA Storyboarding for Microsoft® Visual Studio® Team System allows a business analyst or analyst developer to capture, define and validate requirements and scenarios in a Team System project through GUI storyboarding. Requirements can be imported from stpsoft Quew. The tool seamlessly integrates with Team System process templates and generates screen flow diagrams, HTML storyboards, UI specifications, functional specifications, Team System work items and test scripts.


    RASK (Requirements Authoring Starter Kit) - MSDN Offering

    Abstract: The Requirements Authoring Starter Kit (RASK) provides a customizable requirements-authoring solution for software development teams. RASK serves two purposes. It provides the basis of a Requirements Authoring solution and illustrates how to access Microsoft Visual Studio 2005 Team Foundation Server programmatically from Microsoft Visual Studio 2005 Tools for the Microsoft Office System (Visual Studio 2005 Tools for Office). RASK has broad functionality that you can extend with minimal effort.

    RASK integrates several Microsoft products into the solutions: Microsoft Office Word 2003, Visual Studio 2005 Tools for Office, Microsoft SQL Server 2005, and Microsoft Windows SharePoint Services. In addition, RASK uses Microsoft Visual Studio 2005 Team Suite and Visual Studio 2005 Team Foundation Server, which are part of the Microsoft Visual Studio 2005 Team System.

    RASK is not a complete requirements-authoring application and is not intended to compete with existing requirements-management applications.


    Optimal Trace by Compuware

    Abstract:  Optimal Trace is Compuware’s business requirements definition and management solution, built to enable IT and the business to collaborate more effectively and improve IT project delivery outcomes. According to CIO magazine, ineffective requirements are the cause of over 70 percent of IT project failures. Compuware Optimal Trace addresses this problem with “structured requirements.” This approach captures software requirements from the perspective of the user, complete with visual storyboards and traceable relationships throughout the project lifecycle to business needs. Using structured requirements, IT organizations ensure that they are accurately and completely capturing the right requirements, communicating them effectively and dramatically improving their ability to deliver on the expectations of the business.



    Next:  Summary



    It’s Official: VS 2010 Branding & Pricing


    Microsoft just announced final branding and pricing for the Visual Studio 2010 lineup!  Here’s what it looks like (you can call this either the stadium or Lego view):



    There are three minor changes to product names, listed below:

    Old Name → New Name

    • Microsoft Visual Studio Test Elements 2010 → Microsoft Visual Studio Test Professional 2010

    • Microsoft Visual Studio Team Lab Management 2010 → Microsoft Visual Studio Lab Management 2010

    • Microsoft Test and Lab Manager* → Microsoft Test Manager 2010*

    * Not available as a separate product for purchase.


    Below is the suggested pricing (USD) for each of the 2010 products.

    Product | Buy | Upgrade | Buy w/ 1-yr MSDN | Renew w/ 1-yr MSDN
    Visual Studio 2010 Ultimate | - | - | $11,899 | $3,799
    Visual Studio 2010 Premium | - | - | $5,469 | $2,299
    Visual Studio 2010 Professional | $799 | $549 | $1,199 | $799
    Visual Studio Test Professional 2010 | - | - | $2,169 | $899
    Visual Studio Team Foundation Server 2010 | $499 | $399 | - | -
    Visual Studio Team Foundation Server 2010 CAL | $499 | - | - | -
    Visual Studio Load Test Virtual User Pack 2010 (1,000 Virtual Users) | $4,499 | - | - | -

    * Subscription contents vary by purchased product.

    A couple things to note:

    • TFS 2010 and a TFS 2010 CAL are included with every MSDN subscription
    • The above prices are suggested list prices.  Companies buying development tool licenses usually go through volume licensing, which usually results in lower prices.

    Not sure what product has what?

    Visual Studio 2010 lineup - from the Rangers 2010 Quick Reference Guide

    Here’s another angle:

    Visual Studio 2010 lineup 

    For more details on each feature, you can view a matrix here.


    Team Foundation Server vs. Team Foundation Service


    You’ve probably found a few comparisons on the interwebs comparing the “traditional”, on-premise TFS with the new cloud-hosted Team Foundation Service.  I get asked about this a lot – as a result, I thought I’d share the slide deck I used to drive this conversation.  Please let me know if you have any questions!


    Basically, TF Service is a nice way to get up and running quickly, without worrying about infrastructure, backups, etc.  What you lose is some customization, lab management, and SSRS reporting.

    Happy developing!


    One-Click Check-in on Southwest Airlines with your Windows Mobile Phone


    NOTE:  About a year or two after this posting, Southwest updated their mobile site, so this no longer works.

    Okay, I just had to share this nugget of a time-saver (If you know about it already, then this won't seem very original..).  I got this tip from a colleague of mine, so I'm not taking credit here, but rather just passing it along.

    If you haven't flown Southwest Airlines before, it's open seating: first come, first served, based upon passengers' order of check-in.  That means that if you check in first, you board first. 

    • First 60 to check in get an "A" boarding pass (numbered 1-60)
    • Second 60 to check in get a "B" boarding pass (numbered 1-60)
    • Everyone else gets a "C" boarding pass (numbered 1-60)

    You can check-in 24 hours before departure.  So what do you do if you want an "A" boarding pass but aren't at your computer to check-in online?

    Southwest Airlines has a mobile website which allows you to check-in via your phone and then print your boarding pass at the airport.  So that saves you some time.  You go to the site with your Windows Mobile phone, enter your first name, last name, and confirmation number, and you're all set.

    Check-in page on SWA's mobile site

    Fill in your information, and (assuming you're within the 24-hour check-in window) you'll arrive here:


    Click "Check In All", and you'll be checked into your flight:


    Then just either print your boarding pass later on your printer, or do it at a kiosk at the airport.

    But wait, there's more..

    But what if you don't have the confirmation handy, say, while you're driving in your car? 

    You can link to the check-in page's submission directly by embedding your name and confirmation number in the below URL:

    Following the link directly will take you to the "Checkin Availability" page where all you need to do is click the "Check In All"  button.

    What I do is save the "template" URL as part of my Outlook Contact entry for Southwest.  When I book a flight and add the flight to my calendar, I put the completed URL in the calendar entry, then set a 1 day (24 hour) reminder for the flight. 

    When I get the reminder, I simply open the calendar entry, click the link, and check-in.  It takes less than 30 seconds.

    Another way to store the completed URL is to create an Outlook task ("Check in for tomorrow's flight") with the URL, with a reminder or due date set for 24 hours before the flight.

    And since my Windows Mobile device automatically syncs with Exchange, my calendar and task entries, including their reminders, are readily accessible from my phone.

    Lastly, I also use TripIt to organize and share my travel itineraries with family and friends.  I can add the direct check-in URL to my itinerary and access it on my mobile phone via TripIt's mobile site.


    Panels vs. Context: A Tale of Two Visual Studios and a Practical Explanation of the Value of CodeLens


    If you have Visual Studio 2013 Ultimate, you know CodeLens is amazing.  If you don’t know what CodeLens is, I hope this helps.  I have a lot of customers who ask me about CodeLens, what it is, and how valuable I think it is for an organization.  Here’s my response.

    It’s really a tale of two Visual Studios, if you think about it.

    A Visual Studio Full of Panels

    Let’s say you’re looking at a code file, specifically a method.  Your Visual Studio environment may look like this:


    I’m looking at the second Create method (the one that takes a Customer).  If I want to know where this method may be referenced, I can “Find All References”, either by selecting it from the context menu, or using Shift + F12. Now I have this:


    Great!  Now, if I decide to change this code, will it still work?  Will my tests still work?  In order to figure that out, I need to open my Test Explorer window.


    Which gives me a slightly more cluttered VS environment:


    (Now I can see my tests, but I still need to try and identify which tests actually exercise my method.)

    Another great point of context to have is knowing whether I’m looking at the latest version of my code.  I’d hate to make changes to an out-of-date version and grant myself a merge conflict.  So next I need to see the history of the file.


    Cluttering my environment even more (because I don’t want to take my eyes off my code, I need to snap it somewhere else), I get this:


    Okay, time out. 

    Yes, this looks pretty cluttered, but I can organize my panels better, right?  I can move some panels to a second monitor if I want, right?  Right on both counts.  By doing so, I can get a multi-faceted view of the code I’m looking at.  However, what if I start looking at another method, or another file?  The “context” of those other panels doesn’t follow what I’m doing.  Therefore, if I open the EmployeesController.cs file, my “views” are out of sync!


    That’s not fun.

    A Visual Studio Full of Context

    So all of the above illustrates the two main benefits of something like CodeLens.  CodeLens inserts easy, powerful, at-a-glance context for the code you’re looking at.  If it’s not turned on, do so in Options:


    While you’re there, look at all the information it’s going to give you!

    Once you’ve enabled CodeLens, let’s reset to the top of our scenario and see what we have:


    Notice an “overlay” line of text above each method.  That’s CodeLens in action. Each piece of information is called a CodeLens Indicator, and provides specific contextual information about the code you’re looking at.  Let’s look more closely.




    References shows you exactly that – references to this method.  Click on that indicator and you can see and do some terrific things:


    It shows you the references to this method, where those references are, and even allows you to display those references on a Code Map:




    As you can imagine, this shows you tests for this method.  This is extremely helpful in understanding the viability of a code change.  This indicator lets you view the tests for this method, interrogate them, as well as run them.


    As an example, if I double-click the failing test, it will open the test for me.  In that file, CodeLens will inform me of the error:


    Dramatic pause: This CodeLens indicator is tremendously valuable in a TDD (Test-Driven Development) workflow.  Imagine sitting with your test file and code file side by side, turning on “Run Tests After Build”, and using the CodeLens indicator to get immediate feedback about your progress.



    This indicator gives you very similar information to the next one, but lists the authors of this method for at-a-glance context.  Note that the latest author is the one noted in the CodeLens overlay.  Clicking on this indicator provides several options, which I’ll explain in the next section.




    The Changes indicator tells you information about the history of the file as it exists in TFS, specifically Changesets.  First, the overlay tells you how many recent changes there are to this method in the current working branch.  Second, if you click on the indicator you’ll see there are several valuable actions you can take right from that context:


    What are we looking at?

    • Recent check-in history of the file, including Changeset ID, Branch, Changeset comments, Changeset author, and Date/time.
    • Status of my file compared to history (notice the blue “Local Version” tag telling me that my code is 1 version behind current).
    • Branch icons tell me where each change came from (current/parent/child/peer branch, farther branch, or merge from parent/child/unrelated (baseless)).

    Right-clicking on a version of the file gives you additional options:


    • I can compare my working/local version against the selected version
    • I can open the full details of the Changeset
    • I can track the Changeset visually
    • I can get a specific version of the file
    • I can even email the author of that version of the file
    • (Not shown) If I’m using Lync, I can also collaborate with the author via IM, video, etc.

    This makes it a heck of a lot easier to understand the churn or velocity of this code.

    Incoming Changes


    The Incoming Changes indicator was added in 2013 Update 2, and gives you a heads up about changes occurring in other branches by other developers.  Clicking on it gives you information like:


    Selecting the Changeset gives you the same options as the Authors and Changes indicators.

    This indicator has a strong moral for anyone who’s ever been burned by having to merge a bunch of stuff as part of a forward or reverse integration exercise:  If you see an incoming change, check in first!

    Work Items (Bugs, Work Items, Code Reviews)


    I’m lumping these last indicators together because they are effectively filtered views of the same larger content: work items.  Each of these indicators gives you information about work items linked to the code in TFS.



    Knowing if/when there were code reviews performed, tasks or bugs linked, etc., provides fantastic insight about how the code came to be.  It answers the “how” and “why” of the code’s current incarnation.


    A couple final notes:

    • The indicators are cached so they don’t put unnecessary load on your machine.  As such, they are scheduled to refresh at specific intervals.  If you don’t want to wait, you can refresh the indicators yourself by right-clicking the indicators and choosing “Refresh CodeLens Team Indicators”.


    • There is an additional CodeLens indicator in the Visual Studio Gallery – the Code Health Indicator. It gives method maintainability numbers so you can see how your changes are affecting the overall maintainability of your code.
    • You can dock the CodeLens indicators as well – just know that if they dock, they act like other panels and will be static.  This means you’ll have to refresh them manually (this probably applies most to the References indicator).
    • If you want to adjust the font colors and sizes (perhaps to save screen real estate), you can do so in Tools –> Options –> Fonts and Colors.  Choose “Show settings for” and set it to “CodeLens”.


    I hope you find this helpful!


    Thoughts on TFS Project Collections


    New to TFS 2010, Team Project Collections (TPCs) provide an additional layer of project organization/abstraction above the Team Project level (see the MSDN article, “Organizing Your Server with Project Collections”).

    I’ve been asked numerous times over the past couple of months about the intention of project collections, their flexibility and limitations.  Below are simply my educated thoughts on the subject.  Please do your due diligence before deciding how you wish (or wish not) to implement project collections in your environment.

    You can use collections to more tightly couple related projects, break up the administrative process, or to dramatically increase scale.  But the primary design goal behind introducing project collections is around isolation (of code, projects, or groups within an organization) in a way that provides all the benefits of TFS, scoped to a defined level within a single instance of TFS.  You’re effectively partitioning TFS.

     Basic project collection structure


    If you have ever used TFS 2005 or 2008, think of it this way.  A project collection effectively compartmentalizes all the capabilities you’ve grown to love in a single TFS 2005/2008 instance:

    Project collection compartmentalization

    I won’t go into how you create/edit/delete project collections.  Just know that you can.  (BTW – for those of you upgrading from an earlier version of TFS, your existing projects will go into a single default project collection, named “Default Collection”.  Original, right?)

    Consider this (over-simplified) example.  I have 4 projects in my server, currently in a single (“default”) collection:

    Single collection view

    Say Project A and Project B are used by “Division A” in my company, and Agile1 and Sample Win App are used by “Division B”.  Project A and Project B share some code and leverage the same user base.  The assets in each division’s projects are in no way related to the other.  Consequently, I’d love to take advantage of project collections and separate our divisions’ stuff.  A more practical implementation of project collections might look like this:

    I build out my collections using the TFS Administration Console to look like this:

    Viewing my project collections in the admin console

    Once that’s done, I can ultimately end up with such a structure that my desired projects are contained in their respective organization’s collection:

    Division A’s stuff:

    Division A's collection

    Division B’s stuff:

    Division B's collection

    Now each division’s stuff is effectively compartmentalized.  No shared process templates, no shared user base, and no shared database (which means one division’s screw-up won’t affect another division’s work).

    Okay, so I lied a little – I earlier said I wouldn’t go into detail about how to CRUD collections.  But I will mention one thing here, which will add context to the above scenario.  In the above, I had a single collection that I effectively wanted to split into two collections (i.e. go from “Default Collection” to “Division A” and “Division B”).  This is surprisingly easy to do (more complicated than drag & drop, but not ridiculous either).  The documentation for splitting a collection lists 15 main steps to accomplish this, but basically what you’re doing is cloning a collection and then deleting what you don’t want.

    See?  I told you it would be a simple example.  But if you expand this to think of a TFS environment with 100 projects (instead of my puny four), you get the point.

    This all sounds pretty cool, right?  It. Is. Very. Cool.  Project collections can be used for various purposes in your TFS deployment (consolidating related development efforts, scaling the SQL backend, mapping TFS hierarchy to organization hierarchy, etc.).  However, with flexibility comes complexity.  If you had fun sitting around a conference table debating how to structure your TFS 2005/2008 project hierarchy (perhaps consulting our branching guidance document or patterns & practices?), project collections add a new element to consider for 2010.  Below I’ve outlined some of the main considerations for you and your team to think about before taking advantage of project collections in TFS 2010.

    For Systems Administrators:  Pros & Cons


    • Flexibility to back up/restore collections individually.  This can reduce downtime, as restoring one collection will not impact users of other collections.
    • Since each collection is contained in its own database, these databases can be moved around a SQL infrastructure to increase scale and load balancing.
    • Could help consolidate IT resources.  If your organization currently leverages several TFS instances simply to isolate environments between departments, collections can allow the same TFS instance to be used while still providing this isolation.


    • Again, with flexibility comes complexity.  Since project collections use their own databases, each one must be backed up (and restored) individually.  Also, other admin tasks such as permissions and build controller configuration grow proportionately as additional collections are created.
    • Users and permissions need to be administered separately for each project collection (this may also be a project admin consideration).
    • There are more databases to restore in the event a full disaster recovery is needed.

    For Project Administrators:  Pros & Cons


    • Organizational hierarchy.  If your organization has multiple divisions/departments, you can break up your TFS project structure to reflect that organizational view.  This makes it much easier for users to identify (or constrain via permissions) which projects belong to their department.
    • Projects grouped in the same project collection can leverage similar reports (“dashboards”), work item types, etc.  They can also inherit source code from other grouped projects.


    • In Visual Studio, you can only connect to one collection at a time.  While it’s relatively trivial to simply connect to a different collection, you can’t view team projects in Team Explorer that reside in different project collections.
    • Relationship-specific operations you enjoy across team projects cannot span project collections.  This means that there are several things you cannot do across collection boundaries, such as:
      • Branch/merge source code (you can do this cross-project, but not cross-collection)
      • Query work items (i.e. you can’t build a query that will show you all bugs across multiple collections)
      • Link items (i.e. you can’t link a changeset in one collection to a task in another collection)
    • Process templates are customized and applied at the project collection level, not the TFS server level
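The collection boundary shows up in the client APIs as well: you connect to a specific collection, not to the server as a whole, because the collection name is part of the URL.  Here is a minimal sketch using the TFS 2010 client object model (the server name and “DivisionA” collection are hypothetical, and it assumes the Microsoft.TeamFoundation.Client assembly is referenced):

```csharp
using System;
using Microsoft.TeamFoundation.Client;

class ConnectToCollection
{
    static void Main()
    {
        // Note the collection name at the end of the URL.  Everything this
        // connection exposes (projects, source, work items) is scoped to
        // the "DivisionA" collection only.
        TfsTeamProjectCollection collection = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DivisionA"));
        collection.EnsureAuthenticated();
        Console.WriteLine("Connected to: " + collection.Name);
    }
}
```

To reach projects in a different collection, you build a second connection with that collection’s URL; there is no single connection that spans both.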

    What does it boil down to?

    It’s really about your need for isolation.  Do you ideally want to isolate by application/system, organization, or something else?  Do you foresee a need to share code, work items, or other assets across projects?  It’s a fun little decision tree:

     Basic, over-simplified decision tree

    So that’s it!  The devil is always hiding in the details, so do your own research and use your own discretion when deciding how to adopt project collections into your TFS deployment.  I anticipate more guidance on this topic to come out as TFS 2010 installations propagate throughout the world.

    For more resources and practical guidance on using Team Foundation Server, see the TFS team’s blog on MSDN.

    I hope this helps you somewhat!  And thanks for reading!


    Requirements Management in TFS: Part 4 (of 4): Summary


    Every organization approaches the concept of "requirements" differently.  Factors include general history, skill set, complexity, and agility.  Many development organizations are adopting Team Foundation Server to help improve team communication & collaboration, project control & visibility, and generally a more integrated experience across the various actors in the application lifecycle. 

    The more pervasive TFS becomes in an organization, the more I'm asked about managing requirements within the confines of Team System.  Some shops want to know about how to integrate more RM-specific applications into the platform, while others want to leverage TFS as much as possible and wait until Microsoft releases a requirements management solution (I know, I know, Word is the most widely-used requirements tool in the world - but I think you know what I mean by now!).

    If you're trying to choose which path to take (TFS-only or a partner integration), here are a few basic considerations:


    TFS Only

    Benefits:
    • Affordability (only a TFS CAL is required)
    • Full integration with the rest of the application lifecycle (existing infrastructure is leveraged for reporting & visibility)
    • Consistent capture & storage mechanism for all project artifacts

    Drawbacks:
    • Lack of specific focus on the analyst role
    • Interface may be a bit "heavy" and counter-intuitive for analysts

    Partner Integrations

    Benefits:
    • Can immediately provide requirements-specific capabilities (rich-text, use case diagramming, etc.)
    • Many can trace/link to work items in TFS, providing end-to-end visibility

    Drawbacks:
    • Cost (most partner tools require their own licenses, and each user still requires a TFS CAL from Microsoft.  Maintenance costs may be a factor as well)
    • Additional skill set is required for the partner tool

    Some requirements-related resources (other links can be found in the various parts of this series):

    Well, I hope you at least found this series worth the time it took you to read it.  I welcome any comments and feedback, as this topic is always shifting in perception, intention, and schools of thought.



    Running Code Metrics as Part of a TFS 2010 Build – The Poor Man’s Way


    Code Metrics, not to be confused with code analysis, has always been tough (read: impossible) to run as part of a build in Team Foundation Server.  Previously, the only way to run code metrics was to do so inside Visual Studio itself.

    In January, Microsoft released the Visual Studio Code Metrics PowerTool, a command line utility that calculates code metrics for your managed code and saves the results to an XML file (Cameron Skinner explains in detail on his blog). The code metrics calculated are the standard ones you’d see inside Visual Studio (explanations of metric values):

    • Maintainability Index
    • Cyclomatic Complexity
    • Depth of Inheritance
    • Class Coupling
    • Lines Of Code (LOC)

    Basically, the power tool adds a Metrics.exe file to an existing Visual Studio 2010 Ultimate or Visual Studio 2010 Premium or Team Foundation Server 2010 installation.

    So what does this mean?  It means that you can now start running code metrics as part of your builds in TFS.  How?  Well, since this post is titled “The Poor Man’s Way”, I’ll show you the quick and dirty (read: it works but is not elegant) way to do it.

    As a note, Jakob Ehn describes a much more elegant way to do it, including a custom build activity, the ability to fail a build based on threshold, and better parameterization.  I really like how flexible it is!  Below is my humble, quick & dirty way.

    The below steps will add a sequence (containing individual activities) to the build process workflow that will run just prior to copying binaries to the drop folder.  (These steps are based on modifying DefaultBuildTemplate.xaml.)

    1. Open the build process template you want to edit (it may be simpler to create a new template, based on the DefaultBuildProcessTemplate.xaml, to work with).
    2. Expand the activity “Run On Agent”
    3. Expand the activity “Try, Compile, Test and Associate Changesets and Work items”
      1. Click on “Variables”, find BuildDirectory, and set its scope to “Run On Agent”
    4. In the “Finally” area, expand “Revert Workspace and Copy Files to Drop Location”
    5. From the toolbox (Control Flow tab), drag a new Sequence onto the designer, just under/after the “Revert Workspace for Shelveset Builds”. (Adding a sequence will allow you to better manage/visualize the activities related to code metrics generation).
      1. In the Properties pane, set the DisplayName to “Run Code Metrics”
    6. From the toolbox (Team Foundation Build Activities), drag a WriteBuildMessage activity into the “Run Code Metrics” sequence.
      1. In the Properties pane
        1. set DisplayName to Beginning Code Metrics
        2. set Importance to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.Normal (or adjust to .High if needed)
        3. set Message to “Beginning Code Metrics: “ & BinariesDirectory
    7. From the toolbox, drag an InvokeProcess activity into the sequence below the “Beginning Code Metrics” activity (this activity will actually execute code metrics generation).
      1. In the Properties pane
        1. set DisplayName to Execute Code Metrics
        2. set FileName to “””<path to Metrics.exe on the build machine>”””
        3. set Arguments to “/f:””” & BinariesDirectory & “\<name of assembly>”” /o:””” & BinariesDirectory & “\MetricsResult.xml”  (you can also omit the assembly name to run metrics against all assemblies found)
        4. set WorkingDirectory to BinariesDirectory
    8. (optional) From the toolbox, drag another InvokeProcess activity below “Execute Code Metrics” (This activity will copy the XSD file to the binaries directory)
      1. In the Properties pane
        1. set DisplayName to Copy Metrics XSD file
        2. set FileName to “xcopy”
        3. set Arguments to “””<path to MetricsReport.xsd>”” ””” & BinariesDirectory & “”””
    9. Save the XAML file and check it in to TFS.
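The quoting in the Arguments expression above is easy to get wrong, so for reference, here is roughly the command line the InvokeProcess activity ends up running (the paths shown are illustrative; the power tool installs Metrics.exe under the Visual Studio Team Tools folder by default):

```
"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\Metrics.exe" /f:"C:\Builds\1\Binaries\MyApp.dll" /o:"C:\Builds\1\Binaries\MetricsResult.xml"
```

Each doubled “” inside the VB-style workflow expression produces one literal quote in the final command line, which is what keeps paths containing spaces intact.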

    The sequence you just added should look like this (boxed in red):

    Workflow after adding code metrics sequence

    You basically have a sequence called “Run Code Metrics” which first spits out a message to notify the build that code metrics are beginning.

    Next, you actually execute the Metrics.exe executable via the InvokeProcess activity, which dumps the results (XML) file in the Binaries directory (this makes it simpler to eventually copy into the drop folder).

    The “Copy Metrics XSD file” activity is another InvokeProcess activity which brings along the appropriate XSD file with the metrics result file.  This is optional of course.

    After you run a build using this updated template, your drop folder should have something like this:

    Drop folder after running build with new template

    Pay no attention to the actual binaries – it’s the presence of MetricsReport.xsd and MetricsResults.xml that matter.

    Pretty cool, but there’s one annoyance here!  The metrics results are still in XML, and aren’t nearly as readable as the results pane inside Visual Studio:

    MetricsResults.xml on top, Code Metrics Results window in VS on bottom

    Don’t get me wrong – this is a huge first step toward a fully-baked out-of-VS code metrics generator.  The actual report generation formatting will surely be improved in future iterations.

    I decided to take one additional step and write a simple parser and report generator to take the XML results and turn them into something more pretty, like HTML.

    Before I dive into code, this is the part where I remind you that I’m not (nor have I ever been) a developer by trade, so the code in this blog is purely for functional example purposes.  ;)

    I created a relatively simple console application to read in a results XML file, parse it, and spit out a formatted HTML file (using a style sheet to give some control over formatting).

    I’m posting the full example code to this post, but below are the highlights:

    I first created some application settings to specify the thresholds for Low and Moderate metric values (anything above ModerateThreshold is considered “good”).

    Settings to specify Low and Moderate metric thresholds

    I created a class called MetricsParser, with properties to capture the results XML file path, the path to output the report, and a path to a CSS file to use for styling.

    To store individual line item results, I also created a struct called ResultEntry:

        struct ResultEntry
        {
            public string Scope { get; set; }
            public string Project { get; set; }
            public string Namespace { get; set; }
            public string Type { get; set; }
            public string Member { get; set; }
            public Dictionary<string, string> Metrics { get; set; }
        }

    I then added:

    private List<ResultEntry> entries;

    which captures each code metrics line item.

    If you look at the results XML file, you can see that in general the format cascades itself, capturing scope, project, namespace, type, then member.  Each level has its own metrics.  So I wrote a few methods which effectively recurse through all the elements in the XML file until a complete list of ResultEntry objects is built.

    private void ParseModule(XElement item)
    {
        string modulename = item.Attribute("Name").Value.ToString();
        ResultEntry entry = new ResultEntry
        {
            Scope = "Project",
            Project = modulename,
            Namespace = "",
            Type = "",
            Member = ""
        };

        List<XElement> metrics = (from el in item.Descendants("Metrics").First().Descendants("Metric")
                                  select el).ToList<XElement>();
        entry.Metrics = GetMetricsDictionary(metrics);
        entries.Add(entry);

        List<XElement> namespaces = (from el in item.Descendants("Namespace")
                                     select el).ToList<XElement>();
        foreach (XElement ns in namespaces)
            ParseNamespace(ns, modulename);
    }

    Bada-bing, now we have all our results parsed.  Next, to dump them to an HTML file.

    I simply used HtmlTextWriter to build the HTML, then write it to a file.  If a valid CSS file was provided, the CSS was embedded directly into the HTML header:

    #region Include CSS if available
    string cssText = GetCssContent(CssFile);
    if (cssText != string.Empty)
    {
        // embed the CSS text in a <style> block inside the HTML <head>
    }
    #endregion

    After that, I looped through my ResultEntry objects, inserting them into an HTML table, applying CSS along the way.  At the end, the HTML report is saved to disk, ideally in the build’s binaries folder.  This then allows the report to be copied along with the binaries to the TFS drop location.
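The table-building loop is plain HtmlTextWriter plumbing.  A trimmed sketch of the idea (entries is the parsed ResultEntry list from above; CssClassFor is a hypothetical helper that maps a metric value to a low/moderate/good CSS class, and the full attached code differs in detail):

```csharp
using (StringWriter buffer = new StringWriter())
using (HtmlTextWriter html = new HtmlTextWriter(buffer))
{
    html.RenderBeginTag(HtmlTextWriterTag.Table);
    foreach (ResultEntry entry in entries)
    {
        html.RenderBeginTag(HtmlTextWriterTag.Tr);

        // First cell: the scope/member this row describes.
        html.RenderBeginTag(HtmlTextWriterTag.Td);
        html.Write(entry.Member);
        html.RenderEndTag();

        // One cell per metric, styled according to its threshold.
        foreach (KeyValuePair<string, string> metric in entry.Metrics)
        {
            html.AddAttribute(HtmlTextWriterAttribute.Class, CssClassFor(metric));
            html.RenderBeginTag(HtmlTextWriterTag.Td);
            html.Write(metric.Value);
            html.RenderEndTag();
        }

        html.RenderEndTag(); // </tr>
    }
    html.RenderEndTag(); // </table>

    File.WriteAllText(ReportFile, buffer.ToString());
}
```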

    Code Metrics Results HTML Report

    You’ll notice that this layout looks much like the code metrics in Visual Studio if exported to Excel.

    So again, not the most sophisticated solution, but one that a pseudo-coder like me could figure out.  You can expand on this and build all of this into a custom build activity which would be much more portable.

    Here is the code for MetricsParser:

    Again I recommend looking at Jakob’s solution as well.  He puts a much more analytical spin on build-driven code metrics by allowing you specify your own thresholds to help pass or fail a build.  My solution is all about getting a pretty picture

    Happy developing!


    Data-Driven Tests in Team System Using Excel as the Data Source


    There is some documentation to explain this already, but below is a step-by-step that shows how to use an Excel spreadsheet as a Data Source for both unit and web tests.

    First, let’s set the stage.  I’m going to use a solution containing a class library and a web site. 


    The class library has a single class with a single method that simply returns a “hello”-type greeting. 

    namespace SimpleLibrary
    {
        public class Class1
        {
            public string GetGreeting(string name)
            {
                return "Hello, " + name;
            }
        }
    }

    For my VB friends out there:
    Namespace SimpleLibrary
        Public Class Class1
            Public Function GetGreeting(ByVal name As String) As String
                Return "Hello, " & name
            End Function
        End Class
    End Namespace

    Unit Testing

    So now I’m going to create a unit test to exercise the “GetGreeting” method.  (As always, tests go into a Test project.  I’m calling mine “TestStuff”.)


    Here’s my straightforward unit test:

    [TestMethod]
    public void GetGreetingTest()
    {
        Class1 target = new Class1();
        string name = "Steve";
        string expected = "Hello, " + name;
        string actual;
        actual = target.GetGreeting(name);
        Assert.AreEqual(expected, actual);
    }

    In VB:

    <TestMethod()> _
    Public Sub GetGreetingTest()
       Dim target As Class1 = New Class1
       Dim name As String = "Steve"
       Dim expected As String = "Hello, " & name
       Dim actual As String
       actual = target.GetGreeting(name)
       Assert.AreEqual(expected, actual)
    End Sub

    I’ll run it once to make sure it builds, runs, and passes:


    I have an Excel file with the following content in Sheet1:


    Nothing fancy, but I reserve the right to over-simplify for demo purposes.  :)
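For the sake of the walkthrough, picture the sheet as a single column whose header row reads FirstName (that header is the column name the test code will look up later; the names below it are illustrative):

```
FirstName
Steve
Brian
Julia
```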

    To create a data-driven unit test that uses this Excel spreadsheet, I basically follow the steps you’d find on MSDN, with the main difference being in how I wire up my data source.

    I click on the ellipsis in the Data Connection String property for my unit test.


    Follow these steps to set up the Excel spreadsheet as a test data source for a unit test.

    • In the New Test Data Source Wizard dialog, select “Database”. 
    • Click “New Connection”.
    • In the “Choose Data Source” dialog, select “Microsoft ODBC Data Source” and click “Continue”.  (For additional details about connection strings & data sources, check this out.)
    • In “Connection Properties”, select the “Use connection string” radio button, then click “Build”.
    • Choose if you want to use a File Data Source or a Machine Data Source.  For this post, I’m using a Machine Data Source
    • Select the “Machine Data Source” tab, select “Excel Files” and click Ok
    • Browse to and select your Excel file.
    • Click “Test Connection” to make sure everything’s golden.
    • Click Ok to close “Connection Properties”
    • Click Next
    • You should see the worksheets listed in the available tables for this data source.
    • In my example, I’ll select “Sheet1$”
    • Click “Finish”
    • You should get a message asking if you want to copy your data file into the project and add as a deployment item.  Click Yes.
    • You should now see the appropriate values in Data Connection String and Data Table Name properties, as well as your Excel file listed as a deployment item:
    • Now I return to my unit test, note that it’s properly decorated, and make a change to the “name” variable assignment to reference my data source (accessible via TestContext):
      [DataSource("System.Data.Odbc",
          "Dsn=Excel Files;driverid=1046;maxbuffersize=2048;pagetimeout=5",
          "Sheet1$", DataAccessMethod.Sequential),
       DeploymentItem("TestStuff\\ExcelTestData.xlsx"), TestMethod]
      public void GetGreetingTest()
      {
          Class1 target = new Class1();
          string name = TestContext.DataRow["FirstName"].ToString();
          string expected = "Hello, " + name;
          string actual;
          actual = target.GetGreeting(name);
          Assert.AreEqual(expected, actual);
      }
    Again, in VB:
    <DataSource("System.Data.Odbc", _
        "Dsn=Excel Files;driverid=1046;maxbuffersize=2048;pagetimeout=5", _
        "Sheet1$", DataAccessMethod.Sequential), _
     DeploymentItem("TestStuff\ExcelTestData.xlsx"), TestMethod()> _
    Public Sub GetGreetingTest()
        Dim target As Class1 = New Class1
        Dim name As String = TestContext.DataRow("FirstName").ToString()
        Dim expected As String = "Hello, " & name
        Dim actual As String
        actual = target.GetGreeting(name)
        Assert.AreEqual(expected, actual)
    End Sub
    • Now, running the unit test shows me that it ran a pass for each row in my sheet


    Web Testing

    You can achieve the same thing with a web test.  So I’m going to first create a simple web test that records me navigating to the website (at Default.aspx), entering a name in the text box, clicking Submit, and seeing the results.  After recording, it looks like this:


    See “TxtName=Steve”?  The value is what I want to wire up to my Excel spreadsheet.  To do that:

    • Click on the “Add Data Source” toolbar button.
    • Enter a data source name (I’m using “ExcelData”)
    • Select “Database” as the data source type, and click Next
    • Go through the same steps in the Unit Testing section to set up a data connection to the Excel file.  (Note:  If you’ve already done the above, and therefore the Excel file is already in your project and a deployment item, browse to and select the copy of the Excel file that’s in your testing project.  That will save you the hassle of re-copying the file, and overwriting.)
    • You’ll now see a Data Sources node in your web test:
    • Select the parameter you want to wire to the data source (in my case, TxtName), and view its properties.
    • Click the drop-down arrow in the Value property, and select the data field you want to use.
    • Now save and run your web test again.  If you haven’t used any other data-driven web tests in this project, you’ll notice that there was only one pass.  That’s because your web test run configuration is set to a fixed run count (1) by default.  To make changes for each run, click “Edit run settings” and select “One run per data source row”.  To make sure all rows in data sources are always leveraged, edit your .testrunconfig file to specify as such.
    • Now run it again, and you should see several passes in your test results:

    That’s it in a simple nutshell!  There are other considerations to keep in mind such as concurrent access, additional deployment items, and perhaps using system DSNs, but this should get you started.


    Creating a Data-Driven Web Test against a Web Service


    Okay, I'm sure some of you will tell me, "Yeah, I know this already!", but I've been asked this several times.  So in addition to pointing you to the MSDN documentation, I thought I'd give my own example.

    The more mainstream recommendation for testing a web service is to use a unit test.  Code up the unit test, add a reference to the service, call the web service, and assert the results.  You can then take the unit test and run it under load via a load test.

    However, what if you want a more visual test?  Well, you can use a web test to record interaction with a web service.  This is actually documented in the MSDN Library here, but below is my simple example.

    Here's what we're going to do:

    1. Create the web service
    2. Create the web test
    3. Execute the web test (to make sure it works)
    4. Create the test data data source
    5. Bind it to the web test
    6. Run the test again

    First, we create a web service.  In my example, it's the sample "Hello, World" service and I've created one additional method called "HelloToPerson":

    <WebMethod()> _
        Public Function HelloToPerson(ByVal person As String) As String
            Return "Hello, " & person
        End Function

    As you can see, the method will simply say hello to the passed person's name.

    Now, let's create a web test to exercise this web method (Test->New Test, select Web Test), creating a test project in the process if you don't already have one in your solution.  I named my web test WebServiceTest.webtest.

    As soon as Internet Explorer opens with the web test recorder in the left pane, click the "Stop" button in the recorder.  This will return you to Visual Studio's web test editor with an empty test.

    Web test with no requests

    Now launch Internet Explorer, go to your web service (.asmx), and select the method to test (again, in this example it's "HelloToPerson").  Examine the SOAP 1.1 message.  In my example, the message looks like this:

    POST /Service1.asmx HTTP/1.1
    Host: localhost
    Content-Type: text/xml; charset=utf-8
    Content-Length: length
    SOAPAction: "http://tempuri.org/HelloToPerson"

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <HelloToPerson xmlns="http://tempuri.org/">
          <person>string</person>
        </HelloToPerson>
      </soap:Body>
    </soap:Envelope>

    We'll need to refer back to this information (I color-coded a couple of sections for later reference).

    Right-click on the root node (WebServiceTest in my example) and select "Add Web Service Request."

    Add a Web service Request

    In the URL property of the new request, enter the URL of the web service (by default this value is populated with http://localhost/).

    Specifying the correct URL for the web service

    Now, let's make sure we use a SOAP request.  Right-click the request and select "Add Header".

    Adding a header to the request

    Enter "SOAPAction" in the name field.  In the value field, enter the value of SOAPAction in the message from your web service.  For my example, it's "http://tempuri.org/HelloToPerson" (color-coded in blue).

    Adding the SOAPAction header to the request

    Next, select the String Body node:

    • In the Content Type property, specify "text/xml"
    • In the String Body property, copy/paste the XML portion of the SOAP 1.1 message of your web service method (everything from the XML declaration onward).  At this time, be sure to replace any parameters with actual values you want to test (in this example, my parameter is "person", so I enter "Steve" instead of "string").

    Entering the XML portion of the SOAP message, specifying a real value for the 'person' parameter

    The properties dialog for the String Body node

    Now, right-click on the web service request and select "Add URL QueryString Parameter."

    Adding a URL QueryString Parameter

    In the QueryString Parameter node, specify "op" as the name and the name of your method as the value.  In this example, it's "HelloToPerson".

    Viewing the added QueryString Parameter

    Finally, let's run the test and see the results!

    Viewing the test results

    As you can see, the test passed, and the "Web Browser" panel shows the returned SOAP envelope with the correct results.

    Now for some more fun.  Let's make this a data-driven test so we can pass different values to the web method.

    We'll create a simple data source so that we can pass several names to this method (very helpful so we don't have to record multiple tests against the same method).  You can use a database, XML file, or CSV (text) file as a data source.  In my example, I'm going to use an XML file:

    <?xml version="1.0" encoding="utf-8" ?>
    <Names>
      <Name>
        <Name_Text>Steve</Name_Text>
      </Name>
      <Name>
        <Name_Text>Mickey</Name_Text>
      </Name>
    </Names>

    Save this file as "Names.xml" in your test project. 
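    To see why the binding names used later come out the way they do, here's a minimal Python sketch of how an XML data source like this is flattened into rows: each repeated <Name> element becomes a row in a "Name" table, and each child element (Name_Text) becomes a column.  The inline XML below (with the "Steve" and "Mickey" values used in this walkthrough) is an assumption about the file's full contents.

```python
import xml.etree.ElementTree as ET

# Assumed contents of the Names.xml data source from this post.
NAMES_XML = """<?xml version="1.0" encoding="utf-8" ?>
<Names>
  <Name><Name_Text>Steve</Name_Text></Name>
  <Name><Name_Text>Mickey</Name_Text></Name>
</Names>"""

def load_rows(xml_text):
    """Each repeated child element is a row; its children are the columns."""
    root = ET.fromstring(xml_text)
    return [{col.tag: col.text for col in row} for row in root]

rows = load_rows(NAMES_XML)
```

    Note how the table name ("Name") and column name ("Name_Text") fall straight out of the element names; that's where the data-binding path we use later comes from.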

    To make this data source available to the web test, right click on the web test and select "Add Data Source" (you can also click the corresponding toolbar button).

    Adding a data source

    Provide a name for the data source (for me, it's "Names_DataSource") and select XML file for the data source type.

    Selecting the data source type

    Next, provide the path to the XML file, then select the data table containing your test data.  You'll know if you select it correctly since you'll get a preview of your data.

    Selecting the XML file

    Check the boxes next to the data tables you want to be available for your tests.  In my example, I only have one ("Names").


    Click Finish (if you're asked to include any files in your test project, just click yes to the prompts).

    Now your XML data is available to bind to your web test.

    Data source is now available to your test.

    Finally, let's put this data source to work.  We want to bind the name values in the data source to the "person" parameter for my web service call.  If you recall, that value is specified in the String Body property.  So we inject the following syntax (using the values appropriate for this example) into the String Body property:

    The syntax is {{DataSourceName.TableName.ColumnName}}, so for my example, I use {{Names_DataSource.Name.Name_Text}}
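    Under the covers, data binding is just token substitution into the String Body before each request.  A rough Python sketch of that substitution (an illustration, not the actual web test engine):

```python
import re

def bind(body, data):
    """Replace {{DataSource.Table.Column}} tokens with values from `data`,
    the way the web test engine substitutes a data-bound property per run."""
    return re.sub(r"\{\{([^}]+)\}\}", lambda m: str(data[m.group(1)]), body)

body = "<person>{{Names_DataSource.Name.Name_Text}}</person>"
print(bind(body, {"Names_DataSource.Name.Name_Text": "Steve"}))
# prints <person>Steve</person>
```

    On each iteration the engine supplies the next row's value, which is exactly what makes the data-driven run below work.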


    Now we just need to tell the web test to execute once for each value in my data source.  We can do this two ways:

    If you will usually run this test in a single pass (not iterating through the data source), you can run your test and use "Edit Run Settings" to adjust the iteration settings on a one-off basis.

    Editing test run settings

    Again, note that doing it this way will affect only the current test run (i.e. the next run made), and will not be saved.

    If you want to specify that you want to use the data source by default, you need to open the LocalTestRun.testrunconfig file in your Solution Items folder.

     Finding the .testrunconfig file

    Opening the .testrunconfig file will give you the below dialog.  Select Web Test on the left, then click the radio button to specify "One run per data source row."  Click Apply then Close.


    Now for the beautiful part.  Go back to your web test and run it again.  This time instead of a single run, it will automatically execute a test run for each row in your data source. 

    Viewing test results with multiple runs

    Notice results for each run, including pass/fail information, and the resulting SOAP envelope with the appropriate method result in each (I've highlighted the second run to show that "Mickey" was used in this run).

    Happy Testing! 

  • Steve Lange @ Work

    Ordering Method Execution of a Coded UI Test


    There’s often no discernible, consistent pattern that dictates the execution order of automated tests (Coded UI in this example, but the same applies to Unit Tests).  Some argue that it may be a good thing that there isn’t an inherent execution pattern for Coded UI tests (and unit tests, for that matter), as a seemingly random order can uncover test dependencies, which reduce the overall effectiveness of your tests.  I agree with that to an extent, but there are always cases in which control over execution order is needed.

    An easy way to accomplish test order is to use an Ordered Test. This will provide you explicit control over the execution order of your tests.

    For this example, I have a Coded UI Test class called CodedUITest1 (for more on Coded UI Tests, see the Anatomy of a Coded UI Test).  In it, I have two CodedUI Test methods:

    • CodedUITestRunFirst()
    • CodedUITestRunSecond()

    I want to order them such that they execute like:

    • CodedUITestRunSecond()
    • CodedUITestRunFirst()

    1. Add a new Ordered Test. Go to Test->New Test, and select Ordered Test.

     New test window

    2. The ordered test will open. I can move the available tests from the left list into my ordered test list on the right. I can then move the tests up/down to create the desired order.

     Ordered Test dialog

    It’s not shown in this screenshot, but there is a checkbox to allow the ordered test to continue upon a failure.

    3. Save the ordered test. I can now see the ordered test in my Test View window.

    Test View window showing new ordered test

    4. When ready, I select to run my ordered test. It will appear in the Test Results window as a single test.

    Test Results window showing ordered test

    When finished, I can double-click on the test result to see that both tests did actually run, their individual results, and their order.

    Detailed results of ordered test

    It’s a surprisingly easy yet elegant solution.  I can put pretty much any automated test into an ordered test (except for load tests).  If you have a lot of tests, coupling the use of ordered tests with other test lists can really help visually organize your test project.
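    The semantics of an ordered test are easy to model.  Here's a hypothetical Python sketch (not how MSTest actually implements it): run the tests in the pinned order, and stop at the first failure unless the continue-on-failure option is set.

```python
def run_ordered(tests, continue_on_failure=False):
    """Run test callables in the given order; stop at the first failure
    unless continue_on_failure is set (mirroring the Ordered Test checkbox)."""
    results = []
    for test in tests:
        try:
            test()
            results.append((test.__name__, "Passed"))
        except AssertionError:
            results.append((test.__name__, "Failed"))
            if not continue_on_failure:
                break
    return results

# Stand-ins for the two Coded UI test methods from this post:
def coded_ui_test_run_second():
    assert True

def coded_ui_test_run_first():
    assert True

# The ordered test pins the execution order explicitly:
results = run_ordered([coded_ui_test_run_second, coded_ui_test_run_first])
```

    The ordered test itself reports as a single result, with the per-test outcomes nested inside, just as in the Test Results window above.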

  • Steve Lange @ Work

    VS/TFS 2012 Tidbits: Merging Changes by Work Item


    As the Visual Studio family of products (Visual Studio, TFS, Test Professional) nears its 2012 release, I thought I’d bring some short hits – tidbits, if you will – to my blog. Some of these are pretty obvious (well-documented, or much-discussed), but some may be less obvious than you’d think. Either way, it’s always good to make sure the word is getting out there. Hope you enjoy!

    Merging Changes by Work Item

    This is something that existed in VS 2010, but it wasn’t talked about as much.  While it’s pretty straightforward to track changes merged across branches by changeset, sometimes it’s even more effective to track merges by work item (i.e. show me where changes associated with a work item have been merged/pushed to other branches).

    Let’s catch up. Consider the relatively simple branch hierarchy below:


    A work item, Task #80, has been assigned to Julia.


    Julia makes some code changes, and checks in against (linking to) the work item (Task #80).

    She checks in 2 individual changes, linking 2 discrete changesets to the task.

    Now, it’s easy to go ahead and track an individual changeset by selecting the option from the History window.


    That’s all well and good, but if I didn’t know the exact changeset ID (#17), or if there was more than one changeset associated with the task, this tracking process becomes less effective.

    What Julia can do is right-click on the work item and select “Track Work Item”.    (Note that this option will be disabled if there are no changesets linked to the work item.)


    She can also click the “Track Work Item” button at the top of the work item form:


    I get a much clearer picture now of all the work and where it’s been applied, and the “Tracking” visualization will now include all changesets (in my case, 2 changesets) in the window.

    Now I know exactly what changes to merge.  I merge them, and now I can see that the entire work item has been merged to Main from Dev (i.e. both changesets were merged).
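    Conceptually, what tracking by work item buys you is a simple set operation: take every changeset linked to the work item and subtract the changesets already present in the target branch.  A hypothetical Python sketch (the changeset IDs and branch contents below are made up for illustration):

```python
# Hypothetical data: changesets linked to Task #80, and the changesets
# each branch currently contains.
work_item_changesets = {80: {17, 19}}
branch_contents = {"Dev": {17, 19}, "Main": {17}}

def pending_merges(work_item_id, target_branch):
    """Changesets linked to the work item that haven't reached the branch."""
    return sorted(work_item_changesets[work_item_id] - branch_contents[target_branch])

print(pending_merges(80, "Main"))
# prints [19]
```

    Once the pending set is empty, the whole work item has been merged, which is what the Tracking visualization shows at a glance.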


    And just as effectively, I can see these changes in the Timeline Tracking view:


    So that’s it! Tracking by work item is pretty easy to do, and paints a much clearer picture of how the changes for a work item can be, or have been, applied across branches.

    Again, I know this isn’t exactly a new feature, but there are a lot of people out there who are looking for ways to “merge by work item” and aren’t aware of this feature.

  • Steve Lange @ Work

    Ordered Tests in TFS Build


    In an earlier article I discussed how to use an Ordered Test to control the execution order of Coded UI Tests (the same can be applied to other test types as well).  I received a few follow-up questions about how to do this in TFS Build so tests run in a particular order as part of a build.

    Here’s one way that’s remarkably easy.

    In my example, I have a project called JustTesting, which contains just a test project with 3 unit tests (which will always pass, BTW).


    I put those tests into an ordered test:


    In Solution Items, I open up my JustTesting.vsmdi file, create a new test list (called Ordered Tests), and add my ordered test to it.


    Once that’s done, I check everything into TFS (my Team Project’s name is “Sample CMMI”).

    Next, I set up a build definition (in Team Explorer, right-click Builds, and select “New Build Definition”).  Set whatever options you want (name, trigger, workspace, build defaults) but stop at “Process”.

    In the section named “2. Basic”, you’ll see that by default the Automated Tests field is set to (something like): “Run tests in assemblies matching **\*test*.dll using settings from $/Sample CMMI/JustTesting/Local.testsettings”. 


    Click on the ellipsis on the right of that to open the Automated Tests dialog:


    Remove the entry you see (or leave it if you wish to include that test definition), and then click “Add”.

    In the Add/Edit Test dialog, select the option for “Test metadata file (.vsmdi)”.  Use the browse button to find and select your desired .vsmdi file.  In my example, it’s JustTesting.vsmdi.

    Uncheck “Run all tests in this VSMDI file”, then check the box next to your test list containing the ordered test.  In my example, the test list is called “Ordered Tests”.  Your dialog should look something like this:


    Click OK, and your Automated Tests dialog should look like this:


    Click OK again, then save your build definition.

    Queue a new build using this definition.  Once complete, look at the build report to see your test results.



    It’s a few steps, but nothing ridiculous.  And I didn’t have to hack any XML files or do any custom coding.

    Hope this helps!

  • Steve Lange @ Work

    Thoughts on Managing Documentation Efforts in Team Foundation Server


    I’ve met with several customers over the last few months who either are managing, or are looking to manage, their documentation efforts in Team Foundation Server.  There’s not much guidance or documentation about the best way to do that.  Now, my blog is hardly a repository of impactful information, but I hope this post helps to shed some light on practices that can be used to manage documentation in TFS.

    In thinking about this, the concept of documentation management is somewhat similar to requirements management:  A document format is the ultimate output, consistent capture and management is ideal, and a development workflow is needed.  Several years ago (when TFS 2005 was the current release), I blogged a four-part series on requirements management in TFS, a series which many seemed to appreciate.  (Since then, much more robust, prescriptive guidance has been published on CodePlex around TFS 2010, called the “Visual Studio Team Foundation Server Requirements Engineering Guidance”.)

    There are two main schools of thought around using TFS to manage documentation efforts:

    • Document-centric
    • Item-centric


    In the document-centric approach, the document itself is the “version of the truth”. Updates are made to the document directly, and either TFS or the associated SharePoint site manages versioning.  Any approval workflows are handled by SharePoint.

    The benefit of this approach is that people already know how to edit a document (Word is the most popular requirements management tool, as well!).  It’s natural and seemingly convenient to just pop open a document, make some updates, hit “Save”, and close.  When the documentation process is finished, you already have your “output” – the document itself.  Just convert it to the format that you want (PDF, XPS, whatever), and you’re done.

    The drawback, however, is in its simplicity.  You lose formatting consistency across individual sections of the document, as well as lower-level management of those sections.  This results in extra scrutiny over a document to check for those inevitable inconsistencies.  If you have traceability requirements in your process guidelines, it’s going to be very difficult to accurately relate a specific section within a document to another artifact in TFS.  It’s nearly impossible to report on the status of a documentation effort, other than “the document is or isn’t done yet.”  There are no metrics around how much effort has been applied to the documentation, how many people have collaborated on it, etc.


    The item-centric approach uses the work item tracking system in TFS to manage components/pieces of documentation individually.  This is accomplished by creating a work item type designed to support individual pieces of documentation.  In this scenario, TFS becomes the “version of truth”, and the actual document is really just an output of that truth. (Think of this as similar to version control, which houses the truth of your code, and the build is the output of that.)

    Several of these RM-centric approaches can be applied toward documentation efforts:

    • Custom work item types
    • Consistent UI for consistent data capture
    • Querying and reporting
    • Categorization or classification

    Below is just one example of how a “Documentation”-like work item type might look in TFS:

    Sample documentation work item type

    You’ll notice there are standard fields such as title, assigned to, state, area, and iteration.  In this example, there are a few custom fields added as well:

    • Document Structure (group)
      • Target Document
      • Document Order
    • Documentation

    Target Document allows you to target a specific document that this documentation piece belongs to.  In my example, I use a global list for this field, allowing choices of End User Manual, Administrator’s Guide, and Installation Guide.

    Document Order is a field I created to help with the ordering of the documentation piece (for sibling work items) when it is finally output into a document.

    In TFS 2010, you also have the added advantage of work item hierarchy to better help organize the structure of your documentation. You can use hierarchy to break down sections or areas of the document.  Viewing the “structure” of a document (like a document outline in Word) is a matter of constructing a query.

    For example, below is a query result that shows a document hierarchy for my “End User Manual”:


    There are a few very tangible advantages of using this approach:

    • Each section of documentation is individually manageable.  Sections can be assigned to different people, follow individual workflows, and be reported on and queried against.  Documentation can be planned much more explicitly, as documentation work items can be put into sprints, iterations, etc.
    • Sections can be modified using a number of tools (Team Explorer, Excel, Web Access, or several 3rd party integrations).
    • Documentation work items, as they are work items, can be related/linked to other artifacts they support.  For instance, you can tangibly relate a build, task, requirement, or even code to a piece of documentation.
    • You can use work item queries to help identify and track the progress of your documentation efforts.  For example, while the previous screenshot shows the entire tree of documentation items, you could have another query to display the items that haven’t yet been completed:


    Creating your Document

    Sounds great, right?  Oh yeah, but what about actually creating the document itself? (What, you don’t just want to dump the query results to Excel and format from there?)

    Well, the first main step is to get your work items exported to a Word document (for any fine tuning) and ultimately converted to your final output format (probably PDF).

    If your list of documentation work items is flat (i.e. no hierarchy, parent/child relationships), that simplifies things because you can dump your work items to a file format that can be used as a merge source for Word (like a TSV or Excel file).  Then you really just have to worry about formatting your document appropriately.

    And there are a couple of 3rd party tools that you may (again, based on your specific needs) be able to leverage.  These tools integrate Word with TFS, and each carries its own pros and cons.

    It gets more complicated as you work with a hierarchy.  In my above example, I want my work item hierarchy to reflect  a document hierarchy in the output document (i.e. first level gets Heading 1, second level gets Heading 2, etc.).  That puts a small wrinkle in things.

    So when in doubt, roll your own.  I have several customers who have implemented custom utilities to export their documentation work items to a Word document.  Given my amateur status as a programmer, I thought I’d give it a shot myself.  More on that in a future post, but the basic idea of such a utility is something like this:

    1. Select the WI query that gives you the work items you want, in the right order, etc.
    2. Select a document template to use, or at least a document style.
    3. Click “go”, and watch the utility run through each work item in the query results, inserting field values in the appropriate placeholder (named area, bookmark, whatever) in the document.
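    Step 3 is the interesting part.  Here's a hypothetical Python sketch of the core walk: given a tree of documentation work items, emit content with a heading level that follows each item's depth in the hierarchy.  The item titles below are made up; a real utility would pull them from the TFS API and insert into Word rather than build strings.

```python
def render(items, level=1, lines=None):
    """Walk the work item tree depth-first, emitting 'Heading N' per level
    (first level gets Heading 1, second level Heading 2, and so on)."""
    if lines is None:
        lines = []
    for title, children in items:
        lines.append(f"Heading {level}: {title}")
        render(children, level + 1, lines)
    return lines

# Hypothetical documentation work item hierarchy: (title, children)
manual = [
    ("End User Manual", [
        ("Getting Started", [("Installation", [])]),
        ("Reference", []),
    ]),
]
outline = render(manual)
```

    Sibling order within each level is where the Document Order field comes in: sort each item's children by that field before recursing.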

    Again, more on that later.


    So keep in mind that while your mileage may vary in terms of approach and need, it is definitely possible to leverage TFS WIT as a repository for your document development needs.  My examples above are by no means the only way to attack this topic – I’ve just seen them work with customers of mine.


  • Steve Lange @ Work

    Querying the TFS Database to Check TFS Usage


    Why would you want to know how many users are actually using Team Foundation Server?  Well, for starters:

    • You want to make sure that each user in your environment using TFS is properly licensed with a TFS CAL (Client Access License). 
    • You want to show management just how popular TFS is in your environment.
    • You want to request additional hardware for TFS, and want to show current usage capacity.

    But, what if your users are spread out all over the world, so you can’t just send a simple email asking, “Hey, are you using TFS?”

    One relatively straightforward way is to ask your TFS server’s database.  TFS logs activity in a database ‘TfsActivityLogging’, specifically in a table ‘tbl_Command’.

    NOTE:  It’s not supported to go directly against the database, so take note of 2 things:

    1. Be very careful!
    2. Be clear that this isn’t supported.  This process works, but only in the absence of a supported way to query TFS usage.  Just because I work for Microsoft doesn’t mean you can get official support from MS on this.

    All that out of the way, the simple way to do this is to use Excel:

    Open Excel.

    Go to the Data tab and select ‘From Other Sources’ in the ‘Get External Data’ group, and select ‘From SQL Server’.


    The Data Connection Wizard will open.  Follow steps to connect to the SQL Server that’s used by TFS, selecting the ‘TfsActivityLogging’ database and the contained ‘tbl_Command’ table.


    Enter the SQL Server name that TFS uses.  For the below, my SQL server is at ‘tfsrtm08’.


    Select the ‘TfsActivityLogging’ database, then select the ‘tbl_Command’ table. Click Next.


    Click Finish.

    Select how you’d like to import the table’s data.  For this example, I’m choosing ‘PivotTable Report’.


    Now you’re ready to get the data you want:

    Listing All Users Who Have Touched TFS

    In the ‘PivotTable Field List’ panel on the right, select the ‘IdentityName’ field.  Your spreadsheet should look something like this:


    If you just want a list of users that have touched TFS, then you’re done (in my example, I really only have 2 accounts, and one is the TFSSERVICE account that actually runs TFS).

    However, if you want a little extra information about your users’ activities, you can do a couple extra things.

    List Users and Their Relative Activity Levels

    Add the ‘ExecutionCount’ field to the ‘Values’ section of the PivotTable, and you’ll see the number of commands each user has run against TFS (some minor, like gets, and others major, like changing ACLs):


    List Users and Their Specific Activity Levels

    Add first the ‘ExecutionCount’ field to the ‘Values’ section of the PivotTable, then add the ‘Command’ field to the ‘Row Labels’ section:


    (Again, remember that some of these commands are less significant than others, but still indicate user activity.)

    List Users and Their Clients

    Add the ‘UserAgent’ field to the ‘Row Labels’ section of the PivotTable:


    List Users and Their Last Activity Time

    Add ‘IdentityName’ to the ‘Row Labels’ section of the PivotTable and ‘StartTime’ to the ‘Values’ section.  Then click ‘Count of StartTime’ (in the Values section) and select ‘Value Field Settings’.  Change the ‘Summarize the value field by’ value to ‘Max’.


    Click ‘Number Format’ and set the format to ‘Date’.  Click OK.  You’ll now see the last activity date for each user.
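    If you'd rather script these summaries than build pivot tables, the aggregation is simple.  Here's a hypothetical Python sketch over rows shaped like tbl_Command's columns (IdentityName, Command, ExecutionCount, StartTime; the sample rows are made up), computing each user's total command count and last activity while filtering out the service account:

```python
# Hypothetical rows from tbl_Command: (IdentityName, Command, ExecutionCount, StartTime)
rows = [
    ("DOMAIN\\julia", "Get", 5, "2009-03-01"),
    ("DOMAIN\\julia", "CheckIn", 2, "2009-03-04"),
    ("DOMAIN\\TFSSERVICE", "SyncIdentities", 40, "2009-03-05"),
]

def summarize(rows, exclude=("DOMAIN\\TFSSERVICE",)):
    """Per user: total command count and most recent activity, skipping
    the built-in service accounts (as suggested for the report)."""
    summary = {}
    for user, _cmd, count, start in rows:
        if user in exclude:
            continue
        total, last = summary.get(user, (0, ""))
        summary[user] = (total + count, max(last, start))
    return summary

print(summarize(rows))
# prints {'DOMAIN\\julia': (7, '2009-03-04')}
```

    This mirrors the ‘Sum of ExecutionCount’ and ‘Max of StartTime’ pivots above in a few lines of code.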


    I hope this helps!

    Other Tip:

    • You’ll probably see (like in my example) the built-in accounts and their activities (i.e. TFSSERVICE, perhaps TFSBUILD as well).  You may want to filter those ones out from your report.
    • I’ve heard conflicting reports about how much data the ‘tbl_Commands’ table retains (some say just the preceding week).  In my example, I queried the ‘Min’ start times for logged activities and went back over 5 months.  Just something to think about:  Your mileage may vary greatly.  (Apparently a clean-up job is supposed to run periodically which trims this table.)
  • Steve Lange @ Work

    Microsoft Test Manager – Working Across Projects: Adding an Existing Test Case vs. Cloning/Copying


    You’ve probably noticed that, for the most part, managing multiple test plans and test cases is more convenient when they’re part of the same Team Project in TFS/VSO.  For one thing, area paths and iteration paths just work much more cleanly.

    If you’re working across multiple Team Projects, though, the story changes slightly.  Things can still work just fine, but you need to be even more aware of the differences between working with existing test cases, and copying them.

    Let’s look at the differences.

    Using an Existing Test Case (by reference)

    When adding test cases that exist in another TFS/VSO project entirely (not just another test plan in the same project), you are basically creating a reference back to the original test case, not a copy.  Because it is a reference, when the test case is opened, it opens as it resides in the source test plan/project.  You can’t modify the area or iteration fields to represent the target project, because they are scoped to the project in which the test case resides.

    Consider this example:

    I have two projects: Project A and Project B. I created a test plan in Project A called “Master Test Plan”, and inside that plan created a test case named “Test Case from Master Test Plan in Project A” (just to make it easy to reference). It has an ID of 13.


    In Project B, I create a test plan named “Project B Test Plan”. In this plan, I want to reference the test case from Project A. So I select “Add existing” from the toolbar, and query VSO for the test case.


    I select the test case and click “Add test cases”, which adds it to my plan in Project B, as seen below:


    If I open the test case, I am only able to set the iteration path (and the area path for that matter) to a value that is within the scope of the plan which we referenced.


    Because we are referencing the test case, any changes made here will be reflected back in the Master Test Plan in Project A.  We are working with a single instance of the test case; we’re just referencing it from a different location.

    Copying a Test Case across Projects

    If you wish to have a discrete copy of a test case, test cases, or test suites across projects (to a test plan that resides in a different project), you can perform a “clone” across projects via the command line (tcm.exe).

    The TCM tool contains various options to control what gets copied, and new values to set (i.e. area and iteration). (It can also be used to run automated tests)

    For this example, I’ll copy/clone my test cases (in the root test suite) of Project A to a newly-created project, Project C (in which I have a test plan named “Project C Test Plan”).  Since this is a basic example, all that exists is that single test case.

    Here is the command line I will run:

    tcm suites /clone /collection:<collection URL>
        /teamproject:"Project A" /destinationteamproject:"Project C"
        /suiteid:1 /destinationsuiteid:3
        /overridefield:"Iteration Path"="Project C\Release 1\Sprint 1"
        /overridefield:"Area Path"="Project C"
    When I execute this for the current scenario, I’m telling the tool to copy the test suite (with ID: 1, the root test suite in my “Master Test Plan”) from Project A to Project C (to the suite with ID: 3, the root test suite in Project C’s test plan, named “Project C Test Plan”).  I’m also instructing the tool to set the Iteration Path of the copied test cases to “Project C\Release 1\Sprint 1”, and the Area Path to “Project C”.

    After running this, if I look at my test plan in Project C, I see this:


    I can see the copied test suite (“Master Test Plan”), and the test case it contains. If I open that test case, note the new values of Area and Iteration:


    Also note the new ID (14) as a sign that this is an actual copy of the original test case (ID 13).

    Because I’ve actually made a copy of this test case (not a reference, as in the first scenario), any changes I make to this test case will NOT affect the original from Project A. To illustrate this, I modified the title of the test case in (Project C) to reflect that it has been changed in Project C. Compare that change (top) with the original test case back in Project A (bottom):



    To help with traceability, the command-line tool also created a link between the two test cases, so users can see where the copy came from and gain context as to why it’s there.


    Additional notes

    Thanks for reading!

  • Steve Lange @ Work

    Visual Studio Online (VSO): Owning Multiple VSO Accounts


    NOTE: This is not official guidance, and it may not even officially be a supported “feature” in the near future (my guess is that it’s not a directly intended capability).  It’s simply a short-term workaround that assisted a few of my customers, and I thought I’d share it.

    Update: You can now create multiple VSO instances under the same account directly from the website (see comments).  Enjoy!

    Visual Studio Online by default only allows a Microsoft account to create a single VSO account.  When you create a VSO account, the system records who the owner (creator) is, and the next time that user comes back, they cannot create additional VSO accounts.

    I have customers who currently maintain several VSO accounts (for various reasons), and have done so by creating multiple Microsoft accounts, one for each VSO account.

    With the May 7th date of ending the “Early Adopter” program for VSO, I have customers in this situation asking what to do about this moving forward.

    There’s a slightly indirect, but perfectly doable way around this: a way to let a single Microsoft account “own” multiple VSO accounts.  You need two (2) Microsoft accounts, but only two, and the second one is needed only for VSO account creation purposes.

    For this example, I’m going to create 2 sample Microsoft accounts, one primary and one secondary (both of which I’ll delete after this post – I hate stale/dummy accounts!) and show you how to create three (3) VSO accounts owned by the primary Microsoft account.


    First, I create the primary Microsoft account:

    Here’s this account’s profile page:


    Note that this account neither owns any VSO accounts nor is a member of any VSO accounts.

    Using this Microsoft account, I create a new VSO account:

    Next, I sign out of Microsoft and create my secondary Microsoft account:

    And the resulting profile page:


    Like the first account, this account neither owns any VSO accounts nor is a member of any VSO accounts.

    Using this secondary account, I create a second VSO account:

    Next, while still logged in as the secondary account, I go to the Users page.


    Once there, I click “Add” and add the primary account (the first account I created) to this VSO account as a user, assigning a “Basic” license.


    Now that the primary account is recognized as a user, I set that account to be the owner of this VSO account.  (The same below steps are described here.)

    I click on the "gear” icon at the top-right, which takes me to the Admin area.


    Next, I click on the Settings tab, and for Account Owner, select my primary account from the drop-down list, and click the Save button.


    NOTE/WARNING: If you follow my steps to the letter, you may have the unintended consequence of removing the secondary Microsoft account’s access from all the VSO accounts.  If this is truly a “dummy” account, then it’s probably no big deal.  But if you’re using a Microsoft account you wish to keep using in VSO, you’ll want to make sure you add that account as a valid member of a group in the VSO account.  In this walkthrough, I added the secondary account back into the VSO account as an administrator.

    So let’s see what’s happened.  Sign out, and then sign in as the primary Microsoft account.  Here’s the updated profile page for the primary account:


    Notice that now this account “owns” both VSO accounts (primary and secondary).  Cool?

    Now let’s own a third VSO account.  Sign out, then back in as the secondary Microsoft account.  Here’s the secondary account’s profile page:


    This should be expected now, because this account no longer owns the secondary VSO account.

    I click the link to “Create a free account now”, and create a third VSO account:

    Like before, I go to the Users page, add the primary Microsoft account as a valid (Basic) user, then specify in the Administrators area that I want my primary Microsoft account to be the owner (and per the above note/warning, I add the secondary account back in). Be sure to click the “Save” button throughout!



    Once that’s all set, I sign out, then back in as the primary Microsoft account:


    Now my primary Microsoft account owns three (3) VSO accounts. Sweet!

    See the pattern?

    • As a Microsoft account that doesn’t own a VSO account, create a VSO account.
    • Transfer ownership of that account to the Microsoft account you actually want to own the VSO account.
    • Sign back in as the “dummy” Microsoft account, rinse and repeat as needed.

    As an added FYI, if you have an Azure Subscription (not the same thing as Azure MSDN Benefits, by the way), you can link your Azure account to each of the VSO accounts you own, and distribute your Azure resources (users, build minutes, load testing, etc.) across each of them.

    Here’s a big disclaimer: I’m still not clear if this is intended behavior, mainly because there’s no obvious link to create additional VSO accounts while logged in as a Microsoft account that already owns one.

    Hey, but for now, this works!

  • Steve Lange @ Work

    VS/TFS 2012 Tidbits: When to Use the Feedback Client


    As the Visual Studio family of products (Visual Studio, TFS, Test Professional) nears its 2012 release, I thought I’d bring some short hits – tidbits, if you will – to my blog. Some of these are pretty obvious (well-documented, or much-discussed), but some may be less obvious than you’d think. Either way, it’s always good to make sure the word is getting out there. Hope you enjoy!

    When to Use the Feedback Client

    One of the “new” new features of TFS 2012 is the addition of the Microsoft Feedback Client for collecting feedback from stakeholders, end users, etc.  This tool integrates with TFS to provide a mechanism to engage those stakeholders and more seamlessly include their insights in the lifecycle.

    Several of my customers however, perhaps with brains overloaded with the possibilities of this capability, have asked me, “So when exactly do I use this? When do I request feedback?”

    Well, the answer, as it often is, is “it depends.”

    First, if you aren’t aware of the two ways to use the Microsoft Feedback Client, check out (shameless plug) my previous post covering this.

    The more I play around with this tool and talk about it with customers, the more scenarios I find in which this new 2012 capability adds value.

    Now back to that “it depends” answer.  The key thing to remember for using the feedback capability is that there is no hard and fast rule for when you should use it.  But here are three main scenarios:

    • Voluntary, Unsolicited Feedback – When a stakeholder/end user has something to say, let them say it with the Feedback Client.  Instead of an email, entry on a spreadsheet or SharePoint list, using the Feedback Client leverages the goodness of Team Foundation Server (not to mention proximity to the actual development team) to log, manage, relate, and report on the stakeholder’s insights. If a business analyst or project manager likes the feedback provided, it’s just a few clicks to get a backlog item created from the feedback and shoved onto the backlog.  The feedback then becomes a supporting item for the PBI, helping address any questions as to why the PBI was added to the backlog.
    • User Acceptance Testing (UAT) – When a new feature has been developed and made available for UAT, request feedback from one or more knowledgeable stakeholders to get sign-off.  Linking the feedback request to the PBI/task/bug being tested for acceptance not only gives additional traceability in validating sign-off; but it provides the team additional “clout” if a stakeholder later voices a concern about a feature completion (“You said you liked it, see?”).
    • Checkpoints/Continuous Feedback – Feedback doesn’t have to be just at the beginning and end of a sprint. Any time there’s something new that QA’s already had a run at, why not involve a stakeholder? While you can, you don’t have to wait until a sprint’s over to get feedback.

     From MSDN, see “Planning and Tracking Projects”.

    What other scenarios can you think of where you could leverage the new feedback capabilities in VS 2012?

  • Steve Lange @ Work

    Using Oracle and Visual Studio together?


    It’s about to get a heck of a lot easier!

    Both of the tools I’m about to discuss below are in beta, so please exercise your normal caution when using them.


    Oracle Data Access Using Entity Framework and LINQ

    A beta of Oracle Data Access Components (ODAC) for Microsoft Entity Framework and LINQ to Entities is now available on the Oracle Technology Network (OTN). What is this? The ODAC for EF and LINQ is a set of components that bring Oracle data access into the folds of the Microsoft Entity Framework, Language Integrated Query (LINQ), and Model-First development.

    If you’ve ever used the Entity Framework or LINQ, you can readily understand how productive these capabilities can be for a developer. Previously, EF and LINQ were not feasible with Oracle.

    If you’re not familiar with EF, LINQ, or the concept of Model-First:

    • The Microsoft Entity Framework (EF) abstracts the relational, logical database schema and presents a conceptual schema to the .NET application. It provides object-relational mapping for .NET developers.
    • LINQ is a .NET data querying language which can query a multitude of data sources using common structured syntax.
    • Model-First allows the conceptual model to be created first by the developer. Next, Visual Studio can create DDL scripts to generate the relational database model based on the conceptual model.
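    The querying style LINQ enables is easiest to grasp by example. LINQ itself is .NET-only, but the idea of filtering, sorting, and projecting over objects with language-level syntax can be sketched in Python (purely as an analogy; this is not EF/Oracle code, and the sample data is made up):

    ```python
    # Analogy only: LINQ-style filter/orderby/select over in-memory objects.
    # (Real LINQ to Entities would translate a similar query into SQL
    # executed against the Oracle database.)
    from dataclasses import dataclass

    @dataclass
    class Employee:
        name: str
        department: str
        salary: int

    employees = [
        Employee("Ann", "IT", 90000),
        Employee("Bob", "HR", 60000),
        Employee("Cal", "IT", 75000),
    ]

    # Roughly: from e in employees where e.Department == "IT"
    #          orderby e.Salary descending select e.Name
    it_names = [e.name
                for e in sorted(employees, key=lambda e: e.salary, reverse=True)
                if e.department == "IT"]

    print(it_names)  # ['Ann', 'Cal']
    ```

    The value of EF on top of this is that the same query shape runs against the conceptual model, not hand-written SQL.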

    Get started today! Download the beta, and then walk through the tutorial.

    Note: The beta includes the 32-bit Oracle Database client 11.2, which can access Oracle Database server 9.2 and higher. It requires Microsoft Visual Studio 2010 and .NET Framework 4.

    Oracle Database Change Management with Toad Extension for Visual Studio

    Speaking of Visual Studio, did you know our friends at Quest Software have been hard at work developing the Toad Extension for Visual Studio? Toad Extension for Visual Studio is a database schema provider (DSP) for Oracle in Visual Studio 2010, and aims to give the full benefits of Visual Studio 2010’s database change management and development features to Oracle databases. This includes offline database design, development and change management, better aligning your Oracle development with the rest of your organization’s application lifecycle management methodology.

    How do you get started? Download the beta, watch a couple videos, and dive in!


    Links & Additional Information

    ODAC for Microsoft Entity Framework and LINQ

    Toad Extension for Visual Studio

  • Steve Lange @ Work

    “Fake” a TFS Build (or Create a Build That Doesn’t Do Anything)


    Team Foundation Server’s build system serves as the “heartbeat” for your development lifecycle.  It automatically creates relationships between code changes, work items, reports, and test plans.

    But once in a while I’m asked, “What if we don’t use TFS Build for building our application, but we still want to have builds in TFS so we can track and associate work?”  Besides the biased question of “Why NOT use TFS Build, then?!”, there is sometimes the need to leverage the benefit of having empty/fake builds in TFS that don’t do anything more than create a build number/entry in TFS.

    There are a couple scenarios where this makes some sense, but the most common one I hear is this:

    Without builds in TFS, it’s near impossible (or at least very inconvenient) to tie test plans (accurately) to the rest of the lifecycle.

    Luckily, TFS 2010’s build system is incredibly flexible: flexible enough to allow us to “fake” builds without actually performing build actions (get source, compile, label, etc.).  It’s surprisingly simple, actually; and it doesn’t require writing any code.

    In my example (which I’ll detail below), I define a build which doesn’t do much more than craft a build number and spit out some basic information to the build log.

    First, create a new build process template, based on the default process template, using the steps described in this MSDN article.

    Once you have the process template created and registered in TFS, open the new template (.xaml file) in Visual Studio.  It will look (collapsed) something like this:

    Collapsed default build process template

    Here’s where it gets fun.  Inside the outermost sequence, delete every sequence or activity except for “Get the Build”.

    Drag an UpdateBuildNumber activity from the toolbox into the sequence, after “Get the Build”.

    (optional) Rename “Get the build” to “Get Build Details” so there’s no implication that an actual build will take place.

    Now expand the Arguments section (at the bottom of the XAML Designer window).  Delete all arguments except for BuildNumberFormat, Metadata, and SupportedReasons.

    At the bottom of the now-shorter list, use “Create Argument” and create the following arguments:

    Name Direction Argument type Default value
    MajorBuildName In String  
    MinorBuildName In String  
    Comment In String  
    IncludeBuildDetails In Boolean True

    “MajorBuildName” and “MinorBuildName” will be used to help manually name each build.  “Comment” will be used to capture any notes or comments the builder wants to include for a given build.  “IncludeBuildDetails” will be used to determine if additional summary information about the build will be written to the build log.

    To provide users with means to set values to these arguments, create parameters in Metadata.  Click the ellipsis (…) in the Default value column for Metadata.  This will bring up the Process Parameters Metadata editor dialog.  Add each of the following parameters:

    Parameter Name Display Name Category Required View this parameter when
    MajorBuildName Major Build Name Manual Build Details Checked Always show the parameter
    MinorBuildName Minor Build Name Manual Build Details Unchecked Only when queuing a build
    Comment Comment Manual Build Details Unchecked Only when queuing a build
    IncludeBuildDetails Include Build Details Summary Manual Build Details Unchecked Always show the parameter

    Process Parameter Metadata editor

    A couple notes about setting the above parameters:

    • The “parameter name” should match the name of the like-named argument.
    • Use the exact same category name for each parameter, unless you want to see different groupings.  Also, check for any leading or trailing whitespace, as the category field is not trimmed when saved.
    • Feel free to add descriptions if you like, as they may help other users understand what to do.
    • Leave the “Editor” field blank for each parameter.

    Your dialog should now look something like the one at right.

    Next, open the expression editor for the Value property of the BuildNumberFormat argument and edit the value to read: “$(BuildDefinitionName)_$(Date:yyyyMMdd)_$(BuildID)”. Including the BuildID will help ensure that there is always a unique build number.

    Now, Click “Variables” (next to Arguments) and create a new variable named ManualBuildName of type String, scoped to the Sequence, and enter the following as the Default:

    If(String.IsNullOrEmpty(MinorBuildName), MajorBuildName, MajorBuildName & “.” & MinorBuildName)

    This variable will be used to provide a manual build name using the supplied MajorBuildName and MinorBuildName arguments.
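    The naming logic here is just a null-check fallback plus string concatenation. As a hypothetical Python sketch of what the workflow expressions compute (the TFS activities do all of this for you; the final “-” concatenation happens in the last UpdateBuildNumber step):

    ```python
    from datetime import date

    def manual_build_name(major: str, minor: str) -> str:
        # Mirrors: If(String.IsNullOrEmpty(MinorBuildName), MajorBuildName,
        #             MajorBuildName & "." & MinorBuildName)
        return major if not minor else f"{major}.{minor}"

    def build_number(definition: str, build_id: int, manual: str) -> str:
        # Mirrors the format "$(BuildDefinitionName)_$(Date:yyyyMMdd)_$(BuildID)"
        # plus the final step: BuildNumberFormat & "-" & ManualBuildName
        stamp = date.today().strftime("%Y%m%d")
        return f"{definition}_{stamp}_{build_id}-{manual}"

    print(manual_build_name("2.1", ""))      # 2.1
    print(manual_build_name("2.1", "beta"))  # 2.1.beta
    print(build_number("FakeBuild", 42, manual_build_name("2.1", "beta")))
    ```

    So a queued build might end up numbered something like FakeBuild_20110314_42-2.1.beta.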

    Now we have all the variables, arguments, and parameters all ready to go.  Let’s put them into action in the workflow!

    Drag a WriteBuildMessage activity into the main sequence, before Get Build Details, with these settings:

    • Display name: “Write Build Comment”
    • Importance: Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High
    • Message: “Comment for this build: “ & Comment

    Next, add an “If” activity below “Get Build Details” to evaluate when to include additional details in the build log, with the following properties:

    • Display name: “Include Build Details If Chosen”
    • Condition: IncludeBuildDetails

    In the “Then” side of the “If” activity, add a WriteBuildMessage activity for each piece of information you may want to include in the build log.  In my example, I included 3 activities:

    Display name Importance Message
    Write Team Project Name Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High “Team Project: “ & BuildDetail.TeamProject
    Write Requested for Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High “Requested for: “ & BuildDetail.RequestedFor
    Write Build reason Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High “Build Reason: “ & BuildDetail.Reason.ToString()

    Your “If” activity will look like this:

    If activity for showing build details

    The last thing to do is to add an UpdateBuildNumber activity as the last element in the main sequence, with the following properties:

    • Display name: “Set Build Number”
    • BuildNumberFormat: BuildNumberFormat & “-“ & ManualBuildName

    This last activity will actually create the build number which will be stored back into TFS.  Your completed workflow should look like this:

    Completed fake build process

    Now go back to Source Control Explorer and check this template back into TFS.

    Go create a new build definition, opting to use your new template on the process tab.  You’ll notice that your options are dramatically simplified:

    Process tab on build definition editor

    Specify a value for Major Build Name and save your new definition. 

    Queue the build and you’ll see the following on the Parameters tab:

    Parameters tab while queuing a build

    Enter some basic information and click “Queue” to run the (fake) build.

    What you end up with is a build that completes in just a couple seconds, does pretty much nothing, but includes your specified information in the build log:

    Build log after fake build

    Pretty sweet!

    And just to be clear, my example adds more “noise” into the build than you may find necessary, with additional build information, comments, etc.  You could streamline the build even more by removing the “Include Build Details If Chosen” activity (and all its sub-activities).

    Given the overall flexibility TFS 2010 has with incorporating Windows Workflow into the build system, there are undoubtedly other ways to accomplish variations of this type of build template.  But I had fun with this one and thought I should share.  I’ve posted my sample template’s xaml file on SkyDrive here:

    I’m all ears for feedback!

  • Steve Lange @ Work

    VS/TFS 2012 Tidbits: Using SkyDrive/OneDrive with Team Foundation Service (or Server)


    April 2014 Updates:

    • SkyDrive is now OneDrive
    • Team Foundation Service is now Visual Studio Online

    Team Foundation Server has always had a great integration with SharePoint by allowing organizations to leverage the goodness of SharePoint’s web parts and document libraries.  TFS can surface reports and other statistics to SharePoint so roles that are on more of the periphery of the lifecycle can still check in and see how the project is progressing.  For teams that use document libraries in SharePoint, these libraries can be accessed directly from Team Explorer, allowing developers to stay in Visual Studio (or whatever development tool they’re using) while still consuming supporting documents such as vision documents, wireframes, and other diagrams.

    And in TFS 2012, this integration continues.  However, if you’re using Team Foundation Service (AKA TFS Preview - think TFS in the cloud), it does not currently support SharePoint integration.  This limits teams’ ability to leverage document collaboration. 

    This is very applicable to the new Storyboarding with PowerPoint capability in TFS 2012.  You can associate storyboards with any work item in TFS; but to follow those associations and access the artifact on the other end of a link in TFS, that artifact needs to be accessible to people on your team.  Which means that your docs should be somewhere in the cloud or on a public share somewhere on your network.  If you’re using the TF service in part because your team is distributed, a public share may not be viable.  Which leaves the cloud.

    Enter SkyDrive.  SkyDrive is a great way to easily store, access, and share documents online (I share every customer presentation I deliver on SkyDrive).  And since with TF Service you’re most likely using a Live ID/Microsoft ID for authentication, that account gives you at least 7 GB of space to play with for free.

    Now, you can use SkyDrive for all sorts of artifacts; but for this post I’ll be doing storyboards.  So consider my basic product backlog below (again, on my TF Service instance): 

    Sample product backlog

    Let’s say that I want to create a storyboard to support and better define “Sample PBI 4”, the second item on my backlog.  Effectively what I need to do is put my PowerPoint storyboard on SkyDrive and build the link between the PPTX and my PBI work item.

    The first thing you need to do is set up a folder (or folder structure) on SkyDrive to support all the documents you will want to associate with items in TFS.  You can create this structure either via the SkyDrive app or on the SkyDrive website as well.  For this example, I created a “TFS” folder in my “Documents” default folder, then added subfolders to store “Documents” and “Storyboards”.  Here is what it looks like:

    SkyDrive folder structure

    Regardless of how you create your structure, you’ll need to go to SkyDrive via the browser and grant permissions for others on your team to view/edit the root folder (in my case “TFS”) and its contents.  Select the root folder, choose the “Share” action, and either have SkyDrive send an email to your teammates or grab the View & Edit link and send it yourself.  Be sure to send it to your teammates’ Live/Microsoft email addresses that are associated with their TF Service account.

    There are two ways to do this, and the best path for you really just depends on whether you use the SkyDrive app/client on your local computer.  I’ll describe both ways below; but the end goal is to get your PowerPoint document open from SkyDrive and not your local computer.  This ensures that when you actually create the link from it to the work item in TFS, the path that’s inserted in the link is a SkyDrive path and not a local one.

    With No SkyDrive App/Client

    If you don’t have it, or don’t want to use the SkyDrive app, that’s fine.  It’ll just take you a couple extra steps.

    • On the SkyDrive website, go to the folder in which you want to store your storyboard(s) (in my example TFS\Storyboards).
    • Select Create, then PowerPoint presentation

    Creating a PowerPoint presentation on SkyDrive

    • Specify a name for your storyboard.

    Naming your storyboard

    • After your PowerPoint document is created, it will be opened (blank) in the Microsoft PowerPoint Web App Preview

    PowerPoint Web App Preview

    • Select “OPEN IN POWERPOINT” at the top right.  Allow/confirm all prompts that come your way.



    • This will launch PowerPoint on your machine and open the storyboard you initialized on SkyDrive.

    Skip down to “Once You Have Your Storyboard Open From SkyDrive..”

    With the SkyDrive App/Client

    If you have the SkyDrive app, it’s even easier:

    • Open your SkyDrive folder from your file system.
    • Right-click and select to create a new PowerPoint document.

    Creating a new PPTX from the file system

    • Give it a name.
    • Double click on your new PowerPoint document to open it.

    Alternatively, you can also launch PowerPoint, create a new presentation, and save it to your SkyDrive folder. 


    You’ll just want to be sure to save it to SkyDrive before you create any links back to TFS.

    Once You Have Your Storyboard Open From SkyDrive..

    There’s a very quick and easy way to double-check that PowerPoint has opened your document from SkyDrive. Look at the “Save” button and see if it has a smaller “refresh”-looking overlay on the icon.

    Save button detecting an online document.

    Now move on and build your storyboards.

    • When you’re ready to associate it with a work item in TFS, on the Storyboard tab/ribbon, click “Storyboard Links” in the “Team” group.


    Selecting the Storyboard Links button

    • Create your link by connecting to your TF Service instance, finding and selecting your work item.  Again in my example, work item #138, “Sample PBI 4”.


    • Save your document (always a good measure, right?)
    • You should now be able to open the associated work item and see the link to the storyboard (by default, the Product Backlog Item work item type has a tab just for storyboard links; if you don’t have such a tab, go to the All Links tab and you should see it there).  You can quickly verify that the link to the storyboard is an online link/URL and not a local path (if you see a local path, you didn’t open the PPTX from SkyDrive).  Notice in my example the long HTTPS link to my storyboard that ends with my SkyDrive path (Documents/TFS/Storyboards..).
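    If you want to reason about that check programmatically, the distinction is simply whether the link has an http(s) scheme. A small Python sketch (not part of TFS or PowerPoint; just illustrating the online-vs-local test):

    ```python
    from urllib.parse import urlparse

    def is_online_link(link: str) -> bool:
        # A storyboard link is reachable by teammates only if it points to a
        # shared online location (e.g. an https:// SkyDrive URL), not a
        # local file path like C:\Users\...
        return urlparse(link).scheme in ("http", "https")

    print(is_online_link("https://skydrive.live.com/Documents/TFS/Storyboards/Board.pptx"))  # True
    print(is_online_link(r"C:\Users\steve\Documents\Board.pptx"))  # False
    ```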




    That’s it!  My instructions are probably more detailed than you need, but you’ll see that it’s remarkably easy to do.  The most important thing about linking work items to documents (storyboards, files, whatever) is to make sure that the location passed to TFS for setting up the link is an accessible one.

    Hope this helps, and enjoy!
