Having been part of two separate teams whose outsourced projects did not go as well as expected, I cannot help but take a step back and ask why things are consistently failing across separate teams and separate organizations.

The commonalities in both projects: fixed-bid contracts (i.e. fixed price, features, and time) using a modified waterfall model (to account for the geographic separation). I know, you say the waterfall model doesn't work. Be that as it may, organizations revert to this model and are thus handicapped: unable to adjust to the changing business and unable to make definitive decisions on how to improve their process, but more on that later. Other commonalities included high-level and low-level designs, unit test cases, test cases, etc.

I have heard that groups similar to mine are having great success with outsourced projects, and I myself have been involved with projects that were executed to great success. Even so, I assert that for us there are areas of improvement. At a high level, I can sum it up in one word: process.

Where have my two projects failed? IMHO it is in the analysis of the requirements. The requirements had the vendors implementing features on product technologies they were unfamiliar with, so there was churn both in the requirements and in the new technology. Traditionally, engineering teams perform proofs of concept and experiments to validate the design, and build clickable prototypes to share with customers to stabilize requirements (business and technical). While some of that was done here, I still see where it could have been improved. In both instances we started with a solid vendor relationship that quickly fell apart due to lack of trust. IMHO the biggest areas in which we are lacking are:

  • Trust between teams
  • A high signal-to-noise ratio in communication (i.e. direct, candid communication on issues, risks, and status)
  • Close coordination between said teams

IMHO both projects would be considered high risk when examining the vendor skill set, staffing model, and experience with projects of this complexity. In most situations you can apply the Pareto principle (AKA the 80-20 rule) to staffing: 80% of staff offshore and 20% onsite for the R&D phases, moving to a planned iterative 90-10 model for the build, and less onsite still for stabilization.

In both projects the offshore team encountered additional issues during implementation that required workarounds, further R&D, or a complete shift in approach. On both projects we tried to mitigate risk by having frequent check-ins with the offshore team and checking progress on a tight cadence (one was almost daily, the other biweekly). Alas, when significant and unexpected delays arose, the relationship fell to pieces, and in response we put in-house staff on the project to shore it up and get it going in the right direction.

Lessons Learned

  • Complete as many iterations as you can: That is, don't disrupt forward progress on the project for the sake of getting a work product from the team, but do take deliverables as often as it makes sense. Some of our projects take them weekly, others monthly.
  • Use collaborative tools to increase confidence in deliverables: Your in-house staff does not have to wait until the end of the formal delivery cycle to perform a review or check in with the offshore team. Utilize tools like Live Meeting, NetMeeting, Windows Live Messenger, etc. to increase collaboration.
  • Code review, code review, code review: DO NOT UNDERESTIMATE THIS. The majority of issues we found on one project were caught in code review by in-house staff familiar with the business rules. In both projects, adherence to offshore code reviews prior to sending code to our in-house staff was not maintained, resulting in many bugs being raised later in the cycle. In our reviews we took a representative sample of the code (chosen at random, but a deep slice) and reviewed it. Additionally, we looked at key strategic areas for performance bottlenecks and security to ensure code conventions, standards, and requirements were being met.
  • Integrate as frequently as possible: Our teams use branches to logically separate the project or feature into areas of work. As a matter of practice, the more frequently these branches are integrated, the faster you can identify and resolve defects.
  • Where possible, leverage frameworks: Use application blocks (Enterprise Library) and Guidance Automation (Web Client Software Factory, Smart Client Software Factory) to solve common problems. Where possible, create your own frameworks and reuse them on projects to reduce the amount of custom code that must be written, which decreases development time and improves quality (see the logging sketch after this list).
  • Use Design Patterns: These patterns illustrate common ways to solve problems. Being standard, they convey meaning across geographical boundaries. Further, implementing patterns reduces maintenance complexity and improves documentation and clarity. A different vendor in a different country, or even in-house staff, may need to understand and modify the code; using a proven design allows ramp-up to occur easily (see the pattern sketch after this list).
  • Use meaningful metrics to measure quality and progress: Build metrics into SLAs and track measures such as defect density, bug reactivations, code churn, code coverage (unit test or otherwise), project velocity, etc. (a defect density sketch follows this list).
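
To make the frameworks point concrete, here is a minimal sketch, not taken from either project, of leaning on the Enterprise Library Logging Application Block instead of hand-rolled plumbing; the class name and the "General" category are hypothetical:

    using Microsoft.Practices.EnterpriseLibrary.Logging;

    public class OrderService  // hypothetical class for illustration
    {
        public void SubmitOrder(int orderId)
        {
            // One call into the application block replaces a custom
            // logging utility the vendor would otherwise have to
            // write, test, and maintain.
            Logger.Write("Order " + orderId + " submitted.", "General");
        }
    }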
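
In the same spirit, here is a small Strategy pattern sketch, with hypothetical shipping classes, showing how a standard pattern conveys intent to a remote team without a meeting:

    // Strategy pattern: the algorithm varies independently of its callers.
    public interface IShippingStrategy
    {
        decimal CalculateCost(decimal weight);
    }

    public class GroundShipping : IShippingStrategy
    {
        public decimal CalculateCost(decimal weight) { return 1.5m * weight; }
    }

    public class AirShipping : IShippingStrategy
    {
        public decimal CalculateCost(decimal weight) { return 4.0m * weight; }
    }

    public class ShippingCalculator
    {
        private readonly IShippingStrategy strategy;

        public ShippingCalculator(IShippingStrategy strategy)
        {
            this.strategy = strategy;
        }

        // Callers never know which algorithm is in use; a new carrier
        // means a new strategy class, not a change here.
        public decimal Quote(decimal weight)
        {
            return strategy.CalculateCost(weight);
        }
    }

Any developer who knows the pattern, onsite or offshore, can ramp up on this code without a walkthrough.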
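
And for the metrics, defect density is simply defects divided by code size; a sketch with made-up numbers:

    public static class QualityMetrics  // hypothetical helper for illustration
    {
        // Defect density = confirmed defects per thousand lines of code (KLOC).
        public static double DefectDensity(int defectCount, int linesOfCode)
        {
            return defectCount / (linesOfCode / 1000.0);
        }
    }

    // Made-up example: 46 defects in a 115,000-line delivery works out
    // to 46 / 115 = 0.4 defects per KLOC.

Trend the number per delivery rather than judging any single drop; the trend is what tells you whether the process is improving.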

Leveraging Visual Studio Team System

I have found that VSTS handles the needs of geographically dispersed development teams (and their inherent challenges) in a variety of ways:

  • "Better" communication and coordination: Team Foundation Server provides the ability to track work items (tasks, bugs, issues, requirements, risks, change requests) in addition to source code and documentation. The architecture, utilizing web services and cache servers, is optimized for distributed teams working over slow or unreliable connections. The product also allows for customizable notification of events to give you the clarity you need from the projects and help improve onsite/offshore communications
  • "Better" status reports: Coupled with the data warehouse maintaining statistics and work item history, reports can be generated to give you insight in to some of the metrics I mentioned above. The platform is also extensible, utilizing SQL Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) to allow team members to gather information with various levels of granularity.
  • "Better" code quality: the product also includes a unit testing framework that works with the centralized build and reporting portions to give indications of code churn, unit test coverage, and pass rates. Check in policies can be configured to check if the code compiles, run static code analysis, even run selected tests

Each one of these allows the onsite team to ensure compliance with corporate standards and gain trust and confidence in the offshore team's ability to deliver a quality, stable product.
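
To give a flavor of the work item tracking, here is a minimal sketch against the TFS object model of that era that pulls every active bug; the server URL is a placeholder and the query is illustrative only:

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    public class ActiveBugReport
    {
        public static void Main()
        {
            // Placeholder URL; substitute your own TFS instance.
            TeamFoundationServer tfs =
                TeamFoundationServerFactory.GetServer("http://tfsserver:8080");
            WorkItemStore store =
                (WorkItemStore)tfs.GetService(typeof(WorkItemStore));

            // WIQL query: every active bug, wherever the team sits.
            WorkItemCollection bugs = store.Query(
                "SELECT [System.Id], [System.Title] FROM WorkItems " +
                "WHERE [System.WorkItemType] = 'Bug' " +
                "AND [System.State] = 'Active'");

            foreach (WorkItem bug in bugs)
            {
                Console.WriteLine("{0}: {1}", bug.Id, bug.Title);
            }
        }
    }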
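
And on the code quality side, a trivial test in the built-in unit testing framework, reusing the hypothetical shipping classes from the pattern sketch above, is all it takes to start feeding pass rates and coverage into those reports:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ShippingCalculatorTests
    {
        [TestMethod]
        public void GroundShipping_TenPounds_CostsFifteen()
        {
            // Hypothetical classes from the pattern sketch earlier.
            ShippingCalculator calculator =
                new ShippingCalculator(new GroundShipping());

            Assert.AreEqual(15.0m, calculator.Quote(10m));
        }
    }

With a check-in policy requiring tests like this to pass, a broken drop is caught before it ever crosses the ocean.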