Application Lifecycle Management Implementation
This is the fourth post in the Application Lifecycle Management series. This part focuses on the standard approaches to implementing an ALM process. These strategies concentrate on the concepts and best practices, and then we shall look at implementing them using the tools that Microsoft provides.
For the past few days, I have been blogging about Application Lifecycle Management. You can read the previous posts:
Application Lifecycle Management Part 1 of 5
Application Lifecycle Management Part 2 of 5
Application Lifecycle Management Part 3 of 5
Application Lifecycle Management Part 5 of 5
Version control and a single coding stream
First, it’s important to store your artifacts in a version-control system (VCS), but which types of artifacts you store there depends on the project context and the requirements:
Although common VCS tools like Team Foundation Server or Subversion weren’t designed to run as file servers, it’s possible to store binary artifacts, such as Word documents and SQL Server database scripts, in them. This avoids the ugliness of storing files on a central shared file structure, where they can be overwritten at any time with no history tracking or traceability. Using a VCS for documents is vastly superior to another common method of sharing information: sending documents by email without a central place to hold them. Unfortunately, this practice is often the norm, resulting in back-and-forth with the clients whenever there is scope creep within the project.
A workspace is your client-side copy of the files and folders on the VCS. When you add, edit, delete, move, rename, or otherwise manage any source-controlled item, your changes are persisted, or marked as pending changes, in the workspace. A workspace is an isolated space where you can write and test your code without having to worry about how your modifications might affect the stability of checked-in sources or how you might be affected by changes that your teammates make. Pending changes are isolated in a workspace until you check them in to the source control server.
Although frequent integration is essential to rapid coding, developers need control over how they integrate changes into their workspaces so that they can work in the most productive way. Avoiding or delaying the integration of changes into a workspace means that the developer can complete a unit of work without dealing with unexpected problems, such as a surprise compilation error. This is known as working in isolation. Developers should always verify that their changes don’t break the integration build by updating their sandbox with the most recent changes from other members of the team and then performing a private build prior to committing changes back to the VCS. Private workspaces enable developers to test their changes before sharing them with the team.
The private build provides a quick and convenient way to see if your latest changes could impact other team members. These practices lead to highly productive development environments. If the quality of the checked-in code is poor (for example, if there were failed tests or compilation errors), other developers will suffer when they pull these changes into their workspaces and then see compilation or runtime errors. Getting broken code from the VCS costs everyone time, because developers have to wait for fixes or help a colleague fix the broken build, and then waste more time getting the latest clean code. It also means that all developers should stop checking in code until the broken build is fixed. Avoiding broken code is key to avoiding poor quality; to learn more, see a previous post where I discussed continuous integration and how it can improve code quality.
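The private-build workflow described above can be sketched as a simple gate: update your sandbox, build, run the tests, and only check in when everything is green. This is a minimal illustrative sketch in Python; the function names and result shapes are made up for the example and are not part of any real TFS API:

```python
# Sketch of a private-build gate run before checking in.
# All names here (private_build_passed, attempt_check_in, check_in)
# are hypothetical placeholders, not real VCS tooling commands.

def private_build_passed(compile_ok: bool, test_results: list) -> bool:
    """The build is green only when the code compiles and every test passed.

    test_results is a list of (test_name, passed) tuples.
    """
    return compile_ok and all(passed for _, passed in test_results)

def attempt_check_in(compile_ok, test_results, check_in):
    """Run the gate; invoke the check-in callable only when the build is green."""
    if private_build_passed(compile_ok, test_results):
        check_in()
        return "checked in"
    return "fix build before check-in"
```

Wiring the real compile and test steps into `compile_ok` and `test_results` is tool-specific, but the gate itself stays this simple: nothing reaches the shared VCS until the private build passes.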
Developers usually test their isolated changes and then, if they pass the tests, check them into the VCS. To learn more about how to test your code, please see a previous post where I discussed test-driven development. But an efficient flow is only possible when the local build and test times are minimal. If the gap between making code changes and getting the new test results is more than 20 to 30 seconds, the flow is interrupted. If the tests aren’t run frequently enough, quality decreases. Decreased quality, in turn, means that broken builds aren’t fixed immediately, and this becomes a vicious cycle.
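To keep that feedback loop under a few seconds, tests should exercise small, pure units of logic. The sketch below uses Python’s `unittest` as a language-neutral illustration (the same idea applies to MSTest on the Microsoft stack); `apply_discount` is a made-up function for the example:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business logic; pure functions like this test in milliseconds."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

Running such a suite with `python -m unittest` takes well under a second, so the gap between a code change and its test result stays far below the 20-to-30-second threshold where flow breaks down.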
You can optimize test round trips by automating the build process. This can be done by dedicating a build machine or by having a hosted build machine in the cloud. For teams of fewer than five developers, Team Foundation Service, which is basically Team Foundation Server in the cloud, would be the best option.
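A dedicated build machine essentially runs a loop like the one sketched below: poll the VCS for new changesets and trigger a build only when there is something new. This is a minimal sketch with hypothetical function names, not a real TFS API:

```python
# Hypothetical polling loop for a dedicated build machine.
# get_latest_changeset and run_build stand in for real VCS/build calls.

def needs_build(last_built_changeset: int, latest_changeset: int) -> bool:
    """Trigger a build only when check-ins have arrived since the last build."""
    return latest_changeset > last_built_changeset

def poll_once(last_built, get_latest_changeset, run_build):
    """One polling cycle: query the VCS, build if behind, return the new watermark."""
    latest = get_latest_changeset()
    if needs_build(last_built, latest):
        run_build(latest)
        return latest
    return last_built
```

In practice the CI tool handles this polling (or reacts to check-in events) for you; the sketch only shows why a build machine never rebuilds work it has already verified.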
Including Unit Tests in Continuous Integration
VCS and CI ensure that a given revision of code in development will build as intended or fail (break the build) if errors occur. The CI build acts as a “single point of truth”, so builds can be used with confidence for testing or as production candidates. Whether the build fails or succeeds, the CI tool makes its results available to the team. A developer may receive the information by email or RSS notification, depending on preference.
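The notification step can be as simple as formatting the build result into a message for the email or RSS channel. A minimal sketch, with made-up names:

```python
# Hypothetical CI notification formatter; build_id and failed_tests
# would come from the CI tool's build result.

def build_notification(build_id, succeeded, failed_tests):
    """Summarise a CI build result as a plain-text message for email/RSS."""
    status = "SUCCEEDED" if succeeded else "FAILED"
    lines = [f"Build {build_id}: {status}"]
    if failed_tests:
        lines.append("Failed tests:")
        lines.extend(f"  - {name}" for name in failed_tests)
    return "\n".join(lines)
```

Real CI tools ship this behaviour out of the box; the point is that the team sees the same single point of truth, whatever delivery channel each developer prefers.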
I once worked on a team that didn’t implement CI. We would spend a lot of time trying to integrate and merge the different code branches together, and of course this wasn’t adding any value to the customer. CI eradicated this problem by making sure that all the code compiled and all the tests associated with the project passed.
Agile ALM: Lightweight tools and Agile strategies