Application Lifecycle Management Part 4 of 5

 

Application Lifecycle Management Implementation  

This is the fourth post in the Application Lifecycle Management series. This part focuses on the standard approaches to implementing an ALM process: we shall first look at the concepts and best practices, and then at how to implement them using the tools that Microsoft provides.

For the past few days, I have been blogging about Application Lifecycle Management. You can read the other posts in the series:

Application Lifecycle Management Part 1 of 5

Application Lifecycle Management Part 2 of 5

Application Lifecycle Management Part 3 of 5

Application Lifecycle Management Part 5 of 5 

 

Version control and a single coding stream

First, it’s important to store your artifacts in a version-control system (VCS), but which types of artifacts you store there depends on the project context and the requirements:

  • Coding artifacts should be stored in the VCS. Although this sounds obvious, it’s not always the case. Many projects, for example, patch their applications in production without putting those changes under source control. This usually happens when a developer goes to a client site, patches an issue the customer was complaining about, and forgets to commit the changes to the main code base. In addition to source code, tests, build scripts, and database scripts need to be versioned.
  • I recommend that you externalize (and version) your runtime configuration settings. It’s a best practice to keep all variable values external to the application so that they can be changed easily without recompiling it (see the sketch after this list).
  • It’s wise to place documents, such as the use cases and other UML diagrams we produced in the previous post, into the version control repository so that you benefit from versioning and access to change history.
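
As a minimal sketch of that externalization idea, here is how an application might read its variable values from a config file kept outside the compiled code (shown in Python with a hypothetical settings.ini; in a .NET project the equivalent would be app.config or web.config). Changing the connection string then means editing a file, not recompiling:

```python
import configparser

# settings.ini lives outside the compiled application, is versioned in the VCS,
# and can be swapped per environment (dev/test/prod). Hypothetical contents:
#
#   [database]
#   connection_string = Server=prod-db;Database=Orders;Trusted_Connection=True;
#   [features]
#   enable_caching = true

config = configparser.ConfigParser()
config.read("settings.ini")

connection_string = config["database"]["connection_string"]
enable_caching = config["features"].getboolean("enable_caching")

print(f"Connecting with: {connection_string} (caching={enable_caching})")
```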

Although common VCS tools like Team Foundation Server and Subversion weren’t designed to be file servers, it’s possible to store binary artifacts, such as Word documents and SQL Server database scripts, in them. This avoids the ugliness of keeping files on a shared file structure, where they can be overwritten at any time with no history tracking or traceability. Using a VCS for documents is also vastly superior to another common way of sharing information: emailing documents around with no central place to hold them. Unfortunately, this practice is often the norm, and it results in endless back and forth with clients whenever there is scope creep on the project.

 

Productive workspaces

A workspace is your client-side copy of the files and folders on the VCS. When you add, edit, delete, move, rename, or otherwise manage any source-controlled item, your changes are persisted, or marked as pending changes, in the workspace. A workspace is an isolated space where you can write and test your code without having to worry about how your modifications might affect the stability of checked-in sources or how you might be affected by changes that your teammates make. Pending changes are isolated in a workspace until you check them in to the source control server.

Although frequent integration is essential to rapid coding, developers need control over how they integrate changes into their workspaces so that they can work in the most productive way. Avoiding or delaying the integration of changes into a workspace means that a developer can complete a unit of work without dealing with unexpected problems, such as a surprise compilation error. This is known as working in isolation. Developers should always verify that their changes don’t break the integration build by updating their sandbox with the most recent changes from other members of the team and then performing a private build before committing their changes back to the VCS. Private workspaces enable developers to test their changes before sharing them with the team.

The private build provides a quick and convenient way to see whether your latest changes could impact other team members. These practices lead to highly productive development environments. If the quality of the checked-in code is poor (for example, if there were failing tests or compilation errors), other developers will suffer when they pull those changes into their workspaces and hit compilation or runtime errors. Getting broken code from the VCS costs everyone time: developers have to wait for a fix or help a colleague repair the broken build, and then waste more time getting the latest clean code. It also means that all developers should stop checking code into the VCS until the broken build is fixed. Avoiding broken code is key to avoiding poor quality; to learn more, see a previous post where I discussed continuous integration and how it can improve code quality.
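
As an illustration of that pre-check-in discipline, here is a minimal sketch of a private build script, with Python used only as glue around the command-line tools. The solution name, test assembly path, and check-in comment are assumptions, and tf, msbuild, and vstest.console.exe are assumed to be on the PATH:

```python
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Run one step of the private build; abort on the first failure."""
    print(">", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"Private build failed at: {' '.join(cmd)}")

# 1. Update the sandbox with the team's most recent changes.
run(["tf", "get", "/recursive"])

# 2. Build locally -- a surprise compilation error stops here, not on the server.
run(["msbuild", "MySolution.sln", "/t:Rebuild"])

# 3. Run the unit tests against the private build.
run(["vstest.console.exe", r"Tests\bin\Debug\MyApp.Tests.dll"])

# 4. Only a green private build gets shared with the team.
run(["tf", "checkin", "/comment:Verified by private build"])
```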

Developers usually test their isolated changes and then, if the tests pass, check them into the VCS. To learn more about how to test your code, please see a previous post where I discussed test-driven development. But an efficient flow is only possible when the local build and test times are minimal. If the gap between making a code change and getting the new test results is more than 20 to 30 seconds, the flow is interrupted. If the tests aren’t run frequently enough, quality decreases. Decreased quality, in turn, means that broken builds aren’t fixed immediately, and this becomes a vicious cycle.
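
To keep that round trip under 20 to 30 seconds, unit tests should be small and avoid slow dependencies such as databases and web services. A hypothetical example (the discount_price function is invented for illustration, and Python’s built-in unittest stands in for MSTest here):

```python
import unittest

def discount_price(price: float, percent: float) -> float:
    """The unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount_price(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()  # in-memory tests like these run in milliseconds
```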

You can optimize test round trips by automating the build process, either by dedicating a build machine or by using a hosted build machine in the cloud. For teams of fewer than five developers, Team Foundation Service, which is essentially Team Foundation Server in the cloud, is the best option.

 

 

Including Unit Tests in Continuous Integration

  1. I shall access Team Foundation Service from here

  2. Click on the account that is associated with my Microsoft Account (@live, @hotmail, @outlook)

  3. Click on the project that I am currently working on

  4. From the resulting dashboard, I am able to see my build definition (learn about build definitions here). I can see clearly that the last build only partially succeeded; clicking on the build definition will provide more information on why that was the case.

  5. On clicking the build definition, I am able to get the following information about that build:

    • The person who broke the build, none other than yours truly

    • The changes I was working on when the build was broken

    • The code that was checked in that broke the build

  6. Armed with the information provided above, let’s go back and fix the bug that was introduced by changeset 86.

  7. Confirm that all the unit tests are now passing (all green); see the regression-test sketch after this list

  8. Proceed to check-in the code

  9. Add a comment for the changes made

  10. This bumps us up to changeset 87, which we have just checked in (bug free)

  11. Given that we set up our build definition for continuous integration, any check-in triggers a build on the server

  12. This can also be viewed on the web portal

  13. Our build should now succeed and details of the same can be viewed in both Visual Studio and the TFS web portal.
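
A good habit when fixing a build break like the one above is to first capture the failure as a unit test, so the build can never break the same way twice. A hypothetical sketch of such a regression test (the average function and its defect are invented for illustration; Python’s unittest again stands in for MSTest):

```python
import unittest

def average(values: list[float]) -> float:
    """Mean of the values. The broken version divided by zero on an empty list."""
    if not values:  # the fix: guard the empty case
        return 0.0
    return sum(values) / len(values)

class AverageRegressionTests(unittest.TestCase):
    def test_average_of_values(self):
        self.assertEqual(average([2.0, 4.0]), 3.0)

    def test_empty_list_regression(self):
        # This test reproduced the defect before the fix was checked in.
        self.assertEqual(average([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```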


Together, the VCS and CI ensure that a given revision of the code will either build as intended or fail (break the build) if errors occur. The CI build acts as a single point of truth, so builds can be used with confidence for testing or as production candidates. Whether the build fails or succeeds, the CI tool makes the results available to the team; developers may receive the information by email or RSS notification, depending on their preference.

I once worked on a team that didn’t implement CI; we would spend a lot of time trying to integrate and merge the different code branches together, which of course added no value for the customer. CI eradicated this problem by making sure that all the code compiled and all the tests associated with the project passed.

 

Happy Coding!

 

 

References

Hüttermann, Michael. Agile ALM: Lightweight Tools and Agile Strategies. Manning Publications, 2011.