6 - Improving the Pipeline

The Trey Research Team has come a long way. They have a continuous delivery pipeline that they can monitor for valuable feedback. They're also tracking metrics that let them know how well they're doing, in terms of adding value to the business. Are they finished? Of course not. There are still problems and there are many improvements that they can make.

This chapter offers suggestions for ways to further improve the pipeline and its associated components and practices. For example, there's a discussion of possible branching strategies. While none of these areas are discussed in great depth, we hope there's enough information to help you identify areas that can benefit from some changes, and some practical guidance on how you might implement those changes.

Patterns and Practices for Improving the Pipeline

Here are some patterns and best practices to follow to further improve a continuous delivery pipeline, no matter which specific technologies you use.

Use Visuals to Display Data

Your first reaction to the feedback you see may be that it's incomprehensible. For example, if you use Build Explorer to monitor your pipeline there's a great deal of useful information, but you may want to make it easier to understand. Information becomes much more accessible if you implement a pipeline solution that automatically creates some visuals that present data in an organized and easily understandable format.

An especially helpful visual technique is a matrix view that shows the status of multiple pipeline instances. Another useful visual is a flowchart view, which shows the status of one pipeline instance, the sequence of stages, and the dependencies between them. An example of a dependency is that the acceptance test stage requires the commit stage to succeed before it can be triggered. Finally, another improvement is to have some easily identifiable way to open the build summary page for a stage, in order to make it easy to get more detailed information.

The following illustration is an example of a matrix view. With it, you can easily assess the status of multiple pipeline instances.

[Illustration: a matrix view showing the status of multiple pipeline instances]

The following illustration is an example of a flowchart view. You can see the status of each stage of a particular pipeline instance.

[Illustration: a flowchart view showing the status of each stage in a single pipeline instance]

Choose a Suitable Branching Strategy

Coordinating the changes that multiple developers contribute to the same code base can be difficult, even if there are only a few programmers. Code branching is the most widely adopted strategy, especially since distributed version control systems such as Git have become popular. (For information on how to use Git with TFS, see Brian Harry's blog.)

However, branching is not always the best strategy if you want to use continuous delivery. In the context of a continuous delivery pipeline, it's unclear how to work with merge operations. Best practices such as building only once or propagating changes automatically are harder to adopt if merges across several branches are required. You can't build only once if you merge several branches: each time you merge, the code has to be rebuilt before validations are performed on that branch.

You also can't propagate changes automatically if the propagation involves a merge operation, which often can't be completed automatically because of conflicts. You can read more about branching in the context of continuous delivery in Jez Humble's article DVCS, continuous integration, and feature branches.

In terms of continuous delivery, a better approach is to get rid of branching and use feature toggles instead. A feature toggle makes it possible to show a new feature to users or to hide it. In other words, you can toggle a feature's visibility.

With feature toggles, everyone works from the main source trunk, and features can be deployed to production even if they're not finished, as long as they're hidden. Feature toggles are often implemented as configuration switches that can easily be changed to activate or deactivate specific features at run time. Because unfinished work stays hidden behind a toggle, your code is always in a production-ready state, and you only have to maintain a single line of development. Feature toggles also enable other continuous delivery practices such as A/B testing and canary releases, which are covered later in this chapter. A good place to start learning about feature toggles is Martin Fowler's blog.
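For illustration, here's a minimal sketch of a feature toggle implemented as a configuration switch in a .NET application. The FeatureToggles class and the "FeatureToggle.NewCheckout" setting are hypothetical names used only for this example; they are not part of the Trey Research implementation.

using System.Configuration;

// Reads toggles from appSettings, for example:
//   <add key="FeatureToggle.NewCheckout" value="false" />
public static class FeatureToggles
{
    public static bool IsEnabled(string featureName)
    {
        // A missing or unparsable value counts as "off", so unfinished
        // features stay hidden unless they are explicitly switched on.
        string value = ConfigurationManager.AppSettings["FeatureToggle." + featureName];
        bool enabled;
        return bool.TryParse(value, out enabled) && enabled;
    }
}

// Usage: branch on the toggle at run time instead of creating a version control branch.
// if (FeatureToggles.IsEnabled("NewCheckout")) { ShowNewCheckoutPage(); } else { ShowClassicCheckoutPage(); }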

Many teams feel that feature toggles are difficult to implement and so prefer branching. If your team feels the same way, there are some best practices to follow if you practice continuous delivery.

Use Feature Branching

Use feature branching, by which we mean that you should set up a separate development branch for each new feature. Prefer this to any other branching strategy, such as having a single, shared development branch. Other strategies make it even more difficult to follow best practices for continuous delivery, such as always having a version of your code that is ready to go to production.

Feature Branches Should Be Short Lived

Feature branches should be as short lived as possible. Don’t let changes stay too long in a branch. The longer changes are unmerged, the more difficult the merge operation will be. By short lived, we mean that a branch typically exists for a few hours, and never longer than a few days.

Keep Features Small

To have short-lived feature branches, make sure that the feature is small enough so that it can be implemented in a few days, at most.

Define and Enforce a Merge Policy

Define a merge policy and enforce it. Small changes should be integrated frequently with the main branch, and from there into any other development branches. Any time a feature is implemented, the corresponding development branch should be merged into the main branch, and the development branch should be discarded. Any time the main branch incorporates changes (because of a merge operation), those changes must be propagated to the other active development branches, so that integration happens there and problems with the main branch are minimized.

Don't Cherry Pick

Cherry picking means that you merge only specific check-ins made to a branch, instead of all the changes. Always merge from the root of the source branch to the root of the target, and merge the entire set of changes. Cherry picking makes it difficult to track which changes have been merged and which haven't, so integration becomes more complex. If you find that you must cherry pick, consider it a symptom of a feature that is too broadly defined, and whose corresponding branch contains more items than it should.
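For example, in TFS version control a full merge from the root of a feature branch into the root of the main branch could be done from the command line as follows (the branch paths are only illustrative):

tf merge $/TreyResearch/Feature-NewCheckout $/TreyResearch/Main /recursive
tf checkin /comment:"Merge Feature-NewCheckout into Main"

The /recursive option merges the entire branch, including all unmerged changesets, rather than a hand-picked subset.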

Make the Main Branch the Pipeline Source

Always make the main branch the source for the pipeline. The pipeline must build once and changes should be propagated automatically as much as possible. The easiest way to accomplish this is to avoid merge operations in the context of the pipeline. Make the main branch the source for changes that are sent through the pipeline. The main branch contains the code that is complete and ready for production after it passes all the validations performed by the pipeline stages.

Another option is to have different pipelines that support different branches, where the pipeline would validate the code in each development branch just before the code is merged with the main branch. However, this option generally doesn't provide enough benefits to make it worth the cost of setting it up.

Fix Problems in the Main Branch

If a merge operation from a development branch to the main branch causes the pipeline to fail, fix the problem in the main branch and run the fix through the pipeline. If it's a major problem, you can create a special branch dedicated solely to the problem. However, don't reuse the feature branch that was the source of the merge problem and try to fix it there. It's probable that the branch is already out of sync with the latest integration changes.

Use Release Branches as Archives

Use release branches only as archives. After the pipeline successfully verifies a change, you can store the code in a release branch before you send it to production. An archive allows you to quickly access a particular version of the code if problems occur. Make sure, however, that you always release to production the binaries that came from the main branch and that were built and validated by the pipeline.

Use the Pipeline to Perform Nonfunctional Tests

There may be many nonfunctional requirements that your application must satisfy, other than the specific behaviors detailed in the business specifications. There is an extensive list of nonfunctional requirements in the Wikipedia article Nonfunctional requirement. If you decide to test any of them, remember that the best way to go about it is to run the tests as either automatic or manual steps in some stage of the pipeline. For example, with the proper tools, you can run security tests as an automatic step. On the other hand, usability tests are usually performed as manual steps.

Use the Pipeline for Performance Tests

If you need to validate that your application can, for example, perform under a particular load or that it satisfies a particular capacity requirement, you can add stages to your pipeline that are devoted to performance tests.

Automate Environment Provisioning and Management

The benefits of automation are a constant theme throughout this guidance, and provisioning and managing your environments are no exception. Whether you need a single computer or an entire environment to run validations inside the pipeline, your life will be easier if the setup is done with the push of a button. Along with the usual benefits you get from automation, you will find that you can easily implement advanced techniques such as canary releases and blue/green deployments, which are discussed later in this chapter. Virtualization, typically in the cloud, is the most convenient way to implement this automation, but you can use physical machines if you have the right tools, such as the Windows Automated Installation Kit (AIK).

Use Canary Releases

A canary release is when you release some version of your application to a particular group of users, before it's released to everyone else. A canary release can help you to identify problems that surface in the production environment before they affect your entire user base. There are also other benefits.

  • Canary releases provide fast, pertinent feedback about your application, especially if the target users are chosen wisely.
  • Canary releases can help to simplify load and capacity testing because you perform the tests against a smaller set of servers or running instances of the application.
  • Canary releases make rollbacks easier because, if there are problems, you simply stop making the new version available to the target users. You also don't have to inconvenience your entire user base with the rollback.
  • Once you can perform canary releases, it will be easier to add A/B testing, which is when you have two groups of target users. One group sees the new version of the software and the other group sees the old version.

To learn more about canary releases, see Chapter 10 of Jez Humble's and David Farley’s book, Continuous Delivery.

Use A/B Testing

A/B testing consists of releasing a feature and measuring how well it performs (for example, whether users like it better than the old version, or whether it improves a particular aspect of the application). Depending on the results, you either keep the feature or discard it. A/B testing is frequently done by comparing two different versions of a feature that are released at the same time; often, one is the original feature and the other is the new one. A/B testing is a powerful tool that businesses can use to get feedback about changes to the software. For more information about A/B testing, see A/B testing on Wikipedia. You can also refer to Chapter 10 of Jez Humble's and David Farley's book, Continuous Delivery.
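As a sketch of how users could be assigned to one variant or the other, the following hypothetical helper buckets users deterministically by hashing the user name, so each user always sees the same variant; the class name and the percentage are assumptions made for this example. The same kind of bucketing can also route a small percentage of users to a canary release.

using System;
using System.Security.Cryptography;
using System.Text;

public static class AbTestRouter
{
    // Returns true if the user should see variant B; all other users see variant A.
    // Hashing the user name keeps the assignment stable across sessions and servers.
    public static bool IsInVariantB(string userName, int percentInVariantB)
    {
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(userName.ToLowerInvariant()));
            uint bucket = BitConverter.ToUInt32(hash, 0) % 100;
            return bucket < (uint)percentInVariantB;
        }
    }
}

// Usage: show the new version of the feature to roughly 10 percent of users.
// bool showNewVersion = AbTestRouter.IsInVariantB(currentUserName, 10);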

Use Blue/Green Deployments

Blue/green deployments occur when there are two copies of the production environment, where one is traditionally named blue and the other green. Users are routed to one of the two environments. In this example, we'll say that users are routed to the green environment. The blue environment is either idle or receives new binaries from the release stage of the pipeline.

In the blue environment, you can perform any sort of verification you like. Once you're satisfied that the application is ready, you switch the router so that users are now sent to the blue environment, which has the latest changes. The green environment then takes on the role of the blue environment, and is either idle or verifies the code that is coming from the pipeline.

If anything goes wrong with the blue environment, you can point your users back to the green environment. Blue/green deployments mean that you can have releases with close to zero downtime, which is very useful if you are continuously (or at least, frequently) delivering new software. If you have a very large system, you can extend the blue/green model to contain multiple environments.
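Here is a minimal sketch of the routing switch, assuming the active environment is recorded in a configuration setting (the setting names and addresses are hypothetical):

using System;
using System.Configuration;

// The "LiveEnvironment" setting names the copy of production ("blue" or "green")
// that currently receives user traffic. Switching environments means changing
// this setting; the other copy remains available for verification or rollback.
public static class BlueGreenRouter
{
    public static Uri GetLiveBaseAddress()
    {
        string live = ConfigurationManager.AppSettings["LiveEnvironment"];
        string addressKey = string.Equals(live, "blue", StringComparison.OrdinalIgnoreCase)
            ? "BlueBaseAddress"
            : "GreenBaseAddress";
        return new Uri(ConfigurationManager.AppSettings[addressKey]);
    }
}

In practice, the switch is usually made at a load balancer or reverse proxy rather than in application code, but the principle is the same: a single pointer decides which environment is live.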

To learn more about blue/green deployments, see Chapter 10 of Jez Humble's and David Farley’s book, Continuous Delivery.

Set Up Error Detection in the Production Environment

Chapter 5 discusses the importance of monitoring the application in the production environment. Catching production bugs early increases the Mean Time Between Failures (MTBF) or, to think of it in another way, lowers the defect rate.

One of the best ways to improve software quality is to make sure that the information the operations team gathers as they monitor an application is shared with the development team. The more detailed the information, the more it helps the development team to debug any problems. Good information can also help to lower the Mean Time To Recover (MTTR).

Use Telemetry and Analytics

You can lower the MTBF and the MTTR by using telemetry and analytics tools in the production environment. There are third-party tools available that look for exceptions, silent exceptions, usage trends, and patterns of failure. The tools aggregate data, analyze it and present the results. When a team examines these results, they may find previously unknown problems, or even be able to anticipate potential problems.

Purposely Cause Random Failures

It's always good to proactively look for potential points of failure in the production environment. One effective way to do this (and, in consequence, to lower the MTBF) is to deliberately cause random failures and to attack the application. The result is that vulnerabilities are discovered early. Also, the team learns how to handle similar problems so that, if they actually occur, they know how to fix them quickly.

One of the best-known toolsets for creating this type of controlled chaos is Netflix's Simian Army, which Netflix uses to ensure the resiliency of its own environments. Some of the disruptions it causes include:

  • Randomly disabling production instances.
  • Introducing artificial delays in the network.
  • Shutting down nodes that don’t adhere to a set of predefined best practices.
  • Disposing of unused resources.

Optimize Your Debugging Process

The faster you solve a problem that occurs in the production environment, the better it is for the business. The measurable effect of debugging efficiently is that your MTTR goes down. Unfortunately, problems that occur in production can be particularly difficult to reproduce and fix; if they were easier to detect, they would have been found sooner. There are a variety of tools available that can help you optimize your debugging process. Here are some of them.

Microsoft Test Manager

Microsoft Test Manager (MTM) allows you to file extremely detailed bug reports. For example, you can include the steps you followed that resulted in finding the bug, and event logs from the machine where the bug occurred. For an overview of how to use MTM, see What's New in Microsoft Test Manager 2012.

Standalone IntelliTrace

Use the standalone IntelliTrace collector to debug applications in the production (or other) environments without using Visual Studio. The collector generates a trace file that records what happened to the application. For example, it records the sequence of method calls, and the values of variables. You may be able to find the cause of the problem without rerunning the application.

Symbol Servers

A symbol server enables debuggers to automatically retrieve the correct symbol files without needing product names, releases, or build numbers. Without a symbol server, you would have to get the source code of the correct version, and search for the debugging symbols in the binaries repository (or any other repository you use). If you can't find the symbols, you would need to rebuild the application. (Note that all this work will adversely affect your MTTR.) If you do use a symbol server, you can easily retrieve the debugging information from there, on demand.
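For example, debuggers that honor the _NT_SYMBOL_PATH environment variable can be pointed at a symbol server together with a local cache; the server name below is only a placeholder:

set _NT_SYMBOL_PATH=srv*C:\LocalSymbolCache*\\tfsserver\symbols

With this setting, the debugger first looks in the local cache and then downloads any missing symbol files from the team's symbol share on demand.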

Profilers

Profilers help you to discover the source of performance-related problems such as poor memory usage or resource contention.

Keep the Pipeline Secure

A fully automated pipeline can be a recipe for disaster if it's misused. For example, you can instantly disable a production server by running an automated deployment script manually, outside of the context of the pipeline, without knowing what the script does or which version of the binaries or the script you're using.

Trust among team members is always the best defense. Effective teams that use continuous delivery and adopt a DevOps mindset trust each other to make changes to the environments because it's assumed they know what they're doing, they've done it many times before, and they follow the rules.

The same holds true if there are different teams for development and operations. Again, assuming that the teams use a DevOps approach, they collaborate closely with each other. In addition, they make sure that there is complete transparency about what is involved in each deployment, which changes are being made to the target environments, and any potential problems that could occur. Of course, trust is easier to achieve if environments are automatically provisioned and managed, which means that a new copy of any of them is only one click away.

However, trust is generally not enough. You may have novices on your team, or there may be so many people involved that you can't know them all. Securing the pipeline is usually a necessity.

The first step to securing the pipeline is discussed in Chapter 4 of this guidance. Lock down the environments so that they can be changed only by administrators and the service accounts that run the pipeline automations. Locking down the environments prevents anyone else from purposely or accidentally logging on to the target machines and causing potentially damaging changes (perhaps by using remote scripting).

The next step is to control who can run the pipeline, or even better, who can trigger specific pipeline stages. For example, you may want particular users to have access to the UAT stage of the pipeline so that they can automatically deploy to a staging environment and perform the tests. However, it's unlikely that you want those same users to have access to the release stage, where they could potentially deploy to production. You'll need to set the permissions of the release tool (in our case, this is TFS) so that only the appropriate people can run either the pipeline, or stages of the pipeline.

The third step is to control who can modify what the pipeline does. A malicious or naive team member could introduce actions that threaten the integrity of the environments by changing what the stages, steps, or automated deployments do. Again, the available security model provided by the release tool can help to configure the appropriate permissions.

Use a Binaries Repository for Your Dependencies

Generally, you can use the TFS build drops folder as the default binaries repository but you may want to make an exception for some dependencies, such as libraries. For them, consider using the official NuGet feed in Visual Studio. Using NuGet has many advantages. Team members know that they're using the correct version of these dependencies because that information is stored and managed inside the Visual Studio project and updated in version control when it changes. Furthermore, NuGet alerts you if new versions of the dependencies are available, so that you can decide if you want the updated versions.

You can get the same benefits for libraries and dependencies that are not part of the official NuGet package feed. These include, for example, third-party components and libraries that you may have purchased, for which you don't own the source code. They can also include libraries developed by your company that are common to several projects.

Simply install your own NuGet server and place all these dependencies in it, and then add the feed to Visual Studio. Your feed will work the same way as the official feed. If you want, you can use a shared network folder to set up and hold the feed. Alternatively, you can set up a full featured feed that includes automatic notification of updates. If you subscribe to it, you'll be notified when a new version of the dependency is available.

To set up the repository so that it is updated automatically, create a release pipeline for the common code, have its release stage generate the NuGet packages from the validated binaries, and push the packages to the NuGet server. (Common code is code written within the organization, as opposed to third-party code, that is used in several applications. A typical example of common code is a utility class.)

Use a Management Release Tool

Although not a best practice in itself, a release management tool can certainly help you implement some best practices; its purpose is to let you build once but deploy to multiple environments. One possibility is the DevOps Deployment Workbench Express Edition, which is currently available. You can read more about this tool in Appendix 1.

Another possibility is InRelease, which will be included with Visual Studio 2013 and is currently available as a preview. The different components that make up a pipeline and that are discussed in this guidance have counterparts in InRelease. Here are the correspondences.

  • The pipeline itself is defined with a release template. Pipeline instances are called releases.
  • Stages and environments are the same in InRelease as in this guidance.
  • Steps are created with actions and tools artifacts. InRelease has a library with many predefined steps.
  • You can define and manage additional elements, which in InRelease are referred to as components or technology types. In this guidance, these are either represented implicitly (for example, components are Visual Studio projects) or are not used in the implementation presented here.

The following screenshot shows an example of an InRelease menu that allows you to manage various components.

[Screenshot: an InRelease menu for managing components]

Trey Research

Now let's take a look at how Trey Research is planning to implement these patterns and practices. They still have problems they want to address in future iterations. Here are the ones that concern them the most.

Issue: They have integration problems. When they try to integrate code made by different team members or groups, there are merge conflicts. They spend too much time on integrations and it's not clear what to merge.
Cause: They use long-lived development (feature) branches.
Solution: Use feature toggles instead of feature branches.

Issue: They don't have a way to know if users like new features. This means they don't know if it's worthwhile to invest in them further or if they should discard them.
Cause: There's no mechanism in place to show new features to users while they're in the early stages of development. In other words, there's no feedback.
Solution: Use feature toggles and/or canary releases. Perform A/B tests.

Issue: They don't know if the application is vulnerable to security threats.
Cause: There are no security tests.
Solution: Introduce a security testing stage.

Issue: They don't know if the application can cope with the expected load.
Cause: There are no capacity or load tests.
Solution: Add a capacity testing stage to perform capacity and load tests.

Issue: The MTBF is too small. The system fails in production more than it should. Some of the bugs found in production should be found earlier.
Cause: The production environment isn't monitored for hard-to-find bugs or silent exceptions.
Solution: Integrate TFS and SCOM, so that issues detected in production by SCOM are recorded in TFS. Use analytics to gather data and identify potential issues.

Issue: The MTTR is too high. When an issue or outage occurs in production, they spend too long investigating and fixing it.
Cause: The current tools are inadequate.
Solution: Use the IntelliTrace standalone collector in the production environment to generate traces for the exceptions and to help debug them. Set up a symbol server that the commit stage updates automatically so that the debug symbols are always available to debug any build.

Issue: Team members use different or outdated versions of their own and third-party libraries while developing, deploying, and testing.
Cause: Code and libraries aren't managed. When a common library is updated, the people who use it aren't notified or don't have an easy way to get it.
Solution: Set up a common artifact repository for their own and third-party common libraries and dependencies by using a NuGet server. For libraries they develop, update the repository automatically by using the pipeline.

Issue: It's hard to set up new environments that have all the required applications and the correct configurations.
Cause: Environment provisioning is not automated.
Solution: Automate environment provisioning by using SCVMM, virtual machine templates, and Lab Management when necessary.

Of course, they know that they'll need to improve their pipeline (not once, but many times over) to finally achieve all their goals. The following diagram shows what the Trey Research pipeline will look like sometime in the future.

[Diagram: the future Trey Research pipeline]

Jin's excited.

Jin says:

Friday, September 13, 2013


Everyone on the team has ideas for what we can do next. We've even created a special work item area in TFS for all the improvements! Of course, we'll have to fit them in with everything else but we can do it. The team's really a team now. This afternoon, Paulus made me a cup of coffee with his space age espresso machine. I was up until 3AM playing Halo.

Here's the current list of work items for improving the Trey Research pipeline.

[Screenshot: the list of work items for improving the Trey Research pipeline]

The next section discusses how Trey Research is thinking of implementing the work items.

Trey Research's Plans for Improving the Pipeline

Here's how Trey Research is thinking of implementing future improvements.

How Will Trey Research Implement Visuals

The Trey Research team is thinking about using either a web page or a Visual Studio plugin that can create these diagrams. There are several ways that they can retrieve the data.

  • Team Foundation has an API that can retrieve the status of ongoing and finished builds, and also trigger new ones. For more information, see Extending Team Foundation.
  • The TFS OData API exposes similar functionality.
  • TFS Web Access can be extended by writing a plugin in JavaScript, although the API is still not well documented. Here's a blog post with a compilation of links to sites that have example plugins.

Of course, another possibility is to use a tool that already has the capability to create these visuals. One such tool is InRelease, which will be a part of Visual Studio 2013.

How Will Trey Research Implement a Branching Strategy

Until they become more familiar with feature toggles, Trey Research will use short-lived feature branches. They'll merge frequently, in order to keep the integration process straightforward. In addition, they've set up a TFS alert that warns them whenever a change has been merged to the main branch. The alert lets them know that they can update whatever feature branch they're working on. They avoid cherry-picking, and the commit stage of their pipeline is configured to retrieve changes only from the main branch on every check-in. Here's an example of an alert.

[Screenshot: a TFS alert notifying the team of a merge to the main branch]

How Will Trey Research Perform Nonfunctional Tests

The team's going to decide which nonfunctional tests to run on a case-by-case basis.

How Will Trey Research Implement Performance Tests

The team is planning to first write the performance tests. Visual Studio Ultimate has many tools that help you to write web performance and load tests. For more information, see Testing Performance and Stress Using Visual Studio Web Performance and Load Tests. After the tests are written, the team is planning to add at least one new stage to the pipeline that is devoted to performance tests. They will use a test rig that consists of at least one test controller to orchestrate the process, and as many test agents as necessary to generate the loads. For more information, see Using Test Controllers and Test Agents with Load Tests. You might also want to look at Lab 5–Adding New Stages to the Pipeline and Lab 3.3–Running the Automated Tests.

For applications with many users, you may find that it's difficult and expensive to set up a test rig that properly simulates the conditions of the production environment. A cloud-based solution, where the load is generated from agents installed in Windows Azure, may be the best way to handle this scenario. Visual Studio 2013 will provide a cloud-based load testing solution. You can get more details at the Visual Studio ALM + Team Foundation Server blog.

How Will Trey Research Automate Environment Provisioning and Management

Now that they're managing environments by using Lab Management, the team plans to add System Center Virtual Machine Manager (SCVMM). This will allow them to automatically manage virtual machines from within Lab Management as well as from within the stages of the pipeline. Using SCVMM will ensure that the environments are automatically provisioned when they are needed. For more information, see Configuring Lab Management for SCVMM Environments.

How Will Trey Research Implement Canary Releases

Performing canary releases seems quite feasible because the team has already automated deployments and tokenized the parameters. These improvements allow them to automatically configure the deployment script to suit the target environment and server that is receiving the new binaries. Raymond could configure the network infrastructure so that requests are routed to the servers dedicated to the chosen subset of users. Once they have other best practices in place, such as automated environment provisioning and feature toggles, it will be even easier to perform canary releases.

How Will Trey Research Implement A/B Testing

The team is first going to set up feature toggles and canary releases. Implementing these two best practices, along with automated deployments, will make it much easier to set up the pipeline so that it activates and deactivates specific features, or releases them only to a certain group of users.

How Will Trey Research Implement Blue/Green Deployments

As with canary releases, automated and tokenized deployments make it much easier for the team to perform blue/green deployments because they can immediately point the output of the pipeline to either environment. Once automated environment provisioning is in place, by using SCVMM, blue/green deployments will be even more feasible. With SCVMM, it will be easy to automatically set up identical environments.

How Will Trey Research Implement Error Detection

Raymond has plans to set up System Center 2012 Operations Manager (SCOM) to monitor the production environment. Once this happens, the team wants to integrate it with TFS so that issues detected by SCOM are recorded in TFS as work items. If the team updates the work items as they solve the issue, the updates are sent back to SCOM so their status is also reflected there. This keeps the development and operations teams in sync. Work items can contain useful data about the issue, such as exception details or IntelliTrace files that help the development team quickly solve the problem, keep the MTTR low and keep the operations people informed. For more information about SCOM, see Operations Manager. For more information about how to integrate SCOM with TFS and other tools, see Integrating Operations with Development Process.

How Will Trey Research Implement Telemetry and Analytics

The Trey Research team is thinking about using PreEmptive Analytics for Team Foundation Server. This tool analyzes silent failures and automatically creates work items in TFS.

How Will Trey Research Purposely Cause Random Failures

Netflix has released the source code for the Simian Army under the Apache license, which means that the Trey Research team can modify and use it, provided they retain the copyright notice. The team plans to study the code and perhaps use it as the starting point for a solution that fits their situation.

How Will Trey Research Optimize Their Debugging Process

Trey Research already uses MTM to file bugs. They are learning about using IntelliTrace in the production environment, symbol servers in TFS, and the Visual Studio Profiler. For more information, see Collect IntelliTrace Data Outside Visual Studio with the Standalone Collector. For more information about symbol servers, see Publish Symbol Data. For more information about profilers, see Analyzing Application Performance by Using Profiling Tools.

How Will Trey Research Secure Their Pipeline

In Chapter 4, the team locked down the environments so that only the appropriate service accounts and the administrator (Raymond) could log on to the machines.

For now, Raymond controls all the environments and he's still reluctant to give the rest of the team any level of access, especially to the production environment. Whether or not these rules become more relaxed remains to be seen. Many companies have policies that restrict access to the production environment to only a few people.

Raymond has enforced his control of the environments by denying permission to manually queue build definitions in TFS. This restriction won't impact the rate at which the team delivers releases because, in TFS, each check-in triggers a new pipeline instance that begins with the commit stage build definition. The pipeline instance is created whether or not the person doing the check-in has permission to manually queue builds.

The drawback is that, if each automatically triggered stage succeeds, the pipeline advances until it reaches the first manually triggered stage. At that point, the team member or other person who performs the UATs must ask Raymond to trigger it for them. If the release schedule becomes very busy, Raymond may become a bottleneck, which will be reflected in the Kanban board. He may have to adopt a different strategy, such as allowing another team member to also trigger the stages.

Even if everyone on the team could queue builds, they would still need to restrict access to the pipeline stages. For example, they won't want their UAT testers to have access to any stage but the UAT stage. Fortunately, in TFS, it's possible to set permissions at the build definition level, so you can let users trigger a specific stage while stopping them from triggering any others. The following screenshot shows how to use Team Explorer to access the build definition permissions by selecting Security.

[Screenshot: accessing build definition permissions in Team Explorer by selecting Security]

The permissions that determine whether someone can change any component of the pipeline can be this granular because everything is under version control. Permissions in version control can be set at the item level, where an item can be one of the customized build process templates that make up the stages, any of the scripts, or the code used by the steps.

How Will Trey Research Implement a Binaries Repository

The team plans to investigate how to create a script that generates the NuGet packages and pushes them to the NuGet server. They think they'll use the nuget.exe command line tool, and the pack and push commands.
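A sketch of what the core of such a script might look like, using the pack and push commands (the project name, version, and feed URL are placeholders):

nuget pack TreyResearch.Common.csproj -Version 1.0.0 -OutputDirectory .\packages
nuget push .\packages\TreyResearch.Common.1.0.0.nupkg -Source http://nugetserver/nuget -ApiKey <key>

The release stage of the common-code pipeline would run these commands against the validated binaries, so that the feed always contains packages that have passed the pipeline.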

The Conclusion

Here are Jin's final thoughts, at least for now.

[Image: Jin's final thoughts]

More Information

There are a number of resources listed in text throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links so that these resources are just a click away. You can find the bibliography at: https://msdn.microsoft.com/library/dn449954.aspx.

If you're not familiar with branching strategies, and specifically with feature branches, see the Rangers' Visual Studio Team Foundation Server Branching and Merging Guide. However, if you're using continuous delivery, apply its advice after you've made sure you're following the best practices in this guidance.

For information on how to use Git with TFS, see Brian Harry's blog at https://blogs.msdn.com/b/bharry/archive/2013/06/19/enterprise-grade-git.aspx.

You can read more about branching in the context of continuous delivery in Jez Humble's article DVCS, continuous integration, and feature branches at http://continuousdelivery.com/2011/07/on-dvcs-continuous-integration-and-feature-branches/.

A good place to start learning about feature toggles is Martin Fowler's blog at http://martinfowler.com/bliki/FeatureToggle.html.

There is an extensive list of nonfunctional requirements in the Wikipedia article Nonfunctional requirement at https://en.wikipedia.org/wiki/Non-functional_requirement.

The Windows Automated Installation Kit (AIK) is available at https://www.microsoft.com/en-us/download/details.aspx?id=5753.

More information about A/B testing is available in the A/B testing article on Wikipedia at http://en.wikipedia.org/wiki/A/B_testing.

For more information about Netflix's Simian Army, see http://techblog.netflix.com/2011/07/netflix-simian-army.html.

For an overview of how to use MTM, see What's New in Microsoft Test Manager 2012 at https://msdn.microsoft.com/magazine/jj618301.aspx.

The official NuGet feed for Visual Studio is available at www.nuget.org.

To install your own NuGet server, see NuGet.Server at https://www.nuget.org/packages/NuGet.Server/.

The DevOps Deployment Workbench Express Edition is available at https://vsardevops.codeplex.com/.

For information about InRelease, which will be included with Visual Studio 2013 and is currently available as a preview, see http://www.incyclesoftware.com/inrelease/inrelease-2013-preview/.

For more information on using the Team Foundation API to retrieve the status of ongoing and finished builds, and to trigger new ones, see Extending Team Foundation at https://msdn.microsoft.com/library/bb130146(v=vs.110).aspx.

For more information about the TFS OData API, see https://tfsodata.visualstudio.com/.

For a compilation of links to example TFS Web Access plugins written in JavaScript, see http://bzbetty.blogspot.com.es/2012/09/tfs-2012-web-access-plugins.html.

For more information about the tools available in Visual Studio Ultimate that help you to write web performance and load tests, see Testing Performance and Stress Using Visual Studio Web Performance and Load Tests at https://msdn.microsoft.com/library/vstudio/dd293540.aspx.

For more information on using a test rig that consists of at least one test controller to orchestrate the process, see Using Test Controllers and Test Agents with Load Tests at https://msdn.microsoft.com/library/vstudio/ee390841.aspx.

For more information on using SCVMM to ensure that the environments are automatically provisioned when they are needed, see Configuring Lab Management for SCVMM Environments at https://msdn.microsoft.com/library/vstudio/dd380687.aspx.

For more information about SCOM, see Operations Manager at https://technet.microsoft.com/library/hh205987.aspx.

For more information about how to integrate SCOM with TFS and other tools, see Integrating Operations with Development Process at https://technet.microsoft.com/library/jj614609.aspx.

For more information about using PreEmptive Analytics for Team Foundation Server to analyze silent failures and automatically create work items in TFS, see http://www.preemptive.com/products/patfs/overview.

For more information on using IntelliTrace in the production environment, see Collect IntelliTrace Data Outside Visual Studio with the Standalone Collector at https://msdn.microsoft.com/library/vstudio/hh398365.aspx.

For more information about symbol servers, see Publish Symbol Data at https://msdn.microsoft.com/library/hh190722.aspx.

For more information about profilers, see Analyzing Application Performance by Using Profiling Tools at https://msdn.microsoft.com/library/z9z62c29.aspx.

For information about using the nuget.exe command line tool, and the pack and push commands, see https://docs.nuget.org/docs/reference/command-line-reference.

The hands-on labs that accompany this guidance are available on the Microsoft Download Center at https://go.microsoft.com/fwlink/p/?LinkID=317536.
