

How Visual Studio builds containerized apps

Applies to: Visual Studio (not Visual Studio for Mac)

Note

This article applies to Visual Studio 2017. If you're looking for the latest Visual Studio documentation, see Visual Studio documentation. We recommend upgrading to the latest version of Visual Studio; download it here.

Whether you're building from the Visual Studio IDE, or setting up a command-line build, you need to know how Visual Studio uses the Dockerfile to build your projects. For performance reasons, Visual Studio follows a special process for containerized apps. Understanding how Visual Studio builds your projects is especially important when you customize your build process by modifying the Dockerfile.

When Visual Studio builds a project that doesn't use Docker containers, it invokes MSBuild on the local machine and generates the output files in a folder (typically bin) under your local solution folder. For a containerized project, however, the build process takes account of the Dockerfile's instructions for building the containerized app. The Dockerfile that Visual Studio uses is divided into multiple stages. This process relies on Docker's multistage build feature.

Multistage build

The multistage build feature helps make the process of building containers more efficient, and makes containers smaller by allowing them to contain only the bits that your app needs at run time. Multistage build is used for .NET Core projects, not .NET Framework projects.

The multistage build allows container images to be created in stages that produce intermediate images. As an example, consider a typical Dockerfile generated by Visual Studio. The first stage is base:

FROM mcr.microsoft.com/dotnet/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

These lines begin with the ASP.NET runtime image from Microsoft Container Registry (mcr.microsoft.com) and create an intermediate image called base that sets the working directory to /app and exposes ports 80 and 443.

The next stage is build, which appears as follows:

FROM mcr.microsoft.com/dotnet/sdk:3.1-buster-slim AS build
WORKDIR /src
COPY ["WebApplication43/WebApplication43.csproj", "WebApplication43/"]
RUN dotnet restore "WebApplication43/WebApplication43.csproj"
COPY . .
WORKDIR "/src/WebApplication43"
RUN dotnet build "WebApplication43.csproj" -c Release -o /app/build

You can see that the build stage starts from a different image in the registry (sdk rather than aspnet), rather than continuing from base. The sdk image has all the build tools, so it's a lot bigger than the aspnet image, which contains only runtime components. The reason for using a separate image becomes clear when you look at the rest of the Dockerfile:

FROM build AS publish
RUN dotnet publish "WebApplication43.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication43.dll"]

The final stage starts again from base, and includes the COPY --from=publish to copy the published output to the final image. This process makes it possible for the final image to be a lot smaller, since it doesn't need to include all of the build tools that were in the sdk image.
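
For illustration, you can build this Dockerfile yourself and compare image sizes (a sketch, assuming the Dockerfile above lives in a WebApplication43 project folder directly under the solution folder; the tag name is arbitrary):

# Run from the solution folder; "." is the build context.
docker build -t webapplication43 -f WebApplication43/Dockerfile .

# The final image is based on the small aspnet runtime image, so it is much
# smaller than the sdk image that the build and publish stages used.
docker image ls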

Building from the command line

If you want to build outside of Visual Studio, you can use docker build or MSBuild to build from the command line.

docker build

To build a containerized solution from the command line, you can usually use the command docker build <context> for each project in the solution, where you supply the build context argument. The build context for a Dockerfile is the folder on the local machine that's used as the working folder to generate the image; for example, it's the folder you copy files from when you copy files into the container. For .NET Core projects, use the folder that contains the solution file (.sln). Expressed as a relative path, this argument is typically ".." when the Dockerfile is in a project folder and the solution file is in its parent folder. For .NET Framework projects, the build context is the project folder, not the solution folder.

docker build -f Dockerfile ..

MSBuild

Dockerfiles created by Visual Studio for .NET Framework projects (and for .NET Core projects created with versions of Visual Studio prior to Visual Studio 2017 Update 4) are not multistage Dockerfiles. The steps in these Dockerfiles do not compile your code. Instead, when Visual Studio builds a .NET Framework Dockerfile, it first compiles your project using MSBuild. When that succeeds, Visual Studio then builds the Dockerfile, which simply copies the build output from MSBuild into the resulting Docker image. Because the steps to compile your code aren't included in the Dockerfile, you can't build .NET Framework Dockerfiles using docker build from the command line. You should use MSBuild to build these projects.

To build an image for a single Docker container project, you can use MSBuild with the /t:ContainerBuild command option. For example:

MSBuild MyProject.csproj /t:ContainerBuild /p:Configuration=Release

You'll see output similar to what you see in the Output window when you build your solution from the Visual Studio IDE. Always use /p:Configuration=Release, because in cases where Visual Studio uses the multistage build optimization, the results of building the Debug configuration might not be what you expect. See the Debugging section.

If you are using a Docker Compose project, use this command to build images:

msbuild /p:SolutionPath=<solution-name>.sln /p:Configuration=Release docker-compose.dcproj
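
For example, assuming a hypothetical solution file named MySolution.sln that sits next to the docker-compose.dcproj:

msbuild /p:SolutionPath=MySolution.sln /p:Configuration=Release docker-compose.dcproj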

Project warmup

Project warmup refers to a series of steps that happen when the Docker profile is selected for a project (that is, when a project is loaded or Docker support is added) in order to improve the performance of subsequent runs (F5 or Ctrl+F5). This is configurable under Tools > Options > Container Tools. Here are the tasks that run in the background:

  • Check that Docker Desktop is installed and running.
  • Ensure that Docker Desktop is set to the same operating system as the project.
  • Pull the images in the first stage of the Dockerfile (the base stage in most Dockerfiles), as illustrated after this list.
  • Build the Dockerfile and start the container.
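
For illustration, the image-pull step is roughly equivalent to running the following command yourself (a sketch, assuming the Dockerfile shown earlier):

# Pull the image used by the base stage so the first debug session doesn't wait on the download.
docker pull mcr.microsoft.com/dotnet/aspnet:3.1-buster-slim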

Warmup happens only in Fast mode, so the running container has the app folder volume-mounted, which means that changes to the app don't invalidate the container. This improves debugging performance significantly and decreases the wait time for long-running tasks such as pulling large images.

Volume mapping

For debugging to work in containers, Visual Studio uses volume mapping to map the debugger and NuGet folders from the host machine. Volume mapping is described in the Docker documentation. You can view the volume mappings for a container by using the Containers window in Visual Studio.

Here are the volumes that are mounted in your container:

  • Remote debugger: Contains the bits required to run the debugger in the container, depending on the project type. This is explained in more detail in the Debugging section.
  • App folder: Contains the project folder where the Dockerfile is located.
  • Source folder: Contains the build context that is passed to Docker commands.
  • NuGet packages folders: Contains the NuGet packages and fallback folders that are read from the obj\{project}.csproj.nuget.g.props file in the project.

For ASP.NET Core web apps, there might be two additional folders for the SSL certificate and the user secrets, which are explained in more detail in the next section.
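
If you prefer the command line to the Containers window, you can also inspect the mounts directly (a sketch; replace the placeholder with your container name or ID):

# Show the volume mappings (mounts) of a running container as JSON.
docker inspect --format "{{ json .Mounts }}" <container-name-or-id>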

SSL-enabled ASP.NET Core apps

Container tools in Visual Studio support debugging an SSL-enabled ASP.NET Core app with a dev certificate, the same way you'd expect it to work without containers. To make that happen, Visual Studio adds a couple more steps to export the certificate and make it available to the container. Here is the flow that Visual Studio handles for you when debugging in the container:

  1. Ensures the local development certificate is present and trusted on the host machine through the dev-certs tool.

  2. Exports the certificate to %APPDATA%\ASP.NET\Https with a secure password that is stored in the user secrets store for this particular app. (A sketch of roughly equivalent commands appears after this list.)

  3. Volume-mounts the following directories:

    • %APPDATA%\Microsoft\UserSecrets
    • %APPDATA%\ASP.NET\Https
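
For illustration, steps 1 and 2 correspond roughly to the following dotnet dev-certs commands (a sketch; Visual Studio performs these steps for you, and the certificate file name and password here are placeholders):

# Create and trust the local development certificate if it isn't trusted already.
dotnet dev-certs https --trust

# Export the certificate, protected by a password, to the folder that gets volume-mounted into the container.
dotnet dev-certs https -ep %APPDATA%\ASP.NET\Https\WebApplication43.pfx -p <certificate-password>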

ASP.NET Core looks for a certificate that matches the assembly name under the Https folder, which is why it is mapped to the container at that path. The certificate path and password can alternatively be defined using environment variables (that is, ASPNETCORE_Kestrel__Certificates__Default__Path and ASPNETCORE_Kestrel__Certificates__Default__Password) or in the user secrets JSON file, for example:

{
  "Kestrel": {
    "Certificates": {
      "Default": {
        "Path": "c:\\app\\mycert.pfx",
        "Password": "strongpassword"
      }
    }
  }
}

If your configuration supports both containerized and non-containerized builds, you should use the environment variables, because the paths are specific to the container environment.
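
For example, outside of Visual Studio you might pass the same settings to a Linux container with docker run (a sketch; the image name, host port, container mount path, and password are assumptions that may differ in your setup):

# Mount the exported certificate folder and point Kestrel at it through environment variables.
docker run -d -p 8443:443 ^
  -e ASPNETCORE_URLS="https://+:443;http://+:80" ^
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/root/.aspnet/https/WebApplication43.pfx ^
  -e ASPNETCORE_Kestrel__Certificates__Default__Password=<certificate-password> ^
  -v "%APPDATA%\ASP.NET\Https:/root/.aspnet/https:ro" ^
  webapplication43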

For more information about using SSL with ASP.NET Core apps in containers, see Hosting ASP.NET Core images with Docker over HTTPS.

Debugging

When building in the Debug configuration, Visual Studio makes several optimizations that improve the performance of the build process for containerized projects. The build process for containerized apps is not as straightforward as simply following the steps outlined in the Dockerfile, because building in a container is much slower than building on the local machine. So, when you build in the Debug configuration, Visual Studio actually builds your projects on the local machine, and then shares the output folder with the container using volume mounting. A build with this optimization enabled is called a Fast mode build.

In Fast mode, Visual Studio calls docker build with an argument that tells Docker to build only the base stage. Visual Studio handles the rest of the process without regard to the contents of the Dockerfile. So, when you modify your Dockerfile, such as to customize the container environment or install additional dependencies, you should put your modifications in the first stage. Any custom steps placed in the Dockerfile's build, publish, or final stages will not be executed.
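
Conceptually, this is similar to building only the first stage yourself with docker build's --target flag (a sketch; the exact arguments Visual Studio passes may differ):

# Build only the base stage; the build, publish, and final stages are skipped.
docker build --target base -t webapplication43:dev -f WebApplication43/Dockerfile .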

This performance optimization only occurs when you build in the Debug configuration. In the Release configuration, the build occurs in the container as specified in the Dockerfile.

If you want to disable the performance optimization and build as the Dockerfile specifies, then set the ContainerDevelopmentMode property to Regular in the project file as follows:

<PropertyGroup>
   <ContainerDevelopmentMode>Regular</ContainerDevelopmentMode>
</PropertyGroup>

To restore the performance optimization, remove the property from the project file.

When you start debugging (F5), a previously started container is reused, if possible. If you don't want to reuse the previous container, you can use the Rebuild or Clean commands in Visual Studio to force it to use a fresh container.
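
From the command line, removing the container has a similar effect, because Visual Studio has to create a fresh one on the next debug session (a sketch; replace the placeholder with your container name or ID):

# List all containers, then force-remove the one for your project.
docker ps -a
docker rm -f <container-name-or-id>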

The process of running the debugger depends on the type of project and container operating system:

  • .NET Core apps (Linux containers): Visual Studio downloads vsdbg and maps it into the container, then calls it with your program and arguments (that is, dotnet webapp.dll), and then the debugger attaches to the process.
  • .NET Core apps (Windows containers): Visual Studio uses onecoremsvsmon and maps it into the container, runs it as the entry point, and then connects to it and attaches to your program. This is similar to how you would normally set up remote debugging on another computer or virtual machine.
  • .NET Framework apps: Visual Studio uses msvsmon and maps it into the container, runs it as part of the entry point where Visual Studio can connect to it, and attaches to your program.

For information on vsdbg.exe, see Offroad debugging of .NET Core on Linux and OSX from Visual Studio.

Container entry point

Visual Studio uses a custom container entry point depending on the project type and the container operating system. Here are the different combinations:

  • Linux containers: The entry point is tail -f /dev/null, an infinite wait that keeps the container running. When the app is launched through the debugger, the debugger is responsible for running the app (that is, dotnet webapp.dll). If the app is launched without debugging, the tooling runs docker exec -i {containerId} dotnet webapp.dll to run the app.
  • Windows containers: The entry point is something like C:\remote_debugger\x64\msvsmon.exe /noauth /anyuser /silent /nostatus, which runs the debugger so that it listens for connections. As with Linux containers, the debugger runs the app when debugging, and a docker exec command is used when the app is launched without debugging. For .NET Framework web apps, the entry point is slightly different: ServiceMonitor is added to the command.

The container entry point can only be modified in docker-compose projects, not in single-container projects.

Next steps

Learn how to further customize your builds by setting additional MSBuild properties in your project files. See MSBuild properties for container projects.

See also

  • MSBuild
  • Dockerfile on Windows
  • Linux containers on Windows