This article describes how to implement the Modern Web App pattern. The Modern Web App pattern defines how to modernize cloud web apps and introduce a service-oriented architecture. The pattern provides prescriptive architecture, code, and configuration guidance that aligns with the principles of the Azure Well-Architected Framework. This pattern builds on the Reliable Web App pattern.
Why use the Modern Web App pattern?
The Modern Web App pattern helps you optimize high-demand areas of your web application. It provides detailed guidance for decoupling these areas to enable independent scaling for cost optimization. This approach enables you to allocate dedicated resources to critical components, which enhances overall performance. Decoupling separable services can improve reliability by preventing slowdowns in one part of the app from affecting others. It also enables independent versioning of individual app components.
How to implement the Modern Web App pattern
This article contains guidance for implementing the Modern Web App pattern. Use the following links to go to the specific guidance that you need:
- Architecture guidance. Learn how to modularize web app components and select appropriate platform as a service (PaaS) solutions.
- Code guidance. Implement five design patterns to optimize the decoupled components: Strangler Fig, Queue-Based Load Leveling, Competing Consumers, Health Endpoint Monitoring, and Retry.
- Configuration guidance. Configure authentication, authorization, autoscaling, and containerization for the decoupled components.
Tip
There's a reference implementation (sample app) of the Modern Web App pattern. It represents the end-state of the Modern Web App implementation. It's a production-grade web app that features all the code, architecture, and configuration updates that are discussed in this article. Deploy and use the reference implementation to guide your implementation of the Modern Web App pattern.
Architecture guidance
The Modern Web App pattern builds on the Reliable Web App pattern. It requires a few extra architectural components. You need a message queue, container platform, storage service, and container registry, as shown in the following diagram:
For a higher service-level objective (SLO), you can add a second region to your web app architecture. Configure your load balancer to route traffic to the second region to support either an active-active or an active-passive configuration, depending on your business needs. The two regions require the same services, except one region has a hub virtual network. Use a hub-and-spoke network topology to centralize and share resources, such as a network firewall. Access the container repository through the hub virtual network. If you have virtual machines, add a bastion host to the hub virtual network to manage them with enhanced security. The following diagram shows this architecture:
Decouple the architecture
To implement the Modern Web App pattern, you need to decouple the existing web app architecture. Decoupling the architecture entails breaking down a monolithic application into smaller, independent services, each responsible for a specific feature or function. This process includes evaluating the current web app, modifying the architecture, and, finally, extracting the web app code to a container platform. The goal is to systematically identify and extract application services that benefit most from being decoupled. To decouple your architecture, follow these recommendations:
Identify service boundaries. Apply domain-driven design principles to identify bounded contexts within your monolithic application. Each bounded context represents a logical boundary and is a candidate for decoupling. Services that represent distinct business functions and have fewer dependencies are good candidates.
Evaluate service benefits. Focus on services that benefit most from independent scaling. For example, an external dependency like an email service provider in a line-of-business (LOB) application might require more isolation from failure. Consider services that undergo frequent updates or changes. Decoupling these services enables independent deployment and reduces the risk of affecting other parts of the application.
Assess technical feasibility. Examine the current architecture to identify technical constraints and dependencies that might affect the decoupling process. Plan how to manage and share data across services. Decoupled services should manage their own data and minimize direct database access across service boundaries.
Deploy Azure services. Select and deploy the Azure services that you need to support the web app service that you intend to extract. For guidance, see the Select the right Azure services section of this article.
Decouple the web app service. Define clear interfaces and APIs that the newly extracted web app services can use to interact with other parts of the system. Design a data-management strategy that allows each service to manage its own data but ensures consistency and integrity. For specific implementation strategies and design patterns to use during this extraction process, see the Code guidance section. (For a minimal sketch of a service contract, see the example after these recommendations.)
Use independent storage for decoupled services. To simplify versioning and deployment, ensure that each decoupled service has its own data stores. For example, the reference implementation separates the email service from the web app and eliminates the need for the service to access the database. Instead, the service communicates the email delivery status back to the web app via an Azure Service Bus message, and the web app saves a note to its database.
Implement separate deployment pipelines for each decoupled service. If you implement separate deployment pipelines, each service can be updated according to its own schedule. If different teams or organizations within your company own different services, using separate deployment pipelines gives each team control over its own deployments. Use continuous integration and continuous delivery (CI/CD) tools like Jenkins, GitHub Actions, or Azure Pipelines to set up these pipelines.
Revise security controls. Ensure that your security controls are updated to account for the new architecture, including firewall rules and access controls.
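To illustrate the kind of contract that keeps a decoupled service independent, here's a minimal sketch of a service interface. The `EmailSender` name is illustrative; the `send` signature mirrors the email service that the reference implementation extracts:

```java
// Hypothetical contract for an extracted email service. The main web app
// depends only on this interface, so the implementation can move from an
// in-process call to a queue-backed service without changing callers.
public interface EmailSender {
    void send(String to, String guideUrl, Long requestId);
}
```

In the monolith, a local implementation satisfies the interface. After extraction, an implementation that publishes a message to the queue replaces it.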
Select the right Azure services
For each Azure service in your architecture, consult the relevant Azure service guide in the Well-Architected Framework. For the Modern Web App pattern, you need a messaging system to support asynchronous messaging, an application platform that supports containerization, and a container image repository.
Choose a message queue. A message queue is an important component of service-oriented architectures. It decouples message senders and receivers to enable asynchronous messaging. Use the guidance on choosing an Azure messaging service to pick an Azure messaging system that supports your design needs. Azure has three messaging services: Azure Event Grid, Azure Event Hubs, and Service Bus. Start with Service Bus, and use one of the other two options if Service Bus doesn't meet your needs.
| Service | Use case |
| --- | --- |
| Service Bus | Choose Service Bus for reliable, ordered, and possibly transactional delivery of high-value messages in enterprise applications. |
| Event Grid | Choose Event Grid when you need to handle a large number of discrete events efficiently. Event Grid is scalable for event-driven applications in which many small, independent events (like resource state changes) need to be routed to subscribers in a low-latency publish-subscribe model. |
| Event Hubs | Choose Event Hubs for massive, high-throughput data ingestion, like telemetry, logs, or real-time analytics. Event Hubs is optimized for streaming scenarios in which bulk data needs to be ingested and processed continuously. |

Implement a container service. For the elements of your application that you want to containerize, you need an application platform that supports containers. The Choose an Azure container service guidance can help you select one. Azure has three principal container services: Azure Container Apps, Azure Kubernetes Service (AKS), and Azure App Service. Start with Container Apps, and use one of the other two options if Container Apps doesn't meet your needs.
| Service | Use case |
| --- | --- |
| Container Apps | Choose Container Apps if you need a serverless platform that automatically scales and manages containers in event-driven applications. |
| AKS | Choose AKS if you need detailed control over Kubernetes configurations and advanced features for scaling, networking, and security. |
| Web App for Containers | Choose Web App for Containers in App Service for the simplest PaaS experience. |

Implement a container repository. When you use a container-based compute service, you need a repository to store the container images. You can use a public container registry like Docker Hub or a managed registry like Azure Container Registry. The Introduction to container registries in Azure guidance can help you choose one.
Code guidance
To successfully decouple and extract an independent service, you need to update your web app code with the following design patterns: Strangler Fig, Queue-Based Load Leveling, Competing Consumers, Health Endpoint Monitoring, and Retry. The following diagram shows the roles of these patterns:
Strangler Fig pattern: The Strangler Fig pattern incrementally migrates functionality from a monolithic application to the decoupled service. Implement this pattern in the main web app to gradually migrate functionality to independent services by directing traffic based on endpoints.
Queue-Based Load Leveling pattern: The Queue-Based Load Leveling pattern manages the flow of messages between the producer and the consumer by using a queue as a buffer. Implement this pattern on the producer portion of the decoupled service to manage message flow asynchronously by using a queue.
Competing Consumers pattern: The Competing Consumers pattern enables multiple instances of a decoupled service to independently read from the same message queue and compete to process messages. Implement this pattern in the decoupled service to distribute tasks across multiple instances.
Health Endpoint Monitoring pattern: The Health Endpoint Monitoring pattern exposes endpoints for monitoring the status and health of different components of the web app. (4a) Implement this pattern in the main web app. (4b) Also implement it in the decoupled service to track the health of endpoints.
Retry pattern: The Retry pattern handles transient failures by retrying operations that might fail intermittently. (5a) Implement this pattern in the main web app, on all outbound calls to other Azure services, such as calls to the message queue and private endpoints. (5b) Also implement this pattern in the decoupled service to handle transient failures in calls to the private endpoints.
Each design pattern provides benefits that align with one or more of the pillars of the Well-Architected Framework. The following table provides details.
| Design pattern | Implementation location | Reliability (RE) | Security (SE) | Cost Optimization (CO) | Operational Excellence (OE) | Performance Efficiency (PE) | Supporting Well-Architected Framework principles |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Strangler Fig pattern | Main web app | ✔ | | ✔ | ✔ | | RE:08, CO:07, CO:08, OE:06, OE:11 |
| Queue-Based Load Leveling pattern | Producer of decoupled service | ✔ | | ✔ | | ✔ | RE:06, RE:07, CO:12, PE:05 |
| Competing Consumers pattern | Decoupled service | ✔ | | ✔ | | ✔ | RE:05, RE:07, CO:05, CO:07, PE:05, PE:07 |
| Health Endpoint Monitoring pattern | Main web app and decoupled service | ✔ | | | ✔ | ✔ | RE:07, RE:10, OE:07, PE:05 |
| Retry pattern | Main web app and decoupled service | ✔ | | | | | RE:07 |
Implement the Strangler Fig pattern
Use the Strangler Fig pattern to gradually migrate functionality from the monolithic code base to new independent services. Extract new services from the existing monolithic code base and slowly modernize critical parts of the web app. To implement the Strangler Fig pattern, follow these recommendations:
Set up a routing layer. In the monolithic web app code base, implement a routing layer that directs traffic based on endpoints. Use custom routing logic as needed to handle specific business rules for directing traffic. For example, if you have a `/users` endpoint in your monolithic app and you move that functionality to the decoupled service, the routing layer directs all requests to `/users` to the new service. (A minimal routing sketch appears at the end of this section.)

Manage feature rollout. Implement feature flags and staged rollout to gradually roll out the decoupled services. The existing monolithic app routing should control how many requests the decoupled services receive. Start with a small percentage of requests and increase usage over time as you gain confidence in the service's stability and performance.
For example, the reference implementation extracts the email delivery functionality into a standalone service. The service can be gradually introduced to handle a larger percentage of the requests to send emails that contain Contoso support guides. As the new service proves its reliability and performance, it can eventually take over the entire set of email responsibilities from the monolith, completing the transition.
Use a façade service (if necessary). A façade service is useful when a single request needs to interact with multiple services, or when you want to hide the complexity of the underlying system from the client. However, if the decoupled service doesn't have any public-facing APIs, a façade service might not be necessary.
In the monolithic web app code base, implement a façade service to route requests to the appropriate back end (monolith or microservice). Ensure that the new decoupled service can handle requests independently when it's accessed through the façade.
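As a minimal sketch of the routing-layer and feature-rollout recommendations, the following Spring controller routes a configurable percentage of `/users` requests to the decoupled service. The property names and service URL are assumptions for illustration, not part of the reference implementation:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import java.util.concurrent.ThreadLocalRandom;

// Hypothetical routing layer in the monolith: a rollout percentage controls
// how much /users traffic goes to the newly extracted service.
@RestController
public class UsersRoutingController {

    private final RestTemplate restTemplate = new RestTemplate();

    // Assumed configuration properties; adjust the names to your setup.
    @Value("${users.decoupled-service.url:http://localhost:8081}")
    private String decoupledServiceUrl;

    @Value("${users.decoupled-service.rollout-percentage:10}")
    private int rolloutPercentage;

    @GetMapping("/users")
    public String getUsers() {
        if (ThreadLocalRandom.current().nextInt(100) < rolloutPercentage) {
            // Route the request to the extracted service.
            return restTemplate.getForObject(decoupledServiceUrl + "/users", String.class);
        }
        // Fall back to the legacy in-process implementation.
        return legacyGetUsers();
    }

    private String legacyGetUsers() {
        return "users from the monolith";
    }
}
```

In production, prefer a feature-management library or centralized configuration over a hard-coded percentage so that you can adjust the rollout without redeploying.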
Implement the Queue-Based Load Leveling pattern
Implement the Queue-Based Load Leveling pattern on the producer portion of the decoupled service to asynchronously handle tasks that don't need immediate responses. This pattern enhances overall system responsiveness and scalability by using a queue to manage workload distribution. It enables the decoupled service to process requests at a consistent rate. To implement this pattern effectively, follow these recommendations:
Use nonblocking message queuing. Ensure that the process that sends messages to the queue doesn't block other processes while it waits for the decoupled service to handle messages in the queue. If the process requires the result of the decoupled-service operation, implement an alternative way to handle the situation while waiting for the queued operation to complete. For example, in Spring Boot, you can use the `StreamBridge` class to asynchronously publish messages to the queue without blocking the calling thread:

```java
private final StreamBridge streamBridge;

public SupportGuideQueueSender(StreamBridge streamBridge) {
    this.streamBridge = streamBridge;
}

// Asynchronously publish a message without blocking the calling thread.
@Override
public void send(String to, String guideUrl, Long requestId) {
    EmailRequest emailRequest = EmailRequest.newBuilder()
        .setRequestId(requestId)
        .setEmailAddress(to)
        .setUrlToManual(guideUrl)
        .build();

    log.info("EmailRequest: {}", emailRequest);

    var message = emailRequest.toByteArray();
    streamBridge.send(EMAIL_REQUEST_QUEUE, message);
    log.info("Message sent to the queue");
}
```

This Java example uses `StreamBridge` to send messages asynchronously. This approach ensures that the main application remains responsive and can handle other tasks concurrently while the decoupled service processes the queued requests at a manageable rate.

Implement message retry and removal. Implement a mechanism to retry processing of queued messages that can't be processed successfully. If failures persist, these messages should be removed from the queue. For example, Service Bus has built-in retry and dead-letter queue features.
Configure idempotent message processing. The logic that processes messages from the queue must be idempotent to handle cases in which a message might be processed more than once. In Spring Boot, you can use `@StreamListener` or `@KafkaListener` with a unique message identifier to prevent duplicate processing. Or you can organize the business process to operate in a functional approach with Spring Cloud Stream, where the `consume` method is defined in a way that produces the same result when it runs repeatedly. (See the sketch at the end of this section.) For a list of settings that manage the behavior of message consumption, see Spring Cloud Stream with Service Bus.

Manage changes to the user experience. When you use asynchronous processing, tasks might not be completed immediately. To set expectations and avoid confusion, ensure that users know when their tasks are still being processed. Use visual cues or messages to indicate that a task is in progress. Give users the option to receive notifications when their task is done, such as an email or push notification.
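Here's a minimal sketch of idempotent message processing. The `EmailRequest` record and the in-memory set are simplifications for illustration; the reference implementation uses a protobuf message, and a production system needs a durable deduplication store:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentEmailConsumer {

    // Minimal stand-in for the EmailRequest message from the reference implementation.
    record EmailRequest(long requestId, String emailAddress, String urlToManual) {}

    // In production, back this with a durable store (a database table or cache),
    // not an in-memory set, so deduplication survives restarts and scale-out.
    private final Set<Long> processedRequestIds = ConcurrentHashMap.newKeySet();

    public void consume(EmailRequest request) {
        // add() returns false when the ID was already seen, so a redelivered
        // message is acknowledged without repeating its side effects.
        if (!processedRequestIds.add(request.requestId())) {
            return;
        }
        sendEmail(request);
    }

    private void sendEmail(EmailRequest request) {
        // Actual email delivery logic goes here.
    }
}
```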
Implement the Competing Consumers pattern
Implement the Competing Consumers pattern in the decoupled service to manage incoming tasks from the message queue. This pattern involves distributing tasks across multiple instances of decoupled services. These services process messages from the queue. The pattern enhances load balancing and increases the system's capacity for handling simultaneous requests. The Competing Consumers pattern is effective when:
- The sequence of message processing isn't crucial.
- The queue remains unaffected by malformed messages.
- The processing operation is idempotent, which means it can be applied multiple times without changing the result after the initial application.
To implement the Competing Consumers pattern, follow these recommendations:
Handle concurrent messages. When services receive messages from a queue, ensure that your system scales predictably by configuring the concurrency to match the system design. Load test results can help you determine the appropriate number of concurrent messages to handle. You can start with one concurrent message and measure how the system performs.
Disable prefetching. Disable prefetching of messages so that consumers only fetch messages when they're ready.
Use reliable message processing modes. Use a reliable processing mode, such as Peek-Lock, that automatically retries messages that fail processing. This mode provides more reliability than deletion-first methods. If one worker fails to handle a message, another must be able to process it without errors, even if the message is processed multiple times.

Implement error handling. Route malformed or unprocessable messages to a separate dead-letter queue. This design prevents repetitive processing. For example, you can catch exceptions during message processing and move problematic messages to the separate queue. With Service Bus, messages are moved to the dead-letter queue after a specified number of delivery attempts or upon explicit rejection by the application. (The sketch after these recommendations shows these techniques.)
Handle out-of-order messages. Design consumers to process messages that arrive out of sequence. When you have multiple parallel consumers, they might process messages out of order.
Scale based on queue length. Services that consume messages from a queue should autoscale based on queue length. Scale-based autoscaling enables efficient processing of spikes of incoming messages.
Use a message-reply queue. If your system requires notifications for post-message processing, set up a dedicated reply or response queue. This setup separates operational messaging from notification processes.
Use stateless services. Consider using stateless services to process requests from a queue. Doing so enables easy scaling and efficient resource usage.
Configure logging. Integrate logging and specific exception handling within the message-processing workflow. Focus on capturing serialization errors and directing these problematic messages to a dead-letter mechanism. These logs provide valuable insights for troubleshooting.
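The following sketch shows how the prefetch, Peek-Lock, and dead-letter recommendations can look with the Azure Service Bus SDK for Java. The namespace and queue name are placeholders, and the processing logic is elided:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;
import com.azure.messaging.servicebus.models.ServiceBusReceiveMode;

public class EmailQueueProcessor {

    public static void main(String[] args) {
        ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
                .fullyQualifiedNamespace("contoso.servicebus.windows.net") // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .processor()
                .queueName("email-request-queue") // placeholder
                .receiveMode(ServiceBusReceiveMode.PEEK_LOCK) // lock, don't delete first
                .prefetchCount(0) // fetch messages only when ready to process them
                .disableAutoComplete() // settle each message explicitly
                .processMessage(context -> {
                    try {
                        // Process the message body here.
                        context.complete(); // removes the message only after success
                    } catch (IllegalArgumentException e) {
                        // Malformed message: dead-letter it instead of retrying forever.
                        context.deadLetter();
                    } catch (Exception e) {
                        // Release the lock so another consumer can retry the message.
                        context.abandon();
                    }
                })
                .processError(context -> System.err.println(context.getException()))
                .buildProcessorClient();

        processor.start();
    }
}
```

Note that the reference implementation itself consumes messages through Spring Cloud Stream bindings, as shown in the code later in this section; this sketch uses the lower-level SDK only to make the settlement options explicit.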
The reference implementation uses the Competing Consumers pattern on a stateless service that runs in Container Apps to process email delivery requests from a Service Bus queue.
The processor logs message processing details to help with troubleshooting and monitoring. It captures deserialization errors and provides insights that can be useful during debugging. The service scales at the container level to enable efficient handling of message spikes based on queue length. Here's the code:
```java
@Configuration
public class EmailProcessor {

    private static final Logger log = LoggerFactory.getLogger(EmailProcessor.class);

    @Bean
    Function<byte[], byte[]> consume() {
        return message -> {
            log.info("New message received");
            try {
                EmailRequest emailRequest = EmailRequest.parseFrom(message);
                log.info("EmailRequest: {}", emailRequest);
                EmailResponse emailResponse = EmailResponse.newBuilder()
                        .setEmailAddress(emailRequest.getEmailAddress())
                        .setUrlToManual(emailRequest.getUrlToManual())
                        .setRequestId(emailRequest.getRequestId())
                        .setMessage("Email sent to " + emailRequest.getEmailAddress() + " with URL to manual " + emailRequest.getUrlToManual())
                        .setStatus(Status.SUCCESS)
                        .build();
                return emailResponse.toByteArray();
            } catch (InvalidProtocolBufferException e) {
                throw new RuntimeException("Error parsing email request message", e);
            }
        };
    }
}
```
Implement the Health Endpoint Monitoring pattern
Implement the Health Endpoint Monitoring pattern in the main app code and decoupled service code to track the health of application endpoints. Orchestrators like AKS or Container Apps can poll these endpoints to verify service health and restart unhealthy instances. Spring Boot Actuator provides built-in support for health checks. It can expose health check endpoints for key dependencies like databases, message brokers, and storage systems. To implement the Health Endpoint Monitoring pattern, follow these recommendations:
Implement health checks. Use Spring Boot Actuator to provide health check endpoints. Actuator exposes an `/actuator/health` endpoint that includes built-in health indicators and custom checks for various dependencies. To enable the health endpoint, add the `spring-boot-starter-actuator` dependency in your `pom.xml` or `build.gradle` file:

```xml
<!-- Add Spring Boot Actuator dependency -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```
Configure the health endpoint in `application.properties` as shown in the reference implementation:

```properties
management.endpoints.web.exposure.include=metrics,health,info,retry,retryevents
```
Validate dependencies. Spring Boot Actuator includes health indicators for various dependencies like databases, message brokers (RabbitMQ or Kafka), and storage services. To validate the availability of Azure services like Azure Blob Storage or Service Bus, use technologies like Azure Spring Apps or Micrometer, which provide health indicators for these services. If you need custom checks, you can implement them by creating a custom `HealthIndicator` bean:

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class CustomAzureServiceBusHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        // Implement your health check logic here (for example, ping Service Bus).
        boolean isServiceBusHealthy = checkServiceBusHealth();
        return isServiceBusHealthy ? Health.up().build() : Health.down().build();
    }

    private boolean checkServiceBusHealth() {
        // Implement health check logic (pinging or connecting to the service).
        return true; // Placeholder. Implement the actual logic.
    }
}
```
Configure Azure resources. Configure the Azure resource to use the app's health check URLs to confirm liveness and readiness. For example, you can use Terraform to confirm the liveness and readiness of apps that are deployed to Container Apps. For more information, see Health probes in Container Apps.
Implement the Retry pattern
The Retry pattern enables applications to recover from transient faults. This pattern is central to the Reliable Web App pattern, so your web app should already be using the Retry pattern. Apply the Retry pattern to requests to the messaging systems and requests that are issued by the decoupled services that you extract from the web app. To implement the Retry pattern, follow these recommendations:
Configure retry options. Configure appropriate retry settings on the client that's responsible for interactions with the message queue. Specify parameters like the maximum number of retries, the delay between retries, and the maximum delay.
Use exponential backoff. Implement the exponential backoff strategy for retry attempts. This strategy involves increasing the time between each retry exponentially, which helps reduce the load on the system during periods of high failure rates.
Use SDK retry functionality. For services that have specialized SDKs, like Service Bus or Blob Storage, use the built-in retry mechanisms. These built-in mechanisms are optimized for the service's typical use cases, can handle retries more effectively, and require less configuration.
Use standard resilience libraries for HTTP clients. For HTTP clients, you can use Resilience4j together with Spring's RestTemplate or WebClient to handle retries in HTTP communications. You can wrap RestTemplate with Resilience4j's retry logic to handle transient HTTP errors effectively. (See the sketch after these recommendations.)
Handle message locking. For message-based systems, implement message handling strategies that support retries without data loss. For example, use peek-lock modes when they're available. Ensure that failed messages are retried effectively and moved to a dead-letter queue after repeated failures.
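Here's a minimal sketch of the exponential backoff and Resilience4j recommendations. The class name, endpoint URL, retry counts, and delays are illustrative assumptions:

```java
import io.github.resilience4j.core.IntervalFunction;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;
import io.github.resilience4j.retry.RetryRegistry;
import org.springframework.web.client.RestTemplate;

import java.util.function.Supplier;

public class GuideServiceClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Three attempts with exponential backoff: 500 ms, then 1 s, then 2 s.
    private final Retry retry = RetryRegistry.of(
            RetryConfig.custom()
                    .maxAttempts(3)
                    .intervalFunction(IntervalFunction.ofExponentialBackoff(500, 2.0))
                    .build())
            .retry("guide-service");

    public String getGuide(String url) {
        // Wrap the HTTP call so that transient failures are retried with backoff.
        Supplier<String> call = Retry.decorateSupplier(retry,
                () -> restTemplate.getForObject(url, String.class));
        return call.get();
    }
}
```

For calls to services that have specialized SDKs, like Service Bus or Blob Storage, rely on the SDK's built-in retry policies instead of wrapping the client this way.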
Configuration guidance
The following sections provide guidance for implementing the configuration updates. Each section aligns with one or more of the pillars of the Well-Architected Framework.
| Configuration | Reliability (RE) | Security (SE) | Cost Optimization (CO) | Operational Excellence (OE) | Performance Efficiency (PE) | Supporting Well-Architected Framework principles |
| --- | --- | --- | --- | --- | --- | --- |
| Configure authentication and authorization | | ✔ | | ✔ | | SE:05, OE:10 |
| Configure independent autoscaling | ✔ | | ✔ | | ✔ | RE:06, CO:12, PE:05 |
| Containerize service deployment | | | ✔ | | ✔ | CO:13, PE:09, PE:03 |
Configure authentication and authorization
To configure authentication and authorization on any new Azure services (workload identities) that you add to the web app, follow these recommendations:
Use managed identities for each new service. Each independent service should have its own identity and use managed identities for service-to-service authentication. Managed identities eliminate the need to manage credentials in your code and reduce the risk of credential leakage. They help you avoid including sensitive information like connection strings in your code or configuration files. (A minimal sketch follows these recommendations.)
Grant least privilege to each new service. Assign only necessary permissions to each new service identity. For example, if an identity only needs to push to a container registry, don't give it pull permissions. Review these permissions regularly and adjust them as necessary. Use different identities for different roles, such as deployment and the application. Doing so limits the potential damage if one identity is compromised.
Use infrastructure as code (IaC). Use Bicep or a similar IaC tool like Terraform to define and manage your cloud resources. IaC ensures consistent application of security configurations in your deployments and enables you to version control your infrastructure setup.
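As a minimal sketch of the managed-identity recommendation, the following code builds a Service Bus sender with `DefaultAzureCredential`, which resolves to the service's managed identity at runtime. The namespace is a placeholder:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class QueueClientFactory {

    // No connection string or key appears anywhere: DefaultAzureCredential
    // resolves to the managed identity when the app runs in Azure.
    public ServiceBusSenderClient createSender(String queueName) {
        return new ServiceBusClientBuilder()
                .fullyQualifiedNamespace("contoso.servicebus.windows.net") // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .sender()
                .queueName(queueName)
                .buildClient();
    }
}
```

The managed identity also needs an appropriate role assignment on the namespace, such as Azure Service Bus Data Sender, following the least-privilege guidance above.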
To configure authentication and authorization on users (user identities), follow these recommendations:
Grant least privilege to users. As with services, ensure that users have only the permissions they need to perform their tasks. Regularly review and adjust these permissions.
Conduct regular security audits. Regularly review and audit your security setup. Look for misconfigurations and unnecessary permissions and rectify or remove them immediately.
The reference implementation uses IaC to assign managed identities to added services and specific roles to each identity. It defines roles and access permissions for deployment, including roles for Container Registry pushes and pulls. Here's the code:
resource "azurerm_role_assignment" "container_app_acr_pull" {
principal_id = var.aca_identity_principal_id
role_definition_name = "AcrPull"
scope = azurerm_container_registry.acr.id
}
resource "azurerm_user_assigned_identity" "container_registry_user_assigned_identity" {
name = "ContainerRegistryUserAssignedIdentity"
resource_group_name = var.resource_group
location = var.location
}
resource "azurerm_role_assignment" "container_registry_user_assigned_identity_acr_pull" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_user_assigned_identity.container_registry_user_assigned_identity.principal_id
}
# For demo purposes, allow the current user to access the container registry.
# Note: When running as a service principal, this is also needed.
resource "azurerm_role_assignment" "acr_contributor_user_role_assignement" {
scope = azurerm_container_registry.acr.id
role_definition_name = "Contributor"
principal_id = data.azuread_client_config.current.object_id
}
Configure independent autoscaling
The Modern Web App pattern starts to break up the monolithic architecture and introduces service decoupling. When you decouple a web app architecture, you can scale decoupled services independently. Scaling the Azure services to support an independent web app service, rather than an entire web app, optimizes scaling costs while meeting demands. To autoscale containers, follow these recommendations:
Use stateless services. Ensure that your services are stateless. If your web app contains in-process session state, externalize it to a distributed cache like Redis or a database like SQL Server. (See the sketch at the end of this section.)
Configure autoscaling rules. Use the autoscaling configurations that provide the most cost-effective control over your services. For containerized services, event-based scaling, like Kubernetes Event-Driven Autoscaler (KEDA), often provides granular control that allows you to scale based on event metrics. Container Apps and AKS support KEDA. For services that don't support KEDA, such as App Service, use the autoscaling features provided by the platform itself. These features often include scaling based on metrics-based rules or HTTP traffic.
Configure minimum replicas. To prevent cold starts, configure autoscaling settings to maintain a minimum of one replica. A cold start is the initialization of a service from a stopped state. A cold start often delays the response. If minimizing costs is a priority and you can tolerate cold start delays, set the minimum replica count to 0 when you configure autoscaling.
Configure a cooldown period. Apply an appropriate cooldown period to introduce a delay between scaling events. The goal is to prevent excessive scaling activities triggered by temporary load spikes.
Configure queue-based scaling. If your application uses a message queue like Service Bus, configure your autoscaling settings to scale based on the length of the request message queue. The scaler attempts to maintain one replica of the service for every N messages in the queue (rounded up).
For example, the reference implementation uses the Service Bus KEDA scaler to automatically scale the container app based on the length of the Service Bus queue. The scaling rule, named `service-bus-queue-length-rule`, adjusts the number of service replicas based on the message count in the specified Service Bus queue. The `messageCount` parameter is set to 10, which configures the scaler to add one replica for every 10 messages in the queue. The maximum replica count (`max_replicas`) is set to 10, and the minimum replica count (`min_replicas`) is set to 1 to avoid cold starts. Set the minimum to 0 if you want the service to scale down to zero when there are no messages in the queue. The connection string for the Service Bus queue is stored as a secret in Azure, named `azure-servicebus-connection-string`, which is used to authenticate the scaler to the Service Bus. Here's the Terraform code:
```terraform
max_replicas = 10
min_replicas = 1

custom_scale_rule {
  name             = "service-bus-queue-length-rule"
  custom_rule_type = "azure-servicebus"
  metadata = {
    messageCount = 10
    namespace    = var.servicebus_namespace
    queueName    = var.email_request_queue_name
  }
  authentication {
    secret_name       = "azure-servicebus-connection-string"
    trigger_parameter = "connection"
  }
}
```
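As a minimal sketch of the stateless-services recommendation earlier in this section, the following Spring configuration externalizes HTTP session state to Redis so that any replica can serve any request. It assumes the `spring-session-data-redis` dependency and Redis connection settings are already in place:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Hypothetical configuration: stores HTTP session state in Redis, which makes
// the service safe to scale in and out without losing user sessions.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {
    // Requires the spring-session-data-redis dependency and Redis connection
    // settings (for example, spring.data.redis.host and spring.data.redis.port).
}
```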
Containerize service deployment
Containerization is the encapsulation of all dependencies needed by the app in a lightweight image that can be reliably deployed to a wide range of hosts. To containerize deployment, follow these recommendations:
Identify domain boundaries. Start by identifying the domain boundaries in your monolithic application. Doing so helps you determine which parts of the application you can extract into separate services.
Create Docker images. When you create Docker images for your Java services, use official OpenJDK base images. These images contain only the minimal set of packages that Java needs to run. Using these images minimizes both the package size and the attack surface area.
Use multi-stage Dockerfiles. Use a multi-stage Dockerfile to separate build-time assets from the runtime container image. Using this type of file helps to keep your production images small and secure. You can also use a preconfigured build server and copy the JAR file into the container image.
Run as a nonroot user. Run your Java containers as a nonroot user (via user name or UID `$APP_UID`) to align with the principle of least privilege. Doing so limits the potential effects of a compromised container.
Listen on port 8080. When you run containers as a nonroot user, configure your application to listen on port 8080. This is a common convention for nonroot users.
Encapsulate dependencies. Ensure that all dependencies that the app needs are encapsulated in the Docker container image. Encapsulation allows the app to be reliably deployed to a wide range of hosts.
Choose the right base images. The base image you choose depends on your deployment environment. If you deploy to Container Apps, for instance, you need to use Linux Docker images.
The reference implementation demonstrates a Docker build process for containerizing a Java application. The Dockerfile uses a single-stage build with the OpenJDK base image (`mcr.microsoft.com/openjdk/jdk:17-ubuntu`), which provides the necessary Java runtime environment.
The Dockerfile includes the following steps:
- Declaring the volume. A temporary volume (`/tmp`) is defined. This volume provides temporary file storage that's separate from the container's main file system.
- Copying artifacts. The application's JAR file (`email-processor.jar`) is copied into the container, together with the Application Insights agent (`applicationinsights-agent.jar`) that's used for monitoring.
- Setting the entrypoint. The container is configured to run the application with the Application Insights agent enabled. The code uses `java -javaagent` to monitor the application during runtime.
The Dockerfile keeps the image small by only including runtime dependencies. It's suitable for deployment environments like Container Apps that support Linux-based containers.
```dockerfile
# Use the OpenJDK 17 base image on Ubuntu as the foundation.
FROM mcr.microsoft.com/openjdk/jdk:17-ubuntu

# Define a volume to allow temporary files to be stored separately from the container's main file system.
VOLUME /tmp

# Copy the packaged JAR file into the container.
COPY target/email-processor.jar app.jar

# Copy the Application Insights agent for monitoring.
COPY target/agent/applicationinsights-agent.jar applicationinsights-agent.jar

# Set the entrypoint to run the application with the Application Insights agent.
ENTRYPOINT ["java", "-javaagent:applicationinsights-agent.jar", "-jar", "/app.jar"]
```
Deploy the reference implementation
Deploy the reference implementation of the Modern Web App Pattern for Java. There are instructions for both development and production deployment in the repository. After you deploy the implementation, you can simulate and observe design patterns.
The following diagram shows the architecture of the reference implementation:
Download a Visio file of this architecture.