Application development and deployment
Examine patterns and practices of application development, configure Azure Pipelines, and implement site reliability engineering (SRE) best practices. SRE is a software engineering approach to application development and deployment, change management, monitoring, and emergency response.
Plan, train, and proof
Use the following checklist and resources to plan your application development and deployment. You should be able to answer these questions:
- Have you prepared your application development environment and setup workflow?
- How will you structure the project folder to support Kubernetes application development?
- Have you identified the state, configuration, and storage requirements of your application?
SRE checklist
Prepare your development environment. Configure your environment with the tools you need to create containers and set up your development workflow.
Containerize your application. Familiarize yourself with the end-to-end Kubernetes development experience, including application scaffolding, inner-loop workflows, application management frameworks, CI/CD pipelines, log aggregation, monitoring, and application metrics.
Review common Kubernetes scenarios. Kubernetes is often thought of as a platform for delivering microservices, but it's becoming a broader platform. For more information about common Kubernetes scenarios, such as batch analytics and workflow, see Overview of common Kubernetes scenarios (video).
Prepare your application for Kubernetes. Prepare your application file system layout for Kubernetes and organize for weekly or daily releases. Learn how the Kubernetes deployment process enables reliable, zero-downtime upgrades.
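To make the zero-downtime behavior concrete, the following sketch uses the Kubernetes Python client to define a deployment with a rolling-update strategy that keeps the full replica count serving traffic while new pods roll in. The application name, image, and surge settings are assumptions for illustration only.

```python
# Sketch only: a Deployment whose rolling-update strategy supports
# zero-downtime upgrades. Names, image, and replica counts are placeholders.
from kubernetes import client

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_surge="25%",     # start extra pods before old ones are removed
                max_unavailable=0,   # never drop below the desired replica count
            ),
        ),
        selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="myapp",
                        image="myregistry.azurecr.io/myapp:1.2.0",
                    )
                ]
            ),
        ),
    ),
)

# Apply with: client.AppsV1Api().create_namespaced_deployment("default", deployment)
```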
Manage application storage. Understand the performance needs and access methods for pods so that you can provide the appropriate storage options. Plan for ways to back up and test the restore process for attached storage.
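As a rough sketch of how those storage requirements translate into Kubernetes objects, the following example uses the Kubernetes Python client to request persistent storage through a PersistentVolumeClaim. The claim name, storage class, and size are placeholders; choose them to match your cluster's storage options and your application's performance needs.

```python
# Sketch only: request durable storage for a pod via a PersistentVolumeClaim.
# The storage class and size below are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="myapp-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],            # single-node read/write access
        storage_class_name="managed-csi-premium",  # assumed AKS storage class name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```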
Manage application secrets. Use a key vault to store and retrieve keys and credentials. Don't store credentials in your application code.
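As a minimal sketch with the Azure SDK for Python, the following code reads a credential from Azure Key Vault at runtime instead of embedding it in code or configuration files. The vault URL and secret name are placeholders, and `DefaultAzureCredential` is assumed to resolve to a managed or workload identity on the cluster (or developer credentials locally).

```python
# Sketch only: fetch a secret from Azure Key Vault at runtime.
# Requires: pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-key-vault-name>.vault.azure.net"  # placeholder vault URL

credential = DefaultAzureCredential()  # managed identity, workload identity, or developer login
secret_client = SecretClient(vault_url=VAULT_URL, credential=credential)

db_password = secret_client.get_secret("database-password").value  # placeholder secret name
# Use the value in memory only; never log it or write it back to disk or source control.
```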
Deploy to production and apply best practices
As you prepare the application for production, use the following checklist. You should be able to answer these questions:
- Can you monitor all aspects of your application?
- Have you defined resource requirements for your application? How about scaling requirements?
- Can you deploy new versions of the application without affecting production systems?
SRE best practices checklist
Configure readiness and liveness health checks. Kubernetes uses readiness and liveness checks to know when your application is ready to receive traffic and when it needs to be restarted. When you don't define checks, Kubernetes can't determine if your application is running. For more information, see Liveness and readiness checks.
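The following sketch, built with the Kubernetes Python client, shows one way to attach both probes to a container spec. The `/ready` and `/healthz` paths, port 8080, and timing values are assumptions; point the probes at whatever health endpoints your application exposes.

```python
# Sketch only: a container spec with readiness and liveness probes.
# Paths, port, image, and timings are illustrative placeholders.
from kubernetes import client

container = client.V1Container(
    name="myapp",
    image="myregistry.azurecr.io/myapp:1.2.0",
    ports=[client.V1ContainerPort(container_port=8080)],
    readiness_probe=client.V1Probe(            # gate traffic until the app is ready
        http_get=client.V1HTTPGetAction(path="/ready", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
    liveness_probe=client.V1Probe(             # restart the container if it stops responding
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=15,
        period_seconds=20,
        failure_threshold=3,
    ),
)
```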
Configure logging, application monitoring, and alerting. Monitoring your containers is critical, especially when you run a production cluster, at scale, with multiple applications. The recommended logging method for containerized applications is to write to the standard output (stdout) and standard error (stderr) streams.
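As a minimal sketch using Python's standard logging module, the following example sends informational messages to stdout and errors to stderr so that the container runtime and your log aggregation tooling can collect them. The logger name and messages are illustrative.

```python
# Sketch only: route application logs to stdout/stderr for container log collection.
import logging
import sys

stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(lambda record: record.levelno < logging.ERROR)  # INFO/WARNING -> stdout

stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.ERROR)                                   # ERROR and above -> stderr

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    handlers=[stdout_handler, stderr_handler],
)

log = logging.getLogger("myapp")
log.info("order service started")             # collected from stdout
log.error("failed to reach payment backend")  # collected from stderr
```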
Define resource requirements for the application. A primary way to manage the compute resources within a Kubernetes cluster is to use pod requests and limits. These requests and limits tell the Kubernetes scheduler what compute resources to assign to a pod. For more information, see Define pod resource requests and limits.
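As a sketch with the Kubernetes Python client, the following container spec declares both requests and limits. The CPU and memory values are placeholders; size them from your application's observed usage rather than copying them as-is.

```python
# Sketch only: pod resource requests (scheduling guarantees) and limits (runtime ceilings).
from kubernetes import client

container = client.V1Container(
    name="myapp",
    image="myregistry.azurecr.io/myapp:1.2.0",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # minimum reserved for the pod
        limits={"cpu": "500m", "memory": "512Mi"},    # hard ceiling enforced by the kubelet
    ),
)
```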
Configure application scaling requirements. Kubernetes supports horizontal pod autoscaling to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. To use the autoscaler, all containers in your pods must have CPU requests and limits defined. To learn more, see Configure horizontal pod autoscaling.
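The following sketch uses the Kubernetes Python client and the autoscaling/v1 API to scale a hypothetical `myapp` deployment between 2 and 10 replicas on a 70 percent CPU target; all of these names and numbers are assumptions to adapt to your workload.

```python
# Sketch only: a CPU-based HorizontalPodAutoscaler for an existing deployment.
# The deployment's containers must define CPU requests for the target to work.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="myapp-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="myapp"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```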
Deploy applications by using an automated pipeline and DevOps. Full automation of every step from code commit to production deployment lets teams focus on building code, and it removes the overhead and potential for human error of manual steps. Deploying new code is quicker and less risky, which helps teams become more agile, more productive, and more confident about their running code.
Optimize and scale
Now that the application is in production, use the application deployment checklist to optimize your workflow and prepare your application and team to scale. You should be able to answer these questions:
- Are cross-cutting application concerns abstracted from your application?
- Are you able to maintain system and application reliability, while still iterating on new features and versions?
Application deployment checklist
Deploy an API gateway. An API gateway serves as an entry point to microservices, decouples clients from your microservices, adds another layer of security, and decreases the complexity of your microservices by removing the burden of handling cross-cutting concerns. For more information, see Use Azure API Management with microservices deployed in Azure Kubernetes Service.
Deploy a service mesh. A service mesh provides capabilities to your workloads, like traffic management, resiliency, policy, security, strong identity, and observability. Your application is decoupled from these operational capabilities, and the service mesh moves them out of the application layer and down to the infrastructure layer.
Implement SRE practices. SRE is a proven approach to maintaining crucial system and application reliability while iterating at the speed the marketplace demands.