Migrate WebSphere applications to WildFly on Azure Kubernetes Service
This guide describes what you should be aware of when you want to migrate an existing WebSphere application to run on WildFly in an Azure Kubernetes Service container.
Pre-migration
To ensure a successful migration, before you start, complete the assessment and inventory steps described in the following sections.
Inventory server capacity
Document the hardware (memory, CPU, disk) of the current production server(s) and the average and peak request counts and resource utilization. You'll need this information regardless of the migration path you choose. It's useful, for example, to help guide selection of the size of the VMs in your node pool, the amount of memory to be used by the container, and how many CPU shares the container needs.
It's possible to resize node pools in AKS. To learn how, see Resize node pools in Azure Kubernetes Service (AKS).
Inventory all secrets
Check all properties and configuration files on the production server(s) for any secrets and passwords. Be sure to check ibm-web-bnd.xml in your WARs. Configuration files that contain passwords or credentials may also be found inside your application.
Inventory all certificates
Document all the certificates used for public SSL endpoints. You can view all certificates on the production server(s) by running the following command:
keytool -list -v -keystore <path to keystore>
Validate that the supported Java version works correctly
Using WildFly on Azure Kubernetes Service requires a specific version of Java, so you'll need to confirm that your application runs correctly using that supported version.
Note
This validation is especially important if your current server is running on an unsupported JDK (such as Oracle JDK or IBM OpenJ9).
To obtain your current Java version, sign in to your production server and run the following command:
java -version
See Requirements for guidance on what version to use to run WildFly.
Inventory JNDI resources
Inventory all JNDI resources. Some, such as JMS message brokers, may require migration or reconfiguration.
Inside your application
Inspect the file WEB-INF/ibm-web-bnd.xml and/or WEB-INF/web.xml.
Document datasources
If your application uses any databases, you need to capture the following information:
- What is the datasource name?
- What is the connection pool configuration?
- Where can I find the JDBC driver JAR file?
For more information, see Configuring database connectivity in the WebSphere documentation.
Determine whether and how the file system is used
Any usage of the file system on the application server will require reconfiguration or, in rare cases, architectural changes. The file system may be used by WebSphere modules or by your application code. You may identify some or all of the scenarios described in the following sections.
Read-only static content
If your application currently serves static content, you'll need an alternate location for it. You may wish to consider moving static content to Azure Blob Storage and adding Azure CDN for lightning-fast downloads globally. For more information, see Static website hosting in Azure Storage and Quickstart: Integrate an Azure storage account with Azure CDN.
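For example, assuming you've already created a storage account, enabling static website hosting and uploading your content might look like the following; the account name and local folder are placeholders:
# Enable static website hosting on the storage account.
az storage blob service-properties update \
    --account-name $storageAccountName \
    --static-website \
    --index-document index.html
# Upload the static assets to the generated $web container.
az storage blob upload-batch \
    --account-name $storageAccountName \
    --source ./static-content \
    --destination '$web'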
Dynamically published static content
If your application allows for static content that is uploaded/produced by your application but is immutable after its creation, you can use Azure Blob Storage and Azure CDN as described above, with an Azure Function to handle uploads and CDN refresh. We've provided a sample implementation for your use at Uploading and CDN-preloading static content with Azure Functions.
Dynamic or internal content
For files that are frequently written and read by your application (such as temporary data files), or static files that are visible only to your application, you can mount Azure Storage shares as persistent volumes. For more information, see Create and use a volume with Azure Files in Azure Kubernetes Service (AKS).
Determine whether your application relies on scheduled jobs
Scheduled jobs, such as Quartz Scheduler tasks or Unix cron jobs, should NOT be used with Azure Kubernetes Service (AKS). Azure Kubernetes Service will not prevent you from deploying an application containing scheduled tasks internally. However, if your application is scaled out, the same scheduled job may run more than once per scheduled period. This situation can lead to unintended consequences.
To execute scheduled jobs on your AKS cluster, define Kubernetes CronJobs as needed. For more information, see Running Automated Tasks with a CronJob.
Determine whether a connection to on-premises is needed
If your application needs to access any of your on-premises services, you'll need to provision one of Azure's connectivity services. For more information, see Connect an on-premises network to Azure. Alternatively, you'll need to refactor your application to use publicly available APIs that your on-premises resources expose.
Determine whether Java Message Service (JMS) Queues or Topics are in use
If your application is using JMS Queues or Topics, you'll need to migrate them to an externally hosted JMS server. Azure Service Bus and the Advanced Message Queuing Protocol (AMQP) can be a great migration strategy for those using JMS. For more information, see Use Java Message Service 1.1 with Azure Service Bus standard and AMQP 1.0.
If JMS persistent stores have been configured, you must capture their configuration and apply it after the migration.
Determine whether your application uses WebSphere-specific APIs
If your application uses WebSphere-specific APIs, you'll need to refactor it to remove those dependencies. For example, if you have used a class mentioned in the IBM WebSphere Application Server, Release 9.0 API Specification, you have used a WebSphere-specific API in your application.
Determine whether your application uses Entity Beans or EJB 2.x-style CMP Beans
If your application uses Entity Beans or EJB 2.x-style CMP beans, you'll need to refactor your application to remove these dependencies.
Determine whether the Java EE Application Client feature is in use
If you have client applications that connect to your (server) application using the Java EE Application Client feature, you'll need to refactor both your client applications and your (server) application to use HTTP APIs.
Determine whether your application contains OS-specific code
If your application contains any code with dependencies on the host OS, then you need to refactor it to remove those dependencies. For example, if your application currently runs on Windows, you may need to replace any use of / or \ in hard-coded file system paths with File.separator or Paths.get.
Determine whether EJB timers are in use
If your application uses EJB timers, you'll need to validate that the EJB timer code can be triggered by each WildFly instance independently. This validation is needed because, in the Azure Kubernetes Service deployment scenario, each EJB timer will be triggered on its own WildFly instance.
Determine whether JCA connectors are in use
If your application uses JCA connectors, you'll have to validate the JCA connector can be used on WildFly. If the JCA implementation is tied to WebSphere, you'll have to refactor your application to remove that dependency. If it can be used, then you'll need to add the JARs to the server classpath and put the necessary configuration files in the correct location in the WildFly server directories for it to be available.
Determine whether JAAS is in use
If your application is using JAAS, you'll need to capture how JAAS is configured. If it's using a database, you can convert it to a JAAS domain on WildFly. If it's a custom implementation, you'll need to validate that it can be used on WildFly.
Determine whether your application uses a Resource Adapter
If your application needs a Resource Adapter (RA), it needs to be compatible with WildFly. Determine whether the RA works fine on a standalone instance of WildFly by deploying it to the server and properly configuring it. If the RA works properly, you'll need to add the JARs to the server classpath of the Docker image and put the necessary configuration files in the correct location in the WildFly server directories for it to be available.
Determine whether your application is composed of multiple WARs
If your application is composed of multiple WARs, you should treat each of those WARs as separate applications and go through this guide for each of them.
Determine whether your application is packaged as an EAR
If your application is packaged as an EAR file, be sure to examine the application.xml and application-bnd.xml files and capture their configurations.
Note
If you want to be able to scale each of your web applications independently for better use of your Azure Kubernetes Service (AKS) resources, you should break up the EAR into separate web applications.
Identify all outside processes and daemons running on the production servers
If you have any processes running outside the application server, such as monitoring daemons, you'll need to eliminate them or migrate them elsewhere.
Perform in-place testing
Prior to creating your container images, migrate your application to the JDK and WildFly versions that you intend to use on AKS. Test the application thoroughly to ensure compatibility and performance.
Migration
Provision Azure Container Registry and Azure Kubernetes Service
Use the following commands to create a container registry and an Azure Kubernetes Service cluster whose identity is granted permission to pull images from the registry (the AcrPull role). Be sure to choose the appropriate network model for your cluster's networking requirements.
az group create \
--resource-group $resourceGroup \
--location eastus
az acr create \
--resource-group $resourceGroup \
--name $acrName \
--sku Standard
az aks create \
--resource-group $resourceGroup \
--name $aksName \
--attach-acr $acrName \
--network-plugin azure
Create a Docker image for WildFly
To create a Dockerfile, you'll need the following prerequisites:
- A supported JDK.
- An install of WildFly.
- Your JVM runtime options.
- A way to pass in environment variables (if applicable).
You can then perform the steps described in the following sections, where applicable. You can use the WildFly Container Quickstart repo as a starting point for your Dockerfile and web application.
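As a rough sketch, a Dockerfile that satisfies these prerequisites might look like the following. The base image, tag, WAR name, and JVM options are placeholders to adapt to your own build; the quickstart repo shows a more complete example.
# Placeholder base image providing a supported JDK and a WildFly install.
FROM quay.io/wildfly/wildfly:latest
# JVM runtime options (placeholders); setting JAVA_OPTS replaces WildFly's defaults.
ENV JAVA_OPTS="-Xms512m -Xmx1024m"
# Deploy the WAR built by Maven (placeholder name).
COPY target/my-app.war /opt/jboss/wildfly/standalone/deployments/
EXPOSE 8080
# Bind to all interfaces so the published container port is reachable.
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]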
Configure KeyVault FlexVolume
Create an Azure Key Vault and populate all the necessary secrets. For more information, see Quickstart: Set and retrieve a secret from Azure Key Vault using Azure CLI. Then, configure a KeyVault FlexVolume to make those secrets accessible to pods.
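For example, creating the vault and adding a secret from the Azure CLI might look like this; the vault and secret names are placeholders:
az keyvault create \
    --resource-group $resourceGroup \
    --name $keyVaultName
# Repeat for each secret inventoried during pre-migration.
az keyvault secret set \
    --vault-name $keyVaultName \
    --name database-password \
    --value "<value>"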
You will also need to update the startup script used to bootstrap WildFly. This script must import the certificates into the keystore used by WildFly before starting the server.
Set up data sources
To configure WildFly to access a data source, you'll need to add the JDBC driver JAR to your Docker image, and then execute the appropriate JBoss CLI commands to set up the data source while building your Docker image.
The following steps provide instructions for PostgreSQL, MySQL and SQL Server.
Download the JDBC driver for PostgreSQL, MySQL, or SQL Server.
Unpack the downloaded archive to get the driver .jar file.
Create a file with a name like module.xml and add the following markup. Replace the <module name> placeholder (including the angle brackets) with org.postgres for PostgreSQL, com.mysql for MySQL, or com.microsoft for SQL Server. Replace <JDBC .jar file path> with the name of the .jar file from the previous step, including the full path to the location where you will place the file in your Docker image, for example /opt/database.
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.1" name="<module name>">
    <resources>
        <resource-root path="<JDBC .jar file path>" />
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
Create a file with a name like datasource-commands.cli and add the following code. Replace <JDBC .jar file path> with the value you used in the previous step. Replace <module file path> with the file name and path from the previous step, for example /opt/database/module.xml.
Note
Microsoft recommends using the most secure authentication flow available. The authentication flow described in this procedure, such as for databases, caches, messaging, or AI services, requires a very high degree of trust in the application and carries risks not present in other flows. Use this flow only when more secure options, like managed identities for passwordless or keyless connections, are not viable. For local machine operations, prefer user identities for passwordless or keyless connections.
batch
module add --name=org.postgres --resources=<JDBC .jar file path> --module-xml=<module file path>
/subsystem=datasources/jdbc-driver=postgres:add(driver-name=postgres,driver-module-name=org.postgres,driver-class-name=org.postgresql.Driver,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=$DATABASE_CONNECTION_URL --user-name=$DATABASE_SERVER_ADMIN_FULL_NAME --password=$DATABASE_SERVER_ADMIN_PASSWORD --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker
reload
run-batch
shutdown
Update the JTA datasource configuration for your application:
Open the src/main/resources/META-INF/persistence.xml file for your app and find the <jta-data-source> element. Replace its contents as shown here:
<jta-data-source>java:jboss/datasources/postgresDS</jta-data-source>
Add the following to your Dockerfile so the data source is created when you build your Docker image:
RUN /bin/bash -c '<WILDFLY_INSTALL_PATH>/bin/standalone.sh --start-mode admin-only &' && \
    sleep 30 && \
    <WILDFLY_INSTALL_PATH>/bin/jboss-cli.sh -c --file=/opt/database/datasource-commands.cli && \
    sleep 30
Determine the DATABASE_CONNECTION_URL to use. The URL format differs for each database server and differs from the values shown in the Azure portal. The URL format shown here is required by WildFly:
jdbc:postgresql://<database server name>:5432/<database name>?ssl=true
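For reference, and assuming default ports, WildFly-friendly connection URLs for the other two databases typically follow these patterns; verify the exact parameters against your JDBC driver's documentation:
jdbc:mysql://<database server name>:3306/<database name>?useSSL=true
jdbc:sqlserver://<database server name>:1433;database=<database name>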
When creating your deployment YAML at a later stage, you'll need to pass the environment variables DATABASE_CONNECTION_URL, DATABASE_SERVER_ADMIN_FULL_NAME, and DATABASE_SERVER_ADMIN_PASSWORD with the appropriate values.
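As an illustrative sketch, the container spec in your deployment YAML might pass these variables as follows. The deployment and secret names are placeholders; in practice, source the password from a Kubernetes secret or the KeyVault FlexVolume rather than a literal value.
spec:
  containers:
  - name: my-wildfly-app
    image: <your registry>.azurecr.io/<your app>
    env:
    - name: DATABASE_CONNECTION_URL
      value: "jdbc:postgresql://<database server name>:5432/<database name>?ssl=true"
    - name: DATABASE_SERVER_ADMIN_FULL_NAME
      value: "<admin user name>"
    - name: DATABASE_SERVER_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          # Placeholder secret created separately, for example from Key Vault.
          name: database-credentials
          key: password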
For more information on configuring database connectivity with WildFly, see PostgreSQL, MySQL, or SQL Server.
Set up JNDI resources
To set up each JNDI resource you need to configure on WildFly, you will generally use the following steps:
- Download the necessary JAR files and copy them into the Docker image.
- Create a WildFly module.xml file referencing those JAR files.
- Create any configuration needed by the specific JNDI resource.
- Create a JBoss CLI script to be run during the Docker build to register the JNDI resource.
- Add everything to your Dockerfile.
- Pass the appropriate environment variables in your deployment YAML.
The example below shows the steps needed to create the JNDI resource for JMS connectivity to Azure Service Bus.
Download the Apache Qpid JMS provider.
Unpack the downloaded archive to get the .jar files.
Create a file with a name like module.xml and add the following markup in /opt/servicebus. Make sure the version numbers of the JAR files align with the names of the JAR files from the previous step.
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.1" name="org.jboss.genericjms.provider">
    <resources>
        <resource-root path="proton-j-0.31.0.jar"/>
        <resource-root path="qpid-jms-client-0.40.0.jar"/>
        <resource-root path="slf4j-log4j12-1.7.25.jar"/>
        <resource-root path="slf4j-api-1.7.25.jar"/>
        <resource-root path="log4j-1.2.17.jar"/>
        <resource-root path="netty-buffer-4.1.32.Final.jar" />
        <resource-root path="netty-codec-4.1.32.Final.jar" />
        <resource-root path="netty-codec-http-4.1.32.Final.jar" />
        <resource-root path="netty-common-4.1.32.Final.jar" />
        <resource-root path="netty-handler-4.1.32.Final.jar" />
        <resource-root path="netty-resolver-4.1.32.Final.jar" />
        <resource-root path="netty-transport-4.1.32.Final.jar" />
        <resource-root path="netty-transport-native-epoll-4.1.32.Final-linux-x86_64.jar" />
        <resource-root path="netty-transport-native-kqueue-4.1.32.Final-osx-x86_64.jar" />
        <resource-root path="netty-transport-native-unix-common-4.1.32.Final.jar" />
        <resource-root path="qpid-jms-discovery-0.40.0.jar" />
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.jms.api"/>
    </dependencies>
</module>
Create a jndi.properties file in /opt/servicebus.
connectionfactory.${MDB_CONNECTION_FACTORY}=amqps://${DEFAULT_SBNAMESPACE}.servicebus.windows.net?amqp.idleTimeout=120000&jms.username=${SB_SAS_POLICY}&jms.password=${SB_SAS_KEY}
queue.${MDB_QUEUE}=${SB_QUEUE}
topic.${MDB_TOPIC}=${SB_TOPIC}
Create a file with a name like servicebus-commands.cli and add the following code.
batch
/subsystem=ee:write-attribute(name=annotation-property-replacement,value=true)
/system-property=property.mymdb.queue:add(value=myqueue)
/system-property=property.connection.factory:add(value=java:global/remoteJMS/SBF)
/subsystem=ee:list-add(name=global-modules, value={"name" => "org.jboss.genericjms.provider", "slot" =>"main"})
/subsystem=naming/binding="java:global/remoteJMS":add(binding-type=external-context,module=org.jboss.genericjms.provider,class=javax.naming.InitialContext,environment=[java.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory,org.jboss.as.naming.lookup.by.string=true,java.naming.provider.url=/opt/servicebus/jndi.properties])
/subsystem=resource-adapters/resource-adapter=generic-ra:add(module=org.jboss.genericjms,transaction-support=XATransaction)
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd:add(class-name=org.jboss.resource.adapter.jms.JmsManagedConnectionFactory, jndi-name=java:/jms/${MDB_CONNECTION_FACTORY})
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd/config-properties=ConnectionFactory:add(value=${MDB_CONNECTION_FACTORY})
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd/config-properties=JndiParameters:add(value="java.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory;java.naming.provider.url=/opt/servicebus/jndi.properties")
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd:write-attribute(name=security-application,value=true)
/subsystem=ejb3:write-attribute(name=default-resource-adapter-name, value=generic-ra)
run-batch
reload
shutdown
Add the following to your Dockerfile so the JNDI resource is created when you build your Docker image:
RUN /bin/bash -c '<WILDFLY_INSTALL_PATH>/bin/standalone.sh --start-mode admin-only &' && \
    sleep 30 && \
    <WILDFLY_INSTALL_PATH>/bin/jboss-cli.sh -c --file=/opt/servicebus/servicebus-commands.cli && \
    sleep 30
When creating your deployment YAML at a later stage, you'll need to pass the environment variables MDB_CONNECTION_FACTORY, DEFAULT_SBNAMESPACE, SB_SAS_POLICY, SB_SAS_KEY, MDB_QUEUE, SB_QUEUE, MDB_TOPIC, and SB_TOPIC with the appropriate values.
Review WildFly configuration
Review the WildFly Admin Guide to cover any additional pre-migration steps not covered by the previous guidance.
Build and push the Docker image to Azure Container Registry
After you've created the Dockerfile, you'll need to build the Docker image and publish it to your Azure container registry.
If you used our WildFly Container Quickstart GitHub repo, the process of building and pushing your image to your Azure container registry would be the equivalent of invoking the following three commands.
In these examples, the MY_ACR environment variable holds the name of your Azure container registry and the MY_APP_NAME variable holds the name of the web application you want to use on your Azure container registry.
Build the WAR file:
mvn package
Log into your Azure container registry:
az acr login --name ${MY_ACR}
Build and push the image:
az acr build --image ${MY_ACR}.azurecr.io/${MY_APP_NAME} --file src/main/docker/Dockerfile .
Alternatively, you can use the Docker CLI to first build and test the image locally, as shown in the following commands. This approach can simplify testing and refining the image before initial deployment to ACR. However, it requires you to install the Docker CLI and ensure the Docker daemon is running.
Build the image:
docker build -t ${MY_ACR}.azurecr.io/${MY_APP_NAME} --file src/main/docker/Dockerfile .
Run the image locally:
docker run -it -p 8080:8080 ${MY_ACR}.azurecr.io/${MY_APP_NAME}
You can now access your application at http://localhost:8080.
Log into your Azure container registry:
az acr login --name ${MY_ACR}
Push the image to your Azure container registry:
docker push ${MY_ACR}.azurecr.io/${MY_APP_NAME}
For more in-depth information on building and storing container images in Azure, see the Learn module Build and store container images with Azure Container Registry.
Provision a public IP address
If your application is to be accessible from outside your internal or virtual network(s), you'll need a public static IP address. You should provision this IP address inside your cluster's node resource group, as shown in the following example:
export nodeResourceGroup=$(az aks show \
--resource-group $resourceGroup \
--name $aksName \
--query 'nodeResourceGroup' \
--output tsv)
export publicIp=$(az network public-ip create \
--resource-group $nodeResourceGroup \
--name applicationIp \
--sku Standard \
--allocation-method Static \
--query 'publicIp.ipAddress' \
--output tsv)
echo "Your public IP address is ${publicIp}."
Deploy to Azure Kubernetes Service (AKS)
Create and apply your Kubernetes YAML file(s). For more information, see Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI. If you're creating an external load balancer (whether for your application or for an ingress controller), be sure to provide the IP address provisioned in the previous section as the loadBalancerIP.
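As a minimal sketch, a LoadBalancer service using that static IP might look like the following; the service name and the app label are placeholders that must match your deployment:
apiVersion: v1
kind: Service
metadata:
  name: my-wildfly-app
spec:
  type: LoadBalancer
  # Static IP provisioned in the node resource group in the previous section.
  loadBalancerIP: <public IP address>
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-wildfly-app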
Include externalized parameters as environment variables. For more information, see Define Environment Variables for a Container. Don't include secrets (such as passwords, API keys, and JDBC connection strings); provide those through the KeyVault FlexVolume configured earlier.
Be sure to include memory and CPU settings when creating your deployment YAML so your containers are properly sized.
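For example, a resources stanza based on the inventory you gathered during pre-migration might look like this; the values are placeholders to adjust to your own measurements:
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "1"
    memory: "2Gi"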
Configure persistent storage
If your application requires non-volatile storage, configure one or more Persistent Volumes.
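As a sketch, a claim backed by Azure Files might look like the following; the azurefile-csi class name is assumed to be the built-in storage class on current AKS clusters, so verify it exists on yours:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wildfly-data
spec:
  accessModes:
  - ReadWriteMany
  # Built-in AKS storage class for Azure Files (verify the name on your cluster).
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 5Gi
In your deployment YAML, reference the claim under volumes and mount it into the container with a volumeMounts entry (for example at /opt/app-data, a placeholder path).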
Migrate scheduled jobs
To execute scheduled jobs on your AKS cluster, define Kubernetes CronJobs as needed. For more information, see Running Automated Tasks with a CronJob.
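For example, a job migrated from a nightly cron entry might look like the following sketch; the image, command, and schedule are placeholders:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  # Run at 02:00 UTC every day, mirroring the original cron schedule.
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: nightly-report
            image: <your registry>.azurecr.io/nightly-report-job
            command: ["/bin/sh", "-c", "run-report.sh"]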
Post-migration
Now that you have migrated your application to Azure Kubernetes Service, you should verify that it works as you expect. After you've done that, we have some recommendations for you that can make your application more cloud-native.
Recommendations
Consider adding a DNS name to the IP address allocated to your ingress controller or application load balancer. For more information, see Use TLS with an ingress controller on Azure Kubernetes Service (AKS).
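For example, you can attach a DNS label to the public IP created earlier with a command like the following; the label is a placeholder and must be unique within the Azure region:
az network public-ip update \
    --resource-group $nodeResourceGroup \
    --name applicationIp \
    --dns-name <your-unique-label>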
Consider adding Helm charts for your application. A Helm chart allows you to parameterize your application deployment for use and customization by a more diverse set of customers.
Design and implement a DevOps strategy. In order to maintain reliability while increasing your development velocity, consider automating deployments and testing with Azure Pipelines. For more information, see Build and deploy to Azure Kubernetes Service with Azure Pipelines.
Enable Azure Monitoring for the cluster. For more information, see Enable monitoring for Kubernetes clusters. Doing so allows Azure Monitor to collect container logs, track utilization, and so on.
Consider exposing application-specific metrics via Prometheus. Prometheus is an open-source metrics framework broadly adopted in the Kubernetes community. You can configure Prometheus Metrics scraping in Azure Monitor instead of hosting your own Prometheus server to enable metrics aggregation from your applications and automated response to or escalation of aberrant conditions. For more information, see Enable Prometheus and Grafana.
Design and implement a business continuity and disaster recovery strategy. For mission-critical applications, consider a multi-region deployment architecture. For more information, see High availability and disaster recovery overview for Azure Kubernetes Service (AKS).
Review the Kubernetes version support policy. It's your responsibility to keep updating your AKS cluster to ensure that it's always running a supported version. For more information, see Upgrade options for Azure Kubernetes Service (AKS) clusters.
Have all team members responsible for cluster administration and application development review the pertinent AKS best practices. For more information, see Cluster operator and developer best practices to build and manage applications on Azure Kubernetes Service (AKS).
Make sure your deployment file specifies how rolling updates are done. For more information, see Rolling Update Deployment in the Kubernetes documentation.
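A minimal strategy stanza in your deployment YAML might look like this; the replica count and surge values are placeholders to tune for your capacity:
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # Keep full capacity during updates by adding one extra pod at a time.
      maxSurge: 1
      maxUnavailable: 0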
Set up auto scaling to deal with peak time loads. For more information, see Use the cluster autoscaler in Azure Kubernetes Service (AKS).
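For example, enabling the cluster autoscaler on an existing cluster might look like the following; the node counts are placeholders:
az aks update \
    --resource-group $resourceGroup \
    --name $aksName \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5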
Consider monitoring the code cache size and adding the JVM parameters -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize in the Dockerfile to further optimize performance. For more information, see Codecache Tuning in the Oracle documentation.
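One way to do this is through JAVA_OPTS in the Dockerfile, as in the sketch below. The sizes shown are placeholders to derive from your own monitoring, and note that when JAVA_OPTS is set in the environment, WildFly's standalone.sh uses it in place of its built-in defaults, so include any other JVM settings you rely on.
# Placeholder sizes; tune based on code cache monitoring.
ENV JAVA_OPTS="-Xms512m -Xmx1024m -XX:InitialCodeCacheSize=64m -XX:ReservedCodeCacheSize=240m"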