# Use Container Storage Interface (CSI) disk drivers in AKS enabled by Azure Arc
> Applies to: AKS on Azure Stack HCI 22H2, AKS on Windows Server, AKS on Azure Local, version 23H2
This article describes how to use Container Storage Interface (CSI) built-in storage classes to dynamically create disk persistent volumes and create custom storage classes in AKS enabled by Arc.
## Overview of CSI in AKS enabled by Arc
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By using CSI, AKS enabled by Arc can write, deploy, and iterate plug-ins to expose new storage systems. CSI also makes it possible to improve existing storage systems in Kubernetes without touching the core Kubernetes code or waiting for its release cycles.
The disk and file CSI drivers used by AKS Arc are CSI specification-compliant drivers.
The CSI storage driver support on AKS Arc allows you to use:
- AKS Arc disks that you can use to create a Kubernetes DataDisk resource. These disks are mounted as ReadWriteOnce, so they're available to only a single pod at a time. For storage volumes that can be accessed by multiple pods simultaneously, use AKS Arc files.
- AKS Arc files that you can use to mount an SMB or NFS share to pods. These shares are mounted as ReadWriteMany, so you can share data across multiple nodes and pods. They can also be mounted as ReadWriteOnce based on the PVC (persistent volume claim) specification.
## Dynamically create disk persistent volumes using built-in storage class
A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on how to use storage classes, see Kubernetes storage classes.
In AKS Arc, a default storage class is created automatically and uses CSI to create VHDX-backed volumes. Its reclaim policy ensures that the underlying VHDX is deleted when the persistent volume that used it is deleted. The storage class also configures persistent volumes to be expandable; to grow a volume, you only need to edit the persistent volume claim with the new size.
To use this storage class, create a PVC and a pod that references and uses it. A PVC automatically provisions storage based on a storage class: it can use one of the pre-created storage classes or a user-defined storage class to create a VHDX of the desired size. When you create a pod definition, specify the PVC to request the desired storage.
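For example, the following manifest is a minimal sketch of a PVC that requests a 10-GiB volume from the built-in `default` storage class, plus a pod that mounts it. The claim name, pod name, image, and mount path are illustrative values, not names defined by this article:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-akshci-disk            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce                # disk volumes are single-pod, as described above
  storageClassName: default        # the built-in storage class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-akshci-disk            # illustrative name
spec:
  containers:
    - name: app
      image: nginx                 # any workload image works here
      volumeMounts:
        - mountPath: /mnt/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-akshci-disk
```

Because the storage class allows volume expansion, you can later grow such a volume by editing `spec.resources.requests.storage` in the PVC (for example, with `kubectl edit pvc pvc-akshci-disk`) rather than recreating it.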
## Create custom storage class for disks
The default storage class is suitable for most common scenarios. However, in some cases, you may want to create your own storage class that stores PVs at a particular location mapped to a specific performance tier.
If you have Linux workloads (pods), you must create a custom storage class with the parameter `fsType: ext4`. This requirement applies to Kubernetes versions 1.19 and 1.20 or later. The following example shows a custom storage class definition with the `fsType` parameter defined:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aks-hci-disk-custom
parameters:
  blocksize: "33554432"
  container: SqlStorageContainer
  dynamic: "true"
  group: clustergroup-summertime
  hostname: TESTPATCHING-91.sys-sqlsvr.local
  logicalsectorsize: "4096"
  physicalsectorsize: "4096"
  port: "55000"
  fsType: ext4
provisioner: disk.csi.akshci.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```
If you create a custom storage class, you can specify the location where you want to store PVs. If the underlying infrastructure is Azure Local, this new location could be a volume that's backed by high-performing SSDs/NVMe or a cost-optimized volume backed by HDDs.
Creating a custom storage class is a two-step process:
1. Create a new storage path using the `stack-hci-vm storagepath` cmdlets to create, show, and list the storage paths on your Azure Local cluster. For more information about storage path creation, see storage path.

   For `$path`, create a storage path named `$storagepathname`; for example, C:\ClusterStorage\test-storagepath:

   ```azurecli
   az stack-hci-vm storagepath create --resource-group $resource_group --custom-location $customLocationID --name $storagepathname --path $path
   ```

   Get the storage path resource ID:

   ```azurecli
   $storagepathID = az stack-hci-vm storagepath show --name $storagepathname --resource-group $resource_group --query "id" -o tsv
   ```
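   The variables in these commands are placeholders that you define for your environment. Purely as an illustration (the names and values shown here are examples, not values from this article), they might be set as follows before you run the commands:

   ```azurecli
   # Example placeholder values - replace with values from your own environment
   $resource_group = "aks-arc-rg"
   $customLocationID = "/subscriptions/<subscription-id>/resourceGroups/aks-arc-rg/providers/Microsoft.ExtendedLocation/customLocations/my-custom-location"
   $storagepathname = "test-storagepath"
   $path = "C:\ClusterStorage\test-storagepath"
   ```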
2. Create a new custom storage class using the new storage path.

   Create a file named sc-aks-hci-disk-custom.yaml, and then copy the manifest from the following YAML file. The storage class is the same as the default storage class except with the new `container`. Use the storage path ID created in the previous step for `container`. For `group` and `hostname`, query the default storage class by running `kubectl get storageclass default -o yaml`, and then use the values that are specified:

   ```yaml
   kind: StorageClass
   apiVersion: storage.k8s.io/v1
   metadata:
     name: aks-hci-disk-custom
   provisioner: disk.csi.akshci.com
   parameters:
     blocksize: "33554432"
     container: <storage path ID>
     dynamic: "true"
     group: <e.g clustergroup-akshci> # same as the default storageclass
     hostname: <e.g. ca-a858c18c.ntprod.contoso.com> # same as the default storageclass
     logicalsectorsize: "4096"
     physicalsectorsize: "4096"
     port: "55000"
     fsType: ext4 # refer to the note above to determine when to include this parameter
   allowVolumeExpansion: true
   reclaimPolicy: Delete
   volumeBindingMode: Immediate
   ```
   Create the storage class with the `kubectl apply` command and specify your sc-aks-hci-disk-custom.yaml file:

   ```console
   $ kubectl apply -f sc-aks-hci-disk-custom.yaml
   storageclass.storage.k8s.io/aks-hci-disk-custom created
   ```
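After the class is created, a persistent volume claim can request storage from it by name, just as with the default storage class. The following PVC is a minimal sketch, with an illustrative claim name and size, that provisions a VHDX on the new storage path through the `aks-hci-disk-custom` class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-aks-hci-disk-custom          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: aks-hci-disk-custom  # the custom class created above
  resources:
    requests:
      storage: 10Gi
```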