
Storage Configuration Guidelines for SQL Server on Azure VM

SQL Server on Azure Virtual Machines offers full control of both the VM and SQL Server, and enables the simplest cloud migration path for SQL Server workloads. The SQL IaaS Extension is installed by default on each SQL VM created through the Azure Marketplace. The SQL IaaS Extension offers managed backup, automated security patching, connectivity configuration, Azure Key Vault (AKV) integration, and optimized storage configuration features. SQL Server images on the Azure Marketplace are tuned for performance by the SQL Server team according to workload type. However, for mission-critical SQL Server workloads, the optimal storage configuration depends on the IO characteristics and requirements of the specific workload. This article describes guidelines from the SQL Server team based on tests performed on Azure IaaS.

Choose Premium Storage and the Right VM Size

Premium Storage is designed to provide consistent, high IO performance with low latency: uncached reads operate with an average 4 ms latency and uncached writes with an average 2 ms latency, at up to 80,000 IOPS per VM. In addition to uncached operations, the host SSD-based blob cache technology offers up to 160,000 IOPS with less than 1 ms latency. Because of these consistent, high IO performance capabilities, we recommend Premium Storage for SQL Server workloads on Azure VMs.

Each Premium Storage-supported VM has scale limits and performance specifications for IOPS, bandwidth, and the number of disks that can be attached. VM scale limits define the maximum throughput that SQL Server workloads can drive. We recommend choosing a VM size that can scale up to your workload's requirements for uncached and cached IOPS, throughput, and size.

For example, the Standard DS14_v2 VM size offers maximum uncached disk throughput of up to 51,200 IOPS or 768 MBps with 64 premium disks, and maximum cached + temporary storage throughput of up to 64,000 IOPS or 512 MBps with a 576 GB blob cache and a 224 GB local SSD.
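To illustrate the sizing check described above, here is a minimal Python sketch. The DS14_v2 limits are the figures quoted in this article; the workload numbers in the usage example are hypothetical:

```python
# Minimal VM sizing check using the Standard DS14_v2 limits quoted above.
# The workload requirement numbers in the example call are hypothetical.

DS14_V2 = {
    "max_uncached_iops": 51_200,   # uncached disk IOPS
    "max_uncached_mbps": 768,      # uncached disk throughput (MBps)
    "max_cached_iops": 64_000,     # cached + temp storage IOPS
    "max_cached_mbps": 512,        # cached + temp storage throughput (MBps)
    "max_data_disks": 64,
}

def fits(vm_limits, uncached_iops, uncached_mbps, cached_iops, cached_mbps):
    """Return True if the workload requirements fit within the VM scale limits."""
    return (uncached_iops <= vm_limits["max_uncached_iops"]
            and uncached_mbps <= vm_limits["max_uncached_mbps"]
            and cached_iops <= vm_limits["max_cached_iops"]
            and cached_mbps <= vm_limits["max_cached_mbps"])

# Hypothetical workload: 30,000 uncached IOPS at 400 MBps,
# plus 50,000 cached IOPS at 450 MBps.
print(fits(DS14_V2, 30_000, 400, 50_000, 450))  # True
```

If any single requirement exceeds the corresponding limit, the next larger VM size (or a different series) should be evaluated instead.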

You can find the limits and resources assigned for all VM sizes on the Azure Virtual Machine Sizing and Pricing page.

Read-only cache significantly improves throughput for read-heavy workloads

Azure VMs that support Premium Storage have a multi-tier caching technology that uses a combination of VM RAM and local SSD for caching. The gain that read-only cache brings for SQL Server workloads depends on workload characteristics. We tested with a representative OLTP workload that drove up to 50,000 IOPS on a DS14_v2 VM with 10 P30 disks attached. For this test, data and log files were located on the same storage pool, striped over the 10 P30 disks.

For the first OLTP workload profile, with 80% reads, 20% writes, and light Temp DB activity, we observed 42% more throughput when read-only cache was enabled for all premium disks compared to the same configuration with no cache. For the second OLTP workload profile, with 50% reads, 50% writes, and light Temp DB activity, we observed 25% more throughput with read-only cache compared to the configuration with no cache.

Separate Data and Log Files

We recommend using separate storage pools for data and log files to maximize the gain from the read-only cache. Hosting log files on premium storage disks with no cache reserves the available space and throughput of the host blob cache exclusively for read activity, maximizing the benefit. This can be achieved by creating two separate storage pools for data and log files, sized according to the workload's capacity and disk throughput requirements.

Specifically, we recommend enabling read-only cache for the premium storage disks included in the storage pool hosting the SQL Server data files. For example, if the total IOPS requirement for the data files is 30,000, we recommend striping over 6 P30 disks or 4 P50 disks, after enabling read-only cache for all of the disks.

We recommend not configuring read-only cache for the premium storage disks included in the storage pool hosting the log file. And if the disk throughput requirement for the log file is less than 7,500 IOPS, no storage pool is needed at all, since a single P50 disk can deliver that many IOPS.
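The striping arithmetic in the last two paragraphs can be sketched in Python. The per-disk IOPS figures (5,000 for P30 and 7,500 for P50) are the published Premium Storage disk limits; the function itself is only illustrative:

```python
import math

# Published Premium Storage per-disk IOPS limits.
DISK_IOPS = {"P30": 5_000, "P50": 7_500}

def disks_needed(required_iops, disk_type):
    """Number of disks of the given type to stripe over to reach required_iops."""
    return math.ceil(required_iops / DISK_IOPS[disk_type])

# Data file pool: 30,000 IOPS -> 6 x P30 or 4 x P50, matching the example above.
print(disks_needed(30_000, "P30"))  # 6
print(disks_needed(30_000, "P50"))  # 4

# Log file: 7,500 IOPS or less fits on a single P50, so no pool is required.
print(disks_needed(7_500, "P50"))   # 1
```

In practice the same ceiling calculation should also be run for throughput (MBps) and capacity, and the largest of the three disk counts used.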

Tests show a significant throughput gain when the log file is separated from the data files, especially for write-heavy SQL Server workloads hosted on a VM with a small host blob cache. For example, for the 50%-write OLTP test profile, we observed 35% more total throughput when the log file was hosted on a storage pool with no cache and the data files on a separate storage pool with read-only cache, compared to hosting data and log files on a single storage pool with read-only cache.

Place Temp DB on the local SSD – only after choosing the right VM size

VM scale limits define the storage capacity a given VM is capable of, including maximum bandwidth, maximum throughput, and size limits for the local SSD disk and the blob cache. For mission-critical SQL Server workloads with high Temp DB activity, hosting Temp DB on the local SSD significantly improves workload performance and throughput. The size, throughput, and bandwidth limits of the target VM's local SSD disk must accommodate the Temp DB IO requirements. And the VM's host blob cache scale limits must be large enough to absorb the total read activity that local SSD reads and host blob cache reads together will drive.

The "maximum local disk + SSD cache performance" limits define the maximum read and write activity on the local SSD disk plus the read activity served from the SSD cache. If Temp DB is placed on the local SSD disk and read-only cache is also enabled for the premium disks hosting the SQL Server data files, then the read + write IOPS driven by Temp DB and the read IOPS served from the read-only cache share the same IOPS and throughput limit. For example, Standard_DS14_v2 has a 224 GB temporary disk and a 576 GB host cache, making up to 64,000 IOPS and 512 MB/s available for read cache activity together with the Temp DB activity on the local SSD.
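This shared-limit behavior amounts to a simple budget check. The sketch below assumes the DS14_v2 cached limit of 64,000 IOPS quoted above; the workload figures in the example calls are hypothetical:

```python
# Temp DB on the local SSD and read-only-cache reads share one cached IOPS
# limit. Budget check against the Standard_DS14_v2 cached limit; the workload
# numbers in the example calls are hypothetical.

CACHED_IOPS_LIMIT = 64_000  # DS14_v2 cached + temp storage IOPS limit

def cached_budget_ok(tempdb_read_iops, tempdb_write_iops, ro_cache_read_iops,
                     limit=CACHED_IOPS_LIMIT):
    """True if Temp DB activity plus RO-cache reads fit within the shared limit."""
    total = tempdb_read_iops + tempdb_write_iops + ro_cache_read_iops
    return total <= limit

# 15,000 Temp DB reads + 10,000 Temp DB writes + 35,000 cache reads = 60,000.
print(cached_budget_ok(15_000, 10_000, 35_000))  # True

# 10,000 additional cache reads push the total past the 64,000 IOPS limit.
print(cached_budget_ok(15_000, 10_000, 45_000))  # False
```

When the check fails, either Temp DB activity must be reduced, fewer data disks cached, or a VM size with a larger cached limit chosen.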

Storage-optimized L-series and memory-optimized M-series VMs are great candidates for Temp DB-heavy and/or read-heavy SQL Server workloads, with local SSDs up to 14 TB and memory up to 4 TB.

Mission-critical performance comparable to on-premises deployments is certainly achievable for SQL Server workloads on Azure VMs with the right VM size selection and a workload-optimized SQL Server configuration.