System requirements for small form factor deployments of Azure Local, version 23H2 (preview)
Applies to: Azure Local, version 23H2
This article describes the machine, storage, and networking requirements for building Azure Local solutions that use small form factor hardware. If you purchase small-class hardware from the Azure Local Catalog, ensure that these requirements are met before you deploy your Azure Local solution.
Important
This feature is currently in PREVIEW. See the Supplemental Terms of Use for Microsoft Azure Previews for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
About the small hardware class
Azure Local supports a new class of devices with reduced hardware requirements. This new, low-cost hardware class, referred to as small, is suited to a variety of edge scenarios across industries. To ensure compatibility, interoperability, security, and reliability, this class of hardware must meet the Azure Local solution requirements.
The certified devices are listed in the Azure Local Catalog.
Device requirements
The device must be listed in the Azure Local Catalog, which indicates that the device has passed the Windows Server certification and the additional qualifications for this class.
The following table lists the requirements for the small hardware:
Component | Description |
---|---|
Number of machines | 1 to 3 machines are supported. Each machine must be of the same manufacturer and model, have the same network adapters, and have the same number and type of storage drives. |
CPU | An Intel Xeon or AMD EPYC or later compatible processor with second-level address translation (SLAT). A maximum of 14 physical cores per machine. |
Memory | A minimum of 32 GB per machine and a maximum of 128 GB per machine with Error-Correcting Code (ECC). |
Host network adapters | At least one network adapter that meets the Azure Local host network requirements. Enabling RDMA for the storage intent isn't required. The minimum link speed is 1 Gbit/s. |
BIOS | Intel VT or AMD-V must be turned on. |
Boot drive | A minimum size of 200 GB. |
Data drives | A minimum of one data disk with a capacity of at least 1 TB. The drives must be all-flash, either Nonvolatile Memory Express (NVMe) or solid-state drive (SSD), and all drives must be of the same type. No caching drives. |
Trusted Platform Module (TPM) | TPM version 2.0 hardware must be present and enabled. |
Secure Boot | Secure Boot must be present and turned on. |
Storage Controller | Pass-through. RAID controller cards or SAN (Fibre Channel, iSCSI, FCoE) aren't supported. |
GPU | Optional. Up to 192 GB of GPU memory per machine. |
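The per-machine limits in the table above can be encoded as a simple compliance check. The following is an illustrative sketch only; the `MachineSpec` and `meets_small_class` names are hypothetical helpers, not part of any Azure tooling.

```python
# Hypothetical validator for the small-class per-machine limits listed above.
from dataclasses import dataclass

@dataclass
class MachineSpec:
    physical_cores: int
    memory_gb: int          # ECC memory
    boot_drive_gb: int
    data_drive_tb: float
    nic_speed_gbps: float
    tpm_2_0: bool
    secure_boot: bool

def meets_small_class(spec: MachineSpec) -> list[str]:
    """Return a list of requirement violations (empty means compliant)."""
    problems = []
    if spec.physical_cores > 14:
        problems.append("CPU: more than 14 physical cores")
    if not (32 <= spec.memory_gb <= 128):
        problems.append("Memory: must be 32-128 GB ECC per machine")
    if spec.boot_drive_gb < 200:
        problems.append("Boot drive: minimum 200 GB")
    if spec.data_drive_tb < 1:
        problems.append("Data drives: minimum single disk of 1 TB")
    if spec.nic_speed_gbps < 1:
        problems.append("Network adapter: minimum link speed 1 Gbit/s")
    if not spec.tpm_2_0:
        problems.append("TPM 2.0 must be present and enabled")
    if not spec.secure_boot:
        problems.append("Secure Boot must be present and turned on")
    return problems
```

A spec such as `MachineSpec(14, 64, 240, 1.0, 10.0, True, True)` returns an empty list, meaning every tabled requirement is met.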
Important
For the 2411 release, the Update, Add-server, and Repair-server operations aren't supported for the small hardware class.
Storage requirements
The storage subsystem for an Azure Local instance running the Azure Stack HCI operating system is layered on top of Storage Spaces Direct. When building a solution using small-class hardware:
- A minimum of one data drive is required to create a storage pool.
- All drives in the pool must be of the same type, either NVMe or SSD.
- Mixing drive types for caching (NVMe and SSD, or SSD and HDD) isn't supported.
The supported volume configuration for the system is:
- A single, 250 GB, fixed infrastructure volume.
Supported sample storage configurations
Node count | Disk count | Disk type | Volume resiliency level | Sustain faults |
---|---|---|---|---|
1 | 1 | NVMe or SSD | Simple | None |
2 | 1 | NVMe or SSD | Two-way mirror | Single fault (drive or node) |
3 | 1 | NVMe or SSD | Two-way mirror | Single fault (drive or node) |
1 | 2 | NVMe or SSD | Two-way mirror | Single fault (drive) |
2 | 2 | NVMe or SSD | Two-way mirror | Two faults (drive and node) |
3 | 2 | NVMe or SSD | Three-way mirror | Two faults |
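The sample storage configurations above can be read as a lookup from node count and disks per node to the volume resiliency level. The sketch below encodes that table; the `resiliency_level` name is hypothetical.

```python
# The supported sample storage configurations from the table above,
# keyed by (node_count, disks_per_node).
RESILIENCY = {
    (1, 1): "Simple",
    (2, 1): "Two-way mirror",
    (3, 1): "Two-way mirror",
    (1, 2): "Two-way mirror",
    (2, 2): "Two-way mirror",
    (3, 2): "Three-way mirror",
}

def resiliency_level(nodes: int, disks_per_node: int) -> str:
    """Return the volume resiliency level for a sample configuration."""
    try:
        return RESILIENCY[(nodes, disks_per_node)]
    except KeyError:
        raise ValueError(
            f"Not a supported sample configuration: "
            f"{nodes} nodes x {disks_per_node} disks"
        )
```

For example, `resiliency_level(3, 2)` returns `"Three-way mirror"`, matching the last row of the table.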
Networking requirements
Network adapters must meet the Azure Local host network requirements. This requirement ensures compatibility and reliability with Hyper-V, and also controls how adapter properties (VLAN ID, RSS, VMQ) are exposed by the driver.
The reduced networking requirements are as follows:
- Storage intent doesn't require RDMA to be enabled.
- Only a single link is needed.
- Minimum link speed of 1 Gbit/s is required.
- A Layer 2 switch with VLAN support is required.
Removing the RDMA requirement allows the use of a Layer 2 network switch with VLAN support, which further simplifies configuration management and reduces the overall solution cost.
Supported sample network configurations
Link count | Intent | Switch type | Sustain faults |
---|---|---|---|
2 | Single intent | Switch | Single fault |
2 | Two intents | Switch | None |
4 | Two intents | Switch | Single fault |
3 | Three intents | Switch | None |
2 | Two intents | No switch for storage | None |
4 | Two intents | No switch for storage | Single fault |
6 | Three intents | No switch for storage | Single fault |
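The sample network configurations above can likewise be encoded as a lookup from link count, intent count, and whether storage is switchless, to the number of faults the configuration can sustain. This is an illustrative sketch; the names are hypothetical.

```python
# The supported sample network configurations from the table above,
# keyed by (link_count, intent_count, storage_switchless).
# The value is the number of faults the configuration can sustain.
SAMPLE_NETWORK_CONFIGS = {
    (2, 1, False): 1,   # single intent, switched
    (2, 2, False): 0,
    (4, 2, False): 1,
    (3, 3, False): 0,
    (2, 2, True): 0,    # no switch for storage
    (4, 2, True): 1,
    (6, 3, True): 1,
}

def sustained_faults(links: int, intents: int, storage_switchless: bool) -> int:
    """Return how many faults a sample network configuration can sustain."""
    key = (links, intents, storage_switchless)
    if key not in SAMPLE_NETWORK_CONFIGS:
        raise ValueError(f"Not a supported sample configuration: {key}")
    return SAMPLE_NETWORK_CONFIGS[key]
```

For example, `sustained_faults(4, 2, False)` returns `1`: four links carrying two intents through a switch can sustain a single fault.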
Considerations
Following are some workload considerations for building a solution using the small hardware class:
- Baseline the workload IOPS requirements when sizing the storage configuration to determine whether the small hardware class is the right fit.
- Account for the vCPU requirements and target vCPU-to-physical-core ratio, given the lower total physical core count of the small hardware class.
- Calculate the total memory requirement when using three machines of the small hardware class. Keep one node's worth of capacity free for update runs and failure scenarios.
- Network bandwidth requirements.
- The VM creation time is critical as it's influenced by network bandwidth.
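The "keep one node's worth of capacity free" guidance above amounts to N-1 sizing. The sketch below illustrates the arithmetic; the function name is hypothetical, and the 128 GB figure in the example is simply the per-machine maximum from the requirements table.

```python
# N-1 memory budgeting: with more than one machine, plan workload memory
# as if one node were absent, leaving headroom for updates and failures.
def workload_memory_budget_gb(nodes: int, memory_per_node_gb: int) -> int:
    """Total memory available to workloads while keeping one node free."""
    if nodes < 1 or memory_per_node_gb < 1:
        raise ValueError("nodes and memory must be positive")
    reserved_nodes = 1 if nodes > 1 else 0
    return (nodes - reserved_nodes) * memory_per_node_gb
```

For example, three machines at the 128 GB maximum leave 2 x 128 GB = 256 GB for workloads after reserving one node's capacity.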
Availability
- Spare part logistics to remediate failures timely.
- Part replacement and identification for non-hot swap components.
Next steps
Review firewall, physical network, and host network requirements: