Hi All,
I wanted to provide an update regarding the issue I faced with NVIDIA L4 GPUs on Windows Server Hyper-V VMs using Discrete Device Assignment (DDA). I was able to resolve this issue successfully by following the guidance provided in the NVIDIA documentation:
Bug #2812853: Microsoft DDA not working with some GPUs.
The problem occurred because GPUs with more than 16 GB of memory require additional MMIO (Memory-Mapped Input/Output) space for proper mapping in the guest VM. Without this configuration, the GPU wouldn't be detected properly in the VM.
The workaround involves allocating sufficient HighMemoryMappedIoSpace for the VM based on the GPU's BAR1 memory size and the number of GPUs assigned to the VM. Here's the step-by-step process:
- Use the following formula to calculate the required MMIO space (see the PowerShell sketch just after this list):
MMIO space = 2 × gpu-bar1-memory × assigned-gpus
Where:
- gpu-bar1-memory: The amount of BAR1 memory for one GPU (equal to the total GPU memory if not specified).
- assigned-gpus: The number of GPUs assigned to the VM.
- Assign the calculated MMIO space to the VM using the Set-VM PowerShell command on the Hyper-V host.
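If it helps, here is a small PowerShell sketch of that same calculation. The function name and parameters are just my own illustration (they are not from the NVIDIA doc), so adapt them as you like:

# Hypothetical helper: computes the MMIO space (in GB) as 2 x gpu-bar1-memory x assigned-gpus
function Get-RequiredMmioSpaceGB {
    param(
        [int]$GpuBar1MemoryGB,   # BAR1 memory of one GPU, in GB
        [int]$AssignedGpus       # number of GPUs assigned to the VM
    )
    return 2 * $GpuBar1MemoryGB * $AssignedGpus
}

# Example: one GPU with 23 GB of BAR1 memory -> 46 (GB)
Get-RequiredMmioSpaceGB -GpuBar1MemoryGB 23 -AssignedGpus 1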
For a VM with 1 GPU assigned, where the GPU has 23 GB of BAR1 memory:
MMIO space = 2 × 23 GB × 1 = 46 GB
Run the following PowerShell command on the Hyper-V host to set the required MMIO space:
Set-VM -HighMemoryMappedIoSpace 46GB -VMName <VM_Name>
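Make sure the VM is powered off before changing this (DDA assignment in general requires the VM to be off). If you want to double-check the value afterwards, something along these lines should work; I'm assuming the VM object exposes the MMIO properties on your Hyper-V build, so treat it as a sketch:

Stop-VM -Name <VM_Name>
Set-VM -HighMemoryMappedIoSpace 46GB -VMName <VM_Name>
# Confirm the configured MMIO space
Get-VM -Name <VM_Name> | Format-List Name, LowMemoryMappedIoSpace, HighMemoryMappedIoSpace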
For 3 NVIDIA L4 GPUs assigned to a single VM, each with 23 GB of BAR1 memory:
MMIO space = 2 × 23 GB × 3 = 138 GB
Run the following to set the MMIO space for the VM:
Set-VM -HighMemoryMappedIoSpace 138GB -VMName <VM_Name>
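For broader context, the MMIO setting is only one step of the DDA assignment. Here is a rough sketch of how it fits into the overall sequence for the 3-GPU case, based on Microsoft's DDA guidance; the location paths are placeholders you would look up on your host (e.g. via Get-PnpDeviceProperty with DEVPKEY_Device_LocationPaths), and the 3GB low MMIO value is just a commonly used default, so adjust everything to your environment:

$vmName = "<VM_Name>"
# Placeholder PCIe location paths of the three L4 GPUs
$locationPaths = @("<GPU1_Location_Path>", "<GPU2_Location_Path>", "<GPU3_Location_Path>")

Stop-VM -Name $vmName

# DDA prerequisites on the VM
Set-VM -AutomaticStopAction TurnOff -VMName $vmName
Set-VM -GuestControlledCacheTypes $true -VMName $vmName

# MMIO space: 3GB low (commonly used default) and 138GB high per the formula above
Set-VM -LowMemoryMappedIoSpace 3GB -VMName $vmName
Set-VM -HighMemoryMappedIoSpace 138GB -VMName $vmName

# Dismount each GPU from the host and assign it to the VM
foreach ($locationPath in $locationPaths) {
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
    Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName
}

Start-VM -Name $vmName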
Once the MMIO space is configured, restart the VM and verify that the GPU is recognized correctly in the guest. On a Linux VM, you can check with lspci, and with nvidia-smi once the NVIDIA driver is installed.
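From the Hyper-V host side, you can also confirm that the device made it into the VM (again just a quick check, not a full diagnostic):

# Should list the assigned GPU(s) for the VM
Get-VMAssignableDevice -VMName <VM_Name>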
I hope this helps anyone facing similar issues. If you have additional questions, feel free to ask!
Best Regards,
Samadhan