Windows Server 2012 File Server Tip: Avoid loopback configurations for Hyper-V over SMB

When deploying Hyper-V over SMB (storing the configuration files and VHD/VHDX files of your running VMs on an SMB 3.0 file share), make sure you don't use a loopback configuration. A loopback configuration means that the Hyper-V role and the File Server role are on the same computer. While you can have both roles on the same machine, you should not use a UNC path pointing back to the same server. This is not a supported configuration.
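
To make the distinction concrete, here is a minimal sketch; the host and share names (HV1, FS1, VMS) are hypothetical, and the only difference between the supported and unsupported case is where the UNC path points.

```powershell
# Run on HV1, a server that has both the Hyper-V and File Server roles installed.

# Supported: default VHD location on a *different* SMB 3.0 file server (FS1)
Set-VMHost -VirtualHardDiskPath "\\FS1\VMS"

# NOT supported (loopback): a UNC path that points back at HV1 itself,
# even though HV1 also runs the File Server role
Set-VMHost -VirtualHardDiskPath "\\HV1\VMS"
```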

The main reason for this restriction is the way permissions need to be configured for Hyper-V over SMB. You need to grant access on the file share to the computer account of the Hyper-V host. However, when you use a loopback configuration, this permission model does not work (the System account used by Hyper-V only gets translated to a computer account when you're accessing a remote file share). The end result will be an “Access Denied” error.
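
For context, here is roughly how that permission is granted on the (remote) file server, using the SMB cmdlets available since Windows Server 2012; the domain, host, and share names below are hypothetical.

```powershell
# On the file server: create the share and grant full access to the
# *computer account* of the Hyper-V host (note the trailing $).
New-SmbShare -Name "VMS" -Path "D:\Shares\VMS" -FullAccess "CONTOSO\HV1$"

# Grant the matching NTFS permission on the folder itself.
icacls "D:\Shares\VMS" /grant "CONTOSO\HV1$:(OI)(CI)F"
```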

Loopback configurations also include deploying the Hyper-V role and the File Server role in the same Failover Cluster. While this can work when the VM is running on one node and the File Server is running on another node of the same cluster, it will fail if both roles happen to land on the same node at the same time. You could in theory make this work by configuring preferred nodes for each role, effectively making sure they never land on the same cluster node, but you really should configure two separate clusters: one for the Hyper-V hosts and the other for the File Servers.
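
For completeness, that theoretical preferred-node workaround would look something like the sketch below (the cluster group and node names are hypothetical); the fact that nothing stops both groups from ending up on the same node is exactly why two separate clusters are the better answer.

```powershell
# In theory: pin each clustered role to a different preferred node.
# This does not guarantee they never share a node (e.g., after a failover),
# so the unsupported loopback case can still occur.
Set-ClusterOwnerNode -Group "Hyper-V VM Group" -Owners "NODE1"
Set-ClusterOwnerNode -Group "File Server Group" -Owners "NODE2"
```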

If you do need the Hyper-V role and the File Server role running on the same box, that is not a problem in itself. Just use a local path with a drive letter (X:\Folder\File.VHDX) instead of a UNC path (\\server\share\folder\File.VHDX). The same goes for the cluster configuration: just use the local path to the cluster disk or cluster shared volume.
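
As a sketch of this supported single-box setup (the VM names and paths are illustrative), you point Hyper-V at the local path directly:

```powershell
# Supported on a box running both roles: a local drive-letter path
New-VM -Name "VM1" -MemoryStartupBytes 1GB -VHDPath "X:\Folder\File.VHDX"

# On a cluster node, use the local Cluster Shared Volume path the same way
New-VM -Name "VM2" -MemoryStartupBytes 1GB -VHDPath "C:\ClusterStorage\Volume1\VM2.VHDX"
```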

Comments

  • Anonymous
    January 01, 2003
    So to clarify, after I make a normal cluster with Hyper-V, you're saying to use the disks presented from the iSCSI NAS, make them CSVs, and balance the VMs between the nodes using direct file paths? But if I do that, would I still be able to use the features of SMB3, like Continuous Availability? Also, how would Live Migrations of VMs work between servers? At least with my convoluted setup, since I have those VMs created specifically for the scale-out file cluster, the UNC they host as the SOFS gives the VMs a central place that lets the Hyper-V hosts do the Live Migration. I just don't know how we'd still be able to have all of the features I have now without that scale-out file cluster of VMs that lives on top of my physical hosts, presenting the SOFS to them. If you're saying that I'll still have that functionality if I just make a standard Hyper-V cluster on top of CSVs, then I'll pull my iSCSI LUNs from FSVM1 and FSVM2 and present them to my Hyper-V hosts.

  • Anonymous
    January 01, 2003
    @heuristik: You need some sort of shared storage to create a Scale-Out File Server. If I understand your example, the two VMs do not have shared storage to form the cluster, so you can't really create a cluster that way. The simplest solution would be to use shared SAS disks with Storage Spaces as the shared storage for the cluster.
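
    A minimal sketch of that suggestion, assuming shared SAS disks are visible to both nodes; the pool and disk names are hypothetical, and the exact storage subsystem name varies by setup.

    ```powershell
    # Pool the shared SAS disks that are eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "SOFSPool" `
        -StorageSubSystemFriendlyName "*Storage Spaces*" `
        -PhysicalDisks $disks

    # Carve a mirrored virtual disk out of the pool for the file server cluster
    New-VirtualDisk -StoragePoolFriendlyName "SOFSPool" -FriendlyName "VMStore" `
        -Size 300GB -ResiliencySettingName Mirror
    ```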

  • Anonymous
    January 01, 2003
    Hi Jose, I know from the title of this post and some of your last posts that you said to avoid the loopback between SMB3 and Hyper-V, and that it's not a supported configuration; however, the last paragraph of this post isn't the clearest on how to actually make it work in this unsupported configuration. Do you think you could clarify how to set up a Hyper-V cluster and a Scale-Out File Server cluster on the same two physical boxes? In a test environment, to get around that access denied issue, I'll list out what I did, but from reading this post, it seems as though it can be done a little better. I would love some guidance on how to do that. Here's my setup (from memory, so I hope I recall it all right): Let's say the physical boxes are named SERVER1 and SERVER2. SERVER1 has a VM on its local d:\VMs directory called FSVM1, and SERVER2 has a VM on its local d:\VMs directory named FSVM2. I have a Synology NAS providing iSCSI, and to set up the scale-out file cluster first, I presented a 1 GB disk to FSVM1/FSVM2 for cluster quorum/witness, then a handful of various sized disks for the cluster – FSVMCLUSTER. For simplicity's sake, let's say I have 3 x 300 GB disks: CSV1, CSV2, and CSV3. These disks are then set up to be CSVs, and I create two scale-out file cluster roles – one attached to each server, named FSSCALEOUT1 with preferred node FSVM1 and FSSCALEOUT2 with preferred node FSVM2. The SMB shares with continuous availability are named CSV1, CSV2, and CSV3 on those CSVs (C:\ClusterStorage\Volume1\Shares\CSV1, C:\ClusterStorage\Volume2\Shares\CSV2, etc.). Then I present a 1 GB disk to SERVER1/SERVER2 and create a cluster for Hyper-V, HVCLUSTER. So finally I can create VMs that I split between \\FSSCALEOUT1\CSV1 and \\FSSCALEOUT2\CSV1. I'd love to be able to just have the scale-out file cluster and the Hyper-V cluster on the same servers, without having to create those FSVM1/FSVM2 VMs and using them to host the disks rather than my physical servers directly.

  • Anonymous
    January 01, 2003
    @nmorgowicz: If you want to do it all in a single cluster, you don't need the File Server role. Just use a single cluster of Hyper-V servers with the iSCSI storage, and use the regular local path instead of the UNC path. For instance, use the C:\ClusterStorage\Volume1 path instead of the \\fsscaleout1\csv1 path.
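
    Sketching that out (the VM name and CSV path are illustrative), a single Hyper-V cluster would create VMs on the local CSV path and then cluster them:

    ```powershell
    # Create the VM directly on the local CSV path, not on a UNC path
    New-VM -Name "VM1" -MemoryStartupBytes 1GB -Path "C:\ClusterStorage\Volume1\VM1"

    # Make it a clustered role so it can fail over and live migrate between nodes
    Add-ClusterVirtualMachineRole -VMName "VM1"
    ```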

  • Anonymous
    January 01, 2003
    What if nmorgowicz's scenario was constructed without the iSCSI NAS, and instead had only direct-attached disks on the two HV hosts? That would look something like this: the HV1 host runs an FS1 VM with pass-through disks from HV1, the HV2 host runs an FS2 VM with pass-through disks from HV2, and the two FS VMs form the SOFS cluster. The result would be a failover/migration solution that does not require a separate SAN architecture. While that's certainly an attractive concept, I am guessing this is still considered loopback and is therefore unsupported?

  • Anonymous
    December 14, 2018
    I am also very interested in this scenario. Could I install two instances of Hyper-V Server and then deploy, on the same servers, two SMB3 services that would act as a CSV cluster? Both servers have access to shared direct-attached storage (which is completely redundant). For the split-brain issue, I can provide an external iSCSI or NFS share. I do not want to use an iSCSI NAS as VM storage because it has a SPOF (the NAS motherboard). For this scenario, should I install Windows Server as a VM to configure the storage space and SMB share, or could it be handled at the Hyper-V level? Sorry, I am new to Hyper-V.