TechEd 2014: File Server Networking for a Private Cloud Storage Infrastructure

Just a reminder that I will be delivering a session at TechEd 2014 in Houston, Texas. Here are the details:

 

Code: DCIM-B337
Title: File Server Networking for a Private Cloud Storage Infrastructure in Windows Server 2012 R2
Date: Tuesday, May 13, 3:15 PM - 4:30 PM
Track: Datacenter and Infrastructure Management
Topic: Windows Server
Abstract: Microsoft has shared the Private Cloud Vision that leverages the storage innovations in Windows Server 2012 and Windows Server 2012 R2. That vision is implemented as a Software-Defined Storage Architecture that leverages JBODs, Storage Spaces, Cluster Shared Volumes, Scale-Out File Servers and SMB3 file shares. In this session, we introduce those concepts and focus on a key aspect of it: the network that connects the compute nodes (Hyper-V hosts) and the storage nodes (File Servers). After stating the overall goals of network fault tolerance and performance, we describe specific features and capabilities in Windows Server that deliver on them, including NIC Teaming, Failover Cluster Networking, SMB Multichannel and SMB Direct. We close by sharing, as end-to-end examples, the two most common networking configurations for Hyper-V over SMB.
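
To make the architecture in the abstract concrete, here is a minimal PowerShell sketch of how the layers stack up on a file server cluster. It assumes poolable JBOD disks and an existing failover cluster; the names (Pool1, Space1, SOFS, VMs, DOMAIN\HyperVHosts) are illustrative, not from the session, and several steps (volume initialization, formatting, permissions hardening) are omitted for brevity:

    # 1. Storage Spaces: pool the JBOD disks and carve a mirrored virtual disk
    #    (assumes a single Storage Spaces subsystem and disks eligible for pooling)
    New-StoragePool -FriendlyName Pool1 `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
    New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Space1 `
        -ResiliencySettingName Mirror -UseMaximumSize

    # 2. Failover Clustering: bring the disk online as a Cluster Shared Volume
    #    (disk initialization and NTFS formatting omitted)
    Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume

    # 3. Scale-Out File Server: create the role and an SMB3 share for Hyper-V
    Add-ClusterScaleOutFileServerRole -Name SOFS
    New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs `
        -FullAccess "DOMAIN\HyperVHosts" -ContinuouslyAvailable $true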

 

The talk is really about networking for the Private Cloud that uses SMB3. It goes from the overall architecture to the individual features (SMB Multichannel, SMB Direct, NIC Teaming) to a few recommended end-to-end configurations. Obviously we’ll also talk about performance, and there’s a new focus on troubleshooting some common networking issues.
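
If you’d like to see where your own deployment stands on these features before the session, each of them can be inspected from PowerShell on the Hyper-V host (the SMB client side). A minimal check, assuming the in-box Windows Server 2012 R2 cmdlets:

    # All four cmdlets ship in-box with Windows Server 2012 / 2012 R2.
    Get-NetLbfoTeam                  # NIC Teaming: team state, mode, and members
    Get-NetAdapterRdma               # which NICs are RDMA-capable (SMB Direct)
    Get-SmbClientNetworkInterface    # interfaces SMB Multichannel can choose from
    Get-SmbMultichannelConnection    # live multichannel connections per file server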

 

If you want to get a taste of a few topics we’ll cover, here are a few links to get you warmed up. Consider this “recommended reading” if you’re attending the session.

 

I look forward to seeing you there. Make sure to bring your File Server networking questions.

To see the other sessions at TechEd 2014, make sure to visit https://northamerica.msteched.com/Catalog

 


Comments

  • Anonymous
    March 19, 2014
    Jose, thank you for all your work--this blog is absolutely invaluable. I wish I could go to TechEd and see your presentation!

    I started building our JBOD/Storage Spaces/SOFS/RDMA solution back in September/October, and began testing/qualifying and troubleshooting in November. I was getting some inconsistent results (as I posted on your blog previously), but what stopped the deployment of our solution was when I went to create a fixed VHDX file from a Hyper-V cluster node to an SOFS cluster node--I could never get more than ~55 MB/s (using either the GUI or DISKPART). I opened a ticket with Microsoft Support in January, and they have been troubleshooting the issue since that time.

    Last night, I decided that today I would try to reach out to you again for advice via your blog. I saw that the "Troubleshooting File Server Networking Issues" link was updated, so I clicked it. I had read that article previously, but the troubleshooting we've done since then suddenly clarified something. In #12, it says that single file copy performance on a 10 GbE link will cap out around 150 MB/s. When I read that before, I thought "I wish I was getting even 150 MB/s!" and disregarded it as unrelated to our case, since our setup wasn't even close (max ~55 MB/s). During more recent troubleshooting, Support had me perform straight file copies rather than creating fixed VHDX files, and the file copies were limited to about 190 MB/s (with continuous availability turned on), yet when we'd do the transfer through the \\SOFS_host\C$\ClusterStorage admin share, we'd get over 1 GB/s. I didn't remember #12 by that time, unfortunately. And unfortunately (double misfortune?), Support is apparently unaware of #12 as well.

    I also recently found out that dynamic VHDX files are supported in production (which I understood was not the case with dynamic VHD files in 2008/R2). If that's the case, I'm not so worried about fixed VHDX creation, and hopefully this clears up any outstanding issues (I'll need to revisit my test data). Of course, the lost time is rather frustrating, along with the delay to our projects.

    It still leaves some unanswered questions, however. For example: is it normal for fixed VHDX creation to a scale-out file server to be that slow (~55 MB/s)? I asked this question several times along the way, but no one ever told me what throughput they were seeing.
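
For anyone who wants to reproduce the comparison described in the comment above: time fixed versus dynamic VHDX creation against the share, and check from the client whether SMB Multichannel and RDMA are engaged while the transfer runs. A minimal sketch, assuming the Hyper-V PowerShell module is installed; \\SOFS\VMs and the size are placeholders (fixed VHDX creation typically zero-fills the entire file, so it is expected to take longer than dynamic creation):

    # Placeholder UNC paths; run from a Hyper-V host with the Hyper-V module.
    # Fixed VHDX creation writes out the full size, stressing the write path.
    Measure-Command { New-VHD -Path '\\SOFS\VMs\fixed.vhdx' -SizeBytes 100GB -Fixed }
    Measure-Command { New-VHD -Path '\\SOFS\VMs\dynamic.vhdx' -SizeBytes 100GB -Dynamic }

    # While a copy or create is in flight, confirm which interfaces SMB
    # Multichannel selected and which SMB dialect the connection negotiated.
    Get-SmbMultichannelConnection
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect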