SMB/CIFS Performance Over WAN Links

I often have customers who ask me to wrestle with the performance of SMB (otherwise known as CIFS) across a WAN link.  Their experience is usually that file transfers from Windows Explorer or from the command prompt fall well short of what their inter-site link should support, even though FTP (ewwww!) performs much better.

Background

There are a number of reasons that SMB might not perform well across a high-latency link, but the number one reason is that SMB is block-based whereas FTP (or HTTP...please, stop using FTP, folks.  Please?  I beg you.) is a streaming protocol.  The difference?

If I wanted to copy a 1MB file over HTTP, it would look something like this over the wire:

Client: HTTP GET /myfile.zip HTTP/1.0
Server: HTTP 200 OK, followed by 1MB of data
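
To make "streaming" concrete, here is a minimal sketch of that exchange in Python using a raw socket (the host name and path are placeholders, purely for illustration): the client sends one request, then simply reads until the server is done.

import socket

# Placeholder host and path, just to illustrate the single request.
HOST, PATH = "fileserver.example.com", "/myfile.zip"

sock = socket.create_connection((HOST, 80))
sock.sendall(f"GET {PATH} HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode())

# One request on the wire, then the server streams the entire response;
# the client never has to ask for the next block.
total = 0
while chunk := sock.recv(65536):
    total += len(chunk)
sock.close()
print(f"received {total} bytes (headers included)")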

The only things that will slow down the HTTP transfer are a TCP window that is too small, slow start, and congestion avoidance.
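
The window limit is easy to quantify: TCP can have at most one window's worth of unacknowledged data in flight per round trip, so throughput is capped at window / RTT.  A quick back-of-the-envelope calculation, using the 70ms cross-country latency from the example below:

# Throughput ceiling imposed by the TCP window: at most one window of
# unacknowledged data can be in flight per round trip.
window = 16 * 1024   # default ~16K window, in bytes
rtt = 0.070          # 70 ms round trip

print(f"{window / rtt * 8 / 1e6:.2f} Mbit/s")   # ~1.87 Mbit/s, regardless of link speed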

The same transfer over SMB would look like this:

Client: C SMB NT Create AndX myfile.zip
Server: R SMB NT Create AndX
Client: C Read AndX offset 0x0 data 0xf000
Server: R Read AndX (with 61440 bytes of data)
Client: C Read AndX offset 0xf000 data 0xf000
Server: R Read AndX (with the next 61440 bytes of data)
(Repeat this another 16 times until we get 1MB of data)

Each time we go from the client to the server and back, it takes an amount of time that is at least equal to the latency between the client and the server.  If we're on a LAN where the latency is a millisecond or two, this isn't such a big deal; however, if the client is in New York and the server is in Seattle, the latency is probably closer to 70ms and the SMB transfer takes at least a second more than the HTTP transfer.
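
Here is a rough model of that lower bound in Python.  It counts only the per-block round trips and deliberately ignores bandwidth, slow start, and server processing time:

import math

def smb_read_floor(file_size, block_size, rtt):
    # Each Read AndX block costs at least one full round trip.
    return math.ceil(file_size / block_size) * rtt

one_mb = 1024 * 1024
print(f"LAN,  1 ms RTT: {smb_read_floor(one_mb, 61440, 0.001):.3f} s")  # ~0.018 s
print(f"WAN, 70 ms RTT: {smb_read_floor(one_mb, 61440, 0.070):.3f} s")  # ~1.26 s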

This is the best-case scenario, too.  Technically, the largest block of data that SMB can transfer is 64K, since the relevant length fields are only two bytes wide.  In practice, the transfer size is bounded further by two factors:

  1. SizReqBuf -- this is the parameter that SMB uses to determine the largest buffer that both client and server will support.  The server passes this value to the client in the R Negotiate.  For Microsoft OSes, this is generally set to 4K (NT4; WinXP/W2k3 with <512MB RAM) or 16K (Win2k; WinXP/W2k3 with >=512MB RAM).  In the absence of other factors (see the next item), this is the maximum size that will be used for all data transfers.
  2. CAP_LARGE_READX/CAP_LARGE_WRITEX -- these two capabilities are also negotiated by the server in the R Negotiate.  Every Microsoft OS from Windows 2000 onward enables both.  They indicate that Read AndX & Write AndX commands may use buffers in excess of SizReqBuf; thus, we use 61440-byte buffers for most reads and writes (modeled in the sketch after this list).
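
Here is a toy model of how those two factors combine.  This is a sketch of the rules above, not an actual API:

def effective_buffers(siz_req_buf, large_readx, large_writex):
    # 0xF000 (61440) byte buffers are used for Read AndX / Write AndX
    # when the corresponding CAP_LARGE_* capability was negotiated;
    # everything else falls back to SizReqBuf.
    LARGE = 61440
    max_read = LARGE if large_readx else siz_req_buf
    max_write = LARGE if large_writex else siz_req_buf
    return max_read, max_write

# Win2k+ server with >= 512MB RAM: (61440, 61440)
print(effective_buffers(16384, large_readx=True, large_writex=True))
# NT4 server (no CAP_LARGE_WRITEX -- see the caveats below): (61440, 4096)
print(effective_buffers(4096, large_readx=True, large_writex=False))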

There are a few caveats.  First of all, if SMB signing is enabled, the redirector will still do large reads, but it will not do large writes; with signing, writes are done in SizReqBuf-sized chunks.  This can cause slower writes to domain controllers (where SMB signing is enabled by default).
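
To put a number on it, compare the round-trip floor for a 1MB write in 60K chunks versus 16K SizReqBuf-sized chunks at 70ms, using the same rough model as above:

import math

one_mb, rtt = 1024 * 1024, 0.070
for chunk in (61440, 16384):   # large writes vs. signed writes
    print(f"{chunk:>6}-byte chunks: {math.ceil(one_mb / chunk) * rtt:.2f} s")
# 61440-byte chunks: 1.26 s
# 16384-byte chunks: 4.48 s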

The second caveat is that CAP_LARGE_READX/CAP_LARGE_WRITEX affect only the Read AndX & Write AndX commands.  Anything that uses Transact or Transact2 (such as directory listings) is limited to SizReqBuf-sized buffers.

Finally, if the server is NT4, CAP_LARGE_READX is supported but CAP_LARGE_WRITEX is not; therefore, once again, writes will probably be slower.  (NT4 clients don’t use either of the CAP_LARGE_* capabilities.  They use the strange and irritating raw mode, which isn’t even worth going into here…)


Putting it all together

The first thing to do is to figure out the optimal TCP window size (the bandwidth-delay product of the link) and to set both client and server to use at least this value.  If the TCP window is still at the default 16K and the link can support a larger window, other changes won’t do much to improve performance.
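
A quick example of that calculation, using an assumed 10 Mbit/s inter-site link with the same 70ms round trip as before:

# The TCP window must cover the bandwidth-delay product (BDP) of the
# link, or the sender stalls waiting for ACKs before it can put more
# data on the wire.
bandwidth = 10_000_000 / 8   # 10 Mbit/s link, in bytes/second
rtt = 0.070                  # 70 ms round trip

print(f"BDP: {bandwidth * rtt / 1024:.0f} KB")   # ~85 KB, far above the 16K default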

If you are using NT4 servers or SMB signing, set SizReqBuf on the file servers to 61440.  Even if neither of these criteria applies, it may be worth making this change anyway, as it will speed up directory listings and anything else that uses the SMB Transact and Transact2 commands.  (On heavily loaded file servers, it is worth doing some performance monitoring to assess the impact of this change – raising SizReqBuf will increase the amount of non-paged pool used.)
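
For reference, SizReqBuf lives under the LanmanServer Parameters key in the registry.  Here is a sketch of setting it with Python's winreg module; this needs administrator rights, the Server service typically needs a restart to pick up the change, and you should verify the key path on your OS version before relying on it:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "SizReqBuf", 0, winreg.REG_DWORD, 61440)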

Also be sure to test with the latest service pack on the machines in use – a number of SMB performance issues have been fixed there.  A few more fixes may require installing a hotfix.

Next:  Retransmits and SMB performance, coming soon.