
Avoid timeout when uploading large blobs to Azure

If you're uploading (and presumably also downloading) large blobs to Azure, you might hit a timeout consistently, because CloudBlobClient has a Timeout property. It defaults to 90 seconds, which means that if Azure is your bottleneck (throttling you at, say, 60 MB/s), anything above 5400 MB will result in a timeout. In real life your connection up to Azure is more likely your bottleneck; for example, I experienced timeouts for files above approximately 200 MB when uploading from home. So make sure you increase this timeout for large files to avoid these failures.
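Raising the limit is a one-liner. Here is a minimal sketch, assuming the 1.x storage client library (Microsoft.WindowsAzure.StorageClient), where the Timeout property lives directly on CloudBlobClient; the connection string, container, and file names are placeholders, and later SDK versions moved this setting elsewhere:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class Program
{
    static void Main()
    {
        // Placeholder connection string; substitute your real storage account.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
        CloudBlobClient client = account.CreateCloudBlobClient();

        // Default is 90 seconds; raise it so a slow upload of a large
        // blob is not cut off mid-transfer.
        client.Timeout = TimeSpan.FromHours(1);

        var container = client.GetContainerReference("backups");
        CloudBlob blob = container.GetBlobReference("large-file.bin");
        blob.UploadFile(@"C:\data\large-file.bin");
    }
}
```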

There is, however, another problem with this API in my opinion. For a server it makes sense to have a timeout that prevents clients from tying up resources by uploading data very slowly, but this is a client API, so as long as I'm sending data I'm technically fine, I think. A much shorter timeout that only fires if no data can be sent during that period makes more sense to me personally. In a perfect world there would actually be two timeouts: an idle timeout and an overall timeout. But the default (and the one I would pick if I had to choose just one) would be the idle timeout. A client doesn't really want to time out while a transfer is slow as long as there is progress. But again, that's just me...
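To make the idle-versus-overall distinction concrete, here is a minimal, hypothetical sketch; none of these names exist in the Azure SDK, it just shows the two deadlines side by side, with the idle one resetting on every chunk of progress:

```csharp
using System;

// Hypothetical helper: tracks both an overall deadline (fixed at start)
// and an idle deadline (reset each time progress is reported).
class TransferWatchdog
{
    private readonly TimeSpan _idleTimeout;
    private readonly TimeSpan _overallTimeout;
    private readonly DateTime _started = DateTime.UtcNow;
    private DateTime _lastProgress = DateTime.UtcNow;

    public TransferWatchdog(TimeSpan idleTimeout, TimeSpan overallTimeout)
    {
        _idleTimeout = idleTimeout;
        _overallTimeout = overallTimeout;
    }

    // Call after each chunk is sent; only progress resets the idle clock.
    public void ReportProgress()
    {
        _lastProgress = DateTime.UtcNow;
    }

    // True if either deadline has passed.
    public bool TimedOut
    {
        get
        {
            var now = DateTime.UtcNow;
            return now - _lastProgress > _idleTimeout
                || now - _started > _overallTimeout;
        }
    }
}
```

A slow-but-steady upload keeps calling ReportProgress and never trips the idle deadline, while a stalled connection trips it quickly; the single 90-second timeout in the real API can't distinguish those two cases.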

Comments

  • Anonymous
    March 11, 2013
    You can also use the Blob Transfer Utility to download and upload all your blob files. It's a tool to handle thousands of (small/large) blob transfers in an effective way. Binaries and source code, here: http://bit.ly/blobtransfer