Why doesn't changing SendTimeout help for hosted WCF services?
In .NET 3.0, you would handle two different timeouts:
· Binding.SendTimeout
This timeout specifies how long the client waits for the transport to finish writing the request data before throwing an exception. It is a client-side setting. If the request is likely to take longer than the default (1 minute), you need to increase it.
· Binding.ReceiveTimeout
This timeout specifies how long the service waits, from the moment it begins receiving a request, until the message is fully processed. It is a server-side setting. When you send a large message to the service and the service needs a long time to process it, you need to increase this setting.
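For illustration, both timeouts are usually set on the binding in configuration. A minimal sketch (the binding name "MyBinding" is a placeholder; the timeout values are examples):

```xml
<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <!-- "MyBinding" is a placeholder name; values are 10-minute examples -->
        <binding name="MyBinding"
                 sendTimeout="00:10:00"
                 receiveTimeout="00:10:00" />
      </basicHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
```

Keep in mind that sendTimeout matters on the client side and receiveTimeout on the service side, so in practice each is set in the corresponding application's config file.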
Ideally, these two timeouts should solve most timeout problems. However, when a WCF service is hosted in IIS/ASP.NET, another setting also controls the lifetime of the request:
· HttpRuntimeSection.ExecutionTimeout
The default value is 110 seconds. When a slow service operation exceeds this limit, the request is aborted and an ASP.NET EventLog entry reports that the request has timed out. You can configure this setting through web.config as follows:
<configuration>
  <system.web>
    <httpRuntime executionTimeout="600"/>
  </system.web>
</configuration>
This sets the timeout to 600 seconds (10 minutes). From code, you can use the following API to achieve the same effect:
· HttpApplication.Server.ScriptTimeout
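Before the SP1 change described below, you could apply this from code yourself, for example in Global.asax. A sketch, assuming an ASP.NET-hosted service (the handler body is illustrative, not the actual WCF implementation):

```csharp
// Hypothetical Global.asax.cs snippet: raise the ASP.NET script timeout
// early in the pipeline so long-running operations are not aborted.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // ScriptTimeout is in seconds; 600 seconds has the same effect as
    // <httpRuntime executionTimeout="600"/> in web.config.
    Server.ScriptTimeout = 600;
}
```

Note that, like executionTimeout, this limit is only enforced when compilation debug="false"; with debugging enabled, ASP.NET effectively disables the timeout.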
If you use ASMX services, you would hit this exact problem too.
Fortunately, this was enhanced in .NET 3.0 SP1 so that it is handled internally: ScriptTimeout is set to Int32.MaxValue for WCF requests, which gives WCF full control over the lifetime of its requests.
Comments
February 26, 2009
Hi! I have been trying to change the timeouts and message sizes on my WCF service for some time now. I have tried changing different settings without any effect, and I could really use some insight. Here are some implementation details that might be helpful if you decide to look into this.

I have a WCF service hosted under IIS, but the actual implementation is in a WCF library, so the .svc file looks like this:

<%@ ServiceHost Debug="true" Service="LibraryAssembly.LibraryType" Language="C#" %>
<%@ Assembly Name="LibraryAssembly" %>

I call the service from two other applications: one is a .NET web site and the other is a classic ASP web site (using an object created with the service moniker). In both cases, changes to the sendTimeout, receiveTimeout, httpRuntime executionTimeout and readerQuotas values do not take effect. I still get the 1-minute default timeout for my service and the default message size of about 8 KB.

My feeling is that IIS creates a proxy that for some reason is not aware of my settings, but this is just a guess. I have read a lot on the web about similar problems, and it seems that people with thick clients can resolve this by altering their app.config files, but that is not possible in my case. I also looked at the files Visual Studio generated for the service web reference, and I did not see any settings related to timeout or message size.

Please steer me in the right direction. Thanks!