Request and connection throttling when self hosting with OwinHttpListener

[Disclaimer before we begin: I'm not really an expert on OWIN (henceforth 'owin') or HttpListener - I just researched this as best I could myself so I may get some stuff wrong. Question my authority!]

Self-hosting a web server using owin instead of IIS is an attractive option to me, partly because of the limitations of my own experience: I am not an IIS expert, and console apps seem like a way to get complete control over the things that matter without having to spend time learning IIS-isms.

However, while the sample code snippets show it is incredibly easy to get started hosting owin in just a few lines of code, and to have it call back into your application logic through a very simple interface, a fundamental question remains: what if I need to understand and control things deeper in the software stack I am running on? For me, that stack is currently OwinHttpListener and friends.

For instance, we recently had a discussion around the security of one such owin-hosted HTTPS endpoint for diagnostics/management, and possible DOS attacks. In our particular case we don't care much if this endpoint goes down under a DOS attack, but we do care if this one little endpoint becomes an easy way to DOS the entire process or machine. How do you know if your self-hosted web application is robust enough to withstand slowloris attacks, or plain traffic-based DOS attacks? Answering this required some research, and Google just didn't seem to have the answers readily at hand, so I spent some time digging through Reflector, MSDN, and articles on Windows/HTTP.SYS/IIS mitigations for DOS attacks. Let me summarize some of the things I think I learned.

1. There only seem to be a few basic strategies to choose from:

Filtering out or blacklisting bad traffic:
-Examples: IP-based filtering, request-header-based filtering. The idea here is that you put a filter in your request pipeline that detects attacking requests/packets by what they look like and discards them. Possibly you do this by manually configuring filtering in reaction to an identified DOS attack, in which case you need logs that let you identify what the bad traffic looks like. Possibly you implement some kind of automatic filtering system, which might really be more of a throttling system...

Throttling traffic:
-You can throttle the server overall and attempt to limit total incoming traffic to what you know your server can handle. The point of this is to stop your application's performance degrading because it is trying to handle too many requests at once...
-At the application level, if you're in an authenticated app, you can throttle requests per user once you've authenticated their connection...
-You can also throttle clients at lower levels, based on what the incoming requests look like, and e.g. only accept a few concurrent TCP sessions per client IP address (see the sketch just after this list). You have to be careful with such IP-based policies though: because of NAT, a single IP address can sometimes represent thousands of legitimate users...

Out-scaling traffic:
-Throw so many servers and so much bandwidth into action that you can actually handle all the load coming at you. This can get expensive, so someone is going to have to weigh the cost of doing it against the cost of not doing it and just being DOSed...
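
To make the per-IP idea concrete, here is a minimal sketch (my own illustration, not something from the owin or Katana docs) of raw OWIN middleware that caps concurrent in-flight requests per remote IP address. It assumes the Katana-style "server.RemoteIpAddress" environment key is present, and the limit of 10 is an arbitrary example value:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class PerIpThrottlingMiddleware
{
    private const int MaxConcurrentRequestsPerIp = 10; // arbitrary example limit
    private readonly AppFunc _next;
    private readonly ConcurrentDictionary<string, int> _active = new ConcurrentDictionary<string, int>();

    public PerIpThrottlingMiddleware(AppFunc next)
    {
        _next = next;
    }

    public async Task Invoke(IDictionary<string, object> environment)
    {
        object value;
        environment.TryGetValue("server.RemoteIpAddress", out value);
        var ip = value as string ?? "unknown";

        // Atomically bump the in-flight count for this IP.
        var current = _active.AddOrUpdate(ip, 1, (key, n) => n + 1);
        try
        {
            if (current > MaxConcurrentRequestsPerIp)
            {
                environment["owin.ResponseStatusCode"] = 429; // Too Many Requests
                return;
            }

            await _next(environment);
        }
        finally
        {
            // Decrement on the way out (entries are left at zero for simplicity).
            _active.AddOrUpdate(ip, 0, (key, n) => n - 1);
        }
    }
}

You would register something like this early in the pipeline, e.g. with app.Use(typeof(PerIpThrottlingMiddleware)) in your Startup class, and remember the NAT caveat above before picking an aggressive limit.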

2. In general, your best defense against DOS attacks is a multi-layered one.

At the application level, you can easily write a request handler that throttles the number of simultaneous requests your application instance will try to serve. For instance see answers to:

https://stackoverflow.com/questions/33969/best-way-to-implement-request-throttling-in-asp-net-mvc
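
In the same spirit as those answers, here is a minimal sketch (my own, not lifted from the linked question) of capping the total number of simultaneous in-flight requests with a SemaphoreSlim in raw OWIN middleware; the limit of 100 and the 503 response are arbitrary choices:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class ConcurrencyLimitMiddleware
{
    // Arbitrary example cap on simultaneous in-flight requests.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(100);
    private readonly AppFunc _next;

    public ConcurrencyLimitMiddleware(AppFunc next)
    {
        _next = next;
    }

    public async Task Invoke(IDictionary<string, object> environment)
    {
        // Reject immediately rather than queueing when we are saturated.
        if (!Gate.Wait(0))
        {
            environment["owin.ResponseStatusCode"] = 503; // Service Unavailable
            return;
        }

        try
        {
            await _next(environment);
        }
        finally
        {
            Gate.Release();
        }
    }
}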

But what happens when too many requests pile up at the next layer below your application, or at the next piece of server infrastructure in front of it (think load balancer), waiting to get into your application-level throttling mechanism, because your throttling code, even though fast, just isn't draining them fast enough?

BTW, IIS apparently isn't vulnerable to e.g. slowloris, at least experimentally. So what about HttpListener?

3. HttpListener is built upon HTTP.SYS, so there are already some throttling mechanisms in place for you to use, with reasonable defaults. And even if you are not in IIS but are leveraging HTTP.SYS via the Http Server API, you should be able to exercise control too.

Here are some of the properties you can theoretically set, once you find the right API:

EntityBody - The time, in seconds, allowed for the request entity body to arrive.
DrainEntityBody - The time, in seconds, allowed for the HTTP Server API to drain the entity body on a Keep-Alive connection.
RequestQueue - The time, in seconds, allowed for the request to remain in the request queue before the application picks it up.
IdleConnection - The time, in seconds, allowed for an idle connection.
HeaderWait - The time, in seconds, allowed for the HTTP Server API to parse the request header.
MinSendRate - The minimum send rate, in bytes-per-second, for the response. The default response send rate is 150 bytes-per-second.
MaxConnections (per url group) - The number of connections allowed. Setting this value to HTTP_LIMIT_INFINITE allows an unlimited number of connections.

HttpServerQueueLengthProperty (per request queue) - Modifies or sets the limit on the number of outstanding requests in the request queue.

Note the documented defaults are:

Timer           | HTTP Server API Default | HTTP Server API Wide Configuration | Application Specific Configuration
EntityBody      | 2 Minutes               | No                                 | Yes
DrainEntityBody | 2 Minutes               | No                                 | Yes
RequestQueue    | 2 Minutes               | No                                 | Yes
IdleConnection  | 2 Minutes               | Yes                                | Limited
HeaderWait      | 2 Minutes               | Yes                                | Limited
MinSendRate     | 150 bytes/second        | No                                 | Yes

 

HttpServerQueueLengthProperty (ULONG) - default 1000

Now luckily, if you are using HttpListener you don't have to go read the unmanaged docs and figure out how to P/Invoke everything, because HttpListener has a TimeoutManager property that lets you set these timeouts and the minimum send rate via the HttpListenerTimeoutManager class, although they are not as thoroughly documented in the .NET API. And the even better news, as I implied above, is that simply because these default limits exist, your application is already getting some sensible filtering and throttling out of the box.
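
For example, here is a minimal sketch of what that looks like. The prefix and the specific values are arbitrary illustrations rather than recommendations, and per the table above IdleConnection and HeaderWait have only limited application-specific support, so setting them here may not take full effect:

using System;
using System.Net;

class Program
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");

        // These map onto the HTTP Server API timeouts described above.
        var timeouts = listener.TimeoutManager;
        timeouts.EntityBody = TimeSpan.FromSeconds(30);      // request body must arrive within 30s
        timeouts.DrainEntityBody = TimeSpan.FromSeconds(30); // draining the body on keep-alive connections
        timeouts.RequestQueue = TimeSpan.FromSeconds(30);    // time allowed sitting in the request queue
        timeouts.IdleConnection = TimeSpan.FromSeconds(30);  // idle connection timeout
        timeouts.HeaderWait = TimeSpan.FromSeconds(30);      // time allowed to parse the request headers
        timeouts.MinSendBytesPerSecond = 150;                // minimum response send rate

        listener.Start();
        // ... accept and handle requests as usual ...
    }
}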

4. OwinHttpListener comes with some additional checks and balances of its own that you can use for throttling requests at its entry gate (instead of your application's). The way to do this is:

owinHttpListener.SetRequestProcessingLimits(int maxAccepts, int maxRequests);

Also, for convenience, OwinHttpListener lets you call SetRequestQueueLimit(long length), which modifies the aforementioned HttpServerQueueLengthProperty of the Http Server API, a knob HttpListener itself for some reason does not expose.
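
Putting the pieces together, here is a sketch of what wiring this up might look like with Katana self-hosting. As far as I can tell, the HttpListener host stashes the OwinHttpListener (and the raw HttpListener) in the startup properties under their full type names; the port and the limit values below are arbitrary examples:

using System;
using Microsoft.Owin.Host.HttpListener;
using Microsoft.Owin.Hosting;
using Owin;

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Grab the OwinHttpListener that the Katana host created for us.
        var owinListener = (OwinHttpListener)app.Properties["Microsoft.Owin.Host.HttpListener.OwinHttpListener"];

        // Throttle at OwinHttpListener's own entry gate...
        owinListener.SetRequestProcessingLimits(maxAccepts: 5, maxRequests: 500);

        // ...and cap the HTTP.SYS request queue length underneath it.
        owinListener.SetRequestQueueLimit(1000);

        // The raw HttpListener is also reachable here if you want the TimeoutManager settings from point 3.
        var rawListener = (System.Net.HttpListener)app.Properties["System.Net.HttpListener"];
        rawListener.TimeoutManager.MinSendBytesPerSecond = 150;

        app.Run(context => context.Response.WriteAsync("Hello"));
    }
}

class Program
{
    static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("Listening on http://localhost:8080 - press Enter to quit.");
            Console.ReadLine();
        }
    }
}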