Re: A "different kind of rate limiting"

From: Will Buckner <will@chegg.com>
Date: Mon, 06 Apr 2009 14:44:50 -0700
Thanks... I'm currently doing that, but if a request completes in under a second, another one is sent to that server immediately. I never want to send more than one request per second to any given server. I'm fine with modifying the code if someone can point me to the relevant parts, as I'm not familiar with the codebase. The basic idea is to never dequeue a request to an available server if that server has received another connection within the last X milliseconds.
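(To make the desired dequeue behavior concrete, here is a small illustrative sketch in Python, not HAProxy code: requests wait in a queue, and a server is only eligible to receive one if its last dispatch was at least MIN_INTERVAL ago. All names here are made up for illustration.)

```python
import time
from collections import deque

MIN_INTERVAL = 1.0  # the "X milliseconds" between requests to the same server


class Dispatcher:
    """Sketch of per-server cooldown dispatch: queued requests are only
    handed to a server whose last dispatch was >= MIN_INTERVAL ago."""

    def __init__(self, servers):
        self.queue = deque()
        # Track when each server last received a request (0.0 = never).
        self.last_sent = {s: 0.0 for s in servers}

    def enqueue(self, request):
        self.queue.append(request)

    def dispatch(self, now=None):
        """Return (server, request) pairs that may be sent right now;
        everything else stays queued."""
        now = time.monotonic() if now is None else now
        sent = []
        for server, last in self.last_sent.items():
            if not self.queue:
                break
            if now - last >= MIN_INTERVAL:
                self.last_sent[server] = now
                sent.append((server, self.queue.popleft()))
        return sent
```

With 50 servers and a 1-second interval, this naturally caps throughput at 50 requests per second while holding (rather than rejecting) the overflow.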

Will Buckner
Technical Lead
Chegg.com - #1 in Textbook Rentals
c/ 612.963.5750
e/ will@chegg.com



Karl Pietri wrote:
Have you given any thought to using maxconn 1 on the backend server specifications? It may or may not help depending on your "why", but I thought I would mention it.
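(For illustration, Karl's suggestion would look something like the fragment below; the backend name and addresses are made up.)

```
# Sketch only: with maxconn 1, HAProxy allows at most one in-flight
# request per server and holds further requests in the queue instead
# of rejecting them. Note this limits concurrency, not requests/sec:
# a fast server can still serve more than one request per second.
backend app_servers
    balance roundrobin
    server srv1 10.0.0.1:80 maxconn 1
    server srv2 10.0.0.2:80 maxconn 1
    # ... one line per backend server
```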

-Karl

On Sat, Apr 4, 2009 at 11:18 PM, Will Buckner <will@chegg.com> wrote:
Hey guys,

I'm trying to find a solution to a problem I'm having. This might be a unique use case, but the "why" is a bit complicated, so I'll leave that out of the picture for now.

I would like to make a maximum of 50 requests per second to my backend (or, optionally, one request per second to each of the 50 backend servers). This can't be accomplished with normal session rate limiting because of a catch: I don't want HAProxy to reject the request. Is there any way to have HAProxy accept and queue the requests, but throttle the backend requests to 50/sec, or 1/server/sec? The goal is to make efficient use of 50 requests per second. If not, can anyone think of any creative ways to accomplish this? Maybe via PF/IPFW? Any help would be appreciated :)

Thanks,
Will


Received on 2009/04/06 23:44

This archive was generated by hypermail 2.2.0 : 2009/04/07 00:00 CEST