Re: [SOLVED] Rails app with maxconn 1 receiving more than 1 request at a time

From: Willy Tarreau <>
Date: Sun, 14 Sep 2008 23:57:01 +0200

On Sun, Sep 14, 2008 at 11:38:08PM +0200, Alexander Staubo wrote:
> On Sun, Sep 14, 2008 at 7:05 PM, Willy Tarreau <> wrote:
> > Do you have the ability to tell your mongrels that you are about to kill
> > them, or alternatively to kill them softly ? If so, then you could write
> > a health check application which will return 404 when it wants to leave,
> > then exit a few seconds afterwards (the time for haproxy's health checks
> > to detect it is in maintenance mode).
> On a normal SIGTERM, Mongrel will close its listening socket and then
> wait for all current requests to finish processing. In other words,
> when combined with HAProxy retries, users should never notice anything
> when a Mongrel is killed gracefully.

OK, I did not know that.

> If we did it your way, then Mongrel would have to keep responding to
> requests when told to shut down. First of all, what is it supposed to
> do with the requests that are *not* health checks? Return 503, thus
> triggering redispatch? Sounds a bit wasteful.

No, not at all. It would simply continue to respond normally.
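To make the idea concrete, here is a minimal sketch of such a drain-aware server (all names, the signal choice and the health-check path are made up for illustration): normal requests keep being served, but the health-check URL starts returning 404 once a shutdown has been requested, so haproxy stops sending new traffic before the process exits.

```ruby
# Hypothetical drain-aware responder: once an operator signals a
# graceful stop, only the health check starts failing; everything
# else is served normally until the process finally exits.
$draining = false
Signal.trap('USR2') { $draining = true }  # operator asks for a graceful stop

def respond_to(path)
  if path == '/health'
    $draining ? '404 Not Found' : '200 OK'  # what haproxy's check sees
  else
    '200 OK'                                # regular requests unaffected
  end
end
```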

> Secondly, how long should it wait before terminating? We have set the
> health check interval at 60s, so it would have to wait at least that.
> 60 seconds is a bit long time to wait just to restart a Mongrel
> server.

Maybe, I don't know. It depends on the number of servers and the interval between the restarts. The people I know running other types of servers check them every second, so waiting 5 seconds for a server not to receive any more traffic is not long at all. But I agree that several minutes is long.

You can make use of the "fastinter" parameter to speed up health-checks when a server is seen as changing its state. BTW, since those people's servers are usually set up with persistence, they still have to wait a few minutes for the clients to leave.
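For example, something like this in the haproxy configuration (the backend name, address and timing values are only illustrative):

```
backend mongrels
    # Check every 60s normally, but every 2s while the server is
    # transitioning between UP and DOWN, so a server returning 404
    # on its health check is taken out of rotation much sooner.
    server app1 127.0.0.1:8000 check inter 60s fastinter 2s fall 2 rise 2
```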

> Thirdly, you can't speed that up by having Mongrel shut down
> immediately after it has received a health check and returned the 404.

In fact, it makes me realize that I should add a header in haproxy's health-checks, indicating how haproxy sees the server. For such usages, it would help the server take the right decision, especially when there is no persistence.

> We have a number of redundant boxes in our cluster, each running
> HAProxy, each running health checks on all the Mongrel servers. So a
> time-based delay is the only option.

Anyway, a time-based delay is the only option. The delay has to cover the time needed to ensure the service is seen as down. The real problem, it seems, is the time it would take because of the large health-check interval.
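A rough upper bound on that delay can be computed from the check parameters (this ignores check timeouts and is only a back-of-the-envelope estimate): the first failing check can be up to one full interval away, and once the server is seen as transitioning, the remaining checks run at the "fastinter" rate.

```ruby
# Rough worst-case time (seconds) before haproxy marks a server DOWN:
# up to `inter` until the first failing check, then `fall - 1` more
# failing checks spaced `fastinter` apart (defaults to `inter` when
# fastinter is not set).
def worst_case_down_seconds(inter, fall, fastinter = inter)
  inter + (fall - 1) * fastinter
end
```

With a 60s interval and fall 3, that is up to 180 seconds; setting fastinter to 2s brings it down to about 64 seconds.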

> I don't know, it looks iffy.
> > Also, haproxy does not send an alert
> > message when a server disappears after having been in maintenance mode.
> That's really the main reason for doing something like this, I think.
> Having HAProxy know when a backend stops *intentionally* would
> eliminate controlled restarts from showing up as errors in logs and on
> the status page.


Willy

Received on 2008/09/14 23:57

This archive was generated by hypermail 2.2.0 : 2008/09/15 00:00 CEST