Re: [SOLVED] Rails app with maxconn 1 receiving more than 1 request at a time

From: Rupert Fiasco <rufiasco#gmail.com>
Date: Tue, 16 Sep 2008 11:21:58 -0700


Thanks for the info.

Yes, I realize that using a TCP-only health check is incorrect and does not detect a wedged mongrel (which is what we are experiencing - don't know why, but that's the case). So I switched to using an HTTP health check. At least this way haproxy can pull that mongrel out of the cluster when it detects it as down.
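For anyone else doing the same, the change is basically just adding an httpchk line to the listen block. The names and the check URL below are placeholders, not our exact config:

    listen mongrels 0.0.0.0:8000
        option httpchk GET /up        # any cheap URL the app can answer
        server app1 127.0.0.1:8001 check inter 2000 rise 2 fall 3 maxconn 1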

Previously, a wedged mongrel would still accept a socket connection but fail on an HTTP request, so we were still at square one.

Thanks
-Rupert

On Mon, Sep 15, 2008 at 10:09 PM, Willy Tarreau <w#1wt.eu> wrote:
> Hi Rupert, Alexander,
>
> On Mon, Sep 15, 2008 at 04:43:27PM -0700, Rupert Fiasco wrote:
>> > Do you see the problem on the HAProxy status page, or in the Mongrel
>> > "ps" status? It's my impression that HAProxy health checks ignore the
>> > maxconn setting. So we frequently see this "ps" output:
>>
>> Precisely, we do not see it on the stats page
>
> OK, this is very important. The "max" value you observed during the bug
> is what would have told me it *was* a bug, because that max is reliable:
> it is computed during the connect() call. The fact that it does not grow
> above the limit indicates to me that there is really never more than one
> active connection at a time.
>
>> but via that mongrel
>> plugin (the second number in the ps output). So it makes me think that
>> haproxy thinks it's only sending 1 request to that mongrel, yet the
>> mongrel is getting more.
>>
>> > It's my impression that HAProxy health checks ignore the
>> > maxconn setting.
>
> Yes, currently haproxy cannot queue the health checks. It will be possible
> after the architecture rework, though it will be necessary to always put
> them at the beginning of the queue. Another annoying case is for people who
> have multiple haproxy boxes, because your mongrel can only process one
> box's health check at a time. This could result in the backup boxes seeing
> more check failures than the active one.
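> In the meantime, spacing the checks out at least reduces how often they
> collide with real traffic or with another box's probes. A generic sketch,
> not taken from your config:
>
>     server app1 127.0.0.1:8001 check inter 5000 rise 2 fall 3
>     # inter 5000: probe every 5s; rise/fall: 2 successes to mark the
>     # server up again, 3 failures to mark it down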
>
>> Actually we are opting to use just a TCP health check (versus
>> sending a full HTTP HEAD/OPTIONS request to the backend).
>>
>> Or at least, please correct me if I am wrong. With this config:
>>
>> http://brockwine.com/haproxy.txt
>>
>> and in the absence of an "option httpchk GET /foo" it *will* use a TCP
>> check vs an HTTP check, right?
>
> Yes, you're correct.
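> To make the difference concrete (a generic example, not your file):
>
>     # TCP check: haproxy only connects and disconnects; mongrel never
>     # sees an HTTP request for the probe
>     server app1 127.0.0.1:8001 check
>
>     # HTTP check: every probe is a real request the backend must answer
>     option httpchk GET /up
>     server app1 127.0.0.1:8001 check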
>
>> If this is the case, then that HTTP request should never hit Mongrel
>> and the ps output should indeed represent full HTTP requests and *not*
>> haproxy backend health checks.
>>
>> Is this correct?
>
> Yes, that's exactly it. However, if you are counting the concurrent
> requests on the mongrel side, there is another situation which may
> report more than one request at a time: when the previous one timed
> out. In this case, haproxy aborts, closes the connection and reports
> a 504 to the client. But mongrel has no way to know this (TCP gives
> it no notification of the close until it tries to read from or write
> to the socket), and will still maintain this connection until it
> wants to respond. By that time, another connection can be sent
> (because there is no longer any connection on haproxy's side), hence
> resulting in more than one at a time observed on the mongrel side.
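> The timeout in question is whatever your server timeout is set to; in
> a 1.3-style config that is the srvtimeout line (the value below is
> only an example):
>
>     srvtimeout 30000   # after 30s with no response, haproxy returns a
>                        # 504 to the client and closes its side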
>
> Also, I'm wondering: would there not be any solution to fork several
> mongrel processes on one port upon startup? This limit of exactly 1
> connection is really annoying, and if the server could call fork()
> 3 or 4 times instead of only once (akin to haproxy's nbproc), it would
> provide huge performance and reliability boosts. Just imagining that
> you cannot even telnet to the port to send a request during traffic
> would really frustrate me :-/
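> The pre-fork pattern I mean is the classic one: bind and listen once,
> then fork, and let every process accept() on the shared socket so the
> kernel hands each connection to exactly one of them. A rough, untested
> sketch in C (the port is arbitrary, and this is obviously not mongrel
> code):
>
>     #include <stdio.h>
>     #include <string.h>
>     #include <unistd.h>
>     #include <netinet/in.h>
>     #include <sys/socket.h>
>
>     int main(void)
>     {
>         struct sockaddr_in addr;
>         int fd = socket(AF_INET, SOCK_STREAM, 0);
>         int one = 1, i;
>
>         memset(&addr, 0, sizeof(addr));
>         addr.sin_family = AF_INET;
>         addr.sin_addr.s_addr = htonl(INADDR_ANY);
>         addr.sin_port = htons(8001);
>         setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
>         if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
>             listen(fd, 128) < 0) {
>             perror("bind/listen");
>             return 1;
>         }
>
>         /* fork 3 extra workers; the parent plus the children all
>          * accept() on the same listening socket */
>         for (i = 0; i < 3; i++)
>             if (fork() == 0)
>                 break;
>
>         for (;;) {
>             int c = accept(fd, NULL, NULL);
>             if (c < 0)
>                 continue;
>             /* ... read the request and respond here ... */
>             close(c);
>         }
>     }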
>
> Regards,
> Willy
>
>
Received on 2008/09/16 20:21
