Re: Haproxy server timeouts?

From: XANi <xani666#gmail.com>
Date: Fri, 04 Dec 2009 20:50:22 +0100

On Fri, 2009-12-04 at 14:30 -0500, Naveen Ayyagari wrote:
> We are running mod_php on the apache servers. And we have our
> connection limit set to what we consider fairly low in haproxy.. The
> problem i am describing is more an issue with the number of processes
> executing on the backend machine. I guess we had assumed if we set
> maxconn to a number that no more than that many connections would
> ever be served by the backend server at any given point. However, we
> see that if we have a process that takes a while to execute on the
> backend, that haproxys 'timeout server' drops the connection and
> serves up the next one in the haproxy queue, but the original request
> is still processing on the backend because apache did not kill it and
> wont kill it..
>
>
> So what ends up happening is the server gets overloaded with
> additional new connections, because it is busy processing requests
> that haproxy has already decided to stop listening for.
>
>
> I would like to see apache just stop processing when haproxy drops the
> connection when it hits the 'timeout server' value, such that unneeded
> processing doesn't continue on the backend.
>

use "reply all" please ;p
Do you have a separate backend for static content? mod_php is kinda bad at serving both static and dynamic content from the same machine, because even when you request a 10-byte gif, apache uses a big, heavy process with mod_php loaded. mod_fcgid (or mod_fastcgi) + php-cgi is usually much better, and it lets you use mpm_worker, which is faster and handles more connections too.
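A rough sketch of that static/dynamic split in haproxy; all backend names, addresses, and the extension list are made up for illustration:

```
# send requests for static file extensions to a lightweight backend,
# keep everything else on the mod_php/FastCGI servers
frontend www
    bind :80
    acl is_static path_end .gif .jpg .png .css .js
    use_backend static if is_static
    default_backend dynamic

backend static
    # static server can take many concurrent connections cheaply
    server st1 192.168.0.10:80 maxconn 200

backend dynamic
    # keep this close to the number of PHP processes (see below)
    server app1 192.168.0.20:80 maxconn 18
```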

What I have on my setup is:
-the server that serves dynamic content runs 2x the number of cores in PHP processes through FastCGI, and haproxy has a per-server connection limit a bit higher than the number of PHP processes (say php_processes * 1.1), so the server doesn't have to wait for a new request
-the server that serves static content has a much higher connection limit (like 100 or 200), because it usually has the content in cache anyway

so basically what I do is:
-do not run too many PHP processes; 2x the number of cores is ok, because if you run too many, context switching will eat your CPU, and RAM is better used for caching, not another 20 PHP processes ;]
-limit the number of connections to the "dynamic" servers so there aren't 40 requests waiting to be handled but 2-10 max, with the rest queued on haproxy (another plus side is that a fully loaded server doesn't "hog" connections but lets haproxy requeue them)
-separate PHP from apache (or better, just use lighttpd/nginx)

Regards
Mariusz

Received on 2009/12/04 20:50

This archive was generated by hypermail 2.2.0 : 2009/12/04 21:00 CET