On Thu, Apr 17, 2008 at 11:58:08AM +0200, Sebastian Vieira wrote:
> > It means that your servers sometimes do not even accept connections (maybe
> > they were down, maybe they were totally saturated), and sometimes they did
> > not
> > respond within the timeout. The frontend request errors indicate invalid
> > requests from clients. Such a huge number may indicate that your site is
> > regularly attacked.
> That could be, but I don't think so. I have applied your suggested
> configuration settings and although I don't see any errors in the webservers
> field, the errors for the Frontend field (Req) keep increasing. They're now
> over 800 with 45 minutes uptime of haproxy.
Also, you can check your logs, looking for lines with "PR" flags, indicating that the request was blocked by the proxy. That way, you'll find more information on their nature.
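For instance, you could count such lines with grep. This is only a sketch: the log line below is fabricated to illustrate the format, and the real path and field layout depend on your syslog setup:

```shell
# Fabricated sample log line (illustration only). The 4-character
# termination state appears after the cookie fields; it starts with
# "PR" when the request was blocked by the proxy.
cat > sample.log <<'EOF'
1.2.3.4:5678 [17/Apr/2008:12:00:01.123] http-in webs/webs001 9/0/7/147/163 403 512 - - PR-- 10/10/5/2/0 0/0 "GET / HTTP/1.1"
EOF

# Count requests blocked by the proxy:
grep -c ' PR-- ' sample.log   # -> 1
```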
> > Your config is simple and correct. You may want to increase both
> > clitimeout and srvtimeout so that haproxy leaves enough time to your
> > application to respond. Ideally, both values should be equal. If your
> > app sometimes needs one minute, put it slightly higher (80000 = 80s).
> Done that, so that should (imo) get rid of most 504 HTTP errors, right?
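For reference, the timeout settings discussed above might look like this in the defaults section (a sketch only; the values simply match the 80s suggestion and should be adapted to your application):

```
defaults
    mode http
    contimeout  5000    # 5s to establish the TCP connection to a server
    clitimeout  80000   # 80s of client inactivity allowed
    srvtimeout  80000   # 80s for the server to start responding
```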
> > If your servers are refusing connections because they are saturated,
> > you may set a "maxconn" on the server lines. This value should be
> > slightly lower than the "maxclients" you have configured on apache.
> > Thus haproxy will act as a buffer between the clients and apache,
> > preventing them from saturating. For instance :
> > server webs001 x.x.x.11:80 cookie A maxconn 145 check
> Done that too. Although I noticed that MaxClients had been increased to
> 1500, so I've set the haproxy maxconn to 1450.
It's quite high for apache. You should also ensure that it has a high enough MinSpareServers, because apache kills unused processes after a while and is slow to create new ones. So if you think you really need such a large MaxClients (check haproxy's stats for this), at least ensure that you will not have to wait *minutes* for the processes to fork.
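As a sketch, the relevant prefork directives might look like this (the numbers are illustrative assumptions, not recommendations; tune them against your actual load):

```
<IfModule prefork.c>
    StartServers         64
    MinSpareServers      64
    MaxSpareServers     128
    ServerLimit        1500
    MaxClients         1500
</IfModule>
```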
> Well, let's see now what the customer says. I can't test it since the
> problems don't continuously occur, so I'll have to rely on what visitors
OK, you're in the most uncomfortable situation then.
> I've inspected the website with YSlow and amongst other things we've noticed
> that with one visit to the frontpage of the site it makes approximately 60
> HTTP requests to fully load it. Combine that with the fact that all images
> are stored in the database servers (ouch) and I wonder whether HAProxy is
> such a good solution for this particular situation.
I don't see your point. If a browser has to send 60 requests for a page, that depends only on the application; the LB cannot lower that number. Maybe you should install a reverse cache to relieve your servers (has anyone tried Varnish?).
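As a very rough sketch, pointing a Varnish cache at one of your web servers could look like this (the VCL syntax below is for Varnish 2.x and differs between versions; the address is a placeholder):

```
backend default {
    .host = "x.x.x.11";
    .port = "80";
}
```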
> Please take no offense at this :)
Rest assured that I take no offense at it. I even like criticism; it is what makes the product evolve. I also sometimes recommend that people use other products such as LVS or Pound when they are more appropriate.
> Maybe LVS is a better solution here?
I don't see why. Right now you have very rich logs to find out what causes your bad requests, you can monitor the timers in the logs to see what happens on the server side when they do not accept connections, and you can tweak maxconn in order to save them in case of trouble. If you replace it with a log-less LB before having determined what the problem is, well... it's like killing the postman because he brings you bad news!
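For instance, you could extract the total session time (Tt, the last of the Tq/Tw/Tc/Tr/Tt timers, in milliseconds) with awk. The log line below is fabricated for illustration, and the field position is an assumption about the default HTTP log format; adjust it to what your syslog actually writes:

```shell
# Fabricated sample line; here the timers are the 5th space-separated
# field, as Tq/Tw/Tc/Tr/Tt in milliseconds.
line='1.2.3.4:5678 [17/Apr/2008:12:00:01.123] http-in webs/webs001 9/0/7/147/163 200 512 - - ---- 10/10/5/2/0 0/0 "GET / HTTP/1.1"'

# Print the total session time (Tt):
echo "$line" | awk '{ split($5, t, "/"); print t[5] }'   # -> 163
```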
> Thanks for the help so far!
You're welcome :-)
Willy

Received on 2008/04/17 21:59
This archive was generated by hypermail 2.2.0 : 2008/04/17 22:15 CEST