Re: Maintenance mode

From: Aleksandar Lazic <al-haproxy#none.at>
Date: Thu, 11 Sep 2008 00:11:20 +0200


Hi Alexander,

On Wed 10.09.2008 23:03, Alexander Staubo wrote:
>Guys, I would like to bring this subject up again. I have not been able
>to work out a satisfactory solution to the problem.
>
>In a nutshell, when we -- my company -- perform a site update, we want
>to display a static web page showing the current maintenance state. A
>site update usually involves taking down all Rails processes, checking
>out new code, and bringing Rails up again. So while this is going on,
>HAProxy will attempt, and fail, to connect to its backend servers.

Can you please add a little more detail about your setup? You started with an nginx example and I'm not sure whether you still use nginx or not:

client -> ? -> haproxy -> mongrel -> rails

>There are a few possible solutions, all of them unsatisfactory:
>
>* Using "errorloc" or "errorfile" to show a static page on 503 errors.
>That is what we are using in our current setup. This is unsatisfactory
>because the 503 is an error that occurs in non-maintenance situations;
>telling the user that the site is under maintenance when there's an
>actual error condition just confuses everyone (on one particularly bad
>day, people thought we were updating the site all the time and told us
>to please stop doing it).

You mean all of your mongrel servers were down? That sounds like a very bad day, you have my sympathy ;-(

Since I don't know your config, I'll just try a wild guess ;-)

How about using two ACLs, one with nbsrv and one without?

e.g.:

###
# serve local files instead of the built-in error pages
errorfile 502 /maintenance.http
errorfile 503 /error.http

# response status is 503
acl error_state status 503
# no server left in either backend
acl site_dead nbsrv(dynamic) lt 1
acl site_dead nbsrv(static) lt 1

block if error_state site_dead
###
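
Or, as a variation on the same idea with things that exist today: route to a dedicated backend once no real server is left. Untested, and the backend names and addresses are only examples:

###
frontend http
    bind :80
    acl site_dead nbsrv(dynamic) lt 1
    acl site_dead nbsrv(static) lt 1
    use_backend maintenance if site_dead
    default_backend dynamic

backend dynamic
    server mongrel1 127.0.0.1:8000 check
    server mongrel2 127.0.0.1:8001 check

backend static
    server nginx1 127.0.0.1:8080 check

backend maintenance
    # tiny server whose only job is to serve the static
    # maintenance page
    server maint 127.0.0.1:9000
###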

>* Another suggestion has been to use backup servers for this purpose.
>This is unsatisfactory for the same reason that "error*" is.
>
>* An iptables-based solution, suggested earlier, is too roundabout and
>non-intuitive.
>
>* Loading a separate configuration file is not appropriate because a
>box may be running multiple sites, but we want to be able to put a
>single site in maintenance mode without disturbing others.
>
>* Changing the current config and sending a signal to reload HAProxy is
>too intrusive and inconvenient. It can be done programmatically, but
>involves maintaining some sort of master configuration file that you
>filter through a preprocessor into the real config file. It's icky.
>
>Now, I have a suggestion for a proper solution, and if Willy likes it I
>will try my hand at coughing up a patch. The idea is to support
>user-defined variables that are settable at runtime. In the
>configuration, these variables would be usable as ACLs:
>
> frontend http
> ...
> acl variable maintenance_mode true
> use_backend maintenance if maintenance_mode
>
>To control a variable you would invoke the haproxy binary:
>
> $ haproxy -S maintenance_mode=true
>
>or
>
> $ haproxy -S maintenance_mode=false
>
>Using shared memory for these variables is probably the easiest,
>fastest and secure. It would be mapped into HAProxy's local address
>space, so a lookup is essentially just a local memory read, cheap
>enough to check on every request. Similarly, read and write access to
>the variables could then be limited to the HAProxy user, if I remember
>my POSIX shared memory semantics correctly.
>
>Having such variables at hand would also let you do other tricks not
>specifically related to maintenance. For example, you can have external
>monitoring scripts that modify the behaviour of HAProxy based on some
>sort of load parameter.
>
>Thoughts?

Sounds interesting, but it is also quite a deep intervention into haproxy.

How about adding the possibility to SET some variables from ACLs, depending on some state?

e.g.:

acl site_dead nbsrv(dynamic) lt 1
acl site_dead nbsrv(static) lt 1

set maintenance_mode if site_dead

use_backend maintenance if maintenance_mode

With this you wouldn't need to make any manual calls on the command line.
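
Putting your proposal and mine together, none of this syntax exists today, but a config using such a hypothetical "set" directive could look like:

###
frontend http
    acl site_dead nbsrv(dynamic) lt 1
    acl site_dead nbsrv(static) lt 1

    # hypothetical: flip the variable automatically...
    set maintenance_mode if site_dead
    # ...or by hand from outside, per your proposal:
    #   $ haproxy -S maintenance_mode=true
    acl in_maintenance variable maintenance_mode true

    use_backend maintenance if in_maintenance
    default_backend dynamic
###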

opinions?

Aleks
