Re: Avoid 503 during failover to backup?

From: Krzysztof Oledzki <ole#ans.pl>
Date: Wed, 3 Dec 2008 15:57:55 +0100 (CET)

On Wed, 3 Dec 2008, Jim Jones wrote:

> Hi Willy,
>
> On Wed, 2008-12-03 at 06:46 +0100, Willy Tarreau wrote:
>>> Hmm. Well, it would be really nice if HAproxy would keep re-scheduling
>>> failed requests until either a global timeout (conntimeout?) is reached
>>> or the request was served. Displaying a 503 to the user should be the
>>> very last resort.
>>
>> Right now, only one attempt is made on another server when the redispatch
>> option is set. It is the last retry which is performed on another server.
>
> I don't think I have ever seen this one re-dispatch succeed
> in our scenario. It's always the same pattern here:
>
> We shut down all www's, everybody will get a 503 during the next
> few seconds, until haproxy switches to the backups - then they get the
> backup.

Yes, currently backup servers are not considered until they get activated, which happens only after haproxy detects and marks *all* active servers as down.
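
For reference, a minimal config sketch of that behaviour (server names, addresses and the check URI are made up for illustration):

    listen www 0.0.0.0:80
        balance roundrobin
        cookie SRV insert indirect
        option httpchk GET /alive
        server www1 10.0.0.1:80 cookie www1 check inter 2000 fall 3
        server www2 10.0.0.2:80 cookie www2 check inter 2000 fall 3
        # bkp1 only receives traffic once www1 AND www2 are both marked down;
        # with several backup servers only the first usable one is used,
        # unless "option allbackups" is set.
        server bkp1 10.0.0.10:80 check backup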

>>> More desirable would be:
>>>
>>> 1. Server1 goes down
>>>
>>> 2. Request arrives, haproxy schedules it for server1 because
>>> it hasn't noticed yet that server1 is down
>>>
>>> 3. Haproxy attempts to connect to server1 but times out.
>>> It reschedules the request and tries again, picking a new server
>>> according to the configured balancing algorithm. It may even
>>> choose a backup now if, in the meantime, it noticed the failure
>>> of server1.
>>
>> It must only do that after the retry counter has expired on the
>> first server. In fact, we might change the behaviour to support
>> multiple redispatches with a counter (just like retry) and set
>> the retry counter to only 1 when we are redispatching. It's
>> probably not that hard.
>
> I must admit that I haven't looked at the haproxy code and don't know
> anything about retry counters. But whatever fixes these 503's is most
> welcome here! :)

I think it is doable, I'll look into it. However, it is not going to solve your problem, as in case of a failure there is no server haproxy can switch to. Even if there are backup servers, it takes some time to activate them.
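
To put a rough number on "some time": a server is only marked down after "fall" consecutive failed checks, so detection takes on the order of inter * fall (plus check timeouts), and the backups only become usable once that has happened for every active server. A sketch with made-up values:

    # ~2s between checks, 3 failures needed => roughly 6s to mark www1 down,
    # and requests arriving during that window still have nowhere to go.
    server www1 10.0.0.1:80 check inter 2000 fall 3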

>>> 4. Step 3 would repeat until conntimeout is reached or the
>>> request is successfully served. Only when the timeout is hit
>>> does the user get a 503 from haproxy.
>>>
>>> If haproxy worked like that then 503's could be completely avoided by
>>> setting conntimeout to a value higher than the maximum time that it can
>>> take haproxy to detect failure of all non-backup servers. (unless the
>>> backups fail, too - but well, that *is* a case of 503 then)
>>
>> You're thinking like this because you don't have any stickiness :-)
>>
>> There are many people who don't like the redispatch option because
>> it breaks their applications on temporary network issues. Sometimes,
>> it's better to have the user get a 503 (disguised with an "errorfile"),
>> wait a bit and click "reload", than to have the user completely lose
>> his session, basket, etc... because a server has failed to respond for
>> a few seconds.
>
> Yes. Maybe there should be a way to limit this behaviour only to the
> case of failover to backup. These people may not want redispatching to
> happen between the "primary" servers when one responds slowly or throws
> errors temporarily but I'm sure most of them would also like seamless
> failover to the backups when *all* primaries have failed.

Indeed, "emergency redispatch to backups" is one of my yet-unfinished-patches I'm going to clean and publish, eventually.

>> But I agree that for stateless applications, redispatching multiple
>> times would be nice. However, we would not maintain a list of all
>> attempted servers. We would just re-submit to the LB algorithm,
>> hoping to get another server.
>
> That sounds fine with me as long as the redispatching keeps going
> on until a global timeout expires or a working server is found.
>
> If we shut down all www's at once I could imagine the
> following to happen:
>
> 1. Request comes in with stickyness cookie for www1
> 2. Haproxy tries www1, notices it doesn't respond
> 3. Haproxy redispatches to www2, notices it is down, too
> 4. Haproxy loops in step 3 (constantly redispatching between
> the various www's that it thinks are still up) until it
> notices that *all* www's are gone, then it finally redispatches
> to a backup.

Yep, this is more or less how it works currently, except that the redispatch is only performed once.
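
In configuration terms this is roughly (just a sketch, values are examples):

    defaults
        contimeout 5000
        retries 3
        option redispatch
    # the initial attempt and the first retries all go to the server chosen
    # by the cookie/LB algorithm; with "option redispatch" only the LAST
    # retry is re-submitted to the balancing algorithm to pick another server.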

> Now for the people with sticky sessions step 3/4 could be
> made optional. They probably want haproxy to throw a 503 immediately
> after step 2 (maybe with a few retries *during* this step, just in case
> the server was just having a hiccup).

And this is what happens when "option redispatch" is not enabled.
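
So a sticky setup that prefers a quick 503 over losing the session would simply omit it, along these lines (file path and names are only examples):

    backend www
        balance roundrobin
        cookie SRV insert indirect
        retries 2
        # no "option redispatch": all attempts stay on the server designated
        # by the cookie, and once "retries" is exhausted the client gets a
        # 503 (or whatever page "errorfile 503" points to).
        errorfile 503 /etc/haproxy/errors/503.http
        server www1 10.0.0.1:80 cookie www1 check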

>> BTW, one feature I've wanted for a long time was the ability to
>> switch a server to fastinter as soon as it returns a few errors.
>> Let's say 2-3 consecutive connect errors, timeouts, or invalid
>> responses. I thought about using it on 500 too, but that would
>> be cumbersome for people who load-balance outgoing proxies which
>> would just reflect the errors from the servers they are joining.
>>
>> In your situation, this would considerably help because the fast
>> inter could be triggered very early, and it would even save a few
>> more seconds.
>
> Well, for us this would be merely a bandaid. We'd like to use the
> failover feature for fast-switching to maintenance mode (just shut down
> all www's, no thinking, no fiddling with haproxy conf) and 503's are
> simply unacceptable in this scenario.
>
> In the general case I'm also a bit sceptical about this feature. It sure
> may be interesting for some people and applications but IMHO most of the
> time you want to give a failing server some time to recover (maybe it's
> just overloaded?) instead of dropping it as fast as possible. The
> fast-drop might lead to nasty flip/flop situations.
>
> This is especially true when you depend on sticky sessions (stateful
> webservers) because the drop of a server will kill the users that were
> bound to that server.

How about triggering the fastinter mode after N failures (TCP RST, 4xx/5xx codes) *in a row*? The first successfully serviced request would clear this counter.
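
Today the only related knob is a static "fastinter" on the server line; the error-triggered switch itself would be the new part. A sketch of what I have in mind (the trigger described in the comment does not exist yet, only inter/fastinter/fall/rise do):

    server www1 10.0.0.1:80 check inter 2000 fastinter 500 fall 3 rise 2
    # proposed: after N consecutive request failures (connect error, RST,
    # 4xx/5xx), switch this server's checks from "inter" to "fastinter";
    # the first successfully served request resets the failure counter.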

Best regards,

                                 Krzysztof Olędzki

Received on 2008/12/03 15:57
