Re: HAPROXY & pool of keep-alive connections to backend

From: Willy Tarreau <w#1wt.eu>
Date: Sun, 12 Sep 2010 23:39:36 +0200


On Sun, Sep 12, 2010 at 02:29:12PM -0700, dormando wrote:
> So you're specifically talking about the HTTP client retry issue? Where
> haproxy is now the "HTTP client" because it's rebroadcasting a client
> request?

Yes. It's the client because it's the one that decides to send a request to a server after a failure.

> You'll probably flip out at this compromise, but it doesn't really make
> sense to pretend that the load balancer is now the client for all
> purposes; there's still a client at the other end of the load balancer
> issuing the initial request.

The problem is not the client but the server. When you're resending a request to it, you have to know whether it may have started processing your past request or not.
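To make the distinction concrete, here is a minimal Python sketch (not haproxy code, and the function name is hypothetical): a proxy acting as the HTTP client may safely retry a failed connect(), because the server never saw anything, but once any request bytes have been written it can no longer know whether the server started processing, so it must not resend on its own.

```python
import socket

def send_request(host, port, raw_request, retries=2):
    """Forward raw_request to a backend, retrying only while it is
    provably safe: a failed connect() means the server received
    nothing, but once bytes are on the wire the server may already
    be processing, so the failure must be passed back instead."""
    for attempt in range(retries + 1):
        try:
            conn = socket.create_connection((host, port), timeout=3)
        except OSError:
            continue  # nothing was sent yet: retrying is harmless
        try:
            conn.sendall(raw_request)   # bytes are on the wire now
            return conn.recv(65536)     # caller parses the (possibly partial) response
        except OSError:
            raise  # may have been processed: do NOT resend, let the client decide
        finally:
            conn.close()
    raise ConnectionError("backend unreachable")
```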

> Which means that, in perlbal, if a backend server does anything but return
> a 200 OK, we do a few things:
>
> - close the backend connection (not a valid keepalive connection, or can
> assume something is going wrong safely)
> - if the backend closes the connection without sending a proper response,
> IIRC we drop the connection all the way through, backend to client.
> Then the client may decide if it wants to re-issue the request.

Unfortunately, when doing multiplexing, it's possible that the failed request was the first one from this client; the client will then not retry but immediately report an error. However, I think this is a reasonable compromise. Connection drops are not *that* frequent, and leaving it to the client to decide whether to repost is the only way to stay safe.

> Whether or not frontend/backend keepalives are enabled would make no
> difference in how the browser would've handled the situation otherwise.

Without multiplexing, this cannot happen on the first request, only on subsequent ones, which the client can handle. With multiplexing, even a first request can fail, and the user will get an error.

> If
> haproxy didn't exist and the backend got closed, the client should open a
> new connection and retry, or bounce an error to the user. Dropping the
> connection from the LB front to back emulates this pretty effectively?

Not exactly, due to the specifics of multiplexing explained above.

> However, we do special case in X-Reproxy-URL for perlbal :) It's another
> ugly extension, but we pass back a list of URLs which are tried in most
> cases. Due to mogilefs maintaining multiple copies of a file, we can
> always give the LB a prioritized list of ones to try, in case of overload,
> sudden failure, etc.

But as far as I understand it, you only have to process idempotent requests with mogilefs. I'm not saying those are easy, but if you keep a copy of the request, you can safely retry it, which makes it easier to hide the issue from the end user ;-)
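As a sketch of that idea (not perlbal's actual X-Reproxy-URL implementation, and all names and URLs here are hypothetical): because a GET is idempotent, a balancer holding the request can simply replay it against the next replica in a prioritized list when one backend fails or is overloaded.

```python
import urllib.request
import urllib.error

def fetch_first_available(urls, timeout=3):
    """Try a prioritized list of replica URLs, as a MogileFS-style
    X-Reproxy-URL list would provide. GET is idempotent, so replaying
    the same request against the next copy is safe."""
    last_err = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_err = err  # this replica failed; fall through to the next one
    raise last_err or RuntimeError("no replica answered")
```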

Willy

Received on 2010/09/12 23:39

This archive was generated by hypermail 2.2.0 : 2010/09/12 23:45 CEST