Re: VIPs with haproxy

From: Willy Tarreau <w#1wt.eu>
Date: Wed, 29 Oct 2008 20:22:23 +0100


On Wed, Oct 29, 2008 at 03:02:57PM -0400, Joseph Hardeman wrote:
> Willy,
>
> Here is my haproxy.conf, we receive on the external IP of the haproxy
> box and redirect the visitor to the external IP of the web server:
>
> global
> maxconn 32000
> ulimit-n 65536
> uid 0
> gid 0
> daemon
> stats socket /tmp/haproxystats
> nbproc 2
> pidfile /var/run/haproxy-private.pid
>
> listen mnnweb_proxy
> maxconn 32000
> bind xxx.xxx.xxx.xxx:80
> mode http
> cookie SERVERID insert nocache indirect
> balance roundrobin
> server web1 xxx.xxx.xxx.xxx:80 cookie webserver01 check inter 5000 fall 3 rise 1 maxconn 60
> server web2 xxx.xxx.xxx.xxx:80 cookie webserver02 check inter 5000 fall 3 rise 1 maxconn 60
> server web3 xxx.xxx.xxx.xxx:80 cookie webserver03 check inter 5000 fall 3 rise 1 maxconn 60
> server web4 xxx.xxx.xxx.xxx:80 cookie webserver04 check inter 5000 fall 3 rise 1 maxconn 60
> clitimeout 150000
> srvtimeout 30000
> contimeout 10000
> option abortonclose
> option httpclose
> retries 3
> option redispatch
>
> listen health_check 0.0.0.0:60000
> mode health
>
> listen http_health_check 0.0.0.0:60001
> mode health
> option httpchk
>
>
> I was using heartbeat with a VIP that it sets up as eth1:0. I have been
> testing with siege: 300 users, a 5 second delay between calls, and a 10
> second timeout, in 5 minute bursts. When using the VIP, web1 and web2
> don't show any issues in the stats page, but web3 and web4 are pegged at
> maxconn and I see response errors in the haproxy stats page for those
> two servers. Without the VIP, the load is distributed evenly across all
> servers and neither haproxy nor siege shows any errors.

Just to be sure, is your bind address 0.0.0.0? Otherwise you couldn't connect to haproxy via two IP addresses. As I said, I see no reason for such a configuration to behave differently depending on the address you connect to, especially since it's the same frontend for all of them.
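To illustrate, a minimal sketch of a wildcard bind (assuming you want the same frontend to answer on both the native address and the VIP; the backend lines are placeholders taken from your config):

```
listen mnnweb_proxy
    bind 0.0.0.0:80
    mode http
    balance roundrobin
    server web1 xxx.xxx.xxx.xxx:80 cookie webserver01 check inter 5000 fall 3 rise 1 maxconn 60
```

With 0.0.0.0 the listener accepts traffic on any local address, so failing the VIP over never requires touching the haproxy configuration.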

I assume that your VIP is located on the same physical interface as the native address. If that's not the case, you could have some ARP issues to solve first.
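One common ARP pitfall: after a failover, upstream routers keep the old master's MAC in their ARP cache until the new master announces the VIP with a gratuitous ARP (heartbeat's IPaddr resource normally does this itself). A hypothetical sketch, with placeholder VIP and interface names, that only prints the command since actually sending it needs root and a real NIC:

```shell
# Hypothetical values - substitute your real VIP and interface.
VIP=192.168.0.100
IFACE=eth1

# arping -U sends unsolicited (gratuitous) ARP so neighbours refresh
# their cache; -c 3 repeats it three times, -I picks the interface.
echo "arping -U -c 3 -I $IFACE $VIP"
```

If the stats anomaly only appears right after a failover, stale ARP entries upstream would be a prime suspect.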

Stupid question: would you happen to have a firewall loaded on the machine, or even the ip_conntrack module? The conntrack hash could conceivably distribute less evenly with one IP than with the other, though that would still be a bit surprising.
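A quick way to sketch that check (assuming a Linux box; the table lives at /proc/net/ip_conntrack on 2.6-era kernels and /proc/net/nf_conntrack on later ones):

```shell
# Hypothetical check: is connection tracking active on this box?
conntrack_tables=0
for f in /proc/net/ip_conntrack /proc/net/nf_conntrack; do
  if [ -e "$f" ]; then
    conntrack_tables=$((conntrack_tables + 1))
    # Each line in the file is one tracked connection.
    echo "conntrack is active: $f holds $(wc -l < "$f") entries"
  fi
done
if [ "$conntrack_tables" -eq 0 ]; then
  echo "no conntrack table found - module not loaded"
fi
```

If neither file exists, conntrack is not loaded and can be ruled out.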

> I know it's very strange. I am at the point of telling heartbeat to
> shut down eth1 and start eth1 on the failover server, if that is the
> route I have to take. But I would rather use VIPs if possible. Today I
> actually moved haproxy to two better servers, Dell R200s, from the
> Dell 860s it was originally on. And I am seeing the same response
> issues with and without the VIP.

Maybe you have a different problem on this machine. The most common one is a gigabit interface connected to a 10/100 switch port forced to 100-full duplex; in that case the interface generally negotiates half duplex. You have to set the switch port back to autoneg to fix this.
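A sketch of how to spot that mismatch by parsing `ethtool` output (the sample text below is hypothetical, showing what a gigabit NIC typically reports when the switch port is forced to 100-full and negotiation falls back to half duplex; in real use you would capture `ethtool eth1` instead):

```shell
# Hypothetical captured output; replace with: sample=$(ethtool eth1)
sample='Settings for eth1:
  Speed: 100Mb/s
  Duplex: Half
  Auto-negotiation: off'

# Pull the value after "Duplex: " from the report.
duplex=$(printf '%s\n' "$sample" | awk -F': ' '/Duplex/ {print $2}')
if [ "$duplex" != "Full" ]; then
  echo "WARNING: duplex is $duplex - set the switch port back to autoneg"
fi
```

A half-duplex gigabit link under load produces collisions and late collisions, which show up exactly as the kind of response errors you are seeing.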

Regards,
Willy

Received on 2008/10/29 20:22

This archive was generated by hypermail 2.2.0 : 2008/10/29 20:30 CET