Re: VIPs with haproxy

From: Joseph Hardeman <jhardeman#colocube.com>
Date: Wed, 29 Oct 2008 16:04:47 -0400


Willy,

I forgot to mention that all of the machines are running at 1000M full duplex; none of them are set to 100M.

Joe

Joseph Hardeman wrote:
> Willy,
>
> I had haproxy bound to the VIP address; should it be set to 0.0.0.0
> instead? It is now set to bind to the physical IP of the external
> NIC, and I have the web servers' external IPs in the server section.
> The external IP lives on eth1, and when heartbeat brings up the VIP
> it is placed on the same interface as eth1:0. I do not have an
> ifcfg-eth1:0 file set up under /etc/sysconfig/network-scripts, as
> heartbeat doesn't need one to bring up the VIP. I can add one with
> ONBOOT set to no, so that if the box is rebooted for some reason it
> doesn't cause an IP conflict with the other haproxy system.
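> For reference, a minimal ifcfg-eth1:0 along those lines would look
> like this (addresses are placeholders, not our real ones):
>
>   DEVICE=eth1:0
>   BOOTPROTO=static
>   IPADDR=xxx.xxx.xxx.xxx    # the VIP, normally managed by heartbeat
>   NETMASK=255.255.255.0     # assuming a /24; adjust to the real mask
>   ONBOOT=no                 # never bring the alias up at boot; heartbeat owns it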
>
> We don't have iptables started on any of these boxes, and the
> firewall in front of everything shouldn't be affecting anything, as I
> am testing from another system within the same network. Even though
> ip_conntrack is present on the haproxy box, I don't believe it is
> loaded: when I search /proc and all of its subdirectories for
> ip_conn*, I don't find any file with that name.
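> (For reference, that search was roughly:
>
>   find /proc -name 'ip_conn*'     # returns nothing on this box
>
> and "lsmod | grep ip_conntrack" should be an equivalent way to confirm
> the module isn't loaded.)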
>
> Again, thanks for the quick responses and I hope we can figure out
> what might be causing the VIP issues.
>
> Thanks
>
> Joseph
>
>
> Willy Tarreau wrote:
>> On Wed, Oct 29, 2008 at 03:02:57PM -0400, Joseph Hardeman wrote:
>>
>>> Willy,
>>>
>>> Here is my haproxy.conf; we receive connections on the external IP
>>> of the haproxy box and direct each visitor to the external IP of a
>>> web server:
>>>
>>> global
>>> maxconn 32000
>>> ulimit-n 65536
>>> uid 0
>>> gid 0
>>> daemon
>>> stats socket /tmp/haproxystats
>>> nbproc 2
>>> pidfile /var/run/haproxy-private.pid
>>>
>>> listen mnnweb_proxy
>>> maxconn 32000
>>> bind xxx.xxx.xxx.xxx:80
>>> mode http
>>> cookie SERVERID insert nocache indirect
>>> balance roundrobin
>>> server web1 xxx.xxx.xxx.xxx:80 cookie webserver01 check inter 5000 fall 3 rise 1 maxconn 60
>>> server web2 xxx.xxx.xxx.xxx:80 cookie webserver02 check inter 5000 fall 3 rise 1 maxconn 60
>>> server web3 xxx.xxx.xxx.xxx:80 cookie webserver03 check inter 5000 fall 3 rise 1 maxconn 60
>>> server web4 xxx.xxx.xxx.xxx:80 cookie webserver04 check inter 5000 fall 3 rise 1 maxconn 60
>>> clitimeout 150000
>>> srvtimeout 30000
>>> contimeout 10000
>>> option abortonclose
>>> option httpclose
>>> retries 3
>>> option redispatch
>>>
>>> listen health_check 0.0.0.0:60000
>>> mode health
>>>
>>> listen http_health_check 0.0.0.0:60001
>>> mode health
>>> option httpchk
>>>
>>>
>>> I was using heartbeat with a VIP that it brings up as eth1:0. I have
>>> been testing with siege: 300 users, a 5-second delay between calls,
>>> and a 10-second timeout, in 5-minute bursts. When hitting the VIP,
>>> web1 and web2 don't show any issues in the stats page, but web3 and
>>> web4 are pegged at their maxconn and I see response errors in the
>>> haproxy stats page for those two servers. Without the VIP, the load
>>> is distributed evenly across all four servers, and neither the stats
>>> page nor siege shows any errors.
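>>> (For what it's worth, the siege invocation was roughly
>>> "siege -c 300 -d 5 -t 5M http://<VIP-or-native-IP>/", with the
>>> 10-second timeout set in .siegerc; the exact flags are from memory,
>>> so treat them as approximate.)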
>>>
>>
>> Just to be sure, your bind address is 0.0.0.0? Otherwise, you couldn't
>> connect to haproxy via two IP addresses. As I said, I really see no
>> reason for such a configuration to show a different behaviour depending
>> on the address you connect to, especially since it's just the same
>> frontend for all of them.
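>> (That is, something like "bind 0.0.0.0:80", or simply "bind :80", in
>> the listen section, so that the same frontend answers on both the
>> native address and the VIP.)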
>>
>> I assume that your VIP is located on the same physical interface as the
>> native address. If that's not the case, you could have some ARP issues
>> to solve first.
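>> (A quick check would be to run "arping -I <local-iface> <VIP>" from
>> another machine on the same segment, or to look at the ARP table on
>> the gateway, and confirm the VIP maps to the MAC you expect.)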
>>
>> Stupid question: would you happen to have a firewall loaded on the
>> machine, or even the ip_conntrack module? It would be possible that
>> the conntrack hash distributes less evenly with one IP than with the
>> other, though that would still sound a bit surprising.
>>
>>
>>> I know it's very strange. I am at the point of telling heartbeat to
>>> shut down eth1 and bring up eth1 on the failover server, if that is
>>> the route I have to take, but I would rather use VIPs if possible.
>>> Today I actually moved haproxy to two better servers, Dell R200s,
>>> from the Dell 860s it was originally on, and I am seeing the same
>>> response issues with and without the VIP.
>>>
>>
>> Maybe you have a different problem on this machine. The most common
>> one is gigabit interfaces connected to 10/100 switches forced to
>> 100-full duplex: the interface generally negotiates half duplex in
>> that case. You have to set the switch to autoneg to fix this.
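>> (Running "ethtool eth1" on each box and comparing the reported Speed
>> and Duplex lines against what the switch port is set to is a quick
>> way to verify this; "mii-tool" gives similar information with older
>> drivers.)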
>>
>> Regards,
>> Willy
>>
>
