Re: Config recommendations for LDAP(S) load-balancing

From: Willy Tarreau <w@1wt.eu>
Date: Fri, 18 Dec 2009 07:10:44 +0100


Hi Paul,

On Thu, Dec 17, 2009 at 03:23:28PM -0800, Paul Hirose wrote:
> We're starting down the path of using HAProxy (among other things such
> as Crossroads, or LVS-NAT/LVS-DR) and putting them in front of our
> LDAP/S servers. We're starting w/HAProxy cause it was so easy to set
> up :) It obviously works out-of-the-box, so to speak. But I was
> wondering if anyone had any recommendations for tuning parameters (be
> it HAProxy or /etc/sysctl.conf, or anything else, at least software
> wise.) The LBs are a pair of old Sun X2100's running Centos 5.4,
> pretty generic stock.

I have mixed memories of the Sun X2100. It had two on-board NICs, one tg3 and one nforce. The nforce relied on the forcedeth driver, which used to drop a high percentage of packets on Rx at high packet rates (though those were still accounted as properly received). Basically, as with all nforce chips, it was only useful as an admin port to SSH into the box. Also, the one I tested was equipped with a dual-core Opteron which had unsynchronized TSC counters, causing lots of trouble with timeouts because the clock was jumping back and forth. That was the reason I finally implemented the internal monotonic clock in haproxy. If you boot with "notsc", it's OK, though slower.

> As best as I understand our current setup, there are tons of tiny
> connections from our SMTP server pool to our LDAP server pool. Our
> existing load-balancer isn't cutting it anymore (at least we think
> that's the case.) As a trial, we want to simply replace our existing
> LB with a HAProxy based LB for starters, just to see if the problem
> we're having remains. If it all goes away, then we know the problems
> we're having are indeed because of our existing LB. If the problems
> remain, then maybe it's not the LB after all, and we need to dig more.

Is your current LB installed on the same machine as the LDAP server?

> I realize it's a bit tough to make recommendations, and there are many
> different ways to do this. But as far as haproxy.cfg or sysctl.conf
> or other more "common" locations, if anyone has a suggestions on
> configurations geared to process tons of tiny short connections,
> that'd be great. Low latency, small packets, short lived connections,
> many connections at one time.

What you describe exactly matches an HTTP workload. You still have to set your timeouts large enough to cover the occasional slow request (maybe 10 seconds or so). But the first thing you need is logging, so you can see what is really happening.
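As a starting point, here is a minimal sketch of what such a configuration could look like for TCP-mode LDAP balancing, with the ~10 second timeouts mentioned above. All names and addresses are placeholders, not taken from Paul's setup:

```haproxy
# Hypothetical haproxy.cfg sketch for LDAP load-balancing.
global
    log 127.0.0.1 local0          # ship logs to a local syslog daemon
    maxconn 20000

defaults
    mode tcp                      # LDAP is plain TCP, not HTTP
    log global
    option tcplog                 # one log line per connection
    timeout connect 5s
    timeout client 10s            # large enough for occasional slow requests
    timeout server 10s

frontend ldap_in
    bind :389
    default_backend ldap_servers

backend ldap_servers
    balance leastconn             # suits many short-lived connections
    server ldap1 192.0.2.11:389 check
    server ldap2 192.0.2.12:389 check
```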

After that, everything depends on precise numbers and what you'll find in the logs. For instance, you may notice that you sometimes fail to connect to your servers. That will then probably be because they're not able to cope with a large number of connections, and if so, you should set a "maxconn" parameter on their "server" lines in haproxy's config to match the servers' limitations. Also, as a rule of thumb, when you stay below 1000 connections per second and below about 5000 concurrent connections, you generally don't need any specific tuning.
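For example (the limit of 500 is purely illustrative; the right value depends on what your LDAP servers can actually sustain), with a per-server "maxconn", haproxy queues excess connections instead of overloading the server:

```haproxy
backend ldap_servers
    balance leastconn
    # maxconn caps concurrent connections per server; extra ones wait
    # in haproxy's queue instead of being refused by the server.
    server ldap1 192.0.2.11:389 check maxconn 500
    server ldap2 192.0.2.12:389 check maxconn 500
```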

If you're running at more than 5000 concurrent connections, you need to be careful about the per-socket memory, as the default and max values of tcp_rmem and tcp_wmem can quickly use all the network buffers. Lowering those values can then save a lot of memory.
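For instance, something like this in /etc/sysctl.conf shrinks the per-socket TCP buffers; the values below are only an example sized for small, short-lived connections, not a recommendation for every workload:

```
# /etc/sysctl.conf -- reduce per-socket TCP memory (min default max, in bytes).
# Example values only; apply with "sysctl -p" and tune to your traffic.
net.ipv4.tcp_rmem = 4096 16384 65536
net.ipv4.tcp_wmem = 4096 16384 65536
```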

Also, if you are logging every connection, please avoid the standard syslog daemon as shipped with Red Hat. By default it performs a synchronous write for every message. You can change that by prepending a minus sign to each file name, but it's still not optimised for high loads. It's better to install syslog-ng, listening on another port.
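Concretely, both workarounds look like this (file names and the port number are illustrative):

```
# /etc/syslog.conf -- the leading "-" makes sysklogd buffer writes
# instead of syncing after every message:
local0.*    -/var/log/haproxy.log

# syslog-ng.conf sketch -- listen on a separate UDP port and point
# haproxy's "log" directive at it (port 10514 is just an example):
source s_haproxy      { udp(ip(127.0.0.1) port(10514)); };
destination d_haproxy { file("/var/log/haproxy.log"); };
log { source(s_haproxy); destination(d_haproxy); };
```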

If you are running at high connection rates (say more than 10000 per second), it could help a bit to try version 1.4-dev. It has some options to save a few network packets per connection, reducing the load on both the network and the servers.
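If I'm thinking of the same options, they would be enabled like this; do check your version's documentation, as 1.4-dev is a moving target:

```haproxy
defaults
    mode tcp
    # Both options aim to save pure ACK packets during connection setup.
    option tcp-smart-accept      # frontend side: wait for data before waking up
    option tcp-smart-connect     # backend side: merge the ACK with the first data
```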

Hoping this helps,
Willy
