On Sat, Sep 27, 2008 at 02:12:16PM +0200, Patrick Viet wrote:
> HAproxy can run on just about any hardware that can run Linux or most
> Unices. The real question is how many simultaneous connections, how
> many connections per second, and what throughput you want to handle...
> In my experience, a Pentium 4 with 2 GB of RAM has been able to handle
> around 10,000 connections/sec, and a base Core 2 Duo nearly 20,000/sec,
> with throughput over 1 Gbit/s with no particular problem.
I agree with Patrick's numbers. I would complete them by noting that the default buffer size is 16 kB, and there are two buffers per connection, so a connection consumes about 32 kB of memory in haproxy. Add about 1 kB to that for other session information. If you have to support, say, 10000 concurrent connections, haproxy will eat about 330 MB of RAM.
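As a quick sanity check, the arithmetic above can be reproduced in the shell (the 16 kB buffers and ~1 kB of session state are the figures from this mail; the result lands near the 330 MB quoted, the small gap being decimal vs. binary megabytes):

```shell
# Per-connection memory in haproxy: two 16 kB buffers + ~1 kB session state
per_conn_kb=$((16 + 16 + 1))
conns=10000
total_mb=$((conns * per_conn_kb / 1024))
echo "${per_conn_kb} kB per connection, ~${total_mb} MiB for ${conns} connections"
```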
But be very careful about system tuning. Network parameters can make your system use insane amounts of memory. By default, 75% of the RAM is usable by the TCP stack (check tcp_mem). I've just seen some of the parameters you said you'll be using. These can be dangerous when you have high numbers of connections:
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
If you have 10000 concurrent connections, you'll then have 20000 sockets, and will be allocating 87380+65536 bytes by default for each, meaning up to 3 GB of socket buffers. I would suggest that you lower those values, especially the write buffer, which is not very useful since haproxy already buffers: a 16 kB default socket buffer on top of haproxy's own 16 kB results in 32 kB per direction by default.
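As one illustrative way to apply this advice (these values are an assumption on my part, not a drop-in recommendation; tune min/default/max for your own workload), a sysctl fragment could look like:

```
# Illustrative values only. Default read buffer kept to a multiple of a
# 1460-byte MSS, write buffer kept small since haproxy already buffers,
# and max capped at 1 MB instead of 16 MB.
net.ipv4.tcp_rmem = 4096 16060 1048576
net.ipv4.tcp_wmem = 4096 16384 1048576
```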
Also, your default read buffer size should be a multiple of your default MSS (1460). I suggest using 16060 or 32120.
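Both suggested values are exact MSS multiples, as a one-liner confirms (11 and 22 segments of 1460 bytes respectively):

```shell
mss=1460
for v in 16060 32120; do
  echo "$v = $((v / mss)) * $mss segments, remainder $((v % mss))"
done
```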
Last, reduce the max size. There's no point using 16 MB buffers for a socket; those are only useful for benchmarks. With 16 MB, you can achieve 1 Gbps over a single socket between any two points in the world (133 ms RTT for 20000 km at the speed of light). This is pretty useless and will consume large chunks of memory when clients disappear from the net during a large file download.
Using 1M here is often more than enough.
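The 16 MB figure is simply the bandwidth-delay product for that worst case, which a quick check reproduces (assuming the 1 Gbit/s rate and 133 ms RTT from the mail):

```shell
# bandwidth-delay product = rate (bytes/s) * RTT (s)
rate_bps=1000000000      # 1 Gbit/s
rtt_ms=133               # antipodal round trip at the speed of light
bdp=$((rate_bps / 8 * rtt_ms / 1000))
echo "${bdp} bytes (~$((bdp / 1000000)) MB) to keep the pipe full"
```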
That said, I'm used to tweaking systems to achieve 20k connections per GB of physical RAM. It's not easy because the system tends to use more than you want, but it is possible. If you don't want to spend too much time tweaking sysctls and running benchmarks, I simply suggest never going above 10k connections per GB.
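For reference, those rules of thumb translate into a total per-connection memory budget (haproxy's ~33 kB plus socket buffers plus kernel overhead) of roughly:

```shell
gb_kb=$((1024 * 1024))
echo "20k conns/GB leaves $((gb_kb / 20000)) kB per connection"
echo "10k conns/GB leaves $((gb_kb / 10000)) kB per connection"
```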
Willy

Received on 2008/09/28 08:28
This archive was generated by hypermail 2.2.0 : 2008/09/28 08:31 CEST