Re: Hardware requirements

From: Marcus Herou <marcus.herou#tailsweep.com>
Date: Sun, 28 Sep 2008 13:42:23 +0200


Thanks a lot to you both Patrick and Willy!

I will reduce the amount of memory used as you said. To be on the safe side I will buy at least 4 GB of RAM since the cost is so low these days. And a Core2Duo will be more than enough on the CPU part, I guess. So a cheap box for around 1000€ will be enough. I'm starting to wonder why the BIG-IP beasts cost 25 000€+ when you get really far with a software LB.

Yep I have the stats page enabled which is really helpful. I have not enabled logging though...

HAProxy is currently eating just 20 MB, so we can scale a whole lot more, I guess. All our requests are so fast (no PHP, no Perl, no Ruby, just in-memory Java operations) that we will hopefully never get to more than 1000 connections, with "never" meaning at least 6 months :)

Talking to you guys makes my performance questions seem small and pitiable, and damn, we are the #4 site reach-wise in Sweden, reaching 3.5 million UB each week. What kind of sites are you guys really running?

Regards

//Marcus

On Sun, Sep 28, 2008 at 8:28 AM, Willy Tarreau <w#1wt.eu> wrote:

> On Sat, Sep 27, 2008 at 02:12:16PM +0200, Patrick Viet wrote:
> > HAproxy can run on just about any hardware that can run Linux or most
> > Unices. The real question is how many simultaneous connections, how
> > many connections per second, and what throughput you want to handle...
> >
> > In my experience, a Pentium 4 with 2 GB RAM has been able to handle
> > around 10 000 connections/sec, and a base Core2Duo nearly 20 000/sec,
> > with throughput over 1 Gbit/s and no particular problem.
>
> I agree with Patrick's numbers. I would complete them by pointing out that
> default buffer sizes are 16 kB, and there are two buffers per connection,
> so a connection consumes about 32 kB of memory in haproxy. Add 1 kB to
> that for other session information. If you have to support, say, 10000
> concurrent connections, haproxy will eat about 330 MB of RAM.
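The arithmetic above can be sanity-checked quickly; this sketch just restates the per-connection figures from the paragraph (two 16 kB buffers plus roughly 1 kB of session state):

```shell
# Back-of-envelope check: two 16 kB buffers plus ~1 kB of session
# state per connection, times 10000 concurrent connections.
per_conn_kb=$((16 * 2 + 1))          # 33 kB per connection
total_kb=$((per_conn_kb * 10000))    # total for 10000 connections
echo "${total_kb} kB (~$((total_kb / 1024)) MiB)"
# prints: 330000 kB (~322 MiB)
```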
>
> But be very careful about system tuning. Network parameters can make
> your system use insane amounts of memory. By default, 75% of the RAM
> is usable by the TCP stack (check tcp_mem). I've just seen some of
> the parameters you said you'll be using. These ones can be dangerous
> when you have high numbers of connections:
>
> net.ipv4.tcp_rmem=4096 87380 16777216
> net.ipv4.tcp_wmem=4096 65536 16777216
>
> If you have 10000 concurrent connections, you'll then have 20000 sockets,
> and will be allocating 87380+65536 bytes by default for each, meaning up
> to 3 GB of socket buffers. I would suggest that you lower those values,
> especially the write buffer, which is not very useful since you already
> have haproxy's. Even a 16 kB default buffer adds to haproxy's, resulting
> in 32 kB in total by default.
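The 3 GB worst case can be reproduced from the quoted defaults; this sketch assumes every socket fills its default read and write buffers at once:

```shell
# Worst case with the quoted sysctls: 10000 connections mean 20000
# sockets (client side + server side), each with an 87380-byte read
# buffer and a 65536-byte write buffer by default.
sockets=$((10000 * 2))
per_sock=$((87380 + 65536))          # 152916 bytes per socket
total=$((sockets * per_sock))        # total bytes of socket buffers
echo "$((total / 1024 / 1024)) MiB"
# prints: 2916 MiB
```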
>
> Also, your default read buffer size should be a multiple of your default
> MSS (1460). I suggest using 16060 or 32120.
>
> Last, reduce the max size. There's no point using 16 MB buffers for a
> socket; those are just used for benchmarks. With 16 MB, you can achieve
> 1 Gbps over a single socket between any two points in the world (133 ms
> RTT for 20000 km at the speed of light). This is pretty useless and will
> consume large chunks of memory when clients disappear from the net during
> a large file download.
>
> Using 1M here is often more than enough.
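Putting the advice above together, a hypothetical /etc/sysctl.conf fragment might look like this; the exact numbers are illustrative assumptions, not tested figures:

```shell
# Hypothetical sysctl fragment combining the advice above (assumed
# values): defaults that are a multiple of the 1460-byte MSS
# (11 * 1460 = 16060), and a 1 MB maximum instead of 16 MB.
net.ipv4.tcp_rmem = 4096 16060 1048576
net.ipv4.tcp_wmem = 4096 16060 1048576
```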
>
> That said, I'm used to tweaking systems to achieve 20k conns per GB of
> physical RAM. It's not easy because the system tends to use more than you
> want, but it is possible. If you don't want to spend too much time
> tweaking sysctls and running benchmarks, I simply suggest never going
> above 10k conns per GB.
>
> Regards,
> Willy
>
>

-- 
Marcus Herou CTO and co-founder Tailsweep AB
+46702561312
marcus.herou#tailsweep.com
http://www.tailsweep.com/
http://blogg.tailsweep.com/
Received on 2008/09/28 13:42

This archive was generated by hypermail 2.2.0 : 2008/09/28 13:47 CEST