Re: Running haproxy on the cluster nodes

From: Martin Goldman <martin#mgoldman.com>
Date: Wed, 12 Dec 2007 15:36:56 -0500


I did a few throughput tests using iperf between servers and consistently got about 725Mbps -- not the full 1000, but a lot more than 320 at least. Is that a reasonable test?
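
For reference, the tests were along these lines (classic iperf 2 syntax; "web1" and the stream count are just illustrative):

    # on the receiving server
    iperf -s

    # on the sending server: 10 parallel streams for 30 seconds
    iperf -c web1 -P 10 -t 30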

hdparm is reporting a "Timing buffered disk reads" value of about 50MB/sec for my disks (which are SATA), so it seems reasonable for the individual web servers to max out at 40-something MB/sec. What I don't quite understand is whether haproxy is actually hitting the disk. If it isn't, the cluster should be able to handle more requests than the individual web servers can. Does that make sense?
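
For reference, that number came from a run along these lines ("/dev/sda" standing in for the actual device):

    hdparm -t /dev/sda    # buffered disk reads -- the ~50MB/sec figure
    hdparm -T /dev/sda    # cached reads, for comparison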

Thanks,
Martin

On 12/12/07, Willy Tarreau <w#1wt.eu> wrote:
>
> On Wed, Dec 12, 2007 at 08:20:07AM -0500, Martin Goldman wrote:
> > Thanks again for your help, Willy.
> >
> > It looks like you were right on the keepalive issue. When I tried this,
> > requests per second on my tiny file doubled to about 35,000 on the
> > cluster.
>
> Cool! And on the individual servers?
>
> > Requests per second on the 100K file were basically unchanged, however.
> >
> > I tried copying a 512MB file between two of the servers involved and the
> > throughput I received was about 45MB/sec. I understand that theoretically
> > one should be able to achieve 125MB/sec over GigE, but I'm not sure what
> > one could expect to get in a real-world scenario. I suppose I should
> > investigate that more.
>
> With a few concurrent sessions (around 10, just to compensate for the small
> amount of possible dead time), you should get about 118,600,000 bytes/s of
> payload. That's just the framing overhead worked out: with a 1500-byte MTU,
> each 1538-byte frame on the wire carries 1460 bytes of TCP payload, and
> 125MB/s * 1460/1538 comes to roughly 118.6MB/s.
> Ensure that you're not saturating the disks on the server side. For such a
> test, you should put the files in RAM (either cached, or on a tmpfs).
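>
> For example, a quick way to do that (the mount point, size and file name
> are just placeholders):
>
>     mkdir -p /var/www/bench
>     mount -t tmpfs -o size=256m tmpfs /var/www/bench
>     cp /var/www/test-100k.bin /var/www/bench/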
>
> Welcome to the world of high performance benchmarks ;-)
>
> Cheers,
> Willy
>
>