Re: Tuning Linux for performance

From: Willy Tarreau <w#1wt.eu>
Date: Fri, 26 Jun 2009 03:17:18 +0200


Hi Ashwani,

On Mon, Jun 15, 2009 at 12:37:56PM -0700, Ashwani Wason wrote:
> Willy,
>
> In one of the relatively recent posts you mentioned, "The rest is
> "just" kernel parameter tuning. I'm thinking about writing a tuning
> guide for 2.6 kernels. I've once again been contacted by a big site
> this week-end which was dying under load because the sysctls had not
> been tuned, and that's a shame :-/".
>
> I am wondering if you ever managed to create that guide.

No, it takes too much time and, as you can see, I don't have much :-( I need to compile all the mails I exchange on the subject.

> I am planning
> to evaluate HAProxy and compare its performance with one of my own TCP
> proxy on Linux. So far for a target benchmark and a prototype (simple
> data copy from clients to the servers and back) I have been able to
> achieve ~800 mbps (reported by application, not including TCP/IP/link
> headers) with a single process (no threads) instance on 2.6.18 on a
> 2.6GHz CPU (machine with four CPUs) and two e1000g NICs (one for
> client and one for server).

Depending on object size, you should get more. With large objects on such NICs, you should reach about 948 Mbps with 1500-byte Ethernet frames, and about 988 Mbps with jumbo frames.
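
Just to show where those figures come from (my own back-of-the-envelope numbers): a full-size segment carries MTU-40 bytes of payload but costs MTU+38 bytes on the wire once you count the Ethernet header, FCS, preamble and inter-frame gap. A tiny C check of that ratio:

    /* rough goodput estimate for a saturated gigabit link;
     * assumes IPv4+TCP without options (40 bytes of headers)
     * and 38 bytes of per-frame Ethernet overhead on the wire */
    #include <stdio.h>

    static double goodput_mbps(double mtu)
    {
        double payload = mtu - 40.0;        /* TCP payload per frame     */
        double on_wire = mtu + 38.0;        /* bytes consumed per frame  */
        return 1000.0 * payload / on_wire;  /* share of a 1000 Mbps link */
    }

    int main(void)
    {
        printf("MTU 1500: %.0f Mbps\n", goodput_mbps(1500.0)); /* ~949 */
        printf("MTU 9000: %.0f Mbps\n", goodput_mbps(9000.0)); /* ~991 */
        return 0;
    }

TCP options (timestamps) eat a little more per segment, which is why the practical numbers land slightly below those.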

> I have done the usual stuff that I
> required to get this to work (bind the interrupts from NICs to two
> CPUs, bind the process to the third CPU, do TCP tuning like syn
> backlog, tw reuse/recycle/buckets, tcp wmem/rmem, driver tuning such
> as rx rings, etc.) The link is gbps so obviously my expectation is to
> get as close to gbps as possible. Routing tests have shown that I
> should be able to do ~970mbps. However, with two processes I am unable
> to reach that limit. Without going into much details as to what
> exactly is happening (which I don't mind going into if useful), I was
> looking for information on how you are tuning for gigabit links.

It really depends on whether you're processing small or large objects. In my experience, PCI-based Ethernet cards never go beyond 550,000 packets/s, and PCI-e gigabit cards are limited to about 630 kpps. I think this is caused by the latency induced by bus arbitration or data serialization. I observe between 800 and 850 kpps on 10gig cards (PCI-e 8x).
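
To put rough numbers on that (back-of-the-envelope, not a measurement): at 630 kpps of full 1460-byte segments you could in theory move about 7.3 Gbps, so large transfers never hit the packet limit on a gig link; but a small HTTP transaction easily costs around 10 packets per direction, so at 550-630 kpps you top out somewhere around 55-63k transactions per second no matter how fast the link is.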

So with that in mind, if you're processing small objects, you are quickly limited by the number of packets you can send, which is why the latest development version of haproxy plays with the TCP stack to merge some packets.
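
To give an idea of the principle (a simplified sketch, not the actual haproxy code): on Linux you can pass MSG_MORE to send() to tell the stack that more data follows, so a small header and the beginning of the body can leave in one segment instead of two small packets:

    /* sketch only: let the kernel coalesce small writes into
     * fewer packets instead of sending each piece separately */
    #include <sys/types.h>
    #include <sys/socket.h>

    static void send_reply(int fd, const char *hdr, size_t hdr_len,
                           const char *body, size_t body_len)
    {
        /* hold the headers back: more data follows immediately */
        send(fd, hdr, hdr_len, MSG_MORE | MSG_NOSIGNAL);
        /* last piece, no MSG_MORE: the merged segment(s) go out now */
        send(fd, body, body_len, MSG_NOSIGNAL);
    }

TCP_CORK gives a similar effect when you cannot control every send() call yourself.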

If you're running your tests with large objects on a local network, you should not be very sensitive to TCP tuning. However, you should check the buffer size your proxy uses: if you read 4 kB at a time, you will waste your time waking up for nearly nothing.
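
For illustration (a trimmed-down sketch of such a copy loop, not taken from any particular proxy): with a 4 kB buffer a 1 MB object means about 256 read/write wakeups per direction, while 64 kB cuts that to about 16:

    /* sketch only: forward one chunk from src_fd to dst_fd;
     * the point is the buffer size, error handling is minimal */
    #include <unistd.h>

    #define PROXY_BUF_SIZE (64 * 1024)   /* vs 4 kB: ~16x fewer wakeups */

    static int forward_some(int src_fd, int dst_fd)
    {
        char buf[PROXY_BUF_SIZE];
        ssize_t n = read(src_fd, buf, sizeof(buf));
        if (n <= 0)
            return (int)n;               /* 0 = EOF, -1 = error/EAGAIN */
        ssize_t off = 0;
        while (off < n) {
            ssize_t w = write(dst_fd, buf + off, (size_t)(n - off));
            if (w < 0)
                return -1;               /* real code would handle EAGAIN */
            off += w;
        }
        return (int)off;
    }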

Hoping this helps,
Willy