Re: Perfect sysctl

From: Marcus Herou <marcus.herou#tailsweep.com>
Date: Wed, 30 Dec 2009 14:04:26 +0100


Hi Willy, thanks for your answer. It got filtered, which is why I missed it for two weeks.

Let's start with describing the service.

We are hosting JavaScript files of sizes up to 20K, and we serve Flash and image banners as well, which of course are larger. That is basically it: ad serving.

On the LBs we push about 2 MByte/s per LB, so 2 x 2 MByte/s = 4 MByte/s total, roughly 32 Mbit/s at peak. Bandwidth is not the issue.

I've created a little script which parses the "active connections" figure from the HAProxy stats interface and plots it into Cacti. It peaks at 100 connections per machine (2 x 100 in total), which is very little in your world I guess.
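
For what it's worth, the script boils down to something like this (the host and stats URL are specific to our setup, so treat them as placeholders):

    #!/bin/sh
    # Pull the HAProxy stats export in CSV form and sum the current
    # session counts (the "scur" column, field 5) across FRONTEND rows.
    # Cacti just reads the single number this prints.
    curl -s 'http://lb1.example.com/haproxy?stats;csv' \
        | awk -F, '$2 == "FRONTEND" { total += $5 } END { print total + 0 }'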

I've attached a plot of TCP connections as well. Nothing fancy there either, besides that the number of TIME_WAIT sockets is in the 10000 range (log scale).
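
For reference, I tally the sockets per state like this:

    # Count TCP sockets per state (field 6 of netstat's output);
    # TIME_WAIT is the one hovering around 10000 on the LBs.
    netstat -ant | awk 'NR > 2 { n[$6]++ } END { for (s in n) print s, n[s] }'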

Here's the problem:

Every other day I receive alarms from Pingdom that the service is not available, and if I watch the syslog I see, at about the same times, hints about a possible SYN flood. At the same times we receive emails from sites using us saying that our service is damn slow.
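
The syslog entries look roughly like this (paraphrasing from memory):

    kernel: possible SYN flooding on port 80. Sending cookies.

and these are the knobs I understand govern the SYN/accept queues; I am only reading the current values here, not changing anything yet:

    sysctl net.ipv4.tcp_max_syn_backlog net.core.somaxconn net.ipv4.tcp_syncookies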

What I feel is that we somehow get "hiccups" on the LBs and that requests get queued. If I count the number of rows in the access logs on the machines behind the LBs, the count decreases at the same times and by the same factor on each machine (perhaps 10-20%), leading me to think that the bottleneck is not on the backend side.
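
To see the dip I bucket the access logs per minute, roughly like this (the timestamp field position depends on the log format, so adjust accordingly):

    # Count requests per minute from a combined-format access log;
    # $4 looks like "[30/Dec/2009:14:04:26", so chars 2-18 give the minute.
    awk '{ print substr($4, 2, 17) }' access.log | uniq -c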

A little more about the backend servers:

We have an ad publishing system which pushes data to the web servers, enabling them to act almost 100% static; this has been the key thing which I tuned some years ago. Initially every request went to a DB, but now each lookup hits a simple Hashtable which is replicated from a "master".

The backend servers have very little to do and consume very few resources.
Example:
top - 11:34:23 up 366 days, 1:15, 1 user, load average: 0.37, 0.25, 0.23
Tasks: 79 total, 1 running, 78 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.8%us, 0.5%sy, 0.0%ni, 94.0%id, 4.6%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 4052904k total, 4008696k used, 44208k free, 292932k buffers
Swap: 3903784k total, 9240k used, 3894544k free, 2145340k cached

This is top on one of the LBs:
top - 11:35:07 up 433 days, 16:52, 2 users, load average: 0.12, 0.17, 0.16
Tasks: 69 total, 1 running, 68 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.8%sy, 0.0%ni, 97.6%id, 0.0%wa, 0.2%hi, 1.2%si, 0.0%st
Mem: 4052904k total, 2715152k used, 1337752k free, 176564k buffers
Swap: 3903784k total, 0k used, 3903784k free, 2268308k cached

Same here, nothing fancy.

I do not blame HAProxy; rather, I believe we are hitting some form of queue in the kernel or some other limit...
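
One thing I will check is whether the kernel is dropping connections at the listen queue; the exact counter wording varies between kernels, but something like this should surface it:

    # Cumulative counters since boot; run twice around an incident and diff.
    netstat -s | egrep -i 'listen|overflow'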

Cheers

//Marcus Herou

On Tue, Dec 15, 2009 at 11:19 PM, Willy Tarreau <w#1wt.eu> wrote:

> Hi Marcus,
>
> On Tue, Dec 15, 2009 at 10:53:31AM +0100, Marcus Herou wrote:
> > Hi guys.
> >
> > I would appreciate it a lot if someone could share a sysctl.conf which is
> > known to run smoothly on a HAProxy machine with busy sites behind it.
>
> This is a question I regularly hear.
>
> "very busy sites" does not mean much. Tuning is a tradeoff between being
> good at one job and being good at another one. People who run with very
> large numbers of concurrent connections will not tune the same way as
> people forwarding high data rates, which in turn will not tune the same
> way as people experiencing high session setup/teardown rates.
>
> People with large numbers of servers sometimes don't want to wait long
> on each request either, while people with small numbers of servers will
> prefer to wait longer in order not to sacrifice large parts of their
> clients in case something temporarily goes wrong.
>
> You see, that's just a tradeoff. You need to define your workload a
> little (bit rate, session rate, session concurrency, number of servers,
> response times, etc.). The more info you provide, the finer the tuning.
>
> > There are so many variables that one can possibly fuck up, so it is
> > better to start from something which is known to work.
>
> Well, I can tell you for sure that among the few people who are *really*
> experiencing high loads on busy machines, you won't find similar tuning,
> and that the few common parts will not help at all on their own.
>
> And I would really recommend against blindly copy-pasting tuning
> parameters from another machine, as you may see your system collapse
> for no apparent reason (a typical error is to copy tcp_mem settings
> with the wrong units).
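>
> To illustrate the units trap (the values below are purely illustrative,
> not a recommendation): tcp_mem is expressed in pages, while tcp_rmem
> and tcp_wmem are expressed in bytes, so copying numbers from one into
> the other is a classic way to wreck a box:
>
>     # net.ipv4.tcp_mem takes three values in PAGES (low, pressure, high)
>     net.ipv4.tcp_mem = 196608 262144 393216
>     # net.ipv4.tcp_rmem takes three values in BYTES (min, default, max)
>     net.ipv4.tcp_rmem = 4096 87380 4194304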
>
> Regards,
> Willy
>
>

-- 
Marcus Herou CTO and co-founder Tailsweep AB
+46702561312
marcus.herou#tailsweep.com
http://www.tailsweep.com/

[Attachment: tcp_connections.png]
Received on 2009/12/30 14:04
