Re: Connection limiting & Sorry servers

From: Willy Tarreau <>
Date: Wed, 5 Aug 2009 18:26:16 +0200

On Wed, Aug 05, 2009 at 05:52:50PM +0200, Boštjan Merčun wrote:
> Hi Willy
> On Mon, 2009-08-03 at 09:21 +0200, Willy Tarreau wrote:
> > why are you saying that ? Except for rare cases of huge bugs, a server
> > is not limited in requests per second. At full speed, it will simply use
> > 100% of the CPU, which is why you bought it after all. When a server dies,
> > it's almost always because a limited resource has been exhausted, and most
> > often this resource is memory. In some cases, it may be other limits such
> > as sockets, file descriptors, etc... which cause some unexpected exceptions
> > not to be properly caught.
> We have a problem that our servers open connections to some 3rd party,
> and if we get too many users at the same time, they get too many
> connections.

So you're agreeing that the problem comes from "too many connections". This is exactly the problem that "maxconn" solves.
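A minimal sketch of what such a configuration could look like (the backend name, addresses, and limits below are hypothetical, not from the thread): a per-server "maxconn" caps how many connections reach each server at once, and excess requests wait in haproxy's queue instead of piling up on the server.

```
backend app
    balance roundrobin
    timeout queue 30s          # fail requests queued longer than this
    # At most 100 concurrent connections per server; the rest queue in haproxy
    server web1 192.168.0.11:80 maxconn 100
    server web2 192.168.0.12:80 maxconn 100
```

With this in place, a burst of users translates into a longer queue and slightly higher latency rather than an ever-growing number of concurrent connections on the servers.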

> > I'm well aware of the problem, many sites have the same. The queuing
> > mechanism in haproxy was developed exactly for that. The first user
> > was a gaming site which went from 50 req/s to 10000 req/s on patch days.
> > They too thought their servers could not handle that, while it was just
> > a matter of concurrent connections once again. By enabling the queueing
> > mechanism, they could sustain the 10000 req/s with only a few hundred
> > concurrent connections.
> If that is the case, I will try the same and only limit max connections to see what happens.
> If that actually works, I will have a much simpler situation to handle.

I bet so ;-)

Willy

Received on 2009/08/05 18:26

This archive was generated by hypermail 2.2.0 : 2009/08/05 18:30 CEST