Re: haproxy and orbited (COMET server)

From: Roberto Saccon <rsaccon#gmail.com>
Date: Thu, 31 Jan 2008 20:28:51 -0200


Willy,

thanks a lot for the detailed explanations. AFAIK, nginx does not implement keep-alive either.

By the way, done properly, I believe Comet does scale; the best example is Gmail, which uses Comet techniques. But of course you are right that if you don't have full control of all the network equipment on the server side, a lot of things can go wrong. Server saturation is not a problem, at least not with my approach, which uses Erlang microthreads (lightweight processes).
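
To give an idea of why: with one lightweight process per connection, tens
of thousands of parked connections cost almost nothing. Here is a minimal
sketch of the model (using Python's asyncio here rather than Erlang, purely
as an illustration; the connection count is made up):

    import asyncio

    # A rough analogue of one-lightweight-process-per-connection: park
    # 50,000 tasks, each standing in for one held-open Comet connection.
    # Each costs a small task object, not an OS thread, so holding the
    # connections open does not by itself saturate the server.

    async def idle_connection(release: asyncio.Event) -> None:
        await release.wait()      # parks cheaply until an event arrives

    async def main() -> None:
        release = asyncio.Event()
        tasks = [asyncio.create_task(idle_connection(release))
                 for _ in range(50_000)]
        await asyncio.sleep(5)    # 50,000 "connections" sit idle here
        release.set()             # push one "event" to all of them
        await asyncio.gather(*tasks)

    asyncio.run(main())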

I look forward to HAProxy keep-alive support in a few months (if, by then, I still believe the problems you mentioned can be handled!).

regards
Roberto

On Jan 31, 2008 7:07 PM, Willy Tarreau <w#1wt.eu> wrote:
> Hi Christoph and Roberto,
>
> thanks for the links and explanations.
>
> On Thu, Jan 31, 2008 at 01:30:55PM -0700, Christoph Dorn wrote:
> > The "COMET" concept covers streaming data to the client: the server
> > pushes new data over an HTTP connection which the client opens to the
> > server and which the server keeps open for an indefinite period of
> > time.
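> >
> > On the wire, such a held-open response looks roughly like this (a
> > hypothetical chunked exchange; the response simply never ends):
> >
> >     GET /events HTTP/1.1
> >     Host: comet.example.com
> >
> >     HTTP/1.1 200 OK
> >     Content-Type: application/json
> >     Transfer-Encoding: chunked
> >
> >     11
> >     {"msg":"event 1"}
> >     ... (more chunks follow whenever the server has something to push)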
>
> Well, what an awful concept! Had it appeared at the beginning of the web,
> maybe it could have been the subject of an HTTP evolution, to 1.2 for
> instance. But now, with all those proxies, anti-virus gateways, firewalls
> and load balancers... does it have any chance of ever working for even a
> small group of users?
>
> It is said on those sites that the problem is on the server side. That's
> wrong. The server's admin knows how to develop and tune it. The problem is
> on the client side, on sites with large numbers of clients monopolizing
> expensive, non-sharable connections for long periods. Proxies, and even
> more so anti-virus gateways, will not like this at all. Also, firewalls
> and hardware load balancers will silently drop the connection once it has
> timed out, leaving both the client and the server in the classical
> situation of an established connection on which nothing passes... Pretty
> weird. Last, a client which can only reach the server through a proxy
> configured to time out after two minutes, or even one minute (more
> common), will probably have an unpleasant experience. I hope that at least
> the client knows how to transparently reconnect every time the connection
> is torn down :-/
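>
> To make "transparently reconnect" concrete, here is a minimal sketch of
> such a client (illustrative Python; the URL and the event handler are
> made up). The essential part is the retry loop with backoff:
>
>     import time
>     import urllib.request
>
>     def handle_event(line: bytes) -> None:
>         print(line.decode("utf-8", errors="replace").rstrip())
>
>     def listen(url: str) -> None:
>         # Intermediaries tend to kill idle connections after one or two
>         # minutes, so expect teardowns and transparently reopen the
>         # stream, with a little backoff to avoid hammering the server.
>         delay = 1
>         while True:
>             try:
>                 with urllib.request.urlopen(url, timeout=120) as resp:
>                     delay = 1              # healthy stream: reset backoff
>                     for line in resp:
>                         handle_event(line)
>             except OSError:
>                 time.sleep(delay)          # torn down: wait, then retry
>                 delay = min(delay * 2, 30)
>
>     listen("http://comet.example.com/events")  # hypothetical endpoint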
>
> > The server must be able to handle a large number of open client
> > connections (subscribers) and provide a mechanism for an application to
> > trigger events to be sent to the subscribers, usually via another
> > TCP-based control protocol.
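> >
> > A minimal sketch of that architecture (illustrative Python; the ports
> > and names are made up): one listener holds the subscriber connections
> > open, while a second listener accepts a plain TCP control connection
> > from the application and fans each line out to every subscriber.
> >
> >     import asyncio
> >
> >     subscribers = set()  # one outbound queue per held-open connection
> >
> >     async def handle_subscriber(reader, writer):
> >         # Hold the HTTP connection open (request parsing omitted) and
> >         # stream events to the client as they arrive.
> >         await reader.readline()             # consume the request line
> >         writer.write(b"HTTP/1.1 200 OK\r\n"
> >                      b"Content-Type: text/plain\r\n\r\n")
> >         queue = asyncio.Queue()
> >         subscribers.add(queue)
> >         try:
> >             while True:
> >                 event = await queue.get()   # parks until an event
> >                 writer.write(event + b"\n")
> >                 await writer.drain()
> >         except ConnectionError:
> >             pass
> >         finally:
> >             subscribers.discard(queue)
> >             writer.close()
> >
> >     async def handle_control(reader, writer):
> >         # The application connects on a separate TCP port; every line
> >         # it sends is fanned out to all currently open subscribers.
> >         while line := await reader.readline():
> >             for queue in subscribers:
> >                 queue.put_nowait(line.rstrip())
> >         writer.close()
> >
> >     async def main():
> >         http_srv = await asyncio.start_server(handle_subscriber,
> >                                               port=8000)
> >         ctl_srv = await asyncio.start_server(handle_control, port=9000)
> >         async with http_srv, ctl_srv:
> >             await asyncio.gather(http_srv.serve_forever(),
> >                                  ctl_srv.serve_forever())
> >
> >     asyncio.run(main())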
>
> To be honest, I would really like such a concept to succeed, because it
> would help get rid of all the crappy, heavy technologies people use to
> build slow web sites. You know, the ones which work on a developer's PC
> when he is alone, go into production without any modification, and
> saturate a server as soon as the second user connects...
>
> People will have little choice but to discover what a socket is, and that
> will be a good thing. But I think the chances of success for a technology
> which requires so many conditions to hold, and which puts such a burden on
> so many components, are pretty dim.
>
> Even most of the docs on those sites speak about the challenge of making
> this scale... That's not very encouraging.
>
> > To summarize again, the solution I seek is to be able to service HTTP
> > requests on the same hostname and port, where requests for one set of
> > URIs are sent to Apache and requests for a second set of URIs are sent
> > to a Comet-optimized server that can handle a large number of open
> > connections.
>
> The problem you'll have with haproxy is not scaling to tens of thousands
> of concurrent connections, but performing content switching on keep-alive
> connections. Haproxy does not support keep-alive yet (I'm working on
> getting the basics working). So anything after the first request is
> considered data and will not be analyzed. That means that even if you push
> the timeouts very far, a client connecting to a server will always remain
> on that server until the connection closes. This will change when
> keep-alive gets supported, but it seems that will not be for a few months
> yet.
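>
> For reference, the content switching itself is easy to express; here is
> a minimal sketch in haproxy 1.3-style configuration (the backend names
> and the /comet/ path prefix are illustrative). Per the above, for now it
> would only apply to the first request of each connection:
>
>     frontend www
>         bind :80
>         mode http
>         # route Comet URIs to the dedicated server, the rest to apache
>         acl is_comet path_beg /comet/
>         use_backend comet_servers if is_comet
>         default_backend apache_servers
>
>     backend comet_servers
>         mode http
>         # timeouts must be pushed very far for held-open connections
>         server orbited1 127.0.0.1:8000 maxconn 50000
>
>     backend apache_servers
>         mode http
>         server apache1 127.0.0.1:8080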
>
> Maybe nginx would be able to do that (I don't know all of its features);
> it is at least known to scale. Pound, on the contrary, uses threads and
> will exhibit the well-described problems of that model beyond a few
> thousand connections.
>
> Depending on the site you're building this for, maybe you'd want to turn
> to a commercial solution such as Zeus ZXTM, which should scale and
> supports keep-alive?
>
> I hope this helps at least a little bit, but I'm sorry I'm not very positive
> about the future of such a technology :-/
>
> Regards,
> Willy
>
>

-- 
Roberto Saccon
http://rsaccon.com