Re: priority servers in an instance

From: Karl Pietri <karl#slideshare.com>
Date: Wed, 25 Feb 2009 23:05:02 -0800


From my testing, and from what the docs say, it only applies to the frontend it is defined in.
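
As a quick illustration (purely a sketch -- the names, addresses and
thresholds below are made up): dst_conn is counted on the socket that
accepted the connection, so each frontend evaluates its own count and
the two limits stay independent:

    frontend www_a 192.0.2.10:80
        mode http
        # send traffic to the shared overflow pool once this one is busy
        acl a_full dst_conn gt 100
        use_backend overflow if a_full
        default_backend pool_a

    frontend www_b 192.0.2.11:80
        mode http
        acl b_full dst_conn gt 50
        use_backend overflow if b_full
        default_backend pool_b

    backend pool_a
        mode http
        server a1 10.0.0.1:8080 maxconn 50
    backend pool_b
        mode http
        server b1 10.0.0.2:8080 maxconn 50
    backend overflow
        mode http
        server spare1 10.0.0.3:8080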

-Karl

On Wed, Feb 25, 2009 at 10:57 PM, Michael Fortson <mfortson#gmail.com> wrote:

> Oops, scratch that success report... dst_conn seems to apply to all
> connections, not just the current front-end. And it doesn't take an
> argument. With multiple front-ends, I'm not sure how it can be used to
> put a limit on only one of them.
>
>
>
> On Wed, Feb 25, 2009 at 10:52 PM, Michael Fortson <mfortson#gmail.com> wrote:
> > I think we're missing connslots support until the next release (1.3.16
> > is mentioned in the archives as the first that's going to have it).
> > Willy must be used to having it from testing the next version :)
> >
> > switched to dst_conn and gt -- worked great. Thanks Karl!
> >
> >
> >
> > On Wed, Feb 25, 2009 at 10:47 PM, Karl Pietri <karl#slideshare.com> wrote:
> >> It's the "-gt"; make it just "gt".
> >>
> >> This is what I ended up going with:
> >>
> >> frontend priority_rails_farm xx.xx.xx.xxx:80
> >>     mode http
> >>     option forwardfor
> >>     acl priority_full dst_conn gt 4
> >>     use_backend rails_farm if priority_full
> >>     default_backend priority_rails_farm
> >>
> >> The backend priority_rails_farm has 4 servers with maxconn 1 in it.
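> >>
> >> For reference, a minimal sketch of what that backend might look like
> >> (server names and addresses invented):
> >>
> >>     backend priority_rails_farm
> >>         mode http
> >>         balance roundrobin
> >>         server rails1 10.0.0.1:8000 maxconn 1
> >>         server rails2 10.0.0.2:8000 maxconn 1
> >>         server rails3 10.0.0.3:8000 maxconn 1
> >>         server rails4 10.0.0.4:8000 maxconn 1
> >>
> >> With all 4 single-connection slots busy, a 5th concurrent connection
> >> sees dst_conn above 4, so the "gt 4" ACL sends it to rails_farm instead.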
> >>
> >> -Karl
> >>
> >> On Wed, Feb 25, 2009 at 10:38 PM, Michael Fortson <mfortson#gmail.com> wrote:
> >>>
> >>> When trying this, I get:
> >>> [ALERT] 056/063556 (24031) : parsing [/etc/haproxy/haproxy.cfg:290] :
> >>> error detected while parsing ACL 'nearly_full'.
> >>> [ALERT] 056/063556 (24031) : Error reading configuration file :
> >>> /etc/haproxy/haproxy.cfg
> >>>
> >>> (haproxy version 1.3.15.7)
> >>>
> >>> source:
> >>> acl nearly_full connslots(fast_mongrels) lt 10
> >>> use_backend everything if nearly_full
> >>>
> >>> also tried:
> >>> acl nearly_full connslots(fast_mongrels) -lt 10
> >>> use_backend everything if nearly_full
> >>>
> >>>
> >>> hrm...
> >>>
> >>> On Sun, Feb 22, 2009 at 12:00 PM, Willy Tarreau <w#1wt.eu> wrote:
> >>> > On Sun, Feb 22, 2009 at 10:03:22AM -0800, Michael Fortson wrote:
> >>> >> That's really cool. I've been doing it with weighting, but this is
> >>> >> much nicer.
> >>> >
> >>> > It was proposed and developed by someone on the list (I don't
> >>> > remember whom right now) for exactly this purpose.
> >>> >
> >>> >> Am I right in assuming that in this example, when nearly_full is
> >>> >> triggered,
> >>> >> it will switch entirely to that?
> >>> >
> >>> > yes, back1 will get traffic only when it's not considered full, and
> >>> > back2 will get the excess traffic.
> >>> >
> >>> >> how does the balance between the two
> >>> >> backends happen in this instance?
> >>> >
> >>> > There's no balance. The second backend only receives overloads. See
> >>> > that as a cheap vs expensive pool of servers (or local vs remote).
> >>> >
> >>> >> Should you just repeat the definition of
> >>> >> the first backend within the second to go "wide" with the server
> >>> >> spread?
> >>> >
> >>> > Yes, this seems appropriate depending on your workload. Maybe you'll
> >>> > remove "maxqueue" from the second though.
> >>> >
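> >>> > For reference, a minimal sketch of the whole pattern on a version
> >>> > that has connslots (1.3.16, per this thread); names, addresses and
> >>> > limits are all made up:
> >>> >
> >>> >     frontend www 192.0.2.1:80
> >>> >         mode http
> >>> >         # overflow to back2 only when back1 has no free slots left
> >>> >         acl back1_full connslots(back1) lt 1
> >>> >         use_backend back2 if back1_full
> >>> >         default_backend back1
> >>> >
> >>> >     backend back1
> >>> >         mode http
> >>> >         # cheap/local pool; small maxqueue so excess spills quickly
> >>> >         server cheap1 10.0.0.1:8080 maxconn 20 maxqueue 1
> >>> >         server cheap2 10.0.0.2:8080 maxconn 20 maxqueue 1
> >>> >
> >>> >     backend back2
> >>> >         mode http
> >>> >         # expensive/remote pool; no maxqueue, as suggested above
> >>> >         server spare1 10.0.1.1:8080 maxconn 20
> >>> >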
> >>> > Hoping this helps,
> >>> > Willy
> >>> >
> >>> >
> >>
> >>
> >
>
Received on 2009/02/26 08:05
