Hi everyone!
More than 4 months have elapsed since 1.3.14. This is far too long.
While reviewing changes, I realized that we did a lot of work in
this time, the diff is 11400 lines long.
Now that the code has stabilized, I'm really pleased to release 1.3.15.
It will also help contributors to resync.
A lot of new features have been merged since 1.3.14. I will not enumerate
all of them right here, but among the most noticeable ones that come to
mind are:
- updates to the statistics subsystem (HTTP and UNIX), mostly by
Krzysztof, who also contributed an SNMP agent. Several new indicators
are present in the statistics, and it is now possible to assign IDs
to proxies and servers, and to specifically request stats for a given
proxy or server.
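As a rough sketch of what assigning explicit IDs could look like (the
addresses and names below are invented; check the configuration manual
for the exact keywords):

```
listen app 0.0.0.0:8080
    stats uri /stats
    # "id" pins the numeric identifier shown in the stats output,
    # so it stays stable across configuration reloads
    server web1 192.168.0.10:80 id 1011 check
    server web2 192.168.0.11:80 id 1012 check
```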
- server health-checks now follow several interval timers. One is for
the down state, another for a transitional state, and the usual one
remains for the up state. This brings the ability to speed up detection
of failures and at the same time reduce the number of checks sent to a
server which is down.
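A minimal sketch of the three timers on one server line (values and
names invented for illustration):

```
backend app
    # inter     : interval while the server is up (the usual one)
    # fastinter : shorter interval during up/down transitions,
    #             to detect failures faster
    # downinter : longer interval while the server is down,
    #             to reduce the load of useless checks
    server web1 192.168.0.10:80 check inter 2000 fastinter 500 downinter 5000
```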
- a server may now track another one instead of running a check. This is
useful with source hash on multiple protocols when one client must
absolutely go to the same server for all protocols, as well as in order
to reduce the number of checks sent to a server which is referenced by
several proxies.
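A hedged example of tracking (backend and server names are made up):
only one backend runs the check, and the other mirrors its state.

```
backend www
    server srv1 192.168.0.10:80 check inter 2000

backend ftp
    # no check here: this server's state follows srv1 in backend "www"
    server srv1 192.168.0.10:21 track www/srv1
```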
- a new "leastconn" load-balancing algorithm. People have been requesting
this for years. In the beginning, I did not agree because people wanted
it for HTTP (which is wrong without keep-alive), and the only contributed
implementation was slow so I did not merge it. Now that people are using
haproxy to load-balance database servers, and terminal servers, it clearly
makes sense to have such an algorithm. This new implementation is fast,
respects weights, and does both leastconn + round-robin, so that if two
servers have the same number of connections, and one single short
connection regularly comes in, each server will get it in turn. This is
a dynamic algorithm, so it is compatible with slowstart.
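A minimal illustration of the new algorithm (invented backend and
addresses), showing that weights are taken into account:

```
backend mysql
    # pick the server with the fewest active connections;
    # round-robin is used between servers that are tied
    balance leastconn
    server db1 192.168.0.20:3306 check weight 10
    server db2 192.168.0.21:3306 check weight 20
```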
- a new "POST" parameter analysis to complement the already existing
"url_param" hashing. Some guys at Nokia needed url_param to match a
session identifier both in the query string of GET requests, and in
arguments passed in POST requests. So they have implemented the
feature. The way it was done is particularly clean, it supports
content-length and chunks, detects "Expect" headers, and lets the
user define the maximal data length for the analysis.
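A sketch of what this might look like in a configuration (parameter
name and addresses are invented; see the docs for the exact syntax):

```
backend app
    # hash on "sessionid", found either in the query string of GET
    # requests or, with check_post, in the body of POST requests
    balance url_param sessionid check_post
    server app1 192.168.0.10:80 check
    server app2 192.168.0.11:80 check
```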
- a new trick for multi-site load balancing, based on redirections.
Instead of having haproxy forward a request to a server, it can now
return a 302 response whose Location header is the incoming request's
URI prefixed by a per-server location. This may be useful either
for multi-site load balancing with small inter-site links, or for
large static file servers (eg: video). Non-GET/HEAD requests are
still forwarded to the server though.
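A hedged sketch of the redirection trick (hostnames invented): each
server entry carries the prefix that clients are redirected to.

```
backend static
    balance roundrobin
    # GET/HEAD requests get a 302 with Location = prefix + original URI;
    # other methods are still forwarded to the real server
    server cdn1 192.168.0.30:80 redir http://static1.example.com check
    server cdn2 192.168.0.31:80 redir http://static2.example.com check
```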
- fully transparent proxy on Linux. With the appropriate patches (tproxy-v4
should be OK), haproxy may bind to any foreign address for the listening
sockets, and to any address including the client's when connecting to the
server. It does this without any NAT, and having both features enabled
makes it possible to install it on a gateway or firewall to transparently
analyze, log, filter, or switch traffic. Load-balancing active FTP also
becomes possible with that feature.
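A minimal sketch of the server-side half (addresses invented; this
assumes a kernel patched for tproxy and haproxy built with the matching
option):

```
backend transparent_out
    # connect to the server using the client's own address as source,
    # so the server sees the real client IP without any NAT
    source 0.0.0.0 usesrc clientip
    server web1 192.168.0.10:80 check
```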
- better handling of connection failures. If a connection fails to establish
to a server, now haproxy does not immediately retry because there's a
chance that it will fail again till the retry counter expires. Instead, it
will wait for 1 second before trying again. This saves users' sessions
arriving on the load balancer during a quick server restart. The redispatch
methods have also been improved so that we do our best to avoid
reconnecting to the same server if other ones are available.
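For illustration (backend and addresses invented), the relevant knobs
might be combined like this:

```
backend app
    # up to 3 reconnection attempts, with a 1-second pause after a
    # failed attempt; redispatch lets a retry pick another server
    retries 3
    option redispatch
    server web1 192.168.0.10:80 check
    server web2 192.168.0.11:80 check
```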
- the build process and the Makefile have been reworked.
- the documentation now references all supported keywords.
There are surely other things I'm missing. Sorry if I did not mention your
particular contribution; please complain loudly.
The mailing list is speeding up contributions and ideas. We are now slightly
more than 100 members sharing ideas, code and experience, and providing
support to newcomers. This is really nice.
For the next step, I have already started working on header processing.
I thought I could do something simple and efficient, applying what the
RFC suggests, ie merging headers with the same name (Apache does this
BTW). It would have made code and configuration easier... Except that
there are buggy browsers which have trouble with multiple values in a
"Set-Cookie" header (which is why they're never merged). Grrr... I'll
have to do that differently. BTW, I found RFC4229 which enumerates a
lot of well-known HTTP headers.
I have also started working on supporting a bitmask to apply to the
source address before performing a hash in "balance source" mode. This
would make it possible to assign the same server to a client whose
address changes within the same network (a classical problem with proxy
farms). For this I need to extend the hashing algorithms, and I found
very nice work in this area on Bob Jenkins' site (I've merged the
experimentation code already). This feature is easy to implement and
will be merged soon. A second punch in the hash algorithm will be the
double hash in order to rebalance only the users of a dead server. I
still don't know whether it's desirable to apply it unconditionally.
Most of the rest will depend on the header rework being finished. I
have a draft-looking TODO list that I will polish a bit and publish
so that people can participate (ideas or code).
I've built 1.3.15 for Linux and Solaris. Sources and executables are
available at the usual places:
While there are many new features, the code appears to be stable. This
version survived the 10 Gbps and 40000 hits/s tests :-)
However, still treat it with more care than you would a trivial update.
Received on 2008/04/19 23:53