In addition to the per-pool or per-server options, there are also a few global options that control the behavior of relayd. These settings are under Services > Load Balancer on the Settings tab:
Timeout
The global timeout in milliseconds for checks. Leave blank to use the default value of 1000 ms (1 second). If a loaded server pool takes longer than this to respond to requests, increase the timeout.
Interval
The interval in seconds at which the members of a pool will be checked. Leave blank to use the default interval of 10 seconds. To check the servers more (or less) frequently, adjust the timing accordingly.
Prefork
Number of processes used by relayd for handling inbound connections to relays. This option only applies to relays using DNS mode; it has no effect on TCP mode, which uses a redirect rather than a relay. Leave blank to use the default value of 5 processes. If the server is busy, increase this amount to accommodate the load.
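As a point of reference, these three settings map to global directives in relayd.conf(5). The fragment below is an illustrative sketch only (pfSense generates the actual configuration file itself); the values shown are the defaults described above:

```
# Hypothetical relayd.conf fragment -- global options
timeout 1000        # check timeout, in milliseconds
interval 10         # check interval, in seconds
prefork 5           # relay processes for inbound connections (DNS mode relays)
```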
The last step in configuring Load Balancing is to configure firewall rules to allow traffic to the pool.
For TCP mode, the firewall rules must permit traffic to the internal private IP addresses of the servers, the same as with NAT rules, as well as the port they are listening on internally. Create an alias for the servers in the pool to make the process easier, and create a single firewall rule on the interface where the traffic destined to the pool will be initiated (usually WAN) allowing the appropriate source (usually any) to a destination of the alias created for the pool. A specific example of this is provided in Configuring firewall rules. For more information on firewall rules, refer to Firewall.
For DNS mode, firewall rules must allow traffic directly to the Virtual Server IP address and port, not the pool servers.
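The difference between the two modes can be illustrated with pf-style rules. These are conceptual sketches only, not rules to enter verbatim; the interface macro, alias name, addresses, and port are examples:

```
# TCP mode: pass traffic to the internal pool servers, using an alias
# (here "WebServers") containing their private IP addresses
pass in on $wan proto tcp from any to <WebServers> port 80

# DNS mode: pass traffic to the Virtual Server IP address itself
pass in on $wan proto tcp from any to 198.51.100.10 port 80
```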
There is one additional configuration option for server load balancing under System > Advanced on the Miscellaneous tab: Use sticky connections, found in the Load Balancing section. When this option is checked, the firewall will attempt to send clients that have an active connection to a pool server back to the same server for subsequent connections.
Once the client closes all active connections and the closed states time out, the sticky association is lost. Sticky connections may be desirable for web load balancing configurations where client requests must always go to the same server, for session persistence or similar reasons. The mechanism is not perfect: if a client's browser closes all TCP connections to the server after loading a page and then sits idle for 10 minutes or more before loading the next page, that next page may be served from a different server. In practice this is rarely a problem, since most browsers do not immediately close connections and the state persists long enough. However, if a site strictly requires that a given client never reach a different pool server no matter how long the browser sits idle, look for a different load balancing solution. The Source Tracking Timeout box below the option allows the knowledge of the client/server relationship to persist longer.
Sticky connections are generally unreliable for this purpose and can also have other unintended side effects. Full-featured proxy packages such as HAProxy have much better mechanisms and options for maintaining client/server relationships.
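Conceptually, sticky connection tracking behaves like the sketch below: a table maps each client address to the pool server it last used, and entries expire after a timeout. This is an illustrative model of the behavior described above, not pfSense or pf internals; all names and values are hypothetical.

```python
import time

class StickyTable:
    """Toy model of sticky source tracking: client IP -> pool server,
    with entries expiring after timeout_seconds (cf. Source Tracking Timeout)."""

    def __init__(self, timeout_seconds, servers):
        self.timeout = timeout_seconds
        self.servers = list(servers)
        self.table = {}       # client IP -> (server, last_seen timestamp)
        self.next_idx = 0     # round-robin position for new clients

    def pick_server(self, client_ip, now=None):
        now = time.time() if now is None else now
        entry = self.table.get(client_ip)
        if entry is not None:
            server, last_seen = entry
            if now - last_seen <= self.timeout:
                # Tracking entry still valid: reuse the same pool server.
                self.table[client_ip] = (server, now)
                return server
            # Entry expired: the client may land on a different server.
            del self.table[client_ip]
        # New (or expired) client: assign the next server round-robin.
        server = self.servers[self.next_idx % len(self.servers)]
        self.next_idx += 1
        self.table[client_ip] = (server, now)
        return server
```

The sketch shows why an idle client can switch servers: once the tracking entry expires, the next request is assigned as if the client were new.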
For additional information, see the January 2015 Hangout on Server Load Balancing and Failover in the Hangouts Archive, which includes information on configuring HAProxy.