Hardware Tuning and Troubleshooting

The underlying operating system beneath pfSense® software can be fine-tuned in several ways. A few of these tunables are available under Advanced Options (see System Tunables Tab). Others are outlined in the FreeBSD man page tuning(7).

The default installation includes a well-rounded set of values tuned for good performance without being overly aggressive. There are cases where hardware or drivers necessitate changing values or a specific network workload requires changes to perform optimally.

The hardware sold in the Netgate Store is tuned further since Netgate has detailed knowledge of the hardware, removing the need to rely on more general assumptions.

Note

Changes to /boot/loader.conf.local require a firewall reboot to take effect.

General Issues

Mbuf Exhaustion

A common problem encountered by users of commodity hardware is mbuf exhaustion. To oversimplify, “mbufs” are network memory buffers; portions of RAM set aside for use by networking for moving data around.

The count of active mbufs is shown on the dashboard and is tracked by a graph under Status > Monitoring.

See also

For details on mbufs and monitoring mbuf usage, see Mbuf Clusters.

Network loads that exhaust all available network memory buffers can cause the firewall to run out of mbufs, leading to a kernel panic and reboot. In certain cases this condition can also prevent expected interfaces from being initialized and made available by the operating system. This is more common with NICs that use multiple queues or are otherwise optimized for performance over resource usage.

Additionally, mbuf usage increases when the firewall is using certain features such as Limiters.

To increase the amount of mbufs available, add the following to /boot/loader.conf.local:

kern.ipc.nmbclusters="1000000"

That number can be doubled again or more as needed, but be careful not to exceed available kernel memory. On 64-bit systems with multiple GB of RAM, set it to 1 million (1000000).

Some network interfaces may need other similar values raised such as kern.ipc.nmbjumbop. In addition to the graphs mentioned above, check the output of the command netstat -m to verify if any areas are near exhaustion.
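
For example, the following commands show current mbuf usage alongside the configured limits; the exact counters shown by netstat -m vary by FreeBSD version:

netstat -m
sysctl kern.ipc.nmbclusters kern.ipc.nmbjumbop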

NIC Queue Count

For performance reasons some network cards use multiple queues for processing packets. On multi-core systems, a driver will usually want to use one queue per CPU core. A few cases exist where this can lead to stability problems, which can be resolved by reducing the number of queues used by the NIC. To reduce the number of queues, specify the new value in /boot/loader.conf.local, such as:

hw.igb.num_queues=1

The name of the sysctl OID varies by network card, but it is usually located in the output of sysctl -a, under hw.<drivername>.
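
For example, if the card uses the igb(4) driver, the relevant tunables can be listed with:

sysctl -a | grep '^hw.igb'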

Disable MSIX

Message Signaled Interrupts are an alternative to classic style Interrupts for retrieving data from hardware. Some cards behave better with MSI, MSIX, or classic style Interrupts, but the card will try the best available choice (MSIX, then MSI, then Interrupts).

MSIX and MSI can be disabled via loader tunables. Add the following to /boot/loader.conf.local:

hw.pci.enable_msix=0
hw.pci.enable_msi=0

To nudge the card to use MSI, disable only MSIX. To nudge the card to use regular Interrupts, disable both MSI and MSIX.
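
After a reboot, vmstat can confirm which interrupt type each card ended up using; MSI-X typically appears as one irq line per queue, while classic interrupts share a single line per device:

vmstat -i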

PPPoE with Multi-Queue NICs

Network cards which support multiple queues rely on hashing to assign traffic to a particular queue. This works well with IPv4/IPv6 TCP and UDP traffic, for example, but fails with other protocols such as those used for PPPoE.

This can lead to a network card underperforming with the default network settings, as noted in #4821 and FreeBSD PR 203856.

Adding a System Tunable or loader.conf.local entry for net.isr.dispatch=deferred can lead to performance gains on affected hardware.

Tuning the values of net.isr.maxthreads and net.isr.numthreads may yield additional performance gains. Generally these are best left at default values matching the number of CPU cores, but depending on the workload may work better at lower values.
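
The current values can be inspected before experimenting, for example:

sysctl net.isr.dispatch net.isr.maxthreads net.isr.numthreads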

Warning

In the past, deferred mode has led to issues on 32-bit platforms, such as crashes/panics, especially with ALTQ. There have been no recent reports, however, so it should be safe on current releases.

TSO/LRO

The settings for Hardware TCP Segmentation Offload (TSO) and Hardware Large Receive Offload (LRO) under System > Advanced on the Networking tab default to checked (disabled) for good reason. Nearly all hardware/drivers have issues with these settings, and they can lead to throughput issues. Ensure the options are checked. Sometimes disabling via sysctl is also necessary.

Add the following to /boot/loader.conf.local:

net.inet.tcp.tso=0
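
To confirm the offloads are actually off, check the interface options at runtime; they can also be toggled directly with ifconfig (igb0 here is only an example interface name):

ifconfig igb0
ifconfig igb0 -tso -lro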

IP Input Queue (intr_queue)

This will show the current setting:

sysctl net.inet.ip.intr_queue_maxlen

However, on heavily loaded installations the default value may not be enough. To check whether the queue is dropping packets:

sysctl net.inet.ip.intr_queue_drops

If that counter shows a value above 0, try doubling the current value of net.inet.ip.intr_queue_maxlen.

For example:

sysctl net.inet.ip.intr_queue_maxlen=3000

Repeat the process, doubling the value each time, until drops no longer increase and no adverse effects appear.
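
As a rough sketch, that cycle could be scripted in sh. The 60-second sampling window is an arbitrary assumption; the drops counter is cumulative, so successive readings are compared:

# Double intr_queue_maxlen while the drop counter is still rising
before=$(sysctl -n net.inet.ip.intr_queue_drops)
sleep 60
after=$(sysctl -n net.inet.ip.intr_queue_drops)
while [ "$after" -gt "$before" ]; do
    sysctl net.inet.ip.intr_queue_maxlen=$(( $(sysctl -n net.inet.ip.intr_queue_maxlen) * 2 ))
    before=$after
    sleep 60
    after=$(sysctl -n net.inet.ip.intr_queue_drops)
done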

Afterward, add an entry under System > Advanced, System Tunables tab to set net.inet.ip.intr_queue_maxlen to the chosen value (3000 in this example) so it persists across reboots.

Card-Specific Issues

Broadcom bce(4) Cards

Several users have noted issues with certain Broadcom network cards, especially those built into Dell hardware. If bce interfaces are behaving erratically, dropping packets, or causing crashes, then the following tweaks may help.

Add the following to /boot/loader.conf.local:

kern.ipc.nmbclusters="1000000"
hw.bce.tso_enable=0
hw.pci.enable_msix=0

That will increase the amount of network memory buffers, disable TSO directly, and disable MSI-X.

Packet loss with many (small) UDP packets

If a lot of packet loss is observed with UDP on bce cards, try changing the netisr settings. These can be set as system tunables under System > Advanced, on the System Tunables tab. On that page, add two new tunables:

net.isr.direct_force=1
net.isr.direct=1

Broadcom bge(4) Cards

See above, but change “bce” to “bge” in the setting names.

Chelsio cxgbe(4) Cards

It is possible to disable the allocation of resources for capabilities a router does not use (TOE, RDMA, iSCSI, and FCoE offloads) so that the network adapter can devote its entire set of resources to its network functions:

Add the following to /boot/loader.conf.local:

hw.cxgbe.toecaps_allowed=0
hw.cxgbe.rdmacaps_allowed=0
hw.cxgbe.iscsicaps_allowed=0
hw.cxgbe.fcoecaps_allowed=0
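
Loader tunables are typically visible as read-only sysctl values after boot, so the change can be verified with:

sysctl -a | grep caps_allowed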

Intel igb(4) and em(4) Cards

Certain Intel igb(4) cards, especially multi-port cards, can easily exhaust mbufs and cause kernel panics. The following tweak will prevent this from being an issue. Add the following to /boot/loader.conf.local:

kern.ipc.nmbclusters="1000000"

That will increase the amount of network memory buffers, allowing the driver enough headroom for its optimal operation.

Not all NICs and PHYs are the same, even if they share a common driver or chipset. pfSense software tries to drive network cards as fast and efficiently as possible, and some hardware combinations are unable to handle the load properly when pushed past their limits, or in certain configurations or network environments. Even if the NICs and drivers claim to support certain features like multiple queues, they may fail in practice when they are used, either due to the hardware or a specific configuration that requires a single queue. In these cases, it may be necessary to reduce the queues to one per card. Accomplish this by placing the following in /boot/loader.conf.local:

hw.igb.num_queues=1

Intel ix(4) Cards

In /boot/loader.conf.local:

kern.ipc.nmbclusters="1000000"
kern.ipc.nmbjumbop="524288"

As a sysctl (system tunable):

hw.intr_storm_threshold=10000
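
The storm threshold can also be set at runtime to test the effect before adding the permanent tunable:

sysctl hw.intr_storm_threshold=10000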

Flow Control

In some circumstances, flow control may need to be disabled. The exact method depends on the hardware involved, as in the following examples:

These example entries go in /boot/loader.conf.local:

  • cxgbe(4): hw.cxgbe.pause_settings=0

  • em(4): hw.em.fc_setting=0

  • igb(4): hw.igb.fc_setting=0

  • ixgbe(4) (aka ix): hw.ix.flow_control=0

For ix and others, the flow control value can be further tuned:

  • 0: No Flow Control

  • 1: Receive Pause

  • 2: Transmit Pause

  • 3: Full Flow Control, Default
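
On some drivers the mode can also be read or changed per interface at runtime; for example, for the first ix(4) interface (the unit number 0 is an assumption):

sysctl dev.ix.0.fc
sysctl dev.ix.0.fc=0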