Hardware Tuning and Troubleshooting¶
The underlying operating system beneath pfSense® software can be fine-tuned in several ways. A few of these tunables are available under Advanced Options (see System Tunables Tab). Others are outlined in the FreeBSD man page tuning(7).
The default installation includes a well-rounded set of values tuned for good performance without being overly aggressive. There are cases where hardware or drivers necessitate changing values or a specific network workload requires changes to perform optimally.
The hardware sold in the Netgate Store is tuned further since Netgate has detailed knowledge of the hardware, removing the need to rely on more general assumptions.
Values set in /boot/loader.conf.local require a firewall reboot to take effect.
A common problem encountered by users of commodity hardware is mbuf exhaustion. To oversimplify, “mbufs” are network memory buffers; portions of RAM set aside for use by networking for moving data around.
The count of active mbufs is shown on the dashboard and is tracked by a graph under Status > Monitoring.
For details on mbufs and monitoring mbuf usage, see Mbuf Clusters.
If the firewall runs out of mbufs, it can lead to a kernel panic and reboot under certain network loads that exhaust all available network memory buffers. In certain cases this condition can also result in expected interfaces not being initialized and made available by the operating system. This is more common with NICs that use multiple queues or are otherwise optimized for performance over resource usage.
Additionally, mbuf usage increases when the firewall is using certain features such as Limiters.
To increase the amount of mbufs available, add the following to /boot/loader.conf.local:

kern.ipc.nmbclusters="131072"

That number can again be doubled or more as needed, but be careful not to exceed available kernel memory. On 64-bit systems with multiple GB of RAM, set it to 1 million (1000000).
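Before raising the value further, it helps to estimate the memory involved. The following back-of-the-envelope sketch in Python (not part of the original procedure) computes the worst-case kernel memory tied up by the cluster pool, using the standard FreeBSD cluster size MCLBYTES of 2048 bytes:

```python
# Worst-case kernel memory consumed by the mbuf cluster pool.
# MCLBYTES, the standard mbuf cluster size on FreeBSD, is 2048 bytes.
MCLBYTES = 2048

def cluster_pool_bytes(nmbclusters: int) -> int:
    """Bytes consumed if every cluster were allocated at once."""
    return nmbclusters * MCLBYTES

# kern.ipc.nmbclusters="1000000" -> roughly 1.9 GiB in the worst case
print(cluster_pool_bytes(1_000_000) / 2**30)
```

On a 64-bit system with several GB of RAM that worst case is comfortable; on smaller systems it is a reason to stop doubling sooner.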
Some network interfaces may need other similar values raised, such as kern.ipc.nmbjumbop. In addition to the graphs mentioned above, check the output of the command netstat -m to verify whether any areas are near exhaustion.
For performance reasons some network cards use multiple queues for processing packets. On multi-core systems, a driver will usually want to use one queue per CPU core. A few cases exist where this can lead to stability problems, which can be resolved by reducing the number of queues used by the NIC. To reduce the number of queues, specify the new value in /boot/loader.conf.local.
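As an illustrative sketch only, assuming an igb(4) card (the tunable name varies by driver, so substitute the one for the NIC in use):

```
# /boot/loader.conf.local -- assumed example for the igb(4) driver
hw.igb.num_queues="1"
```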
The name of the sysctl OID varies by network card, but it is usually located in the output of sysctl -a, under the hw branch for the driver in question.
Message Signaled Interrupts are an alternative to classic style Interrupts for retrieving data from hardware. Some cards behave better with MSI, MSIX, or classic style Interrupts, but the card will try the best available choice (MSIX, then MSI, then Interrupts).
MSIX and MSI can be disabled via loader tunables. Add the following to /boot/loader.conf.local:

hw.pci.enable_msix=0
hw.pci.enable_msi=0

To nudge the card to use MSI, disable only MSIX (the first line above). To nudge the card to use regular Interrupts, disable both MSI and MSIX.
Network cards which support multiple queues rely on hashing to assign traffic to a particular queue. This works well with IPv4/IPv6 TCP and UDP traffic, for example, but fails with other protocols such as those used for PPPoE.
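To illustrate why, here is a toy Python model of hash-based queue selection. It is not how any real NIC computes the hash (hardware typically uses a Toeplitz hash over the IP/port fields; zlib.crc32 is a stand-in here), but it shows the consequence: parseable flows spread across queues, while a protocol the hardware cannot parse collapses onto a single queue.

```python
import zlib

NUM_QUEUES = 4  # e.g. one queue per CPU core

def queue_for(flow_tuple) -> int:
    """Toy stand-in for the NIC's receive-side flow hash."""
    return zlib.crc32(repr(flow_tuple).encode()) % NUM_QUEUES

# Distinct TCP flows land on (usually) different queues:
tcp_flows = [("198.51.100.1", 1234, "203.0.113.5", 443),
             ("198.51.100.2", 5678, "203.0.113.5", 443)]
print([queue_for(f) for f in tcp_flows])

# PPPoE frames expose no hashable IP/port fields, so every frame
# yields the same "flow" key and therefore the same queue:
print({queue_for(("pppoe",)) for _ in range(100)})  # always one queue
```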
Adding a System Tunable or loader.conf.local entry for net.isr.dispatch=deferred can lead to performance gains on affected hardware.
Tuning the value of net.isr.numthreads may yield additional performance gains. Generally this is best left at the default value matching the number of CPU cores, but depending on the workload it may work better at lower values.
In the past, deferred mode has led to issues on 32-bit platforms, such as crashes/panics, especially with ALTQ. There have been no recent reports, however, so it should be safe on current releases.
The settings for Hardware TCP Segmentation Offload (TSO) and Hardware Large Receive Offload (LRO) under System > Advanced on the Networking tab default to checked (disabled) for good reason. Nearly all hardware/drivers have issues with these settings, and they can lead to throughput issues. Ensure the options are checked. Sometimes disabling via sysctl is also necessary.
Add the following to /boot/loader.conf.local:

net.inet.tcp.tso="0"

This will show the current length of the IP input queue (net.inet.ip.intr_queue_maxlen):

sysctl net.inet.ip.intr_queue_maxlen
However, on heavily loaded installations this may not be enough. Here is how to check for drops:

sysctl net.inet.ip.intr_queue_drops
If the above shows values above 0, try doubling the current value of net.inet.ip.intr_queue_maxlen. For example:

sysctl net.inet.ip.intr_queue_maxlen=2000
Keep performing the above until the point is found where drops are eliminated without any adverse effects. Afterwards, add an entry under System > Advanced, System Tunables tab to set net.inet.ip.intr_queue_maxlen to the new value so it persists across reboots.
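The check-and-double procedure above can be sketched in Python; drops_at is a hypothetical stand-in for reading net.inet.ip.intr_queue_drops after running traffic at a given queue length:

```python
def tune_queue_len(start: int, drops_at) -> int:
    """Double the queue length until no more drops are observed."""
    qlen = start
    while drops_at(qlen) > 0:
        qlen *= 2
    return qlen

# Toy model: drops stop once the queue holds at least 3000 packets.
print(tune_queue_len(1000, lambda q: max(0, 3000 - q)))  # -> 4000
```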
Several users have noted issues with certain Broadcom network cards, especially those built into Dell hardware. If bce interfaces are behaving erratically, dropping packets, or causing crashes, the following tweaks may help.

Add the following to /boot/loader.conf.local:

kern.ipc.nmbclusters="1000000"
hw.bce.tso_enable=0
hw.pci.enable_msix=0

That will increase the amount of network memory buffers, disable TSO directly, and disable MSI-X.
Packet loss with many (small) UDP packets¶
If a lot of packet loss is observed with UDP on bce cards, try changing the netisr settings. These can be set as system tunables under System > Advanced, on the System Tunables tab. On that page, add two new tunables.
It is possible to disable the allocation of resources that are not related to the router so that the network adapter can use its entire set of resources for the corresponding functions:
Add the following to /boot/loader.conf.local:

hw.cxgbe.toecaps_allowed=0
hw.cxgbe.rdmacaps_allowed=0
hw.cxgbe.iscsicaps_allowed=0
hw.cxgbe.fcoecaps_allowed=0
Certain Intel igb(4) cards, especially multi-port cards, can easily exhaust mbufs and cause kernel panics. The following tweak will prevent this from being an issue. Add the following to /boot/loader.conf.local:

kern.ipc.nmbclusters="1000000"

That will increase the amount of network memory buffers, allowing the driver enough headroom for its optimal operation.
Not all NICs and PHYs are the same, even if they share a common driver or
chipset. pfSense software tries to drive network cards as fast and efficiently
as possible, and some hardware combinations are unable to handle the load
properly when pushed past their limits, or in certain configurations or network
environments. Even if the NICs and drivers claim to support certain features
like multiple queues, they may fail in practice when they are used, either due
to the hardware or a specific configuration that requires a single queue. In
these cases, it may be necessary to reduce the queues to one per card.
Accomplish this by placing an entry in /boot/loader.conf.local setting the driver's queue count tunable to 1, or by setting the equivalent value as a sysctl (system tunable).
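For example, assuming an igb(4) card (hypothetical; other drivers use a similarly named tunable, so adjust for the driver in use):

```
# /boot/loader.conf.local -- assumed igb(4) example
hw.igb.num_queues="1"
```

After a reboot, the interrupt vectors shown by vmstat -i can confirm that only one queue is active.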
VMware VMXNET interfaces support multiple queues but do not utilize them by default. Multiple queues enable network performance to scale with the number of vCPUs and allows for parallel packet processing.
The following example values are for a virtual machine with 8 vCPUs.
Edit or create /boot/loader.conf.local and add the following content:

hw.pci.honor_msi_blacklist=0
hw.vmx.txnqueue=8
hw.vmx.rxnqueue=8
hw.vmx.txndesc=2048
hw.vmx.rxndesc=2048

Save the file, then reboot and check the queues with vmstat -i at a command prompt.
In some circumstances, flow control may need to be disabled. The exact method depends on the hardware involved, as in the following examples:
These example entries go in /boot/loader.conf.local:
- ixgbe(4) (aka ix)
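One hedged example, assuming the ix(4) loader tunable hw.ix.flow_control (newer iflib-based driver versions may instead expose this at runtime, e.g. as dev.ix.0.fc; verify which applies to the installed driver):

```
# /boot/loader.conf.local -- assumed ix(4) example
hw.ix.flow_control=0
```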
For ix and others, the flow control value can be further tuned:
0: No Flow Control
1: Receive Pause
2: Transmit Pause
3: Full Flow Control, Default