OpenVPN Data Channel Offload (DCO)

OpenVPN Data Channel Offload (DCO) delivers significant performance gains when processing encrypted OpenVPN data by reducing the amount of context switching that happens for each packet. DCO accomplishes this by keeping most data handling tasks in the kernel rather than repeatedly switching between kernel and user space for encryption and packet handling. This makes processing each packet more efficient while also potentially taking advantage of hardware encryption offloading support in the kernel. DCO also adds support for multi-threaded encryption, allowing for even greater performance gains.

Netgate worked with OpenVPN to develop and integrate support for OpenVPN Data Channel Offload (DCO) into FreeBSD and pfSense® Plus software version 22.05 and later.

Warning

pfSense® Plus software version 22.05 or later is required to use OpenVPN DCO. OpenVPN DCO is not available on pfSense CE Software.

DCO is not a change to the OpenVPN protocol; it is a change in how an endpoint processes encrypted data. Thus, DCO is beneficial even when only one endpoint is capable of DCO. That said, tunnels employing DCO on all peers will see the most benefit. With DCO on only one peer the performance improvement is still notable, but not as significant as the gains with DCO support on both endpoints.

Note

Some OpenVPN features and use cases are not compatible with DCO. See Limitations for a list of known DCO limitations.

Using OpenVPN DCO

DCO support is a per-tunnel option and is not enabled by default for new or upgraded tunnels. Existing tunnels continue to function as they have in the past.

DCO can be enabled for both new and existing tunnels by using a simple checkbox option on OpenVPN server and client instances. The current best practice is to create a new tunnel with DCO to minimize the chance of problems with existing clients.

Limitations

There are a few limitations in OpenVPN DCO generally and in the current DCO implementation on FreeBSD/pfSense software, including:

  • Encryption is limited to AES-256-GCM, AES-128-GCM, and ChaCha20-Poly1305.

  • DCO support requires a TLS-based tunnel type, such as SSL/TLS, SSL/TLS+User Auth, or User Auth; it is not compatible with Shared Key tunnels.

  • DCO support is only present in OpenVPN 2.6.0 and later.

  • DCO is only compatible with UDP; it cannot be used with TCP.

  • DCO is not yet able to utilize internal routing in OpenVPN (iroute). Remote access use cases work, as do site-to-site setups with one client per server, but DCO does not yet function with multiple site-to-site clients on a single server which require LAN-to-LAN routing.

  • Using a /30 or smaller tunnel network for peer-to-peer tunnels (one server with one client) is not compatible with DCO. There are problems with the code for this mode in OpenVPN which can lead to failed connections and instability.

  • Compression is not supported with DCO. The GUI disables compression options when DCO is enabled for an instance, but for a client instance the server could still push a compression option which would make the client fail to pass traffic.

  • Some features are not compatible with DCO or are not relevant with DCO. These options include:

    • Explicit exit notify

    • Inactivity timeouts

    • UDP fast I/O

    • Send/receive buffer sizes

  • Per-peer data usage is not tracked properly.

    Until this is resolved peer data usage on the OpenVPN status page will not reflect the actual amount of data transferred between peers.
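Taken together, these limitations constrain the options a DCO-capable instance can use. As a rough illustration only (the pfSense GUI generates the actual configuration; the addresses and interface name below are hypothetical), an OpenVPN 2.6 server configuration compatible with DCO might look like:

```
# Hypothetical server options illustrating the DCO constraints above.
proto udp                       # DCO is UDP-only; TCP is not supported
dev ovpns1
dev-type tun
tls-server                      # DCO requires a TLS-based tunnel mode
server 10.8.0.0 255.255.255.0   # tunnel network larger than /30
data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305
                                # the only ciphers DCO supports
# No compression options: compression is incompatible with DCO
```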

DCO and Routing

DCO does not currently honor internal routes from client-specific overrides (i.e. iroute) for multiple site-to-site clients on a single server, but it does honor kernel route destinations that would normally be ignored by non-DCO OpenVPN.

Assign clients static addresses in overrides (after patching #13274), then set up custom routes in the OpenVPN custom options with complete destinations defined, or set up FRR and exchange routes via BGP.
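For example, routes with complete destinations could be added in the custom options of the server instance. The networks and tunnel addresses below are hypothetical placeholders; substitute values appropriate to the actual environment:

```
# Hypothetical kernel routes to remote LANs reachable through the DCO tunnel.
# 10.8.0.2 and 10.8.0.3 are static client tunnel addresses assigned via overrides.
route 192.168.10.0 255.255.255.0 10.8.0.2
route 192.168.20.0 255.255.255.0 10.8.0.3
```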

DCO and Hardware Cryptographic Acceleration

For optimal performance with DCO, ensure a hardware cryptographic accelerator is present and enabled.

QAT currently offers the highest performance for AES-256-GCM. If the hardware supports QAT, enable QAT.

If there is no QAT device available but the CPU supports SIMD instruction sets, then enable IPsec-MB and use AES-GCM, ChaCha20-Poly1305, or even AES-CBC. This can also benefit uses of these ciphers which are not yet accelerated by QAT.

If the hardware does not support QAT or IPsec-MB but it does support AES-NI, ensure the AES-NI kernel module is loaded for optimal performance with AES-256-GCM. Though OpenSSL can utilize AES-NI without the module loaded, performance is poor in that state and can even be slower than with DCO disabled.
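On FreeBSD-based systems the AES-NI kernel module can be loaded at boot with a loader tunable. This is a sketch of the raw FreeBSD form; on pfSense software the module is normally enabled through the GUI cryptographic hardware setting rather than by editing files by hand:

```
# /boot/loader.conf.local
aesni_load="YES"    # load aesni.ko at boot so OpenSSL/DCO can use AES-NI efficiently
```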

Note

pfSense Plus software supports ChaCha20-Poly1305 with OpenVPN DCO, but currently only IPsec-MB can accelerate that algorithm. At this time, neither AES-NI nor QAT can accelerate ChaCha20-Poly1305. Some newer QAT hardware may be capable of accelerating ChaCha20-Poly1305, but the current QAT drivers do not yet include support for that encryption algorithm.