DPDK Configuration¶
Commands in this section configure hardware settings for DPDK devices.
DPDK Settings¶
- dataplane dpdk dev <pci-id> (crypto|crypto-vf):
Configures QAT devices for cryptographic acceleration. See Setup QAT Compatible Hardware for details.
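For example, assuming a hypothetical QAT virtual function at PCI address 0000:3d:01.0, it could be handed to the dataplane with:
dataplane dpdk dev 0000:3d:01.0 crypto-vf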
- dataplane dpdk dev (<pci-id>|<vmbus-uuid>) network [name <name>] [num-rx-queues [<rq>]] [num-tx-queues [<tq>]] [num-rx-desc [<rd>]] [num-tx-desc [<td>]] [tso (off|on)] [devargs <name>=<value>]:
Configures a manually approved list of network interface PCI devices or Hyper-V/Azure VMBUS device UUIDs and their options. Typically the dataplane will automatically attempt to use eligible interfaces, but this command overrides that behavior by explicitly listing devices which will be used by the dataplane.
See also
See Setup NICs in Dataplane for more information and examples for adding devices in this manner.
Warning
Adding devices in this way is not compatible with dataplane dpdk blacklist, but when devices are listed manually via dataplane dpdk dev, denying devices in that way is unnecessary.
- name <name>:
Sets a custom name for a network device in TNSR instead of the automatically generated name (<Link Speed><Bus Location>). For example, device 0000:06:00.0 can have a custom name of WAN instead of the default GigabitEthernet6/0/0. Used for convenience and to make interface names self-documenting.
See also
See Customizing Interface Names for additional details including limitations on names.
- num-rx-queues [<rq>] num-tx-queues [<tq>]:
The number of receive and transmit queues for this device.
- num-rx-desc [<rd>] num-tx-desc [<td>]:
Receive and transmit descriptor ring sizes (number of descriptors) for this device. Certain network cards, such as Fortville models, may need the descriptors set to 2048 to avoid dropping packets at high loads.
- tso (on|off):
TCP segmentation offload (TSO). When enabled on hardware which supports TSO, packet data is offloaded to hardware in large quantities and the hardware handles segmentation into MTU-sized chunks rather than performing segmentation in software. This results in improved throughput as shifting the per-packet processing to hardware reduces the burden on the network stack. Disabled by default.
Note
The default values for these configuration options can be set by dataplane dpdk dev default network <options>. These default values are used by the dataplane when an interface does not have a specific value set. The name option must be unique for each interface and thus does not support a default value.
- devargs <name>=<value>:
Configures a device argument name and value pair with those components separated by =. Device arguments enable or control optional features on a device. For example, dataplane dpdk dev 0000:06:00.0 network devargs disable_source_pruning=1.
A single command can only set one name and value pair. However, it is possible to set multiple device arguments by running the command multiple times, each time with a different device argument name and value pair.
Note
The combined length of each name=value pair must be 128 bytes or less.
Warning
Use extreme caution when forming these entries. Due to the large variation in drivers and acceptable parameters, TNSR cannot validate devarg entries.
An invalid or incompatible devarg entry will prevent the dataplane from attaching to the affected interfaces correctly, rendering the interfaces inoperable until the devarg entry is corrected.
See also
Each driver supports a different set of arguments. Look in the DPDK NIC Drivers Documentation for information on the arguments supported by a specific poll mode driver (PMD).
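As a sketch of how the options above combine, the following commands configure one device explicitly and set descriptor defaults for all other devices. The PCI ID, the name WAN, the 2048 descriptor value, and the devargs pair come from the examples earlier in this section; the queue counts are arbitrary illustrations:
dataplane dpdk dev 0000:06:00.0 network name WAN num-rx-queues 2 num-tx-queues 2 tso on
dataplane dpdk dev 0000:06:00.0 network devargs disable_source_pruning=1
dataplane dpdk dev default network num-rx-desc 2048 num-tx-desc 2048
As noted above, each devargs name and value pair requires its own command.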
- dataplane dpdk blacklist <vendor-id>:<device-id>:
Prevents the dataplane from automatically attaching to any device which matches a specific PCI vendor and device identifier. Useful for preventing the dataplane from attaching to hardware devices which are known to be incompatible.
Warning
Listing devices in this way is not compatible with dataplane dpdk dev.
- dataplane dpdk blacklist (<pci-id>|<vmbus-uuid>):
Similar to the previous form, but explicitly prevents the dataplane from attaching to a specific PCI device or Hyper-V/Azure VMBUS device UUID.
Warning
Listing devices in this way is not compatible with dataplane dpdk dev.
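For example, the following hypothetical entries deny every device matching one PCI vendor and device ID, and deny one specific device by PCI address:
dataplane dpdk blacklist 8086:1521
dataplane dpdk blacklist 0000:04:00.1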
- dataplane dpdk decimal-interface-names:
Disabled by default. When set, interface names automatically generated by the dataplane will use decimal values for bus location values rather than hexadecimal values. Linux uses decimal values when forming interface names (e.g. enp0s20f1), so administrators may find using decimal values more familiar.
For example, device ID 0000:00:14.1 (enp0s20f1 in the host OS) would normally be GigabitEthernet0/14/1 since the value 14 in the bus slot is in hexadecimal. With decimal-interface-names set, the name would be GigabitEthernet0/20/1 instead.
- dataplane dpdk iova-mode (pa|va):
Manually configures the IO Virtual Address (IOVA) mode used by DPDK when performing hardware IO from user space. Hardware must use IO addresses, but it cannot use user space virtual addresses directly. These IO addresses can be either physical addresses (PA) or virtual addresses (VA). No matter which mode is set, these addresses are abstracted to TNSR as IOVA addresses so TNSR does not need to use them directly.
In most cases the default IOVA mode selected by DPDK is optimal.
See also
For more detail on IOVA, consult the DPDK documentation.
- pa:
Physical Address mode. IOVA addresses used by DPDK correspond to physical addresses, and both physical and virtual memory layouts match. This mode is safest from the perspective of the hardware, and is the mode chosen by default. Most hardware supports PA mode at a minimum.
The primary downside of PA mode is that memory fragmentation in physical space must also be reflected in virtual memory space.
- va:
Virtual Address mode. IOVA addresses do not follow the layout of physical memory; instead, the physical memory layout is changed to match the virtual memory layout. Because virtual memory appears as one contiguous segment, large memory allocations are more likely to succeed.
The primary downside of VA mode is that it relies on kernel support and the availability of IOMMU.
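For example, on a host with a working IOMMU where large contiguous allocations are preferred, VA mode could be forced explicitly (illustration only; in most cases the DPDK default should be left alone):
dataplane dpdk iova-mode va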
- dataplane dpdk log-level (alert|critical|debug|emergency|error|info|notice|warning):
Sets the log level for messages generated by DPDK. The default log level is notice.
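For example, to increase verbosity while troubleshooting:
dataplane dpdk log-level debug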
- dataplane dpdk lro:
Enables Large Receive Offload (LRO) on compatible interfaces. When LRO is enabled, incoming connection streams are buffered in hardware until they can be reassembled and processed in large batches rather than processing each packet as it arrives individually. This can result in improved throughput as shifting the per-packet processing to hardware reduces the burden on the network stack. Disabled by default.
Warning
While this can improve performance in certain cases, it also alters the incoming packets, which may be undesirable in routing roles.
- dataplane dpdk no-multi-seg:
Disables multi-segment buffers for network devices. Can improve performance, but disables jumbo MTU support.
Required for Mellanox devices.
Warning
This option is not currently compatible with Intel X552 10G network interfaces. When enabled on incompatible hardware this option can lead to instability such as dataplane crashes while under load.
- dataplane dpdk no-pci:
Disables scanning of the PCI bus for interface candidates when the dataplane starts. By default, interfaces which are administratively down in the host OS can be selected for use by the dataplane.
- dataplane dpdk no-tx-checksum-offload:
Disables transmit checksum offloading of TCP/UDP for network devices.
- dataplane dpdk outer-checksum-offload:
Enables hardware checksum offload for tunnel packets. Requires tcp-udp-checksum.
- dataplane dpdk tcp-udp-checksum:
Enables receive checksum offloading of TCP/UDP for network devices. Disabled by default.
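For example, to enable hardware checksum offload for tunnel packets, enable both options since outer-checksum-offload requires tcp-udp-checksum:
dataplane dpdk tcp-udp-checksum
dataplane dpdk outer-checksum-offload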
- dataplane dpdk telemetry:
Enables the telemetry thread in DPDK to collect performance statistics. Disabled by default as it consumes resources and can decrease performance.
- dataplane dpdk uio-driver [<driver-name>]:
Configures the UIO driver for interfaces.
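For example, the default VFIO driver described in Interface Drivers below can be selected explicitly with:
dataplane dpdk uio-driver vfio-pci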
See also
For more information on the driver choices, see Interface Drivers.
Interface Drivers¶
The interface driver controls how the dataplane communicates with interfaces, either hardware or virtual functions. Certain types of interfaces may only be compatible with certain drivers, either because of the hardware or how it is utilized: for example, whether the interface is directly attached to bare metal hardware or passed through to a guest VM as a virtual function. When interfaces are compatible with more than one driver, certain drivers may offer increased performance or features that make them a better choice for a given workload or environment.
Dataplane interface drivers currently fall into two categories: VFIO (vfio-pci) and UIO (igb_uio and uio_pci_generic), described below.
See also
The procedure for changing drivers is covered in Interface Driver Management.
Note
Mellanox devices use RDMA and not UIO, so changing the driver may not have any effect on their behavior. If a Mellanox device does not appear automatically, TNSR may not support that device.
vfio-pci¶
The VFIO PCI driver (vfio-pci) is a safer alternative to UIO and, in theory, more widely compatible. It employs techniques to communicate with hardware from userspace safely within defined boundaries. The VFIO driver framework is device-agnostic, so in theory it can work on most interfaces without compatibility concerns. VFIO support is built into the kernel and does not require loading an extra module.
This is the current default driver and the driver recommended by DPDK.
Note
The vfio-pci driver has compatibility issues with certain QAT devices, including DH895x, C3xxx, and C62x devices. Though there is a way to bypass the compatibility check and let it work, the current best practice for users with QAT devices is to continue using the igb_uio driver.
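As a sketch, users with affected QAT devices could keep that driver selected with the command described earlier (see Interface Driver Management for the complete procedure):
dataplane dpdk uio-driver igb_uio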
Warning
When the vfio-pci driver is active, TNSR automatically configures the driver with noiommu mode for compatibility with QAT and other functions. Some may consider noiommu mode unsafe as it provides the user full access to a DMA capable device without the security of I/O management. Take this into consideration when choosing the vfio-pci driver.
igb_uio¶
The IGB UIO driver (igb_uio) handles hardware or virtual interfaces. It supports a wide variety of hardware, including various Intel Ethernet controllers. While not as safe as vfio-pci, it can perform faster in certain environments and workloads. TNSR must load an additional kernel module for this driver when it is active.
Note
Some devices, such as ENA and VMXNET3, have trouble setting up interrupts with the igb_uio driver. For these devices, use the vfio-pci driver instead.
uio_pci_generic¶
The generic UIO PCI driver (uio_pci_generic) can work with PCI hardware interfaces which support legacy interrupts. This driver does not support virtual function interfaces.
Note
Ethernet 700 Series Network Adapters based on the Intel Ethernet Controller X710/XL710/XXV710 and Intel Ethernet Connection X722 are not compatible with this driver. For these devices, use the vfio-pci or igb_uio driver instead.