Tip

This is the documentation for the 24.06 version.

Linux-cp Configuration

Linux-cp is an interface between the dataplane and the operating system; it handles interaction with daemons, interfaces, routing, and more. A few parameters can fine-tune the behavior of Linux-cp:

dataplane linux-cp nl-rx-buffer-size <n>:

Specifies the size (in bytes) of the receive buffer used by the netlink socket on which Linux-cp listens for kernel networking announcements. The default socket buffer size is determined by the kernel sysctl /proc/sys/net/core/rmem_default, which is around 200 kB. When a very large number of routes is received via BGP (e.g. a full Internet route feed of about 800k routes), the kernel can easily send a large burst of messages which quickly fills this buffer, causing it to overflow.

By default, Linux-cp explicitly sets its receive buffer to a larger size (128 MB). This parameter can raise it even higher.
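For example, raising the buffer to 256 MB from TNSR configuration mode might look like the following. The value is illustrative, not a recommendation, and the restart step assumes the usual TNSR convention that dataplane settings take effect after a dataplane restart:

```
tnsr(config)# dataplane linux-cp nl-rx-buffer-size 268435456
tnsr(config)# exit
tnsr# service dataplane restart
```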

dataplane linux-cp nl-batch-size <n>:

Specifies the maximum number of incoming netlink messages (e.g. route, interface, or address changes) which will be processed by the main dataplane (VPP) thread at a time. Default value is 2048 messages.

Messages are processed in batches so that a large burst of messages does not monopolize the CPU for an extended period and delay other tasks.
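For example, halving the batch size makes each pass through the main thread shorter, at the cost of more passes for the same burst. The value below is illustrative:

```
tnsr(config)# dataplane linux-cp nl-batch-size 1024
```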

dataplane linux-cp nl-batch-delay-ms <n>:

Specifies the time to wait between completing processing of one batch and starting processing of the next batch.

The default value is 50 milliseconds, which implies a maximum of 20 batches per second.
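Together, the batch size and batch delay bound the sustained message rate. A quick back-of-the-envelope calculation (a sketch; the function and variable names are illustrative, not part of TNSR):

```python
# Upper bound on netlink messages processed per second, given the
# batch size and the delay between batches. This ignores the time
# spent actually processing each batch, so the real rate is lower.
def max_messages_per_second(batch_size: int, batch_delay_ms: int) -> float:
    batches_per_second = 1000 / batch_delay_ms
    return batch_size * batches_per_second

# Defaults: 2048 messages per batch, 50 ms between batches.
print(max_messages_per_second(2048, 50))  # 40960.0 messages/s
```

At the defaults, the main thread can therefore work through roughly 41k messages per second, so an 800k-route feed takes on the order of 20 seconds to drain from the queue.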

Batch processing is intended to make the size of the socket buffer less important. No matter how large the socket buffer is made, it can still be stressed by a sufficiently large number of routes being added. With batch processing of messages, TNSR splits this work into two separate phases: reading incoming messages from the socket, and processing those messages. The socket must be read regularly to keep the buffer from filling or overflowing, while processing each message takes time and CPU resources. To prevent message processing from delaying socket reads, whenever there is data available on the socket, TNSR reads all of the available messages; the messages are then processed by a separately scheduled task. The batch size and batch delay tune how much time is spent processing messages, ensuring there is enough time left to read additional incoming messages.
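The two-phase approach above can be sketched as follows. This is a simplified model with hypothetical names; TNSR's actual implementation lives in the dataplane, not in Python:

```python
from collections import deque

BATCH_SIZE = 2048  # cf. nl-batch-size

def drain_socket(socket_messages, queue):
    """Phase 1: read everything currently available on the socket so
    the kernel-side receive buffer cannot fill up and overflow."""
    while socket_messages:
        queue.append(socket_messages.pop(0))

def process_one_batch(queue):
    """Phase 2: process at most BATCH_SIZE queued messages, then yield
    the CPU; the scheduler re-runs this after nl-batch-delay-ms."""
    handled = 0
    while queue and handled < BATCH_SIZE:
        queue.popleft()  # a route/interface/address change would be applied here
        handled += 1
    return handled

queue = deque()
pending = [f"msg{i}" for i in range(5000)]  # a burst of netlink messages
drain_socket(pending, queue)    # socket emptied immediately, nothing lost
first = process_one_batch(queue)  # only 2048 handled in this pass
```

The key property is that the drain step is cheap and runs eagerly, while the expensive processing step is capped per pass, so a burst never blocks reads long enough to overflow the socket buffer.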

When tuning these parameters, configure at least one worker thread. Without worker threads, this processing shares a CPU with packet processing and forwarding; if a large number of routes is being added, processing those routes competes with packet processing for CPU time, making it more likely that the netlink socket buffer will fill. With worker threads, route processing occurs on the main thread while packet processing occurs on the workers, so the two do not contend for CPU resources.
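As a hedged example, dedicating worker threads might look like the following; the exact command form and the worker count are assumptions based on common TNSR CPU configuration and should be adjusted for the hardware:

```
tnsr(config)# dataplane cpu workers 2
```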