Using TNSR on KVM¶
TNSR can be run on a Linux Kernel-based Virtual Machine (KVM) hypervisor host. The
advice on this page is specifically geared toward KVM running on CentOS/RHEL. A
qcow2 virtual machine image is available from Netgate®
to use as a basis for running TNSR on KVM.
Installing TNSR on KVM¶
When creating the virtual machine, use the requirements on Supported Platforms as a guide for determining configuration parameters before starting. For example:
- Number of CPUs, cores, and their topology
- Amount of RAM
- Network connectivity type, number of interfaces, and the networks to which interfaces connect
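Before settling on these parameters, it can help to inspect what the host actually has available. The following is a minimal pre-flight sketch, assuming a Linux KVM host; it only reports host capacity and is not part of TNSR itself.

```shell
# Pre-flight sketch for sizing a TNSR VM on a Linux KVM host.
# These commands only report host capacity; compare the results
# against the Supported Platforms requirements.
cpus=$(nproc)                                        # logical CPUs on the host
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)  # total host memory in kB
echo "host CPUs: ${cpus}"
echo "host memory (kB): ${mem_kb}"
# KVM requires hardware virtualization support (Intel VT-x or AMD-V)
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization: supported"
else
    echo "hardware virtualization: not detected"
fi
```

Leave headroom on the host: the hypervisor and any other guests also consume CPU and RAM.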
Creating a VM¶
The following command will create a new virtual machine from the KVM CLI with the following configuration:
- 2 virtual CPUs (1 socket, 2 cores per CPU, 1 thread per core)
- tnsr.qcow2 image as a virtio disk
- 2 virtio-based network interfaces
# virt-install --name TNSR --vcpus=2,sockets=1,cores=2,threads=1 \
  --os-type linux --os-variant centos7.0 --cpu host \
  --network=default,model=virtio --ram 4096 --noautoconsole --import \
  --disk /var/lib/libvirt/images/tnsr.qcow2,device=disk,bus=virtio \
  --network bridge=br0,model=virtio --network bridge=br1,model=virtio
After installation, the management console can be accessed with the following command:
# virsh console TNSR
KVM frontends/GUIs can also accomplish the same goal. Use whichever method the hypervisor administrator prefers.
Virtio interfaces use tap as a backend, which incurs overhead for
each packet forwarded. Due to this design, the stock configuration can result in
poor performance. The tuning suggestions in this section will help obtain higher
performance in these environments.
Though these suggested changes have been found to improve performance in testing, every installation and workload is different. Real-world results may vary depending on the environment. Generally speaking, values should only be changed from the defaults in cases where performance is lower than expected.
- Increase the receive and transmit ring queue sizes to 1024 instead of the default. To set these values in the libvirt XML configuration for a VM, see Changing VM Parameters.
- Increase the number of queues in the vhost backend driver configuration, especially if TNSR is configured to use worker threads. This information is also in the section linked above.
- Try using SR-IOV VFs instead of Virtio interfaces.
- Try using a DPDK-accelerated Open vSwitch (OVS-DPDK) instead of a standard Linux bridge.
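For the SR-IOV option, a VF is passed to the guest as a hostdev-type interface in the libvirt XML. The fragment below is a generic libvirt example, not TNSR-specific; the PCI address is a placeholder, and the actual address of a VF on a given host will differ (it can be found with lspci).

```xml
<!-- Hypothetical example: attach one SR-IOV VF to the guest.
     The PCI address is a placeholder; substitute the real VF address. -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
```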
Changing VM Parameters¶
Some values must be changed by editing the VM settings XML directly. This includes the receive and transmit ring queue sizes and the number of queues.
When setting the receive and transmit ring queue sizes, keep in mind that some environments impose specific requirements on the values. For example, they may only work with certain drivers, or may have value restrictions such as being a power of 2 (256, 512, 1024, etc.).
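Where a power-of-2 restriction applies, a candidate value can be checked with a quick bitwise test before editing the XML. This is a generic shell sketch, not part of any TNSR or libvirt tooling:

```shell
# Returns success (0) if $1 is a positive power of 2.
# A power of 2 has exactly one bit set, so n & (n - 1) is zero.
is_pow2() {
    n=$1
    [ "$n" -gt 0 ] && [ $(( n & (n - 1) )) -eq 0 ]
}

is_pow2 1024 && echo "1024: ok"
is_pow2 1000 || echo "1000: not a power of 2"
```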
To edit the VM XML parameters, use the following command:
# virsh edit TNSR
[...]
Locate the interface tag(s) and the
driver tags inside. In the driver tag,
edit or add the desired attributes and values. For example, to set
transmit and receive ring queue sizes of 1024:
<interface [...]>
  [...]
  <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
          queues='5' rx_queue_size='1024' tx_queue_size='1024'>
    [...]
  </driver>
  [...]
</interface>
Details of the above XML block have been omitted for brevity and generality. Interfaces will vary in their specific settings.
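An edited definition can also be double-checked without booting the VM by dumping the saved XML (for example with virsh dumpxml TNSR) and extracting the queue-size attributes. The sed extraction below is a sketch run against an inline sample fragment standing in for the real dump output:

```shell
# Sketch: pull rx/tx queue sizes out of a driver tag.
# The sample fragment stands in for 'virsh dumpxml TNSR' output.
xml="<driver name='vhost' queues='5' rx_queue_size='1024' tx_queue_size='1024'>"
rx=$(printf '%s\n' "$xml" | sed -n "s/.*rx_queue_size='\([0-9]*\)'.*/\1/p")
tx=$(printf '%s\n' "$xml" | sed -n "s/.*tx_queue_size='\([0-9]*\)'.*/\1/p")
echo "rx_queue_size=${rx} tx_queue_size=${tx}"
```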
Start the VM, and check the
qemu command line, which should contain the configured queue size values.
From within the VM, at a shell prompt, confirm the ring queue sizes:
# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             1024
[...]
TX:             1024
Current hardware settings:
RX:             1024
[...]
TX:             1024
[...]
If the number of queues was changed, confirm that as well:
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
[...]
Combined:       5
[...]