Using TNSR on KVM¶
TNSR can be run on a Linux Kernel Virtual Machine (KVM) hypervisor host. The
advice on this page is specifically geared toward KVM managed by libvirt,
using tools such as virsh and virt-install.
Installing TNSR on KVM¶
When creating the virtual machine, use the requirements on Supported Platforms as a guide for determining configuration parameters before starting. For example:
- Number of CPUs, cores, and their topology
- Amount of RAM
- Network connectivity type, number of interfaces, and the networks to which the interfaces connect
Creating a VM¶
Before starting, obtain the installation ISO and place it in a known location.
The following command creates a new virtual machine from the KVM CLI with the following configuration:
- 2 virtual CPUs (1 socket, 2 cores per CPU, 1 thread per core)
- CPU set to host passthrough mode (--cpu host)
- 4096 MB of RAM
- A new 32 GB virtio disk named tnsr.qcow2
- 3 virtio-based network interfaces
# virt-install --name TNSR --vcpus=2,sockets=1,cores=2,threads=1 \
    --os-type linux --os-variant ubuntu20.04 --cpu host --ram 4096 \
    --disk /var/lib/libvirt/images/tnsr.qcow2,size=32,device=disk,bus=virtio \
    --network=default,model=virtio --network bridge=br0,model=virtio \
    --network bridge=br1,model=virtio \
    --nographics --noautoconsole \
    --location /root/TNSR-Ubuntu.iso,kernel=casper/vmlinuz,initrd=casper/initrd \
    --extra-args 'console=ttyS0,115200n8 quiet fsck.mode=skip \
    network-config=disabled autoinstall ds=nocloud;s=/cdrom/server/'
Replace the parameters as needed to conform to the local KVM environment. In particular, the --disk path, the ISO --location path, and the bridge device or network names will likely differ.
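One parameter relationship worth checking before launching: the count given to --vcpus must equal sockets × cores × threads. A minimal shell sanity check, illustrative only, using the topology values from the example above:

```shell
# Topology values from the virt-install example above.
sockets=1
cores=2
threads=1

# The count passed to --vcpus must equal sockets * cores * threads.
vcpus=$(( sockets * cores * threads ))
echo "vcpus=$vcpus"
```

With the example topology this prints vcpus=2, matching --vcpus=2 in the command above.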
Access the management console with the following command:
# virsh console TNSR
From the console, follow the standard TNSR installation procedure. The VM will shut down when installation completes. Start it again and reconnect to the console:
# virsh start TNSR
# virsh console TNSR
KVM frontends/GUIs can also accomplish the same goal in different ways. Use whichever method the hypervisor administrator prefers.
Performance Tuning¶
Virtio interfaces use tap as a backend, which requires a context switch for
each packet forwarded. Due to this design, the stock configuration can result in
poor performance. The tuning suggestions in this section will help obtain higher
performance in these environments.
Though these suggested changes have been found to improve performance in testing, every installation and workload is different. Real-world results may vary depending on the environment. Generally speaking, values should only be changed from the defaults in cases where performance is lower than expected.
- Increase the receive and transmit ring queue sizes to 1024 instead of the default 256. To set these values in the libvirt XML configuration for a VM, see Changing VM Parameters.
- Increase the number of queues in the vhost backend driver configuration, especially if TNSR is configured to use worker threads. This information is also in the section linked above.
- Try using SR-IOV VFs instead of virtio interfaces.
- Try using a DPDK-accelerated Open vSwitch (OVS-DPDK) instead of a standard Linux bridge.
Changing VM Parameters¶
Some values must be changed by editing the VM settings XML directly. This includes the receive and transmit ring queue sizes and the number of queues.
When setting the receive and transmit ring queue sizes, keep in mind that some environments impose specific requirements on the values. For example, they may only work with certain drivers, or may have value restrictions such as being a power of 2 (256, 512, 1024, etc.).
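Whether a power-of-two restriction applies depends on the driver in use, but candidate values are easy to screen with the standard n & (n-1) bit trick. A small shell helper, hypothetical and not part of TNSR or libvirt:

```shell
# is_pow2: succeeds (exit 0) if its argument is a positive power of two.
# Relies on the bit trick that n & (n - 1) == 0 only for powers of two.
is_pow2() {
    n="$1"
    [ "$n" -gt 0 ] && [ $(( n & (n - 1) )) -eq 0 ]
}

for size in 256 512 1000 1024; do
    if is_pow2 "$size"; then
        echo "$size: ok"
    else
        echo "$size: not a power of 2"
    fi
done
```

Here 256, 512, and 1024 pass the check, while 1000 does not.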
To edit the VM XML parameters, use the following command:
# virsh edit TNSR [...]
Locate the interface tag(s) and the driver tags inside them. In the driver tag,
edit or add the desired attributes and values. For example, to set
1024-entry transmit and receive ring queue sizes:
<interface [...]>
  [...]
  <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
          queues='5' rx_queue_size='1024' tx_queue_size='1024'>
    [...]
  </driver>
  [...]
</interface>
Details of the above XML block have been omitted for brevity and generality. Interfaces will vary in their specific settings.
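As a quick check that the attributes were saved, the queue-size settings can be extracted from the domain XML with grep. This sketch runs against a sample string for illustration; on a live system the same pattern would be applied to the output of virsh dumpxml TNSR instead:

```shell
# Sample <driver> element as it would appear in the saved domain XML.
xml="<driver name='vhost' queues='5' rx_queue_size='1024' tx_queue_size='1024'>"

# Pull out the rx/tx queue-size attributes; on a live VM, replace the
# echo with: virsh dumpxml TNSR
echo "$xml" | grep -o "[rt]x_queue_size='[0-9]*'"
```

This prints one line per matching attribute, rx_queue_size='1024' and tx_queue_size='1024', confirming both values are present.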
Start the VM and check the qemu command line, which should contain the new queue size values. From within the VM, at a shell prompt, confirm the ring queue sizes:
# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             1024
[...]
TX:             1024
Current hardware settings:
RX:             1024
[...]
TX:             1024
[...]
If the number of queues was changed, confirm that as well:
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
[...]
Combined:       5
[...]