
Router for Proxmox® VE Virtual Machines

This recipe configures TNSR running on Proxmox® VE as a router for virtual machines (VMs) on one or more hypervisor nodes. The VMs communicate directly with TNSR using vHost user interfaces instead of traditional hypervisor networking (e.g. Linux bond or Open vSwitch links).

Though this recipe is tailored toward Proxmox VE, the same techniques are possible on similar setups, such as with KVM or QEMU on their own.

Note

Customers with a TNSR Business subscription can install TNSR on top of Proxmox VE by adding the Netgate TNSR repositories to the Proxmox VE configuration along with a valid TNSR update certificate.

Requirements/Limitations

  • The balloon driver is not supported with the shared memory backend.

  • Virtual machine interfaces attached to TNSR will not appear in the Proxmox VE GUI.

TNSR Prerequisites

vHost User Behavior Tuning

In TNSR, set the number of coalesce frames for vHost user interfaces to 4, which improves VM-to-VM performance:

tnsr(config)# dataplane vhost-user coalesce-frames 4
tnsr(config)# configuration copy running startup
tnsr(config)# service dataplane restart

Configure a Bridge Domain

Create a bridge domain which TNSR will use to bridge traffic to/from the VM:

tnsr(config)# interface bridge domain 2
tnsr(config-bridge)# flood
tnsr(config-bridge)# uu-flood
tnsr(config-bridge)# forward
tnsr(config-bridge)# learn
tnsr(config-bridge)# exit

Adding a physical interface to a bridge domain places it in L2 mode; TNSR will no longer perform L3 operations on that interface.
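
For example, assuming the uplink NIC is the dataplane interface GigabitEthernet3/0/0 (the interface name is only a placeholder; substitute the actual one), joining it to bridge domain 2 looks like this:

tnsr(config)# interface GigabitEthernet3/0/0
tnsr(config-interface)# bridge domain 2
tnsr(config-interface)# enable
tnsr(config-interface)# exit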

Note

For basic bridging, L3 is not necessary.

If TNSR must perform L3 operations or control L3 options on the bridge, those must be handled using a BVI interface:

tnsr(config)# interface loopback bridge2loop
tnsr(config-loopback)# instance 2
tnsr(config-loopback)# exit
tnsr(config)# interface loop2
tnsr(config-interface)# bridge domain 2 bvi
tnsr(config-interface)# enable
tnsr(config-interface)# exit

From there, it is possible to configure the loop2 interface with an IP address, routes, etc.
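
For example, to assign an IP address to the BVI (the address below is only an illustration):

tnsr(config)# interface loop2
tnsr(config-interface)# ip address 192.0.2.1/24
tnsr(config-interface)# exit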

Hypervisor Prerequisites

Create Virtual Machine(s)

Create a new guest VM and note its VM ID (e.g. 100). Later steps require knowledge of this ID to determine other identifiers and paths to configuration files.
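
For instance, a VM can be created from the Proxmox VE shell with qm (a minimal sketch; the name and resource values are only examples, and creating the VM in the GUI works just as well):

$ sudo qm create 100 --name guest-vm --memory 4096 --cores 2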

Shared Memory Tuning

TNSR accesses the guest VM memory using a KVM/QEMU shared memory interface. By default, the Linux kernel only allocates 8GB to the shared memory driver, which in most cases will not be enough. The allocation must be large enough to cover the full memory allocation of every VM connected to TNSR.

Note

Shared memory is dynamically allocated from system memory, so increasing the allocation does not take memory away from the system.

To increase the allocation, first edit /etc/fstab and either alter the line for /run/shm or add it:

none  /run/shm  tmpfs  defaults,size=40g  0  0

The exact value will vary, but internal testing has shown that a value of approximately 80% of system RAM is adequate.

Next, either reboot the host or run the following command from a shell:

$ sudo mount -o remount /dev/shm
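
To verify that the larger allocation is active, check the size of the shared memory mount; the Size column should reflect the value configured in /etc/fstab:

$ df -h /dev/shm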

TNSR vHost User Interface Configuration

In the TNSR CLI, create the vHost user interface instance and enable its optimizations.

The best practice is to use the same format as Proxmox VE, <VM#><InterfaceInstance> for the interface instance and /var/run/vpp/vm-<VM#>-<InterfaceInstance>.sock for the socket filename. However, number the interface instances starting at 0 for the TNSR dataplane, ignoring interfaces defined by the Proxmox VE GUI.

In this example, vHost user interface 1000 is VM 100 interface 0.

First define the vHost user interface instance:

tnsr(config)# interface vhost-user 1000
tnsr(config-vhost-user)# server-mode
tnsr(config-vhost-user)# enable gso
tnsr(config-vhost-user)# enable packed
tnsr(config-vhost-user)# enable event-index
tnsr(config-vhost-user)# sock-filename /var/run/vpp/vm-100-0.sock
tnsr(config-vhost-user)# exit

Next, configure the resulting interface and join it to the bridge domain:

tnsr(config)# interface VirtualEthernet0/0/1000
tnsr(config-interface)# mac 3c:ec:ef:d0:10:00
tnsr(config-interface)# bridge domain 2
tnsr(config-interface)# enable
tnsr(config-interface)# exit
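
To confirm the new interface is up, its state can be checked from the TNSR CLI, for example:

tnsr# show interface VirtualEthernet0/0/1000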

Virtual Machine Interface Configuration

The next part is more complicated. The settings for vHost user interfaces are not exposed in the Proxmox VE GUI. Applying the required settings involves manually editing the configuration file for each virtual machine.

Now assemble the required VM parameters step by step.

Using the socket path determined earlier:

-chardev socket,id=char1,path=/var/run/vpp/vm-100-0.sock,reconnect=2

Next, define the netdev. Using the VM ID in its name is convenient, but any other valid name also works:

-netdev type=vhost-user,id=vm-100-0,chardev=char1,vhostforce=on

Hard code the MAC address of the VM interface in the device definition; otherwise, the hypervisor will assign one randomly each time the VM starts. Reference the netdev ID from above.

Additionally, for performance reasons, set both the rx_queue_size (already the default for Proxmox VE VMs) and the tx_queue_size to 1024 (increased from the default of 256). This helps reduce transmission errors under load.

-device virtio-net-pci,mac=3c:ec:ef:d1:10:00,rx_queue_size=1024,tx_queue_size=1024,netdev=vm-100-0

Warning

If MAC addresses are hard coded on both TNSR and the VM, ensure that every interface on TNSR and in the VM uses a different MAC address.

When creating multiple VM interfaces, repeat the above steps, incrementing the IDs as necessary.
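
For example, a hypothetical second interface on VM 100 would use vhost-user instance 1001 and socket /var/run/vpp/vm-100-1.sock in TNSR, plus QEMU arguments along these lines (the MAC address is a placeholder):

-chardev socket,id=char2,path=/var/run/vpp/vm-100-1.sock,reconnect=2
-netdev type=vhost-user,id=vm-100-1,chardev=char2,vhostforce=on
-device virtio-net-pci,mac=3c:ec:ef:d1:10:01,rx_queue_size=1024,tx_queue_size=1024,netdev=vm-100-1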

Now define the memory backend. The size= parameter must match the RAM allocated to the VM. This is only performed once per VM, no matter how many interfaces are connected to the VM.

-object memory-backend-file,id=mem1000,size=4096M,mem-path=/dev/shm,share=on
-numa node,memdev=mem1000

Now combine all of these arguments into a single args: line:

args: -chardev socket,id=char1,path=/var/run/vpp/vm-100-0.sock,reconnect=2
-netdev type=vhost-user,id=vm-100-0,chardev=char1,vhostforce=on
-device virtio-net-pci,mac=3c:ec:ef:d1:10:00,rx_queue_size=1024,tx_queue_size=1024,netdev=vm-100-0
-object memory-backend-file,id=mem1000,size=4096M,mem-path=/dev/shm,share=on
-numa node,memdev=mem1000

Warning

The above set of parameters is shown here on multiple lines so that it is easier to read, but all of these arguments must be placed on a single line in the configuration file, with each of the above “lines” separated by a space.

Add this line to the Proxmox VE virtual machine configuration file, /etc/pve/qemu-server/###.conf, where ### is the VM ID.
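
Assembled onto a single line, the args: entry in a hypothetical /etc/pve/qemu-server/100.conf would therefore read (other VM settings omitted for brevity):

args: -chardev socket,id=char1,path=/var/run/vpp/vm-100-0.sock,reconnect=2 -netdev type=vhost-user,id=vm-100-0,chardev=char1,vhostforce=on -device virtio-net-pci,mac=3c:ec:ef:d1:10:00,rx_queue_size=1024,tx_queue_size=1024,netdev=vm-100-0 -object memory-backend-file,id=mem1000,size=4096M,mem-path=/dev/shm,share=on -numa node,memdev=mem1000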

Repeat this for each VM and interface.

Live Migration / Multiple Hypervisor Nodes

Live migration has been observed to work under the following conditions:

  • TNSR is present and configured correctly on each Proxmox VE instance.

  • The bridge domain numbering is consistent across all TNSR instances.

  • Each TNSR instance uses a consistent vhost-user and VirtualEthernet0/0/x configuration for the VM to be migrated.

    Note

    This is possible because the VirtualEthernet0/0/x device may be defined on every TNSR instance without collision so long as the VM only exists once in the cluster.