
5.5. OpenVswitch-DPDK


Being a library, DPDK doesn't do much on its own, so it depends on other projects making use of it. One consumer of the library that is already bundled in the Ubuntu 16.04 release is OpenVswitch with DPDK support, in the package openvswitch-switch-dpdk.

Here is an example of how to install and configure a basic OpenVswitch using DPDK for later use via libvirt/qemu-kvm.


sudo apt-get install openvswitch-switch-dpdk

sudo update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk

echo "DPDK_OPTS='--dpdk -c 0x1 -n 4 -m 2048 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664'" | sudo tee -a /etc/default/openvswitch-switch

sudo service openvswitch-switch restart


Please remember that you have to assign devices to DPDK-compatible drivers (see above) before restarting.
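
For reference, a minimal sketch of such an assignment, assuming the /etc/dpdk/interfaces mechanism and the dpdk service described in the DPDK section above; the PCI address 0000:04:00.0 and the driver are placeholders for your own hardware:

# /etc/dpdk/interfaces - entry format: <bus> <id> <driver>
pci 0000:04:00.0 uio_pci_generic

# apply the binding before restarting Open vSwitch
sudo service dpdk restart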


The options --vhost-owner libvirt-qemu:kvm --vhost-perm 0664 set up vhost_user ports with owner and permissions compatible with Ubuntu's way of running qemu-kvm/libvirt with reduced privileges for more security.

Please note that the option -m 2048 is the most basic NUMA setup for a single-socket system. If you have multiple sockets, you may want to define how the memory is split among them, for example -m 1024,1024. Be aware that DPDK will try to work only with memory local to the network cards it uses (for performance reasons). So if you have multiple NUMA nodes but all network cards attached to one of them, you should consider spreading your cards across nodes. If that is not possible, at least allocate your memory to the node where the cards reside, for example in a two-node system with everything on node #2: -m 0,2048. You can use the tool lstopo from the package hwloc-nox to see on which socket your cards are located.
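
A quick way to check that locality before choosing the memory split is sketched below; eth2 is a placeholder for one of your DPDK-capable devices:

# show sockets, cores and attached PCI devices
lstopo

# or ask sysfs for the NUMA node of a specific network device
cat /sys/class/net/eth2/device/numa_node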

The OpenVswitch you now started supports all port types OpenVswitch usually does, plus DPDK port types. Here is an example of how to create a bridge and, instead of a normal external port, add an external DPDK port to it.


ovs-vsctl add-br ovsdpdkbr0 -- set bridge ovsdpdkbr0 datapath_type=netdev
ovs-vsctl add-port ovsdpdkbr0 dpdk0 -- set Interface dpdk0 type=dpdk
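
Since the stated goal is later use via libvirt/qemu-kvm, a guest-facing vhost_user port is typically added to the same bridge as well; a minimal sketch, with vhost-user-1 as an assumed port name:

ovs-vsctl add-port ovsdpdkbr0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser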




The enablement of DPDK in Open vSwitch has changed in version 2.6. So for users of releases >= 16.10, but also for users of the Ubuntu Cloud Archive [37] >= Newton, the enablement has changed compared to that for users of Ubuntu 16.04. The options formerly passed via DPDK_OPTS are now configured via ovs-vsctl into the Open vSwitch configuration database.


In the new way, the same example as above would look like this:



37 https://wiki.ubuntu.com/OpenStack/CloudArchive



# Enable DPDK

ovs-vsctl set Open_vSwitch . "other_config:dpdk-init=true"

# run on core 0

ovs-vsctl set Open_vSwitch . "other_config:dpdk-lcore-mask=0x1"

# Allocate 2G huge pages (not NUMA node aware)

ovs-vsctl set Open_vSwitch . "other_config:dpdk-alloc-mem=2048"

# group/permissions for vhost-user sockets (required to work with libvirt/qemu)

ovs-vsctl set Open_vSwitch . \

"other_config:dpdk-extra=--vhost-owner libvirt-qemu:kvm --vhost-perm 0666"


Please see the associated upstream documentation and the man page of the vswitch configuration as provided by the package for more details:

/usr/share/doc/openvswitch-common/INSTALL.DPDK.md.gz

/usr/share/doc/openvswitch-common/INSTALL.DPDK-ADVANCED.md.gz

man ovs-vswitchd.conf.db

