SR-IOV functionality in OpenStack¶
This documentation is written for the OpenStack Train version.
SR-IOV (Single Root I/O Virtualization) is a feature that allows a PCIe Ethernet device to be virtualized, presenting it as a set of PCIe devices. Each virtual PCIe device (virtual function, VF) can be attached directly to a virtual machine running in OpenStack. This approach achieves low traffic latency and brings the speed of the virtual interface close to that of the physical one.
Important
To be able to work with SR-IOV, this functionality must be supported by the hardware.
Note
This description uses the physical interface eth1. The name of this interface may differ depending on the environment being configured.
Configuring virtual devices (performed on the hypervisor):¶
Make sure SR-IOV and VT-d functionality is enabled in BIOS.
Enable the IOMMU by adding the intel_iommu=on option to the kernel options in the /etc/default/grub file. Example with the option enabled:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 iommu=pt"
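The new kernel options take effect only after the GRUB configuration is regenerated and the host is rebooted. A minimal sketch, assuming a Debian/Ubuntu host (matching the apt-based installation used later in this guide):

```shell
# Regenerate the GRUB configuration so the new kernel options are picked up
update-grub
# Reboot the hypervisor for the change to take effect
reboot
# After the reboot, verify that the IOMMU was actually enabled
dmesg | grep -i -e DMAR -e IOMMU
```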
Create virtual interfaces on all hypervisors:
echo '8' > /sys/class/net/eth1/device/sriov_numvfs
Check the maximum number of virtual interfaces supported on your system:
cat /sys/class/net/eth1/device/sriov_totalvfs
Make sure the virtual interfaces are created and in the UP state:
ip link show eth1
Add the ability to create virtual interfaces after reboot:
echo "echo '8' > /sys/class/net/eth1/device/sriov_numvfs" >> /etc/rc.local
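On most systemd-based distributions /etc/rc.local is absent or not executed by default, so a one-shot systemd unit is a more reliable way to recreate the VFs at boot. A sketch under that assumption (the unit name and file path are illustrative):

```ini
# /etc/systemd/system/sriov-vfs.service (illustrative name)
[Unit]
Description=Create SR-IOV virtual functions on eth1
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 8 > /sys/class/net/eth1/device/sriov_numvfs'

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable sriov-vfs.service so it runs on every boot.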
Configuring work with PCI devices in nova-compute (performed on the hypervisor):¶
In the nova.conf file, add information about which PCI devices nova-compute can use:
[pci]
passthrough_whitelist = { "devname": "eth1", "physical_network": "physnet1" }
Restart nova-compute services.
Configuring neutron-server (performed on the controller):¶
Add sriovnicswitch to the mechanism drivers you use. To do this, edit the ml2_conf.ini file on each controller:
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch
Make sure your physical network is specified in the ml2_conf.ini file settings:
[ml2_type_vlan]
network_vlan_ranges = physnet1
Restart neutron-server.
Configuring nova-scheduler (performed on the controller):¶
On all controllers running the nova-scheduler service, add the new filter PciPassthroughFilter by adding the following to nova.conf:
[filter_scheduler]
enabled_filters = AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
available_filters = nova.scheduler.filters.all_filters
Restart the nova-scheduler service.
SR-IOV agent configuration (performed on the hypervisor):¶
Edit the sriov_agent.ini file according to your cloud settings:
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[sriov_nic]
physical_device_mappings = physnet1:eth1
exclude_devices =
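If some VFs on the interface should stay out of Neutron's control (for example, VFs reserved for host use), their PCI addresses can be listed in exclude_devices. A sketch with illustrative PCI addresses (find the real ones under /sys/class/net/eth1/device/):

```ini
[sriov_nic]
physical_device_mappings = physnet1:eth1
# Exclude two VFs of eth1 from agent management (illustrative PCI addresses)
exclude_devices = eth1:0000:07:00.2;0000:07:00.3
```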
Install the SR-IOV agent for neutron and run it:
apt install neutron-sriov-agent
systemctl enable neutron-sriov-agent
systemctl start neutron-sriov-agent
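Once started, the agent should register itself with neutron-server. A quick check from the controller (the agent should be listed as a NIC Switch agent with Alive = :-)):

```shell
# Confirm the SR-IOV NIC agent registered on the hypervisor host
openstack network agent list | grep -i sriov
```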
Starting instance with SR-IOV ports¶
Create an external network and subnet with the desired parameters:
openstack network create --provider-physical-network physnet1 \
  --provider-network-type vlan sriov-net
openstack subnet create --network sriov-net \
  --subnet-pool shared-default-subnetpool-v4 \
  sriov-subnet
Get the id of the created network:
net_id=$(openstack network show sriov-net -c id -f value)
Create a port on the network with vnic-type direct and get its id:
openstack port create --network $net_id --vnic-type direct sriov-port
port_id=$(openstack port show sriov-port -c id -f value)
Start the instance:
openstack server create --flavor m1.large --image ubuntu_18.04 \
  --nic port-id=$port_id test-sriov
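After the instance becomes active, the port binding can be inspected to confirm a VF was actually allocated; for a port attached through the SR-IOV agent, binding_vif_type is expected to be hw_veb:

```shell
# Check how the SR-IOV port was bound after the instance started
openstack port show sriov-port -c binding_vif_type -c binding_vnic_type
```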