Note: this walkthrough uses an 82574 NIC as the example, passed through to a virtual machine; the controller node is 192.168.0.201 and the compute node is 192.168.0.202.
View
- View PCI devices
lspci -nn
- View which driver currently manages the device
lspci -vv -s 04:00.0 | grep driver
- Unbind the NIC via libvirt (actually unnecessary)
Note: this step is not actually needed, because once OpenStack boots a VM that uses this NIC it automatically rebinds the device to vfio-pci, and when the VM is shut down it rebinds the device back to e1000e.
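The automatic rebinding described above can be spot-checked by reading the device's `driver` symlink in sysfs. A minimal sketch; the helper name and the throwaway fake tree are illustrative only (on a real host you would pass `/sys` and the real device address):

```shell
# Report which kernel driver a PCI device is bound to, by resolving the
# sysfs "driver" symlink. The sysfs root is a parameter so the helper can
# be demonstrated against a fake tree; on a real host use /sys.
bound_driver() {  # usage: bound_driver <sysfs_root> <pci_address>
  basename "$(readlink "$1/bus/pci/devices/$2/driver")"
}

# Demonstration against a throwaway fake sysfs tree, simulating the state
# while the VM is running (device bound to vfio-pci).
root=$(mktemp -d)
mkdir -p "$root/bus/pci/devices/0000:04:00.0" "$root/bus/pci/drivers/vfio-pci"
ln -s "$root/bus/pci/drivers/vfio-pci" "$root/bus/pci/devices/0000:04:00.0/driver"
bound_driver "$root" 0000:04:00.0   # → vfio-pci
rm -rf "$root"
```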
- List PCI devices
virsh nodedev-list | grep pci
- Query device details
virsh nodedev-dumpxml pci_0000_04_00_0
- Detach the device
virsh nodedev-detach pci_0000_04_00_0
- Configuration (compute node)
- Get the vendor_id and product_id
lspci -nn
or: virsh nodedev-dumpxml pci_0000_04_00_0
vendor_id=8086
product_id=10d3
- Configure the passthrough whitelist
vi /etc/nova/nova.conf
[pci]
...
passthrough_whitelist = [{"product_id":"10d3", "vendor_id":"8086"}]
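The vendor/product pair the whitelist needs is the bracketed hex pair at the end of the `lspci -nn` line. A small sketch of pulling it out; the sample line is hard-coded from this walkthrough's 82574L:

```shell
# Extract the PCI vendor and product IDs from an `lspci -nn` line.
# The sample line below is the 82574L used in this walkthrough.
line='04:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]'
# The [vvvv:pppp] pair is the only bracketed 4-hex:4-hex token on the line.
ids=$(printf '%s\n' "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
vendor_id=${ids%:*}    # part before the colon
product_id=${ids#*:}   # part after the colon
echo "$vendor_id $product_id"   # → 8086 10d3
```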
- Configuration (controller node)
- Configure nova-scheduler
vi /etc/nova/nova.conf
[DEFAULT]
...
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
- Configure the PCI alias
vi /etc/nova/nova.conf
[pci]
...
alias = { "name": "a1", "product_id": "10d3", "vendor_id": "8086", "device_type": "type-PCI" }
- Restart the services
systemctl restart openstack-nova-scheduler.service
systemctl restart openstack-nova-api.service
- Create the flavor and instance
- Create a flavor
- Create an instance
Use this flavor to create an instance
- Verify
virsh dumpxml instance-0000001b
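The flavor and instance steps above can be sketched with the OpenStack CLI. A sketch only: the flavor name, image, and network are placeholders; "a1" is the PCI alias configured in nova.conf earlier, and the `pci_passthrough:alias` property takes the alias name and a device count.

```shell
# Illustrative names throughout; "a1" is the [pci] alias configured above.
openstack flavor create --vcpus 2 --ram 2048 --disk 20 pci.small
openstack flavor set pci.small --property "pci_passthrough:alias"="a1:1"
openstack server create --flavor pci.small --image cirros \
  --network private vm-pci-test
```

If the passthrough worked, the `virsh dumpxml` output for the instance should contain a `<hostdev mode='subsystem' type='pci' managed='yes'>` element whose source address matches the passed-through device (0000:04:00.0 in this example).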
Caution: remember to unbind the PCI device on the compute node.
References:
https://docs.openstack.org/nova/pike/admin/pci-passthrough.html
http://blog.csdn.net/hangdongzhang/article/details/77745557









