I built a customized Linux VM. When I run a DPDK test with dpdk-testpmd, it reports the errors
"hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying"
The full log is below.
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1016) device: deed:00:02.0 (socket 0)
mlx5_common: DevX read access NIC register=0X9055 failed errno=22 status=0 syndrome=0
mlx5_net: No available register for sampler.
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0
mlx5_common: Key "mac" is unknown for the provided classes.
EAL: Requested device deed:00:02.0 cannot be used
EAL: Bus (pci) probe failed.
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
TELEMETRY: No legacy callbacks, legacy socket not created
Set txonly packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
Port 0: 00:0D:3A:34:69:DA
Checking link statuses...
Done
No commandline core given, start packet forwarding
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
txonly packet forwarding packets/burst=32
packet len=64 - nb packet segments=1
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=128 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
Port statistics ====================================
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
The dmesg log:
[ 6.151804] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0: link becomes ready
[ 8.609137] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.610859] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.612864] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.613831] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.821302] loop0: detected capacity change from 0 to 8
[ 8.842848] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 8.844700] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 8.847495] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 8.849066] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 9.035447] kauditd_printk_skb: 3 callbacks suppressed
[ 9.035451] audit: type=1400 audit(1738022329.631:14): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=1035 comm="apparmor_parser"
[ 9.064196] audit: type=1400 audit(1738022329.659:15): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1035 comm="apparmor_parser"
[ 9.067151] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 9.068993] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 9.071117] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 9.072348] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 10.017708] fbcon: Taking over console
[ 10.017853] Console: switching to colour frame buffer device 128x48
[ 51.026576] hv_balloon: Max. dynamic memory size: 32768 MB
[ 534.610255] RPC: Registered named UNIX socket transport module.
[ 534.610260] RPC: Registered udp transport module.
[ 534.610261] RPC: Registered tcp transport module.
[ 534.610261] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 535.082260] RPC: Registered rdma transport module.
[ 535.082264] RPC: Registered rdma backchannel transport module.
[ 1115.762368] hv_netvsc 000d3a34-69da-000d-3a34-69da000d3a34 enp2s2: Data path switched from VF: enP57069s3
[ 1115.825184] hv_vmbus: registering driver uio_hv_generic
[ 1115.825710] hv_netvsc 000d3a34-69da-000d-3a34-69da000d3a34 enp2s2: VF unregistering: enP57069s3
[ 1115.825719] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 1164.343188] mlx5_core deed:00:02.0 enP57069s3: Link up
Thanks for replying.
It's a customized Linux based on an Ubuntu rootfs, kernel 5.15.53.
Thank you for replying.
You suggested "sudo ./dpdk-devbind.py --bind=vfio-pci <your PCI address of your NIC>". Is that the PCI address of the VF?
Based on doc a) below, DPDK applications must run over the master PMD (the NetVSC PMD) that is exposed in Azure. If the application runs directly over the VF PMD, it doesn't receive all packets destined to the VM, since some packets arrive over the synthetic interface.
I followed the instructions in doc b) and used uio_hv_generic instead of the vfio-pci you suggested. Please help clarify this.
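To make sure I am comparing the same things, here is my rough reading of the two approaches. This is only a sketch; eth1, the core list, and the script path are placeholders, not my actual values:

# Your suggestion, as I understand it: check the current bindings first.
# With mlx5 the VF normally stays bound to mlx5_core (it is a bifurcated driver),
# so I am not sure binding it to vfio-pci is needed at all.
sudo ./usertools/dpdk-devbind.py --status

# Doc a) style: run testpmd over the NetVSC master PMD via the vdev_netvsc helper,
# pointing it at the synthetic interface (eth1 is a placeholder name here).
sudo dpdk-testpmd -l 1-3 -n 4 --vdev="net_vdev_netvsc0,iface=eth1" -- --forward-mode=txonly --auto-start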
I ran the following commands.
modprobe uio_hv_generic
echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind
dpdk-testpmd -l 1-3 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --txd=128 --rxd=128 --stats 2
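For reference, this is roughly how I populated the variables used above. It is my own sketch rather than a copy from doc b), and eth1 is a placeholder for the synthetic interface name on my VM:

NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"               # hv_netvsc device class GUID
DEV_UUID=$(basename $(readlink /sys/class/net/eth1/device))   # VMBus instance GUID of the NIC (eth1 is a placeholder)
MANA_MAC=$(cat /sys/class/net/eth1/address)                   # MAC address of the synthetic interface
BUS_INFO=$(ethtool -i eth1 | awk '/bus-info/ {print $2}')     # bus address as reported by ethtool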
a) https://learn.microsoft.com/en-us/azure/virtual-network/setup-dpdk?tabs=redhat
b) https://learn.microsoft.com/en-us/azure/virtual-network/setup-dpdk-mana