Accelerated Networking in RHEL 10 and derivatives

Affected Products:

  • All ProComputers packaged RHEL 10 and derivative images (including CentOS Stream 10 and Rocky Linux 10) on Microsoft Azure that have Accelerated Networking enabled.
  • AlmaLinux 10 is not affected by this issue (see Update 4 below).

Opened: 2025-09-10

Severity: Severity 3 (Medium)

Symptoms:
Beginning with RHEL 10, Accelerated Networking may not operate correctly on some Azure VM sizes that list support for this feature.

According to Azure documentation, when Accelerated Networking is enabled in Azure, each NIC gets a second VF (virtual function) interface from the host’s Mellanox NIC.

In Linux, this appears as a PCI device and requires the mlx4 or mlx5 driver. Since Azure may redeploy VMs on hosts with different Mellanox models, both drivers must be present to ensure Accelerated Networking works across restarts.

RHEL 10 no longer includes the mlx4 driver; therefore, if a VM is placed on a host with an mlx4-based card, Accelerated Networking will not function.
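
To check whether the installed kernel actually ships these modules, you can search the module tree of the running kernel. This is a minimal sketch; on a stock RHEL 10 image the mlx5 search should return a module file, while the mlx4 search is expected to return nothing:
$ sudo find /lib/modules/$(uname -r) -name 'mlx5_core*'
$ sudo find /lib/modules/$(uname -r) -name 'mlx4_core*'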

To check whether Accelerated Networking is working, follow these simple steps:

  1. In a VM with a single network interface, check whether the second (VF) interface is present:
$ sudo ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 7c:ed:8d:04:8b:19 brd ff:ff:ff:ff:ff:ff
    altname enx7ced8d048b19
    inet 192.168.0.10/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eed:8dff:fe04:8b19/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP group default qlen 1000
    link/ether 7c:ed:8d:04:8b:19 brd ff:ff:ff:ff:ff:ff
    altname enP4209p0s2
    altname enP4209s1
    inet6 fe80::7eed:8dff:fe04:8b19/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

In the above output, the second interface, eth1 (the SLAVE one), is the Accelerated Networking interface that provides low latency and high throughput. If it is not present and you see only the lo and eth0 interfaces, then Accelerated Networking is not available.
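
If you prefer to script this check instead of reading the ip output by hand, a minimal sketch (assuming the synthetic interface is named eth0, as in the output above) is:
$ sudo ip -o link show master eth0
If this prints a line for the VF interface (eth1 above), the Accelerated Networking interface is present; if it prints nothing, only the synthetic path is available.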

  2. Check whether network packets are flowing through the VF interface:
$ sudo ethtool -S eth0 | grep ' vf_'
     vf_rx_packets: 5127
     vf_rx_bytes: 4351941
     vf_tx_packets: 4205
     vf_tx_bytes: 906239
     vf_tx_dropped: 0

In the above output, if you see values higher than zero in the vf_* fields, then network packets are flowing through the Accelerated Networking interface. If all vf_* fields are zero, then Accelerated Networking is not working. Please note that you need to query the statistics on the eth0 (synthetic) interface, not on eth1.
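
The same check can be scripted; a minimal sketch that sums the VF packet counters on eth0 (assumed to be the synthetic interface) is:
$ sudo ethtool -S eth0 | awk '/ vf_(rx|tx)_packets:/ {sum += $2} END {if (sum > 0) print "VF traffic detected"; else print "no VF traffic"}'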

  3. Check whether the mlx4 or mlx5 driver has been loaded:
# dmesg | grep -E "mlx5|mlx4|hv_netvsc"
[    1.754296] hv_vmbus: registering driver hv_netvsc
[    1.963826] hv_netvsc 7ced8d04-8b19-7ced-8d04-8b197ced8d04 eth0: VF slot 1 added
[    2.725808] mlx5_core 1071:00:02.0: enabling device (0000 -> 0002)
[    2.728984] mlx5_core 1071:00:02.0: PTM is not supported by PCIe
[    2.729004] mlx5_core 1071:00:02.0: firmware version: 16.30.5000
[    2.918629] hv_netvsc 7ced8d04-8b19-7ced-8d04-8b197ced8d04 eth0: VF registering: eth1
[    2.918854] mlx5_core 1071:00:02.0 eth1: joined to eth0
[    2.922392] mlx5_core 1071:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(256) RxCqeCmprss(0 basic)
[    2.965779] ib_srpt MAD registration failed for mlx5_0-1.
[    2.966272] ib_srpt srpt_add_one(mlx5_0) failed.
[   31.417628] mlx5_core 1071:00:02.0 eth1: Link up
[   31.450983] hv_netvsc 7ced8d04-8b19-7ced-8d04-8b197ced8d04 eth0: Data path switched to VF: eth1
[   32.898898] hv_netvsc 7ced8d04-8b19-7ced-8d04-8b197ced8d04 eth0: Data path switched from VF: eth1
[   34.508219] mlx5_core 1071:00:02.0 eth1: Link up
[   34.528321] hv_netvsc 7ced8d04-8b19-7ced-8d04-8b197ced8d04 eth0: Data path switched to VF: eth1

In the above output, the mlx5 driver has been loaded automatically, enabling Accelerated Networking. On Azure hosts with different physical NICs, the mlx4 driver might be required instead, and that driver was removed in RHEL 10.
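
As an alternative to scanning dmesg (older messages can rotate out of the kernel ring buffer on long-running VMs), you can check the loaded modules directly; a quick sketch:
$ lsmod | grep -E '^mlx(4|5)_core'
$ modinfo mlx4_core
If mlx5_core (or mlx4_core) is listed by lsmod, the corresponding driver is loaded. On a stock RHEL 10 kernel, the modinfo mlx4_core command is expected to fail with a "module not found" error, confirming that the driver is not shipped at all.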

mlx4 driver

  • Supports ConnectX-3 family (older generation).
  • NICs typically presented as:
    1. Mellanox Technologies MT27500 Family [ConnectX-3]
    2. Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
  • Azure VMs that used these were the earlier sizes supporting AN (older D/DSv2, some F/Fs_v2, etc.).
  • In modern Azure regions, mlx4-based Accelerated Networking is deprecated — only mlx5 remains for new VM families.

mlx5 driver

  • Supports ConnectX-4 and newer (ConnectX-4, ConnectX-5, and later).
  • NICs typically show up as:
    1. Mellanox Technologies MT27700 Family [ConnectX-4]
    2. Mellanox Technologies MT27800 Family [ConnectX-5]
  • These are the current standard for Accelerated Networking in Azure.
  • Used across most modern VM series: D_v3/v4/v5, E_v3/v4/v5, Fsv2, Lsv2, HBv2/HBv3/HBv4, HC, NDv2/NDv4, NCas_T4_v3, etc.

  4. Check which Mellanox card is present in the Azure host:
$ sudo lspci | grep Mellanox
1071:00:02.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function] (rev 80)

In the above output, a Mellanox ConnectX-5 physical NIC is used, so the mlx5 driver is loaded. Both Mellanox ConnectX-5 and ConnectX-4 use the mlx5 driver.

$ sudo lspci | grep Mellanox
0462:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

If the output shows a Mellanox ConnectX-3, then the mlx4 driver is required, and that driver was removed from RHEL 10.
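
To see in a single command both the VF model and which kernel driver (if any) is bound to it, lspci can also print the kernel driver; a minimal sketch:
$ sudo lspci -k | grep -A 3 Mellanox
If the output includes a "Kernel driver in use:" line, the VF is bound to a driver; if that line is missing for a ConnectX-3 VF on RHEL 10, the device is visible but has no driver to handle it.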

Solution:
This issue has been raised with the Microsoft Azure support teams and is awaiting resolution. We will update this article when more information is available.

2025-09-18: Update 1:
Message from Microsoft Support: Based on our current investigation, three major Linux distributions (RHEL, CentOS, and Rocky) have removed the mlx4 driver, which is required to support Mellanox CX-3 cards. Our engineering team is actively working with Red Hat on this matter to explore a possible solution. We will share an update with you as soon as we have more information.

2025-09-19: Update 2:
One solution to the missing mlx4 driver in RHEL 10 and derivatives is to use a third-party compiled kernel that still includes the mlx4 driver, such as the one provided by the ELRepo Project.

The ELRepo Project focuses on hardware related packages to enhance your experience with Enterprise Linux. This includes SCSI/SATA/PATA drivers, filesystem drivers, graphics drivers, network drivers, sound drivers and video drivers.

The kernel-ml packages are built from the sources available from the “main line stable” branch of The Linux Kernel Archives. There is also kernel-lt, which is based on a “long term support” branch.

To install one of these kernels, do the following:

$ sudo dnf install https://www.elrepo.org/elrepo-release-10.el10.elrepo.noarch.rpm
$ sudo dnf --enablerepo=elrepo-kernel install kernel-ml
$ sudo reboot
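
After the reboot, you can verify that the ELRepo kernel is running and that it provides the mlx4 driver; a quick sketch (the exact kernel version in the output will differ):
$ uname -r
$ modinfo -n mlx4_core
The modinfo command should print the path of the mlx4_core module file under /lib/modules for the new kernel.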

According to ELRepo kernel-ml page:

These packages are provided ‘As-Is’ with no implied warranty or support. Using the kernel-ml may expose your system to security, performance and/or data corruption issues. Since timely updates may not be available from the ELRepo Project, the end user has the ultimate responsibility for deciding whether to continue using the kernel-ml packages in regular service. These packages are not signed for SecureBoot.
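
Since these kernels are not signed for Secure Boot, it is worth checking whether Secure Boot is enabled on the VM before switching kernels; a quick sketch (mokutil may need to be installed first):
$ sudo mokutil --sb-state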

2025-09-20: Update 3:
The ELRepo Project has provided a kmod-mlx4 package for el10, which can be installed on top of a RHEL 10-based kernel and therefore does not require a full kernel replacement.

To install the ELRepo kmod-mlx4 package, do the following:

$ sudo dnf install https://www.elrepo.org/elrepo-release-10.el10.elrepo.noarch.rpm
$ sudo dnf install kmod-mlx4
$ sudo reboot

Note that the module stops working after a minor OS upgrade (e.g., 10.0 to 10.1) and requires a rebuild for the new kernel series.
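
A quick way to confirm that the installed kmod still matches the currently running kernel (for example after an update) is to check where the module resolves from and whether it loads; a minimal sketch:
$ modinfo -n mlx4_core
$ sudo modprobe mlx4_core && echo "mlx4_core loaded successfully"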

ProComputers recommends reading the full ELRepo FAQ to understand what the kmod-mlx4 package is and how it works.

2025-09-23: Update 4:
We have tested AlmaLinux 10, which does not have this issue. The AlmaLinux 10 kernel still includes the mlx4 driver.
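
This can be confirmed directly on an AlmaLinux 10 VM; a quick sketch (the exact path and kernel version will differ):
$ modinfo -n mlx4_core
On AlmaLinux 10 this should print the path of the mlx4_core module shipped with the distribution kernel, whereas on RHEL 10, CentOS Stream 10, and Rocky Linux 10 it fails with a "module not found" error.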

FAQ:

  1. How can I check whether Accelerated Networking is working in the VM?
    Please look at this Microsoft article.
  2. Are there any VM sizes that are using the mlx5 driver only? Would it be possible for Microsoft to make such a list and document it?
    Microsoft Support is checking with the engineering team and will get back once there is an update.
  3. Is there an estimated date when a solution will be available?
    Since this has a dependency on Red Hat, Microsoft is unable to provide an ETA at this time.

Copyright notice:
Red Hat and CentOS are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries. We are not affiliated with, endorsed by or sponsored by Red Hat or the CentOS Project.

All other trademarks are the property of their respective owners.