TOMOYO Linux Cross Reference
Linux/Documentation/networking/device_drivers/ethernet/microsoft/netvsc.rst

.. SPDX-License-Identifier: GPL-2.0

======================
Hyper-V network driver
======================

Compatibility
=============

This driver is compatible with Windows Server 2012 R2, Windows Server
2016, and Windows 10.

Features
========

Checksum offload
----------------
  The netvsc driver supports checksum offload as long as the
  Hyper-V host supports it. Windows Server 2016 and Azure
  support checksum offload for TCP and UDP over both IPv4 and
  IPv6. Windows Server 2012 supports checksum offload for TCP only.

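  The current offload state can be inspected, and toggled where the host
  allows it, with ethtool (the interface name eth0 is an assumption)::

        ethtool -k eth0 | grep checksumming
        ethtool -K eth0 tx on rx on
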
Receive Side Scaling
--------------------
  Hyper-V supports receive side scaling. For TCP and UDP, packets can
  be distributed among the available queues based on IP address and port
  number.

  For TCP and UDP, the hash level can be switched between L3 and L4 with
  the ethtool command. TCP/UDP over IPv4 and over IPv6 can be configured
  independently. The default hash level is L4. Currently, only the TX
  hash level can be changed from within the guest.

  On Azure, fragmented UDP packets have a high loss rate with L4
  hashing, so L3 hashing is recommended in that case.

  For example, for UDP over IPv4 on eth0:

  To include UDP port numbers in the hash::

        ethtool -N eth0 rx-flow-hash udp4 sdfn

  To exclude UDP port numbers from the hash::

        ethtool -N eth0 rx-flow-hash udp4 sd

  To show the UDP hash level::

        ethtool -n eth0 rx-flow-hash udp4

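  The RSS indirection table, which maps hash results to receive queues, can
  also be shown with ethtool (eth0 is an assumption)::

        ethtool -x eth0
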
Generic Receive Offload, aka GRO
--------------------------------
  The driver supports GRO, and it is enabled by default. GRO coalesces
  similar packets and significantly reduces CPU usage under heavy Rx
  load.

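  GRO can be inspected and, if needed, disabled with ethtool (the interface
  name eth0 is an assumption)::

        ethtool -k eth0 | grep generic-receive-offload
        ethtool -K eth0 gro off
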
Large Receive Offload (LRO), or Receive Side Coalescing (RSC)
-------------------------------------------------------------
  The driver supports the vSwitch LRO/RSC feature. It reduces the per packet
  processing overhead by coalescing multiple TCP segments when possible. The
  feature is enabled by default on VMs running on Windows Server 2019 and
  later. It may be changed with the ethtool command::

        ethtool -K eth0 lro on
        ethtool -K eth0 lro off

SR-IOV support
--------------
  Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
  is enabled in both the vSwitch and the guest configuration, then the
  Virtual Function (VF) device is passed to the guest as a PCI
  device. In this case, both a synthetic (netvsc) and a VF device are
  visible in the guest OS, and both NICs have the same MAC address.

  The VF is enslaved by the netvsc device.  The netvsc driver will transparently
  switch the data path to the VF when it is available and up.
  Network state (addresses, firewall rules, etc.) should be applied only to the
  netvsc device; the slave device should not be accessed directly in
  most cases.  The exception is when a special queue discipline or
  flow direction is desired; these should be applied directly to the
  VF slave device.

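  As an example of the exception noted above, a queue discipline can be
  attached directly to the VF slave device; the device name enP1s1 below is
  hypothetical::

        tc qdisc add dev enP1s1 root fq
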
Receive Buffer
--------------
  Packets are received into a receive area which is created when the device
  is probed. The receive area is broken into MTU-sized chunks, and each chunk
  may contain one or more packets. The number of receive sections may be changed
  via the ethtool Rx ring parameters.

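  The ring parameters can be shown and changed with ethtool (eth0 and the
  value 2048 are assumptions)::

        ethtool -g eth0
        ethtool -G eth0 rx 2048
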
  There is a similar send buffer which is used to aggregate packets
  for sending.  The send area is broken into chunks, typically of 6144
  bytes, each of which may contain one or more packets. Small
  packets are usually transmitted via a copy into the send buffer. However,
  if the buffer is temporarily exhausted, or the packet to be transmitted is
  an LSO packet, the driver will provide the host with pointers to the data
  from the SKB. This attempts to achieve a balance between the overhead of
  the data copy and the impact of remapping VM memory to be accessible by the
  host.

XDP support
-----------
  XDP (eXpress Data Path) is a feature that runs eBPF bytecode at an early
  stage, when packets arrive at the NIC. The goal is to increase the performance
  of packet processing by reducing the overhead of SKB allocation and of the
  upper network layers.

  hv_netvsc supports XDP in native mode and transparently sets the XDP
  program on the associated VF NIC as well.

  Setting or unsetting an XDP program on the synthetic NIC (netvsc) is
  propagated to the VF NIC automatically. Setting or unsetting an XDP program
  directly on the VF NIC is not recommended: it is not propagated to the
  synthetic NIC, and it may be overwritten by a later setting on the
  synthetic NIC.

  An XDP program cannot run with LRO (RSC) enabled, so LRO must be disabled
  before running XDP::

        ethtool -K eth0 lro off

  The XDP_REDIRECT action is not yet supported.
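
  With iproute2, an XDP program can be attached to and detached from the
  synthetic NIC; the object file xdp_prog.o and its section name are
  assumptions::

        ethtool -K eth0 lro off
        ip link set dev eth0 xdp obj xdp_prog.o sec xdp
        ip link set dev eth0 xdp off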
