.. SPDX-License-Identifier: GPL-2.0+

=================================================================
Linux Base Driver for the Intel(R) Ethernet Controller 800 Series
=================================================================

Intel ice Linux driver.
Copyright(c) 2018-2021 Intel Corporation.

Contents
========

- Overview
- Identifying Your Adapter
- Important Notes
- Additional Features & Configurations
- Performance Optimization


The associated Virtual Function (VF) driver for this driver is iavf.

Driver information can be obtained using ethtool and lspci.

For questions related to hardware requirements, refer to the documentation
supplied with your Intel adapter. All hardware requirements listed apply to use
with Linux.

This driver supports XDP (Express Data Path) and AF_XDP zero-copy. Note that
XDP is blocked for frame sizes larger than 3KB.


Identifying Your Adapter
========================
For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
https://www.intel.com/support


Important Notes
===============

Packet drops may occur under receive stress
-------------------------------------------
Devices based on the Intel(R) Ethernet Controller 800 Series are designed to
tolerate a limited amount of system latency during PCIe and DMA transactions.
If these transactions take longer than the tolerated latency, it can impact the
length of time the packets are buffered in the device and associated memory,
which may result in dropped packets. These packet drops typically do not have
a noticeable impact on throughput and performance under standard workloads.

If these packet drops appear to affect your workload, the following may improve
the situation:

1) Make sure that your system's physical memory is in a high-performance
   configuration, as recommended by the platform vendor. A common
   recommendation is for all channels to be populated with a single DIMM
   module.
2) In your system's BIOS/UEFI settings, select the "Performance" profile.
3) Your distribution may provide tools like "tuned," which can help tweak
   kernel settings to achieve better standard settings for different workloads.


Configuring SR-IOV for improved network security
------------------------------------------------
In a virtualized environment, on Intel(R) Ethernet Network Adapters that
support SR-IOV, the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control), IEEE
802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, and to ensure isolation from
unintended traffic streams, configure all SR-IOV enabled ports for VLAN tagging
from the administrative interface on the PF. This configuration allows
unexpected, and potentially malicious, frames to be dropped.

See "Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports" later in this
README for configuration instructions.


Do not unload port driver if VF with active VM is bound to it
-------------------------------------------------------------
Do not unload a port's driver if a Virtual Function (VF) with an active Virtual
Machine (VM) is bound to it. Doing so will cause the port to appear to hang.
Once the VM shuts down, or otherwise releases the VF, the command will
complete.


Additional Features and Configurations
======================================

ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. Download it at:
https://kernel.org/pub/software/network/ethtool/

NOTE: The rx_bytes value of ethtool does not match the rx_bytes value of
Netdev, due to the 4-byte CRC being stripped by the device. The difference
between the two rx_bytes values will be 4 x the number of Rx packets. For
example, if Rx packets are 10 and Netdev (software statistics) displays
rx_bytes as "X", then ethtool (hardware statistics) will display rx_bytes as
"X+40" (4 bytes CRC x 10 packets).


Viewing Link Messages
---------------------
Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set dmesg to eight by entering the following::

  # dmesg -n 8

NOTE: This setting is not saved across reboots.


Dynamic Device Personalization
------------------------------
Dynamic Device Personalization (DDP) allows you to change the packet processing
pipeline of a device by applying a profile package to the device at runtime.
Profiles can be used to, for example, add support for new protocols, change
existing protocols, or change default settings. DDP profiles can also be rolled
back without rebooting the system.

The DDP package loads during device initialization. The driver looks for
``intel/ice/ddp/ice.pkg`` in your firmware root (typically ``/lib/firmware/``
or ``/lib/firmware/updates/``) and checks that it contains a valid DDP package
file.
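
The search order can be sketched as a small shell helper; ``find_ddp_pkg`` is
an illustrative name, not part of the driver or any tool:

```shell
# Sketch: return the first DDP package found under the given firmware
# roots, mirroring the updates/-before-base search order described above.
find_ddp_pkg() {
    for root in "$@"; do
        pkg="$root/intel/ice/ddp/ice.pkg"
        if [ -e "$pkg" ]; then
            echo "$pkg"
            return 0
        fi
    done
    return 1
}

find_ddp_pkg /lib/firmware/updates /lib/firmware ||
    echo "no DDP package found; the driver would fall back to Safe Mode"
```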

NOTE: Your distribution should likely have provided the latest DDP file, but if
ice.pkg is missing, you can find it in the linux-firmware repository or from
intel.com.

If the driver is unable to load the DDP package, the device will enter Safe
Mode. Safe Mode disables advanced and performance features and supports only
basic traffic and minimal functionality, such as updating the NVM or
downloading a new driver or DDP package. Safe Mode only applies to the affected
physical function and does not impact any other PFs. See the "Intel(R) Ethernet
Adapters and Devices User Guide" for more details on DDP and Safe Mode.

NOTES:

- If you encounter issues with the DDP package file, you may need to download
  an updated driver or DDP package file. See the log messages for more
  information.

- The ice.pkg file is a symbolic link to the default DDP package file.

- You cannot update the DDP package if any PF drivers are already loaded. To
  overwrite a package, unload all PFs and then reload the driver with the new
  package.

- Only the first loaded PF per device can download a package for that device.

You can install specific DDP package files for different physical devices in
the same system. To install a specific DDP package file:

1. Download the DDP package file you want for your device.

2. Rename the file ice-xxxxxxxxxxxxxxxx.pkg, where 'xxxxxxxxxxxxxxxx' is the
   unique 64-bit PCI Express device serial number (in hex) of the device you
   want the package downloaded on. The filename must include the complete
   serial number (including leading zeros) and be all lowercase. For example,
   if the 64-bit serial number is b887a3ffffca0568, then the file name would be
   ice-b887a3ffffca0568.pkg.
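
   Once you have the serial number (the lspci commands in this step show how
   to read it), the renaming rule can be captured in a tiny helper;
   ``ddp_filename`` is an illustrative name:

```shell
# Sketch: build the device-specific package name from a 64-bit serial
# number, accepting the dashed form that lspci prints and forcing the
# all-lowercase, no-dashes form described above.
ddp_filename() {
    serial=$(printf '%s' "$1" | tr -d '-' | tr 'A-F' 'a-f')
    printf 'ice-%s.pkg\n' "$serial"
}

ddp_filename "b8-87-a3-ff-ff-ca-05-68"   # -> ice-b887a3ffffca0568.pkg
```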

   To find the serial number from the PCI bus address, you can use the
   following command::

     # lspci -vv -s af:00.0 | grep -i Serial
     Capabilities: [150 v1] Device Serial Number b8-87-a3-ff-ff-ca-05-68

   You can use the following command to format the serial number without the
   dashes::

     # lspci -vv -s af:00.0 | grep -i Serial | awk '{print $7}' | sed s/-//g
     b887a3ffffca0568

3. Copy the renamed DDP package file to
   ``/lib/firmware/updates/intel/ice/ddp/``. If the directory does not yet
   exist, create it before copying the file.

4. Unload all of the PFs on the device.

5. Reload the driver with the new package.

NOTE: The presence of a device-specific DDP package file overrides the loading
of the default DDP package file (ice.pkg).


Intel(R) Ethernet Flow Director
-------------------------------
The Intel Ethernet Flow Director performs the following tasks:

- Directs receive packets according to their flows to different queues
- Enables tight control on routing a flow in the platform
- Matches flows and CPU cores for flow affinity

NOTE: This driver supports the following flow types:

- IPv4
- TCPv4
- UDPv4
- SCTPv4
- IPv6
- TCPv6
- UDPv6
- SCTPv6

Each flow type supports valid combinations of IP addresses (source or
destination) and UDP/TCP/SCTP ports (source and destination). You can supply
only a source IP address, a source IP address and a destination port, or any
combination of one or more of these four parameters.

NOTE: This driver allows you to filter traffic based on a user-defined flexible
two-byte pattern and offset by using the ethtool user-def and mask fields. Only
L3 and L4 flow types are supported for user-defined flexible filters.
For a
given flow type, you must clear all Intel Ethernet Flow Director filters before
changing the input set (for that flow type).


Flow Director Filters
---------------------
Flow Director filters are used to direct traffic that matches specified
characteristics. They are enabled through ethtool's ntuple interface. To enable
or disable the Intel Ethernet Flow Director and these filters::

  # ethtool -K <ethX> ntuple <off|on>

NOTE: When you disable ntuple filters, all the user programmed filters are
flushed from the driver cache and hardware. All needed filters must be re-added
when ntuple is re-enabled.

To display all of the active filters::

  # ethtool -u <ethX>

To add a new filter::

  # ethtool -U <ethX> flow-type <type> src-ip <ip> [m <ip_mask>] dst-ip <ip>
    [m <ip_mask>] src-port <port> [m <port_mask>] dst-port <port>
    [m <port_mask>] action <queue>

Where:

- <ethX> - the Ethernet device to program
- <type> - can be ip4, tcp4, udp4, sctp4, ip6, tcp6, udp6, sctp6
- <ip> - the IP address to match on
- <ip_mask> - the IPv4 address to mask on (NOTE: these filters use inverted
  masks)
- <port> - the port number to match on
- <port_mask> - the 16-bit integer for masking (NOTE: these filters use
  inverted masks)
- <queue> - the queue to direct traffic toward (-1 discards the matched
  traffic)

To delete a filter::

  # ethtool -U <ethX> delete <N>

Where <N> is the filter ID displayed when printing all the active filters,
and may also have been specified using "loc <N>" when adding the filter.

EXAMPLES:

To add a filter that directs packets to queue 2::

  # ethtool -U <ethX> flow-type tcp4 src-ip 192.168.10.1 dst-ip \
    192.168.10.2 src-port 2000 dst-port 2001 action 2 [loc 1]

To set a filter using only the source and destination IP address::

  # ethtool -U <ethX> flow-type tcp4 src-ip 192.168.10.1 dst-ip \
    192.168.10.2 action 2 [loc 1]

To set a filter based on a user-defined pattern and offset::

  # ethtool -U <ethX> flow-type tcp4 src-ip 192.168.10.1 dst-ip \
    192.168.10.2 user-def 0x4FFFF action 2 [loc 1]

where the value of the user-def field contains the offset (4 bytes) and
the pattern (0xffff).

To match TCP traffic sent from 192.168.0.1, port 5300, directed to 192.168.0.5,
port 80, and then send it to queue 7::

  # ethtool -U enp130s0 flow-type tcp4 src-ip 192.168.0.1 dst-ip 192.168.0.5
    src-port 5300 dst-port 80 action 7

To add a TCPv4 filter with a partial mask for a source IP subnet::

  # ethtool -U <ethX> flow-type tcp4 src-ip 192.168.0.0 m 0.255.255.255 dst-ip
    192.168.5.12 src-port 12600 dst-port 31 action 12

NOTES:

For each flow-type, the programmed filters must all have the same matching
input set. For example, issuing the following two commands is acceptable::

  # ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
  # ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.5 src-port 55 action 10

Issuing the next two commands, however, is not acceptable, since the first
specifies src-ip and the second specifies dst-ip::

  # ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
  # ethtool -U enp130s0 flow-type ip4 dst-ip 192.168.0.5 src-port 55 action 10

The second command will fail with an error.
You may program multiple filters
with the same fields, using different values, but, on one device, you may not
program two tcp4 filters with different matching fields.

The ice driver does not support matching on a subportion of a field, thus
partial mask fields are not supported.


Flex Byte Flow Director Filters
-------------------------------
The driver also supports matching user-defined data within the packet payload.
This flexible data is specified using the "user-def" field of the ethtool
command in the following way:

.. table::

    ============================== ============================
    ``31    28    24    20    16`` ``15    12    8    4    0``
    ``offset into packet payload`` ``2 bytes of flexible data``
    ============================== ============================

For example,

::

  ... user-def 0x4FFFF ...

tells the filter to look 4 bytes into the payload and match that value against
0xFFFF. The offset is based on the beginning of the payload, and not the
beginning of the packet. Thus

::

  flow-type tcp4 ... user-def 0x8BEAF ...

would match TCP/IPv4 packets which have the value 0xBEAF 8 bytes into the
TCP/IPv4 payload.

Note that ICMP headers are parsed as 4 bytes of header and 4 bytes of payload.
Thus to match the first byte of the payload, you must actually add 4 bytes to
the offset. Also note that ip4 filters match both ICMP frames as well as raw
(unknown) ip4 frames, where the payload will be the L3 payload of the IP4
frame.

The maximum offset is 64. The hardware will only read up to 64 bytes of data
from the payload. The offset must be even because the flexible data is 2 bytes
long and must be aligned to byte 0 of the packet payload.

The user-defined flexible offset is also considered part of the input set and
cannot be programmed separately for multiple filters of the same type.
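
As a quick cross-check of the layout above, the user-def value can be computed
with shell arithmetic; ``userdef`` is just an illustrative helper, not an
ethtool option:

```shell
# Sketch: encode an ethtool user-def value with the payload offset in
# the upper bits and the 2-byte match pattern in the low 16 bits.
userdef() {
    printf '0x%X\n' $(( ($1 << 16) | ($2 & 0xFFFF) ))
}

userdef 4 0xFFFF   # -> 0x4FFFF (match 0xFFFF at payload offset 4)
userdef 8 0xBEAF   # -> 0x8BEAF (match 0xBEAF at payload offset 8)
```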
However,
the flexible data is not part of the input set and multiple filters may use the
same offset but match against different data.


RSS Hash Flow
-------------
Allows you to set the hash bytes per flow type and any combination of one or
more options for Receive Side Scaling (RSS) hash byte configuration.

::

  # ethtool -N <ethX> rx-flow-hash <type> <option>

Where <type> is:

  tcp4    signifying TCP over IPv4
  udp4    signifying UDP over IPv4
  gtpc4   signifying GTP-C over IPv4
  gtpc4t  signifying GTP-C (include TEID) over IPv4
  gtpu4   signifying GTP-U over IPv4
  gtpu4e  signifying GTP-U and Extension Header over IPv4
  gtpu4u  signifying GTP-U PSC Uplink over IPv4
  gtpu4d  signifying GTP-U PSC Downlink over IPv4
  tcp6    signifying TCP over IPv6
  udp6    signifying UDP over IPv6
  gtpc6   signifying GTP-C over IPv6
  gtpc6t  signifying GTP-C (include TEID) over IPv6
  gtpu6   signifying GTP-U over IPv6
  gtpu6e  signifying GTP-U and Extension Header over IPv6
  gtpu6u  signifying GTP-U PSC Uplink over IPv6
  gtpu6d  signifying GTP-U PSC Downlink over IPv6

And <option> is one or more of:

  s  Hash on the IP source address of the Rx packet.
  d  Hash on the IP destination address of the Rx packet.
  f  Hash on bytes 0 and 1 of the Layer 4 header of the Rx packet.
  n  Hash on bytes 2 and 3 of the Layer 4 header of the Rx packet.
  e  Hash on the GTP TEID (4 bytes) of the Rx packet.


Accelerated Receive Flow Steering (aRFS)
----------------------------------------
Devices based on the Intel(R) Ethernet Controller 800 Series support
Accelerated Receive Flow Steering (aRFS) on the PF. aRFS is a load-balancing
mechanism that allows you to direct packets to the same CPU where an
application is running or consuming the packets in that flow.

NOTES:

- aRFS requires that ntuple filtering is enabled via ethtool.
- aRFS support is limited to the following packet types:

  - TCP over IPv4 and IPv6
  - UDP over IPv4 and IPv6
  - Nonfragmented packets

- aRFS only supports Flow Director filters, which consist of the
  source/destination IP addresses and source/destination ports.
- aRFS and ethtool's ntuple interface both use the device's Flow Director. aRFS
  and ntuple features can coexist, but you may encounter unexpected results if
  there's a conflict between aRFS and ntuple requests. See "Intel(R) Ethernet
  Flow Director" for additional information.

To set up aRFS:

1. Enable the Intel Ethernet Flow Director and ntuple filters using ethtool.

   ::

     # ethtool -K <ethX> ntuple on

2. Set up the number of entries in the global flow table. For example:

   ::

     # NUM_RPS_ENTRIES=16384
     # echo $NUM_RPS_ENTRIES > /proc/sys/net/core/rps_sock_flow_entries

3. Set up the number of entries in the per-queue flow table. For example:

   ::

     # NUM_RX_QUEUES=64
     # for file in /sys/class/net/$IFACE/queues/rx-*/rps_flow_cnt; do
     #   echo $(($NUM_RPS_ENTRIES/$NUM_RX_QUEUES)) > $file;
     # done

4. Disable the IRQ balance daemon (this only stops the service temporarily,
   until the next reboot).

   ::

     # systemctl stop irqbalance

5. Configure the interrupt affinity.

   See ``Documentation/core-api/irq/irq-affinity.rst``.

To disable aRFS using ethtool::

  # ethtool -K <ethX> ntuple off

NOTE: This command will disable ntuple filters and clear any aRFS filters in
software and hardware.

Example Use Case:

1. Set the server application on the desired CPU (e.g., CPU 4).

   ::

     # taskset -c 4 netserver

2. Use netperf to route traffic from the client to CPU 4 on the server with
   aRFS configured. This example uses TCP over IPv4.

   ::

     # netperf -H <Host IPv4 Address> -t TCP_STREAM


Enabling Virtual Functions (VFs)
--------------------------------
Use sysfs to enable virtual functions (VF).

For example, you can create 4 VFs as follows::

  # echo 4 > /sys/class/net/<ethX>/device/sriov_numvfs

To disable VFs, write 0 to the same file::

  # echo 0 > /sys/class/net/<ethX>/device/sriov_numvfs

The maximum number of VFs for the ice driver is 256 total (all ports). To check
how many VFs each PF supports, use the following command::

  # cat /sys/class/net/<ethX>/device/sriov_totalvfs

NOTE: You cannot use SR-IOV when link aggregation (LAG)/bonding is active, and
vice versa. The driver enforces this mutual exclusion.


Displaying VF Statistics on the PF
----------------------------------
Use the following command to display the statistics for the PF and its VFs::

  # ip -s link show dev <ethX>

NOTE: The output of this command can be very large due to the maximum number of
possible VFs.

The PF driver will display a subset of the statistics for the PF and for all
VFs that are configured. The PF will always print a statistics block for each
of the possible VFs, and it will show zero for all unconfigured VFs.


Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports
--------------------------------------------------------
To configure VLAN tagging for the ports on an SR-IOV enabled adapter, use the
following command. The VLAN configuration should be done before the VF driver
is loaded or the VM is booted. The VF is not aware of the VLAN tag being
inserted on transmit and removed on received frames (sometimes called "port
VLAN" mode).

::

  # ip link set dev <ethX> vf <id> vlan <vlan id>

For example, the following will configure PF eth0 and the first VF on VLAN 10::

  # ip link set dev eth0 vf 0 vlan 10


Enabling a VF link if the port is disconnected
----------------------------------------------
If the physical function (PF) link is down, you can force link up (from the
host PF) on any virtual functions (VF) bound to the PF.

For example, to force link up on VF 0 bound to PF eth0::

  # ip link set eth0 vf 0 state enable

Note: If the command does not work, it may not be supported by your system.


Setting the MAC Address for a VF
--------------------------------
To change the MAC address for the specified VF::

  # ip link set <ethX> vf 0 mac <address>

For example::

  # ip link set <ethX> vf 0 mac 00:01:02:03:04:05

This setting lasts until the PF is reloaded.

NOTE: Assigning a MAC address for a VF from the host will disable any
subsequent requests to change the MAC address from within the VM. This is a
security feature. The VM is not aware of this restriction, so if this is
attempted in the VM, it will trigger MDD events.


Trusted VFs and VF Promiscuous Mode
-----------------------------------
This feature allows you to designate a particular VF as trusted and allows that
trusted VF to request selective promiscuous mode on the Physical Function (PF).

To set a VF as trusted or untrusted, enter the following command in the
Hypervisor::

  # ip link set dev <ethX> vf 1 trust [on|off]

NOTE: It's important to set the VF to trusted before setting promiscuous mode.
If the VM is not trusted, the PF will ignore promiscuous mode requests from the
VF. If the VM becomes trusted after the VF driver is loaded, you must make a
new request to set the VF to promiscuous.

Once the VF is designated as trusted, use the following commands in the VM to
set the VF to promiscuous mode.

For promiscuous all, where <ethX> is a VF interface in the VM::

  # ip link set <ethX> promisc on

For promiscuous multicast, where <ethX> is a VF interface in the VM::

  # ip link set <ethX> allmulticast on

NOTE: By default, the ethtool private flag vf-true-promisc-support is set to
"off," meaning that promiscuous mode for the VF will be limited. To set the
promiscuous mode for the VF to true promiscuous and allow the VF to see all
ingress traffic, use the following command::

  # ethtool --set-priv-flags <ethX> vf-true-promisc-support on

The vf-true-promisc-support private flag does not enable promiscuous mode;
rather, it designates which type of promiscuous mode (limited or true) you will
get when you enable promiscuous mode using the ip link commands above. Note
that this is a global setting that affects the entire device. However, the
vf-true-promisc-support private flag is only exposed to the first PF of the
device. The PF remains in limited promiscuous mode regardless of the
vf-true-promisc-support setting.

Next, add a VLAN interface on the VF interface. For example::

  # ip link add link eth2 name eth2.100 type vlan id 100

Note that the order in which you set the VF to promiscuous mode and add the
VLAN interface does not matter (you can do either first). The result in this
example is that the VF will get all traffic that is tagged with VLAN 100.


Malicious Driver Detection (MDD) for VFs
----------------------------------------
Some Intel Ethernet devices use Malicious Driver Detection (MDD) to detect
malicious traffic from the VF and disable Tx/Rx queues or drop the offending
packet until a VF driver reset occurs. You can view MDD messages in the PF's
system log using the dmesg command.

- If the PF driver logs MDD events from the VF, confirm that the correct VF
  driver is installed.
- To restore functionality, you can manually reload the VF or VM or enable
  automatic VF resets.
- When automatic VF resets are enabled, the PF driver will immediately reset
  the VF and reenable queues when it detects MDD events on the receive path.
- If automatic VF resets are disabled, the PF will not automatically reset the
  VF when it detects MDD events.

To enable or disable automatic VF resets, use the following command::

  # ethtool --set-priv-flags <ethX> mdd-auto-reset-vf on|off


MAC and VLAN Anti-Spoofing Feature for VFs
------------------------------------------
When a malicious driver on a Virtual Function (VF) interface attempts to send a
spoofed packet, it is dropped by the hardware and not transmitted.

NOTE: This feature can be disabled for a specific VF::

  # ip link set <ethX> vf <vf id> spoofchk {off|on}


Jumbo Frames
------------
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.

Use the ifconfig command to increase the MTU size. For example, enter the
following where <ethX> is the interface name::

  # ifconfig <ethX> mtu 9000 up

Alternatively, you can use the ip command as follows::

  # ip link set mtu 9000 dev <ethX>
  # ip link set up dev <ethX>

This setting is not saved across reboots.

NOTE: The maximum MTU setting for jumbo frames is 9702. This corresponds to the
maximum jumbo frame size of 9728 bytes.

NOTE: This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.

NOTE: Packet loss may have a greater impact on throughput when you use jumbo
frames.
If you observe a drop in performance after enabling jumbo frames,
enabling flow control may mitigate the issue.


Speed and Duplex Configuration
------------------------------
In addressing speed and duplex configuration issues, you need to distinguish
between copper-based adapters and fiber-based adapters.

In the default mode, an Intel(R) Ethernet Network Adapter using copper
connections will attempt to auto-negotiate with its link partner to determine
the best setting. If the adapter cannot establish link with the link partner
using auto-negotiation, you may need to manually configure the adapter and link
partner to identical settings to establish link and pass packets. This should
only be needed when attempting to link with an older switch that does not
support auto-negotiation or one that has been forced to a specific speed or
duplex mode. Your link partner must match the setting you choose. 1 Gbps speeds
and higher cannot be forced. Use the autonegotiation advertising setting to
manually set devices for 1 Gbps and higher.

Speed, duplex, and autonegotiation advertising are configured through the
ethtool utility. For the latest version, download and install ethtool from the
following website:

https://kernel.org/pub/software/network/ethtool/

To see the speed configurations your device supports, run the following::

  # ethtool <ethX>

Caution: Only experienced network administrators should force speed and duplex
or change autonegotiation advertising manually. The settings at the switch must
always match the adapter settings. Adapter performance may suffer or your
adapter may not operate if you configure the adapter differently from your
switch.
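
When changing the advertised modes, ``ethtool -s <ethX> advertise <mask>``
takes a link-mode bitmask; the bit positions below come from the ethtool(8)
man page, and which modes exist at all depends on your adapter:

```shell
# Sketch: 1000baseT/Full is bit 5 of the ethtool link-mode mask (see
# ethtool(8)); build and print the advertise value for that single mode.
adv=$(( 1 << 5 ))
printf '0x%03X\n' "$adv"   # -> 0x020

# With a real device you would then run (requires root and the NIC):
#   ethtool -s <ethX> advertise 0x020
```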


Data Center Bridging (DCB)
--------------------------
NOTE: The kernel assumes that TC0 is available, and will disable Priority Flow
Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0 is
enabled when setting up DCB on your switch.

DCB is a configuration Quality of Service implementation in hardware. It uses
the VLAN priority tag (802.1p) to filter traffic. That means that there are 8
different priorities that traffic can be filtered into. It also enables
priority flow control (802.1Qbb) which can limit or eliminate the number of
dropped packets during network stress. Bandwidth can be allocated to each of
these priorities, which is enforced at the hardware level (802.1Qaz).

DCB is normally configured on the network using the DCBX protocol (802.1Qaz), a
specialization of LLDP (802.1AB). The ice driver supports the following
mutually exclusive variants of DCBX support:

1) Firmware-based LLDP Agent
2) Software-based LLDP Agent

In firmware-based mode, firmware intercepts all LLDP traffic and handles DCBX
negotiation transparently for the user. In this mode, the adapter operates in
"willing" DCBX mode, receiving DCB settings from the link partner (typically a
switch). The local user can only query the negotiated DCB configuration. For
information on configuring DCBX parameters on a switch, please consult the
switch manufacturer's documentation.

In software-based mode, LLDP traffic is forwarded to the network stack and user
space, where a software agent can handle it. In this mode, the adapter can
operate in either "willing" or "nonwilling" DCBX mode and DCB configuration can
be both queried and set locally. This mode requires the FW-based LLDP Agent to
be disabled.

NOTE:

- You can enable and disable the firmware-based LLDP Agent using an ethtool
  private flag.
  Refer to the "FW-LLDP (Firmware Link Layer Discovery Protocol)"
  section in this README for more information.
- In software-based DCBX mode, you can configure DCB parameters using software
  LLDP/DCBX agents that interface with the Linux kernel's DCB Netlink API. We
  recommend using OpenLLDP as the DCBX agent when running in software mode. For
  more information, see the OpenLLDP man pages and
  https://github.com/intel/openlldp.
- The driver implements the DCB netlink interface layer to allow the user space
  to communicate with the driver and query DCB configuration for the port.
- iSCSI with DCB is not supported.


FW-LLDP (Firmware Link Layer Discovery Protocol)
------------------------------------------------
Use ethtool to change FW-LLDP settings. The FW-LLDP setting is per port and
persists across boots.

To enable LLDP::

  # ethtool --set-priv-flags <ethX> fw-lldp-agent on

To disable LLDP::

  # ethtool --set-priv-flags <ethX> fw-lldp-agent off

To check the current LLDP setting::

  # ethtool --show-priv-flags <ethX>

NOTE: You must enable the UEFI HII "LLDP Agent" attribute for this setting to
take effect. If "LLDP AGENT" is set to disabled, you cannot enable it from the
OS.


Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ice. When transmit is enabled,
pause frames are generated when the receive packet buffer crosses a predefined
threshold. When receive is enabled, the transmit unit will halt for the time
delay specified when a pause frame is received.

NOTE: You must have a flow control capable link partner.

Flow Control is disabled by default.

Use ethtool to change the flow control settings.

To enable or disable Rx or Tx Flow Control::

  # ethtool -A <ethX> rx <on|off> tx <on|off>

NOTE: This command only enables or disables Flow Control if auto-negotiation is
disabled. If auto-negotiation is enabled, this command changes the parameters
used for auto-negotiation with the link partner.

NOTE: Flow Control auto-negotiation is part of link auto-negotiation. Depending
on your device, you may not be able to change the auto-negotiation setting.

NOTE:

- The ice driver requires flow control on both the port and link partner. If
  flow control is disabled on one of the sides, the port may appear to hang on
  heavy traffic.
- You may encounter issues with link-level flow control (LFC) after disabling
  DCB. The LFC status may show as enabled but traffic is not paused. To resolve
  this issue, disable and reenable LFC using ethtool::

    # ethtool -A <ethX> rx off tx off
    # ethtool -A <ethX> rx on tx on


NAPI
----
This driver supports NAPI (Rx polling mode).

See :ref:`Documentation/networking/napi.rst <napi>` for more information.


MACVLAN
-------
This driver supports MACVLAN. Kernel support for MACVLAN can be tested by
checking if the MACVLAN driver is loaded. You can run 'lsmod | grep macvlan' to
see if the MACVLAN driver is loaded or run 'modprobe macvlan' to try to load
the MACVLAN driver.

NOTE:

- In passthru mode, you can only set up one MACVLAN device. It will inherit the
  MAC address of the underlying PF (Physical Function) device.


IEEE 802.1ad (QinQ) Support
---------------------------
The IEEE 802.1ad standard, informally known as QinQ, allows for multiple VLAN
IDs within a single Ethernet frame. VLAN IDs are sometimes referred to as
"tags," and multiple VLAN IDs are thus referred to as a "tag stack."
Tag stacks
allow L2 tunneling and the ability to segregate traffic within a particular
VLAN ID, among other uses.

NOTES:

- Receive checksum offloads and VLAN acceleration are not supported for 802.1ad
  (QinQ) packets.

- 0x88A8 traffic will not be received unless VLAN stripping is disabled with
  the following command::

    # ethtool -K <ethX> rxvlan off

- 0x88A8/0x8100 double VLANs cannot be used with 0x8100 or 0x8100/0x8100 VLANs
  configured on the same port. 0x88A8/0x8100 traffic will not be received if
  0x8100 VLANs are configured.

- The VF can only transmit 0x88A8/0x8100 (i.e., 802.1ad/802.1Q) traffic if:

  1) The VF is not assigned a port VLAN.
  2) spoofchk is disabled from the PF. If you enable spoofchk, the VF will
     not transmit 0x88A8/0x8100 traffic.

- The VF may not receive all network traffic based on the Inner VLAN header
  when VF true promiscuous mode (vf-true-promisc-support) and double VLANs are
  enabled in SR-IOV mode.

The following are examples of how to configure 802.1ad (QinQ)::

  # ip link add link eth0 eth0.24 type vlan proto 802.1ad id 24
  # ip link add link eth0.24 eth0.24.371 type vlan proto 802.1Q id 371

Where "24" and "371" are example VLAN IDs.


Tunnel/Overlay Stateless Offloads
---------------------------------
Supported tunnels and overlays include VXLAN, GENEVE, and others depending on
hardware and software configuration. Stateless offloads are enabled by default.

To view the current state of all offloads::

  # ethtool -k <ethX>


UDP Segmentation Offload
------------------------
Allows the adapter to offload transmit segmentation of UDP packets with
payloads up to 64K into valid Ethernet frames. Because the adapter hardware is
able to complete data segmentation much faster than operating system software,
this feature may improve transmission performance.
In addition, the adapter may use fewer CPU resources.

NOTE:

- The application sending UDP packets must support UDP segmentation offload.

To enable/disable UDP Segmentation Offload, issue the following command::

  # ethtool -K <ethX> tx-udp-segmentation [off|on]


GNSS module
-----------
Requires a kernel compiled with CONFIG_GNSS=y or CONFIG_GNSS=m.

Allows the user to read messages from the GNSS hardware module and write
supported commands. If the module is physically present, a GNSS device is
spawned: ``/dev/gnss<id>``.

The protocol of the write command depends on the GNSS hardware module, as the
driver writes raw bytes from the GNSS object to the receiver over i2c. Please
refer to the hardware GNSS module documentation for configuration details.


Firmware (FW) logging
---------------------
The driver supports FW logging via the debugfs interface on PF 0 only. The FW
running on the NIC must support FW logging; if the FW doesn't support FW
logging, the 'fwlog' file will not be created in the ice debugfs directory.

Module configuration
~~~~~~~~~~~~~~~~~~~~
Firmware logging is configured on a per-module basis. Each module can be set to
a value independent of the other modules (unless the module 'all' is
specified). The modules are instantiated under the 'fwlog/modules' directory.

The user can set the log level for a module by writing to the module file like
this::

  # echo <log_level> > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/<module>

where

* log_level is a name as described below. Each level includes the
  messages from the previous/lower level

    * none
    * error
    * warning
    * normal
    * verbose

* module is a name that represents the module to receive events for.
  The module names are

    * general
    * ctrl
    * link
    * link_topo
    * dnl
    * i2c
    * sdp
    * mdio
    * adminq
    * hdma
    * lldp
    * dcbx
    * dcb
    * xlr
    * nvm
    * auth
    * vpd
    * iosf
    * parser
    * sw
    * scheduler
    * txq
    * rsvd
    * post
    * watchdog
    * task_dispatch
    * mng
    * synce
    * health
    * tsdrv
    * pfreg
    * mdlver
    * all

The name 'all' is special and allows the user to set all of the modules to the
specified log_level or to read the log_level of all of the modules.

Example usage to configure the modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To set a single module to 'verbose'::

  # echo verbose > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/link

To set multiple modules, issue the command multiple times::

  # echo verbose > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/link
  # echo warning > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/ctrl
  # echo none > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/dcb

To set all the modules to the same value::

  # echo normal > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/all

To read the log_level of a specific module (e.g. module 'general')::

  # cat /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/general

To read the log_level of all the modules::

  # cat /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/modules/all

Enabling FW log
~~~~~~~~~~~~~~~
Configuring the modules indicates to the FW that the configured modules should
generate events that the driver is interested in, but it **does not** send the
events to the driver until the enable message is sent to the FW. To do this,
the user can write a 1 (enable) or 0 (disable) to 'fwlog/enable'.
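Assuming the debugfs file supports reads (as recent ice drivers do), the
current enable state can also be read back from the same file, using the
example PCI address from earlier in this section::

  # cat /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/enable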
For example, to enable FW logging::

  # echo 1 > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/enable

Retrieving FW log data
~~~~~~~~~~~~~~~~~~~~~~
The FW log data can be retrieved by reading from 'fwlog/data'. The user can
write any value to 'fwlog/data' to clear the data. The data can only be cleared
when FW logging is disabled. The FW log data is a binary file that is sent to
Intel and used to help debug user issues.

An example to read the data is::

  # cat /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/data > fwlog.bin

An example to clear the data is::

  # echo 0 > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/data

Changing how often the log events are sent to the driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The driver receives FW log data from the Admin Receive Queue (ARQ). The
frequency at which the FW sends ARQ events can be configured by writing to
'fwlog/nr_messages'. The range is 1-128 (1 means push every log message, 128
means push only when the max AQ command buffer is full). The suggested value is
10. The user can see the configured value by reading 'fwlog/nr_messages'. An
example to set the value is::

  # echo 50 > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/nr_messages

Configuring the amount of memory used to store FW log data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The driver stores FW log data within the driver. The default size of the memory
used to store the data is 1MB. Some use cases may require more or less data, so
the user can change the amount of memory that is allocated for FW log data.
To change the amount of memory, write to 'fwlog/log_size'. The value must be
one of: 128K, 256K, 512K, 1M, or 2M. FW logging must be disabled to change the
value.
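Assuming the debugfs file supports reads, the currently configured size can be
queried before changing it, using the same example PCI address::

  # cat /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/log_size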
An example of changing the value is::

  # echo 128K > /sys/kernel/debug/ice/0000\:18\:00.0/fwlog/log_size


Performance Optimization
========================
Driver defaults are meant to fit a wide variety of workloads, but if further
optimization is required, we recommend experimenting with the following
settings.


Rx Descriptor Ring Size
-----------------------
To reduce the number of Rx packet discards, increase the number of Rx
descriptors for each Rx ring using ethtool.

Check if the interface is dropping Rx packets due to buffers being full
(rx_dropped.nic can mean that there is no PCIe bandwidth)::

  # ethtool -S <ethX> | grep "rx_dropped"

If the previous command shows drops on queues, it may help to increase
the number of descriptors using 'ethtool -G'::

  # ethtool -G <ethX> rx <N>

  Where <N> is the desired number of ring entries/descriptors

This can provide temporary buffering for issues that create latency while
the CPUs process descriptors.


Interrupt Rate Limiting
-----------------------
This driver supports an adaptive interrupt throttle rate (ITR) mechanism that
is tuned for general workloads. The user can customize the interrupt rate
control for specific workloads, via ethtool, adjusting the number of
microseconds between interrupts.

To set the interrupt rate manually, you must disable adaptive mode::

  # ethtool -C <ethX> adaptive-rx off adaptive-tx off

For lower CPU utilization:

  Disable adaptive ITR and lower Rx and Tx interrupts. The examples below
  affect every queue of the specified interface.
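  The expected per-queue interrupt rate follows from dividing 1,000,000
  microseconds per second by the ITR interval; quick shell arithmetic confirms
  the figures used below::

    # echo $((1000000 / 80))
    12500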
  Setting rx-usecs and tx-usecs to 80 will limit interrupts to about
  12,500 interrupts per second per queue::

    # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 80 tx-usecs 80

For reduced latency:

  Disable adaptive ITR and ITR by setting rx-usecs and tx-usecs to 0
  using ethtool::

    # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 0 tx-usecs 0

Per-queue interrupt rate settings:

  The following examples are for queues 1 and 3, but you can adjust other
  queues.

  To disable Rx adaptive ITR and set static Rx ITR to 10 microseconds or
  about 100,000 interrupts/second, for queues 1 and 3::

    # ethtool --per-queue <ethX> queue_mask 0xa --coalesce adaptive-rx off
      rx-usecs 10

  To show the current coalesce settings for queues 1 and 3::

    # ethtool --per-queue <ethX> queue_mask 0xa --show-coalesce

Bounding interrupt rates using rx-usecs-high:

  :Valid Range: 0-236 (0=no limit)

  The range of 0-236 microseconds provides an effective range of 4,237 to
  250,000 interrupts per second. The value of rx-usecs-high can be set
  independently of rx-usecs and tx-usecs in the same ethtool command, and is
  also independent of the adaptive interrupt moderation algorithm. The
  underlying hardware supports granularity in 4-microsecond intervals, so
  adjacent values may result in the same interrupt rate.

  The following command would disable adaptive interrupt moderation, and allow
  a maximum of 5 microseconds before indicating a receive or transmit was
  complete. However, instead of resulting in as many as 200,000 interrupts per
  second, it limits total interrupts per second to 50,000 via the rx-usecs-high
  parameter.

  ::

    # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs-high 20
      rx-usecs 5 tx-usecs 5


Virtualized Environments
------------------------
In addition to the other suggestions in this section, the following may be
helpful to optimize performance in VMs.

Using the appropriate mechanism (vcpupin) in the VM, pin the CPUs to
individual LCPUs, making sure to use a set of CPUs included in the
device's local_cpulist: ``/sys/class/net/<ethX>/device/local_cpulist``.

Configure as many Rx/Tx queues in the VM as available. (See the iavf driver
documentation for the number of queues supported.) For example::

  # ethtool -L <virt_interface> rx <max> tx <max>


Support
=======
For general information, go to the Intel support website at:
https://www.intel.com/support/

If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to intel-wired-lan@lists.osuosl.org.


Trademarks
==========
Intel is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and/or other countries.

* Other names and brands may be claimed as the property of others.