.. SPDX-License-Identifier: GPL-2.0+

===========================================================================
Linux Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
===========================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999-2018 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Command Line Parameters
- Additional Features and Configurations
- Known Issues/Troubleshooting
- Support

Identifying Your Adapter
========================
The driver is compatible with devices based on the following:

* Intel(R) Ethernet Controller 82598
* Intel(R) Ethernet Controller 82599
* Intel(R) Ethernet Controller X520
* Intel(R) Ethernet Controller X540
* Intel(R) Ethernet Controller X550
* Intel(R) Ethernet Controller X552
* Intel(R) Ethernet Controller X553

For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
https://www.intel.com/support

SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS
~~~~~~~~~~~~~~~~~~~~
NOTES:

- If your 82599-based Intel(R) Network Adapter came with Intel optics or is an
  Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel optics
  and/or the direct attach cables listed below.
- When 82599-based SFP+ devices are connected back to back, they should be set
  to the same Speed setting via ethtool. Results may vary if you mix speed
  settings.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| SR Modules                                                               |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | FTLX8571D3BCV-IT |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | AFBR-703SDZ-IN2  |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | AFBR-703SDDZ-IN1 |
+---------------+---------------------------------------+------------------+
| LR Modules                                                               |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | FTLX1471D3BCV-IT |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | AFCT-701SDZ-IN2  |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | AFCT-701SDDZ-IN1 |
+---------------+---------------------------------------+------------------+
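To check which module a port has actually accepted, newer ethtool releases can
dump the module's EEPROM identification data. A minimal sketch, assuming a
hypothetical interface name eth4::

  # Display the plugged SFP+ module's vendor, part number, and cable type
  ethtool -m eth4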
The following is a list of 3rd party SFP+ modules that have received some
testing. Not all modules are applicable to all devices.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| Finisar       | SFP+ SR bailed, 10g single rate       | FTLX8571D3BCL    |
+---------------+---------------------------------------+------------------+
| Avago         | SFP+ SR bailed, 10g single rate       | AFBR-700SDZ      |
+---------------+---------------------------------------+------------------+
| Finisar       | SFP+ LR bailed, 10g single rate       | FTLX1471D3BCL    |
+---------------+---------------------------------------+------------------+
| Finisar       | DUAL RATE 1G/10G SFP+ SR (No Bail)    | FTLX8571D3QCV-IT |
+---------------+---------------------------------------+------------------+
| Avago         | DUAL RATE 1G/10G SFP+ SR (No Bail)    | AFBR-703SDZ-IN1  |
+---------------+---------------------------------------+------------------+
| Finisar       | DUAL RATE 1G/10G SFP+ LR (No Bail)    | FTLX1471D3QCV-IT |
+---------------+---------------------------------------+------------------+
| Avago         | DUAL RATE 1G/10G SFP+ LR (No Bail)    | AFCT-701SDZ-IN1  |
+---------------+---------------------------------------+------------------+
| Finisar       | 1000BASE-T SFP                        | FCLF8522P2BTL    |
+---------------+---------------------------------------+------------------+
| Avago         | 1000BASE-T                            | ABCU-5710RZ      |
+---------------+---------------------------------------+------------------+
| HP            | 1000BASE-SX SFP                       | 453153-001       |
+---------------+---------------------------------------+------------------+

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when ifconfig ethX down
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"ifconfig ethX down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig ethX up" turns on the laser.
Alternatively, you can use "ip link set [down/up] dev ethX" to turn the
laser off and on.


82599-based QSFP+ Adapters
~~~~~~~~~~~~~~~~~~~~~~~~~~
NOTES:

- If your 82599-based Intel(R) Network Adapter came with Intel optics, it only
  supports Intel optics.
- 82599-based QSFP+ adapters only support 4x10 Gbps connections. 1x40 Gbps
  connections are not supported. QSFP+ link partners must be configured for
  4x10 Gbps.
- 82599-based QSFP+ adapters do not support automatic link speed detection.
  The link speed must be configured to either 10 Gbps or 1 Gbps to match the
  link partner's speed capabilities. Incorrect speed configurations will
  result in failure to link (see the example after the table below).
- Intel(R) Ethernet Converged Network Adapter X520-Q1 only supports the optics
  and direct attach cables listed below.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| Intel         | DUAL RATE 1G/10G QSFP+ SRL (bailed)   | E10GQSFPSR       |
+---------------+---------------------------------------+------------------+

82599-based QSFP+ adapters support all passive and active limiting QSFP+
direct attach cables that comply with SFF-8436 v4.1 specifications.
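Since QSFP+ link speed is not auto-detected on these adapters, the speed is
typically forced with ethtool. A minimal sketch, assuming a hypothetical
interface name eth4::

  # Force 10 Gbps with auto-negotiation off to match the link partner
  ethtool -s eth4 speed 10000 autoneg off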
82598-BASED ADAPTERS
~~~~~~~~~~~~~~~~~~~~
NOTES:

- Intel(R) Ethernet Network Adapters that support removable optical modules
  only support their original module type (for example, the Intel(R) 10
  Gigabit SR Dual Port Express Module only supports SR optical modules). If
  you plug in a different type of module, the driver will not load.
- Hot swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOM) connections may support DA, SR, or LR modules.
  Other module types are not supported. Please see your system documentation
  for details.

The following is a list of SFP+ modules and direct attach cables that have
received some testing. Not all modules are applicable to all devices.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| Finisar       | SFP+ SR bailed, 10g single rate       | FTLX8571D3BCL    |
+---------------+---------------------------------------+------------------+
| Avago         | SFP+ SR bailed, 10g single rate       | AFBR-700SDZ      |
+---------------+---------------------------------------+------------------+
| Finisar       | SFP+ LR bailed, 10g single rate       | FTLX1471D3BCL    |
+---------------+---------------------------------------+------------------+

82598-based adapters support all passive direct attach cables that comply with
SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables
are not supported.

Third party optic modules and cables referred to above are listed only for the
purpose of highlighting third party specifications and potential
compatibility, and are not recommendations or endorsements or sponsorship of
any third party's product by Intel. Intel is not endorsing or promoting
products made by any third party and the third party reference is provided
only to share information regarding certain optic modules and cables with the
above specifications. There may be other manufacturers or suppliers, producing
or supplying optic modules and cables with similar or matching descriptions.
Customers must use their own discretion and diligence to purchase optic
modules and cables from any third party of their choice. Customers are solely
responsible for assessing the suitability of the product and/or devices and
for the selection of the vendor for purchasing any product. THE OPTIC MODULES
AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL
ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED
WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR
SELECTION OF VENDOR BY CUSTOMERS.

Command Line Parameters
=======================

max_vfs
-------
:Valid Range: 1-63

This parameter adds support for SR-IOV. It causes the driver to spawn up to
max_vfs virtual functions. If the value is greater than 0, it will also force
the VMDq parameter to be 1 or more.

NOTE: This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x
and above, use sysfs to enable VFs. Also, for Red Hat distributions, this
parameter is only used on version 6.6 and older. For version 6.7 and newer,
use sysfs. For example::

  # echo $num_vf_enabled > /sys/class/net/$dev/device/sriov_numvfs  // enable VFs
  # echo 0 > /sys/class/net/$dev/device/sriov_numvfs                // disable VFs
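The number of VFs the device can expose may be read back from the same sysfs
directory before enabling them. A minimal sketch, assuming a hypothetical
interface name eth4::

  # Query the maximum number of VFs the PF supports
  cat /sys/class/net/eth4/device/sriov_totalvfs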
The parameters for the driver are referenced by position. Thus, if you have a
dual port adapter, or more than one adapter in your system, and want N virtual
functions per port, you must specify a number for each port, with each
parameter separated by a comma. For example::

  modprobe ixgbe max_vfs=4

This will spawn 4 VFs on the first port.

::

  modprobe ixgbe max_vfs=2,4

This will spawn 2 VFs on the first port and 4 VFs on the second port.

NOTE: Use caution when loading the driver with these parameters. Depending on
your system configuration, number of slots, etc., it is not always possible to
predict which command line position corresponds to which port.

NOTE: Neither the device nor the driver control how VFs are mapped into config
space. Bus layout will vary by operating system. On operating systems that
support it, you can check sysfs to find the mapping.

NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering
and VLAN tag stripping/insertion will remain enabled. Remove the old VLAN
filter before adding the new VLAN filter. For example::

  ip link set eth0 vf 0 vlan 100  // set VLAN 100 for VF 0
  ip link set eth0 vf 0 vlan 0    // delete VLAN 100
  ip link set eth0 vf 0 vlan 200  // set a new VLAN 200 for VF 0

With kernel 3.6, the driver supports the simultaneous usage of max_vfs and DCB
features, subject to the constraints described below. Prior to kernel 3.6, the
driver did not support the simultaneous operation of max_vfs greater than 0
and the DCB features (multiple traffic classes utilizing Priority Flow Control
and Extended Transmission Selection).

When DCB is enabled, network traffic is transmitted and received through
multiple traffic classes (packet buffers in the NIC). The traffic is
associated with a specific class based on priority, which has a value of 0
through 7 used in the VLAN tag. When SR-IOV is not enabled, each traffic class
is associated with a set of receive/transmit descriptor queue pairs. The
number of queue pairs for a given traffic class depends on the hardware
configuration. When SR-IOV is enabled, the descriptor queue pairs are grouped
into pools. The Physical Function (PF) and each Virtual Function (VF) are
allocated a pool of receive/transmit descriptor queue pairs. When multiple
traffic classes are configured (for example, DCB is enabled), each pool
contains a queue pair from each traffic class. When a single traffic class is
configured in the hardware, the pools contain multiple queue pairs from the
single traffic class.

The number of VFs that can be allocated depends on the number of traffic
classes that can be enabled. The configurable number of traffic classes for
each enabled VF is as follows:

- 0 to 15 VFs = up to 8 traffic classes, depending on device support
- 16 to 31 VFs = up to 4 traffic classes
- 32 to 63 VFs = 1 traffic class

When VFs are configured, the PF is allocated one pool as well. The PF supports
the DCB features with the constraint that each traffic class will only use a
single queue pair. When zero VFs are configured, the PF can support multiple
queue pairs per traffic class.
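Once VFs have been created, the per-VF MAC address, VLAN, and spoof checking
state can be reviewed from the PF side. A minimal sketch, assuming a
hypothetical interface name eth4::

  # List the PF along with each VF and its current settings
  ip link show dev eth4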
allow_unsupported_sfp
---------------------
:Valid Range: 0,1
:Default Value: 0 (disabled)

This parameter allows unsupported and untested SFP+ modules on 82599-based
adapters, as long as the type of module is known to the driver.

debug
-----
:Valid Range: 0-16 (0=none,...,16=all)
:Default Value: 0

This parameter adjusts the level of debug messages displayed in the system
logs.


Additional Features and Configurations
======================================

Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When transmit is enabled,
pause frames are generated when the receive packet buffer crosses a predefined
threshold. When receive is enabled, the transmit unit will halt for the time
delay specified when a pause frame is received.

NOTE: You must have a flow control capable link partner.

Flow Control is enabled by default.

Use ethtool to change the flow control settings. To enable or disable Rx or
Tx Flow Control::

  ethtool -A eth? rx <on|off> tx <on|off>

NOTE: This command only enables or disables Flow Control if auto-negotiation
is disabled. If auto-negotiation is enabled, this command changes the
parameters used for auto-negotiation with the link partner.

To enable or disable auto-negotiation::

  ethtool -s eth? autoneg <on|off>

NOTE: Flow Control auto-negotiation is part of link auto-negotiation.
Depending on your device, you may not be able to change the auto-negotiation
setting.

NOTE: For 82598 backplane cards entering 1 gigabit mode, the flow control
default behavior is changed to off. Flow control in 1 gigabit mode on these
devices can lead to transmit hangs.
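The pause parameters currently in effect can be checked before changing them.
A minimal sketch, assuming a hypothetical interface name eth4::

  # Show the auto-negotiation, Rx, and Tx pause settings
  ethtool -a eth4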
Intel(R) Ethernet Flow Director
-------------------------------
The Intel Ethernet Flow Director performs the following tasks:

- Directs receive packets according to their flows to different queues.
- Enables tight control on routing a flow in the platform.
- Matches flows and CPU cores for flow affinity.
- Supports multiple parameters for flexible flow classification and load
  balancing (in SFP mode only).

NOTE: Intel Ethernet Flow Director masking works in the opposite manner from
subnet masking. In the following command::

  # ethtool -N eth11 flow-type ip4 src-ip 172.4.1.2 m 255.0.0.0 dst-ip \
    172.21.1.1 m 255.128.0.0 action 31

The src-ip value that is written to the filter will be 0.4.1.2, not 172.0.0.0
as might be expected. Similarly, the dst-ip value written to the filter will
be 0.21.1.1, not 172.0.0.0.

To enable or disable the Intel Ethernet Flow Director::

  # ethtool -K ethX ntuple <on|off>

When disabling ntuple filters, all the user programmed filters are flushed
from the driver cache and hardware. All needed filters must be re-added when
ntuple is re-enabled.

To add a filter that directs packets to queue 2, use the -U or -N switch::

  # ethtool -N ethX flow-type tcp4 src-ip 192.168.10.1 dst-ip \
    192.168.10.2 src-port 2000 dst-port 2001 action 2 [loc 1]

To see the list of filters currently present::

  # ethtool <-u|-n> ethX

Sideband Perfect Filters
------------------------
Sideband Perfect Filters are used to direct traffic that matches specified
characteristics. They are enabled through ethtool's ntuple interface. To add a
new filter use the following command::

  ethtool -U <device> flow-type <type> src-ip <ip> dst-ip <ip> \
    src-port <port> dst-port <port> action <queue>

Where:

- <device> - the ethernet device to program
- <type> - can be ip4, tcp4, udp4, or sctp4
- <ip> - the IP address to match on
- <port> - the port number to match on
- <queue> - the queue to direct traffic towards (-1 discards the matched
  traffic)

Use the following command to delete a filter::

  ethtool -U <device> delete <N>

Where <N> is the filter ID displayed when printing all the active filters, and
may also have been specified using "loc <N>" when adding the filter.

The following example matches TCP traffic sent from 192.168.0.1, port 5300,
directed to 192.168.0.5, port 80, and sends it to queue 7::

  ethtool -U enp130s0 flow-type tcp4 src-ip 192.168.0.1 dst-ip 192.168.0.5 \
    src-port 5300 dst-port 80 action 7

For each flow-type, the programmed filters must all have the same matching
input set. For example, issuing the following two commands is acceptable::

  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.5 src-port 55 action 10

Issuing the next two commands, however, is not acceptable, since the first
specifies src-ip and the second specifies dst-ip::

  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
  ethtool -U enp130s0 flow-type ip4 dst-ip 192.168.0.5 src-port 55 action 10

The second command will fail with an error. You may program multiple filters
with the same fields, using different values, but, on one device, you may not
program two tcp4 filters with different matching fields.

Matching on a sub-portion of a field is not supported by the ixgbe driver;
thus, partial mask fields are not supported.

To create filters that direct traffic to a specific Virtual Function, use the
"user-def" parameter. Specify the user-def as a 64-bit value, where the lower
32 bits represent the queue number and the next 8 bits represent which VF.
Note that 0 is the PF, so the VF identifier is offset by 1. For example::

  ... user-def 0x800000002 ...

specifies to direct traffic to Virtual Function 7 (8 minus 1) into queue 2 of
that VF.

Note that these filters will not break internal routing rules, and will not
route traffic that otherwise would not have been sent to the specified Virtual
Function.

Jumbo Frames
------------
Jumbo Frames support is enabled by changing the Maximum Transmission Unit
(MTU) to a value larger than the default value of 1500.

Use the ifconfig command to increase the MTU size. For example, enter the
following where <x> is the interface number::

  ifconfig eth<x> mtu 9000 up

Alternatively, you can use the ip command as follows::

  ip link set mtu 9000 dev eth<x>
  ip link set up dev eth<x>

This setting is not saved across reboots. The setting change can be made
permanent by adding 'MTU=9000' to the file::

  /etc/sysconfig/network-scripts/ifcfg-eth<x>  // for RHEL
  /etc/sysconfig/network/<config_file>         // for SLES

NOTE: The maximum MTU setting for Jumbo Frames is 9710. This value coincides
with the maximum Jumbo Frames size of 9728 bytes.

NOTE: This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.

NOTE: For 82599-based network connections, if you are enabling jumbo frames in
a virtual function (VF), jumbo frames must first be enabled in the physical
function (PF). The VF MTU setting cannot be larger than the PF MTU.
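On recent iproute2 and kernel versions, the MTU range the driver accepts can
be verified with the detailed link view. A minimal sketch, assuming a
hypothetical interface name eth4::

  # The output includes the device's minmtu and maxmtu values
  ip -d link show dev eth4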
NBASE-T Support
---------------
The ixgbe driver supports NBASE-T on some devices. However, the advertisement
of NBASE-T speeds is suppressed by default, to accommodate broken network
switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
command to enable advertising NBASE-T speeds on devices which support it::

  ethtool -s eth? advertise 0x1800000001028

On Linux systems with INTERFACES(5), this can be specified as a pre-up command
in /etc/network/interfaces so that the interface is always brought up with
NBASE-T support, e.g.::

  iface eth? inet dhcp
       pre-up ethtool -s eth? advertise 0x1800000001028 || true

Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
shown that by coalescing Rx traffic into larger chunks of data, CPU
utilization can be significantly reduced when under large Rx load. GRO is an
evolution of the previously-used LRO interface. GRO is able to coalesce
other protocols besides TCP. It's also safe to use with configurations that
are problematic for LRO, namely bridging and iSCSI.
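GRO is typically enabled by default and can be inspected or toggled per
interface with ethtool. A minimal sketch, assuming a hypothetical interface
name eth4::

  # Check whether GRO is currently enabled
  ethtool -k eth4 | grep generic-receive-offload
  # Toggle it off and back on
  ethtool -K eth4 gro off
  ethtool -K eth4 gro on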
Data Center Bridging (DCB)
--------------------------
NOTE:
The kernel assumes that TC0 is available, and will disable Priority Flow
Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0
is enabled when setting up DCB on your switch.

DCB is a Quality of Service (QoS) implementation in hardware. It uses the VLAN
priority tag (802.1p) to filter traffic, which means traffic can be filtered
into 8 different priorities. It also enables priority flow control (802.1Qbb),
which can limit or eliminate the number of dropped packets during network
stress. Bandwidth can be allocated to each of these priorities, which is
enforced at the hardware level (802.1Qaz).

Adapter firmware implements LLDP and DCBX protocol agents as per 802.1AB and
802.1Qaz respectively. The firmware-based DCBX agent runs in willing mode only
and can accept settings from a DCBX capable peer. Software configuration of
DCBX parameters via dcbtool/lldptool is not supported.

The ixgbe driver implements the DCB netlink interface layer to allow user
space to communicate with the driver and query DCB configuration for the port.

ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. Download it at:
https://www.kernel.org/pub/software/network/ethtool/

FCoE
----
The ixgbe driver supports Fibre Channel over Ethernet (FCoE) and Data Center
Bridging (DCB). This code has no default effect on the regular driver
operation. Configuring DCB and FCoE is outside the scope of this document.
Refer to http://www.open-fcoe.org/ for FCoE project information and contact
ixgbe-eedc@lists.sourceforge.net for DCB information.

MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted.

An interrupt is sent to the PF driver notifying it of the spoof attempt. When
a spoofed packet is detected, the PF driver will send the following message to
the system log (displayed by the "dmesg" command)::

  ixgbe ethX: ixgbe_spoof_check: n spoofed packets detected

where "X" is the PF interface number and "n" is the number of spoofed packets.

NOTE: This feature can be disabled for a specific Virtual Function (VF)::

  ip link set <pf dev> vf <vf id> spoofchk {off|on}

IPsec Offload
-------------
The ixgbe driver supports IPsec Hardware Offload. When creating Security
Associations with "ip xfrm ...", the 'offload' tag option can be used to
register the IPsec SA with the driver in order to get higher throughput in
the secure communications.

The offload is also supported for ixgbe's VFs, but the VF must be set as
'trusted' and the support must be enabled with::

  ethtool --set-priv-flags eth<x> vf-ipsec on
  ip link set eth<x> vf <y> trust on
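As an illustration of the 'offload' tag, the following sketch creates an
outbound ESP Security Association bound to a hypothetical interface eth4; the
addresses, SPI, and key are placeholders, and the exact algorithm support
depends on the device::

  # Offload an outbound AES-GCM ESP SA to the NIC (all values illustrative)
  ip xfrm state add src 192.168.10.1 dst 192.168.10.2 \
      proto esp spi 0x1000 reqid 0x1000 mode transport \
      aead 'rfc4106(gcm(aes))' 0x1111222233334444555566667777888899990000 128 \
      offload dev eth4 dir out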
Known Issues/Troubleshooting
============================

Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS
---------------------------------------------------------------------
Linux KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
This includes traditional PCIe devices, as well as SR-IOV-capable devices
supported by this driver.


Support
=======
For general information, go to the Intel support website at:
https://www.intel.com/support/

If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to intel-wired-lan@lists.osuosl.org.