Linux Base Driver for 10 Gigabit PCI Express Intel(R) Network Connection
========================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2010 Intel Corporation.
Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support
Identifying Your Adapter
========================

The driver in this release is compatible with 82598 and 82599-based Intel
Network Connections.

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

    http://support.intel.com/support/network/sb/CS-012904.htm
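As a quick local check, lspci will also list the Intel Ethernet controllers
present in the system (a minimal sketch; exact output varies by system and
pciutils version):

    lspci | grep -i ethernet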
SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS

NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
optics and/or the direct attach cables listed below.

When 82599-based SFP+ devices are connected back to back, they should be set
to the same speed setting via ethtool. Results may vary if you mix speed
settings; one way to keep both ends consistent is shown below.
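For example, to pin one end of a back-to-back link at 10Gb/s (eth0 is a
placeholder interface name; run the equivalent command on the peer as well):

    ethtool -s eth0 speed 10000 autoneg off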
82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.
Supplier    Type                                    Part Numbers

Intel       DUAL RATE 1G/10G SFP+ SR (bailed)       FTLX8571D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)       AFBR-703SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)       AFBR-703SDZ-IN2

Intel       DUAL RATE 1G/10G SFP+ LR (bailed)       FTLX1471D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)       AFCT-701SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)       AFCT-701SDZ-IN2
The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                    Part Numbers

Finisar     SFP+ SR bailed, 10g single rate         FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate         AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate         FTLX1471D3BCL

Finisar     DUAL RATE 1G/10G SFP+ SR (No Bail)      FTLX8571D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ SR (No Bail)      AFBR-703SDZ-IN1
Finisar     DUAL RATE 1G/10G SFP+ LR (No Bail)      FTLX1471D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ LR (No Bail)      AFCT-701SDZ-IN1

Finisar     1000BASE-T SFP                          FCLF8522P2BTL
Avago       1000BASE-T SFP                          ABCU-5710RZ
82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Laser turns off for SFP+ when ifconfig down
-------------------------------------------
"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig up" turns on the laser.
82598-BASED ADAPTERS

NOTES for 82598-Based Adapters:
- Intel(R) Network Adapters that support removable optical modules only
  support their original module type (i.e., the Intel(R) 10 Gigabit SR Dual
  Port Express Module only supports SR optical modules). If you plug in a
  different type of module, the driver will not load.
- Hot swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOM) devices may support DA, SR, or LR modules. Other
  module types are not supported. Please see your system documentation for
  details.
The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                    Part Numbers

Finisar     SFP+ SR bailed, 10g single rate         FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate         AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate         FTLX1471D3BCL
82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.
Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When TX is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold. When RX is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow control is enabled by default. To disable it when linked to a flow
control capable partner, use ethtool:

    ethtool -A eth? autoneg off rx off tx off

NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
behavior is changed to off. Flow control in 1 gig mode on these devices can
lead to Tx hangs.
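To inspect the current pause settings before (or after) changing them, the
query form of the same ethtool option can be used (eth0 is a placeholder
interface name):

    ethtool -a eth0
    ethtool -A eth0 autoneg on rx on tx on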
Additional Configurations
=========================

Jumbo Frames
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
The maximum value for the MTU is 16110. Use the ifconfig command to
increase the MTU size. For example:
    ifconfig ethx mtu 9000 up

The maximum MTU setting for Jumbo Frames is 16110. This value coincides
with the maximum Jumbo Frames size of 16128.
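On systems that use iproute2 rather than the older net-tools, the equivalent
command is (eth0 is a placeholder interface name):

    ip link set dev eth0 mtu 9000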
Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
been shown to significantly reduce CPU utilization under heavy Rx load by
coalescing Rx traffic into larger chunks of data. GRO is an evolution of the
previously-used LRO interface. GRO is able to coalesce other protocols
besides TCP. It is also safe to use with configurations that are problematic
for LRO, namely bridging and iSCSI.
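GRO is normally enabled by default. To confirm or toggle it on an interface
(eth0 is a placeholder interface name):

    ethtool -k eth0 | grep generic-receive-offload
    ethtool -K eth0 gro on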
Data Center Bridging, aka DCB
-----------------------------
DCB is a configurable Quality of Service implementation in hardware.
It uses the VLAN priority tag (802.1p) to filter traffic. That means
that there are 8 different priorities that traffic can be filtered into.
It also enables priority flow control, which can limit or eliminate the
number of dropped packets during network stress. Bandwidth can be
allocated to each of these priorities, which is enforced at the hardware
level (82599 only).
To enable DCB support in ixgbe, you must enable the DCB netlink layer to
allow the userspace tools (see below) to communicate with the driver.
This can be found in the kernel configuration here:

        -> Networking support
          -> Networking options
            -> Data Center Bridging support

Once this is selected, DCB support must be selected for ixgbe. This can be
found in the kernel configuration here:

        -> Network device support (NETDEVICES [=y])
          -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
            -> Intel(R) 10GbE PCI Express adapters support
              -> Data Center Bridging (DCB) Support
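For non-interactive builds, these menu selections correspond to the following
kernel configuration symbols (a sketch of the relevant .config fragment):

    CONFIG_DCB=y
    CONFIG_IXGBE_DCB=y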
After these options are selected, you must rebuild your kernel and your
modules.

In order to use DCB, userspace tools must be downloaded and installed.
The dcbd tools can be found on the Intel Wired Ethernet project pages at
SourceForge (http://e1000.sourceforge.net).
ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest
ethtool version is required for this functionality.

The latest release of ethtool can be found at:
http://ftp.kernel.org/pub/software/network/ethtool/
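A few representative invocations (eth0 is a placeholder interface name):

    ethtool -i eth0    # driver name, version and firmware information
    ethtool -S eth0    # dump adapter and driver statistics
    ethtool -d eth0    # raw register dump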
FCoE
----
This release of the ixgbe driver contains new code to enable users to use
Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
functionality that is supported by the 82598-based hardware. This code has
no default effect on the regular driver operation, and configuring DCB and
FCoE is outside the scope of this driver README. Refer to
http://www.open-fcoe.org/ for FCoE project information and contact
e1000-eedc@lists.sourceforge.net for DCB information.
MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted. An interrupt is sent to the PF driver
notifying it of the spoof attempt.

When a spoofed packet is detected, the PF driver will send the following
message to the system log (displayed by the "dmesg" command):

    Spoof event(s) detected on VF (n)

where n is the VF that attempted the spoofing.
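To check for these events after the fact, the log can be filtered (a trivial
sketch; the message text is matched case-insensitively):

    dmesg | grep -i 'spoof event'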
Performance Tuning
==================

An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
Known Issues
============

Enabling SR-IOV in a 32-bit Microsoft* Windows* Server 2008 Guest OS using
Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE controller under KVM
-----------------------------------------------------------------------------
The KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
This includes traditional PCIe devices, as well as SR-IOV-capable devices
using Intel 82576-based and 82599-based controllers.

While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
known issue with Microsoft Windows Server 2008 VMs that results in a "yellow
bang" error. The problem is not with the Intel driver or the SR-IOV logic of
the VMM, but within KVM itself: KVM emulates an older CPU model for the
guests, and that older CPU model does not support MSI-X interrupts, which
are a requirement for Intel SR-IOV.
If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
with KVM and a Microsoft Windows Server 2008 guest, try the following
workaround: tell KVM to emulate a different model of CPU when using qemu to
create the KVM guest:

    "-cpu qemu64,model=13"
Support
=======

For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by SourceForge at:

    http://e1000.sourceforge.net

If an issue is identified with the released source code on a supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net.