/* EtherLinkXL.c: A 3Com EtherLink PCI III/XL ethernet driver for linux. */
/*
	Written 1996-1999 by Donald Becker.

	This software may be used and distributed according to the terms
	of the GNU Public License, incorporated herein by reference.

	This driver is for the 3Com "Vortex" and "Boomerang" series ethercards.
	Members of the series include Fast EtherLink 3c590/3c592/3c595/3c597
	and the EtherLink XL 3c900 and 3c905 cards.

	The author may be reached as becker@scyld.com, or C/O
	Center of Excellence in Space Data and Information Sciences
	   Code 930.5, Goddard Space Flight Center, Greenbelt MD 20771

	Linux Kernel Additions:

	0.99H+lk0.9 - David S. Miller - softnet, PCI DMA updates
	0.99H+lk1.0 - Jeff Garzik <jgarzik@mandrakesoft.com>
		Remove compatibility defines for kernel versions < 2.2.x.
		Update for new 2.3.x module interface
	LK1.1.2 (March 19, 2000)
	* New PCI interface (jgarzik)

	LK1.1.3 25 April 2000, Andrew Morton <andrewm@uow.edu.au>
	- Merged with 3c575_cb.c
	- Don't set RxComplete in boomerang interrupt enable reg
	- spinlock in vortex_timer to protect mdio functions
	- disable local interrupts around call to vortex_interrupt in
	  vortex_tx_timeout() (So vortex_interrupt can use spin_lock())
	- Select window 3 in vortex_timer()'s write to Wn3_MAC_Ctrl
	- In vortex_start_xmit(), move the lock to _after_ we've altered
	  vp->cur_tx and vp->tx_full.  This defeats the race between
	  vortex_start_xmit() and vortex_interrupt which was identified
	  by Bogdan Costescu.
	- Merged back support for six new cards from various sources
	- Set vortex_have_pci if pci_module_init returns zero (fixes cardbus
	  insertion oops)
	- Tell it that 3c905C has NWAY for 100bT autoneg
	- Fix handling of SetStatusEnd in 'Too much work..' code, as
	  per 2.3.99's 3c575_cb (Dave Hinds).
	- Split ISR into two for vortex & boomerang
	- Fix MOD_INC/DEC races
	- Handle resource allocation failures.
	- Fix 3CCFE575CT LED polarity
	- Make tx_interrupt_mitigation the default

	LK1.1.4 25 April 2000, Andrew Morton <andrewm@uow.edu.au>
	- Add extra TxReset to vortex_up() to fix 575_cb hotplug initialisation probs.
	- Put vortex_info_tbl into __devinitdata
	- In the vortex_error StatsFull HACK, disable stats in vp->intr_enable as well
	  as in the hardware.
	- Increased the loop counter in wait_for_completion from 2,000 to 4,000.

	LK1.1.5 28 April 2000, andrewm
	- Added powerpc defines (John Daniel <jdaniel@etresoft.com> said these work...)
	- Some extra diagnostics
	- In vortex_error(), reset the Tx on maxCollisions.  Otherwise most
	  chips usually get a Tx timeout.
	- Added extra_reset module parm
	- Replaced some inline timer manip with mod_timer
	  (François Romieu <Francois.Romieu@nic.fr>)
	- In vortex_up(), don't make Wn3_config initialisation dependent upon has_nway
	  (this came across from 3c575_cb).

	LK1.1.6 06 Jun 2000, andrewm
	- Backed out the PPC defines.
	- Use del_timer_sync(), mod_timer().
	- Fix wrapped ulong comparison in boomerang_rx()
	- Add IS_TORNADO, use it to suppress 3c905C checksum error msg
	  (Donald Becker, I Lee Hetherington <ilh@sls.lcs.mit.edu>)
	- Replace union wn3_config with BFINS/BFEXT manipulation for
	  sparc64 (Pete Zaitcev, Peter Jones)
	- In vortex_error, do_tx_reset and vortex_tx_timeout(Vortex):
	  do a netif_wake_queue() to better recover from errors. (Anders Pedersen,
	  Donald Becker)
	- Print a warning on out-of-memory (rate limited to 1 per 10 secs)
	- Added two more Cardbus 575 NICs: 5b57 and 6564 (Paul Wagland)

	LK1.1.7 2 Jul 2000 andrewm
	- Better handling of shared IRQs
	- Reset the transmitter on a Tx reclaim error
	- Fixed crash under OOM during vortex_open() (Mark Hemment)
	- Fix Rx cessation problem during OOM (help from Mark Hemment)
	- The spinlocks around the mdio access were blocking interrupts for 300uS.
	  Fix all this to use spin_lock_bh() within mdio_read/write
	- Only write to TxFreeThreshold if it's a boomerang - other NICs don't
	  have one.
	- Added 802.3x MAC-layer flow control support

	LK1.1.8 13 Aug 2000 andrewm
	- Ignore request_region() return value - already reserved if Cardbus.
	- Merged some additional Cardbus flags from Don's 0.99Qk
	- Some fixes for 3c556 (Fred Maciel)
	- Fix for EISA initialisation (Jan Rękorajski)
	- Renamed MII_XCVR_PWR and EEPROM_230 to align with 3c575_cb and D. Becker's drivers
	- Fixed MII_XCVR_PWR for 3CCFE575CT
	- Added INVERT_LED_PWR, used it.
	- Backed out the extra_reset stuff

	LK1.1.9 12 Sep 2000 andrewm
	- Backed out the tx_reset_resume flags.  It was a no-op.
	- In vortex_error, don't reset the Tx on txReclaim errors
	- In vortex_error, don't reset the Tx on maxCollisions errors.
	  Hence backed out all the DownListPtr logic here.
	- In vortex_error, give Tornado cards a partial TxReset on
	  maxCollisions (David Hinds).  Defined MAX_COLLISION_RESET for this.
	- Redid some driver flags and device names based on pcmcia_cs-3.1.20.
	- Fixed a bug where, if vp->tx_full is set when the interface
	  is downed, it remains set when the interface is upped.  Bad
	  things happen.

	LK1.1.10 17 Sep 2000 andrewm
	- Added EEPROM_8BIT for 3c555 (Fred Maciel)
	- Added experimental support for the 3c556B Laptop Hurricane (Louis Gerbarg)
	- Add HAS_NWAY to "3c900 Cyclone 10Mbps TPO"

	LK1.1.11 13 Nov 2000 andrewm
	- Dump MOD_INC/DEC_USE_COUNT, use SET_MODULE_OWNER

	- See http://www.uow.edu.au/~andrewm/linux/#3c59x-2.3 for more details.
	- Also see Documentation/networking/vortex.txt

 * FIXME: This driver _could_ support MTU changing, but doesn't.  See Don's hamachi.c implementation
 * as well as other drivers
 *
 * NOTE: If you make 'vortex_debug' a constant (#define vortex_debug 0) the driver shrinks by 2k
 * due to dead code elimination.  There will be some performance benefits from this due to
 * elimination of all the tests and reduced cache footprint.
 */
/* A few values that may be tweaked. */
/* Keep the ring sizes a power of two for efficiency. */
#define TX_RING_SIZE	16
#define RX_RING_SIZE	32
#define PKT_BUF_SZ		1536	/* Size of each temporary Rx buffer.*/

/* "Knobs" that adjust features and parameters. */
/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
   Setting to > 1512 effectively disables this feature. */
static const int rx_copybreak = 200;
/* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
static const int mtu = 1500;
/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
static int max_interrupt_work = 32;
/* Tx timeout interval (millisecs) */
static int watchdog = 400;

/* Allow aggregation of Tx interrupts.  Saves CPU load at the cost
 * of possible Tx stalls if the system is blocking interrupts
 * somewhere else.  Undefine this to disable.
 * AKPM 26 April 2000: enabling this still gets vestigial Tx timeouts
 * in a heavily loaded (collision-prone) 10BaseT LAN.  Should be OK with
 * switched Ethernet.
 * AKPM 24May00: vestigial timeouts have been removed by later fixes.
 */
#define tx_interrupt_mitigation 1

/* Put out somewhat more debugging messages. (0: no msg, 1 minimal .. 6). */
#define vortex_debug debug
#ifdef VORTEX_DEBUG
static int vortex_debug = VORTEX_DEBUG;
#else
static int vortex_debug = 1;
#endif

/* Some values here only for performance evaluation and path-coverage
   debugging. */
static int rx_nocopy = 0, rx_copy = 0, queued_packet = 0, rx_csumhits;

#ifndef __OPTIMIZE__
#error You must compile this file with the correct options!
#error See the last lines of the source file.
#error You must compile this driver with "-O".
#endif
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/timer.h>
#include <linux/errno.h>
#include <linux/in.h>
#include <linux/ioport.h>
#include <linux/malloc.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <asm/irq.h>			/* For NR_IRQS only. */
#include <asm/bitops.h>
#include <asm/io.h>

/* Kernel compatibility defines, some common to David Hinds' PCMCIA package.
   This is only in the support-all-kernels source code. */

#define RUN_AT(x) (jiffies + (x))

#include <linux/delay.h>

static char version[] __devinitdata =
"3c59x.c:LK1.1.11 13 Nov 2000  Donald Becker and others. http://www.scyld.com/network/vortex.html " "$Revision: 1.102.2.46 $\n";

MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
MODULE_DESCRIPTION("3Com 3c59x/3c90x/3c575 series Vortex/Boomerang/Cyclone driver");
MODULE_PARM(debug, "i");
MODULE_PARM(options, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(full_duplex, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(flow_ctrl, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(rx_copybreak, "i");
MODULE_PARM(max_interrupt_work, "i");
MODULE_PARM(compaq_ioaddr, "i");
MODULE_PARM(compaq_irq, "i");
MODULE_PARM(compaq_device_id, "i");
MODULE_PARM(watchdog, "i");

/* Operational parameters that are usually not changed. */

/* The Vortex size is twice that of the original EtherLinkIII series: the
   runtime register window, window 1, is now always mapped in.
   The Boomerang size is twice as large as the Vortex -- it has additional
   bus master control registers. */
#define VORTEX_TOTAL_SIZE 0x20
#define BOOMERANG_TOTAL_SIZE 0x40

/* Set iff a MII transceiver on any interface requires mdio preamble.
   This is only set with the original DP83840 on older 3c905 boards, so the extra
   code size of a per-interface flag is not worthwhile. */
static char mii_preamble_required;

#define PFX "3c59x: "
/*
				Theory of Operation

I. Board Compatibility

This device driver is designed for the 3Com FastEtherLink and FastEtherLink
XL, 3Com's PCI to 10/100baseT adapters.  It also works with the 10Mbps
versions of the FastEtherLink cards.  The supported product IDs are
  3c590, 3c592, 3c595, 3c597, 3c900, 3c905

The related ISA 3c515 is supported with a separate driver, 3c515.c, included
with the kernel source or available from
    cesdis.gsfc.nasa.gov:/pub/linux/drivers/3c515.html

II. Board-specific settings

PCI bus devices are configured by the system at boot time, so no jumpers
need to be set on the board.  The system BIOS should be set to assign the
PCI INTA signal to an otherwise unused system IRQ line.

The EEPROM settings for media type and forced-full-duplex are observed.
The EEPROM media type should be left at the default "autoselect" unless using
10base2 or AUI connections which cannot be reliably detected.

III. Driver operation

The 3c59x series use an interface that's very similar to the previous 3c5x9
series.  The primary interface is two programmed-I/O FIFOs, with an
alternate single-contiguous-region bus-master transfer (see next).

The 3c900 "Boomerang" series uses a full-bus-master interface with separate
lists of transmit and receive descriptors, similar to the AMD LANCE/PCnet,
DEC Tulip and Intel Speedo3.  The first chip version retains a compatible
programmed-I/O interface that has been removed in 'B' and subsequent board
revisions.

One extension that is advertised in a very large font is that the adapters
are capable of being bus masters.  On the Vortex chip this capability was
only for a single contiguous region making it far less useful than the full
bus master capability.  There is a significant performance impact of taking
an extra interrupt or polling for the completion of each transfer, as well
as difficulty sharing the single transfer engine between the transmit and
receive threads.  Using DMA transfers is a win only with large blocks or
with the flawed versions of the Intel Orion motherboard PCI controller.

The Boomerang chip's full-bus-master interface is useful, and has the
currently-unused advantages over other similar chips that queued transmit
packets may be reordered and receive buffer groups are associated with a
single frame.

With full-bus-master support, this driver uses a "RX_COPYBREAK" scheme.
Rather than a fixed intermediate receive buffer, this scheme allocates
full-sized skbuffs as receive buffers.  The value RX_COPYBREAK is used as
the copying breakpoint: it is chosen to trade off the memory wasted by
passing the full-sized skbuff to the queue layer for all frames vs. the
copying cost of copying a frame to a correctly-sized skbuff.

IIIC. Synchronization
The driver runs as two independent, single-threaded flows of control.  One
is the send-packet routine, which enforces single-threaded use by the
dev->tbusy flag.  The other thread is the interrupt handler, which is single
threaded by the hardware and other software.

IV. Notes

Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing development
3c590, 3c595, and 3c900 boards.
The name "Vortex" is the internal 3Com project name for the PCI ASIC, and
the EISA version is called "Demon".  According to Terry these names come
from rides at the local amusement park.

The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes!
This driver only supports ethernet packets because of the skbuff allocation
limit of 4K.
*/
/* This table drives the PCI probe routines.  It's mostly boilerplate in all
   of the drivers, and will likely be provided by some future kernel. */
enum pci_flags_bit {
	PCI_USES_IO=1, PCI_USES_MEM=2, PCI_USES_MASTER=4,
	PCI_ADDR0=0x10<<0, PCI_ADDR1=0x10<<1, PCI_ADDR2=0x10<<2, PCI_ADDR3=0x10<<3,
};

enum {	IS_VORTEX=1, IS_BOOMERANG=2, IS_CYCLONE=4, IS_TORNADO=8,
	EEPROM_8BIT=0x10,	/* AKPM: Uses 0x230 as the base bitmaps for EEPROM reads */
	HAS_PWR_CTRL=0x20, HAS_MII=0x40, HAS_NWAY=0x80, HAS_CB_FNS=0x100,
	INVERT_MII_PWR=0x200, INVERT_LED_PWR=0x400, MAX_COLLISION_RESET=0x800,
	EEPROM_OFFSET=0x1000 };
enum vortex_chips {
	CH_3C590 = 0,
	CH_3C592,
	CH_3C597,
	CH_3C595_1,
	CH_3C595_2,

	CH_3C595_3,
	CH_3C900_1,
	CH_3C900_2,
	CH_3C900_3,
	CH_3C900_4,

	CH_3C900_5,
	CH_3C900B_FL,
	CH_3C905_1,
	CH_3C905_2,
	CH_3C905B_1,

	CH_3C905B_2,
	CH_3C905B_FX,
	CH_3C905C,
	CH_3C980,
	CH_3C9805,

	CH_3CSOHO100_TX,
	CH_3C555,
	CH_3C556,
	CH_3C556B,
	CH_3C575,

	CH_3C575_1,
	CH_3CCFE575,
	CH_3CCFE575CT,
	CH_3CCFE656,
	CH_3CCFEM656,

	CH_3CCFEM656_1,
	CH_3C450,
};

/* note: this array directly indexed by above enums, and MUST
 * be kept in sync with both the enums above, and the PCI device
 * table below
 */
static struct vortex_chip_info {
	const char *name;
	int flags;
	int drv_flags;
	int io_size;
} vortex_info_tbl[] __devinitdata = {
#define EISA_TBL_OFFSET	0		/* Offset of this entry for vortex_eisa_init */
	{"3c590 Vortex 10Mbps",
	 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
	{"3c592 EISA 10mbps Demon/Vortex",		/* AKPM: from Don's 3c59x_cb.c 0.49H */
	 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
	{"3c597 EISA Fast Demon/Vortex",		/* AKPM: from Don's 3c59x_cb.c 0.49H */
	 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
	{"3c595 Vortex 100baseTx",
	 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
	{"3c595 Vortex 100baseT4",
	 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },

	{"3c595 Vortex 100base-MII",
	 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
	{"3c900 Boomerang 10baseT",
	 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG, 64, },
	{"3c900 Boomerang 10Mbps Combo",
	 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG, 64, },
	{"3c900 Cyclone 10Mbps TPO",			/* AKPM: from Don's 0.99M */
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY, 128, },
	{"3c900 Cyclone 10Mbps Combo",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE, 128, },

	{"3c900 Cyclone 10Mbps TPC",			/* AKPM: from Don's 0.99M */
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE, 128, },
	{"3c900B-FL Cyclone 10base-FL",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE, 128, },
	{"3c905 Boomerang 100baseTx",
	 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII, 64, },
	{"3c905 Boomerang 100baseT4",
	 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII, 64, },
	{"3c905B Cyclone 100baseTx",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY, 128, },

	{"3c905B Cyclone 10/100/BNC",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY, 128, },
	{"3c905B-FX Cyclone 100baseFx",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE, 128, },
	{"3c905C Tornado",
	 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY, 128, },
	{"3c980 Cyclone",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE, 128, },
	{"3c980 10/100 Base-TX NIC(Python-T)",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE, 128, },

	{"3cSOHO100-TX Hurricane",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE, 128, },
	{"3c555 Laptop Hurricane",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|EEPROM_8BIT, 128, },
	{"3c556 Laptop Tornado",
	 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_8BIT|HAS_CB_FNS|INVERT_MII_PWR, 128, },
	{"3c556B Laptop Hurricane",
	 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_OFFSET|HAS_CB_FNS|INVERT_MII_PWR, 128, },
	{"3c575 [Megahertz] 10/100 LAN CardBus",
	 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },

	{"3c575 Boomerang CardBus",
	 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
	{"3CCFE575BT Cyclone CardBus",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_LED_PWR, 128, },
	{"3CCFE575CT Tornado CardBus",
	 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|MAX_COLLISION_RESET, 128, },
	{"3CCFE656 Cyclone CardBus",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|INVERT_LED_PWR, 128, },
	{"3CCFEM656B Cyclone+Winmodem CardBus",
	 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|INVERT_LED_PWR, 128, },

	{"3CXFEM656C Tornado+Winmodem CardBus",		/* From pcmcia-cs-3.1.5 */
	 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|MAX_COLLISION_RESET, 128, },
	{"3c450 HomePNA Tornado",			/* AKPM: from Don's 0.99Q */
	 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY, 128, },
	{0,},						/* 0 terminated list. */
};
static struct pci_device_id vortex_pci_tbl[] __devinitdata = {
	{ 0x10B7, 0x5900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C590 },
	{ 0x10B7, 0x5920, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C592 },
	{ 0x10B7, 0x5970, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C597 },
	{ 0x10B7, 0x5950, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_1 },
	{ 0x10B7, 0x5951, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_2 },

	{ 0x10B7, 0x5952, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_3 },
	{ 0x10B7, 0x9000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_1 },
	{ 0x10B7, 0x9001, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_2 },
	{ 0x10B7, 0x9004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_3 },
	{ 0x10B7, 0x9005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_4 },

	{ 0x10B7, 0x9006, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_5 },
	{ 0x10B7, 0x900A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900B_FL },
	{ 0x10B7, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_1 },
	{ 0x10B7, 0x9051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_2 },
	{ 0x10B7, 0x9055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_1 },

	{ 0x10B7, 0x9058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_2 },
	{ 0x10B7, 0x905A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_FX },
	{ 0x10B7, 0x9200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905C },
	{ 0x10B7, 0x9800, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C980 },
	{ 0x10B7, 0x9805, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9805 },

	{ 0x10B7, 0x7646, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CSOHO100_TX },
	{ 0x10B7, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C555 },
	{ 0x10B7, 0x6055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556 },
	{ 0x10B7, 0x6056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556B },
	{ 0x10B7, 0x5b57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575 },

	{ 0x10B7, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575_1 },
	{ 0x10B7, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575 },
	{ 0x10B7, 0x5257, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575CT },
	{ 0x10B7, 0x6560, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE656 },
	{ 0x10B7, 0x6562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656 },

	{ 0x10B7, 0x6564, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656_1 },
	{ 0x10B7, 0x4500, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C450 },
	{0,}						/* 0 terminated list. */
};
MODULE_DEVICE_TABLE(pci, vortex_pci_tbl);
/* Operational definitions.
   These are not used by other compilation units and thus are not
   exported in a ".h" file.

   First the windows.  There are eight register windows, with the command
   and status registers available in each.
 */
#define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
#define EL3_CMD 0x0e
#define EL3_STATUS 0x0e

/* The top five bits written to EL3_CMD are a command, the lower
   11 bits are the parameter, if applicable.
   Note that 11 parameter bits was fine for ethernet, but the new chip
   can handle FDDI length frames (~4500 octets) and now parameters count
   32-bit 'Dwords' rather than octets. */
enum vortex_cmd {
	TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
	RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11,
	UpStall = 6<<11, UpUnstall = (6<<11)+1,
	DownStall = (6<<11)+2, DownUnstall = (6<<11)+3,
	RxDiscard = 8<<11, TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
	FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
	SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
	SetTxThreshold = 18<<11, SetTxStart = 19<<11,
	StartDMAUp = 20<<11, StartDMADown = (20<<11)+1, StatsEnable = 21<<11,
	StatsDisable = 22<<11, StopCoax = 23<<11, SetFilterBit = 25<<11,};
/* The SetRxFilter command accepts the following classes: */
enum RxFilter {
	RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8 };

/* Bits in the general status register. */
enum vortex_status {
	IntLatch = 0x0001, HostError = 0x0002, TxComplete = 0x0004,
	TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
	IntReq = 0x0040, StatsFull = 0x0080,
	DMADone = 1<<8, DownComplete = 1<<9, UpComplete = 1<<10,
	DMAInProgress = 1<<11,			/* DMA controller is still busy.*/
	CmdInProgress = 1<<12,			/* EL3_CMD is still busy.*/
};

/* Register window 1 offsets, the window used in normal operation.
   On the Vortex this window is always mapped at offsets 0x10-0x1f. */
enum Window1 {
	TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
	RxStatus = 0x18, Timer=0x1A, TxStatus = 0x1B,
	TxFree = 0x1C,		/* Remaining free bytes in Tx buffer. */
};
enum Window0 {
	Wn0EepromCmd = 10,	/* Window 0: EEPROM command register. */
	Wn0EepromData = 12,	/* Window 0: EEPROM results register. */
	IntrStatus=0x0E,	/* Valid in all windows. */
};
enum Win0_EEPROM_bits {
	EEPROM_Read = 0x80, EEPROM_WRITE = 0x40, EEPROM_ERASE = 0xC0,
	EEPROM_EWENB = 0x30,	/* Enable erasing/writing for 10 msec. */
	EEPROM_EWDIS = 0x00,	/* Disable EWENB before 10 msec timeout. */
};
/* EEPROM locations. */
enum eeprom_offset {
	PhysAddr01=0, PhysAddr23=1, PhysAddr45=2, ModelID=3,
	EtherLink3ID=7, IFXcvrIO=8, IRQLine=9,
	NodeAddr01=10, NodeAddr23=11, NodeAddr45=12,
	DriverTune=13, Checksum=15};

enum Window2 {			/* Window 2. */
	Wn2_ResetOptions=12,
};
enum Window3 {			/* Window 3: MAC/config bits. */
	Wn3_Config=0, Wn3_MAC_Ctrl=6, Wn3_Options=8,
};

#define BFEXT(value, offset, bitcount)	\
	((((unsigned long)(value)) >> (offset)) & ((1 << (bitcount)) - 1))

#define BFINS(lhs, rhs, offset, bitcount)			\
	(((lhs) & ~((((1 << (bitcount)) - 1)) << (offset))) |	\
	(((rhs) & ((1 << (bitcount)) - 1)) << (offset)))

#define RAM_SIZE(v)		BFEXT(v, 0, 3)
#define RAM_WIDTH(v)	BFEXT(v, 3, 1)
#define RAM_SPEED(v)	BFEXT(v, 4, 2)
#define ROM_SIZE(v)		BFEXT(v, 6, 2)
#define RAM_SPLIT(v)	BFEXT(v, 16, 2)
#define XCVR(v)			BFEXT(v, 20, 4)
#define AUTOSELECT(v)	BFEXT(v, 24, 1)
enum Window4 {		/* Window 4: Xcvr/media bits. */
	Wn4_FIFODiag = 4, Wn4_NetDiag = 6, Wn4_PhysicalMgmt=8, Wn4_Media = 10,
};
enum Win4_Media_bits {
	Media_SQE = 0x0008,	/* Enable SQE error counting for AUI. */
	Media_10TP = 0x00C0,	/* Enable link beat and jabber for 10baseT. */
	Media_Lnk = 0x0080,	/* Enable just link beat for 100TX/100FX. */
	Media_LnkBeat = 0x0800,
};
enum Window7 {		/* Window 7: Bus Master control. */
	Wn7_MasterAddr = 0, Wn7_MasterLen = 6, Wn7_MasterStatus = 12,
};
/* Boomerang bus master control registers. */
enum MasterCtrl {
	PktStatus = 0x20, DownListPtr = 0x24, FragAddr = 0x28, FragLen = 0x2c,
	TxFreeThreshold = 0x2f, UpPktStatus = 0x30, UpListPtr = 0x38,
};
/* The Rx and Tx descriptor lists.
   Caution Alpha hackers: these types are 32 bits!  Note also the 8 byte
   alignment constraint on tx_ring[] and rx_ring[]. */
#define LAST_FRAG	0x80000000		/* Last Addr/Len pair in descriptor. */
#define DN_COMPLETE	0x00010000		/* This packet has been downloaded */
struct boom_rx_desc {
	u32 next;				/* Last entry points to 0.   */
	s32 status;
	u32 addr;				/* Up to 63 addr/len pairs possible. */
	s32 length;				/* Set LAST_FRAG to indicate last pair. */
};
/* Values for the Rx status entry. */
enum rx_desc_status {
	RxDComplete=0x00008000, RxDError=0x4000,
	/* See boomerang_rx() for actual error bits */
	IPChksumErr=1<<25, TCPChksumErr=1<<26, UDPChksumErr=1<<27,
	IPChksumValid=1<<29, TCPChksumValid=1<<30, UDPChksumValid=1<<31,
};

struct boom_tx_desc {
	u32 next;				/* Last entry points to 0.   */
	s32 status;				/* bits 0:12 length, others see below.  */
	u32 addr;
	s32 length;
};

/* Values for the Tx status entry. */
enum tx_desc_status {
	CRCDisable=0x2000, TxDComplete=0x8000,
	AddIPChksum=0x02000000, AddTCPChksum=0x04000000, AddUDPChksum=0x08000000,
	TxIntrUploaded=0x80000000,		/* IRQ when in FIFO, but maybe not sent. */
};

/* Chip features we care about in vp->capabilities, read from the EEPROM. */
enum ChipCaps { CapBusMaster=0x20, CapPwrMgmt=0x2000 };
struct vortex_private {
	/* The Rx and Tx rings should be quad-word-aligned. */
	struct boom_rx_desc* rx_ring;
	struct boom_tx_desc* tx_ring;
	dma_addr_t rx_ring_dma;
	dma_addr_t tx_ring_dma;
	/* The addresses of transmit- and receive-in-place skbuffs. */
	struct sk_buff* rx_skbuff[RX_RING_SIZE];
	struct sk_buff* tx_skbuff[TX_RING_SIZE];
	struct net_device *next_module;		/* NULL if PCI device */
	unsigned int cur_rx, cur_tx;		/* The next free ring entry */
	unsigned int dirty_rx, dirty_tx;	/* The ring entries to be free()ed. */
	struct net_device_stats stats;
	struct sk_buff *tx_skb;			/* Packet being eaten by bus master ctrl.  */
	dma_addr_t tx_skb_dma;			/* Allocated DMA address for bus master ctrl DMA.   */

	/* PCI configuration space information. */
	struct pci_dev *pdev;
	char *cb_fn_base;			/* CardBus function status addr space. */

	/* The remainder are related to chip state, mostly media selection. */
	struct timer_list timer;		/* Media selection timer. */
	struct timer_list rx_oom_timer;		/* Rx skb allocation retry timer */
	int options;				/* User-settable misc. driver options. */
	unsigned int media_override:4,		/* Passed-in media type. */
		default_media:4,		/* Read from the EEPROM/Wn3_Config. */
		full_duplex:1, force_fd:1, autoselect:1,
		bus_master:1,			/* Vortex can only do a fragment bus-m. */
		full_bus_master_tx:1, full_bus_master_rx:2, /* Boomerang  */
		flow_ctrl:1,			/* Use 802.3x flow control (PAUSE only) */
		partner_flow_ctrl:1,		/* Partner supports flow control */
		tx_full:1,
		has_nway:1,
		open:1,
		must_free_region:1;		/* Flag: if zero, Cardbus owns the I/O region */
	int drv_flags;
	u16 status_enable;
	u16 intr_enable;
	u16 available_media;			/* From Wn3_Options. */
	u16 capabilities, info1, info2;		/* Various, from EEPROM. */
	u16 advertising;			/* NWay media advertisement */
	unsigned char phys[2];			/* MII device addresses. */
	u16 deferred;				/* Resend these interrupts when we
						 * bale from the ISR */
	u16 io_size;				/* Size of PCI region (for release_region) */
	spinlock_t lock;			/* Serialise access to device & its vortex_private */
	spinlock_t mdio_lock;			/* Serialise access to mdio hardware */
};
/* The action to take with a media selection timer tick.
   Note that we deviate from the 3Com order by checking 10base2 before AUI.
 */
enum xcvr_types {
	XCVR_10baseT=0, XCVR_AUI, XCVR_10baseTOnly, XCVR_10base2, XCVR_100baseTx,
	XCVR_100baseFx, XCVR_MII=6, XCVR_NWAY=8, XCVR_ExtMII=9, XCVR_Default=10,
};

static struct media_table {
	char *name;
	unsigned int media_bits:16,	/* Bits to set in Wn4_Media register. */
		mask:8,			/* The transceiver-present bit in Wn3_Config.*/
		next:8;			/* The media type to try next. */
	int wait;			/* Time before we check media status. */
} media_tbl[] = {
  { "10baseT",   Media_10TP, 0x08, XCVR_10base2,   (14*HZ)/10},
  { "10Mbs AUI", Media_SQE,  0x20, XCVR_Default,   (1*HZ)/10},
  { "undefined", 0,          0x80, XCVR_10baseT,   10000},
  { "10base2",   0,          0x10, XCVR_AUI,       (1*HZ)/10},
  { "100baseTX", Media_Lnk,  0x02, XCVR_100baseFx, (14*HZ)/10},
  { "100baseFX", Media_Lnk,  0x04, XCVR_MII,       (14*HZ)/10},
  { "MII",       0,          0x41, XCVR_10baseT,   3*HZ },
  { "undefined", 0,          0x01, XCVR_10baseT,   10000},
  { "Autonegotiate", 0,      0x41, XCVR_10baseT,   3*HZ},
  { "MII-External",  0,      0x41, XCVR_10baseT,   3*HZ },
  { "Default",   0,          0xFF, XCVR_10baseT,   10000},
};
static int vortex_probe1(struct pci_dev *pdev, long ioaddr, int irq,
				   int chip_idx, int card_idx);
static void vortex_up(struct net_device *dev);
static void vortex_down(struct net_device *dev);
static int vortex_open(struct net_device *dev);
static void mdio_sync(long ioaddr, int bits);
static int mdio_read(struct net_device *dev, int phy_id, int location);
static void mdio_write(struct net_device *vp, int phy_id, int location, int value);
static void vortex_timer(unsigned long arg);
static void rx_oom_timer(unsigned long arg);
static int vortex_start_xmit(struct sk_buff *skb, struct net_device *dev);
static int boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev);
static int vortex_rx(struct net_device *dev);
static int boomerang_rx(struct net_device *dev);
static void vortex_interrupt(int irq, void *dev_id, struct pt_regs *regs);
static void boomerang_interrupt(int irq, void *dev_id, struct pt_regs *regs);
static int vortex_close(struct net_device *dev);
static void dump_tx_ring(struct net_device *dev);
static void update_stats(long ioaddr, struct net_device *dev);
static struct net_device_stats *vortex_get_stats(struct net_device *dev);
static void set_rx_mode(struct net_device *dev);
static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static void vortex_tx_timeout(struct net_device *dev);
static void acpi_set_WOL(struct net_device *dev);
/* This driver uses 'options' to pass the media type, full-duplex flag, etc. */
/* Option count limit only -- unlimited interfaces are supported. */
#define MAX_UNITS 8
static int options[MAX_UNITS] = { -1, -1, -1, -1, -1, -1, -1, -1,};
static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int flow_ctrl[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};

/* #define dev_alloc_skb dev_alloc_skb_debug */

/* A list of all installed Vortex EISA devices, for removing the driver module. */
static struct net_device *root_vortex_eisa_dev;

/* Variables to work-around the Compaq PCI BIOS32 problem. */
static int compaq_ioaddr, compaq_irq, compaq_device_id = 0x5900;

static int vortex_cards_found;
762 static void vortex_suspend (struct pci_dev *pdev)
764 struct net_device *dev = pdev->driver_data;
766 printk(KERN_DEBUG "vortex_suspend(%s)\n", dev->name);
768 if (dev && dev->priv) {
769 struct vortex_private *vp = (struct vortex_private *)dev->priv;
770 if (vp->open) {
771 netif_device_detach(dev);
772 vortex_down(dev);
777 static void vortex_resume (struct pci_dev *pdev)
779 struct net_device *dev = pdev->driver_data;
781 printk(KERN_DEBUG "vortex_resume(%s)\n", dev->name);
783 if (dev && dev->priv) {
784 struct vortex_private *vp = (struct vortex_private *)dev->priv;
785 if (vp->open) {
786 vortex_up(dev);
787 netif_device_attach(dev);
792 /* returns count found (>= 0), or negative on error */
793 static int __init vortex_eisa_init (void)
795 long ioaddr;
796 int rc;
797 int orig_cards_found = vortex_cards_found;
799 /* Now check all slots of the EISA bus. */
800 if (!EISA_bus)
801 return 0;
803 for (ioaddr = 0x1000; ioaddr < 0x9000; ioaddr += 0x1000) {
804 int device_id;
806 if (request_region(ioaddr, VORTEX_TOTAL_SIZE, "3c59x") == NULL)
807 continue;
809 /* Check the standard EISA ID register for an encoded '3Com'. */
810 if (inw(ioaddr + 0xC80) != 0x6d50) {
811 release_region (ioaddr, VORTEX_TOTAL_SIZE);
812 continue;
815 /* Check for a product that we support, 3c59{2,7} any rev. */
816 device_id = (inb(ioaddr + 0xC82)<<8) + inb(ioaddr + 0xC83);
817 if ((device_id & 0xFF00) != 0x5900) {
818 release_region (ioaddr, VORTEX_TOTAL_SIZE);
819 continue;
822 rc = vortex_probe1(NULL, ioaddr, inw(ioaddr + 0xC88) >> 12,
823 EISA_TBL_OFFSET,
824 vortex_cards_found);
825 if (rc == 0)
826 vortex_cards_found++;
827 else
828 release_region (ioaddr, VORTEX_TOTAL_SIZE);
831 /* Special code to work-around the Compaq PCI BIOS32 problem. */
832 if (compaq_ioaddr) {
833 vortex_probe1(NULL, compaq_ioaddr, compaq_irq,
834 compaq_device_id, vortex_cards_found++);
837 return vortex_cards_found - orig_cards_found;
840 /* returns count (>= 0), or negative on error */
841 static int __devinit vortex_init_one (struct pci_dev *pdev,
842 const struct pci_device_id *ent)
844 int rc;
846 rc = vortex_probe1 (pdev, pci_resource_start (pdev, 0), pdev->irq,
847 ent->driver_data, vortex_cards_found);
848 if (rc == 0)
849 vortex_cards_found++;
850 return rc;
854 * Start up the PCI device which is described by *pdev.
855 * Return 0 on success.
857 * NOTE: pdev can be NULL, for the case of an EISA driver
859 static int __devinit vortex_probe1(struct pci_dev *pdev,
860 long ioaddr, int irq,
861 int chip_idx, int card_idx)
863 struct vortex_private *vp;
864 int option;
865 unsigned int eeprom[0x40], checksum = 0; /* EEPROM contents */
866 int i;
867 struct net_device *dev;
868 static int printed_version;
869 int retval;
870 struct vortex_chip_info * const vci = &vortex_info_tbl[chip_idx];
872 if (!printed_version) {
873 printk (KERN_INFO "%s", version);
874 printk (KERN_INFO "See Documentation/networking/vortex.txt\n");
875 printed_version = 1;
878 dev = init_etherdev(NULL, sizeof(*vp));
879 if (!dev) {
880 printk (KERN_ERR PFX "unable to allocate etherdev, aborting\n");
881 retval = -ENOMEM;
882 goto out;
884 SET_MODULE_OWNER(dev);
886 printk(KERN_INFO "%s: 3Com %s %s at 0x%lx, ",
887 dev->name,
888 pdev ? "PCI" : "EISA",
889 vci->name,
890 ioaddr);
892 /* private struct aligned and zeroed by init_etherdev */
893 vp = dev->priv;
894 dev->base_addr = ioaddr;
895 dev->irq = irq;
896 dev->mtu = mtu;
897 vp->drv_flags = vci->drv_flags;
898 vp->has_nway = (vci->drv_flags & HAS_NWAY) ? 1 : 0;
899 vp->io_size = vci->io_size;
901 /* module list only for EISA devices */
902 if (pdev == NULL) {
903 vp->next_module = root_vortex_eisa_dev;
904 root_vortex_eisa_dev = dev;
907 /* PCI-only startup logic */
908 if (pdev) {
909 /* EISA resources already marked, so only PCI needs to do this here */
910 /* Ignore return value, because Cardbus drivers already allocate for us */
911 if (request_region(ioaddr, vci->io_size, dev->name) != NULL) {
912 vp->must_free_region = 1;
915 /* wake up and enable device */
916 if (pci_enable_device (pdev)) {
917 retval = -EIO;
918 goto free_region;
921 /* enable bus-mastering if necessary */
922 if (vci->flags & PCI_USES_MASTER)
923 pci_set_master (pdev);
926 spin_lock_init(&vp->lock);
927 spin_lock_init(&vp->mdio_lock);
928 vp->pdev = pdev;
930 /* Make sure the rings are at least 16-byte aligned. */
931 vp->rx_ring = pci_alloc_consistent(pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE
932 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
933 &vp->rx_ring_dma);
934 if (vp->rx_ring == 0) {
935 retval = -ENOMEM;
936 goto free_region;
939 vp->tx_ring = (struct boom_tx_desc *)(vp->rx_ring + RX_RING_SIZE);
940 vp->tx_ring_dma = vp->rx_ring_dma + sizeof(struct boom_rx_desc) * RX_RING_SIZE;
942 /* if we are a PCI driver, we store info in pdev->driver_data
943 * instead of a module list */
944 if (pdev)
945 pdev->driver_data = dev;
947 /* The lower four bits are the media type. */
948 if (dev->mem_start) {
950 * AKPM: ewww.. The 'options' param is passed in as the third arg to the
951 * LILO 'ether=' argument for non-modular use
953 option = dev->mem_start;
955 else if (card_idx < MAX_UNITS)
956 option = options[card_idx];
957 else
958 option = -1;
960 vp->media_override = 7;
961 if (option >= 0) {
962 vp->media_override = ((option & 7) == 2) ? 0 : option & 15;
963 vp->full_duplex = (option & 0x200) ? 1 : 0;
964 vp->bus_master = (option & 16) ? 1 : 0;
967 if (card_idx < MAX_UNITS) {
968 if (full_duplex[card_idx] > 0)
969 vp->full_duplex = 1;
970 if (flow_ctrl[card_idx] > 0)
971 vp->flow_ctrl = 1;
974 vp->force_fd = vp->full_duplex;
975 vp->options = option;
976 /* Read the station address from the EEPROM. */
977 EL3WINDOW(0);
979 int base;
981 if (vci->drv_flags & EEPROM_8BIT)
982 base = 0x230;
983 else if (vci->drv_flags & EEPROM_OFFSET)
984 base = EEPROM_Read + 0x30;
985 else
986 base = EEPROM_Read;
988 for (i = 0; i < 0x40; i++) {
989 int timer;
990 outw(base + i, ioaddr + Wn0EepromCmd);
991 /* Pause for at least 162 us for the read to take place. */
992 for (timer = 10; timer >= 0; timer--) {
993 udelay(162);
994 if ((inw(ioaddr + Wn0EepromCmd) & 0x8000) == 0)
995 break;
997 eeprom[i] = inw(ioaddr + Wn0EepromData);
1000 for (i = 0; i < 0x18; i++)
1001 checksum ^= eeprom[i];
1002 checksum = (checksum ^ (checksum >> 8)) & 0xff;
1003 if (checksum != 0x00) { /* Grrr, needless incompatible change 3Com. */
1004 while (i < 0x21)
1005 checksum ^= eeprom[i++];
1006 checksum = (checksum ^ (checksum >> 8)) & 0xff;
1008 if ((checksum != 0x00) && !(vci->drv_flags & IS_TORNADO))
1009 printk(" ***INVALID CHECKSUM %4.4x*** ", checksum);
1010 for (i = 0; i < 3; i++)
1011 ((u16 *)dev->dev_addr)[i] = htons(eeprom[i + 10]);
1012 for (i = 0; i < 6; i++)
1013 printk("%c%2.2x", i ? ':' : ' ', dev->dev_addr[i]);
1014 EL3WINDOW(2);
1015 for (i = 0; i < 6; i++)
1016 outb(dev->dev_addr[i], ioaddr + i);
1018 #ifdef __sparc__
1019 printk(", IRQ %s\n", __irq_itoa(dev->irq));
1020 #else
1021 printk(", IRQ %d\n", dev->irq);
1022 /* Tell them about an invalid IRQ. */
1023 if (vortex_debug && (dev->irq <= 0 || dev->irq >= NR_IRQS))
1024 printk(KERN_WARNING " *** Warning: IRQ %d is unlikely to work! ***\n",
1025 dev->irq);
1026 #endif
1028 if (pdev && vci->drv_flags & HAS_CB_FNS) {
1029 unsigned long fn_st_addr; /* Cardbus function status space */
1030 unsigned short n;
1032 fn_st_addr = pci_resource_start (pdev, 2);
1033 if (fn_st_addr)
1034 vp->cb_fn_base = ioremap(fn_st_addr, 128);
1035 printk(KERN_INFO "%s: CardBus functions mapped %8.8lx->%p\n",
1036 dev->name, fn_st_addr, vp->cb_fn_base);
1037 EL3WINDOW(2);
1039 n = inw(ioaddr + Wn2_ResetOptions) & ~0x4010;
1040 if (vp->drv_flags & INVERT_LED_PWR)
1041 n |= 0x10;
1042 if (vp->drv_flags & INVERT_MII_PWR)
1043 n |= 0x4000;
1044 outw(n, ioaddr + Wn2_ResetOptions);
1047 /* Extract our information from the EEPROM data. */
1048 vp->info1 = eeprom[13];
1049 vp->info2 = eeprom[15];
1050 vp->capabilities = eeprom[16];
1052 if (vp->info1 & 0x8000) {
1053 vp->full_duplex = 1;
1054 printk(KERN_INFO "Full duplex capable\n");
1058 static const char * ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
1059 unsigned int config;
1060 EL3WINDOW(3);
1061 vp->available_media = inw(ioaddr + Wn3_Options);
1062 if ((vp->available_media & 0xff) == 0) /* Broken 3c916 */
1063 vp->available_media = 0x40;
1064 config = inl(ioaddr + Wn3_Config);
1065 if (vortex_debug > 1)
1066 printk(KERN_DEBUG " Internal config register is %4.4x, "
1067 "transceivers %#x.\n", config, inw(ioaddr + Wn3_Options));
1068 printk(KERN_INFO " %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
1069 8 << RAM_SIZE(config),
1070 RAM_WIDTH(config) ? "word" : "byte",
1071 ram_split[RAM_SPLIT(config)],
1072 AUTOSELECT(config) ? "autoselect/" : "",
1073 XCVR(config) > XCVR_ExtMII ? "<invalid transceiver>" :
1074 media_tbl[XCVR(config)].name);
1075 vp->default_media = XCVR(config);
1076 vp->autoselect = AUTOSELECT(config);
1079 if (vp->media_override != 7) {
1080 printk(KERN_INFO " Media override to transceiver type %d (%s).\n",
1081 vp->media_override, media_tbl[vp->media_override].name);
1082 dev->if_port = vp->media_override;
1083 } else
1084 dev->if_port = vp->default_media;
1086 if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
1087 int phy, phy_idx = 0;
1088 EL3WINDOW(4);
1089 mii_preamble_required++;
1090 mii_preamble_required++;
1091 mdio_read(dev, 24, 1);
1092 for (phy = 1; phy <= 32 && phy_idx < sizeof(vp->phys); phy++) {
1093 int mii_status, phyx = phy & 0x1f;
1094 mii_status = mdio_read(dev, phyx, 1);
1095 if (mii_status && mii_status != 0xffff) {
1096 vp->phys[phy_idx++] = phyx;
1097 printk(KERN_INFO " MII transceiver found at address %d,"
1098 " status %4x.\n", phyx, mii_status);
1099 if ((mii_status & 0x0040) == 0)
1100 mii_preamble_required++;
1103 mii_preamble_required--;
1104 if (phy_idx == 0) {
1105 printk(KERN_WARNING" ***WARNING*** No MII transceivers found!\n");
1106 vp->phys[0] = 24;
1107 } else {
1108 vp->advertising = mdio_read(dev, vp->phys[0], 4);
1109 if (vp->full_duplex) {
1110 /* Only advertise the FD media types. */
1111 vp->advertising &= ~0x02A0;
1112 mdio_write(dev, vp->phys[0], 4, vp->advertising);
1117 if (vp->capabilities & CapPwrMgmt)
1118 acpi_set_WOL(dev);
1120 if (vp->capabilities & CapBusMaster) {
1121 vp->full_bus_master_tx = 1;
1122 printk(KERN_INFO" Enabling bus-master transmits and %s receives.\n",
1123 (vp->info2 & 1) ? "early" : "whole-frame" );
1124 vp->full_bus_master_rx = (vp->info2 & 1) ? 1 : 2;
1125 vp->bus_master = 0; /* AKPM: vortex only */
1128 /* The 3c59x-specific entries in the device structure. */
1129 dev->open = vortex_open;
1130 dev->hard_start_xmit = vp->full_bus_master_tx ?
1131 boomerang_start_xmit : vortex_start_xmit;
1132 dev->stop = vortex_close;
1133 dev->get_stats = vortex_get_stats;
1134 dev->do_ioctl = vortex_ioctl;
1135 dev->set_multicast_list = set_rx_mode;
1136 dev->tx_timeout = vortex_tx_timeout;
1137 dev->watchdog_timeo = (watchdog * HZ) / 1000;
1139 return 0;
1141 free_region:
1142 if (vp->must_free_region)
1143 release_region(ioaddr, vci->io_size);
1144 unregister_netdev(dev);
1145 kfree (dev);
1146 printk(KERN_ERR PFX "vortex_probe1 fails. Returns %d\n", retval);
1147 out:
1148 return retval;
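The EEPROM validation in vortex_probe1() above XORs the first 0x18 words together and then folds the high byte into the low byte (re-folding over 0x21 words for newer parts). A standalone sketch of that XOR-and-fold, with made-up EEPROM contents:

```c
/* Sketch of the XOR-and-fold EEPROM checksum used in vortex_probe1().
 * A result of 0x00 means the EEPROM contents are self-consistent. */
static unsigned char eeprom_checksum(const unsigned short *eeprom, int nwords)
{
	unsigned int checksum = 0;
	int i;

	for (i = 0; i < nwords; i++)
		checksum ^= eeprom[i];
	return (checksum ^ (checksum >> 8)) & 0xff;
}
```

Two identical words cancel under XOR, so they fold to 0x00; a single word 0x1200 folds to 0x12.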
1151 static void wait_for_completion(struct net_device *dev, int cmd)
1153 int i = 4000;
1155 outw(cmd, dev->base_addr + EL3_CMD);
1156 while (--i > 0) {
1157 if (!(inw(dev->base_addr + EL3_STATUS) & CmdInProgress))
1158 return;
1160 printk(KERN_ERR "%s: command 0x%04x did not complete! Status=0x%x\n",
1161 dev->name, cmd, inw(dev->base_addr + EL3_STATUS));
1164 static void
1165 vortex_up(struct net_device *dev)
1167 long ioaddr = dev->base_addr;
1168 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1169 unsigned int config;
1170 int i, device_id;
1172 if (vp->pdev)
1173 device_id = vp->pdev->device;
1174 else
1175 device_id = 0x5900; /* EISA */
1177 /* Before initializing select the active media port. */
1178 EL3WINDOW(3);
1179 config = inl(ioaddr + Wn3_Config);
1181 if (vp->media_override != 7) {
1182 if (vortex_debug > 1)
1183 printk(KERN_INFO "%s: Media override to transceiver %d (%s).\n",
1184 dev->name, vp->media_override,
1185 media_tbl[vp->media_override].name);
1186 dev->if_port = vp->media_override;
1187 } else if (vp->autoselect) {
1188 if (vp->has_nway) {
1189 printk(KERN_INFO "%s: using NWAY autonegotiation\n", dev->name);
1190 dev->if_port = XCVR_NWAY;
1191 } else {
1192 /* Find first available media type, starting with 100baseTx. */
1193 dev->if_port = XCVR_100baseTx;
1194 while (! (vp->available_media & media_tbl[dev->if_port].mask))
1195 dev->if_port = media_tbl[dev->if_port].next;
1196 printk(KERN_INFO "%s: first available media type: %s\n",
1197 dev->name,
1198 media_tbl[dev->if_port].name);
1200 } else {
1201 dev->if_port = vp->default_media;
1202 printk(KERN_INFO "%s: using default media %s\n",
1203 dev->name, media_tbl[dev->if_port].name);
1206 init_timer(&vp->timer);
1207 vp->timer.expires = RUN_AT(media_tbl[dev->if_port].wait);
1208 vp->timer.data = (unsigned long)dev;
1209 vp->timer.function = vortex_timer; /* timer handler */
1210 add_timer(&vp->timer);
1212 init_timer(&vp->rx_oom_timer);
1213 vp->rx_oom_timer.data = (unsigned long)dev;
1214 vp->rx_oom_timer.function = rx_oom_timer;
1216 if (vortex_debug > 1)
1217 printk(KERN_DEBUG "%s: Initial media type %s.\n",
1218 dev->name, media_tbl[dev->if_port].name);
1220 vp->full_duplex = vp->force_fd;
1221 config = BFINS(config, dev->if_port, 20, 4);
1222 //AKPM if (!vp->has_nway)
1224 if (vortex_debug > 6)
1225 printk(KERN_DEBUG "vortex_up(): writing 0x%x to InternalConfig\n",
1226 config);
1227 outl(config, ioaddr + Wn3_Config);
1230 if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
1231 int mii_reg1, mii_reg5;
1232 EL3WINDOW(4);
1233 /* Read BMSR (reg1) only to clear old status. */
1234 mii_reg1 = mdio_read(dev, vp->phys[0], 1);
1235 mii_reg5 = mdio_read(dev, vp->phys[0], 5);
1236 if (mii_reg5 == 0xffff || mii_reg5 == 0x0000)
1237 ; /* No MII device or no link partner report */
1238 else if ((mii_reg5 & 0x0100) != 0 /* 100baseTx-FD */
1239 || (mii_reg5 & 0x00C0) == 0x0040) /* 10T-FD, but not 100-HD */
1240 vp->full_duplex = 1;
1241 vp->partner_flow_ctrl = ((mii_reg5 & 0x0400) != 0);
1242 if (vortex_debug > 1)
1243 printk(KERN_INFO "%s: MII #%d status %4.4x, link partner capability %4.4x,"
1244 " setting %s-duplex.\n", dev->name, vp->phys[0],
1245 mii_reg1, mii_reg5, vp->full_duplex ? "full" : "half");
1246 EL3WINDOW(3);
1249 /* Set the full-duplex bit. */
1250 outw( ((vp->info1 & 0x8000) || vp->full_duplex ? 0x20 : 0) |
1251 (dev->mtu > 1500 ? 0x40 : 0) |
1252 ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ? 0x100 : 0),
1253 ioaddr + Wn3_MAC_Ctrl);
1255 if (vortex_debug > 1) {
1256 printk(KERN_DEBUG "%s: vortex_up() InternalConfig %8.8x.\n",
1257 dev->name, config);
1260 wait_for_completion(dev, TxReset);
1261 wait_for_completion(dev, RxReset);
1263 outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
1265 if (vortex_debug > 1) {
1266 EL3WINDOW(4);
1267 printk(KERN_DEBUG "%s: vortex_up() irq %d media status %4.4x.\n",
1268 dev->name, dev->irq, inw(ioaddr + Wn4_Media));
1271 /* Set the station address and mask in window 2 each time opened. */
1272 EL3WINDOW(2);
1273 for (i = 0; i < 6; i++)
1274 outb(dev->dev_addr[i], ioaddr + i);
1275 for (; i < 12; i+=2)
1276 outw(0, ioaddr + i);
1278 if (vp->cb_fn_base) {
1279 unsigned short n = inw(ioaddr + Wn2_ResetOptions) & ~0x4010;
1280 if (vp->drv_flags & INVERT_LED_PWR)
1281 n |= 0x10;
1282 if (vp->drv_flags & INVERT_MII_PWR)
1283 n |= 0x4000;
1284 outw(n, ioaddr + Wn2_ResetOptions);
1287 if (dev->if_port == XCVR_10base2)
1288 /* Start the thinnet transceiver. We should really wait 50ms...*/
1289 outw(StartCoax, ioaddr + EL3_CMD);
1290 if (dev->if_port != XCVR_NWAY) {
1291 EL3WINDOW(4);
1292 outw((inw(ioaddr + Wn4_Media) & ~(Media_10TP|Media_SQE)) |
1293 media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
1296 /* Switch to the stats window, and clear all stats by reading. */
1297 outw(StatsDisable, ioaddr + EL3_CMD);
1298 EL3WINDOW(6);
1299 for (i = 0; i < 10; i++)
1300 inb(ioaddr + i);
1301 inw(ioaddr + 10);
1302 inw(ioaddr + 12);
1303 /* New: On the Vortex we must also clear the BadSSD counter. */
1304 EL3WINDOW(4);
1305 inb(ioaddr + 12);
1306 /* ..and on the Boomerang we enable the extra statistics bits. */
1307 outw(0x0040, ioaddr + Wn4_NetDiag);
1309 /* Switch to register set 7 for normal use. */
1310 EL3WINDOW(7);
1312 if (vp->full_bus_master_rx) { /* Boomerang bus master. */
1313 vp->cur_rx = vp->dirty_rx = 0;
1314 /* Initialize the RxEarly register as recommended. */
1315 outw(SetRxThreshold + (1536>>2), ioaddr + EL3_CMD);
1316 outl(0x0020, ioaddr + PktStatus);
1317 outl(vp->rx_ring_dma, ioaddr + UpListPtr);
1319 if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
1320 vp->cur_tx = vp->dirty_tx = 0;
1321 if (vp->drv_flags & IS_BOOMERANG)
1322 outb(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold); /* Room for a packet. */
1323 /* Clear the Rx, Tx rings. */
1324 for (i = 0; i < RX_RING_SIZE; i++) /* AKPM: this is done in vortex_open, too */
1325 vp->rx_ring[i].status = 0;
1326 for (i = 0; i < TX_RING_SIZE; i++)
1327 vp->tx_skbuff[i] = 0;
1328 outl(0, ioaddr + DownListPtr);
1330 /* Set receiver mode: presumably accept broadcast and our physical address only. */
1331 set_rx_mode(dev);
1332 outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
1334 outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
1335 outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
1336 /* Allow status bits to be seen. */
1337 vp->status_enable = SetStatusEnb | HostError|IntReq|StatsFull|TxComplete|
1338 (vp->full_bus_master_tx ? DownComplete : TxAvailable) |
1339 (vp->full_bus_master_rx ? UpComplete : RxComplete) |
1340 (vp->bus_master ? DMADone : 0);
1341 vp->intr_enable = SetIntrEnb | IntLatch | TxAvailable |
1342 (vp->full_bus_master_rx ? 0 : RxComplete) |
1343 StatsFull | HostError | TxComplete | IntReq
1344 | (vp->bus_master ? DMADone : 0) | UpComplete | DownComplete;
1345 outw(vp->status_enable, ioaddr + EL3_CMD);
1346 /* Ack all pending events, and set active indicator mask. */
1347 outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
1348 ioaddr + EL3_CMD);
1349 outw(vp->intr_enable, ioaddr + EL3_CMD);
1350 if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
1351 writel(0x8000, vp->cb_fn_base + 4);
1352 netif_start_queue (dev);
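vortex_up() above uses BFINS() (defined earlier in this file) to splice the transceiver number into the 4-bit field at bit 20 of InternalConfig. A self-contained sketch of that style of bitfield insert; the helper name and exact semantics here are assumptions for illustration, not the driver's macro:

```c
/* BFINS-style bitfield insert: replace `bitcount` bits of `lhs`,
 * starting at `offset`, with the low bits of `rhs`. */
static unsigned int bfins(unsigned int lhs, unsigned int rhs,
			  int offset, int bitcount)
{
	unsigned int mask = ((1u << bitcount) - 1) << offset;

	return (lhs & ~mask) | ((rhs << offset) & mask);
}
```

Clearing the field first means stale transceiver-select bits cannot survive a media change.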
1355 static int
1356 vortex_open(struct net_device *dev)
1358 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1359 int i;
1360 int retval;
1362 /* Use the now-standard shared IRQ implementation. */
1363 if ((retval = request_irq(dev->irq, vp->full_bus_master_rx ?
1364 &boomerang_interrupt : &vortex_interrupt, SA_SHIRQ, dev->name, dev))) {
1365 printk(KERN_ERR "%s: Could not reserve IRQ %d\n", dev->name, dev->irq);
1366 goto out;
1369 if (vp->full_bus_master_rx) { /* Boomerang bus master. */
1370 if (vortex_debug > 2)
1371 printk(KERN_DEBUG "%s: Filling in the Rx ring.\n", dev->name);
1372 for (i = 0; i < RX_RING_SIZE; i++) {
1373 struct sk_buff *skb;
1374 vp->rx_ring[i].next = cpu_to_le32(vp->rx_ring_dma + sizeof(struct boom_rx_desc) * (i+1));
1375 vp->rx_ring[i].status = 0; /* Clear complete bit. */
1376 vp->rx_ring[i].length = cpu_to_le32(PKT_BUF_SZ | LAST_FRAG);
1377 skb = dev_alloc_skb(PKT_BUF_SZ);
1378 vp->rx_skbuff[i] = skb;
1379 if (skb == NULL)
1380 break; /* Bad news! */
1381 skb->dev = dev; /* Mark as being used by this device. */
1382 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
1383 vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->tail, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
1385 if (i != RX_RING_SIZE) {
1386 int j;
1387 for (j = 0; j < RX_RING_SIZE; j++) {
1388 if (vp->rx_skbuff[j]) {
1389 dev_kfree_skb(vp->rx_skbuff[j]);
1390 vp->rx_skbuff[j] = 0;
1393 retval = -ENOMEM;
1394 goto out_free_irq;
1396 /* Wrap the ring. */
1397 vp->rx_ring[i-1].next = cpu_to_le32(vp->rx_ring_dma);
1400 vortex_up(dev);
1401 vp->open = 1;
1402 vp->tx_full = 0;
1403 return 0;
1405 out_free_irq:
1406 free_irq(dev->irq, dev);
1407 out:
1408 if (vortex_debug > 1)
1409 printk(KERN_ERR "%s: vortex_open() fails: returning %d\n", dev->name, retval);
1410 return retval;
1413 static void vortex_timer(unsigned long data)
1415 struct net_device *dev = (struct net_device *)data;
1416 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1417 long ioaddr = dev->base_addr;
1418 int next_tick = 60*HZ;
1419 int ok = 0;
1420 int media_status, mii_status, old_window;
1422 if (vortex_debug > 2) {
1423 printk(KERN_DEBUG "%s: Media selection timer tick happened, %s.\n",
1424 dev->name, media_tbl[dev->if_port].name);
1425 printk(KERN_DEBUG "dev->watchdog_timeo=%d\n", dev->watchdog_timeo);
1428 disable_irq(dev->irq);
1429 old_window = inw(ioaddr + EL3_CMD) >> 13;
1430 EL3WINDOW(4);
1431 media_status = inw(ioaddr + Wn4_Media);
1432 switch (dev->if_port) {
1433 case XCVR_10baseT: case XCVR_100baseTx: case XCVR_100baseFx:
1434 if (media_status & Media_LnkBeat) {
1435 ok = 1;
1436 if (vortex_debug > 1)
1437 printk(KERN_DEBUG "%s: Media %s has link beat, %x.\n",
1438 dev->name, media_tbl[dev->if_port].name, media_status);
1439 } else if (vortex_debug > 1)
1440 printk(KERN_DEBUG "%s: Media %s has no link beat, %x.\n",
1441 dev->name, media_tbl[dev->if_port].name, media_status);
1442 break;
1443 case XCVR_MII: case XCVR_NWAY:
1445 mii_status = mdio_read(dev, vp->phys[0], 1);
1446 ok = 1;
1447 if (vortex_debug > 2)
1448 printk(KERN_DEBUG "%s: MII transceiver has status %4.4x.\n",
1449 dev->name, mii_status);
1450 if (mii_status & 0x0004) {
1451 int mii_reg5 = mdio_read(dev, vp->phys[0], 5);
1452 if (! vp->force_fd && mii_reg5 != 0xffff) {
1453 int duplex = (mii_reg5 & 0x0100) ||
1454 (mii_reg5 & 0x01C0) == 0x0040;
1455 if (vp->full_duplex != duplex) {
1456 vp->full_duplex = duplex;
1457 printk(KERN_INFO "%s: Setting %s-duplex based on MII "
1458 "#%d link partner capability of %4.4x.\n",
1459 dev->name, vp->full_duplex ? "full" : "half",
1460 vp->phys[0], mii_reg5);
1461 /* Set the full-duplex bit. */
1462 EL3WINDOW(3); /* AKPM: this was missing from 2.3.99 3c59x.c! */
1463 outw( (vp->full_duplex ? 0x20 : 0) |
1464 (dev->mtu > 1500 ? 0x40 : 0) |
1465 ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ? 0x100 : 0),
1466 ioaddr + Wn3_MAC_Ctrl);
1467 if (vortex_debug > 1)
1468 printk(KERN_DEBUG "Setting duplex in Wn3_MAC_Ctrl\n");
1469 /* AKPM: bug: should reset Tx and Rx after setting Duplex. Page 180 */
1474 break;
1475 default: /* Other media types handled by Tx timeouts. */
1476 if (vortex_debug > 1)
1477 printk(KERN_DEBUG "%s: Media %s has no indication, %x.\n",
1478 dev->name, media_tbl[dev->if_port].name, media_status);
1479 ok = 1;
1481 if ( ! ok) {
1482 unsigned int config;
1484 do {
1485 dev->if_port = media_tbl[dev->if_port].next;
1486 } while ( ! (vp->available_media & media_tbl[dev->if_port].mask));
1487 if (dev->if_port == XCVR_Default) { /* Go back to default. */
1488 dev->if_port = vp->default_media;
1489 if (vortex_debug > 1)
1490 printk(KERN_DEBUG "%s: Media selection failing, using default "
1491 "%s port.\n",
1492 dev->name, media_tbl[dev->if_port].name);
1493 } else {
1494 if (vortex_debug > 1)
1495 printk(KERN_DEBUG "%s: Media selection failed, now trying "
1496 "%s port.\n",
1497 dev->name, media_tbl[dev->if_port].name);
1498 next_tick = media_tbl[dev->if_port].wait;
1500 outw((media_status & ~(Media_10TP|Media_SQE)) |
1501 media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
1503 EL3WINDOW(3);
1504 config = inl(ioaddr + Wn3_Config);
1505 config = BFINS(config, dev->if_port, 20, 4);
1506 outl(config, ioaddr + Wn3_Config);
1508 outw(dev->if_port == XCVR_10base2 ? StartCoax : StopCoax,
1509 ioaddr + EL3_CMD);
1510 if (vortex_debug > 1)
1511 printk(KERN_DEBUG "wrote 0x%08x to Wn3_Config\n", config);
1512 /* AKPM: FIXME: Should reset Rx & Tx here. P60 of 3c90xc.pdf */
1514 EL3WINDOW(old_window);
1515 enable_irq(dev->irq);
1517 if (vortex_debug > 2)
1518 printk(KERN_DEBUG "%s: Media selection timer finished, %s.\n",
1519 dev->name, media_tbl[dev->if_port].name);
1521 mod_timer(&vp->timer, RUN_AT(next_tick));
1522 if (vp->deferred)
1523 outw(FakeIntr, ioaddr + EL3_CMD);
1524 return;
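Both vortex_up() and vortex_timer() above walk media_tbl as a linked list threaded through an array: each entry's next field names the next transceiver to try, and entries whose mask bit is absent from available_media are skipped. A minimal sketch with an illustrative three-entry table (the real table, masks, and indices differ):

```c
struct media_entry {
	int next;		/* index of next transceiver to try */
	unsigned int mask;	/* bit in available_media for this port */
};

/* Follow the `next` chain from `port` until a supported entry is found.
 * Assumes at least one mask bit in `available` is reachable. */
static int pick_media(const struct media_entry *tbl, int port,
		      unsigned int available)
{
	while (!(available & tbl[port].mask))
		port = tbl[port].next;
	return port;
}
```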
1527 static void vortex_tx_timeout(struct net_device *dev)
1529 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1530 long ioaddr = dev->base_addr;
1532 printk(KERN_ERR "%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
1533 dev->name, inb(ioaddr + TxStatus),
1534 inw(ioaddr + EL3_STATUS));
1536 /* Slight code bloat to be user friendly. */
1537 if ((inb(ioaddr + TxStatus) & 0x88) == 0x88)
1538 printk(KERN_ERR "%s: Transmitter encountered 16 collisions --"
1539 " network cable problem?\n", dev->name);
1540 if (inw(ioaddr + EL3_STATUS) & IntLatch) {
1541 printk(KERN_ERR "%s: Interrupt posted but not delivered --"
1542 " IRQ blocked by another device?\n", dev->name);
1543 /* Bad idea here.. but we might as well handle a few events. */
1546 * AKPM: block interrupts because vortex_interrupt
1547 * does a bare spin_lock()
1549 unsigned long flags;
1550 local_irq_save(flags);
1551 if (vp->full_bus_master_tx)
1552 boomerang_interrupt(dev->irq, dev, 0);
1553 else
1554 vortex_interrupt(dev->irq, dev, 0);
1555 local_irq_restore(flags);
1559 if (vortex_debug > 0)
1560 dump_tx_ring(dev);
1562 wait_for_completion(dev, TxReset);
1564 vp->stats.tx_errors++;
1565 if (vp->full_bus_master_tx) {
1566 if (vortex_debug > 0)
1567 printk(KERN_DEBUG "%s: Resetting the Tx ring pointer.\n",
1568 dev->name);
1569 if (vp->cur_tx - vp->dirty_tx > 0 && inl(ioaddr + DownListPtr) == 0)
1570 outl(vp->tx_ring_dma + (vp->dirty_tx % TX_RING_SIZE) * sizeof(struct boom_tx_desc),
1571 ioaddr + DownListPtr);
1572 if (vp->tx_full && (vp->cur_tx - vp->dirty_tx <= TX_RING_SIZE - 1)) {
1573 vp->tx_full = 0;
1574 netif_wake_queue (dev);
1576 if (vp->tx_full)
1577 netif_stop_queue (dev);
1578 if (vp->drv_flags & IS_BOOMERANG)
1579 outb(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold);
1580 outw(DownUnstall, ioaddr + EL3_CMD);
1581 } else {
1582 vp->stats.tx_dropped++;
1583 netif_wake_queue(dev);
1586 /* Issue Tx Enable */
1587 outw(TxEnable, ioaddr + EL3_CMD);
1588 dev->trans_start = jiffies;
1590 /* Switch to register set 7 for normal use. */
1591 EL3WINDOW(7);
1595 * Handle uncommon interrupt sources. This is a separate routine to minimize
1596 * the cache impact.
1598 static void
1599 vortex_error(struct net_device *dev, int status)
1601 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1602 long ioaddr = dev->base_addr;
1603 int do_tx_reset = 0, reset_mask = 0;
1604 unsigned char tx_status = 0;
1606 if (vortex_debug > 2) {
1607 printk(KERN_DEBUG "%s: vortex_error(), status=0x%x\n", dev->name, status);
1610 if (status & TxComplete) { /* Really "TxError" for us. */
1611 tx_status = inb(ioaddr + TxStatus);
1612 /* Presumably a tx-timeout. We must merely re-enable. */
1613 if (vortex_debug > 2
1614 || (tx_status != 0x88 && vortex_debug > 0)) {
1615 printk(KERN_DEBUG"%s: Transmit error, Tx status register %2.2x.\n",
1616 dev->name, tx_status);
1617 dump_tx_ring(dev);
1619 if (tx_status & 0x14) vp->stats.tx_fifo_errors++;
1620 if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
1621 outb(0, ioaddr + TxStatus);
1622 if (tx_status & 0x30) { /* txJabber or txUnderrun */
1623 do_tx_reset = 1;
1624 } else if ((tx_status & 0x08) && (vp->drv_flags & MAX_COLLISION_RESET)) { /* maxCollisions */
1625 do_tx_reset = 1;
1626 reset_mask = 0x0108; /* Reset interface logic, but not download logic */
1627 } else { /* Merely re-enable the transmitter. */
1628 outw(TxEnable, ioaddr + EL3_CMD);
1632 if (status & RxEarly) { /* Rx early is unused. */
1633 vortex_rx(dev);
1634 outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
1636 if (status & StatsFull) { /* Empty statistics. */
1637 static int DoneDidThat;
1638 if (vortex_debug > 4)
1639 printk(KERN_DEBUG "%s: Updating stats.\n", dev->name);
1640 update_stats(ioaddr, dev);
1641 /* HACK: Disable statistics as an interrupt source. */
1642 /* This occurs when we have the wrong media type! */
1643 if (DoneDidThat == 0 &&
1644 inw(ioaddr + EL3_STATUS) & StatsFull) {
1645 printk(KERN_WARNING "%s: Updating statistics failed, disabling "
1646 "stats as an interrupt source.\n", dev->name);
1647 EL3WINDOW(5);
1648 outw(SetIntrEnb | (inw(ioaddr + 10) & ~StatsFull), ioaddr + EL3_CMD);
1649 vp->intr_enable &= ~StatsFull;
1650 EL3WINDOW(7);
1651 DoneDidThat++;
1654 if (status & IntReq) { /* Restore all interrupt sources. */
1655 outw(vp->status_enable, ioaddr + EL3_CMD);
1656 outw(vp->intr_enable, ioaddr + EL3_CMD);
1658 if (status & HostError) {
1659 u16 fifo_diag;
1660 EL3WINDOW(4);
1661 fifo_diag = inw(ioaddr + Wn4_FIFODiag);
1662 printk(KERN_ERR "%s: Host error, FIFO diagnostic register %4.4x.\n",
1663 dev->name, fifo_diag);
1664 /* Adapter failure requires Tx/Rx reset and reinit. */
1665 if (vp->full_bus_master_tx) {
1666 /* In this case, blow the card away */
1667 vortex_down(dev);
1668 wait_for_completion(dev, TotalReset | 0xff);
1669 vortex_up(dev); /* AKPM: bug. vortex_up() assumes that the rx ring is full. It may not be. */
1670 } else if (fifo_diag & 0x0400)
1671 do_tx_reset = 1;
1672 if (fifo_diag & 0x3000) {
1673 wait_for_completion(dev, RxReset);
1674 /* Set the Rx filter to the current state. */
1675 set_rx_mode(dev);
1676 outw(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
1677 outw(AckIntr | HostError, ioaddr + EL3_CMD);
1681 if (do_tx_reset) {
1682 wait_for_completion(dev, TxReset|reset_mask);
1683 outw(TxEnable, ioaddr + EL3_CMD);
1684 if (!vp->full_bus_master_tx)
1685 netif_wake_queue(dev);
static int
vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;

	/* Put out the doubleword header... */
	outl(skb->len, ioaddr + TX_FIFO);
	if (vp->bus_master) {
		/* Set the bus-master controller to transfer the packet. */
		int len = (skb->len + 3) & ~3;
		outl(vp->tx_skb_dma = pci_map_single(vp->pdev, skb->data, len, PCI_DMA_TODEVICE),
				ioaddr + Wn7_MasterAddr);
		outw(len, ioaddr + Wn7_MasterLen);
		vp->tx_skb = skb;
		outw(StartDMADown, ioaddr + EL3_CMD);
		/* netif_wake_queue() will be called at the DMADone interrupt. */
	} else {
		/* ... and the packet rounded to a doubleword. */
		outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
		dev_kfree_skb(skb);
		if (inw(ioaddr + TxFree) > 1536) {
			netif_start_queue(dev);	/* AKPM: redundant? */
		} else {
			/* Interrupt us when the FIFO has room for max-sized packet. */
			netif_stop_queue(dev);
			outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
		}
	}

	dev->trans_start = jiffies;

	/* Clear the Tx status stack. */
	{
		int tx_status;
		int i = 32;

		while (--i > 0 && (tx_status = inb(ioaddr + TxStatus)) > 0) {
			if (tx_status & 0x3C) {	/* A Tx-disabling error occurred. */
				if (vortex_debug > 2)
					printk(KERN_DEBUG "%s: Tx error, status %2.2x.\n",
						   dev->name, tx_status);
				if (tx_status & 0x04) vp->stats.tx_fifo_errors++;
				if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
				if (tx_status & 0x30) {
					wait_for_completion(dev, TxReset);
				}
				outw(TxEnable, ioaddr + EL3_CMD);
			}
			outb(0x00, ioaddr + TxStatus); /* Pop the status stack. */
		}
	}
	return 0;
}
static int
boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;
	/* Calculate the next Tx descriptor entry. */
	int entry = vp->cur_tx % TX_RING_SIZE;
	struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
	unsigned long flags;

	if (vortex_debug > 6) {
		printk(KERN_DEBUG "boomerang_start_xmit()\n");
		if (vortex_debug > 3)
			printk(KERN_DEBUG "%s: Trying to send a packet, Tx index %d.\n",
				   dev->name, vp->cur_tx);
	}

	if (vp->tx_full) {
		if (vortex_debug > 0)
			printk(KERN_WARNING "%s: Tx Ring full, refusing to send buffer.\n",
				   dev->name);
		return 1;
	}
	vp->tx_skbuff[entry] = skb;
	vp->tx_ring[entry].next = 0;
	vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->data, skb->len, PCI_DMA_TODEVICE));
	vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
	vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);

	spin_lock_irqsave(&vp->lock, flags);
	/* Wait for the stall to complete. */
	wait_for_completion(dev, DownStall);
	prev_entry->next = cpu_to_le32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc));
	if (inl(ioaddr + DownListPtr) == 0) {
		outl(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc), ioaddr + DownListPtr);
		queued_packet++;
	}

	vp->cur_tx++;
	if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1) {
		vp->tx_full = 1;
		netif_stop_queue(dev);
	} else {		/* Clear previous interrupt enable. */
#if defined(tx_interrupt_mitigation)
		prev_entry->status &= cpu_to_le32(~TxIntrUploaded);
#endif
		/* netif_start_queue(dev); */	/* AKPM: redundant? */
	}
	outw(DownUnstall, ioaddr + EL3_CMD);
	spin_unlock_irqrestore(&vp->lock, flags);
	dev->trans_start = jiffies;
	return 0;
}
/* The interrupt handler does all of the Rx thread work and cleans up
   after the Tx thread. */

/*
 * This is the ISR for the vortex series chips.
 * full_bus_master_tx == 0 && full_bus_master_rx == 0
 */
static void vortex_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct net_device *dev = dev_id;
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr;
	int status;
	int work_done = max_interrupt_work;

	ioaddr = dev->base_addr;
	spin_lock(&vp->lock);

	status = inw(ioaddr + EL3_STATUS);

	if (vortex_debug > 6)
		printk(KERN_DEBUG "vortex_interrupt(). status=0x%4x\n", status);

	if ((status & IntLatch) == 0)
		goto handler_exit;		/* No interrupt: shared IRQs cause this */

	if (status & IntReq) {
		status |= vp->deferred;
		vp->deferred = 0;
	}

	if (status == 0xffff)		/* AKPM: h/w no longer present (hotplug)? */
		goto handler_exit;

	if (vortex_debug > 4)
		printk(KERN_DEBUG "%s: interrupt, status %4.4x, latency %d ticks.\n",
			   dev->name, status, inb(ioaddr + Timer));

	do {
		if (vortex_debug > 5)
			printk(KERN_DEBUG "%s: In interrupt loop, status %4.4x.\n",
				   dev->name, status);
		if (status & RxComplete)
			vortex_rx(dev);

		if (status & TxAvailable) {
			if (vortex_debug > 5)
				printk(KERN_DEBUG "	TX room bit was handled.\n");
			/* There's room in the FIFO for a full-sized packet. */
			outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
			netif_wake_queue(dev);
		}

		if (status & DMADone) {
			if (inw(ioaddr + Wn7_MasterStatus) & 0x1000) {
				outw(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
				pci_unmap_single(vp->pdev, vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, PCI_DMA_TODEVICE);
				dev_kfree_skb_irq(vp->tx_skb); /* Release the transferred buffer */
				if (inw(ioaddr + TxFree) > 1536) {
					/*
					 * AKPM: FIXME: I don't think we need this.  If the queue was stopped due to
					 * insufficient FIFO room, the TxAvailable test will succeed and call
					 * netif_wake_queue()
					 */
					netif_wake_queue(dev);
				} else { /* Interrupt when FIFO has room for max-sized packet. */
					outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
					netif_stop_queue(dev);	/* AKPM: This is new */
				}
			}
		}
		/* Check for all uncommon interrupts at once. */
		if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq)) {
			if (status == 0xffff)
				break;
			vortex_error(dev, status);
		}

		if (--work_done < 0) {
			printk(KERN_WARNING "%s: Too much work in interrupt, status "
				   "%4.4x.\n", dev->name, status);
			/* Disable all pending interrupts. */
			do {
				vp->deferred |= status;
				outw(SetStatusEnb | (~vp->deferred & vp->status_enable),
					 ioaddr + EL3_CMD);
				outw(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
			} while ((status = inw(ioaddr + EL3_CMD)) & IntLatch);
			/* The timer will reenable interrupts. */
			mod_timer(&vp->timer, jiffies + 1*HZ);
			break;
		}
		/* Acknowledge the IRQ. */
		outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
	} while ((status = inw(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));

	if (vortex_debug > 4)
		printk(KERN_DEBUG "%s: exiting interrupt, status %4.4x.\n",
			   dev->name, status);
handler_exit:
	spin_unlock(&vp->lock);
}
/*
 * This is the ISR for the boomerang series chips.
 * full_bus_master_tx == 1 && full_bus_master_rx == 1
 */
static void boomerang_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct net_device *dev = dev_id;
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr;
	int status;
	int work_done = max_interrupt_work;

	ioaddr = dev->base_addr;

	/*
	 * It seems dopey to put the spinlock this early, but we could race against vortex_tx_timeout
	 * and boomerang_start_xmit
	 */
	spin_lock(&vp->lock);

	status = inw(ioaddr + EL3_STATUS);

	if (vortex_debug > 6)
		printk(KERN_DEBUG "boomerang_interrupt. status=0x%4x\n", status);

	if ((status & IntLatch) == 0)
		goto handler_exit;		/* No interrupt: shared IRQs can cause this */

	if (status == 0xffff) {		/* AKPM: h/w no longer present (hotplug)? */
		if (vortex_debug > 1)
			printk(KERN_DEBUG "boomerang_interrupt(1): status = 0xffff\n");
		goto handler_exit;
	}

	if (status & IntReq) {
		status |= vp->deferred;
		vp->deferred = 0;
	}

	if (vortex_debug > 4)
		printk(KERN_DEBUG "%s: interrupt, status %4.4x, latency %d ticks.\n",
			   dev->name, status, inb(ioaddr + Timer));
	do {
		if (vortex_debug > 5)
			printk(KERN_DEBUG "%s: In interrupt loop, status %4.4x.\n",
				   dev->name, status);
		if (status & UpComplete) {
			outw(AckIntr | UpComplete, ioaddr + EL3_CMD);
			if (vortex_debug > 5)
				printk(KERN_DEBUG "boomerang_interrupt->boomerang_rx\n");
			boomerang_rx(dev);
		}

		if (status & DownComplete) {
			unsigned int dirty_tx = vp->dirty_tx;

			outw(AckIntr | DownComplete, ioaddr + EL3_CMD);
			while (vp->cur_tx - dirty_tx > 0) {
				int entry = dirty_tx % TX_RING_SIZE;
#if 1	/* AKPM: the latter is faster, but cyclone-only */
				if (inl(ioaddr + DownListPtr) ==
					vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc))
					break;			/* It still hasn't been processed. */
#else
				if ((vp->tx_ring[entry].status & DN_COMPLETE) == 0)
					break;			/* It still hasn't been processed. */
#endif

				if (vp->tx_skbuff[entry]) {
					struct sk_buff *skb = vp->tx_skbuff[entry];

					pci_unmap_single(vp->pdev,
						le32_to_cpu(vp->tx_ring[entry].addr), skb->len, PCI_DMA_TODEVICE);
					dev_kfree_skb_irq(skb);
					vp->tx_skbuff[entry] = 0;
				} else {
					printk(KERN_DEBUG "boomerang_interrupt: no skb!\n");
				}
				/* vp->stats.tx_packets++;  Counted below. */
				dirty_tx++;
			}
			vp->dirty_tx = dirty_tx;
			if (vp->tx_full && (vp->cur_tx - dirty_tx <= TX_RING_SIZE - 1)) {
				if (vortex_debug > 6)
					printk(KERN_DEBUG "boomerang_interrupt: clearing tx_full\n");
				vp->tx_full = 0;
				netif_wake_queue(dev);
			}
		}

		/* Check for all uncommon interrupts at once. */
		if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq))
			vortex_error(dev, status);

		if (--work_done < 0) {
			printk(KERN_WARNING "%s: Too much work in interrupt, status "
				   "%4.4x.\n", dev->name, status);
			/* Disable all pending interrupts. */
			do {
				vp->deferred |= status;
				outw(SetStatusEnb | (~vp->deferred & vp->status_enable),
					 ioaddr + EL3_CMD);
				outw(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
			} while ((status = inw(ioaddr + EL3_CMD)) & IntLatch);
			/* The timer will reenable interrupts. */
			mod_timer(&vp->timer, jiffies + 1*HZ);
			break;
		}
		/* Acknowledge the IRQ. */
		outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
		if (vp->cb_fn_base)			/* The PCMCIA people are idiots. */
			writel(0x8000, vp->cb_fn_base + 4);

	} while ((status = inw(ioaddr + EL3_STATUS)) & IntLatch);

	if (vortex_debug > 4)
		printk(KERN_DEBUG "%s: exiting interrupt, status %4.4x.\n",
			   dev->name, status);
handler_exit:
	spin_unlock(&vp->lock);
}
static int vortex_rx(struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;
	int i;
	short rx_status;

	if (vortex_debug > 5)
		printk(KERN_DEBUG "vortex_rx(): status %4.4x, rx_status %4.4x.\n",
			   inw(ioaddr+EL3_STATUS), inw(ioaddr+RxStatus));
	while ((rx_status = inw(ioaddr + RxStatus)) > 0) {
		if (rx_status & 0x4000) { /* Error, update stats. */
			unsigned char rx_error = inb(ioaddr + RxErrors);
			if (vortex_debug > 2)
				printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
			vp->stats.rx_errors++;
			if (rx_error & 0x01)  vp->stats.rx_over_errors++;
			if (rx_error & 0x02)  vp->stats.rx_length_errors++;
			if (rx_error & 0x04)  vp->stats.rx_frame_errors++;
			if (rx_error & 0x08)  vp->stats.rx_crc_errors++;
			if (rx_error & 0x10)  vp->stats.rx_length_errors++;
		} else {
			/* The packet length: up to 4.5K! */
			int pkt_len = rx_status & 0x1fff;
			struct sk_buff *skb;

			skb = dev_alloc_skb(pkt_len + 5);
			if (vortex_debug > 4)
				printk(KERN_DEBUG "Receiving packet size %d status %4.4x.\n",
					   pkt_len, rx_status);
			if (skb != NULL) {
				skb->dev = dev;
				skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
				/* 'skb_put()' points to the start of sk_buff data area. */
				if (vp->bus_master &&
					! (inw(ioaddr + Wn7_MasterStatus) & 0x8000)) {
					dma_addr_t dma = pci_map_single(vp->pdev, skb_put(skb, pkt_len),
									   pkt_len, PCI_DMA_FROMDEVICE);
					outl(dma, ioaddr + Wn7_MasterAddr);
					outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
					outw(StartDMAUp, ioaddr + EL3_CMD);
					while (inw(ioaddr + Wn7_MasterStatus) & 0x8000)
						;
					pci_unmap_single(vp->pdev, dma, pkt_len, PCI_DMA_FROMDEVICE);
				} else {
					insl(ioaddr + RX_FIFO, skb_put(skb, pkt_len),
						 (pkt_len + 3) >> 2);
				}
				outw(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
				skb->protocol = eth_type_trans(skb, dev);
				netif_rx(skb);
				dev->last_rx = jiffies;
				vp->stats.rx_packets++;
				/* Wait a limited time to go to next packet. */
				for (i = 200; i >= 0; i--)
					if ( ! (inw(ioaddr + EL3_STATUS) & CmdInProgress))
						break;
				continue;
			} else if (vortex_debug > 0)
				printk(KERN_NOTICE "%s: No memory to allocate a sk_buff of "
					   "size %d.\n", dev->name, pkt_len);
		}
		vp->stats.rx_dropped++;
		wait_for_completion(dev, RxDiscard);
	}

	return 0;
}
static int
boomerang_rx(struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	int entry = vp->cur_rx % RX_RING_SIZE;
	long ioaddr = dev->base_addr;
	int rx_status;
	int rx_work_limit = vp->dirty_rx + RX_RING_SIZE - vp->cur_rx;

	if (vortex_debug > 5)
		printk(KERN_DEBUG "boomerang_rx(): status %4.4x\n", inw(ioaddr+EL3_STATUS));

	while ((rx_status = le32_to_cpu(vp->rx_ring[entry].status)) & RxDComplete) {
		if (--rx_work_limit < 0)
			break;
		if (rx_status & RxDError) { /* Error, update stats. */
			unsigned char rx_error = rx_status >> 16;
			if (vortex_debug > 2)
				printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
			vp->stats.rx_errors++;
			if (rx_error & 0x01)  vp->stats.rx_over_errors++;
			if (rx_error & 0x02)  vp->stats.rx_length_errors++;
			if (rx_error & 0x04)  vp->stats.rx_frame_errors++;
			if (rx_error & 0x08)  vp->stats.rx_crc_errors++;
			if (rx_error & 0x10)  vp->stats.rx_length_errors++;
		} else {
			/* The packet length: up to 4.5K! */
			int pkt_len = rx_status & 0x1fff;
			struct sk_buff *skb;
			dma_addr_t dma = le32_to_cpu(vp->rx_ring[entry].addr);

			vp->stats.rx_bytes += pkt_len;
			if (vortex_debug > 4)
				printk(KERN_DEBUG "Receiving packet size %d status %4.4x.\n",
					   pkt_len, rx_status);

			/* Check if the packet is long enough to just accept without
			   copying to a properly sized skbuff. */
			if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != 0) {
				skb->dev = dev;
				skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
				pci_dma_sync_single(vp->pdev, dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
				/* 'skb_put()' points to the start of sk_buff data area. */
				memcpy(skb_put(skb, pkt_len),
					   vp->rx_skbuff[entry]->tail,
					   pkt_len);
				rx_copy++;
			} else {
				/* Pass up the skbuff already on the Rx ring. */
				skb = vp->rx_skbuff[entry];
				vp->rx_skbuff[entry] = NULL;
				skb_put(skb, pkt_len);
				pci_unmap_single(vp->pdev, dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
				rx_nocopy++;
			}
			skb->protocol = eth_type_trans(skb, dev);
			{					/* Use hardware checksum info. */
				int csum_bits = rx_status & 0xee000000;
				if (csum_bits &&
					(csum_bits == (IPChksumValid | TCPChksumValid) ||
					 csum_bits == (IPChksumValid | UDPChksumValid))) {
					skb->ip_summed = CHECKSUM_UNNECESSARY;
					rx_csumhits++;
				}
			}
			netif_rx(skb);
			dev->last_rx = jiffies;
			vp->stats.rx_packets++;
		}
		entry = (++vp->cur_rx) % RX_RING_SIZE;
	}
	/* Refill the Rx ring buffers. */
	for (; vp->cur_rx - vp->dirty_rx > 0; vp->dirty_rx++) {
		struct sk_buff *skb;
		entry = vp->dirty_rx % RX_RING_SIZE;
		if (vp->rx_skbuff[entry] == NULL) {
			skb = dev_alloc_skb(PKT_BUF_SZ);
			if (skb == NULL) {
				static unsigned long last_jif;
				if ((jiffies - last_jif) > 10 * HZ) {
					printk(KERN_WARNING "%s: memory shortage\n", dev->name);
					last_jif = jiffies;
				}
				if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE)
					mod_timer(&vp->rx_oom_timer, RUN_AT(HZ * 1));
				break;			/* Bad news! */
			}
			skb->dev = dev;			/* Mark as being used by this device. */
			skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
			vp->rx_ring[entry].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->tail, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
			vp->rx_skbuff[entry] = skb;
		}
		vp->rx_ring[entry].status = 0;	/* Clear complete bit. */
		outw(UpUnstall, ioaddr + EL3_CMD);
	}
	return 0;
}
/*
 * If we've hit a total OOM refilling the Rx ring we poll once a second
 * for some memory.  Otherwise there is no way to restart the rx process.
 */
static void
rx_oom_timer(unsigned long arg)
{
	struct net_device *dev = (struct net_device *)arg;
	struct vortex_private *vp = (struct vortex_private *)dev->priv;

	spin_lock_irq(&vp->lock);
	if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE)	/* This test is redundant, but makes me feel good */
		boomerang_rx(dev);
	if (vortex_debug > 1) {
		printk(KERN_DEBUG "%s: rx_oom_timer %s\n", dev->name,
			((vp->cur_rx - vp->dirty_rx) != RX_RING_SIZE) ? "succeeded" : "retrying");
	}
	spin_unlock_irq(&vp->lock);
}
static void
vortex_down(struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;

	netif_stop_queue(dev);

	del_timer_sync(&vp->rx_oom_timer);
	del_timer_sync(&vp->timer);

	/* Turn off statistics ASAP.  We update vp->stats below. */
	outw(StatsDisable, ioaddr + EL3_CMD);

	/* Disable the receiver and transmitter. */
	outw(RxDisable, ioaddr + EL3_CMD);
	outw(TxDisable, ioaddr + EL3_CMD);

	if (dev->if_port == XCVR_10base2)
		/* Turn off thinnet power.  Green! */
		outw(StopCoax, ioaddr + EL3_CMD);

	outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);

	update_stats(ioaddr, dev);
	if (vp->full_bus_master_rx)
		outl(0, ioaddr + UpListPtr);
	if (vp->full_bus_master_tx)
		outl(0, ioaddr + DownListPtr);

	if (vp->capabilities & CapPwrMgmt)
		acpi_set_WOL(dev);
}
static int
vortex_close(struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;
	int i;

	if (netif_device_present(dev))
		vortex_down(dev);

	if (vortex_debug > 1) {
		printk(KERN_DEBUG "%s: vortex_close() status %4.4x, Tx status %2.2x.\n",
			   dev->name, inw(ioaddr + EL3_STATUS), inb(ioaddr + TxStatus));
		printk(KERN_DEBUG "%s: vortex close stats: rx_nocopy %d rx_copy %d"
			   " tx_queued %d Rx pre-checksummed %d.\n",
			   dev->name, rx_nocopy, rx_copy, queued_packet, rx_csumhits);
	}

	free_irq(dev->irq, dev);

	if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */
		for (i = 0; i < RX_RING_SIZE; i++)
			if (vp->rx_skbuff[i]) {
				pci_unmap_single(vp->pdev, le32_to_cpu(vp->rx_ring[i].addr),
								 PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
				dev_kfree_skb(vp->rx_skbuff[i]);
				vp->rx_skbuff[i] = 0;
			}
	}
	if (vp->full_bus_master_tx) { /* Free Boomerang bus master Tx buffers. */
		for (i = 0; i < TX_RING_SIZE; i++)
			if (vp->tx_skbuff[i]) {
				struct sk_buff *skb = vp->tx_skbuff[i];

				pci_unmap_single(vp->pdev, le32_to_cpu(vp->tx_ring[i].addr), skb->len, PCI_DMA_TODEVICE);
				dev_kfree_skb(skb);
				vp->tx_skbuff[i] = 0;
			}
	}

	vp->open = 0;
	return 0;
}
static void
dump_tx_ring(struct net_device *dev)
{
	if (vortex_debug > 0) {
		struct vortex_private *vp = (struct vortex_private *)dev->priv;
		long ioaddr = dev->base_addr;

		if (vp->full_bus_master_tx) {
			int i;
			int stalled = inl(ioaddr + PktStatus) & 0x04;	/* Possibly racy. But it's only debug stuff */

			wait_for_completion(dev, DownStall);
			printk(KERN_ERR "  Flags; bus-master %d, full %d; dirty %d(%d) "
				   "current %d(%d).\n",
				   vp->full_bus_master_tx, vp->tx_full,
				   vp->dirty_tx, vp->dirty_tx % TX_RING_SIZE,
				   vp->cur_tx, vp->cur_tx % TX_RING_SIZE);
			printk(KERN_ERR "  Transmit list %8.8x vs. %p.\n",
				   inl(ioaddr + DownListPtr),
				   &vp->tx_ring[vp->dirty_tx % TX_RING_SIZE]);
			for (i = 0; i < TX_RING_SIZE; i++) {
				printk(KERN_ERR "  %d: @%p  length %8.8x status %8.8x\n", i,
					   &vp->tx_ring[i],
					   le32_to_cpu(vp->tx_ring[i].length),
					   le32_to_cpu(vp->tx_ring[i].status));
			}
			if (!stalled)
				outw(DownUnstall, ioaddr + EL3_CMD);
		}
	}
}
static struct net_device_stats *vortex_get_stats(struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	unsigned long flags;

	if (netif_device_present(dev)) {	/* AKPM: Used to be netif_running */
		spin_lock_irqsave(&vp->lock, flags);
		update_stats(dev->base_addr, dev);
		spin_unlock_irqrestore(&vp->lock, flags);
	}
	return &vp->stats;
}
/*  Update statistics.
	Unlike with the EL3 we need not worry about interrupts changing
	the window setting from underneath us, but we must still guard
	against a race condition with a StatsUpdate interrupt updating the
	table.  This is done by checking that the ASM (!) code generated uses
	atomic updates with '+='.
	*/
static void update_stats(long ioaddr, struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	int old_window = inw(ioaddr + EL3_CMD);

	if (old_window == 0xffff)	/* Chip suspended or ejected. */
		return;
	/* Unlike the 3c5x9 we need not turn off stats updates while reading. */
	/* Switch to the stats window, and read everything. */
	EL3WINDOW(6);
	vp->stats.tx_carrier_errors   += inb(ioaddr + 0);
	vp->stats.tx_heartbeat_errors += inb(ioaddr + 1);
	/* Multiple collisions. */       inb(ioaddr + 2);
	vp->stats.collisions          += inb(ioaddr + 3);
	vp->stats.tx_window_errors    += inb(ioaddr + 4);
	vp->stats.rx_fifo_errors      += inb(ioaddr + 5);
	vp->stats.tx_packets          += inb(ioaddr + 6);
	vp->stats.tx_packets          += (inb(ioaddr + 9)&0x30) << 4;
	/* Rx packets   */               inb(ioaddr + 7);   /* Must read to clear */
	/* Tx deferrals */               inb(ioaddr + 8);
	/* Don't bother with register 9, an extension of registers 6&7.
	   If we do use the 6&7 values the atomic update assumption above
	   is invalid. */
	vp->stats.rx_bytes += inw(ioaddr + 10);
	vp->stats.tx_bytes += inw(ioaddr + 12);
	/* New: On the Vortex we must also clear the BadSSD counter. */
	EL3WINDOW(4);
	inb(ioaddr + 12);

	{
		u8 up = inb(ioaddr + 13);
		vp->stats.rx_bytes += (up & 0x0f) << 16;
		vp->stats.tx_bytes += (up & 0xf0) << 12;
	}

	/* We change back to window 7 (not 1) with the Vortex. */
	/* AKPM: the previous comment is obsolete - we switch back to the old window */
	EL3WINDOW(old_window >> 13);
	return;
}
static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;
	u16 *data = (u16 *)&rq->ifr_data;
	int phy = vp->phys[0] & 0x1f;
	int retval;

	switch(cmd) {
	case SIOCDEVPRIVATE:		/* Get the address of the PHY in use. */
		data[0] = phy;
		/* Fall through. */
	case SIOCDEVPRIVATE+1:		/* Read the specified MII register. */
		EL3WINDOW(4);
		data[3] = mdio_read(dev, data[0] & 0x1f, data[1] & 0x1f);
		retval = 0;
		break;
	case SIOCDEVPRIVATE+2:		/* Write the specified MII register */
		if (!capable(CAP_NET_ADMIN)) {
			retval = -EPERM;
		} else {
			EL3WINDOW(4);
			mdio_write(dev, data[0] & 0x1f, data[1] & 0x1f, data[2]);
			retval = 0;
		}
		break;
	default:
		retval = -EOPNOTSUPP;
		break;
	}

	return retval;
}
/* Pre-Cyclone chips have no documented multicast filter, so the only
   multicast setting is to receive all multicast frames.  At least
   the chip has a very clean way to set the mode, unlike many others. */
static void set_rx_mode(struct net_device *dev)
{
	long ioaddr = dev->base_addr;
	int new_mode;

	if (dev->flags & IFF_PROMISC) {
		if (vortex_debug > 0)
			printk(KERN_NOTICE "%s: Setting promiscuous mode.\n", dev->name);
		new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast|RxProm;
	} else if ((dev->mc_list) || (dev->flags & IFF_ALLMULTI)) {
		new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast;
	} else
		new_mode = SetRxFilter | RxStation | RxBroadcast;

	outw(new_mode, ioaddr + EL3_CMD);
}
/* MII transceiver control section.
   Read and write the MII registers using software-generated serial
   MDIO protocol.  See the MII specifications or DP83840A data sheet
   for details. */

/* The maximum data clock rate is 2.5 MHz.  The minimum timing is usually
   met by back-to-back PCI I/O cycles, but we insert a delay to avoid
   "overclocking" issues. */
#define mdio_delay() inl(mdio_addr)

#define MDIO_SHIFT_CLK	0x01
#define MDIO_DIR_WRITE	0x04
#define MDIO_DATA_WRITE0 (0x00 | MDIO_DIR_WRITE)
#define MDIO_DATA_WRITE1 (0x02 | MDIO_DIR_WRITE)
#define MDIO_DATA_READ	0x02
#define MDIO_ENB_IN		0x00

/* Generate the preamble required for initial synchronization and
   a few older transceivers. */
static void mdio_sync(long ioaddr, int bits)
{
	long mdio_addr = ioaddr + Wn4_PhysicalMgmt;

	/* Establish sync by sending at least 32 logic ones. */
	while (-- bits >= 0) {
		outw(MDIO_DATA_WRITE1, mdio_addr);
		mdio_delay();
		outw(MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK, mdio_addr);
		mdio_delay();
	}
}
static int mdio_read(struct net_device *dev, int phy_id, int location)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	int i;
	long ioaddr = dev->base_addr;
	int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
	unsigned int retval = 0;
	long mdio_addr = ioaddr + Wn4_PhysicalMgmt;

	spin_lock_bh(&vp->mdio_lock);

	if (mii_preamble_required)
		mdio_sync(ioaddr, 32);

	/* Shift the read command bits out. */
	for (i = 14; i >= 0; i--) {
		int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
		outw(dataval, mdio_addr);
		mdio_delay();
		outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
		mdio_delay();
	}
	/* Read the two transition, 16 data, and wire-idle bits. */
	for (i = 19; i > 0; i--) {
		outw(MDIO_ENB_IN, mdio_addr);
		mdio_delay();
		retval = (retval << 1) | ((inw(mdio_addr) & MDIO_DATA_READ) ? 1 : 0);
		outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
		mdio_delay();
	}
	spin_unlock_bh(&vp->mdio_lock);
	return retval & 0x20000 ? 0xffff : retval>>1 & 0xffff;
}
static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;
	int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
	long mdio_addr = ioaddr + Wn4_PhysicalMgmt;
	int i;

	spin_lock_bh(&vp->mdio_lock);

	if (mii_preamble_required)
		mdio_sync(ioaddr, 32);

	/* Shift the command bits out. */
	for (i = 31; i >= 0; i--) {
		int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
		outw(dataval, mdio_addr);
		mdio_delay();
		outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
		mdio_delay();
	}
	/* Leave the interface idle. */
	for (i = 1; i >= 0; i--) {
		outw(MDIO_ENB_IN, mdio_addr);
		mdio_delay();
		outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
		mdio_delay();
	}
	spin_unlock_bh(&vp->mdio_lock);
	return;
}
/* ACPI: Advanced Configuration and Power Interface. */
/* Set Wake-On-LAN mode and put the board into D3 (power-down) state. */
static void acpi_set_WOL(struct net_device *dev)
{
	struct vortex_private *vp = (struct vortex_private *)dev->priv;
	long ioaddr = dev->base_addr;

	/* AKPM: This kills the 905 */
	if (vortex_debug > 1) {
		printk(KERN_INFO PFX "Wake-on-LAN functions disabled\n");
	}
	return;

	/* Power up on: 1==Downloaded Filter, 2==Magic Packets, 4==Link Status. */
	EL3WINDOW(7);
	outw(2, ioaddr + 0x0c);
	/* The RxFilter must accept the WOL frames. */
	outw(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
	outw(RxEnable, ioaddr + EL3_CMD);
	/* Change the power state to D3; RxEnable doesn't take effect. */
	pci_write_config_word(vp->pdev, 0xe0, 0x8103);
}
static void __devexit vortex_remove_one (struct pci_dev *pdev)
{
	struct net_device *dev = pdev->driver_data;
	struct vortex_private *vp;

	if (!dev) {
		printk("vortex_remove_one called for EISA device!\n");
		BUG();
	}

	vp = (void *)(dev->priv);

	/* AKPM: FIXME: we should have
	 *	if (vp->cb_fn_base) iounmap(vp->cb_fn_base);
	 * here
	 */
	unregister_netdev(dev);
	outw(TotalReset, dev->base_addr + EL3_CMD);
	if (vp->must_free_region)
		release_region(dev->base_addr, vp->io_size);
	kfree(dev);
}
static struct pci_driver vortex_driver = {
	name:		"3c575_cb",
	probe:		vortex_init_one,
	remove:		vortex_remove_one,
	suspend:	vortex_suspend,
	resume:		vortex_resume,
	id_table:	vortex_pci_tbl,
};


static int vortex_have_pci;
static int vortex_have_eisa;
static int __init vortex_init (void)
{
	int rc;

	rc = pci_module_init(&vortex_driver);
	if (rc < 0) {
		rc = vortex_eisa_init();
		if (rc > 0)
			vortex_have_eisa = 1;
	} else {
		vortex_have_pci = 1;
	}

	return rc;
}
static void __exit vortex_eisa_cleanup (void)
{
	struct net_device *dev, *tmp;
	struct vortex_private *vp;
	long ioaddr;

	dev = root_vortex_eisa_dev;

	while (dev) {
		vp = dev->priv;
		ioaddr = dev->base_addr;

		unregister_netdev (dev);
		outw (TotalReset, ioaddr + EL3_CMD);
		release_region (ioaddr, VORTEX_TOTAL_SIZE);

		tmp = dev;
		dev = vp->next_module;

		kfree (tmp);
	}
}
static void __exit vortex_cleanup (void)
{
	if (vortex_have_pci)
		pci_unregister_driver (&vortex_driver);
	if (vortex_have_eisa)
		vortex_eisa_cleanup ();
}


module_init(vortex_init);
module_exit(vortex_cleanup);


/*
 * Local variables:
 *  c-indent-level: 4
 *  c-basic-offset: 4
 *  tab-width: 4
 * End:
 */