MOXA linux-2.6.x / linux-2.6.9-uc0 from sdlinux-moxaart.tgz
[linux-2.6.9-moxart.git] / drivers / net / 3c59x.c
blob 89faad1d21044e3967d3d68ad597643d5beb886e
1 /* EtherLinkXL.c: A 3Com EtherLink PCI III/XL ethernet driver for linux. */
2 /*
3 Written 1996-1999 by Donald Becker.
5 This software may be used and distributed according to the terms
6 of the GNU General Public License, incorporated herein by reference.
8 This driver is for the 3Com "Vortex" and "Boomerang" series ethercards.
9 Members of the series include Fast EtherLink 3c590/3c592/3c595/3c597
10 and the EtherLink XL 3c900 and 3c905 cards.
12 Problem reports and questions should be directed to
13 vortex@scyld.com
15 The author may be reached as becker@scyld.com, or C/O
16 Scyld Computing Corporation
17 410 Severn Ave., Suite 210
18 Annapolis MD 21403
20 Linux Kernel Additions:
22 0.99H+lk0.9 - David S. Miller - softnet, PCI DMA updates
23 0.99H+lk1.0 - Jeff Garzik <jgarzik@pobox.com>
24 Remove compatibility defines for kernel versions < 2.2.x.
25 Update for new 2.3.x module interface
26 LK1.1.2 (March 19, 2000)
27 * New PCI interface (jgarzik)
29 LK1.1.3 25 April 2000, Andrew Morton <andrewm@uow.edu.au>
30 - Merged with 3c575_cb.c
31 - Don't set RxComplete in boomerang interrupt enable reg
32 - spinlock in vortex_timer to protect mdio functions
33 - disable local interrupts around call to vortex_interrupt in
34 vortex_tx_timeout() (So vortex_interrupt can use spin_lock())
35 - Select window 3 in vortex_timer()'s write to Wn3_MAC_Ctrl
36 - In vortex_start_xmit(), move the lock to _after_ we've altered
37 vp->cur_tx and vp->tx_full. This defeats the race between
38 vortex_start_xmit() and vortex_interrupt which was identified
39 by Bogdan Costescu.
40 - Merged back support for six new cards from various sources
41 - Set vortex_have_pci if pci_module_init returns zero (fixes cardbus
42 insertion oops)
43 - Tell it that 3c905C has NWAY for 100bT autoneg
44 - Fix handling of SetStatusEnd in 'Too much work..' code, as
45 per 2.3.99's 3c575_cb (Dave Hinds).
46 - Split ISR into two for vortex & boomerang
47 - Fix MOD_INC/DEC races
48 - Handle resource allocation failures.
49 - Fix 3CCFE575CT LED polarity
50 - Make tx_interrupt_mitigation the default
52 LK1.1.4 25 April 2000, Andrew Morton <andrewm@uow.edu.au>
53 - Add extra TxReset to vortex_up() to fix 575_cb hotplug initialisation probs.
54 - Put vortex_info_tbl into __devinitdata
55 - In the vortex_error StatsFull HACK, disable stats in vp->intr_enable as well
56 as in the hardware.
57 - Increased the loop counter in issue_and_wait from 2,000 to 4,000.
59 LK1.1.5 28 April 2000, andrewm
60 - Added powerpc defines (John Daniel <jdaniel@etresoft.com> said these work...)
61 - Some extra diagnostics
62 - In vortex_error(), reset the Tx on maxCollisions. Otherwise most
63 chips usually get a Tx timeout.
64 - Added extra_reset module parm
65 - Replaced some inline timer manip with mod_timer
66 (François Romieu <Francois.Romieu@nic.fr>)
67 - In vortex_up(), don't make Wn3_config initialisation dependent upon has_nway
68 (this came across from 3c575_cb).
70 LK1.1.6 06 Jun 2000, andrewm
71 - Backed out the PPC defines.
72 - Use del_timer_sync(), mod_timer().
73 - Fix wrapped ulong comparison in boomerang_rx()
74 - Add IS_TORNADO, use it to suppress 3c905C checksum error msg
75 (Donald Becker, I Lee Hetherington <ilh@sls.lcs.mit.edu>)
76 - Replace union wn3_config with BFINS/BFEXT manipulation for
77 sparc64 (Pete Zaitcev, Peter Jones)
78 - In vortex_error, do_tx_reset and vortex_tx_timeout(Vortex):
79 do a netif_wake_queue() to better recover from errors. (Anders Pedersen,
80 Donald Becker)
81 - Print a warning on out-of-memory (rate limited to 1 per 10 secs)
82 - Added two more Cardbus 575 NICs: 5b57 and 6564 (Paul Wagland)
84 LK1.1.7 2 Jul 2000 andrewm
85 - Better handling of shared IRQs
86 - Reset the transmitter on a Tx reclaim error
87 - Fixed crash under OOM during vortex_open() (Mark Hemment)
88 - Fix Rx cessation problem during OOM (help from Mark Hemment)
89 - The spinlocks around the mdio access were blocking interrupts for 300uS.
90 Fix all this to use spin_lock_bh() within mdio_read/write
91 - Only write to TxFreeThreshold if it's a boomerang - other NICs don't
92 have one.
93 - Added 802.3x MAC-layer flow control support
95 LK1.1.8 13 Aug 2000 andrewm
96 - Ignore request_region() return value - already reserved if Cardbus.
97 - Merged some additional Cardbus flags from Don's 0.99Qk
98 - Some fixes for 3c556 (Fred Maciel)
99 - Fix for EISA initialisation (Jan Rekorajski)
100 - Renamed MII_XCVR_PWR and EEPROM_230 to align with 3c575_cb and D. Becker's drivers
101 - Fixed MII_XCVR_PWR for 3CCFE575CT
102 - Added INVERT_LED_PWR, used it.
103 - Backed out the extra_reset stuff
105 LK1.1.9 12 Sep 2000 andrewm
106 - Backed out the tx_reset_resume flags. It was a no-op.
107 - In vortex_error, don't reset the Tx on txReclaim errors
108 - In vortex_error, don't reset the Tx on maxCollisions errors.
109 Hence backed out all the DownListPtr logic here.
110 - In vortex_error, give Tornado cards a partial TxReset on
111 maxCollisions (David Hinds). Defined MAX_COLLISION_RESET for this.
112 - Redid some driver flags and device names based on pcmcia_cs-3.1.20.
113 - Fixed a bug where, if vp->tx_full is set when the interface
114 is downed, it remains set when the interface is upped. Bad
115 things happen.
117 LK1.1.10 17 Sep 2000 andrewm
118 - Added EEPROM_8BIT for 3c555 (Fred Maciel)
119 - Added experimental support for the 3c556B Laptop Hurricane (Louis Gerbarg)
120 - Add HAS_NWAY to "3c900 Cyclone 10Mbps TPO"
122 LK1.1.11 13 Nov 2000 andrewm
123 - Dump MOD_INC/DEC_USE_COUNT, use SET_MODULE_OWNER
125 LK1.1.12 1 Jan 2001 andrewm (2.4.0-pre1)
126 - Call pci_enable_device before we request our IRQ (Tobias Ringstrom)
127 - Add 3c590 PCI latency timer hack to vortex_probe1 (from 0.99Ra)
128 - Added extended issue_and_wait for the 3c905CX.
129 - Look for an MII on PHY index 24 first (3c905CX oddity).
130 - Add HAS_NWAY to 3cSOHO100-TX (Brett Frankenberger)
131 - Don't free skbs we don't own on oom path in vortex_open().
133 LK1.1.13 27 Jan 2001
134 - Added explicit `medialock' flag so we can truly
135 lock the media type down with `options'.
136 - "check ioremap return and some tidbits" (Arnaldo Carvalho de Melo <acme@conectiva.com.br>)
137 - Added and used EEPROM_NORESET for 3c556B PM resumes.
138 - Fixed leakage of vp->rx_ring.
139 - Break out separate HAS_HWCKSM device capability flag.
140 - Kill vp->tx_full (ANK)
141 - Merge zerocopy fragment handling (ANK?)
143 LK1.1.14 15 Feb 2001
144 - Enable WOL. Can be turned on with `enable_wol' module option.
145 - EISA and PCI initialisation fixes (jgarzik, Manfred Spraul)
146 - If a device's internalconfig register reports it has NWAY,
147 use it, even if autoselect is enabled.
149 LK1.1.15 6 June 2001 akpm
150 - Prevent double counting of received bytes (Lars Christensen)
151 - Add ethtool support (jgarzik)
152 - Add module parm descriptions (Andrzej M. Krzysztofowicz)
153 - Implemented alloc_etherdev() API
154 - Special-case the 'Tx error 82' message.
156 LK1.1.16 18 July 2001 akpm
157 - Make NETIF_F_SG dependent upon nr_free_highpages(), not on CONFIG_HIGHMEM
158 - Lessen verbosity of bootup messages
159 - Fix WOL - use new PM API functions.
160 - Use netif_running() instead of vp->open in suspend/resume.
161 - Don't reset the interface logic on open/close/rmmod. It upsets
162 autonegotiation, and hence DHCP (from 0.99T).
163 - Back out EEPROM_NORESET flag because of the above (we do it for all
164 NICs).
165 - Correct 3c982 identification string
166 - Rename wait_for_completion() to issue_and_wait() to avoid completion.h
167 clash.
169 LK1.1.17 18Dec01 akpm
170 - PCI ID 9805 is a Python-T, not a dual-port Cyclone. Apparently.
171 And it has NWAY.
172 - Mask our advertised modes (vp->advertising) with our capabilities
173 (MII reg5) when deciding which duplex mode to use.
174 - Add `global_options' as default for options[]. Ditto global_enable_wol,
175 global_full_duplex.
177 LK1.1.18 01Jul02 akpm
178 - Fix for undocumented transceiver power-up bit on some 3c566B's
179 (Donald Becker, Rahul Karnik)
181 - See http://www.zip.com.au/~akpm/linux/#3c59x-2.3 for more details.
182 - Also see Documentation/networking/vortex.txt
184 LK1.1.19 10Nov02 Marc Zyngier <maz@wild-wind.fr.eu.org>
185 - EISA sysfs integration.
189 * FIXME: This driver _could_ support MTU changing, but doesn't. See Don's hamachi.c implementation
190 * as well as other drivers
192 * NOTE: If you make 'vortex_debug' a constant (#define vortex_debug 0) the driver shrinks by 2k
193 * due to dead code elimination. There will be some performance benefits from this due to
194 * elimination of all the tests and reduced cache footprint.
198 #define DRV_NAME "3c59x"
199 #define DRV_VERSION "LK1.1.19"
200 #define DRV_RELDATE "10 Nov 2002"
204 /* A few values that may be tweaked. */
205 /* Keep the ring sizes a power of two for efficiency. */
206 #define TX_RING_SIZE 16
207 #define RX_RING_SIZE 32
208 #define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
210 /* "Knobs" that adjust features and parameters. */
211 /* Set the copy breakpoint for the copy-only-tiny-frames scheme.
212 Setting to > 1512 effectively disables this feature. */
213 #ifndef __arm__
214 static int rx_copybreak = 200;
215 #else
216 /* ARM systems perform better by disregarding the bus-master
217 transfer capability of these cards. -- rmk */
218 static int rx_copybreak = 1513;
219 #endif
220 /* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
221 static const int mtu = 1500;
222 /* Maximum events (Rx packets, etc.) to handle at each interrupt. */
223 static int max_interrupt_work = 32;
224 /* Tx timeout interval (millisecs) */
225 static int watchdog = 5000;
227 /* Allow aggregation of Tx interrupts. Saves CPU load at the cost
228 * of possible Tx stalls if the system is blocking interrupts
229 * somewhere else. Undefine this to disable.
231 #define tx_interrupt_mitigation 1
233 /* Put out somewhat more debugging messages. (0: no msg, 1 minimal .. 6). */
234 #define vortex_debug debug
235 #ifdef VORTEX_DEBUG
236 static int vortex_debug = VORTEX_DEBUG;
237 #else
238 static int vortex_debug = 1;
239 #endif
241 #include <linux/config.h>
242 #include <linux/module.h>
243 #include <linux/kernel.h>
244 #include <linux/string.h>
245 #include <linux/timer.h>
246 #include <linux/errno.h>
247 #include <linux/in.h>
248 #include <linux/ioport.h>
249 #include <linux/slab.h>
250 #include <linux/interrupt.h>
251 #include <linux/pci.h>
252 #include <linux/mii.h>
253 #include <linux/init.h>
254 #include <linux/netdevice.h>
255 #include <linux/etherdevice.h>
256 #include <linux/skbuff.h>
257 #include <linux/ethtool.h>
258 #include <linux/highmem.h>
259 #include <linux/eisa.h>
260 #include <asm/irq.h> /* For NR_IRQS only. */
261 #include <asm/bitops.h>
262 #include <asm/io.h>
263 #include <asm/uaccess.h>
265 /* Kernel compatibility defines, some common to David Hinds' PCMCIA package.
266 This is only in the support-all-kernels source code. */
268 #define RUN_AT(x) (jiffies + (x))
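/*
 * Usage sketch (illustrative only): RUN_AT() turns a relative delay into an
 * absolute jiffies deadline.  The first line is how vortex_up() below arms
 * the media selection timer; the second shows a hypothetical five-second
 * re-arm:
 *
 *	vp->timer.expires = RUN_AT(media_tbl[dev->if_port].wait);
 *	mod_timer(&vp->timer, RUN_AT(5*HZ));
 */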
270 #include <linux/delay.h>
273 static char version[] __devinitdata =
274 DRV_NAME ": Donald Becker and others. www.scyld.com/network/vortex.html\n";
276 MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
277 MODULE_DESCRIPTION("3Com 3c59x/3c9xx ethernet driver "
278 DRV_VERSION " " DRV_RELDATE);
279 MODULE_LICENSE("GPL");
281 MODULE_PARM(debug, "i");
282 MODULE_PARM(global_options, "i");
283 MODULE_PARM(options, "1-" __MODULE_STRING(8) "i");
284 MODULE_PARM(global_full_duplex, "i");
285 MODULE_PARM(full_duplex, "1-" __MODULE_STRING(8) "i");
286 MODULE_PARM(hw_checksums, "1-" __MODULE_STRING(8) "i");
287 MODULE_PARM(flow_ctrl, "1-" __MODULE_STRING(8) "i");
288 MODULE_PARM(global_enable_wol, "i");
289 MODULE_PARM(enable_wol, "1-" __MODULE_STRING(8) "i");
290 MODULE_PARM(rx_copybreak, "i");
291 MODULE_PARM(max_interrupt_work, "i");
292 MODULE_PARM(compaq_ioaddr, "i");
293 MODULE_PARM(compaq_irq, "i");
294 MODULE_PARM(compaq_device_id, "i");
295 MODULE_PARM(watchdog, "i");
296 MODULE_PARM_DESC(debug, "3c59x debug level (0-6)");
297 MODULE_PARM_DESC(options, "3c59x: Bits 0-3: media type, bit 4: bus mastering, bit 9: full duplex");
298 MODULE_PARM_DESC(global_options, "3c59x: same as options, but applies to all NICs if options is unset");
299 MODULE_PARM_DESC(full_duplex, "3c59x full duplex setting(s) (1)");
300 MODULE_PARM_DESC(global_full_duplex, "3c59x: same as full_duplex, but applies to all NICs if options is unset");
301 MODULE_PARM_DESC(hw_checksums, "3c59x Hardware checksum checking by adapter(s) (0-1)");
302 MODULE_PARM_DESC(flow_ctrl, "3c59x 802.3x flow control usage (PAUSE only) (0-1)");
303 MODULE_PARM_DESC(enable_wol, "3c59x: Turn on Wake-on-LAN for adapter(s) (0-1)");
304 MODULE_PARM_DESC(global_enable_wol, "3c59x: same as enable_wol, but applies to all NICs if options is unset");
305 MODULE_PARM_DESC(rx_copybreak, "3c59x copy breakpoint for copy-only-tiny-frames");
306 MODULE_PARM_DESC(max_interrupt_work, "3c59x maximum events handled per interrupt");
307 MODULE_PARM_DESC(compaq_ioaddr, "3c59x PCI I/O base address (Compaq BIOS problem workaround)");
308 MODULE_PARM_DESC(compaq_irq, "3c59x PCI IRQ number (Compaq BIOS problem workaround)");
309 MODULE_PARM_DESC(compaq_device_id, "3c59x PCI device ID (Compaq BIOS problem workaround)");
310 MODULE_PARM_DESC(watchdog, "3c59x transmit timeout in milliseconds");
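/*
 * Illustrative sketch (not part of the original driver) of how a per-card
 * 'options' word is interpreted, mirroring vortex_probe1() further down:
 * bits 0-3 select the media type, bit 4 requests Vortex-style bus mastering,
 * bit 9 forces full duplex and bit 10 enables Wake-on-LAN.  Compiled out
 * because struct vortex_private is only defined later in this file.
 */
#if 0
static void example_decode_options(struct vortex_private *vp, int option)
{
	if (option < 0)
		return;				/* -1 means "no option given" */
	vp->media_override = ((option & 7) == 2) ? 0 : option & 15;
	vp->full_duplex = (option & 0x200) ? 1 : 0;
	vp->bus_master = (option & 16) ? 1 : 0;
	vp->enable_wol = (option & 0x400) ? 1 : 0;
}
#endif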
312 /* Operational parameters that are not usually changed. */
314 /* The Vortex size is twice that of the original EtherLinkIII series: the
315 runtime register window, window 1, is now always mapped in.
316 The Boomerang size is twice as large as the Vortex -- it has additional
317 bus master control registers. */
318 #define VORTEX_TOTAL_SIZE 0x20
319 #define BOOMERANG_TOTAL_SIZE 0x40
321 /* Set iff a MII transceiver on any interface requires mdio preamble.
322 This is only set with the original DP83840 on older 3c905 boards, so the extra
323 code size of a per-interface flag is not worthwhile. */
324 static char mii_preamble_required;
326 #define PFX DRV_NAME ": "
331 Theory of Operation
333 I. Board Compatibility
335 This device driver is designed for the 3Com FastEtherLink and FastEtherLink
336 XL, 3Com's PCI to 10/100baseT adapters. It also works with the 10Mbps
337 versions of the FastEtherLink cards. The supported product IDs are
338 3c590, 3c592, 3c595, 3c597, 3c900, 3c905
340 The related ISA 3c515 is supported with a separate driver, 3c515.c, included
341 with the kernel source or available from
342 cesdis.gsfc.nasa.gov:/pub/linux/drivers/3c515.html
344 II. Board-specific settings
346 PCI bus devices are configured by the system at boot time, so no jumpers
347 need to be set on the board. The system BIOS should be set to assign the
348 PCI INTA signal to an otherwise unused system IRQ line.
350 The EEPROM settings for media type and forced-full-duplex are observed.
351 The EEPROM media type should be left at the default "autoselect" unless using
352 10base2 or AUI connections which cannot be reliably detected.
354 III. Driver operation
356 The 3c59x series use an interface that's very similar to the previous 3c5x9
357 series. The primary interface is two programmed-I/O FIFOs, with an
358 alternate single-contiguous-region bus-master transfer (see next).
360 The 3c900 "Boomerang" series uses a full-bus-master interface with separate
361 lists of transmit and receive descriptors, similar to the AMD LANCE/PCnet,
362 DEC Tulip and Intel Speedo3. The first chip version retains a compatible
363 programmed-I/O interface that has been removed in 'B' and subsequent board
364 revisions.
366 One extension that is advertised in a very large font is that the adapters
367 are capable of being bus masters. On the Vortex chip this capability was
368 only for a single contiguous region making it far less useful than the full
369 bus master capability. There is a significant performance impact of taking
370 an extra interrupt or polling for the completion of each transfer, as well
371 as difficulty sharing the single transfer engine between the transmit and
372 receive threads. Using DMA transfers is a win only with large blocks or
373 with the flawed versions of the Intel Orion motherboard PCI controller.
375 The Boomerang chip's full-bus-master interface is useful, and has the
376 currently-unused advantages over other similar chips that queued transmit
377 packets may be reordered and receive buffer groups are associated with a
378 single frame.
380 With full-bus-master support, this driver uses a "RX_COPYBREAK" scheme.
381 Rather than a fixed intermediate receive buffer, this scheme allocates
382 full-sized skbuffs as receive buffers. The value RX_COPYBREAK is used as
383 the copying breakpoint: it is chosen to trade-off the memory wasted by
384 passing the full-sized skbuff to the queue layer for all frames vs. the
385 cost of copying a frame to a correctly-sized skbuff.
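As a rough sketch (not the literal code -- the real receive paths are
vortex_rx() and boomerang_rx() further down in this file), the copy-break
decision looks like this:

	if (pkt_len < rx_copybreak) {
		/* Small frame: copy it into a fresh, correctly-sized skbuff
		   and leave the full-sized buffer in the Rx ring. */
		struct sk_buff *skb = dev_alloc_skb(pkt_len + 2);
		if (skb != NULL) {
			skb_reserve(skb, 2);	/* align the IP header */
			/* ... copy pkt_len bytes, then netif_rx(skb) ... */
		}
	} else {
		/* Large frame: hand the ring's full-sized skbuff to the
		   stack and allocate a fresh PKT_BUF_SZ replacement. */
	}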
387 IIIC. Synchronization
388 The driver runs as two independent, single-threaded flows of control. One
389 is the send-packet routine, which enforces single-threaded use by the
390 dev->tbusy flag. The other thread is the interrupt handler, which is single
391 threaded by the hardware and other software.
393 IV. Notes
395 Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing development
396 3c590, 3c595, and 3c900 boards.
397 The name "Vortex" is the internal 3Com project name for the PCI ASIC, and
398 the EISA version is called "Demon". According to Terry these names come
399 from rides at the local amusement park.
401 The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes!
402 This driver only supports ethernet packets because of the skbuff allocation
403 limit of 4K.
406 /* This table drives the PCI probe routines. It's mostly boilerplate in all
407 of the drivers, and will likely be provided by some future kernel.
409 enum pci_flags_bit {
410 PCI_USES_IO=1, PCI_USES_MEM=2, PCI_USES_MASTER=4,
411 PCI_ADDR0=0x10<<0, PCI_ADDR1=0x10<<1, PCI_ADDR2=0x10<<2, PCI_ADDR3=0x10<<3,
414 enum { IS_VORTEX=1, IS_BOOMERANG=2, IS_CYCLONE=4, IS_TORNADO=8,
415 EEPROM_8BIT=0x10, /* AKPM: Uses 0x230 as the base bitmaps for EEPROM reads */
416 HAS_PWR_CTRL=0x20, HAS_MII=0x40, HAS_NWAY=0x80, HAS_CB_FNS=0x100,
417 INVERT_MII_PWR=0x200, INVERT_LED_PWR=0x400, MAX_COLLISION_RESET=0x800,
418 EEPROM_OFFSET=0x1000, HAS_HWCKSM=0x2000, WNO_XCVR_PWR=0x4000,
419 EXTRA_PREAMBLE=0x8000, };
421 enum vortex_chips {
422 CH_3C590 = 0,
423 CH_3C592,
424 CH_3C597,
425 CH_3C595_1,
426 CH_3C595_2,
428 CH_3C595_3,
429 CH_3C900_1,
430 CH_3C900_2,
431 CH_3C900_3,
432 CH_3C900_4,
434 CH_3C900_5,
435 CH_3C900B_FL,
436 CH_3C905_1,
437 CH_3C905_2,
438 CH_3C905B_1,
440 CH_3C905B_2,
441 CH_3C905B_FX,
442 CH_3C905C,
443 CH_3C9202,
444 CH_3C980,
445 CH_3C9805,
447 CH_3CSOHO100_TX,
448 CH_3C555,
449 CH_3C556,
450 CH_3C556B,
451 CH_3C575,
453 CH_3C575_1,
454 CH_3CCFE575,
455 CH_3CCFE575CT,
456 CH_3CCFE656,
457 CH_3CCFEM656,
459 CH_3CCFEM656_1,
460 CH_3C450,
461 CH_3C920,
462 CH_3C982A,
463 CH_3C982B,
465 CH_905BT4,
466 CH_920B_EMB_WNM,
470 /* note: this array is directly indexed by the above enums, and MUST
471 * be kept in sync with both the enums above, and the PCI device
472 * table below
474 static struct vortex_chip_info {
475 const char *name;
476 int flags;
477 int drv_flags;
478 int io_size;
479 } vortex_info_tbl[] __devinitdata = {
480 {"3c590 Vortex 10Mbps",
481 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
482 {"3c592 EISA 10Mbps Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
483 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
484 {"3c597 EISA Fast Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
485 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
486 {"3c595 Vortex 100baseTx",
487 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
488 {"3c595 Vortex 100baseT4",
489 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
491 {"3c595 Vortex 100base-MII",
492 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
493 {"3c900 Boomerang 10baseT",
494 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG, 64, },
495 {"3c900 Boomerang 10Mbps Combo",
496 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG, 64, },
497 {"3c900 Cyclone 10Mbps TPO", /* AKPM: from Don's 0.99M */
498 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
499 {"3c900 Cyclone 10Mbps Combo",
500 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
502 {"3c900 Cyclone 10Mbps TPC", /* AKPM: from Don's 0.99M */
503 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
504 {"3c900B-FL Cyclone 10base-FL",
505 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
506 {"3c905 Boomerang 100baseTx",
507 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII, 64, },
508 {"3c905 Boomerang 100baseT4",
509 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII, 64, },
510 {"3c905B Cyclone 100baseTx",
511 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
513 {"3c905B Cyclone 10/100/BNC",
514 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
515 {"3c905B-FX Cyclone 100baseFx",
516 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
517 {"3c905C Tornado",
518 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
519 {"3c920B-EMB-WNM (ATI Radeon 9100 IGP)",
520 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_MII|HAS_HWCKSM, 128, },
521 {"3c980 Cyclone",
522 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
524 {"3c980C Python-T",
525 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
526 {"3cSOHO100-TX Hurricane",
527 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
528 {"3c555 Laptop Hurricane",
529 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|EEPROM_8BIT|HAS_HWCKSM, 128, },
530 {"3c556 Laptop Tornado",
531 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_8BIT|HAS_CB_FNS|INVERT_MII_PWR|
532 HAS_HWCKSM, 128, },
533 {"3c556B Laptop Hurricane",
534 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_OFFSET|HAS_CB_FNS|INVERT_MII_PWR|
535 WNO_XCVR_PWR|HAS_HWCKSM, 128, },
537 {"3c575 [Megahertz] 10/100 LAN CardBus",
538 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
539 {"3c575 Boomerang CardBus",
540 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
541 {"3CCFE575BT Cyclone CardBus",
542 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|
543 INVERT_LED_PWR|HAS_HWCKSM, 128, },
544 {"3CCFE575CT Tornado CardBus",
545 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
546 MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
547 {"3CCFE656 Cyclone CardBus",
548 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
549 INVERT_LED_PWR|HAS_HWCKSM, 128, },
551 {"3CCFEM656B Cyclone+Winmodem CardBus",
552 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
553 INVERT_LED_PWR|HAS_HWCKSM, 128, },
554 {"3CXFEM656C Tornado+Winmodem CardBus", /* From pcmcia-cs-3.1.5 */
555 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
556 MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
557 {"3c450 HomePNA Tornado", /* AKPM: from Don's 0.99Q */
558 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
559 {"3c920 Tornado",
560 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
561 {"3c982 Hydra Dual Port A",
562 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_HWCKSM|HAS_NWAY, 128, },
564 {"3c982 Hydra Dual Port B",
565 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_HWCKSM|HAS_NWAY, 128, },
566 {"3c905B-T4",
567 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
568 {"3c920B-EMB-WNM Tornado",
569 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
571 {NULL,}, /* NULL terminated list. */
575 static struct pci_device_id vortex_pci_tbl[] = {
576 { 0x10B7, 0x5900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C590 },
577 { 0x10B7, 0x5920, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C592 },
578 { 0x10B7, 0x5970, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C597 },
579 { 0x10B7, 0x5950, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_1 },
580 { 0x10B7, 0x5951, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_2 },
582 { 0x10B7, 0x5952, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_3 },
583 { 0x10B7, 0x9000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_1 },
584 { 0x10B7, 0x9001, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_2 },
585 { 0x10B7, 0x9004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_3 },
586 { 0x10B7, 0x9005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_4 },
588 { 0x10B7, 0x9006, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_5 },
589 { 0x10B7, 0x900A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900B_FL },
590 { 0x10B7, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_1 },
591 { 0x10B7, 0x9051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_2 },
592 { 0x10B7, 0x9055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_1 },
594 { 0x10B7, 0x9058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_2 },
595 { 0x10B7, 0x905A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_FX },
596 { 0x10B7, 0x9200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905C },
597 { 0x10B7, 0x9202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9202 },
598 { 0x10B7, 0x9800, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C980 },
599 { 0x10B7, 0x9805, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9805 },
601 { 0x10B7, 0x7646, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CSOHO100_TX },
602 { 0x10B7, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C555 },
603 { 0x10B7, 0x6055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556 },
604 { 0x10B7, 0x6056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556B },
605 { 0x10B7, 0x5b57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575 },
607 { 0x10B7, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575_1 },
608 { 0x10B7, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575 },
609 { 0x10B7, 0x5257, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575CT },
610 { 0x10B7, 0x6560, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE656 },
611 { 0x10B7, 0x6562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656 },
613 { 0x10B7, 0x6564, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656_1 },
614 { 0x10B7, 0x4500, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C450 },
615 { 0x10B7, 0x9201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C920 },
616 { 0x10B7, 0x1201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C982A },
617 { 0x10B7, 0x1202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C982B },
619 { 0x10B7, 0x9056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_905BT4 },
620 { 0x10B7, 0x9210, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_920B_EMB_WNM },
622 {0,} /* 0 terminated list. */
624 MODULE_DEVICE_TABLE(pci, vortex_pci_tbl);
627 /* Operational definitions.
628 These are not used by other compilation units and thus are not
629 exported in a ".h" file.
631 First the windows. There are eight register windows, with the command
632 and status registers available in each.
634 #define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
635 #define EL3_CMD 0x0e
636 #define EL3_STATUS 0x0e
638 /* The top five bits written to EL3_CMD are a command, the lower
639 11 bits are the parameter, if applicable.
640 Note that 11 parameter bits were fine for ethernet, but the new chip
641 can handle FDDI length frames (~4500 octets) and now parameters count
642 32-bit 'Dwords' rather than octets. */
644 enum vortex_cmd {
645 TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
646 RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11,
647 UpStall = 6<<11, UpUnstall = (6<<11)+1,
648 DownStall = (6<<11)+2, DownUnstall = (6<<11)+3,
649 RxDiscard = 8<<11, TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
650 FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
651 SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
652 SetTxThreshold = 18<<11, SetTxStart = 19<<11,
653 StartDMAUp = 20<<11, StartDMADown = (20<<11)+1, StatsEnable = 21<<11,
654 StatsDisable = 22<<11, StopCoax = 23<<11, SetFilterBit = 25<<11,};
656 /* The SetRxFilter command accepts the following classes: */
657 enum RxFilter {
658 RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8 };
660 /* Bits in the general status register. */
661 enum vortex_status {
662 IntLatch = 0x0001, HostError = 0x0002, TxComplete = 0x0004,
663 TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
664 IntReq = 0x0040, StatsFull = 0x0080,
665 DMADone = 1<<8, DownComplete = 1<<9, UpComplete = 1<<10,
666 DMAInProgress = 1<<11, /* DMA controller is still busy.*/
667 CmdInProgress = 1<<12, /* EL3_CMD is still busy.*/
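/*
 * Illustrative sketch only (not part of the original driver): issuing
 * commands in the format described above -- command in the top five bits of
 * EL3_CMD, parameter in the low eleven bits -- and polling the status
 * register, in the style used throughout this driver.
 */
#if 0
static void example_ack_and_wait(long ioaddr)
{
	/* Select register window 1 (SelectWindow carries the window
	   number as its parameter). */
	EL3WINDOW(1);
	/* Acknowledge any latched interrupt sources in one command. */
	outw(AckIntr | IntLatch | IntReq, ioaddr + EL3_CMD);
	/* Busy-wait until the chip has accepted the command. */
	while (inw(ioaddr + EL3_STATUS) & CmdInProgress)
		;
}
#endif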
670 /* Register window 1 offsets, the window used in normal operation.
671 On the Vortex this window is always mapped at offsets 0x10-0x1f. */
672 enum Window1 {
673 TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
674 RxStatus = 0x18, Timer=0x1A, TxStatus = 0x1B,
675 TxFree = 0x1C, /* Remaining free bytes in Tx buffer. */
677 enum Window0 {
678 Wn0EepromCmd = 10, /* Window 0: EEPROM command register. */
679 Wn0EepromData = 12, /* Window 0: EEPROM results register. */
680 IntrStatus=0x0E, /* Valid in all windows. */
682 enum Win0_EEPROM_bits {
683 EEPROM_Read = 0x80, EEPROM_WRITE = 0x40, EEPROM_ERASE = 0xC0,
684 EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
685 EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
687 /* EEPROM locations. */
688 enum eeprom_offset {
689 PhysAddr01=0, PhysAddr23=1, PhysAddr45=2, ModelID=3,
690 EtherLink3ID=7, IFXcvrIO=8, IRQLine=9,
691 NodeAddr01=10, NodeAddr23=11, NodeAddr45=12,
692 DriverTune=13, Checksum=15};
694 enum Window2 { /* Window 2. */
695 Wn2_ResetOptions=12,
697 enum Window3 { /* Window 3: MAC/config bits. */
698 Wn3_Config=0, Wn3_MaxPktSize=4, Wn3_MAC_Ctrl=6, Wn3_Options=8,
701 #define BFEXT(value, offset, bitcount) \
702 ((((unsigned long)(value)) >> (offset)) & ((1 << (bitcount)) - 1))
704 #define BFINS(lhs, rhs, offset, bitcount) \
705 (((lhs) & ~((((1 << (bitcount)) - 1)) << (offset))) | \
706 (((rhs) & ((1 << (bitcount)) - 1)) << (offset)))
708 #define RAM_SIZE(v) BFEXT(v, 0, 3)
709 #define RAM_WIDTH(v) BFEXT(v, 3, 1)
710 #define RAM_SPEED(v) BFEXT(v, 4, 2)
711 #define ROM_SIZE(v) BFEXT(v, 6, 2)
712 #define RAM_SPLIT(v) BFEXT(v, 16, 2)
713 #define XCVR(v) BFEXT(v, 20, 4)
714 #define AUTOSELECT(v) BFEXT(v, 24, 1)
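/*
 * Illustrative sketch only (not part of the original driver): the BFEXT/BFINS
 * accessors above are how this driver picks apart the Wn3_Config
 * ("InternalConfig") register, e.g. forcing a particular transceiver the way
 * vortex_up() does further down.
 */
#if 0
static void example_force_transceiver(long ioaddr, int new_xcvr)
{
	unsigned int config;

	EL3WINDOW(3);
	config = inl(ioaddr + Wn3_Config);
	/* Bits 20-23 select the transceiver, bit 24 is autoselect. */
	if (XCVR(config) != new_xcvr) {
		config = BFINS(config, new_xcvr, 20, 4);
		outl(config, ioaddr + Wn3_Config);
	}
}
#endif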
716 enum Window4 { /* Window 4: Xcvr/media bits. */
717 Wn4_FIFODiag = 4, Wn4_NetDiag = 6, Wn4_PhysicalMgmt=8, Wn4_Media = 10,
719 enum Win4_Media_bits {
720 Media_SQE = 0x0008, /* Enable SQE error counting for AUI. */
721 Media_10TP = 0x00C0, /* Enable link beat and jabber for 10baseT. */
722 Media_Lnk = 0x0080, /* Enable just link beat for 100TX/100FX. */
723 Media_LnkBeat = 0x0800,
725 enum Window7 { /* Window 7: Bus Master control. */
726 Wn7_MasterAddr = 0, Wn7_VlanEtherType=4, Wn7_MasterLen = 6,
727 Wn7_MasterStatus = 12,
729 /* Boomerang bus master control registers. */
730 enum MasterCtrl {
731 PktStatus = 0x20, DownListPtr = 0x24, FragAddr = 0x28, FragLen = 0x2c,
732 TxFreeThreshold = 0x2f, UpPktStatus = 0x30, UpListPtr = 0x38,
735 /* The Rx and Tx descriptor lists.
736 Caution Alpha hackers: these types are 32 bits! Note also the 8 byte
737 alignment constraint on tx_ring[] and rx_ring[]. */
738 #define LAST_FRAG 0x80000000 /* Last Addr/Len pair in descriptor. */
739 #define DN_COMPLETE 0x00010000 /* This packet has been downloaded */
740 struct boom_rx_desc {
741 u32 next; /* Last entry points to 0. */
742 s32 status;
743 u32 addr; /* Up to 63 addr/len pairs possible. */
744 s32 length; /* Set LAST_FRAG to indicate last pair. */
746 /* Values for the Rx status entry. */
747 enum rx_desc_status {
748 RxDComplete=0x00008000, RxDError=0x4000,
749 /* See boomerang_rx() for actual error bits */
750 IPChksumErr=1<<25, TCPChksumErr=1<<26, UDPChksumErr=1<<27,
751 IPChksumValid=1<<29, TCPChksumValid=1<<30, UDPChksumValid=1<<31,
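/*
 * Illustrative sketch only (not part of the original driver): roughly how
 * vortex_open(), later in this file, fills the Rx descriptor ring.  'pdev',
 * 'ring', 'ring_dma' and 'skbs' stand in for the corresponding fields of
 * struct vortex_private, which is defined below; hence compiled out.
 */
#if 0
static void example_init_rx_ring(struct pci_dev *pdev,
				 struct boom_rx_desc *ring,
				 dma_addr_t ring_dma,
				 struct sk_buff **skbs)
{
	int i;

	for (i = 0; i < RX_RING_SIZE; i++) {
		struct sk_buff *skb = dev_alloc_skb(PKT_BUF_SZ);

		/* Each descriptor points at the next one; the last wraps
		   back to the first so the NIC can cycle the ring. */
		ring[i].next = cpu_to_le32(ring_dma +
			sizeof(struct boom_rx_desc) * ((i + 1) % RX_RING_SIZE));
		ring[i].status = 0;	/* owned by the NIC until complete */
		ring[i].length = cpu_to_le32(PKT_BUF_SZ | LAST_FRAG);
		skbs[i] = skb;
		if (skb == NULL)
			break;		/* out of memory; the real driver copes */
		skb_reserve(skb, 2);	/* align IP on 16 byte boundaries */
		ring[i].addr = cpu_to_le32(pci_map_single(pdev, skb->tail,
					PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
	}
}
#endif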
754 #ifdef MAX_SKB_FRAGS
755 #define DO_ZEROCOPY 1
756 #else
757 #define DO_ZEROCOPY 0
758 #endif
760 struct boom_tx_desc {
761 u32 next; /* Last entry points to 0. */
762 s32 status; /* bits 0:12 length, others see below. */
763 #if DO_ZEROCOPY
764 struct {
765 u32 addr;
766 s32 length;
767 } frag[1+MAX_SKB_FRAGS];
768 #else
769 u32 addr;
770 s32 length;
771 #endif
774 /* Values for the Tx status entry. */
775 enum tx_desc_status {
776 CRCDisable=0x2000, TxDComplete=0x8000,
777 AddIPChksum=0x02000000, AddTCPChksum=0x04000000, AddUDPChksum=0x08000000,
778 TxIntrUploaded=0x80000000, /* IRQ when in FIFO, but maybe not sent. */
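/*
 * Illustrative sketch only (not part of the original driver): roughly how
 * boomerang_start_xmit(), later in this file, builds a descriptor for a
 * linear (non-fragmented) skb.  'pdev' and 'desc' stand in for the
 * corresponding struct vortex_private fields; hence compiled out.
 */
#if 0
static void example_fill_tx_desc(struct pci_dev *pdev,
				 struct boom_tx_desc *desc,
				 struct sk_buff *skb)
{
	desc->next = 0;
	/* Request an interrupt once the NIC has downloaded the packet. */
	desc->status = cpu_to_le32(skb->len | TxIntrUploaded);
#if DO_ZEROCOPY
	desc->frag[0].addr = cpu_to_le32(pci_map_single(pdev, skb->data,
					skb->len, PCI_DMA_TODEVICE));
	desc->frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
#else
	desc->addr = cpu_to_le32(pci_map_single(pdev, skb->data,
					skb->len, PCI_DMA_TODEVICE));
	desc->length = cpu_to_le32(skb->len | LAST_FRAG);
#endif
}
#endif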
781 /* Chip features we care about in vp->capabilities, read from the EEPROM. */
782 enum ChipCaps { CapBusMaster=0x20, CapPwrMgmt=0x2000 };
784 struct vortex_private {
785 /* The Rx and Tx rings should be quad-word-aligned. */
786 struct boom_rx_desc* rx_ring;
787 struct boom_tx_desc* tx_ring;
788 dma_addr_t rx_ring_dma;
789 dma_addr_t tx_ring_dma;
790 /* The addresses of transmit- and receive-in-place skbuffs. */
791 struct sk_buff* rx_skbuff[RX_RING_SIZE];
792 struct sk_buff* tx_skbuff[TX_RING_SIZE];
793 unsigned int cur_rx, cur_tx; /* The next free ring entry */
794 unsigned int dirty_rx, dirty_tx; /* The ring entries to be free()ed. */
795 struct net_device_stats stats;
796 struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
797 dma_addr_t tx_skb_dma; /* Allocated DMA address for bus master ctrl DMA. */
799 /* PCI configuration space information. */
800 struct device *gendev;
801 char *cb_fn_base; /* CardBus function status addr space. */
803 /* Some values here only for performance evaluation and path-coverage */
804 int rx_nocopy, rx_copy, queued_packet, rx_csumhits;
805 int card_idx;
807 /* The remainder are related to chip state, mostly media selection. */
808 struct timer_list timer; /* Media selection timer. */
809 struct timer_list rx_oom_timer; /* Rx skb allocation retry timer */
810 int options; /* User-settable misc. driver options. */
811 unsigned int media_override:4, /* Passed-in media type. */
812 default_media:4, /* Read from the EEPROM/Wn3_Config. */
813 full_duplex:1, force_fd:1, autoselect:1,
814 bus_master:1, /* Vortex can only do a fragment bus-m. */
815 full_bus_master_tx:1, full_bus_master_rx:2, /* Boomerang */
816 flow_ctrl:1, /* Use 802.3x flow control (PAUSE only) */
817 partner_flow_ctrl:1, /* Partner supports flow control */
818 has_nway:1,
819 enable_wol:1, /* Wake-on-LAN is enabled */
820 pm_state_valid:1, /* power_state[] has sane contents */
821 open:1,
822 medialock:1,
823 must_free_region:1, /* Flag: if zero, Cardbus owns the I/O region */
824 large_frames:1; /* accept large frames */
825 int drv_flags;
826 u16 status_enable;
827 u16 intr_enable;
828 u16 available_media; /* From Wn3_Options. */
829 u16 capabilities, info1, info2; /* Various, from EEPROM. */
830 u16 advertising; /* NWay media advertisement */
831 unsigned char phys[2]; /* MII device addresses. */
832 u16 deferred; /* Resend these interrupts when we
833 * bale from the ISR */
834 u16 io_size; /* Size of PCI region (for release_region) */
835 spinlock_t lock; /* Serialise access to device & its vortex_private */
836 spinlock_t mdio_lock; /* Serialise access to mdio hardware */
837 u32 power_state[16];
840 #ifdef CONFIG_PCI
841 #define DEVICE_PCI(dev) (((dev)->bus == &pci_bus_type) ? to_pci_dev((dev)) : NULL)
842 #else
843 #define DEVICE_PCI(dev) NULL
844 #endif
846 #define VORTEX_PCI(vp) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL)
848 #ifdef CONFIG_EISA
849 #define DEVICE_EISA(dev) (((dev)->bus == &eisa_bus_type) ? to_eisa_device((dev)) : NULL)
850 #else
851 #define DEVICE_EISA(dev) NULL
852 #endif
854 #define VORTEX_EISA(vp) (((vp)->gendev) ? DEVICE_EISA((vp)->gendev) : NULL)
856 /* The action to take with a media selection timer tick.
857 Note that we deviate from the 3Com order by checking 10base2 before AUI.
859 enum xcvr_types {
860 XCVR_10baseT=0, XCVR_AUI, XCVR_10baseTOnly, XCVR_10base2, XCVR_100baseTx,
861 XCVR_100baseFx, XCVR_MII=6, XCVR_NWAY=8, XCVR_ExtMII=9, XCVR_Default=10,
864 static struct media_table {
865 char *name;
866 unsigned int media_bits:16, /* Bits to set in Wn4_Media register. */
867 mask:8, /* The transceiver-present bit in Wn3_Config.*/
868 next:8; /* The media type to try next. */
869 int wait; /* Time before we check media status. */
870 } media_tbl[] = {
871 { "10baseT", Media_10TP,0x08, XCVR_10base2, (14*HZ)/10},
872 { "10Mbs AUI", Media_SQE, 0x20, XCVR_Default, (1*HZ)/10},
873 { "undefined", 0, 0x80, XCVR_10baseT, 10000},
874 { "10base2", 0, 0x10, XCVR_AUI, (1*HZ)/10},
875 { "100baseTX", Media_Lnk, 0x02, XCVR_100baseFx, (14*HZ)/10},
876 { "100baseFX", Media_Lnk, 0x04, XCVR_MII, (14*HZ)/10},
877 { "MII", 0, 0x41, XCVR_10baseT, 3*HZ },
878 { "undefined", 0, 0x01, XCVR_10baseT, 10000},
879 { "Autonegotiate", 0, 0x41, XCVR_10baseT, 3*HZ},
880 { "MII-External", 0, 0x41, XCVR_10baseT, 3*HZ },
881 { "Default", 0, 0xFF, XCVR_10baseT, 10000},
884 static int vortex_probe1(struct device *gendev, long ioaddr, int irq,
885 int chip_idx, int card_idx);
886 static void vortex_up(struct net_device *dev);
887 static void vortex_down(struct net_device *dev, int final);
888 static int vortex_open(struct net_device *dev);
889 static void mdio_sync(long ioaddr, int bits);
890 static int mdio_read(struct net_device *dev, int phy_id, int location);
891 static void mdio_write(struct net_device *vp, int phy_id, int location, int value);
892 static void vortex_timer(unsigned long arg);
893 static void rx_oom_timer(unsigned long arg);
894 static int vortex_start_xmit(struct sk_buff *skb, struct net_device *dev);
895 static int boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev);
896 static int vortex_rx(struct net_device *dev);
897 static int boomerang_rx(struct net_device *dev);
898 static irqreturn_t vortex_interrupt(int irq, void *dev_id, struct pt_regs *regs);
899 static irqreturn_t boomerang_interrupt(int irq, void *dev_id, struct pt_regs *regs);
900 static int vortex_close(struct net_device *dev);
901 static void dump_tx_ring(struct net_device *dev);
902 static void update_stats(long ioaddr, struct net_device *dev);
903 static struct net_device_stats *vortex_get_stats(struct net_device *dev);
904 static void set_rx_mode(struct net_device *dev);
905 #ifdef CONFIG_PCI
906 static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
907 #endif
908 static void vortex_tx_timeout(struct net_device *dev);
909 static void acpi_set_WOL(struct net_device *dev);
910 static struct ethtool_ops vortex_ethtool_ops;
911 static void set_8021q_mode(struct net_device *dev, int enable);
914 /* This driver uses 'options' to pass the media type, full-duplex flag, etc. */
915 /* Option count limit only -- unlimited interfaces are supported. */
916 #define MAX_UNITS 8
917 static int options[MAX_UNITS] = { -1, -1, -1, -1, -1, -1, -1, -1,};
918 static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
919 static int hw_checksums[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
920 static int flow_ctrl[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
921 static int enable_wol[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
922 static int global_options = -1;
923 static int global_full_duplex = -1;
924 static int global_enable_wol = -1;
926 /* #define dev_alloc_skb dev_alloc_skb_debug */
928 /* Variables to work-around the Compaq PCI BIOS32 problem. */
929 static int compaq_ioaddr, compaq_irq, compaq_device_id = 0x5900;
930 static struct net_device *compaq_net_device;
932 static int vortex_cards_found;
934 #ifdef CONFIG_NET_POLL_CONTROLLER
935 static void poll_vortex(struct net_device *dev)
937 struct vortex_private *vp = (struct vortex_private *)dev->priv;
938 unsigned long flags;
939 local_save_flags(flags);
940 local_irq_disable();
941 (vp->full_bus_master_rx ? boomerang_interrupt:vortex_interrupt)(dev->irq,dev,NULL);
942 local_irq_restore(flags);
944 #endif
946 #ifdef CONFIG_PM
948 static int vortex_suspend (struct pci_dev *pdev, u32 state)
950 struct net_device *dev = pci_get_drvdata(pdev);
952 if (dev && dev->priv) {
953 if (netif_running(dev)) {
954 netif_device_detach(dev);
955 vortex_down(dev, 1);
958 return 0;
961 static int vortex_resume (struct pci_dev *pdev)
963 struct net_device *dev = pci_get_drvdata(pdev);
965 if (dev && dev->priv) {
966 if (netif_running(dev)) {
967 vortex_up(dev);
968 netif_device_attach(dev);
971 return 0;
974 #endif /* CONFIG_PM */
976 #ifdef CONFIG_EISA
977 static struct eisa_device_id vortex_eisa_ids[] = {
978 { "TCM5920", CH_3C592 },
979 { "TCM5970", CH_3C597 },
980 { "" }
983 static int vortex_eisa_probe (struct device *device);
984 static int vortex_eisa_remove (struct device *device);
986 static struct eisa_driver vortex_eisa_driver = {
987 .id_table = vortex_eisa_ids,
988 .driver = {
989 .name = "3c59x",
990 .probe = vortex_eisa_probe,
991 .remove = vortex_eisa_remove
995 static int vortex_eisa_probe (struct device *device)
997 long ioaddr;
998 struct eisa_device *edev;
1000 edev = to_eisa_device (device);
1001 ioaddr = edev->base_addr;
1003 if (!request_region(ioaddr, VORTEX_TOTAL_SIZE, DRV_NAME))
1004 return -EBUSY;
1006 if (vortex_probe1(device, ioaddr, inw(ioaddr + 0xC88) >> 12,
1007 edev->id.driver_data, vortex_cards_found)) {
1008 release_region (ioaddr, VORTEX_TOTAL_SIZE);
1009 return -ENODEV;
1012 vortex_cards_found++;
1014 return 0;
1017 static int vortex_eisa_remove (struct device *device)
1019 struct eisa_device *edev;
1020 struct net_device *dev;
1021 struct vortex_private *vp;
1022 long ioaddr;
1024 edev = to_eisa_device (device);
1025 dev = eisa_get_drvdata (edev);
1027 if (!dev) {
1028 printk("vortex_eisa_remove called for Compaq device!\n");
1029 BUG();
1032 vp = netdev_priv(dev);
1033 ioaddr = dev->base_addr;
1035 unregister_netdev (dev);
1036 outw (TotalReset|0x14, ioaddr + EL3_CMD);
1037 release_region (ioaddr, VORTEX_TOTAL_SIZE);
1039 free_netdev (dev);
1040 return 0;
1042 #endif
1044 /* returns count found (>= 0), or negative on error */
1045 static int __init vortex_eisa_init (void)
1047 int eisa_found = 0;
1048 int orig_cards_found = vortex_cards_found;
1050 #ifdef CONFIG_EISA
1051 if (eisa_driver_register (&vortex_eisa_driver) >= 0) {
1052 /* Because of the way EISA bus is probed, we cannot assume
1053 * any device has been found when we exit from
1054 * eisa_driver_register (the bus root driver may not be
1055 * initialized yet). So we blindly assume something was
1056 * found, and let the sysfs magic happen... */
1058 eisa_found = 1;
1060 #endif
1062 /* Special code to work-around the Compaq PCI BIOS32 problem. */
1063 if (compaq_ioaddr) {
1064 vortex_probe1(NULL, compaq_ioaddr, compaq_irq,
1065 compaq_device_id, vortex_cards_found++);
1068 return vortex_cards_found - orig_cards_found + eisa_found;
1071 /* returns count (>= 0), or negative on error */
1072 static int __devinit vortex_init_one (struct pci_dev *pdev,
1073 const struct pci_device_id *ent)
1075 int rc;
1077 /* wake up and enable device */
1078 if (pci_enable_device (pdev)) {
1079 rc = -EIO;
1080 } else {
1081 rc = vortex_probe1 (&pdev->dev, pci_resource_start (pdev, 0),
1082 pdev->irq, ent->driver_data, vortex_cards_found);
1083 if (rc == 0)
1084 vortex_cards_found++;
1086 return rc;
1090 * Start up the PCI/EISA device which is described by *gendev.
1091 * Return 0 on success.
1093 * NOTE: pdev can be NULL, for the case of a Compaq device
1095 static int __devinit vortex_probe1(struct device *gendev,
1096 long ioaddr, int irq,
1097 int chip_idx, int card_idx)
1099 struct vortex_private *vp;
1100 int option;
1101 unsigned int eeprom[0x40], checksum = 0; /* EEPROM contents */
1102 int i, step;
1103 struct net_device *dev;
1104 static int printed_version;
1105 int retval, print_info;
1106 struct vortex_chip_info * const vci = &vortex_info_tbl[chip_idx];
1107 char *print_name = "3c59x";
1108 struct pci_dev *pdev = NULL;
1109 struct eisa_device *edev = NULL;
1111 if (!printed_version) {
1112 printk (version);
1113 printed_version = 1;
1116 if (gendev) {
1117 if ((pdev = DEVICE_PCI(gendev))) {
1118 print_name = pci_name(pdev);
1121 if ((edev = DEVICE_EISA(gendev))) {
1122 print_name = edev->dev.bus_id;
1126 dev = alloc_etherdev(sizeof(*vp));
1127 retval = -ENOMEM;
1128 if (!dev) {
1129 printk (KERN_ERR PFX "unable to allocate etherdev, aborting\n");
1130 goto out;
1132 SET_MODULE_OWNER(dev);
1133 SET_NETDEV_DEV(dev, gendev);
1134 vp = netdev_priv(dev);
1136 option = global_options;
1138 /* The lower four bits are the media type. */
1139 if (dev->mem_start) {
1141 * The 'options' param is passed in as the third arg to the
1142 * LILO 'ether=' argument for non-modular use
1144 option = dev->mem_start;
1146 else if (card_idx < MAX_UNITS) {
1147 if (options[card_idx] >= 0)
1148 option = options[card_idx];
1151 if (option > 0) {
1152 if (option & 0x8000)
1153 vortex_debug = 7;
1154 if (option & 0x4000)
1155 vortex_debug = 2;
1156 if (option & 0x0400)
1157 vp->enable_wol = 1;
1160 print_info = (vortex_debug > 1);
1161 if (print_info)
1162 printk (KERN_INFO "See Documentation/networking/vortex.txt\n");
1164 printk(KERN_INFO "%s: 3Com %s %s at 0x%lx. Vers " DRV_VERSION "\n",
1165 print_name,
1166 pdev ? "PCI" : "EISA",
1167 vci->name,
1168 ioaddr);
1170 dev->base_addr = ioaddr;
1171 dev->irq = irq;
1172 dev->mtu = mtu;
1173 vp->large_frames = mtu > 1500;
1174 vp->drv_flags = vci->drv_flags;
1175 vp->has_nway = (vci->drv_flags & HAS_NWAY) ? 1 : 0;
1176 vp->io_size = vci->io_size;
1177 vp->card_idx = card_idx;
1179 /* module list only for Compaq device */
1180 if (gendev == NULL) {
1181 compaq_net_device = dev;
1184 /* PCI-only startup logic */
1185 if (pdev) {
1186 /* EISA resources already marked, so only PCI needs to do this here */
1187 /* Ignore return value, because Cardbus drivers already allocate for us */
1188 if (request_region(ioaddr, vci->io_size, print_name) != NULL)
1189 vp->must_free_region = 1;
1191 /* enable bus-mastering if necessary */
1192 if (vci->flags & PCI_USES_MASTER)
1193 pci_set_master (pdev);
1195 if (vci->drv_flags & IS_VORTEX) {
1196 u8 pci_latency;
1197 u8 new_latency = 248;
1199 /* Check the PCI latency value. On the 3c590 series the latency timer
1200 must be set to the maximum value to avoid data corruption that occurs
1201 when the timer expires during a transfer. This bug exists in the Vortex
1202 chip only. */
1203 pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &pci_latency);
1204 if (pci_latency < new_latency) {
1205 printk(KERN_INFO "%s: Overriding PCI latency"
1206 " timer (CFLT) setting of %d, new value is %d.\n",
1207 print_name, pci_latency, new_latency);
1208 pci_write_config_byte(pdev, PCI_LATENCY_TIMER, new_latency);
1213 spin_lock_init(&vp->lock);
1214 spin_lock_init(&vp->mdio_lock);
1215 vp->gendev = gendev;
1217 /* Makes sure rings are at least 16 byte aligned. */
1218 vp->rx_ring = pci_alloc_consistent(pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE
1219 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
1220 &vp->rx_ring_dma);
1221 retval = -ENOMEM;
1222 if (vp->rx_ring == 0)
1223 goto free_region;
1225 vp->tx_ring = (struct boom_tx_desc *)(vp->rx_ring + RX_RING_SIZE);
1226 vp->tx_ring_dma = vp->rx_ring_dma + sizeof(struct boom_rx_desc) * RX_RING_SIZE;
1228 /* if we are a PCI driver, we store info in pdev->driver_data
1229 * instead of a module list */
1230 if (pdev)
1231 pci_set_drvdata(pdev, dev);
1232 if (edev)
1233 eisa_set_drvdata (edev, dev);
1235 vp->media_override = 7;
1236 if (option >= 0) {
1237 vp->media_override = ((option & 7) == 2) ? 0 : option & 15;
1238 if (vp->media_override != 7)
1239 vp->medialock = 1;
1240 vp->full_duplex = (option & 0x200) ? 1 : 0;
1241 vp->bus_master = (option & 16) ? 1 : 0;
1244 if (global_full_duplex > 0)
1245 vp->full_duplex = 1;
1246 if (global_enable_wol > 0)
1247 vp->enable_wol = 1;
1249 if (card_idx < MAX_UNITS) {
1250 if (full_duplex[card_idx] > 0)
1251 vp->full_duplex = 1;
1252 if (flow_ctrl[card_idx] > 0)
1253 vp->flow_ctrl = 1;
1254 if (enable_wol[card_idx] > 0)
1255 vp->enable_wol = 1;
1258 vp->force_fd = vp->full_duplex;
1259 vp->options = option;
1260 /* Read the station address from the EEPROM. */
1261 EL3WINDOW(0);
1263 int base;
1265 if (vci->drv_flags & EEPROM_8BIT)
1266 base = 0x230;
1267 else if (vci->drv_flags & EEPROM_OFFSET)
1268 base = EEPROM_Read + 0x30;
1269 else
1270 base = EEPROM_Read;
1272 for (i = 0; i < 0x40; i++) {
1273 int timer;
1274 outw(base + i, ioaddr + Wn0EepromCmd);
1275 /* Pause for at least 162 us. for the read to take place. */
1276 for (timer = 10; timer >= 0; timer--) {
1277 udelay(162);
1278 if ((inw(ioaddr + Wn0EepromCmd) & 0x8000) == 0)
1279 break;
1281 eeprom[i] = inw(ioaddr + Wn0EepromData);
1284 for (i = 0; i < 0x18; i++)
1285 checksum ^= eeprom[i];
1286 checksum = (checksum ^ (checksum >> 8)) & 0xff;
1287 if (checksum != 0x00) { /* Grrr, needless incompatible change 3Com. */
1288 while (i < 0x21)
1289 checksum ^= eeprom[i++];
1290 checksum = (checksum ^ (checksum >> 8)) & 0xff;
1292 if ((checksum != 0x00) && !(vci->drv_flags & IS_TORNADO))
1293 printk(" ***INVALID CHECKSUM %4.4x*** ", checksum);
1294 for (i = 0; i < 3; i++)
1295 ((u16 *)dev->dev_addr)[i] = htons(eeprom[i + 10]);
1296 if (print_info) {
1297 for (i = 0; i < 6; i++)
1298 printk("%c%2.2x", i ? ':' : ' ', dev->dev_addr[i]);
1300 /* Unfortunately an all zero eeprom passes the checksum and this
1301 gets found in the wild in failure cases. Crypto is hard 8) */
1302 if (!is_valid_ether_addr(dev->dev_addr)) {
1303 retval = -EINVAL;
1304 printk(KERN_ERR "*** EEPROM MAC address is invalid.\n");
1305 goto free_ring; /* With every pack */
1307 EL3WINDOW(2);
1308 for (i = 0; i < 6; i++)
1309 outb(dev->dev_addr[i], ioaddr + i);
1311 #ifdef __sparc__
1312 if (print_info)
1313 printk(", IRQ %s\n", __irq_itoa(dev->irq));
1314 #else
1315 if (print_info)
1316 printk(", IRQ %d\n", dev->irq);
1317 /* Tell them about an invalid IRQ. */
1318 if (dev->irq <= 0 || dev->irq >= NR_IRQS)
1319 printk(KERN_WARNING " *** Warning: IRQ %d is unlikely to work! ***\n",
1320 dev->irq);
1321 #endif
1323 EL3WINDOW(4);
1324 step = (inb(ioaddr + Wn4_NetDiag) & 0x1e) >> 1;
1325 if (print_info) {
1326 printk(KERN_INFO " product code %02x%02x rev %02x.%d date %02d-"
1327 "%02d-%02d\n", eeprom[6]&0xff, eeprom[6]>>8, eeprom[0x14],
1328 step, (eeprom[4]>>5) & 15, eeprom[4] & 31, eeprom[4]>>9);
1332 if (pdev && vci->drv_flags & HAS_CB_FNS) {
1333 unsigned long fn_st_addr; /* Cardbus function status space */
1334 unsigned short n;
1336 fn_st_addr = pci_resource_start (pdev, 2);
1337 if (fn_st_addr) {
1338 vp->cb_fn_base = ioremap(fn_st_addr, 128);
1339 retval = -ENOMEM;
1340 if (!vp->cb_fn_base)
1341 goto free_ring;
1343 if (print_info) {
1344 printk(KERN_INFO "%s: CardBus functions mapped %8.8lx->%p\n",
1345 print_name, fn_st_addr, vp->cb_fn_base);
1347 EL3WINDOW(2);
1349 n = inw(ioaddr + Wn2_ResetOptions) & ~0x4010;
1350 if (vp->drv_flags & INVERT_LED_PWR)
1351 n |= 0x10;
1352 if (vp->drv_flags & INVERT_MII_PWR)
1353 n |= 0x4000;
1354 outw(n, ioaddr + Wn2_ResetOptions);
1355 if (vp->drv_flags & WNO_XCVR_PWR) {
1356 EL3WINDOW(0);
1357 outw(0x0800, ioaddr);
1361 /* Extract our information from the EEPROM data. */
1362 vp->info1 = eeprom[13];
1363 vp->info2 = eeprom[15];
1364 vp->capabilities = eeprom[16];
1366 if (vp->info1 & 0x8000) {
1367 vp->full_duplex = 1;
1368 if (print_info)
1369 printk(KERN_INFO "Full duplex capable\n");
1373 static const char * ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
1374 unsigned int config;
1375 EL3WINDOW(3);
1376 vp->available_media = inw(ioaddr + Wn3_Options);
1377 if ((vp->available_media & 0xff) == 0) /* Broken 3c916 */
1378 vp->available_media = 0x40;
1379 config = inl(ioaddr + Wn3_Config);
1380 if (print_info) {
1381 printk(KERN_DEBUG " Internal config register is %4.4x, "
1382 "transceivers %#x.\n", config, inw(ioaddr + Wn3_Options));
1383 printk(KERN_INFO " %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
1384 8 << RAM_SIZE(config),
1385 RAM_WIDTH(config) ? "word" : "byte",
1386 ram_split[RAM_SPLIT(config)],
1387 AUTOSELECT(config) ? "autoselect/" : "",
1388 XCVR(config) > XCVR_ExtMII ? "<invalid transceiver>" :
1389 media_tbl[XCVR(config)].name);
1391 vp->default_media = XCVR(config);
1392 if (vp->default_media == XCVR_NWAY)
1393 vp->has_nway = 1;
1394 vp->autoselect = AUTOSELECT(config);
1397 if (vp->media_override != 7) {
1398 printk(KERN_INFO "%s: Media override to transceiver type %d (%s).\n",
1399 print_name, vp->media_override,
1400 media_tbl[vp->media_override].name);
1401 dev->if_port = vp->media_override;
1402 } else
1403 dev->if_port = vp->default_media;
1405 if ((vp->available_media & 0x40) || (vci->drv_flags & HAS_NWAY) ||
1406 dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
1407 int phy, phy_idx = 0;
1408 EL3WINDOW(4);
1409 mii_preamble_required++;
1410 if (vp->drv_flags & EXTRA_PREAMBLE)
1411 mii_preamble_required++;
1412 mdio_sync(ioaddr, 32);
1413 mdio_read(dev, 24, 1);
1414 for (phy = 0; phy < 32 && phy_idx < 1; phy++) {
1415 int mii_status, phyx;
1418 * For the 3c905CX we look at index 24 first, because it bogusly
1419 * reports an external PHY at all indices
1421 if (phy == 0)
1422 phyx = 24;
1423 else if (phy <= 24)
1424 phyx = phy - 1;
1425 else
1426 phyx = phy;
1427 mii_status = mdio_read(dev, phyx, 1);
1428 if (mii_status && mii_status != 0xffff) {
1429 vp->phys[phy_idx++] = phyx;
1430 if (print_info) {
1431 printk(KERN_INFO " MII transceiver found at address %d,"
1432 " status %4x.\n", phyx, mii_status);
1434 if ((mii_status & 0x0040) == 0)
1435 mii_preamble_required++;
1438 mii_preamble_required--;
1439 if (phy_idx == 0) {
1440 printk(KERN_WARNING" ***WARNING*** No MII transceivers found!\n");
1441 vp->phys[0] = 24;
1442 } else {
1443 vp->advertising = mdio_read(dev, vp->phys[0], 4);
1444 if (vp->full_duplex) {
1445 /* Only advertise the FD media types. */
1446 vp->advertising &= ~0x02A0;
1447 mdio_write(dev, vp->phys[0], 4, vp->advertising);
1452 if (vp->capabilities & CapBusMaster) {
1453 vp->full_bus_master_tx = 1;
1454 if (print_info) {
1455 printk(KERN_INFO " Enabling bus-master transmits and %s receives.\n",
1456 (vp->info2 & 1) ? "early" : "whole-frame" );
1458 vp->full_bus_master_rx = (vp->info2 & 1) ? 1 : 2;
1459 vp->bus_master = 0; /* AKPM: vortex only */
1462 /* The 3c59x-specific entries in the device structure. */
1463 dev->open = vortex_open;
1464 if (vp->full_bus_master_tx) {
1465 dev->hard_start_xmit = boomerang_start_xmit;
1466 /* Actually, it still should work with iommu. */
1467 dev->features |= NETIF_F_SG;
1468 if (((hw_checksums[card_idx] == -1) && (vp->drv_flags & HAS_HWCKSM)) ||
1469 (hw_checksums[card_idx] == 1)) {
1470 dev->features |= NETIF_F_IP_CSUM;
1472 } else {
1473 dev->hard_start_xmit = vortex_start_xmit;
1476 if (print_info) {
1477 printk(KERN_INFO "%s: scatter/gather %sabled. h/w checksums %sabled\n",
1478 print_name,
1479 (dev->features & NETIF_F_SG) ? "en":"dis",
1480 (dev->features & NETIF_F_IP_CSUM) ? "en":"dis");
1483 dev->stop = vortex_close;
1484 dev->get_stats = vortex_get_stats;
1485 #ifdef CONFIG_PCI
1486 dev->do_ioctl = vortex_ioctl;
1487 #endif
1488 dev->ethtool_ops = &vortex_ethtool_ops;
1489 dev->set_multicast_list = set_rx_mode;
1490 dev->tx_timeout = vortex_tx_timeout;
1491 dev->watchdog_timeo = (watchdog * HZ) / 1000;
1492 #ifdef CONFIG_NET_POLL_CONTROLLER
1493 dev->poll_controller = poll_vortex;
1494 #endif
1495 if (pdev && vp->enable_wol) {
1496 vp->pm_state_valid = 1;
1497 pci_save_state(VORTEX_PCI(vp), vp->power_state);
1498 acpi_set_WOL(dev);
1500 retval = register_netdev(dev);
1501 if (retval == 0)
1502 return 0;
1504 free_ring:
1505 pci_free_consistent(pdev,
1506 sizeof(struct boom_rx_desc) * RX_RING_SIZE
1507 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
1508 vp->rx_ring,
1509 vp->rx_ring_dma);
1510 free_region:
1511 if (vp->must_free_region)
1512 release_region(ioaddr, vci->io_size);
1513 free_netdev(dev);
1514 printk(KERN_ERR PFX "vortex_probe1 fails. Returns %d\n", retval);
1515 out:
1516 return retval;
1519 static void
1520 issue_and_wait(struct net_device *dev, int cmd)
1522 int i;
1524 outw(cmd, dev->base_addr + EL3_CMD);
1525 for (i = 0; i < 2000; i++) {
1526 if (!(inw(dev->base_addr + EL3_STATUS) & CmdInProgress))
1527 return;
1530 /* OK, that didn't work. Do it the slow way. One second */
1531 for (i = 0; i < 100000; i++) {
1532 if (!(inw(dev->base_addr + EL3_STATUS) & CmdInProgress)) {
1533 if (vortex_debug > 1)
1534 printk(KERN_INFO "%s: command 0x%04x took %d usecs\n",
1535 dev->name, cmd, i * 10);
1536 return;
1538 udelay(10);
1540 printk(KERN_ERR "%s: command 0x%04x did not complete! Status=0x%x\n",
1541 dev->name, cmd, inw(dev->base_addr + EL3_STATUS));
1544 static void
1545 vortex_up(struct net_device *dev)
1547 long ioaddr = dev->base_addr;
1548 struct vortex_private *vp = netdev_priv(dev);
1549 unsigned int config;
1550 int i;
1552 if (VORTEX_PCI(vp) && vp->enable_wol) {
1553 pci_set_power_state(VORTEX_PCI(vp), 0); /* Go active */
1554 pci_restore_state(VORTEX_PCI(vp), vp->power_state);
1557 /* Before initializing select the active media port. */
1558 EL3WINDOW(3);
1559 config = inl(ioaddr + Wn3_Config);
1561 if (vp->media_override != 7) {
1562 printk(KERN_INFO "%s: Media override to transceiver %d (%s).\n",
1563 dev->name, vp->media_override,
1564 media_tbl[vp->media_override].name);
1565 dev->if_port = vp->media_override;
1566 } else if (vp->autoselect) {
1567 if (vp->has_nway) {
1568 if (vortex_debug > 1)
1569 printk(KERN_INFO "%s: using NWAY device table, not %d\n",
1570 dev->name, dev->if_port);
1571 dev->if_port = XCVR_NWAY;
1572 } else {
1573 /* Find first available media type, starting with 100baseTx. */
1574 dev->if_port = XCVR_100baseTx;
1575 while (! (vp->available_media & media_tbl[dev->if_port].mask))
1576 dev->if_port = media_tbl[dev->if_port].next;
1577 if (vortex_debug > 1)
1578 printk(KERN_INFO "%s: first available media type: %s\n",
1579 dev->name, media_tbl[dev->if_port].name);
1581 } else {
1582 dev->if_port = vp->default_media;
1583 if (vortex_debug > 1)
1584 printk(KERN_INFO "%s: using default media %s\n",
1585 dev->name, media_tbl[dev->if_port].name);
1588 init_timer(&vp->timer);
1589 vp->timer.expires = RUN_AT(media_tbl[dev->if_port].wait);
1590 vp->timer.data = (unsigned long)dev;
1591 vp->timer.function = vortex_timer; /* timer handler */
1592 add_timer(&vp->timer);
1594 init_timer(&vp->rx_oom_timer);
1595 vp->rx_oom_timer.data = (unsigned long)dev;
1596 vp->rx_oom_timer.function = rx_oom_timer;
1598 if (vortex_debug > 1)
1599 printk(KERN_DEBUG "%s: Initial media type %s.\n",
1600 dev->name, media_tbl[dev->if_port].name);
1602 vp->full_duplex = vp->force_fd;
1603 config = BFINS(config, dev->if_port, 20, 4);
1604 if (vortex_debug > 6)
1605 printk(KERN_DEBUG "vortex_up(): writing 0x%x to InternalConfig\n", config);
1606 outl(config, ioaddr + Wn3_Config);
1608 if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
1609 int mii_reg1, mii_reg5;
1610 EL3WINDOW(4);
1611 /* Read BMSR (reg1) only to clear old status. */
1612 mii_reg1 = mdio_read(dev, vp->phys[0], 1);
1613 mii_reg5 = mdio_read(dev, vp->phys[0], 5);
1614 if (mii_reg5 == 0xffff || mii_reg5 == 0x0000) {
1615 netif_carrier_off(dev); /* No MII device or no link partner report */
1616 } else {
1617 mii_reg5 &= vp->advertising;
1618 if ((mii_reg5 & 0x0100) != 0 /* 100baseTx-FD */
1619 || (mii_reg5 & 0x00C0) == 0x0040) /* 10T-FD, but not 100-HD */
1620 vp->full_duplex = 1;
1621 netif_carrier_on(dev);
1623 vp->partner_flow_ctrl = ((mii_reg5 & 0x0400) != 0);
1624 if (vortex_debug > 1)
1625 printk(KERN_INFO "%s: MII #%d status %4.4x, link partner capability %4.4x,"
1626 " info1 %04x, setting %s-duplex.\n",
1627 dev->name, vp->phys[0],
1628 mii_reg1, mii_reg5,
1629 vp->info1, ((vp->info1 & 0x8000) || vp->full_duplex) ? "full" : "half");
1630 EL3WINDOW(3);
1633 /* Set the full-duplex bit. */
1634 outw( ((vp->info1 & 0x8000) || vp->full_duplex ? 0x20 : 0) |
1635 (vp->large_frames ? 0x40 : 0) |
1636 ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ? 0x100 : 0),
1637 ioaddr + Wn3_MAC_Ctrl);
1639 if (vortex_debug > 1) {
1640 printk(KERN_DEBUG "%s: vortex_up() InternalConfig %8.8x.\n",
1641 dev->name, config);
1644 issue_and_wait(dev, TxReset);
1646 /* Don't reset the PHY - that upsets autonegotiation during DHCP operations. */
1648 issue_and_wait(dev, RxReset|0x04);
1650 outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
1652 if (vortex_debug > 1) {
1653 EL3WINDOW(4);
1654 printk(KERN_DEBUG "%s: vortex_up() irq %d media status %4.4x.\n",
1655 dev->name, dev->irq, inw(ioaddr + Wn4_Media));
1658 /* Set the station address and mask in window 2 each time opened. */
1659 EL3WINDOW(2);
1660 for (i = 0; i < 6; i++)
1661 outb(dev->dev_addr[i], ioaddr + i);
1662 for (; i < 12; i+=2)
1663 outw(0, ioaddr + i);
1665 if (vp->cb_fn_base) {
1666 unsigned short n = inw(ioaddr + Wn2_ResetOptions) & ~0x4010;
1667 if (vp->drv_flags & INVERT_LED_PWR)
1668 n |= 0x10;
1669 if (vp->drv_flags & INVERT_MII_PWR)
1670 n |= 0x4000;
1671 outw(n, ioaddr + Wn2_ResetOptions);
1674 if (dev->if_port == XCVR_10base2)
1675 /* Start the thinnet transceiver. We should really wait 50ms...*/
1676 outw(StartCoax, ioaddr + EL3_CMD);
1677 if (dev->if_port != XCVR_NWAY) {
1678 EL3WINDOW(4);
1679 outw((inw(ioaddr + Wn4_Media) & ~(Media_10TP|Media_SQE)) |
1680 media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
1683 /* Switch to the stats window, and clear all stats by reading. */
1684 outw(StatsDisable, ioaddr + EL3_CMD);
1685 EL3WINDOW(6);
1686 for (i = 0; i < 10; i++)
1687 inb(ioaddr + i);
1688 inw(ioaddr + 10);
1689 inw(ioaddr + 12);
1690 /* New: On the Vortex we must also clear the BadSSD counter. */
1691 EL3WINDOW(4);
1692 inb(ioaddr + 12);
1693 /* ..and on the Boomerang we enable the extra statistics bits. */
1694 outw(0x0040, ioaddr + Wn4_NetDiag);
1696 /* Switch to register set 7 for normal use. */
1697 EL3WINDOW(7);
1699 if (vp->full_bus_master_rx) { /* Boomerang bus master. */
1700 vp->cur_rx = vp->dirty_rx = 0;
1701 /* Initialize the RxEarly register as recommended. */
1702 outw(SetRxThreshold + (1536>>2), ioaddr + EL3_CMD);
1703 outl(0x0020, ioaddr + PktStatus);
1704 outl(vp->rx_ring_dma, ioaddr + UpListPtr);
1706 if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
1707 vp->cur_tx = vp->dirty_tx = 0;
1708 if (vp->drv_flags & IS_BOOMERANG)
1709 outb(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold); /* Room for a packet. */
1710 /* Clear the Rx, Tx rings. */
1711 for (i = 0; i < RX_RING_SIZE; i++) /* AKPM: this is done in vortex_open, too */
1712 vp->rx_ring[i].status = 0;
1713 for (i = 0; i < TX_RING_SIZE; i++)
1714 vp->tx_skbuff[i] = NULL;
1715 outl(0, ioaddr + DownListPtr);
1717 /* Set receiver mode: presumably accept broadcast and physical address only. */
1718 set_rx_mode(dev);
1719 /* enable 802.1q tagged frames */
1720 set_8021q_mode(dev, 1);
1721 outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
1723 // issue_and_wait(dev, SetTxStart|0x07ff);
1724 outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
1725 outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
1726 /* Allow status bits to be seen. */
1727 vp->status_enable = SetStatusEnb | HostError|IntReq|StatsFull|TxComplete|
1728 (vp->full_bus_master_tx ? DownComplete : TxAvailable) |
1729 (vp->full_bus_master_rx ? UpComplete : RxComplete) |
1730 (vp->bus_master ? DMADone : 0);
1731 vp->intr_enable = SetIntrEnb | IntLatch | TxAvailable |
1732 (vp->full_bus_master_rx ? 0 : RxComplete) |
1733 StatsFull | HostError | TxComplete | IntReq
1734 | (vp->bus_master ? DMADone : 0) | UpComplete | DownComplete;
1735 outw(vp->status_enable, ioaddr + EL3_CMD);
1736 /* Ack all pending events, and set active indicator mask. */
1737 outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
1738 ioaddr + EL3_CMD);
1739 outw(vp->intr_enable, ioaddr + EL3_CMD);
1740 if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
1741 writel(0x8000, vp->cb_fn_base + 4);
1742 netif_start_queue (dev);
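/*
 * vortex_open(): request the (shared) IRQ, fill the boomerang Rx ring with
 * freshly allocated and DMA-mapped skbuffs if this chip uses bus-master
 * receives, then hand off to vortex_up().  A partial ring allocation is
 * unwound and reported as -ENOMEM.
 */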
1745 static int
1746 vortex_open(struct net_device *dev)
1748 struct vortex_private *vp = netdev_priv(dev);
1749 int i;
1750 int retval;
1752 /* Use the now-standard shared IRQ implementation. */
1753 if ((retval = request_irq(dev->irq, vp->full_bus_master_rx ?
1754 &boomerang_interrupt : &vortex_interrupt, SA_SHIRQ, dev->name, dev))) {
1755 printk(KERN_ERR "%s: Could not reserve IRQ %d\n", dev->name, dev->irq);
1756 goto out;
1759 if (vp->full_bus_master_rx) { /* Boomerang bus master. */
1760 if (vortex_debug > 2)
1761 printk(KERN_DEBUG "%s: Filling in the Rx ring.\n", dev->name);
1762 for (i = 0; i < RX_RING_SIZE; i++) {
1763 struct sk_buff *skb;
1764 vp->rx_ring[i].next = cpu_to_le32(vp->rx_ring_dma + sizeof(struct boom_rx_desc) * (i+1));
1765 vp->rx_ring[i].status = 0; /* Clear complete bit. */
1766 vp->rx_ring[i].length = cpu_to_le32(PKT_BUF_SZ | LAST_FRAG);
1767 skb = dev_alloc_skb(PKT_BUF_SZ);
1768 vp->rx_skbuff[i] = skb;
1769 if (skb == NULL)
1770 break; /* Bad news! */
1771 skb->dev = dev; /* Mark as being used by this device. */
1772 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
1773 vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->tail, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
1775 if (i != RX_RING_SIZE) {
1776 int j;
1777 printk(KERN_EMERG "%s: no memory for rx ring\n", dev->name);
1778 for (j = 0; j < i; j++) {
1779 if (vp->rx_skbuff[j]) {
1780 dev_kfree_skb(vp->rx_skbuff[j]);
1781 vp->rx_skbuff[j] = NULL;
1784 retval = -ENOMEM;
1785 goto out_free_irq;
1787 /* Wrap the ring. */
1788 vp->rx_ring[i-1].next = cpu_to_le32(vp->rx_ring_dma);
1791 vortex_up(dev);
1792 return 0;
1794 out_free_irq:
1795 free_irq(dev->irq, dev);
1796 out:
1797 if (vortex_debug > 1)
1798 printk(KERN_ERR "%s: vortex_open() fails: returning %d\n", dev->name, retval);
1799 return retval;
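/*
 * Media-selection watchdog, run off vp->timer.  It checks link beat (or the
 * MII BMSR) for the current transceiver, updates duplex and Wn3_MAC_Ctrl when
 * the link partner's abilities change, and, if the medium shows no link and
 * vp->medialock is not set, steps through media_tbl[] to try the next
 * available port before rescheduling itself.
 */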
1802 static void
1803 vortex_timer(unsigned long data)
1805 struct net_device *dev = (struct net_device *)data;
1806 struct vortex_private *vp = netdev_priv(dev);
1807 long ioaddr = dev->base_addr;
1808 int next_tick = 60*HZ;
1809 int ok = 0;
1810 int media_status, mii_status, old_window;
1812 if (vortex_debug > 2) {
1813 printk(KERN_DEBUG "%s: Media selection timer tick happened, %s.\n",
1814 dev->name, media_tbl[dev->if_port].name);
1815 printk(KERN_DEBUG "dev->watchdog_timeo=%d\n", dev->watchdog_timeo);
1818 if (vp->medialock)
1819 goto leave_media_alone;
1820 disable_irq(dev->irq);
1821 old_window = inw(ioaddr + EL3_CMD) >> 13;
1822 EL3WINDOW(4);
1823 media_status = inw(ioaddr + Wn4_Media);
1824 switch (dev->if_port) {
1825 case XCVR_10baseT: case XCVR_100baseTx: case XCVR_100baseFx:
1826 if (media_status & Media_LnkBeat) {
1827 netif_carrier_on(dev);
1828 ok = 1;
1829 if (vortex_debug > 1)
1830 printk(KERN_DEBUG "%s: Media %s has link beat, %x.\n",
1831 dev->name, media_tbl[dev->if_port].name, media_status);
1832 } else {
1833 netif_carrier_off(dev);
1834 if (vortex_debug > 1) {
1835 printk(KERN_DEBUG "%s: Media %s has no link beat, %x.\n",
1836 dev->name, media_tbl[dev->if_port].name, media_status);
1839 break;
1840 case XCVR_MII: case XCVR_NWAY:
1842 mii_status = mdio_read(dev, vp->phys[0], 1);
1843 ok = 1;
1844 if (vortex_debug > 2)
1845 printk(KERN_DEBUG "%s: MII transceiver has status %4.4x.\n",
1846 dev->name, mii_status);
1847 if (mii_status & BMSR_LSTATUS) {
1848 int mii_reg5 = mdio_read(dev, vp->phys[0], 5);
1849 if (! vp->force_fd && mii_reg5 != 0xffff) {
1850 int duplex;
1852 mii_reg5 &= vp->advertising;
1853 duplex = (mii_reg5&0x0100) || (mii_reg5 & 0x01C0) == 0x0040;
1854 if (vp->full_duplex != duplex) {
1855 vp->full_duplex = duplex;
1856 printk(KERN_INFO "%s: Setting %s-duplex based on MII "
1857 "#%d link partner capability of %4.4x.\n",
1858 dev->name, vp->full_duplex ? "full" : "half",
1859 vp->phys[0], mii_reg5);
1860 /* Set the full-duplex bit. */
1861 EL3WINDOW(3);
1862 outw( (vp->full_duplex ? 0x20 : 0) |
1863 (vp->large_frames ? 0x40 : 0) |
1864 ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ? 0x100 : 0),
1865 ioaddr + Wn3_MAC_Ctrl);
1866 if (vortex_debug > 1)
1867 printk(KERN_DEBUG "Setting duplex in Wn3_MAC_Ctrl\n");
1868 /* AKPM: bug: should reset Tx and Rx after setting Duplex. Page 180 */
1871 netif_carrier_on(dev);
1872 } else {
1873 netif_carrier_off(dev);
1876 break;
1877 default: /* Other media types handled by Tx timeouts. */
1878 if (vortex_debug > 1)
1879 printk(KERN_DEBUG "%s: Media %s has no indication, %x.\n",
1880 dev->name, media_tbl[dev->if_port].name, media_status);
1881 ok = 1;
1883 if ( ! ok) {
1884 unsigned int config;
1886 do {
1887 dev->if_port = media_tbl[dev->if_port].next;
1888 } while ( ! (vp->available_media & media_tbl[dev->if_port].mask));
1889 if (dev->if_port == XCVR_Default) { /* Go back to default. */
1890 dev->if_port = vp->default_media;
1891 if (vortex_debug > 1)
1892 printk(KERN_DEBUG "%s: Media selection failing, using default "
1893 "%s port.\n",
1894 dev->name, media_tbl[dev->if_port].name);
1895 } else {
1896 if (vortex_debug > 1)
1897 printk(KERN_DEBUG "%s: Media selection failed, now trying "
1898 "%s port.\n",
1899 dev->name, media_tbl[dev->if_port].name);
1900 next_tick = media_tbl[dev->if_port].wait;
1902 outw((media_status & ~(Media_10TP|Media_SQE)) |
1903 media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
1905 EL3WINDOW(3);
1906 config = inl(ioaddr + Wn3_Config);
1907 config = BFINS(config, dev->if_port, 20, 4);
1908 outl(config, ioaddr + Wn3_Config);
1910 outw(dev->if_port == XCVR_10base2 ? StartCoax : StopCoax,
1911 ioaddr + EL3_CMD);
1912 if (vortex_debug > 1)
1913 printk(KERN_DEBUG "wrote 0x%08x to Wn3_Config\n", config);
1914 /* AKPM: FIXME: Should reset Rx & Tx here. P60 of 3c90xc.pdf */
1916 EL3WINDOW(old_window);
1917 enable_irq(dev->irq);
1919 leave_media_alone:
1920 if (vortex_debug > 2)
1921 printk(KERN_DEBUG "%s: Media selection timer finished, %s.\n",
1922 dev->name, media_tbl[dev->if_port].name);
1924 mod_timer(&vp->timer, RUN_AT(next_tick));
1925 if (vp->deferred)
1926 outw(FakeIntr, ioaddr + EL3_CMD);
1927 return;
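/*
 * Transmit watchdog.  Dumps chip and ring diagnostics, runs any interrupt
 * work that may have been blocked, issues a TxReset, restarts the Tx ring
 * (boomerang) or drops the packet (vortex), then re-enables the transmitter.
 */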
1930 static void vortex_tx_timeout(struct net_device *dev)
1932 struct vortex_private *vp = netdev_priv(dev);
1933 long ioaddr = dev->base_addr;
1935 printk(KERN_ERR "%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
1936 dev->name, inb(ioaddr + TxStatus),
1937 inw(ioaddr + EL3_STATUS));
1938 EL3WINDOW(4);
1939 printk(KERN_ERR " diagnostics: net %04x media %04x dma %08x fifo %04x\n",
1940 inw(ioaddr + Wn4_NetDiag),
1941 inw(ioaddr + Wn4_Media),
1942 inl(ioaddr + PktStatus),
1943 inw(ioaddr + Wn4_FIFODiag));
1944 /* Slight code bloat to be user friendly. */
1945 if ((inb(ioaddr + TxStatus) & 0x88) == 0x88)
1946 printk(KERN_ERR "%s: Transmitter encountered 16 collisions --"
1947 " network cable problem?\n", dev->name);
1948 if (inw(ioaddr + EL3_STATUS) & IntLatch) {
1949 printk(KERN_ERR "%s: Interrupt posted but not delivered --"
1950 " IRQ blocked by another device?\n", dev->name);
1951 /* Bad idea here, but we might as well handle a few events. */
1954 /* Block interrupts because vortex_interrupt() does a bare spin_lock(). */
1956 unsigned long flags;
1957 local_irq_save(flags);
1958 if (vp->full_bus_master_tx)
1959 boomerang_interrupt(dev->irq, dev, NULL);
1960 else
1961 vortex_interrupt(dev->irq, dev, NULL);
1962 local_irq_restore(flags);
1966 if (vortex_debug > 0)
1967 dump_tx_ring(dev);
1969 issue_and_wait(dev, TxReset);
1971 vp->stats.tx_errors++;
1972 if (vp->full_bus_master_tx) {
1973 printk(KERN_DEBUG "%s: Resetting the Tx ring pointer.\n", dev->name);
1974 if (vp->cur_tx - vp->dirty_tx > 0 && inl(ioaddr + DownListPtr) == 0)
1975 outl(vp->tx_ring_dma + (vp->dirty_tx % TX_RING_SIZE) * sizeof(struct boom_tx_desc),
1976 ioaddr + DownListPtr);
1977 if (vp->cur_tx - vp->dirty_tx < TX_RING_SIZE)
1978 netif_wake_queue (dev);
1979 if (vp->drv_flags & IS_BOOMERANG)
1980 outb(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold);
1981 outw(DownUnstall, ioaddr + EL3_CMD);
1982 } else {
1983 vp->stats.tx_dropped++;
1984 netif_wake_queue(dev);
1987 /* Issue Tx Enable */
1988 outw(TxEnable, ioaddr + EL3_CMD);
1989 dev->trans_start = jiffies;
1991 /* Switch to register set 7 for normal use. */
1992 EL3WINDOW(7);
1996 /* Handle uncommon interrupt sources. This is a separate routine to minimize
1997 * the cache impact. */
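/* Sources handled here: TxComplete (really a Tx error for this driver),
 * RxEarly, StatsFull, IntReq and HostError (adapter failure).  Jabber,
 * underrun and PCI bus errors end in a Tx and/or full reset. */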
1999 static void
2000 vortex_error(struct net_device *dev, int status)
2002 struct vortex_private *vp = netdev_priv(dev);
2003 long ioaddr = dev->base_addr;
2004 int do_tx_reset = 0, reset_mask = 0;
2005 unsigned char tx_status = 0;
2007 if (vortex_debug > 2) {
2008 printk(KERN_ERR "%s: vortex_error(), status=0x%x\n", dev->name, status);
2011 if (status & TxComplete) { /* Really "TxError" for us. */
2012 tx_status = inb(ioaddr + TxStatus);
2013 /* Presumably a tx-timeout. We must merely re-enable. */
2014 if (vortex_debug > 2
2015 || (tx_status != 0x88 && vortex_debug > 0)) {
2016 printk(KERN_ERR "%s: Transmit error, Tx status register %2.2x.\n",
2017 dev->name, tx_status);
2018 if (tx_status == 0x82) {
2019 printk(KERN_ERR "Probably a duplex mismatch. See "
2020 "Documentation/networking/vortex.txt\n");
2022 dump_tx_ring(dev);
2024 if (tx_status & 0x14) vp->stats.tx_fifo_errors++;
2025 if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
2026 outb(0, ioaddr + TxStatus);
2027 if (tx_status & 0x30) { /* txJabber or txUnderrun */
2028 do_tx_reset = 1;
2029 } else if ((tx_status & 0x08) && (vp->drv_flags & MAX_COLLISION_RESET)) { /* maxCollisions */
2030 do_tx_reset = 1;
2031 reset_mask = 0x0108; /* Reset interface logic, but not download logic */
2032 } else { /* Merely re-enable the transmitter. */
2033 outw(TxEnable, ioaddr + EL3_CMD);
2037 if (status & RxEarly) { /* Rx early is unused. */
2038 vortex_rx(dev);
2039 outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
2041 if (status & StatsFull) { /* Empty statistics. */
2042 static int DoneDidThat;
2043 if (vortex_debug > 4)
2044 printk(KERN_DEBUG "%s: Updating stats.\n", dev->name);
2045 update_stats(ioaddr, dev);
2046 /* HACK: Disable statistics as an interrupt source. */
2047 /* This occurs when we have the wrong media type! */
2048 if (DoneDidThat == 0 &&
2049 inw(ioaddr + EL3_STATUS) & StatsFull) {
2050 printk(KERN_WARNING "%s: Updating statistics failed, disabling "
2051 "stats as an interrupt source.\n", dev->name);
2052 EL3WINDOW(5);
2053 outw(SetIntrEnb | (inw(ioaddr + 10) & ~StatsFull), ioaddr + EL3_CMD);
2054 vp->intr_enable &= ~StatsFull;
2055 EL3WINDOW(7);
2056 DoneDidThat++;
2059 if (status & IntReq) { /* Restore all interrupt sources. */
2060 outw(vp->status_enable, ioaddr + EL3_CMD);
2061 outw(vp->intr_enable, ioaddr + EL3_CMD);
2063 if (status & HostError) {
2064 u16 fifo_diag;
2065 EL3WINDOW(4);
2066 fifo_diag = inw(ioaddr + Wn4_FIFODiag);
2067 printk(KERN_ERR "%s: Host error, FIFO diagnostic register %4.4x.\n",
2068 dev->name, fifo_diag);
2069 /* Adapter failure requires Tx/Rx reset and reinit. */
2070 if (vp->full_bus_master_tx) {
2071 int bus_status = inl(ioaddr + PktStatus);
2072 /* 0x80000000 PCI master abort. */
2073 /* 0x40000000 PCI target abort. */
2074 if (vortex_debug)
2075 printk(KERN_ERR "%s: PCI bus error, bus status %8.8x\n", dev->name, bus_status);
2077 /* In this case, blow the card away */
2078 /* Must not enter D3 or we can't legally issue the reset! */
2079 vortex_down(dev, 0);
2080 issue_and_wait(dev, TotalReset | 0xff);
2081 vortex_up(dev); /* AKPM: bug. vortex_up() assumes that the rx ring is full. It may not be. */
2082 } else if (fifo_diag & 0x0400)
2083 do_tx_reset = 1;
2084 if (fifo_diag & 0x3000) {
2085 /* Reset Rx fifo and upload logic */
2086 issue_and_wait(dev, RxReset|0x07);
2087 /* Set the Rx filter to the current state. */
2088 set_rx_mode(dev);
2089 /* enable 802.1q VLAN tagged frames */
2090 set_8021q_mode(dev, 1);
2091 outw(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
2092 outw(AckIntr | HostError, ioaddr + EL3_CMD);
2096 if (do_tx_reset) {
2097 issue_and_wait(dev, TxReset|reset_mask);
2098 outw(TxEnable, ioaddr + EL3_CMD);
2099 if (!vp->full_bus_master_tx)
2100 netif_wake_queue(dev);
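/*
 * PIO / single-shot DMA transmit path (vortex generation).  The length is
 * written as the Tx FIFO header; with vp->bus_master set, one packet at a
 * time is handed to the Wn7 master-DMA engine, otherwise the data is pushed
 * out with outsl().  The queue is stopped when the FIFO lacks room for a
 * full-sized frame and TxAvailable later restarts it.
 */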
2104 static int
2105 vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
2107 struct vortex_private *vp = netdev_priv(dev);
2108 long ioaddr = dev->base_addr;
2110 /* Put out the doubleword header... */
2111 outl(skb->len, ioaddr + TX_FIFO);
2112 if (vp->bus_master) {
2113 /* Set the bus-master controller to transfer the packet. */
2114 int len = (skb->len + 3) & ~3;
2115 outl( vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len, PCI_DMA_TODEVICE),
2116 ioaddr + Wn7_MasterAddr);
2117 outw(len, ioaddr + Wn7_MasterLen);
2118 vp->tx_skb = skb;
2119 outw(StartDMADown, ioaddr + EL3_CMD);
2120 /* netif_wake_queue() will be called at the DMADone interrupt. */
2121 } else {
2122 /* ... and the packet rounded to a doubleword. */
2123 outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
2124 dev_kfree_skb (skb);
2125 if (inw(ioaddr + TxFree) > 1536) {
2126 netif_start_queue (dev); /* AKPM: redundant? */
2127 } else {
2128 /* Interrupt us when the FIFO has room for max-sized packet. */
2129 netif_stop_queue(dev);
2130 outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
2134 dev->trans_start = jiffies;
2136 /* Clear the Tx status stack. */
2138 int tx_status;
2139 int i = 32;
2141 while (--i > 0 && (tx_status = inb(ioaddr + TxStatus)) > 0) {
2142 if (tx_status & 0x3C) { /* A Tx-disabling error occurred. */
2143 if (vortex_debug > 2)
2144 printk(KERN_DEBUG "%s: Tx error, status %2.2x.\n",
2145 dev->name, tx_status);
2146 if (tx_status & 0x04) vp->stats.tx_fifo_errors++;
2147 if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
2148 if (tx_status & 0x30) {
2149 issue_and_wait(dev, TxReset);
2151 outw(TxEnable, ioaddr + EL3_CMD);
2153 outb(0x00, ioaddr + TxStatus); /* Pop the status stack. */
2156 return 0;
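/*
 * Descriptor-ring (boomerang/cyclone) transmit path: build a download
 * descriptor (optionally with scatter-gather fragments and hardware checksum
 * flags), stall the download engine, link the new descriptor to its
 * predecessor, restart the engine with DownUnstall, and stop the queue when
 * the ring fills.
 */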
2159 static int
2160 boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
2162 struct vortex_private *vp = netdev_priv(dev);
2163 long ioaddr = dev->base_addr;
2164 /* Calculate the next Tx descriptor entry. */
2165 int entry = vp->cur_tx % TX_RING_SIZE;
2166 struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
2167 unsigned long flags;
2169 if (vortex_debug > 6) {
2170 printk(KERN_DEBUG "boomerang_start_xmit()\n");
2171 if (vortex_debug > 3)
2172 printk(KERN_DEBUG "%s: Trying to send a packet, Tx index %d.\n",
2173 dev->name, vp->cur_tx);
2176 if (vp->cur_tx - vp->dirty_tx >= TX_RING_SIZE) {
2177 if (vortex_debug > 0)
2178 printk(KERN_WARNING "%s: BUG! Tx Ring full, refusing to send buffer.\n",
2179 dev->name);
2180 netif_stop_queue(dev);
2181 return 1;
2184 vp->tx_skbuff[entry] = skb;
2186 vp->tx_ring[entry].next = 0;
2187 #if DO_ZEROCOPY
2188 if (skb->ip_summed != CHECKSUM_HW)
2189 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
2190 else
2191 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);
2193 if (!skb_shinfo(skb)->nr_frags) {
2194 vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
2195 skb->len, PCI_DMA_TODEVICE));
2196 vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
2197 } else {
2198 int i;
2200 vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data,
2201 skb->len-skb->data_len, PCI_DMA_TODEVICE));
2202 vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len-skb->data_len);
2204 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
2205 skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
2207 vp->tx_ring[entry].frag[i+1].addr =
2208 cpu_to_le32(pci_map_single(VORTEX_PCI(vp),
2209 (void*)page_address(frag->page) + frag->page_offset,
2210 frag->size, PCI_DMA_TODEVICE));
2212 if (i == skb_shinfo(skb)->nr_frags-1)
2213 vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size|LAST_FRAG);
2214 else
2215 vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size);
2218 #else
2219 vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE));
2220 vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
2221 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
2222 #endif
2224 spin_lock_irqsave(&vp->lock, flags);
2225 /* Wait for the stall to complete. */
2226 issue_and_wait(dev, DownStall);
2227 prev_entry->next = cpu_to_le32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc));
2228 if (inl(ioaddr + DownListPtr) == 0) {
2229 outl(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc), ioaddr + DownListPtr);
2230 vp->queued_packet++;
2233 vp->cur_tx++;
2234 if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1) {
2235 netif_stop_queue (dev);
2236 } else { /* Clear previous interrupt enable. */
2237 #if defined(tx_interrupt_mitigation)
2238 /* Dubious. If in boomerang_interrupt the "faster" cyclone ifdef
2239 * were selected, this would corrupt DN_COMPLETE. No? */
2241 prev_entry->status &= cpu_to_le32(~TxIntrUploaded);
2242 #endif
2244 outw(DownUnstall, ioaddr + EL3_CMD);
2245 spin_unlock_irqrestore(&vp->lock, flags);
2246 dev->trans_start = jiffies;
2247 return 0;
2250 /* The interrupt handler does all of the Rx thread work and cleans up
2251 after the Tx thread. */
2254 /* This is the ISR for the vortex series chips.
2255 * full_bus_master_tx == 0 && full_bus_master_rx == 0 */
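/* Work loop below: RxComplete packets are pulled out of the FIFO by
 * vortex_rx(), TxAvailable restarts the queue, DMADone unmaps and frees a
 * bus-master Tx buffer, and anything unusual is passed to vortex_error().
 * After max_interrupt_work events the remaining sources are masked and the
 * timer re-enables them. */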
2258 static irqreturn_t
2259 vortex_interrupt(int irq, void *dev_id, struct pt_regs *regs)
2261 struct net_device *dev = dev_id;
2262 struct vortex_private *vp = netdev_priv(dev);
2263 long ioaddr;
2264 int status;
2265 int work_done = max_interrupt_work;
2266 int handled = 0;
2268 ioaddr = dev->base_addr;
2269 spin_lock(&vp->lock);
2271 status = inw(ioaddr + EL3_STATUS);
2273 if (vortex_debug > 6)
2274 printk("vortex_interrupt(). status=0x%4x\n", status);
2276 if ((status & IntLatch) == 0)
2277 goto handler_exit; /* No interrupt: shared IRQs cause this */
2278 handled = 1;
2280 if (status & IntReq) {
2281 status |= vp->deferred;
2282 vp->deferred = 0;
2285 if (status == 0xffff) /* h/w no longer present (hotplug)? */
2286 goto handler_exit;
2288 if (vortex_debug > 4)
2289 printk(KERN_DEBUG "%s: interrupt, status %4.4x, latency %d ticks.\n",
2290 dev->name, status, inb(ioaddr + Timer));
2292 do {
2293 if (vortex_debug > 5)
2294 printk(KERN_DEBUG "%s: In interrupt loop, status %4.4x.\n",
2295 dev->name, status);
2296 if (status & RxComplete)
2297 vortex_rx(dev);
2299 if (status & TxAvailable) {
2300 if (vortex_debug > 5)
2301 printk(KERN_DEBUG " TX room bit was handled.\n");
2302 /* There's room in the FIFO for a full-sized packet. */
2303 outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
2304 netif_wake_queue (dev);
2307 if (status & DMADone) {
2308 if (inw(ioaddr + Wn7_MasterStatus) & 0x1000) {
2309 outw(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
2310 pci_unmap_single(VORTEX_PCI(vp), vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, PCI_DMA_TODEVICE);
2311 dev_kfree_skb_irq(vp->tx_skb); /* Release the transferred buffer */
2312 if (inw(ioaddr + TxFree) > 1536) {
2314 /* AKPM: FIXME: I don't think we need this. If the queue was stopped due to
2315 * insufficient FIFO room, the TxAvailable test will succeed and call
2316 * netif_wake_queue(). */
2318 netif_wake_queue(dev);
2319 } else { /* Interrupt when FIFO has room for max-sized packet. */
2320 outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
2321 netif_stop_queue(dev);
2325 /* Check for all uncommon interrupts at once. */
2326 if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq)) {
2327 if (status == 0xffff)
2328 break;
2329 vortex_error(dev, status);
2332 if (--work_done < 0) {
2333 printk(KERN_WARNING "%s: Too much work in interrupt, status "
2334 "%4.4x.\n", dev->name, status);
2335 /* Disable all pending interrupts. */
2336 do {
2337 vp->deferred |= status;
2338 outw(SetStatusEnb | (~vp->deferred & vp->status_enable),
2339 ioaddr + EL3_CMD);
2340 outw(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
2341 } while ((status = inw(ioaddr + EL3_CMD)) & IntLatch);
2342 /* The timer will reenable interrupts. */
2343 mod_timer(&vp->timer, jiffies + 1*HZ);
2344 break;
2346 /* Acknowledge the IRQ. */
2347 outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
2348 } while ((status = inw(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));
2350 if (vortex_debug > 4)
2351 printk(KERN_DEBUG "%s: exiting interrupt, status %4.4x.\n",
2352 dev->name, status);
2353 handler_exit:
2354 spin_unlock(&vp->lock);
2355 return IRQ_RETVAL(handled);
2359 /* This is the ISR for the boomerang series chips.
2360 * full_bus_master_tx == 1 && full_bus_master_rx == 1 */
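/* Work loop below: UpComplete hands received descriptors to boomerang_rx(),
 * DownComplete reclaims finished Tx descriptors (unmapping and freeing the
 * skbuffs) and wakes the queue, and uncommon events go to vortex_error().
 * The same "too much work" throttling as the vortex ISR applies. */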
2363 static irqreturn_t
2364 boomerang_interrupt(int irq, void *dev_id, struct pt_regs *regs)
2366 struct net_device *dev = dev_id;
2367 struct vortex_private *vp = netdev_priv(dev);
2368 long ioaddr;
2369 int status;
2370 int work_done = max_interrupt_work;
2372 ioaddr = dev->base_addr;
2375 /* It seems dopey to put the spinlock this early, but we could race against vortex_tx_timeout
2376 * and boomerang_start_xmit. */
2378 spin_lock(&vp->lock);
2380 status = inw(ioaddr + EL3_STATUS);
2382 if (vortex_debug > 6)
2383 printk(KERN_DEBUG "boomerang_interrupt. status=0x%4x\n", status);
2385 if ((status & IntLatch) == 0)
2386 goto handler_exit; /* No interrupt: shared IRQs can cause this */
2388 if (status == 0xffff) { /* h/w no longer present (hotplug)? */
2389 if (vortex_debug > 1)
2390 printk(KERN_DEBUG "boomerang_interrupt(1): status = 0xffff\n");
2391 goto handler_exit;
2394 if (status & IntReq) {
2395 status |= vp->deferred;
2396 vp->deferred = 0;
2399 if (vortex_debug > 4)
2400 printk(KERN_DEBUG "%s: interrupt, status %4.4x, latency %d ticks.\n",
2401 dev->name, status, inb(ioaddr + Timer));
2402 do {
2403 if (vortex_debug > 5)
2404 printk(KERN_DEBUG "%s: In interrupt loop, status %4.4x.\n",
2405 dev->name, status);
2406 if (status & UpComplete) {
2407 outw(AckIntr | UpComplete, ioaddr + EL3_CMD);
2408 if (vortex_debug > 5)
2409 printk(KERN_DEBUG "boomerang_interrupt->boomerang_rx\n");
2410 boomerang_rx(dev);
2413 if (status & DownComplete) {
2414 unsigned int dirty_tx = vp->dirty_tx;
2416 outw(AckIntr | DownComplete, ioaddr + EL3_CMD);
2417 while (vp->cur_tx - dirty_tx > 0) {
2418 int entry = dirty_tx % TX_RING_SIZE;
2419 #if 1 /* AKPM: the latter is faster, but cyclone-only */
2420 if (inl(ioaddr + DownListPtr) ==
2421 vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc))
2422 break; /* It still hasn't been processed. */
2423 #else
2424 if ((vp->tx_ring[entry].status & DN_COMPLETE) == 0)
2425 break; /* It still hasn't been processed. */
2426 #endif
2428 if (vp->tx_skbuff[entry]) {
2429 struct sk_buff *skb = vp->tx_skbuff[entry];
2430 #if DO_ZEROCOPY
2431 int i;
2432 for (i=0; i<=skb_shinfo(skb)->nr_frags; i++)
2433 pci_unmap_single(VORTEX_PCI(vp),
2434 le32_to_cpu(vp->tx_ring[entry].frag[i].addr),
2435 le32_to_cpu(vp->tx_ring[entry].frag[i].length)&0xFFF,
2436 PCI_DMA_TODEVICE);
2437 #else
2438 pci_unmap_single(VORTEX_PCI(vp),
2439 le32_to_cpu(vp->tx_ring[entry].addr), skb->len, PCI_DMA_TODEVICE);
2440 #endif
2441 dev_kfree_skb_irq(skb);
2442 vp->tx_skbuff[entry] = NULL;
2443 } else {
2444 printk(KERN_DEBUG "boomerang_interrupt: no skb!\n");
2446 /* vp->stats.tx_packets++; Counted below. */
2447 dirty_tx++;
2449 vp->dirty_tx = dirty_tx;
2450 if (vp->cur_tx - dirty_tx <= TX_RING_SIZE - 1) {
2451 if (vortex_debug > 6)
2452 printk(KERN_DEBUG "boomerang_interrupt: wake queue\n");
2453 netif_wake_queue (dev);
2457 /* Check for all uncommon interrupts at once. */
2458 if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq))
2459 vortex_error(dev, status);
2461 if (--work_done < 0) {
2462 printk(KERN_WARNING "%s: Too much work in interrupt, status "
2463 "%4.4x.\n", dev->name, status);
2464 /* Disable all pending interrupts. */
2465 do {
2466 vp->deferred |= status;
2467 outw(SetStatusEnb | (~vp->deferred & vp->status_enable),
2468 ioaddr + EL3_CMD);
2469 outw(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
2470 } while ((status = inw(ioaddr + EL3_CMD)) & IntLatch);
2471 /* The timer will reenable interrupts. */
2472 mod_timer(&vp->timer, jiffies + 1*HZ);
2473 break;
2475 /* Acknowledge the IRQ. */
2476 outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
2477 if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
2478 writel(0x8000, vp->cb_fn_base + 4);
2480 } while ((status = inw(ioaddr + EL3_STATUS)) & IntLatch);
2482 if (vortex_debug > 4)
2483 printk(KERN_DEBUG "%s: exiting interrupt, status %4.4x.\n",
2484 dev->name, status);
2485 handler_exit:
2486 spin_unlock(&vp->lock);
2487 return IRQ_HANDLED;
2490 static int vortex_rx(struct net_device *dev)
2492 struct vortex_private *vp = netdev_priv(dev);
2493 long ioaddr = dev->base_addr;
2494 int i;
2495 short rx_status;
2497 if (vortex_debug > 5)
2498 printk(KERN_DEBUG "vortex_rx(): status %4.4x, rx_status %4.4x.\n",
2499 inw(ioaddr+EL3_STATUS), inw(ioaddr+RxStatus));
2500 while ((rx_status = inw(ioaddr + RxStatus)) > 0) {
2501 if (rx_status & 0x4000) { /* Error, update stats. */
2502 unsigned char rx_error = inb(ioaddr + RxErrors);
2503 if (vortex_debug > 2)
2504 printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
2505 vp->stats.rx_errors++;
2506 if (rx_error & 0x01) vp->stats.rx_over_errors++;
2507 if (rx_error & 0x02) vp->stats.rx_length_errors++;
2508 if (rx_error & 0x04) vp->stats.rx_frame_errors++;
2509 if (rx_error & 0x08) vp->stats.rx_crc_errors++;
2510 if (rx_error & 0x10) vp->stats.rx_length_errors++;
2511 } else {
2512 /* The packet length: up to 4.5K! */
2513 int pkt_len = rx_status & 0x1fff;
2514 struct sk_buff *skb;
2516 skb = dev_alloc_skb(pkt_len + 5);
2517 if (vortex_debug > 4)
2518 printk(KERN_DEBUG "Receiving packet size %d status %4.4x.\n",
2519 pkt_len, rx_status);
2520 if (skb != NULL) {
2521 skb->dev = dev;
2522 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
2523 /* 'skb_put()' points to the start of sk_buff data area. */
2524 if (vp->bus_master &&
2525 ! (inw(ioaddr + Wn7_MasterStatus) & 0x8000)) {
2526 dma_addr_t dma = pci_map_single(VORTEX_PCI(vp), skb_put(skb, pkt_len),
2527 pkt_len, PCI_DMA_FROMDEVICE);
2528 outl(dma, ioaddr + Wn7_MasterAddr);
2529 outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
2530 outw(StartDMAUp, ioaddr + EL3_CMD);
2531 while (inw(ioaddr + Wn7_MasterStatus) & 0x8000)
2533 pci_unmap_single(VORTEX_PCI(vp), dma, pkt_len, PCI_DMA_FROMDEVICE);
2534 } else {
2535 insl(ioaddr + RX_FIFO, skb_put(skb, pkt_len),
2536 (pkt_len + 3) >> 2);
2538 outw(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
2539 skb->protocol = eth_type_trans(skb, dev);
2540 netif_rx(skb);
2541 dev->last_rx = jiffies;
2542 vp->stats.rx_packets++;
2543 /* Wait a limited time to go to next packet. */
2544 for (i = 200; i >= 0; i--)
2545 if ( ! (inw(ioaddr + EL3_STATUS) & CmdInProgress))
2546 break;
2547 continue;
2548 } else if (vortex_debug > 0)
2549 printk(KERN_NOTICE "%s: No memory to allocate a sk_buff of "
2550 "size %d.\n", dev->name, pkt_len);
2552 vp->stats.rx_dropped++;
2553 issue_and_wait(dev, RxDiscard);
2556 return 0;
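/*
 * Bus-master receive path: walk the Rx descriptor ring, copy small frames
 * into fresh skbuffs (rx_copybreak) and pass large ones up directly, use the
 * hardware checksum status bits where valid, then refill the ring and kick
 * the upload engine with UpUnstall.  If allocation fails completely,
 * rx_oom_timer is armed to retry.
 */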
2559 static int
2560 boomerang_rx(struct net_device *dev)
2562 struct vortex_private *vp = netdev_priv(dev);
2563 int entry = vp->cur_rx % RX_RING_SIZE;
2564 long ioaddr = dev->base_addr;
2565 int rx_status;
2566 int rx_work_limit = vp->dirty_rx + RX_RING_SIZE - vp->cur_rx;
2568 if (vortex_debug > 5)
2569 printk(KERN_DEBUG "boomerang_rx(): status %4.4x\n", inw(ioaddr+EL3_STATUS));
2571 while ((rx_status = le32_to_cpu(vp->rx_ring[entry].status)) & RxDComplete){
2572 if (--rx_work_limit < 0)
2573 break;
2574 if (rx_status & RxDError) { /* Error, update stats. */
2575 unsigned char rx_error = rx_status >> 16;
2576 if (vortex_debug > 2)
2577 printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
2578 vp->stats.rx_errors++;
2579 if (rx_error & 0x01) vp->stats.rx_over_errors++;
2580 if (rx_error & 0x02) vp->stats.rx_length_errors++;
2581 if (rx_error & 0x04) vp->stats.rx_frame_errors++;
2582 if (rx_error & 0x08) vp->stats.rx_crc_errors++;
2583 if (rx_error & 0x10) vp->stats.rx_length_errors++;
2584 } else {
2585 /* The packet length: up to 4.5K! */
2586 int pkt_len = rx_status & 0x1fff;
2587 struct sk_buff *skb;
2588 dma_addr_t dma = le32_to_cpu(vp->rx_ring[entry].addr);
2590 if (vortex_debug > 4)
2591 printk(KERN_DEBUG "Receiving packet size %d status %4.4x.\n",
2592 pkt_len, rx_status);
2594 /* Check if the packet is long enough to just accept without
2595 copying to a properly sized skbuff. */
2596 if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != 0) {
2597 skb->dev = dev;
2598 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
2599 pci_dma_sync_single_for_cpu(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
2600 /* 'skb_put()' points to the start of sk_buff data area. */
2601 memcpy(skb_put(skb, pkt_len),
2602 vp->rx_skbuff[entry]->tail,
2603 pkt_len);
2604 pci_dma_sync_single_for_device(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
2605 vp->rx_copy++;
2606 } else {
2607 /* Pass up the skbuff already on the Rx ring. */
2608 skb = vp->rx_skbuff[entry];
2609 vp->rx_skbuff[entry] = NULL;
2610 skb_put(skb, pkt_len);
2611 pci_unmap_single(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
2612 vp->rx_nocopy++;
2614 skb->protocol = eth_type_trans(skb, dev);
2615 { /* Use hardware checksum info. */
2616 int csum_bits = rx_status & 0xee000000;
2617 if (csum_bits &&
2618 (csum_bits == (IPChksumValid | TCPChksumValid) ||
2619 csum_bits == (IPChksumValid | UDPChksumValid))) {
2620 skb->ip_summed = CHECKSUM_UNNECESSARY;
2621 vp->rx_csumhits++;
2624 netif_rx(skb);
2625 dev->last_rx = jiffies;
2626 vp->stats.rx_packets++;
2628 entry = (++vp->cur_rx) % RX_RING_SIZE;
2630 /* Refill the Rx ring buffers. */
2631 for (; vp->cur_rx - vp->dirty_rx > 0; vp->dirty_rx++) {
2632 struct sk_buff *skb;
2633 entry = vp->dirty_rx % RX_RING_SIZE;
2634 if (vp->rx_skbuff[entry] == NULL) {
2635 skb = dev_alloc_skb(PKT_BUF_SZ);
2636 if (skb == NULL) {
2637 static unsigned long last_jif;
2638 if ((jiffies - last_jif) > 10 * HZ) {
2639 printk(KERN_WARNING "%s: memory shortage\n", dev->name);
2640 last_jif = jiffies;
2642 if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE)
2643 mod_timer(&vp->rx_oom_timer, RUN_AT(HZ * 1));
2644 break; /* Bad news! */
2646 skb->dev = dev; /* Mark as being used by this device. */
2647 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
2648 vp->rx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->tail, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
2649 vp->rx_skbuff[entry] = skb;
2651 vp->rx_ring[entry].status = 0; /* Clear complete bit. */
2652 outw(UpUnstall, ioaddr + EL3_CMD);
2654 return 0;
2658 /* If we've hit a total OOM refilling the Rx ring we poll once a second
2659 * for some memory. Otherwise there is no way to restart the rx process. */
2661 static void
2662 rx_oom_timer(unsigned long arg)
2664 struct net_device *dev = (struct net_device *)arg;
2665 struct vortex_private *vp = netdev_priv(dev);
2667 spin_lock_irq(&vp->lock);
2668 if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE) /* This test is redundant, but makes me feel good */
2669 boomerang_rx(dev);
2670 if (vortex_debug > 1) {
2671 printk(KERN_DEBUG "%s: rx_oom_timer %s\n", dev->name,
2672 ((vp->cur_rx - vp->dirty_rx) != RX_RING_SIZE) ? "succeeded" : "retrying");
2674 spin_unlock_irq(&vp->lock);
2677 static void
2678 vortex_down(struct net_device *dev, int final_down)
2680 struct vortex_private *vp = netdev_priv(dev);
2681 long ioaddr = dev->base_addr;
2683 netif_stop_queue (dev);
2685 del_timer_sync(&vp->rx_oom_timer);
2686 del_timer_sync(&vp->timer);
2688 /* Turn off statistics ASAP. We update vp->stats below. */
2689 outw(StatsDisable, ioaddr + EL3_CMD);
2691 /* Disable the receiver and transmitter. */
2692 outw(RxDisable, ioaddr + EL3_CMD);
2693 outw(TxDisable, ioaddr + EL3_CMD);
2695 /* Disable receiving 802.1q tagged frames */
2696 set_8021q_mode(dev, 0);
2698 if (dev->if_port == XCVR_10base2)
2699 /* Turn off thinnet power. Green! */
2700 outw(StopCoax, ioaddr + EL3_CMD);
2702 outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
2704 update_stats(ioaddr, dev);
2705 if (vp->full_bus_master_rx)
2706 outl(0, ioaddr + UpListPtr);
2707 if (vp->full_bus_master_tx)
2708 outl(0, ioaddr + DownListPtr);
2710 if (final_down && VORTEX_PCI(vp) && vp->enable_wol) {
2711 pci_save_state(VORTEX_PCI(vp), vp->power_state);
2712 acpi_set_WOL(dev);
2716 static int
2717 vortex_close(struct net_device *dev)
2719 struct vortex_private *vp = netdev_priv(dev);
2720 long ioaddr = dev->base_addr;
2721 int i;
2723 if (netif_device_present(dev))
2724 vortex_down(dev, 1);
2726 if (vortex_debug > 1) {
2727 printk(KERN_DEBUG"%s: vortex_close() status %4.4x, Tx status %2.2x.\n",
2728 dev->name, inw(ioaddr + EL3_STATUS), inb(ioaddr + TxStatus));
2729 printk(KERN_DEBUG "%s: vortex close stats: rx_nocopy %d rx_copy %d"
2730 " tx_queued %d Rx pre-checksummed %d.\n",
2731 dev->name, vp->rx_nocopy, vp->rx_copy, vp->queued_packet, vp->rx_csumhits);
2734 #if DO_ZEROCOPY
2735 if ( vp->rx_csumhits &&
2736 ((vp->drv_flags & HAS_HWCKSM) == 0) &&
2737 (hw_checksums[vp->card_idx] == -1)) {
2738 printk(KERN_WARNING "%s supports hardware checksums, and we're not using them!\n", dev->name);
2740 #endif
2742 free_irq(dev->irq, dev);
2744 if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */
2745 for (i = 0; i < RX_RING_SIZE; i++)
2746 if (vp->rx_skbuff[i]) {
2747 pci_unmap_single( VORTEX_PCI(vp), le32_to_cpu(vp->rx_ring[i].addr),
2748 PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
2749 dev_kfree_skb(vp->rx_skbuff[i]);
2750 vp->rx_skbuff[i] = NULL;
2753 if (vp->full_bus_master_tx) { /* Free Boomerang bus master Tx buffers. */
2754 for (i = 0; i < TX_RING_SIZE; i++) {
2755 if (vp->tx_skbuff[i]) {
2756 struct sk_buff *skb = vp->tx_skbuff[i];
2757 #if DO_ZEROCOPY
2758 int k;
2760 for (k=0; k<=skb_shinfo(skb)->nr_frags; k++)
2761 pci_unmap_single(VORTEX_PCI(vp),
2762 le32_to_cpu(vp->tx_ring[i].frag[k].addr),
2763 le32_to_cpu(vp->tx_ring[i].frag[k].length)&0xFFF,
2764 PCI_DMA_TODEVICE);
2765 #else
2766 pci_unmap_single(VORTEX_PCI(vp), le32_to_cpu(vp->tx_ring[i].addr), skb->len, PCI_DMA_TODEVICE);
2767 #endif
2768 dev_kfree_skb(skb);
2769 vp->tx_skbuff[i] = NULL;
2774 return 0;
2777 static void
2778 dump_tx_ring(struct net_device *dev)
2780 if (vortex_debug > 0) {
2781 struct vortex_private *vp = netdev_priv(dev);
2782 long ioaddr = dev->base_addr;
2784 if (vp->full_bus_master_tx) {
2785 int i;
2786 int stalled = inl(ioaddr + PktStatus) & 0x04; /* Possibly racy, but it's only debug stuff */
2788 printk(KERN_ERR " Flags; bus-master %d, dirty %d(%d) current %d(%d)\n",
2789 vp->full_bus_master_tx,
2790 vp->dirty_tx, vp->dirty_tx % TX_RING_SIZE,
2791 vp->cur_tx, vp->cur_tx % TX_RING_SIZE);
2792 printk(KERN_ERR " Transmit list %8.8x vs. %p.\n",
2793 inl(ioaddr + DownListPtr),
2794 &vp->tx_ring[vp->dirty_tx % TX_RING_SIZE]);
2795 issue_and_wait(dev, DownStall);
2796 for (i = 0; i < TX_RING_SIZE; i++) {
2797 printk(KERN_ERR " %d: @%p length %8.8x status %8.8x\n", i,
2798 &vp->tx_ring[i],
2799 #if DO_ZEROCOPY
2800 le32_to_cpu(vp->tx_ring[i].frag[0].length),
2801 #else
2802 le32_to_cpu(vp->tx_ring[i].length),
2803 #endif
2804 le32_to_cpu(vp->tx_ring[i].status));
2806 if (!stalled)
2807 outw(DownUnstall, ioaddr + EL3_CMD);
2812 static struct net_device_stats *vortex_get_stats(struct net_device *dev)
2814 struct vortex_private *vp = netdev_priv(dev);
2815 unsigned long flags;
2817 if (netif_device_present(dev)) { /* AKPM: Used to be netif_running */
2818 spin_lock_irqsave (&vp->lock, flags);
2819 update_stats(dev->base_addr, dev);
2820 spin_unlock_irqrestore (&vp->lock, flags);
2822 return &vp->stats;
2825 /* Update statistics.
2826 Unlike with the EL3 we need not worry about interrupts changing
2827 the window setting from underneath us, but we must still guard
2828 against a race condition with a StatsUpdate interrupt updating the
2829 table. This is done by checking that the ASM (!) code generated uses
2830 atomic updates with '+='. */
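/* Layout of the window-6 statistics registers as read below: bytes 0-8 are
 * small 8-bit event counters, byte 9 apparently holds extra high-order bits
 * for the Tx/Rx packet counts, and words 10/12 are the Rx/Tx byte counts.
 * Window 4 byte 13 supplies bits 16-19 of each byte count; e.g. up = 0x21
 * adds 0x10000 to rx_bytes and 0x20000 to tx_bytes. */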
2832 static void update_stats(long ioaddr, struct net_device *dev)
2834 struct vortex_private *vp = netdev_priv(dev);
2835 int old_window = inw(ioaddr + EL3_CMD);
2837 if (old_window == 0xffff) /* Chip suspended or ejected. */
2838 return;
2839 /* Unlike the 3c5x9 we need not turn off stats updates while reading. */
2840 /* Switch to the stats window, and read everything. */
2841 EL3WINDOW(6);
2842 vp->stats.tx_carrier_errors += inb(ioaddr + 0);
2843 vp->stats.tx_heartbeat_errors += inb(ioaddr + 1);
2844 /* Multiple collisions. */ inb(ioaddr + 2);
2845 vp->stats.collisions += inb(ioaddr + 3);
2846 vp->stats.tx_window_errors += inb(ioaddr + 4);
2847 vp->stats.rx_fifo_errors += inb(ioaddr + 5);
2848 vp->stats.tx_packets += inb(ioaddr + 6);
2849 vp->stats.tx_packets += (inb(ioaddr + 9)&0x30) << 4;
2850 /* Rx packets */ inb(ioaddr + 7); /* Must read to clear */
2851 /* Tx deferrals */ inb(ioaddr + 8);
2852 /* Don't bother with register 9, an extension of registers 6&7.
2853 If we do use the 6&7 values the atomic update assumption above
2854 is invalid. */
2855 vp->stats.rx_bytes += inw(ioaddr + 10);
2856 vp->stats.tx_bytes += inw(ioaddr + 12);
2857 /* New: On the Vortex we must also clear the BadSSD counter. */
2858 EL3WINDOW(4);
2859 inb(ioaddr + 12);
2862 u8 up = inb(ioaddr + 13);
2863 vp->stats.rx_bytes += (up & 0x0f) << 16;
2864 vp->stats.tx_bytes += (up & 0xf0) << 12;
2867 EL3WINDOW(old_window >> 13);
2868 return;
2872 static void vortex_get_drvinfo(struct net_device *dev,
2873 struct ethtool_drvinfo *info)
2875 struct vortex_private *vp = netdev_priv(dev);
2877 strcpy(info->driver, DRV_NAME);
2878 strcpy(info->version, DRV_VERSION);
2879 if (VORTEX_PCI(vp)) {
2880 strcpy(info->bus_info, pci_name(VORTEX_PCI(vp)));
2881 } else {
2882 if (VORTEX_EISA(vp))
2883 strcpy(info->bus_info, vp->gendev->bus_id);
2884 else
2885 sprintf(info->bus_info, "EISA 0x%lx %d",
2886 dev->base_addr, dev->irq);
2890 static struct ethtool_ops vortex_ethtool_ops = {
2891 .get_drvinfo = vortex_get_drvinfo,
2894 #ifdef CONFIG_PCI
2895 static int vortex_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
2897 struct vortex_private *vp = netdev_priv(dev);
2898 long ioaddr = dev->base_addr;
2899 struct mii_ioctl_data *data = if_mii(rq);
2900 int phy = vp->phys[0] & 0x1f;
2901 int retval;
2903 switch(cmd) {
2904 case SIOCGMIIPHY: /* Get address of MII PHY in use. */
2905 data->phy_id = phy;
2907 case SIOCGMIIREG: /* Read MII PHY register. */
2908 EL3WINDOW(4);
2909 data->val_out = mdio_read(dev, data->phy_id & 0x1f, data->reg_num & 0x1f);
2910 retval = 0;
2911 break;
2913 case SIOCSMIIREG: /* Write MII PHY register. */
2914 if (!capable(CAP_NET_ADMIN)) {
2915 retval = -EPERM;
2916 } else {
2917 EL3WINDOW(4);
2918 mdio_write(dev, data->phy_id & 0x1f, data->reg_num & 0x1f, data->val_in);
2919 retval = 0;
2921 break;
2922 default:
2923 retval = -EOPNOTSUPP;
2924 break;
2927 return retval;
2931 /* Must power the device up to do MDIO operations. */
2933 static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
2935 int err;
2936 struct vortex_private *vp = netdev_priv(dev);
2937 int state = 0;
2939 if(VORTEX_PCI(vp))
2940 state = VORTEX_PCI(vp)->current_state;
2942 /* The kernel core really should have pci_get_power_state() */
2944 if(state != 0)
2945 pci_set_power_state(VORTEX_PCI(vp), 0);
2946 err = vortex_do_ioctl(dev, rq, cmd);
2947 if(state != 0)
2948 pci_set_power_state(VORTEX_PCI(vp), state);
2950 return err;
2952 #endif
2955 /* Pre-Cyclone chips have no documented multicast filter, so the only
2956 multicast setting is to receive all multicast frames. At least
2957 the chip has a very clean way to set the mode, unlike many others. */
2958 static void set_rx_mode(struct net_device *dev)
2960 long ioaddr = dev->base_addr;
2961 int new_mode;
2963 if (dev->flags & IFF_PROMISC) {
2964 if (vortex_debug > 0)
2965 printk(KERN_NOTICE "%s: Setting promiscuous mode.\n", dev->name);
2966 new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast|RxProm;
2967 } else if ((dev->mc_list) || (dev->flags & IFF_ALLMULTI)) {
2968 new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast;
2969 } else
2970 new_mode = SetRxFilter | RxStation | RxBroadcast;
2972 outw(new_mode, ioaddr + EL3_CMD);
2975 #if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
2976 /* Setup the card so that it can receive frames with an 802.1q VLAN tag.
2977 Note that this must be done after each RxReset due to some backwards
2978 compatibility logic in the Cyclone and Tornado ASICs */
2980 /* The Ethernet Type used for 802.1q tagged frames */
2981 #define VLAN_ETHER_TYPE 0x8100
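/* Two cases below: Cyclone/Tornado parts are told the VLAN EtherType and a
 * MaxPktSize that allows for the 4-byte tag, so their checksum engines handle
 * tagged frames natively; older parts simply have the large-frames bit
 * toggled in Wn3_MAC_Ctrl. */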
2983 static void set_8021q_mode(struct net_device *dev, int enable)
2985 struct vortex_private *vp = netdev_priv(dev);
2986 long ioaddr = dev->base_addr;
2987 int old_window = inw(ioaddr + EL3_CMD);
2988 int mac_ctrl;
2990 if ((vp->drv_flags&IS_CYCLONE) || (vp->drv_flags&IS_TORNADO)) {
2991 /* cyclone and tornado chipsets can recognize 802.1q
2992 * tagged frames and treat them correctly */
2994 int max_pkt_size = dev->mtu+14; /* MTU+Ethernet header */
2995 if (enable)
2996 max_pkt_size += 4; /* 802.1Q VLAN tag */
2998 EL3WINDOW(3);
2999 outw(max_pkt_size, ioaddr+Wn3_MaxPktSize);
3001 /* set VlanEtherType to let the hardware checksumming
3002 treat tagged frames correctly */
3003 EL3WINDOW(7);
3004 outw(VLAN_ETHER_TYPE, ioaddr+Wn7_VlanEtherType);
3005 } else {
3006 /* on older cards we have to enable large frames */
3008 vp->large_frames = dev->mtu > 1500 || enable;
3010 EL3WINDOW(3);
3011 mac_ctrl = inw(ioaddr+Wn3_MAC_Ctrl);
3012 if (vp->large_frames)
3013 mac_ctrl |= 0x40;
3014 else
3015 mac_ctrl &= ~0x40;
3016 outw(mac_ctrl, ioaddr+Wn3_MAC_Ctrl);
3019 EL3WINDOW(old_window);
3021 #else
3023 static void set_8021q_mode(struct net_device *dev, int enable)
3028 #endif
3030 /* MII transceiver control section.
3031 Read and write the MII registers using software-generated serial
3032 MDIO protocol. See the MII specifications or DP83840A data sheet
3033 for details. */
3035 /* The maximum data clock rate is 2.5 MHz. The minimum timing is usually
3036 met by back-to-back PCI I/O cycles, but we insert a delay to avoid
3037 "overclocking" issues. */
3038 #define mdio_delay() inl(mdio_addr)
3040 #define MDIO_SHIFT_CLK 0x01
3041 #define MDIO_DIR_WRITE 0x04
3042 #define MDIO_DATA_WRITE0 (0x00 | MDIO_DIR_WRITE)
3043 #define MDIO_DATA_WRITE1 (0x02 | MDIO_DIR_WRITE)
3044 #define MDIO_DATA_READ 0x02
3045 #define MDIO_ENB_IN 0x00
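/* Usage sketch: the attached transceiver speaks the standard MII register
 * set, so a link check from timer context looks roughly like
 *
 *	EL3WINDOW(4);
 *	link_up = mdio_read(dev, vp->phys[0], MII_BMSR) & BMSR_LSTATUS;
 *
 * which is what vortex_timer() does above (it passes register 1 directly). */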
3047 /* Generate the preamble required for initial synchronization and
3048 a few older transceivers. */
3049 static void mdio_sync(long ioaddr, int bits)
3051 long mdio_addr = ioaddr + Wn4_PhysicalMgmt;
3053 /* Establish sync by sending at least 32 logic ones. */
3054 while (-- bits >= 0) {
3055 outw(MDIO_DATA_WRITE1, mdio_addr);
3056 mdio_delay();
3057 outw(MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK, mdio_addr);
3058 mdio_delay();
3062 static int mdio_read(struct net_device *dev, int phy_id, int location)
3064 struct vortex_private *vp = netdev_priv(dev);
3065 int i;
3066 long ioaddr = dev->base_addr;
3067 int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
3068 unsigned int retval = 0;
3069 long mdio_addr = ioaddr + Wn4_PhysicalMgmt;
3071 spin_lock_bh(&vp->mdio_lock);
3073 if (mii_preamble_required)
3074 mdio_sync(ioaddr, 32);
3076 /* Shift the read command bits out. */
3077 for (i = 14; i >= 0; i--) {
3078 int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
3079 outw(dataval, mdio_addr);
3080 mdio_delay();
3081 outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
3082 mdio_delay();
3084 /* Read the two transition, 16 data, and wire-idle bits. */
3085 for (i = 19; i > 0; i--) {
3086 outw(MDIO_ENB_IN, mdio_addr);
3087 mdio_delay();
3088 retval = (retval << 1) | ((inw(mdio_addr) & MDIO_DATA_READ) ? 1 : 0);
3089 outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
3090 mdio_delay();
3092 spin_unlock_bh(&vp->mdio_lock);
3093 return retval & 0x20000 ? 0xffff : retval>>1 & 0xffff;
3096 static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
3098 struct vortex_private *vp = netdev_priv(dev);
3099 long ioaddr = dev->base_addr;
3100 int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
3101 long mdio_addr = ioaddr + Wn4_PhysicalMgmt;
3102 int i;
3104 spin_lock_bh(&vp->mdio_lock);
3106 if (mii_preamble_required)
3107 mdio_sync(ioaddr, 32);
3109 /* Shift the command bits out. */
3110 for (i = 31; i >= 0; i--) {
3111 int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
3112 outw(dataval, mdio_addr);
3113 mdio_delay();
3114 outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
3115 mdio_delay();
3117 /* Leave the interface idle. */
3118 for (i = 1; i >= 0; i--) {
3119 outw(MDIO_ENB_IN, mdio_addr);
3120 mdio_delay();
3121 outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
3122 mdio_delay();
3124 spin_unlock_bh(&vp->mdio_lock);
3125 return;
3128 /* ACPI: Advanced Configuration and Power Interface. */
3129 /* Set Wake-On-LAN mode and put the board into D3 (power-down) state. */
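/* Only the magic-packet wake event (value 2 in the window-7 register at
 * offset 0x0c) is armed below; the Rx filter is left accepting station,
 * multicast and broadcast frames so the wake frame can be received, then
 * pci_enable_wake() and pci_set_power_state() move the device to D3. */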
3130 static void acpi_set_WOL(struct net_device *dev)
3132 struct vortex_private *vp = netdev_priv(dev);
3133 long ioaddr = dev->base_addr;
3135 /* Power up on: 1==Downloaded Filter, 2==Magic Packets, 4==Link Status. */
3136 EL3WINDOW(7);
3137 outw(2, ioaddr + 0x0c);
3138 /* The RxFilter must accept the WOL frames. */
3139 outw(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
3140 outw(RxEnable, ioaddr + EL3_CMD);
3142 /* Change the power state to D3; RxEnable doesn't take effect. */
3143 pci_enable_wake(VORTEX_PCI(vp), 0, 1);
3144 pci_set_power_state(VORTEX_PCI(vp), 3);
3148 static void __devexit vortex_remove_one (struct pci_dev *pdev)
3150 struct net_device *dev = pci_get_drvdata(pdev);
3151 struct vortex_private *vp;
3153 if (!dev) {
3154 printk("vortex_remove_one called for Compaq device!\n");
3155 BUG();
3158 vp = netdev_priv(dev);
3160 /* AKPM: FIXME: we should have
3161 * if (vp->cb_fn_base) iounmap(vp->cb_fn_base);
3162 * here
3164 unregister_netdev(dev);
3166 if (VORTEX_PCI(vp) && vp->enable_wol) {
3167 pci_set_power_state(VORTEX_PCI(vp), 0); /* Go active */
3168 if (vp->pm_state_valid)
3169 pci_restore_state(VORTEX_PCI(vp), vp->power_state);
3171 /* Should really use issue_and_wait() here */
3172 outw(TotalReset|0x14, dev->base_addr + EL3_CMD);
3174 pci_free_consistent(pdev,
3175 sizeof(struct boom_rx_desc) * RX_RING_SIZE
3176 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
3177 vp->rx_ring,
3178 vp->rx_ring_dma);
3179 if (vp->must_free_region)
3180 release_region(dev->base_addr, vp->io_size);
3181 free_netdev(dev);
3185 static struct pci_driver vortex_driver = {
3186 .name = "3c59x",
3187 .probe = vortex_init_one,
3188 .remove = __devexit_p(vortex_remove_one),
3189 .id_table = vortex_pci_tbl,
3190 #ifdef CONFIG_PM
3191 .suspend = vortex_suspend,
3192 .resume = vortex_resume,
3193 #endif
3197 static int vortex_have_pci;
3198 static int vortex_have_eisa;
3201 static int __init vortex_init (void)
3203 int pci_rc, eisa_rc;
3205 pci_rc = pci_module_init(&vortex_driver);
3206 eisa_rc = vortex_eisa_init();
3208 if (pci_rc == 0)
3209 vortex_have_pci = 1;
3210 if (eisa_rc > 0)
3211 vortex_have_eisa = 1;
3213 return (vortex_have_pci + vortex_have_eisa) ? 0 : -ENODEV;
3217 static void __exit vortex_eisa_cleanup (void)
3219 struct vortex_private *vp;
3220 long ioaddr;
3222 #ifdef CONFIG_EISA
3223 /* Take care of the EISA devices */
3224 eisa_driver_unregister (&vortex_eisa_driver);
3225 #endif
3227 if (compaq_net_device) {
3228 vp = compaq_net_device->priv;
3229 ioaddr = compaq_net_device->base_addr;
3231 unregister_netdev (compaq_net_device);
3232 outw (TotalReset, ioaddr + EL3_CMD);
3233 release_region (ioaddr, VORTEX_TOTAL_SIZE);
3235 free_netdev (compaq_net_device);
3240 static void __exit vortex_cleanup (void)
3242 if (vortex_have_pci)
3243 pci_unregister_driver (&vortex_driver);
3244 if (vortex_have_eisa)
3245 vortex_eisa_cleanup ();
3249 module_init(vortex_init);
3250 module_exit(vortex_cleanup);
3254 * Local variables:
3255 * c-indent-level: 4
3256 * c-basic-offset: 4
3257 * tab-width: 4
3258 * End: