1 /* EtherLinkXL.c: A 3Com EtherLink PCI III/XL ethernet driver for linux. */
2 /*
3 Written 1996-1999 by Donald Becker.
5 This software may be used and distributed according to the terms
6 of the GNU General Public License, incorporated herein by reference.
8 This driver is for the 3Com "Vortex" and "Boomerang" series ethercards.
9 Members of the series include Fast EtherLink 3c590/3c592/3c595/3c597
10 and the EtherLink XL 3c900 and 3c905 cards.
12 Problem reports and questions should be directed to
13 vortex@scyld.com
15 The author may be reached as becker@scyld.com, or C/O
16 Scyld Computing Corporation
17 410 Severn Ave., Suite 210
18 Annapolis MD 21403
20 Linux Kernel Additions:
22 0.99H+lk0.9 - David S. Miller - softnet, PCI DMA updates
23 0.99H+lk1.0 - Jeff Garzik <jgarzik@pobox.com>
24 Remove compatibility defines for kernel versions < 2.2.x.
25 Update for new 2.3.x module interface
26 LK1.1.2 (March 19, 2000)
27 * New PCI interface (jgarzik)
29 LK1.1.3 25 April 2000, Andrew Morton <andrewm@uow.edu.au>
30 - Merged with 3c575_cb.c
31 - Don't set RxComplete in boomerang interrupt enable reg
32 - spinlock in vortex_timer to protect mdio functions
33 - disable local interrupts around call to vortex_interrupt in
34 vortex_tx_timeout() (So vortex_interrupt can use spin_lock())
35 - Select window 3 in vortex_timer()'s write to Wn3_MAC_Ctrl
36 - In vortex_start_xmit(), move the lock to _after_ we've altered
37 vp->cur_tx and vp->tx_full. This defeats the race between
38 vortex_start_xmit() and vortex_interrupt which was identified
39 by Bogdan Costescu.
40 - Merged back support for six new cards from various sources
41 - Set vortex_have_pci if pci_module_init returns zero (fixes cardbus
42 insertion oops)
43 - Tell it that 3c905C has NWAY for 100bT autoneg
44 - Fix handling of SetStatusEnd in 'Too much work..' code, as
45 per 2.3.99's 3c575_cb (Dave Hinds).
46 - Split ISR into two for vortex & boomerang
47 - Fix MOD_INC/DEC races
48 - Handle resource allocation failures.
49 - Fix 3CCFE575CT LED polarity
50 - Make tx_interrupt_mitigation the default
52 LK1.1.4 25 April 2000, Andrew Morton <andrewm@uow.edu.au>
53 - Add extra TxReset to vortex_up() to fix 575_cb hotplug initialisation probs.
54 - Put vortex_info_tbl into __devinitdata
55 - In the vortex_error StatsFull HACK, disable stats in vp->intr_enable as well
56 as in the hardware.
57 - Increased the loop counter in issue_and_wait from 2,000 to 4,000.
59 LK1.1.5 28 April 2000, andrewm
60 - Added powerpc defines (John Daniel <jdaniel@etresoft.com> said these work...)
61 - Some extra diagnostics
62 - In vortex_error(), reset the Tx on maxCollisions. Otherwise most
63 chips usually get a Tx timeout.
64 - Added extra_reset module parm
65 - Replaced some inline timer manip with mod_timer
66    (François Romieu <Francois.Romieu@nic.fr>)
67 - In vortex_up(), don't make Wn3_config initialisation dependent upon has_nway
68 (this came across from 3c575_cb).
70 LK1.1.6 06 Jun 2000, andrewm
71 - Backed out the PPC defines.
72 - Use del_timer_sync(), mod_timer().
73 - Fix wrapped ulong comparison in boomerang_rx()
74 - Add IS_TORNADO, use it to suppress 3c905C checksum error msg
75 (Donald Becker, I Lee Hetherington <ilh@sls.lcs.mit.edu>)
76 - Replace union wn3_config with BFINS/BFEXT manipulation for
77 sparc64 (Pete Zaitcev, Peter Jones)
78 - In vortex_error, do_tx_reset and vortex_tx_timeout(Vortex):
79 do a netif_wake_queue() to better recover from errors. (Anders Pedersen,
80 Donald Becker)
81 - Print a warning on out-of-memory (rate limited to 1 per 10 secs)
82 - Added two more Cardbus 575 NICs: 5b57 and 6564 (Paul Wagland)
84 LK1.1.7 2 Jul 2000 andrewm
85 - Better handling of shared IRQs
86 - Reset the transmitter on a Tx reclaim error
87 - Fixed crash under OOM during vortex_open() (Mark Hemment)
88 - Fix Rx cessation problem during OOM (help from Mark Hemment)
89 - The spinlocks around the mdio access were blocking interrupts for 300uS.
90 Fix all this to use spin_lock_bh() within mdio_read/write
91 - Only write to TxFreeThreshold if it's a boomerang - other NICs don't
92 have one.
93 - Added 802.3x MAC-layer flow control support
95 LK1.1.8 13 Aug 2000 andrewm
96 - Ignore request_region() return value - already reserved if Cardbus.
97 - Merged some additional Cardbus flags from Don's 0.99Qk
98 - Some fixes for 3c556 (Fred Maciel)
99 - Fix for EISA initialisation (Jan Rekorajski)
100 - Renamed MII_XCVR_PWR and EEPROM_230 to align with 3c575_cb and D. Becker's drivers
101 - Fixed MII_XCVR_PWR for 3CCFE575CT
102 - Added INVERT_LED_PWR, used it.
103 - Backed out the extra_reset stuff
105 LK1.1.9 12 Sep 2000 andrewm
106 - Backed out the tx_reset_resume flags. It was a no-op.
107 - In vortex_error, don't reset the Tx on txReclaim errors
108 - In vortex_error, don't reset the Tx on maxCollisions errors.
109 Hence backed out all the DownListPtr logic here.
110 - In vortex_error, give Tornado cards a partial TxReset on
111 maxCollisions (David Hinds). Defined MAX_COLLISION_RESET for this.
112 - Redid some driver flags and device names based on pcmcia_cs-3.1.20.
113 - Fixed a bug where, if vp->tx_full is set when the interface
114 is downed, it remains set when the interface is upped. Bad
115 things happen.
117 LK1.1.10 17 Sep 2000 andrewm
118 - Added EEPROM_8BIT for 3c555 (Fred Maciel)
119 - Added experimental support for the 3c556B Laptop Hurricane (Louis Gerbarg)
120 - Add HAS_NWAY to "3c900 Cyclone 10Mbps TPO"
122 LK1.1.11 13 Nov 2000 andrewm
123 - Dump MOD_INC/DEC_USE_COUNT, use SET_MODULE_OWNER
125 LK1.1.12 1 Jan 2001 andrewm (2.4.0-pre1)
126 - Call pci_enable_device before we request our IRQ (Tobias Ringstrom)
127 - Add 3c590 PCI latency timer hack to vortex_probe1 (from 0.99Ra)
128 - Added extended issue_and_wait for the 3c905CX.
129 - Look for an MII on PHY index 24 first (3c905CX oddity).
130 - Add HAS_NWAY to 3cSOHO100-TX (Brett Frankenberger)
131 - Don't free skbs we don't own on oom path in vortex_open().
133 LK1.1.13 27 Jan 2001
134 - Added explicit `medialock' flag so we can truly
135 lock the media type down with `options'.
136 - "check ioremap return and some tidbits" (Arnaldo Carvalho de Melo <acme@conectiva.com.br>)
137 - Added and used EEPROM_NORESET for 3c556B PM resumes.
138 - Fixed leakage of vp->rx_ring.
139 - Break out separate HAS_HWCKSM device capability flag.
140 - Kill vp->tx_full (ANK)
141 - Merge zerocopy fragment handling (ANK?)
143 LK1.1.14 15 Feb 2001
144 - Enable WOL. Can be turned on with `enable_wol' module option.
145 - EISA and PCI initialisation fixes (jgarzik, Manfred Spraul)
146    - If a device's InternalConfig register reports it has NWAY,
147 use it, even if autoselect is enabled.
149 LK1.1.15 6 June 2001 akpm
150 - Prevent double counting of received bytes (Lars Christensen)
151 - Add ethtool support (jgarzik)
152 - Add module parm descriptions (Andrzej M. Krzysztofowicz)
153 - Implemented alloc_etherdev() API
154 - Special-case the 'Tx error 82' message.
156 LK1.1.16 18 July 2001 akpm
157 - Make NETIF_F_SG dependent upon nr_free_highpages(), not on CONFIG_HIGHMEM
158 - Lessen verbosity of bootup messages
159 - Fix WOL - use new PM API functions.
160 - Use netif_running() instead of vp->open in suspend/resume.
161 - Don't reset the interface logic on open/close/rmmod. It upsets
162 autonegotiation, and hence DHCP (from 0.99T).
163 - Back out EEPROM_NORESET flag because of the above (we do it for all
164 NICs).
165 - Correct 3c982 identification string
166 - Rename wait_for_completion() to issue_and_wait() to avoid completion.h
167 clash.
169 LK1.1.17 18Dec01 akpm
170 - PCI ID 9805 is a Python-T, not a dual-port Cyclone. Apparently.
171 And it has NWAY.
172 - Mask our advertised modes (vp->advertising) with our capabilities
173 (MII reg5) when deciding which duplex mode to use.
174 - Add `global_options' as default for options[]. Ditto global_enable_wol,
175 global_full_duplex.
177 LK1.1.18 01Jul02 akpm
178    - Fix for undocumented transceiver power-up bit on some 3c556B's
179 (Donald Becker, Rahul Karnik)
181 - See http://www.zip.com.au/~akpm/linux/#3c59x-2.3 for more details.
182 - Also see Documentation/networking/vortex.txt
186 * FIXME: This driver _could_ support MTU changing, but doesn't. See Don's hamachi.c implementation
187 * as well as other drivers
189 * NOTE: If you make 'vortex_debug' a constant (#define vortex_debug 0) the driver shrinks by 2k
190 * due to dead code elimination. There will be some performance benefits from this due to
191 * elimination of all the tests and reduced cache footprint.
195 #define DRV_NAME "3c59x"
196 #define DRV_VERSION "LK1.1.18"
197 #define DRV_RELDATE "1 Jul 2002"
201 /* A few values that may be tweaked. */
202 /* Keep the ring sizes a power of two for efficiency. */
203 #define TX_RING_SIZE 16
204 #define RX_RING_SIZE 32
205 #define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
207 /* "Knobs" that adjust features and parameters. */
208 /* Set the copy breakpoint for the copy-only-tiny-frames scheme.
209 Setting to > 1512 effectively disables this feature. */
210 #ifndef __arm__
211 static const int rx_copybreak = 200;
212 #else
213 /* ARM systems perform better by disregarding the bus-master
214 transfer capability of these cards. -- rmk */
215 static const int rx_copybreak = 1513;
216 #endif
217 /* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
218 static const int mtu = 1500;
219 /* Maximum events (Rx packets, etc.) to handle at each interrupt. */
220 static int max_interrupt_work = 32;
221 /* Tx timeout interval (millisecs) */
222 static int watchdog = 5000;
224 /* Allow aggregation of Tx interrupts. Saves CPU load at the cost
225 * of possible Tx stalls if the system is blocking interrupts
226 * somewhere else. Undefine this to disable.
228 #define tx_interrupt_mitigation 1
230 /* Put out somewhat more debugging messages. (0: no msg, 1 minimal .. 6). */
231 #define vortex_debug debug
232 #ifdef VORTEX_DEBUG
233 static int vortex_debug = VORTEX_DEBUG;
234 #else
235 static int vortex_debug = 1;
236 #endif
238 #ifndef __OPTIMIZE__
239 #error You must compile this file with the correct options!
240 #error See the last lines of the source file.
241 #error You must compile this driver with "-O".
242 #endif
244 #include <linux/config.h>
245 #include <linux/module.h>
246 #include <linux/kernel.h>
247 #include <linux/sched.h>
248 #include <linux/string.h>
249 #include <linux/timer.h>
250 #include <linux/errno.h>
251 #include <linux/in.h>
252 #include <linux/ioport.h>
253 #include <linux/slab.h>
254 #include <linux/interrupt.h>
255 #include <linux/pci.h>
256 #include <linux/mii.h>
257 #include <linux/init.h>
258 #include <linux/netdevice.h>
259 #include <linux/etherdevice.h>
260 #include <linux/skbuff.h>
261 #include <linux/ethtool.h>
262 #include <linux/highmem.h>
263 #include <asm/irq.h> /* For NR_IRQS only. */
264 #include <asm/bitops.h>
265 #include <asm/io.h>
266 #include <asm/uaccess.h>
268 /* Kernel compatibility defines, some common to David Hinds' PCMCIA package.
269 This is only in the support-all-kernels source code. */
271 #define RUN_AT(x) (jiffies + (x))
273 #include <linux/delay.h>
276 static char version[] __devinitdata =
277 DRV_NAME ": Donald Becker and others. www.scyld.com/network/vortex.html\n";
279 MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
280 MODULE_DESCRIPTION("3Com 3c59x/3c9xx ethernet driver "
281 DRV_VERSION " " DRV_RELDATE);
282 MODULE_LICENSE("GPL");
284 MODULE_PARM(debug, "i");
285 MODULE_PARM(global_options, "i");
286 MODULE_PARM(options, "1-" __MODULE_STRING(8) "i");
287 MODULE_PARM(global_full_duplex, "i");
288 MODULE_PARM(full_duplex, "1-" __MODULE_STRING(8) "i");
289 MODULE_PARM(hw_checksums, "1-" __MODULE_STRING(8) "i");
290 MODULE_PARM(flow_ctrl, "1-" __MODULE_STRING(8) "i");
291 MODULE_PARM(global_enable_wol, "i");
292 MODULE_PARM(enable_wol, "1-" __MODULE_STRING(8) "i");
293 MODULE_PARM(rx_copybreak, "i");
294 MODULE_PARM(max_interrupt_work, "i");
295 MODULE_PARM(compaq_ioaddr, "i");
296 MODULE_PARM(compaq_irq, "i");
297 MODULE_PARM(compaq_device_id, "i");
298 MODULE_PARM(watchdog, "i");
299 MODULE_PARM_DESC(debug, "3c59x debug level (0-6)");
300 MODULE_PARM_DESC(options, "3c59x: Bits 0-3: media type, bit 4: bus mastering, bit 9: full duplex");
301 MODULE_PARM_DESC(global_options, "3c59x: same as options, but applies to all NICs if options is unset");
302 MODULE_PARM_DESC(full_duplex, "3c59x full duplex setting(s) (1)");
303 MODULE_PARM_DESC(global_full_duplex, "3c59x: same as full_duplex, but applies to all NICs if options is unset");
304 MODULE_PARM_DESC(hw_checksums, "3c59x Hardware checksum checking by adapter(s) (0-1)");
305 MODULE_PARM_DESC(flow_ctrl, "3c59x 802.3x flow control usage (PAUSE only) (0-1)");
306 MODULE_PARM_DESC(enable_wol, "3c59x: Turn on Wake-on-LAN for adapter(s) (0-1)");
307 MODULE_PARM_DESC(global_enable_wol, "3c59x: same as enable_wol, but applies to all NICs if options is unset");
308 MODULE_PARM_DESC(rx_copybreak, "3c59x copy breakpoint for copy-only-tiny-frames");
309 MODULE_PARM_DESC(max_interrupt_work, "3c59x maximum events handled per interrupt");
310 MODULE_PARM_DESC(compaq_ioaddr, "3c59x PCI I/O base address (Compaq BIOS problem workaround)");
311 MODULE_PARM_DESC(compaq_irq, "3c59x PCI IRQ number (Compaq BIOS problem workaround)");
312 MODULE_PARM_DESC(compaq_device_id, "3c59x PCI device ID (Compaq BIOS problem workaround)");
313 MODULE_PARM_DESC(watchdog, "3c59x transmit timeout in milliseconds");
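/*
 * Example usage (illustrative values only -- see the parameter descriptions
 * above and Documentation/networking/vortex.txt): forcing media type 4
 * (100baseTx) and full duplex on the first two NICs, with a raised debug
 * level, could look like
 *
 *	modprobe 3c59x options=4,4 full_duplex=1,1 debug=2
 */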
315 /* Operational parameters that are not usually changed. */
317 /* The Vortex size is twice that of the original EtherLinkIII series: the
318 runtime register window, window 1, is now always mapped in.
319 The Boomerang size is twice as large as the Vortex -- it has additional
320 bus master control registers. */
321 #define VORTEX_TOTAL_SIZE 0x20
322 #define BOOMERANG_TOTAL_SIZE 0x40
324 /* Set iff a MII transceiver on any interface requires mdio preamble.
325    This is only set with the original DP83840 on older 3c905 boards, so the extra
326 code size of a per-interface flag is not worthwhile. */
327 static char mii_preamble_required;
329 #define PFX DRV_NAME ": "
334 Theory of Operation
336 I. Board Compatibility
338 This device driver is designed for the 3Com FastEtherLink and FastEtherLink
339 XL, 3Com's PCI to 10/100baseT adapters.  It also works with the 10Mbps
340 versions of the FastEtherLink cards. The supported product IDs are
341 3c590, 3c592, 3c595, 3c597, 3c900, 3c905
343 The related ISA 3c515 is supported with a separate driver, 3c515.c, included
344 with the kernel source or available from
345 cesdis.gsfc.nasa.gov:/pub/linux/drivers/3c515.html
347 II. Board-specific settings
349 PCI bus devices are configured by the system at boot time, so no jumpers
350 need to be set on the board. The system BIOS should be set to assign the
351 PCI INTA signal to an otherwise unused system IRQ line.
353 The EEPROM settings for media type and forced-full-duplex are observed.
354 The EEPROM media type should be left at the default "autoselect" unless using
355 10base2 or AUI connections which cannot be reliably detected.
357 III. Driver operation
359 The 3c59x series use an interface that's very similar to the previous 3c5x9
360 series. The primary interface is two programmed-I/O FIFOs, with an
361 alternate single-contiguous-region bus-master transfer (see next).
363 The 3c900 "Boomerang" series uses a full-bus-master interface with separate
364 lists of transmit and receive descriptors, similar to the AMD LANCE/PCnet,
365 DEC Tulip and Intel Speedo3. The first chip version retains a compatible
366 programmed-I/O interface that has been removed in 'B' and subsequent board
367 revisions.
369 One extension that is advertised in a very large font is that the adapters
370 are capable of being bus masters. On the Vortex chip this capability was
371 only for a single contiguous region making it far less useful than the full
372 bus master capability. There is a significant performance impact of taking
373 an extra interrupt or polling for the completion of each transfer, as well
374 as difficulty sharing the single transfer engine between the transmit and
375 receive threads. Using DMA transfers is a win only with large blocks or
376 with the flawed versions of the Intel Orion motherboard PCI controller.
378 The Boomerang chip's full-bus-master interface is useful, and has the
379 currently-unused advantages over other similar chips that queued transmit
380 packets may be reordered and receive buffer groups are associated with a
381 single frame.
383 With full-bus-master support, this driver uses a "RX_COPYBREAK" scheme.
384 Rather than a fixed intermediate receive buffer, this scheme allocates
385 full-sized skbuffs as receive buffers. The value RX_COPYBREAK is used as
386 the copying breakpoint: it is chosen to trade-off the memory wasted by
387 passing the full-sized skbuff to the queue layer for all frames vs. the
388 copying cost of copying a frame to a correctly-sized skbuff.
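As a rough sketch of the idea only (the real receive paths are vortex_rx()
and boomerang_rx(); "rx_buf" below simply stands for the packet data sitting
in the ring buffer, and error handling is omitted):

	if (pkt_len < rx_copybreak) {
		skb = dev_alloc_skb(pkt_len + 2);
		skb_reserve(skb, 2);			(align the IP header)
		memcpy(skb_put(skb, pkt_len), rx_buf, pkt_len);
		netif_rx(skb);				(ring buffer is reused)
	} else {
		pass the existing full-sized skb up the stack and allocate
		a fresh full-sized buffer for the ring entry
	}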
390 IIIC. Synchronization
391 The driver runs as two independent, single-threaded flows of control. One
392 is the send-packet routine, which enforces single-threaded use by the
393 dev->tbusy flag. The other thread is the interrupt handler, which is single
394 threaded by the hardware and other software.
396 IV. Notes
398 Thanks to Cameron Spitzer and Terry Murphy of 3Com for providing development
399 3c590, 3c595, and 3c900 boards.
400 The name "Vortex" is the internal 3Com project name for the PCI ASIC, and
401 the EISA version is called "Demon". According to Terry these names come
402 from rides at the local amusement park.
404 The new chips support both ethernet (1.5K) and FDDI (4.5K) packet sizes!
405 This driver only supports ethernet packets because of the skbuff allocation
406 limit of 4K.
409 /* This table drives the PCI probe routines. It's mostly boilerplate in all
410 of the drivers, and will likely be provided by some future kernel.
412 enum pci_flags_bit {
413 PCI_USES_IO=1, PCI_USES_MEM=2, PCI_USES_MASTER=4,
414 PCI_ADDR0=0x10<<0, PCI_ADDR1=0x10<<1, PCI_ADDR2=0x10<<2, PCI_ADDR3=0x10<<3,
417 enum { IS_VORTEX=1, IS_BOOMERANG=2, IS_CYCLONE=4, IS_TORNADO=8,
418 EEPROM_8BIT=0x10, /* AKPM: Uses 0x230 as the base bitmaps for EEPROM reads */
419 HAS_PWR_CTRL=0x20, HAS_MII=0x40, HAS_NWAY=0x80, HAS_CB_FNS=0x100,
420 INVERT_MII_PWR=0x200, INVERT_LED_PWR=0x400, MAX_COLLISION_RESET=0x800,
421 EEPROM_OFFSET=0x1000, HAS_HWCKSM=0x2000, WNO_XCVR_PWR=0x4000,
422 EXTRA_PREAMBLE=0x8000, };
424 enum vortex_chips {
425 CH_3C590 = 0,
426 CH_3C592,
427 CH_3C597,
428 CH_3C595_1,
429 CH_3C595_2,
431 CH_3C595_3,
432 CH_3C900_1,
433 CH_3C900_2,
434 CH_3C900_3,
435 CH_3C900_4,
437 CH_3C900_5,
438 CH_3C900B_FL,
439 CH_3C905_1,
440 CH_3C905_2,
441 CH_3C905B_1,
443 CH_3C905B_2,
444 CH_3C905B_FX,
445 CH_3C905C,
446 CH_3C980,
447 CH_3C9805,
449 CH_3CSOHO100_TX,
450 CH_3C555,
451 CH_3C556,
452 CH_3C556B,
453 CH_3C575,
455 CH_3C575_1,
456 CH_3CCFE575,
457 CH_3CCFE575CT,
458 CH_3CCFE656,
459 CH_3CCFEM656,
461 CH_3CCFEM656_1,
462 CH_3C450,
466 /* note: this array is directly indexed by the above enums, and MUST
467 * be kept in sync with both the enums above, and the PCI device
468 * table below
470 static struct vortex_chip_info {
471 const char *name;
472 int flags;
473 int drv_flags;
474 int io_size;
475 } vortex_info_tbl[] __devinitdata = {
476 #define EISA_TBL_OFFSET 0 /* Offset of this entry for vortex_eisa_init */
477 {"3c590 Vortex 10Mbps",
478 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
479 {"3c592 EISA 10Mbps Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
480 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
481 {"3c597 EISA Fast Demon/Vortex", /* AKPM: from Don's 3c59x_cb.c 0.49H */
482 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
483 {"3c595 Vortex 100baseTx",
484 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
485 {"3c595 Vortex 100baseT4",
486 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
488 {"3c595 Vortex 100base-MII",
489 PCI_USES_IO|PCI_USES_MASTER, IS_VORTEX, 32, },
490 {"3c900 Boomerang 10baseT",
491 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG, 64, },
492 {"3c900 Boomerang 10Mbps Combo",
493 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG, 64, },
494 {"3c900 Cyclone 10Mbps TPO", /* AKPM: from Don's 0.99M */
495 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
496 {"3c900 Cyclone 10Mbps Combo",
497 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
499 {"3c900 Cyclone 10Mbps TPC", /* AKPM: from Don's 0.99M */
500 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
501 {"3c900B-FL Cyclone 10base-FL",
502 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
503 {"3c905 Boomerang 100baseTx",
504 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII, 64, },
505 {"3c905 Boomerang 100baseT4",
506 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII, 64, },
507 {"3c905B Cyclone 100baseTx",
508 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
510 {"3c905B Cyclone 10/100/BNC",
511 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
512 {"3c905B-FX Cyclone 100baseFx",
513 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
514 {"3c905C Tornado",
515 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
516 {"3c980 Cyclone",
517 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
518 {"3c980C Python-T",
519 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
521 {"3cSOHO100-TX Hurricane",
522 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
523 {"3c555 Laptop Hurricane",
524 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|EEPROM_8BIT|HAS_HWCKSM, 128, },
525 {"3c556 Laptop Tornado",
526 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_8BIT|HAS_CB_FNS|INVERT_MII_PWR|
527 HAS_HWCKSM, 128, },
528 {"3c556B Laptop Hurricane",
529 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|EEPROM_OFFSET|HAS_CB_FNS|INVERT_MII_PWR|
530 WNO_XCVR_PWR|HAS_HWCKSM, 128, },
531 {"3c575 [Megahertz] 10/100 LAN CardBus",
532 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
534 {"3c575 Boomerang CardBus",
535 PCI_USES_IO|PCI_USES_MASTER, IS_BOOMERANG|HAS_MII|EEPROM_8BIT, 128, },
536 {"3CCFE575BT Cyclone CardBus",
537 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|
538 INVERT_LED_PWR|HAS_HWCKSM, 128, },
539 {"3CCFE575CT Tornado CardBus",
540 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
541 MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
542 {"3CCFE656 Cyclone CardBus",
543 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
544 INVERT_LED_PWR|HAS_HWCKSM, 128, },
545 {"3CCFEM656B Cyclone+Winmodem CardBus",
546 PCI_USES_IO|PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
547 INVERT_LED_PWR|HAS_HWCKSM, 128, },
549 {"3CXFEM656C Tornado+Winmodem CardBus", /* From pcmcia-cs-3.1.5 */
550 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_CB_FNS|EEPROM_8BIT|INVERT_MII_PWR|
551 MAX_COLLISION_RESET|HAS_HWCKSM, 128, },
552 {"3c450 HomePNA Tornado", /* AKPM: from Don's 0.99Q */
553 PCI_USES_IO|PCI_USES_MASTER, IS_TORNADO|HAS_NWAY|HAS_HWCKSM, 128, },
554 {0,}, /* 0 terminated list. */
558 static struct pci_device_id vortex_pci_tbl[] __devinitdata = {
559 { 0x10B7, 0x5900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C590 },
560 { 0x10B7, 0x5920, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C592 },
561 { 0x10B7, 0x5970, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C597 },
562 { 0x10B7, 0x5950, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_1 },
563 { 0x10B7, 0x5951, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_2 },
565 { 0x10B7, 0x5952, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C595_3 },
566 { 0x10B7, 0x9000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_1 },
567 { 0x10B7, 0x9001, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_2 },
568 { 0x10B7, 0x9004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_3 },
569 { 0x10B7, 0x9005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_4 },
571 { 0x10B7, 0x9006, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900_5 },
572 { 0x10B7, 0x900A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C900B_FL },
573 { 0x10B7, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_1 },
574 { 0x10B7, 0x9051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905_2 },
575 { 0x10B7, 0x9055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_1 },
577 { 0x10B7, 0x9058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_2 },
578 { 0x10B7, 0x905A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905B_FX },
579 { 0x10B7, 0x9200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C905C },
580 { 0x10B7, 0x9800, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C980 },
581 { 0x10B7, 0x9805, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C9805 },
583 { 0x10B7, 0x7646, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CSOHO100_TX },
584 { 0x10B7, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C555 },
585 { 0x10B7, 0x6055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556 },
586 { 0x10B7, 0x6056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C556B },
587 { 0x10B7, 0x5b57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575 },
589 { 0x10B7, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C575_1 },
590 { 0x10B7, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575 },
591 { 0x10B7, 0x5257, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE575CT },
592 { 0x10B7, 0x6560, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFE656 },
593 { 0x10B7, 0x6562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656 },
595 { 0x10B7, 0x6564, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3CCFEM656_1 },
596 { 0x10B7, 0x4500, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CH_3C450 },
597 {0,} /* 0 terminated list. */
599 MODULE_DEVICE_TABLE(pci, vortex_pci_tbl);
602 /* Operational definitions.
603 These are not used by other compilation units and thus are not
604 exported in a ".h" file.
606 First the windows. There are eight register windows, with the command
607 and status registers available in each.
609 #define EL3WINDOW(win_num) outw(SelectWindow + (win_num), ioaddr + EL3_CMD)
610 #define EL3_CMD 0x0e
611 #define EL3_STATUS 0x0e
613 /* The top five bits written to EL3_CMD are a command, the lower
614 11 bits are the parameter, if applicable.
615    Note that 11 parameter bits was fine for ethernet, but the new chip
616 can handle FDDI length frames (~4500 octets) and now parameters count
617 32-bit 'Dwords' rather than octets. */
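/*
 * So a command is issued by adding the parameter to the opcode (already
 * shifted into the top five bits in the enum below) and writing the result
 * to EL3_CMD, e.g. the receive-threshold setup done later in vortex_up():
 *
 *	outw(SetRxThreshold + (1536>>2), ioaddr + EL3_CMD);
 */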
619 enum vortex_cmd {
620 TotalReset = 0<<11, SelectWindow = 1<<11, StartCoax = 2<<11,
621 RxDisable = 3<<11, RxEnable = 4<<11, RxReset = 5<<11,
622 UpStall = 6<<11, UpUnstall = (6<<11)+1,
623 DownStall = (6<<11)+2, DownUnstall = (6<<11)+3,
624 RxDiscard = 8<<11, TxEnable = 9<<11, TxDisable = 10<<11, TxReset = 11<<11,
625 FakeIntr = 12<<11, AckIntr = 13<<11, SetIntrEnb = 14<<11,
626 SetStatusEnb = 15<<11, SetRxFilter = 16<<11, SetRxThreshold = 17<<11,
627 SetTxThreshold = 18<<11, SetTxStart = 19<<11,
628 StartDMAUp = 20<<11, StartDMADown = (20<<11)+1, StatsEnable = 21<<11,
629 StatsDisable = 22<<11, StopCoax = 23<<11, SetFilterBit = 25<<11,};
631 /* The SetRxFilter command accepts the following classes: */
632 enum RxFilter {
633 RxStation = 1, RxMulticast = 2, RxBroadcast = 4, RxProm = 8 };
635 /* Bits in the general status register. */
636 enum vortex_status {
637 IntLatch = 0x0001, HostError = 0x0002, TxComplete = 0x0004,
638 TxAvailable = 0x0008, RxComplete = 0x0010, RxEarly = 0x0020,
639 IntReq = 0x0040, StatsFull = 0x0080,
640 DMADone = 1<<8, DownComplete = 1<<9, UpComplete = 1<<10,
641 DMAInProgress = 1<<11, /* DMA controller is still busy.*/
642 CmdInProgress = 1<<12, /* EL3_CMD is still busy.*/
645 /* Register window 1 offsets, the window used in normal operation.
646 On the Vortex this window is always mapped at offsets 0x10-0x1f. */
647 enum Window1 {
648 TX_FIFO = 0x10, RX_FIFO = 0x10, RxErrors = 0x14,
649 RxStatus = 0x18, Timer=0x1A, TxStatus = 0x1B,
650 TxFree = 0x1C, /* Remaining free bytes in Tx buffer. */
652 enum Window0 {
653 Wn0EepromCmd = 10, /* Window 0: EEPROM command register. */
654 Wn0EepromData = 12, /* Window 0: EEPROM results register. */
655 IntrStatus=0x0E, /* Valid in all windows. */
657 enum Win0_EEPROM_bits {
658 EEPROM_Read = 0x80, EEPROM_WRITE = 0x40, EEPROM_ERASE = 0xC0,
659 EEPROM_EWENB = 0x30, /* Enable erasing/writing for 10 msec. */
660 EEPROM_EWDIS = 0x00, /* Disable EWENB before 10 msec timeout. */
662 /* EEPROM locations. */
663 enum eeprom_offset {
664 PhysAddr01=0, PhysAddr23=1, PhysAddr45=2, ModelID=3,
665 EtherLink3ID=7, IFXcvrIO=8, IRQLine=9,
666 NodeAddr01=10, NodeAddr23=11, NodeAddr45=12,
667 DriverTune=13, Checksum=15};
669 enum Window2 { /* Window 2. */
670 Wn2_ResetOptions=12,
672 enum Window3 { /* Window 3: MAC/config bits. */
673 Wn3_Config=0, Wn3_MAC_Ctrl=6, Wn3_Options=8,
676 #define BFEXT(value, offset, bitcount) \
677 ((((unsigned long)(value)) >> (offset)) & ((1 << (bitcount)) - 1))
679 #define BFINS(lhs, rhs, offset, bitcount) \
680 (((lhs) & ~((((1 << (bitcount)) - 1)) << (offset))) | \
681 (((rhs) & ((1 << (bitcount)) - 1)) << (offset)))
683 #define RAM_SIZE(v) BFEXT(v, 0, 3)
684 #define RAM_WIDTH(v) BFEXT(v, 3, 1)
685 #define RAM_SPEED(v) BFEXT(v, 4, 2)
686 #define ROM_SIZE(v) BFEXT(v, 6, 2)
687 #define RAM_SPLIT(v) BFEXT(v, 16, 2)
688 #define XCVR(v) BFEXT(v, 20, 4)
689 #define AUTOSELECT(v) BFEXT(v, 24, 1)
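/*
 * These accessors pick apart the 32-bit Wn3_Config (InternalConfig) word
 * without a bitfield union (replaced for sparc64's sake, see the LK1.1.6
 * entry above).  Probe and bring-up below do, in effect:
 *
 *	config = inl(ioaddr + Wn3_Config);
 *	vp->default_media = XCVR(config);		read the transceiver field
 *	config = BFINS(config, dev->if_port, 20, 4);	set the transceiver field
 *	outl(config, ioaddr + Wn3_Config);
 */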
691 enum Window4 { /* Window 4: Xcvr/media bits. */
692 Wn4_FIFODiag = 4, Wn4_NetDiag = 6, Wn4_PhysicalMgmt=8, Wn4_Media = 10,
694 enum Win4_Media_bits {
695 Media_SQE = 0x0008, /* Enable SQE error counting for AUI. */
696 Media_10TP = 0x00C0, /* Enable link beat and jabber for 10baseT. */
697 Media_Lnk = 0x0080, /* Enable just link beat for 100TX/100FX. */
698 Media_LnkBeat = 0x0800,
700 enum Window7 { /* Window 7: Bus Master control. */
701 Wn7_MasterAddr = 0, Wn7_MasterLen = 6, Wn7_MasterStatus = 12,
703 /* Boomerang bus master control registers. */
704 enum MasterCtrl {
705 PktStatus = 0x20, DownListPtr = 0x24, FragAddr = 0x28, FragLen = 0x2c,
706 TxFreeThreshold = 0x2f, UpPktStatus = 0x30, UpListPtr = 0x38,
709 /* The Rx and Tx descriptor lists.
710 Caution Alpha hackers: these types are 32 bits! Note also the 8 byte
711    alignment constraint on tx_ring[] and rx_ring[]. */
712 #define LAST_FRAG 0x80000000 /* Last Addr/Len pair in descriptor. */
713 #define DN_COMPLETE 0x00010000 /* This packet has been downloaded */
714 struct boom_rx_desc {
715 u32 next; /* Last entry points to 0. */
716 s32 status;
717 u32 addr; /* Up to 63 addr/len pairs possible. */
718 s32 length; /* Set LAST_FRAG to indicate last pair. */
720 /* Values for the Rx status entry. */
721 enum rx_desc_status {
722 RxDComplete=0x00008000, RxDError=0x4000,
723 /* See boomerang_rx() for actual error bits */
724 IPChksumErr=1<<25, TCPChksumErr=1<<26, UDPChksumErr=1<<27,
725 IPChksumValid=1<<29, TCPChksumValid=1<<30, UDPChksumValid=1<<31,
728 #ifdef MAX_SKB_FRAGS
729 #define DO_ZEROCOPY 1
730 #else
731 #define DO_ZEROCOPY 0
732 #endif
734 struct boom_tx_desc {
735 u32 next; /* Last entry points to 0. */
736 s32 status; /* bits 0:12 length, others see below. */
737 #if DO_ZEROCOPY
738 struct {
739 u32 addr;
740 s32 length;
741 } frag[1+MAX_SKB_FRAGS];
742 #else
743 u32 addr;
744 s32 length;
745 #endif
748 /* Values for the Tx status entry. */
749 enum tx_desc_status {
750 CRCDisable=0x2000, TxDComplete=0x8000,
751 AddIPChksum=0x02000000, AddTCPChksum=0x04000000, AddUDPChksum=0x08000000,
752 	TxIntrUploaded=0x80000000,		/* IRQ when in FIFO, but maybe not sent. */
753 };
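/*
 * A minimal sketch (for illustration only -- the driver's real transmit path
 * is boomerang_start_xmit(), which also handles fragmented skbs and checksum
 * offload) of filling one Tx descriptor for a linear skb with DO_ZEROCOPY:
 *
 *	dsc->next = 0;
 *	dsc->status = cpu_to_le32(skb->len | TxIntrUploaded);
 *	dsc->frag[0].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->data,
 *						       skb->len, PCI_DMA_TODEVICE));
 *	dsc->frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
 */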
755 /* Chip features we care about in vp->capabilities, read from the EEPROM. */
756 enum ChipCaps { CapBusMaster=0x20, CapPwrMgmt=0x2000 };
758 struct vortex_private {
759 /* The Rx and Tx rings should be quad-word-aligned. */
760 struct boom_rx_desc* rx_ring;
761 struct boom_tx_desc* tx_ring;
762 dma_addr_t rx_ring_dma;
763 dma_addr_t tx_ring_dma;
764 /* The addresses of transmit- and receive-in-place skbuffs. */
765 struct sk_buff* rx_skbuff[RX_RING_SIZE];
766 struct sk_buff* tx_skbuff[TX_RING_SIZE];
767 struct net_device *next_module; /* NULL if PCI device */
768 unsigned int cur_rx, cur_tx; /* The next free ring entry */
769 unsigned int dirty_rx, dirty_tx; /* The ring entries to be free()ed. */
770 struct net_device_stats stats;
771 struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
772 dma_addr_t tx_skb_dma; /* Allocated DMA address for bus master ctrl DMA. */
774 /* PCI configuration space information. */
775 struct pci_dev *pdev;
776 char *cb_fn_base; /* CardBus function status addr space. */
778 /* Some values here only for performance evaluation and path-coverage */
779 int rx_nocopy, rx_copy, queued_packet, rx_csumhits;
780 int card_idx;
782 /* The remainder are related to chip state, mostly media selection. */
783 struct timer_list timer; /* Media selection timer. */
784 struct timer_list rx_oom_timer; /* Rx skb allocation retry timer */
785 int options; /* User-settable misc. driver options. */
786 unsigned int media_override:4, /* Passed-in media type. */
787 default_media:4, /* Read from the EEPROM/Wn3_Config. */
788 full_duplex:1, force_fd:1, autoselect:1,
789 bus_master:1, /* Vortex can only do a fragment bus-m. */
790 full_bus_master_tx:1, full_bus_master_rx:2, /* Boomerang */
791 flow_ctrl:1, /* Use 802.3x flow control (PAUSE only) */
792 partner_flow_ctrl:1, /* Partner supports flow control */
793 has_nway:1,
794 enable_wol:1, /* Wake-on-LAN is enabled */
795 pm_state_valid:1, /* power_state[] has sane contents */
796 open:1,
797 medialock:1,
798 must_free_region:1; /* Flag: if zero, Cardbus owns the I/O region */
799 int drv_flags;
800 u16 status_enable;
801 u16 intr_enable;
802 u16 available_media; /* From Wn3_Options. */
803 u16 capabilities, info1, info2; /* Various, from EEPROM. */
804 u16 advertising; /* NWay media advertisement */
805 unsigned char phys[2]; /* MII device addresses. */
806 u16 deferred; /* Resend these interrupts when we
807 * bale from the ISR */
808 u16 io_size; /* Size of PCI region (for release_region) */
809 spinlock_t lock; /* Serialise access to device & its vortex_private */
810 spinlock_t mdio_lock; /* Serialise access to mdio hardware */
811 u32 power_state[16];
814 /* The action to take with a media selection timer tick.
815 Note that we deviate from the 3Com order by checking 10base2 before AUI.
817 enum xcvr_types {
818 XCVR_10baseT=0, XCVR_AUI, XCVR_10baseTOnly, XCVR_10base2, XCVR_100baseTx,
819 XCVR_100baseFx, XCVR_MII=6, XCVR_NWAY=8, XCVR_ExtMII=9, XCVR_Default=10,
822 static struct media_table {
823 char *name;
824 unsigned int media_bits:16, /* Bits to set in Wn4_Media register. */
825 mask:8, /* The transceiver-present bit in Wn3_Config.*/
826 next:8; /* The media type to try next. */
827 int wait; /* Time before we check media status. */
828 } media_tbl[] = {
829 { "10baseT", Media_10TP,0x08, XCVR_10base2, (14*HZ)/10},
830 { "10Mbs AUI", Media_SQE, 0x20, XCVR_Default, (1*HZ)/10},
831 { "undefined", 0, 0x80, XCVR_10baseT, 10000},
832 { "10base2", 0, 0x10, XCVR_AUI, (1*HZ)/10},
833 { "100baseTX", Media_Lnk, 0x02, XCVR_100baseFx, (14*HZ)/10},
834 { "100baseFX", Media_Lnk, 0x04, XCVR_MII, (14*HZ)/10},
835 { "MII", 0, 0x41, XCVR_10baseT, 3*HZ },
836 { "undefined", 0, 0x01, XCVR_10baseT, 10000},
837 { "Autonegotiate", 0, 0x41, XCVR_10baseT, 3*HZ},
838 { "MII-External", 0, 0x41, XCVR_10baseT, 3*HZ },
839 	{ "Default",	 0,			0xFF, XCVR_10baseT, 10000},
840 };
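/*
 * The 'next' field chains the table into a probe order, so autoselection is
 * simply a walk along that chain until a transceiver reported present in
 * Wn3_Options is found -- exactly what vortex_up() does below:
 *
 *	dev->if_port = XCVR_100baseTx;
 *	while (!(vp->available_media & media_tbl[dev->if_port].mask))
 *		dev->if_port = media_tbl[dev->if_port].next;
 */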
842 static int vortex_probe1(struct pci_dev *pdev, long ioaddr, int irq,
843 int chip_idx, int card_idx);
844 static void vortex_up(struct net_device *dev);
845 static void vortex_down(struct net_device *dev);
846 static int vortex_open(struct net_device *dev);
847 static void mdio_sync(long ioaddr, int bits);
848 static int mdio_read(struct net_device *dev, int phy_id, int location);
849 static void mdio_write(struct net_device *vp, int phy_id, int location, int value);
850 static void vortex_timer(unsigned long arg);
851 static void rx_oom_timer(unsigned long arg);
852 static int vortex_start_xmit(struct sk_buff *skb, struct net_device *dev);
853 static int boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev);
854 static int vortex_rx(struct net_device *dev);
855 static int boomerang_rx(struct net_device *dev);
856 static void vortex_interrupt(int irq, void *dev_id, struct pt_regs *regs);
857 static void boomerang_interrupt(int irq, void *dev_id, struct pt_regs *regs);
858 static int vortex_close(struct net_device *dev);
859 static void dump_tx_ring(struct net_device *dev);
860 static void update_stats(long ioaddr, struct net_device *dev);
861 static struct net_device_stats *vortex_get_stats(struct net_device *dev);
862 static void set_rx_mode(struct net_device *dev);
863 static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
864 static void vortex_tx_timeout(struct net_device *dev);
865 static void acpi_set_WOL(struct net_device *dev);
867 /* This driver uses 'options' to pass the media type, full-duplex flag, etc. */
868 /* Option count limit only -- unlimited interfaces are supported. */
869 #define MAX_UNITS 8
870 static int options[MAX_UNITS] = { -1, -1, -1, -1, -1, -1, -1, -1,};
871 static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
872 static int hw_checksums[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
873 static int flow_ctrl[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
874 static int enable_wol[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
875 static int global_options = -1;
876 static int global_full_duplex = -1;
877 static int global_enable_wol = -1;
879 /* #define dev_alloc_skb dev_alloc_skb_debug */
881 /* A list of all installed Vortex EISA devices, for removing the driver module. */
882 static struct net_device *root_vortex_eisa_dev;
884 /* Variables to work-around the Compaq PCI BIOS32 problem. */
885 static int compaq_ioaddr, compaq_irq, compaq_device_id = 0x5900;
887 static int vortex_cards_found;
889 #ifdef CONFIG_PM
891 static int vortex_suspend (struct pci_dev *pdev, u32 state)
893 struct net_device *dev = pci_get_drvdata(pdev);
895 if (dev && dev->priv) {
896 if (netif_running(dev)) {
897 netif_device_detach(dev);
898 vortex_down(dev);
901 return 0;
904 static int vortex_resume (struct pci_dev *pdev)
906 struct net_device *dev = pci_get_drvdata(pdev);
908 if (dev && dev->priv) {
909 if (netif_running(dev)) {
910 vortex_up(dev);
911 netif_device_attach(dev);
914 return 0;
917 #endif /* CONFIG_PM */
919 /* returns count found (>= 0), or negative on error */
920 static int __init vortex_eisa_init (void)
922 long ioaddr;
923 int rc;
924 int orig_cards_found = vortex_cards_found;
926 /* Now check all slots of the EISA bus. */
927 if (!EISA_bus)
928 return 0;
930 for (ioaddr = 0x1000; ioaddr < 0x9000; ioaddr += 0x1000) {
931 int device_id;
933 if (request_region(ioaddr, VORTEX_TOTAL_SIZE, DRV_NAME) == NULL)
934 continue;
936 /* Check the standard EISA ID register for an encoded '3Com'. */
937 if (inw(ioaddr + 0xC80) != 0x6d50) {
938 release_region (ioaddr, VORTEX_TOTAL_SIZE);
939 continue;
942 /* Check for a product that we support, 3c59{2,7} any rev. */
943 device_id = (inb(ioaddr + 0xC82)<<8) + inb(ioaddr + 0xC83);
944 if ((device_id & 0xFF00) != 0x5900) {
945 release_region (ioaddr, VORTEX_TOTAL_SIZE);
946 continue;
949 rc = vortex_probe1(NULL, ioaddr, inw(ioaddr + 0xC88) >> 12,
950 EISA_TBL_OFFSET, vortex_cards_found);
951 if (rc == 0)
952 vortex_cards_found++;
953 else
954 release_region (ioaddr, VORTEX_TOTAL_SIZE);
957 /* Special code to work-around the Compaq PCI BIOS32 problem. */
958 if (compaq_ioaddr) {
959 vortex_probe1(NULL, compaq_ioaddr, compaq_irq,
960 compaq_device_id, vortex_cards_found++);
963 return vortex_cards_found - orig_cards_found;
966 /* returns count (>= 0), or negative on error */
967 static int __devinit vortex_init_one (struct pci_dev *pdev,
968 const struct pci_device_id *ent)
970 int rc;
972 /* wake up and enable device */
973 if (pci_enable_device (pdev)) {
974 rc = -EIO;
975 } else {
976 rc = vortex_probe1 (pdev, pci_resource_start (pdev, 0), pdev->irq,
977 ent->driver_data, vortex_cards_found);
978 if (rc == 0)
979 vortex_cards_found++;
981 return rc;
985 * Start up the PCI device which is described by *pdev.
986 * Return 0 on success.
988 * NOTE: pdev can be NULL, for the case of an EISA driver
990 static int __devinit vortex_probe1(struct pci_dev *pdev,
991 long ioaddr, int irq,
992 int chip_idx, int card_idx)
994 struct vortex_private *vp;
995 int option;
996 unsigned int eeprom[0x40], checksum = 0; /* EEPROM contents */
997 int i, step;
998 struct net_device *dev;
999 static int printed_version;
1000 int retval, print_info;
1001 struct vortex_chip_info * const vci = &vortex_info_tbl[chip_idx];
1002 char *print_name;
1004 if (!printed_version) {
1005 printk (version);
1006 printed_version = 1;
1009 print_name = pdev ? pdev->slot_name : "3c59x";
1011 dev = alloc_etherdev(sizeof(*vp));
1012 retval = -ENOMEM;
1013 if (!dev) {
1014 printk (KERN_ERR PFX "unable to allocate etherdev, aborting\n");
1015 goto out;
1017 SET_MODULE_OWNER(dev);
1018 vp = dev->priv;
1020 option = global_options;
1022 /* The lower four bits are the media type. */
1023 if (dev->mem_start) {
1025 * The 'options' param is passed in as the third arg to the
1026 * LILO 'ether=' argument for non-modular use
1028 option = dev->mem_start;
1030 else if (card_idx < MAX_UNITS) {
1031 if (options[card_idx] >= 0)
1032 option = options[card_idx];
1035 if (option > 0) {
1036 if (option & 0x8000)
1037 vortex_debug = 7;
1038 if (option & 0x4000)
1039 vortex_debug = 2;
1040 if (option & 0x0400)
1041 vp->enable_wol = 1;
1044 print_info = (vortex_debug > 1);
1045 if (print_info)
1046 printk (KERN_INFO "See Documentation/networking/vortex.txt\n");
1048 printk(KERN_INFO "%s: 3Com %s %s at 0x%lx. Vers " DRV_VERSION "\n",
1049 print_name,
1050 pdev ? "PCI" : "EISA",
1051 vci->name,
1052 ioaddr);
1054 dev->base_addr = ioaddr;
1055 dev->irq = irq;
1056 dev->mtu = mtu;
1057 vp->drv_flags = vci->drv_flags;
1058 vp->has_nway = (vci->drv_flags & HAS_NWAY) ? 1 : 0;
1059 vp->io_size = vci->io_size;
1060 vp->card_idx = card_idx;
1062 /* module list only for EISA devices */
1063 if (pdev == NULL) {
1064 vp->next_module = root_vortex_eisa_dev;
1065 root_vortex_eisa_dev = dev;
1068 /* PCI-only startup logic */
1069 if (pdev) {
1070 /* EISA resources already marked, so only PCI needs to do this here */
1071 /* Ignore return value, because Cardbus drivers already allocate for us */
1072 if (request_region(ioaddr, vci->io_size, print_name) != NULL)
1073 vp->must_free_region = 1;
1075 /* enable bus-mastering if necessary */
1076 if (vci->flags & PCI_USES_MASTER)
1077 pci_set_master (pdev);
1079 if (vci->drv_flags & IS_VORTEX) {
1080 u8 pci_latency;
1081 u8 new_latency = 248;
1083 /* Check the PCI latency value. On the 3c590 series the latency timer
1084 must be set to the maximum value to avoid data corruption that occurs
1085 	   when the timer expires during a transfer.  This bug exists in the Vortex
1086 chip only. */
1087 pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &pci_latency);
1088 if (pci_latency < new_latency) {
1089 printk(KERN_INFO "%s: Overriding PCI latency"
1090 " timer (CFLT) setting of %d, new value is %d.\n",
1091 print_name, pci_latency, new_latency);
1092 pci_write_config_byte(pdev, PCI_LATENCY_TIMER, new_latency);
1097 spin_lock_init(&vp->lock);
1098 spin_lock_init(&vp->mdio_lock);
1099 vp->pdev = pdev;
1101 /* Makes sure rings are at least 16 byte aligned. */
1102 vp->rx_ring = pci_alloc_consistent(pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE
1103 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
1104 &vp->rx_ring_dma);
1105 retval = -ENOMEM;
1106 if (vp->rx_ring == 0)
1107 goto free_region;
1109 vp->tx_ring = (struct boom_tx_desc *)(vp->rx_ring + RX_RING_SIZE);
1110 vp->tx_ring_dma = vp->rx_ring_dma + sizeof(struct boom_rx_desc) * RX_RING_SIZE;
1112 /* if we are a PCI driver, we store info in pdev->driver_data
1113 * instead of a module list */
1114 if (pdev)
1115 pci_set_drvdata(pdev, dev);
1117 vp->media_override = 7;
1118 if (option >= 0) {
1119 vp->media_override = ((option & 7) == 2) ? 0 : option & 15;
1120 if (vp->media_override != 7)
1121 vp->medialock = 1;
1122 vp->full_duplex = (option & 0x200) ? 1 : 0;
1123 vp->bus_master = (option & 16) ? 1 : 0;
1126 if (global_full_duplex > 0)
1127 vp->full_duplex = 1;
1128 if (global_enable_wol > 0)
1129 vp->enable_wol = 1;
1131 if (card_idx < MAX_UNITS) {
1132 if (full_duplex[card_idx] > 0)
1133 vp->full_duplex = 1;
1134 if (flow_ctrl[card_idx] > 0)
1135 vp->flow_ctrl = 1;
1136 if (enable_wol[card_idx] > 0)
1137 vp->enable_wol = 1;
1140 vp->force_fd = vp->full_duplex;
1141 vp->options = option;
1142 /* Read the station address from the EEPROM. */
1143 EL3WINDOW(0);
1145 int base;
1147 if (vci->drv_flags & EEPROM_8BIT)
1148 base = 0x230;
1149 else if (vci->drv_flags & EEPROM_OFFSET)
1150 base = EEPROM_Read + 0x30;
1151 else
1152 base = EEPROM_Read;
1154 for (i = 0; i < 0x40; i++) {
1155 int timer;
1156 outw(base + i, ioaddr + Wn0EepromCmd);
1157 		/* Pause for at least 162 us for the read to take place. */
1158 for (timer = 10; timer >= 0; timer--) {
1159 udelay(162);
1160 if ((inw(ioaddr + Wn0EepromCmd) & 0x8000) == 0)
1161 break;
1163 eeprom[i] = inw(ioaddr + Wn0EepromData);
1166 for (i = 0; i < 0x18; i++)
1167 checksum ^= eeprom[i];
1168 checksum = (checksum ^ (checksum >> 8)) & 0xff;
1169 if (checksum != 0x00) { /* Grrr, needless incompatible change 3Com. */
1170 while (i < 0x21)
1171 checksum ^= eeprom[i++];
1172 checksum = (checksum ^ (checksum >> 8)) & 0xff;
1174 if ((checksum != 0x00) && !(vci->drv_flags & IS_TORNADO))
1175 printk(" ***INVALID CHECKSUM %4.4x*** ", checksum);
1176 for (i = 0; i < 3; i++)
1177 ((u16 *)dev->dev_addr)[i] = htons(eeprom[i + 10]);
1178 if (print_info) {
1179 for (i = 0; i < 6; i++)
1180 printk("%c%2.2x", i ? ':' : ' ', dev->dev_addr[i]);
1182 EL3WINDOW(2);
1183 for (i = 0; i < 6; i++)
1184 outb(dev->dev_addr[i], ioaddr + i);
1186 #ifdef __sparc__
1187 if (print_info)
1188 printk(", IRQ %s\n", __irq_itoa(dev->irq));
1189 #else
1190 if (print_info)
1191 printk(", IRQ %d\n", dev->irq);
1192 /* Tell them about an invalid IRQ. */
1193 if (dev->irq <= 0 || dev->irq >= NR_IRQS)
1194 printk(KERN_WARNING " *** Warning: IRQ %d is unlikely to work! ***\n",
1195 dev->irq);
1196 #endif
1198 EL3WINDOW(4);
1199 step = (inb(ioaddr + Wn4_NetDiag) & 0x1e) >> 1;
1200 if (print_info) {
1201 printk(KERN_INFO " product code %02x%02x rev %02x.%d date %02d-"
1202 "%02d-%02d\n", eeprom[6]&0xff, eeprom[6]>>8, eeprom[0x14],
1203 step, (eeprom[4]>>5) & 15, eeprom[4] & 31, eeprom[4]>>9);
1207 if (pdev && vci->drv_flags & HAS_CB_FNS) {
1208 unsigned long fn_st_addr; /* Cardbus function status space */
1209 unsigned short n;
1211 fn_st_addr = pci_resource_start (pdev, 2);
1212 if (fn_st_addr) {
1213 vp->cb_fn_base = ioremap(fn_st_addr, 128);
1214 retval = -ENOMEM;
1215 if (!vp->cb_fn_base)
1216 goto free_ring;
1218 if (print_info) {
1219 printk(KERN_INFO "%s: CardBus functions mapped %8.8lx->%p\n",
1220 print_name, fn_st_addr, vp->cb_fn_base);
1222 EL3WINDOW(2);
1224 n = inw(ioaddr + Wn2_ResetOptions) & ~0x4010;
1225 if (vp->drv_flags & INVERT_LED_PWR)
1226 n |= 0x10;
1227 if (vp->drv_flags & INVERT_MII_PWR)
1228 n |= 0x4000;
1229 outw(n, ioaddr + Wn2_ResetOptions);
1230 if (vp->drv_flags & WNO_XCVR_PWR) {
1231 EL3WINDOW(0);
1232 outw(0x0800, ioaddr);
1236 /* Extract our information from the EEPROM data. */
1237 vp->info1 = eeprom[13];
1238 vp->info2 = eeprom[15];
1239 vp->capabilities = eeprom[16];
1241 if (vp->info1 & 0x8000) {
1242 vp->full_duplex = 1;
1243 if (print_info)
1244 printk(KERN_INFO "Full duplex capable\n");
1248 static const char * ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
1249 unsigned int config;
1250 EL3WINDOW(3);
1251 vp->available_media = inw(ioaddr + Wn3_Options);
1252 if ((vp->available_media & 0xff) == 0) /* Broken 3c916 */
1253 vp->available_media = 0x40;
1254 config = inl(ioaddr + Wn3_Config);
1255 if (print_info) {
1256 printk(KERN_DEBUG " Internal config register is %4.4x, "
1257 "transceivers %#x.\n", config, inw(ioaddr + Wn3_Options));
1258 printk(KERN_INFO " %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
1259 8 << RAM_SIZE(config),
1260 RAM_WIDTH(config) ? "word" : "byte",
1261 ram_split[RAM_SPLIT(config)],
1262 AUTOSELECT(config) ? "autoselect/" : "",
1263 XCVR(config) > XCVR_ExtMII ? "<invalid transceiver>" :
1264 media_tbl[XCVR(config)].name);
1266 vp->default_media = XCVR(config);
1267 if (vp->default_media == XCVR_NWAY)
1268 vp->has_nway = 1;
1269 vp->autoselect = AUTOSELECT(config);
1272 if (vp->media_override != 7) {
1273 printk(KERN_INFO "%s: Media override to transceiver type %d (%s).\n",
1274 print_name, vp->media_override,
1275 media_tbl[vp->media_override].name);
1276 dev->if_port = vp->media_override;
1277 } else
1278 dev->if_port = vp->default_media;
1280 if ((vp->available_media & 0x4b) || (vci->drv_flags & HAS_NWAY) ||
1281 dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
1282 int phy, phy_idx = 0;
1283 EL3WINDOW(4);
1284 mii_preamble_required++;
1285 if (vp->drv_flags & EXTRA_PREAMBLE)
1286 mii_preamble_required++;
1287 mdio_sync(ioaddr, 32);
1288 mdio_read(dev, 24, 1);
1289 for (phy = 0; phy < 32 && phy_idx < 1; phy++) {
1290 int mii_status, phyx;
1293 * For the 3c905CX we look at index 24 first, because it bogusly
1294 * reports an external PHY at all indices
1296 if (phy == 0)
1297 phyx = 24;
1298 else if (phy <= 24)
1299 phyx = phy - 1;
1300 else
1301 phyx = phy;
1302 mii_status = mdio_read(dev, phyx, 1);
1303 if (mii_status && mii_status != 0xffff) {
1304 vp->phys[phy_idx++] = phyx;
1305 if (print_info) {
1306 printk(KERN_INFO " MII transceiver found at address %d,"
1307 " status %4x.\n", phyx, mii_status);
1309 if ((mii_status & 0x0040) == 0)
1310 mii_preamble_required++;
1313 mii_preamble_required--;
1314 if (phy_idx == 0) {
1315 printk(KERN_WARNING" ***WARNING*** No MII transceivers found!\n");
1316 vp->phys[0] = 24;
1317 } else {
1318 vp->advertising = mdio_read(dev, vp->phys[0], 4);
1319 if (vp->full_duplex) {
1320 /* Only advertise the FD media types. */
1321 vp->advertising &= ~0x02A0;
1322 mdio_write(dev, vp->phys[0], 4, vp->advertising);
1327 if (vp->capabilities & CapBusMaster) {
1328 vp->full_bus_master_tx = 1;
1329 if (print_info) {
1330 printk(KERN_INFO " Enabling bus-master transmits and %s receives.\n",
1331 (vp->info2 & 1) ? "early" : "whole-frame" );
1333 vp->full_bus_master_rx = (vp->info2 & 1) ? 1 : 2;
1334 vp->bus_master = 0; /* AKPM: vortex only */
1337 /* The 3c59x-specific entries in the device structure. */
1338 dev->open = vortex_open;
1339 if (vp->full_bus_master_tx) {
1340 dev->hard_start_xmit = boomerang_start_xmit;
1341 /* Actually, it still should work with iommu. */
1342 dev->features |= NETIF_F_SG;
1343 if (((hw_checksums[card_idx] == -1) && (vp->drv_flags & HAS_HWCKSM)) ||
1344 (hw_checksums[card_idx] == 1)) {
1345 dev->features |= NETIF_F_IP_CSUM;
1347 } else {
1348 dev->hard_start_xmit = vortex_start_xmit;
1351 if (print_info) {
1352 printk(KERN_INFO "%s: scatter/gather %sabled. h/w checksums %sabled\n",
1353 print_name,
1354 (dev->features & NETIF_F_SG) ? "en":"dis",
1355 (dev->features & NETIF_F_IP_CSUM) ? "en":"dis");
1358 dev->stop = vortex_close;
1359 dev->get_stats = vortex_get_stats;
1360 dev->do_ioctl = vortex_ioctl;
1361 dev->set_multicast_list = set_rx_mode;
1362 dev->tx_timeout = vortex_tx_timeout;
1363 dev->watchdog_timeo = (watchdog * HZ) / 1000;
1364 if (pdev && vp->enable_wol) {
1365 vp->pm_state_valid = 1;
1366 pci_save_state(vp->pdev, vp->power_state);
1367 acpi_set_WOL(dev);
1369 retval = register_netdev(dev);
1370 if (retval == 0)
1371 return 0;
1373 free_ring:
1374 pci_free_consistent(pdev,
1375 sizeof(struct boom_rx_desc) * RX_RING_SIZE
1376 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
1377 vp->rx_ring,
1378 vp->rx_ring_dma);
1379 free_region:
1380 if (vp->must_free_region)
1381 release_region(ioaddr, vci->io_size);
1382 kfree (dev);
1383 printk(KERN_ERR PFX "vortex_probe1 fails. Returns %d\n", retval);
1384 out:
1385 return retval;
1388 static void
1389 issue_and_wait(struct net_device *dev, int cmd)
1391 int i;
1393 outw(cmd, dev->base_addr + EL3_CMD);
1394 for (i = 0; i < 2000; i++) {
1395 if (!(inw(dev->base_addr + EL3_STATUS) & CmdInProgress))
1396 return;
1399 /* OK, that didn't work. Do it the slow way. One second */
1400 for (i = 0; i < 100000; i++) {
1401 if (!(inw(dev->base_addr + EL3_STATUS) & CmdInProgress)) {
1402 if (vortex_debug > 1)
1403 printk(KERN_INFO "%s: command 0x%04x took %d usecs\n",
1404 dev->name, cmd, i * 10);
1405 return;
1407 udelay(10);
1409 printk(KERN_ERR "%s: command 0x%04x did not complete! Status=0x%x\n",
1410 dev->name, cmd, inw(dev->base_addr + EL3_STATUS));
1413 static void
1414 vortex_up(struct net_device *dev)
1416 long ioaddr = dev->base_addr;
1417 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1418 unsigned int config;
1419 int i;
1421 if (vp->pdev && vp->enable_wol) {
1422 pci_set_power_state(vp->pdev, 0); /* Go active */
1423 pci_restore_state(vp->pdev, vp->power_state);
1426 /* Before initializing select the active media port. */
1427 EL3WINDOW(3);
1428 config = inl(ioaddr + Wn3_Config);
1430 if (vp->media_override != 7) {
1431 printk(KERN_INFO "%s: Media override to transceiver %d (%s).\n",
1432 dev->name, vp->media_override,
1433 media_tbl[vp->media_override].name);
1434 dev->if_port = vp->media_override;
1435 } else if (vp->autoselect) {
1436 if (vp->has_nway) {
1437 if (vortex_debug > 1)
1438 printk(KERN_INFO "%s: using NWAY device table, not %d\n",
1439 dev->name, dev->if_port);
1440 dev->if_port = XCVR_NWAY;
1441 } else {
1442 /* Find first available media type, starting with 100baseTx. */
1443 dev->if_port = XCVR_100baseTx;
1444 while (! (vp->available_media & media_tbl[dev->if_port].mask))
1445 dev->if_port = media_tbl[dev->if_port].next;
1446 if (vortex_debug > 1)
1447 printk(KERN_INFO "%s: first available media type: %s\n",
1448 dev->name, media_tbl[dev->if_port].name);
1450 } else {
1451 dev->if_port = vp->default_media;
1452 if (vortex_debug > 1)
1453 printk(KERN_INFO "%s: using default media %s\n",
1454 dev->name, media_tbl[dev->if_port].name);
1457 init_timer(&vp->timer);
1458 vp->timer.expires = RUN_AT(media_tbl[dev->if_port].wait);
1459 vp->timer.data = (unsigned long)dev;
1460 vp->timer.function = vortex_timer; /* timer handler */
1461 add_timer(&vp->timer);
1463 init_timer(&vp->rx_oom_timer);
1464 vp->rx_oom_timer.data = (unsigned long)dev;
1465 vp->rx_oom_timer.function = rx_oom_timer;
1467 if (vortex_debug > 1)
1468 printk(KERN_DEBUG "%s: Initial media type %s.\n",
1469 dev->name, media_tbl[dev->if_port].name);
1471 vp->full_duplex = vp->force_fd;
1472 config = BFINS(config, dev->if_port, 20, 4);
1473 if (vortex_debug > 6)
1474 printk(KERN_DEBUG "vortex_up(): writing 0x%x to InternalConfig\n", config);
1475 outl(config, ioaddr + Wn3_Config);
1477 if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
1478 int mii_reg1, mii_reg5;
1479 EL3WINDOW(4);
1480 /* Read BMSR (reg1) only to clear old status. */
1481 mii_reg1 = mdio_read(dev, vp->phys[0], 1);
1482 mii_reg5 = mdio_read(dev, vp->phys[0], 5);
1483 if (mii_reg5 == 0xffff || mii_reg5 == 0x0000) {
1484 netif_carrier_off(dev); /* No MII device or no link partner report */
1485 } else {
1486 mii_reg5 &= vp->advertising;
1487 if ((mii_reg5 & 0x0100) != 0 /* 100baseTx-FD */
1488 || (mii_reg5 & 0x00C0) == 0x0040) /* 10T-FD, but not 100-HD */
1489 vp->full_duplex = 1;
1490 netif_carrier_on(dev);
1492 vp->partner_flow_ctrl = ((mii_reg5 & 0x0400) != 0);
1493 if (vortex_debug > 1)
1494 printk(KERN_INFO "%s: MII #%d status %4.4x, link partner capability %4.4x,"
1495 " info1 %04x, setting %s-duplex.\n",
1496 dev->name, vp->phys[0],
1497 mii_reg1, mii_reg5,
1498 vp->info1, ((vp->info1 & 0x8000) || vp->full_duplex) ? "full" : "half");
1499 EL3WINDOW(3);
1502 /* Set the full-duplex bit. */
1503 outw( ((vp->info1 & 0x8000) || vp->full_duplex ? 0x20 : 0) |
1504 (dev->mtu > 1500 ? 0x40 : 0) |
1505 ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ? 0x100 : 0),
1506 ioaddr + Wn3_MAC_Ctrl);
1508 if (vortex_debug > 1) {
1509 printk(KERN_DEBUG "%s: vortex_up() InternalConfig %8.8x.\n",
1510 dev->name, config);
1513 issue_and_wait(dev, TxReset);
1515 * Don't reset the PHY - that upsets autonegotiation during DHCP operations.
1517 issue_and_wait(dev, RxReset|0x04);
1519 outw(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
1521 if (vortex_debug > 1) {
1522 EL3WINDOW(4);
1523 printk(KERN_DEBUG "%s: vortex_up() irq %d media status %4.4x.\n",
1524 dev->name, dev->irq, inw(ioaddr + Wn4_Media));
1527 /* Set the station address and mask in window 2 each time opened. */
1528 EL3WINDOW(2);
1529 for (i = 0; i < 6; i++)
1530 outb(dev->dev_addr[i], ioaddr + i);
1531 for (; i < 12; i+=2)
1532 outw(0, ioaddr + i);
1534 if (vp->cb_fn_base) {
1535 unsigned short n = inw(ioaddr + Wn2_ResetOptions) & ~0x4010;
1536 if (vp->drv_flags & INVERT_LED_PWR)
1537 n |= 0x10;
1538 if (vp->drv_flags & INVERT_MII_PWR)
1539 n |= 0x4000;
1540 outw(n, ioaddr + Wn2_ResetOptions);
1543 if (dev->if_port == XCVR_10base2)
1544 /* Start the thinnet transceiver. We should really wait 50ms...*/
1545 outw(StartCoax, ioaddr + EL3_CMD);
1546 if (dev->if_port != XCVR_NWAY) {
1547 EL3WINDOW(4);
1548 outw((inw(ioaddr + Wn4_Media) & ~(Media_10TP|Media_SQE)) |
1549 media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
1552 /* Switch to the stats window, and clear all stats by reading. */
1553 outw(StatsDisable, ioaddr + EL3_CMD);
1554 EL3WINDOW(6);
1555 for (i = 0; i < 10; i++)
1556 inb(ioaddr + i);
1557 inw(ioaddr + 10);
1558 inw(ioaddr + 12);
1559 /* New: On the Vortex we must also clear the BadSSD counter. */
1560 EL3WINDOW(4);
1561 inb(ioaddr + 12);
1562 /* ..and on the Boomerang we enable the extra statistics bits. */
1563 outw(0x0040, ioaddr + Wn4_NetDiag);
1565 /* Switch to register set 7 for normal use. */
1566 EL3WINDOW(7);
1568 if (vp->full_bus_master_rx) { /* Boomerang bus master. */
1569 vp->cur_rx = vp->dirty_rx = 0;
1570 /* Initialize the RxEarly register as recommended. */
1571 outw(SetRxThreshold + (1536>>2), ioaddr + EL3_CMD);
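	/* The threshold argument is presumably in dword units; 1536>>2 covers a full-sized frame. */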
1572 outl(0x0020, ioaddr + PktStatus);
1573 outl(vp->rx_ring_dma, ioaddr + UpListPtr);
1575 if (vp->full_bus_master_tx) { /* Boomerang bus master Tx. */
1576 vp->cur_tx = vp->dirty_tx = 0;
1577 if (vp->drv_flags & IS_BOOMERANG)
1578 outb(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold); /* Room for a packet. */
1579 /* Clear the Rx, Tx rings. */
1580 for (i = 0; i < RX_RING_SIZE; i++) /* AKPM: this is done in vortex_open, too */
1581 vp->rx_ring[i].status = 0;
1582 for (i = 0; i < TX_RING_SIZE; i++)
1583 vp->tx_skbuff[i] = 0;
1584 outl(0, ioaddr + DownListPtr);
1586 	/* Set receiver mode: presumably accept broadcast and phys addr only. */
1587 set_rx_mode(dev);
1588 outw(StatsEnable, ioaddr + EL3_CMD); /* Turn on statistics. */
1590 // issue_and_wait(dev, SetTxStart|0x07ff);
1591 outw(RxEnable, ioaddr + EL3_CMD); /* Enable the receiver. */
1592 outw(TxEnable, ioaddr + EL3_CMD); /* Enable transmitter. */
1593 /* Allow status bits to be seen. */
1594 vp->status_enable = SetStatusEnb | HostError|IntReq|StatsFull|TxComplete|
1595 (vp->full_bus_master_tx ? DownComplete : TxAvailable) |
1596 (vp->full_bus_master_rx ? UpComplete : RxComplete) |
1597 (vp->bus_master ? DMADone : 0);
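	/* status_enable selects which events are latched in the status register;
	 * intr_enable below selects which of those actually raise an interrupt. */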
1598 vp->intr_enable = SetIntrEnb | IntLatch | TxAvailable |
1599 (vp->full_bus_master_rx ? 0 : RxComplete) |
1600 StatsFull | HostError | TxComplete | IntReq
1601 | (vp->bus_master ? DMADone : 0) | UpComplete | DownComplete;
1602 outw(vp->status_enable, ioaddr + EL3_CMD);
1603 /* Ack all pending events, and set active indicator mask. */
1604 outw(AckIntr | IntLatch | TxAvailable | RxEarly | IntReq,
1605 ioaddr + EL3_CMD);
1606 outw(vp->intr_enable, ioaddr + EL3_CMD);
1607 if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
1608 writel(0x8000, vp->cb_fn_base + 4);
1609 netif_start_queue (dev);
1612 static int
1613 vortex_open(struct net_device *dev)
1615 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1616 int i;
1617 int retval;
1619 /* Use the now-standard shared IRQ implementation. */
1620 if ((retval = request_irq(dev->irq, vp->full_bus_master_rx ?
1621 &boomerang_interrupt : &vortex_interrupt, SA_SHIRQ, dev->name, dev))) {
1622 printk(KERN_ERR "%s: Could not reserve IRQ %d\n", dev->name, dev->irq);
1623 goto out;
1626 if (vp->full_bus_master_rx) { /* Boomerang bus master. */
1627 if (vortex_debug > 2)
1628 printk(KERN_DEBUG "%s: Filling in the Rx ring.\n", dev->name);
1629 for (i = 0; i < RX_RING_SIZE; i++) {
1630 struct sk_buff *skb;
1631 vp->rx_ring[i].next = cpu_to_le32(vp->rx_ring_dma + sizeof(struct boom_rx_desc) * (i+1));
1632 vp->rx_ring[i].status = 0; /* Clear complete bit. */
1633 vp->rx_ring[i].length = cpu_to_le32(PKT_BUF_SZ | LAST_FRAG);
1634 skb = dev_alloc_skb(PKT_BUF_SZ);
1635 vp->rx_skbuff[i] = skb;
1636 if (skb == NULL)
1637 break; /* Bad news! */
1638 skb->dev = dev; /* Mark as being used by this device. */
1639 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
1640 vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->tail, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
1642 if (i != RX_RING_SIZE) {
1643 int j;
1644 printk(KERN_EMERG "%s: no memory for rx ring\n", dev->name);
1645 for (j = 0; j < i; j++) {
1646 if (vp->rx_skbuff[j]) {
1647 dev_kfree_skb(vp->rx_skbuff[j]);
1648 vp->rx_skbuff[j] = 0;
1651 retval = -ENOMEM;
1652 goto out_free_irq;
1654 /* Wrap the ring. */
1655 vp->rx_ring[i-1].next = cpu_to_le32(vp->rx_ring_dma);
1658 vortex_up(dev);
1659 return 0;
1661 out_free_irq:
1662 free_irq(dev->irq, dev);
1663 out:
1664 if (vortex_debug > 1)
1665 printk(KERN_ERR "%s: vortex_open() fails: returning %d\n", dev->name, retval);
1666 return retval;
1669 static void
1670 vortex_timer(unsigned long data)
1672 struct net_device *dev = (struct net_device *)data;
1673 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1674 long ioaddr = dev->base_addr;
1675 int next_tick = 60*HZ;
1676 int ok = 0;
1677 int media_status, mii_status, old_window;
1679 if (vortex_debug > 2) {
1680 printk(KERN_DEBUG "%s: Media selection timer tick happened, %s.\n",
1681 dev->name, media_tbl[dev->if_port].name);
1682 printk(KERN_DEBUG "dev->watchdog_timeo=%d\n", dev->watchdog_timeo);
1685 if (vp->medialock)
1686 goto leave_media_alone;
1687 disable_irq(dev->irq);
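	/* The currently selected register window lives in the top three bits of the
	 * status word; remember it so it can be restored before re-enabling the IRQ. */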
1688 old_window = inw(ioaddr + EL3_CMD) >> 13;
1689 EL3WINDOW(4);
1690 media_status = inw(ioaddr + Wn4_Media);
1691 switch (dev->if_port) {
1692 case XCVR_10baseT: case XCVR_100baseTx: case XCVR_100baseFx:
1693 if (media_status & Media_LnkBeat) {
1694 netif_carrier_on(dev);
1695 ok = 1;
1696 if (vortex_debug > 1)
1697 printk(KERN_DEBUG "%s: Media %s has link beat, %x.\n",
1698 dev->name, media_tbl[dev->if_port].name, media_status);
1699 } else if (vortex_debug > 1) {
1700 netif_carrier_off(dev);
1701 printk(KERN_DEBUG "%s: Media %s has no link beat, %x.\n",
1702 dev->name, media_tbl[dev->if_port].name, media_status);
1704 break;
1705 case XCVR_MII: case XCVR_NWAY:
1707 mii_status = mdio_read(dev, vp->phys[0], 1);
1708 ok = 1;
1709 if (vortex_debug > 2)
1710 printk(KERN_DEBUG "%s: MII transceiver has status %4.4x.\n",
1711 dev->name, mii_status);
1712 if (mii_status & BMSR_LSTATUS) {
1713 int mii_reg5 = mdio_read(dev, vp->phys[0], 5);
1714 if (! vp->force_fd && mii_reg5 != 0xffff) {
1715 int duplex;
1717 mii_reg5 &= vp->advertising;
1718 duplex = (mii_reg5&0x0100) || (mii_reg5 & 0x01C0) == 0x0040;
1719 if (vp->full_duplex != duplex) {
1720 vp->full_duplex = duplex;
1721 printk(KERN_INFO "%s: Setting %s-duplex based on MII "
1722 "#%d link partner capability of %4.4x.\n",
1723 dev->name, vp->full_duplex ? "full" : "half",
1724 vp->phys[0], mii_reg5);
1725 /* Set the full-duplex bit. */
1726 EL3WINDOW(3);
1727 outw( (vp->full_duplex ? 0x20 : 0) |
1728 (dev->mtu > 1500 ? 0x40 : 0) |
1729 ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ? 0x100 : 0),
1730 ioaddr + Wn3_MAC_Ctrl);
1731 if (vortex_debug > 1)
1732 printk(KERN_DEBUG "Setting duplex in Wn3_MAC_Ctrl\n");
1733 /* AKPM: bug: should reset Tx and Rx after setting Duplex. Page 180 */
1736 netif_carrier_on(dev);
1737 } else {
1738 netif_carrier_off(dev);
1741 break;
1742 default: /* Other media types handled by Tx timeouts. */
1743 if (vortex_debug > 1)
1744 printk(KERN_DEBUG "%s: Media %s has no indication, %x.\n",
1745 dev->name, media_tbl[dev->if_port].name, media_status);
1746 ok = 1;
1748 if ( ! ok) {
1749 unsigned int config;
1751 do {
1752 dev->if_port = media_tbl[dev->if_port].next;
1753 } while ( ! (vp->available_media & media_tbl[dev->if_port].mask));
1754 if (dev->if_port == XCVR_Default) { /* Go back to default. */
1755 dev->if_port = vp->default_media;
1756 if (vortex_debug > 1)
1757 printk(KERN_DEBUG "%s: Media selection failing, using default "
1758 "%s port.\n",
1759 dev->name, media_tbl[dev->if_port].name);
1760 } else {
1761 if (vortex_debug > 1)
1762 printk(KERN_DEBUG "%s: Media selection failed, now trying "
1763 "%s port.\n",
1764 dev->name, media_tbl[dev->if_port].name);
1765 next_tick = media_tbl[dev->if_port].wait;
1767 outw((media_status & ~(Media_10TP|Media_SQE)) |
1768 media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
1770 EL3WINDOW(3);
1771 config = inl(ioaddr + Wn3_Config);
1772 config = BFINS(config, dev->if_port, 20, 4);
1773 outl(config, ioaddr + Wn3_Config);
1775 outw(dev->if_port == XCVR_10base2 ? StartCoax : StopCoax,
1776 ioaddr + EL3_CMD);
1777 if (vortex_debug > 1)
1778 printk(KERN_DEBUG "wrote 0x%08x to Wn3_Config\n", config);
1779 /* AKPM: FIXME: Should reset Rx & Tx here. P60 of 3c90xc.pdf */
1781 EL3WINDOW(old_window);
1782 enable_irq(dev->irq);
1784 leave_media_alone:
1785 if (vortex_debug > 2)
1786 printk(KERN_DEBUG "%s: Media selection timer finished, %s.\n",
1787 dev->name, media_tbl[dev->if_port].name);
1789 mod_timer(&vp->timer, RUN_AT(next_tick));
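	/* If the ISR's "too much work" path masked off interrupt sources, poke a fake
	 * interrupt so the handler can fold vp->deferred back in and restore the enables. */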
1790 if (vp->deferred)
1791 outw(FakeIntr, ioaddr + EL3_CMD);
1792 return;
1795 static void vortex_tx_timeout(struct net_device *dev)
1797 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1798 long ioaddr = dev->base_addr;
1800 printk(KERN_ERR "%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
1801 dev->name, inb(ioaddr + TxStatus),
1802 inw(ioaddr + EL3_STATUS));
1803 EL3WINDOW(4);
1804 printk(KERN_ERR " diagnostics: net %04x media %04x dma %08x fifo %04x\n",
1805 inw(ioaddr + Wn4_NetDiag),
1806 inw(ioaddr + Wn4_Media),
1807 inl(ioaddr + PktStatus),
1808 inw(ioaddr + Wn4_FIFODiag));
1809 /* Slight code bloat to be user friendly. */
1810 if ((inb(ioaddr + TxStatus) & 0x88) == 0x88)
1811 printk(KERN_ERR "%s: Transmitter encountered 16 collisions --"
1812 " network cable problem?\n", dev->name);
1813 if (inw(ioaddr + EL3_STATUS) & IntLatch) {
1814 printk(KERN_ERR "%s: Interrupt posted but not delivered --"
1815 " IRQ blocked by another device?\n", dev->name);
1816 /* Bad idea here.. but we might as well handle a few events. */
1818 		/*
1819 		 * Block interrupts because vortex_interrupt does a bare spin_lock()
1820 		 */
1821 unsigned long flags;
1822 local_irq_save(flags);
1823 if (vp->full_bus_master_tx)
1824 boomerang_interrupt(dev->irq, dev, 0);
1825 else
1826 vortex_interrupt(dev->irq, dev, 0);
1827 local_irq_restore(flags);
1831 if (vortex_debug > 0)
1832 dump_tx_ring(dev);
1834 issue_and_wait(dev, TxReset);
1836 vp->stats.tx_errors++;
1837 if (vp->full_bus_master_tx) {
1838 printk(KERN_DEBUG "%s: Resetting the Tx ring pointer.\n", dev->name);
1839 if (vp->cur_tx - vp->dirty_tx > 0 && inl(ioaddr + DownListPtr) == 0)
1840 outl(vp->tx_ring_dma + (vp->dirty_tx % TX_RING_SIZE) * sizeof(struct boom_tx_desc),
1841 ioaddr + DownListPtr);
1842 if (vp->cur_tx - vp->dirty_tx < TX_RING_SIZE)
1843 netif_wake_queue (dev);
1844 if (vp->drv_flags & IS_BOOMERANG)
1845 outb(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold);
1846 outw(DownUnstall, ioaddr + EL3_CMD);
1847 } else {
1848 vp->stats.tx_dropped++;
1849 netif_wake_queue(dev);
1852 /* Issue Tx Enable */
1853 outw(TxEnable, ioaddr + EL3_CMD);
1854 dev->trans_start = jiffies;
1856 /* Switch to register set 7 for normal use. */
1857 EL3WINDOW(7);
1860 /*
1861  * Handle uncommon interrupt sources. This is a separate routine to minimize
1862  * the cache impact.
1863  */
1864 static void
1865 vortex_error(struct net_device *dev, int status)
1867 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1868 long ioaddr = dev->base_addr;
1869 int do_tx_reset = 0, reset_mask = 0;
1870 unsigned char tx_status = 0;
1872 if (vortex_debug > 2) {
1873 printk(KERN_ERR "%s: vortex_error(), status=0x%x\n", dev->name, status);
1876 if (status & TxComplete) { /* Really "TxError" for us. */
1877 tx_status = inb(ioaddr + TxStatus);
1878 /* Presumably a tx-timeout. We must merely re-enable. */
1879 if (vortex_debug > 2
1880 || (tx_status != 0x88 && vortex_debug > 0)) {
1881 printk(KERN_ERR "%s: Transmit error, Tx status register %2.2x.\n",
1882 dev->name, tx_status);
1883 if (tx_status == 0x82) {
1884 printk(KERN_ERR "Probably a duplex mismatch. See "
1885 "Documentation/networking/vortex.txt\n");
1887 dump_tx_ring(dev);
1889 if (tx_status & 0x14) vp->stats.tx_fifo_errors++;
1890 if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
1891 outb(0, ioaddr + TxStatus);
1892 if (tx_status & 0x30) { /* txJabber or txUnderrun */
1893 do_tx_reset = 1;
1894 } else if ((tx_status & 0x08) && (vp->drv_flags & MAX_COLLISION_RESET)) { /* maxCollisions */
1895 do_tx_reset = 1;
1896 reset_mask = 0x0108; /* Reset interface logic, but not download logic */
1897 } else { /* Merely re-enable the transmitter. */
1898 outw(TxEnable, ioaddr + EL3_CMD);
1902 if (status & RxEarly) { /* Rx early is unused. */
1903 vortex_rx(dev);
1904 outw(AckIntr | RxEarly, ioaddr + EL3_CMD);
1906 if (status & StatsFull) { /* Empty statistics. */
1907 static int DoneDidThat;
1908 if (vortex_debug > 4)
1909 printk(KERN_DEBUG "%s: Updating stats.\n", dev->name);
1910 update_stats(ioaddr, dev);
1911 /* HACK: Disable statistics as an interrupt source. */
1912 /* This occurs when we have the wrong media type! */
1913 if (DoneDidThat == 0 &&
1914 inw(ioaddr + EL3_STATUS) & StatsFull) {
1915 printk(KERN_WARNING "%s: Updating statistics failed, disabling "
1916 "stats as an interrupt source.\n", dev->name);
1917 EL3WINDOW(5);
1918 outw(SetIntrEnb | (inw(ioaddr + 10) & ~StatsFull), ioaddr + EL3_CMD);
1919 vp->intr_enable &= ~StatsFull;
1920 EL3WINDOW(7);
1921 DoneDidThat++;
1924 if (status & IntReq) { /* Restore all interrupt sources. */
1925 outw(vp->status_enable, ioaddr + EL3_CMD);
1926 outw(vp->intr_enable, ioaddr + EL3_CMD);
1928 if (status & HostError) {
1929 u16 fifo_diag;
1930 EL3WINDOW(4);
1931 fifo_diag = inw(ioaddr + Wn4_FIFODiag);
1932 printk(KERN_ERR "%s: Host error, FIFO diagnostic register %4.4x.\n",
1933 dev->name, fifo_diag);
1934 /* Adapter failure requires Tx/Rx reset and reinit. */
1935 if (vp->full_bus_master_tx) {
1936 int bus_status = inl(ioaddr + PktStatus);
1937 /* 0x80000000 PCI master abort. */
1938 /* 0x40000000 PCI target abort. */
1939 if (vortex_debug)
1940 printk(KERN_ERR "%s: PCI bus error, bus status %8.8x\n", dev->name, bus_status);
1942 /* In this case, blow the card away */
1943 vortex_down(dev);
1944 issue_and_wait(dev, TotalReset | 0xff);
1945 vortex_up(dev); /* AKPM: bug. vortex_up() assumes that the rx ring is full. It may not be. */
1946 } else if (fifo_diag & 0x0400)
1947 do_tx_reset = 1;
1948 if (fifo_diag & 0x3000) {
1949 /* Reset Rx fifo and upload logic */
1950 issue_and_wait(dev, RxReset|0x07);
1951 /* Set the Rx filter to the current state. */
1952 set_rx_mode(dev);
1953 outw(RxEnable, ioaddr + EL3_CMD); /* Re-enable the receiver. */
1954 outw(AckIntr | HostError, ioaddr + EL3_CMD);
1958 if (do_tx_reset) {
1959 issue_and_wait(dev, TxReset|reset_mask);
1960 outw(TxEnable, ioaddr + EL3_CMD);
1961 if (!vp->full_bus_master_tx)
1962 netif_wake_queue(dev);
1966 static int
1967 vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
1969 struct vortex_private *vp = (struct vortex_private *)dev->priv;
1970 long ioaddr = dev->base_addr;
1972 /* Put out the doubleword header... */
1973 outl(skb->len, ioaddr + TX_FIFO);
1974 if (vp->bus_master) {
1975 /* Set the bus-master controller to transfer the packet. */
1976 int len = (skb->len + 3) & ~3;
1977 outl( vp->tx_skb_dma = pci_map_single(vp->pdev, skb->data, len, PCI_DMA_TODEVICE),
1978 ioaddr + Wn7_MasterAddr);
1979 outw(len, ioaddr + Wn7_MasterLen);
1980 vp->tx_skb = skb;
1981 outw(StartDMADown, ioaddr + EL3_CMD);
1982 /* netif_wake_queue() will be called at the DMADone interrupt. */
1983 } else {
1984 /* ... and the packet rounded to a doubleword. */
1985 outsl(ioaddr + TX_FIFO, skb->data, (skb->len + 3) >> 2);
1986 dev_kfree_skb (skb);
1987 if (inw(ioaddr + TxFree) > 1536) {
1988 netif_start_queue (dev); /* AKPM: redundant? */
1989 } else {
1990 /* Interrupt us when the FIFO has room for max-sized packet. */
1991 netif_stop_queue(dev);
1992 outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
1996 dev->trans_start = jiffies;
1998 /* Clear the Tx status stack. */
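	/* The adapter apparently queues one status byte per transmitted packet; pop each
	 * entry by writing 0 to TxStatus and reset the transmitter if a fatal status is seen. */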
2000 int tx_status;
2001 int i = 32;
2003 while (--i > 0 && (tx_status = inb(ioaddr + TxStatus)) > 0) {
2004 if (tx_status & 0x3C) { /* A Tx-disabling error occurred. */
2005 if (vortex_debug > 2)
2006 printk(KERN_DEBUG "%s: Tx error, status %2.2x.\n",
2007 dev->name, tx_status);
2008 if (tx_status & 0x04) vp->stats.tx_fifo_errors++;
2009 if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
2010 if (tx_status & 0x30) {
2011 issue_and_wait(dev, TxReset);
2013 outw(TxEnable, ioaddr + EL3_CMD);
2015 outb(0x00, ioaddr + TxStatus); /* Pop the status stack. */
2018 return 0;
2021 static int
2022 boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
2024 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2025 long ioaddr = dev->base_addr;
2026 /* Calculate the next Tx descriptor entry. */
2027 int entry = vp->cur_tx % TX_RING_SIZE;
2028 struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
2029 unsigned long flags;
2031 if (vortex_debug > 6) {
2032 printk(KERN_DEBUG "boomerang_start_xmit()\n");
2033 if (vortex_debug > 3)
2034 printk(KERN_DEBUG "%s: Trying to send a packet, Tx index %d.\n",
2035 dev->name, vp->cur_tx);
2038 if (vp->cur_tx - vp->dirty_tx >= TX_RING_SIZE) {
2039 if (vortex_debug > 0)
2040 printk(KERN_WARNING "%s: BUG! Tx Ring full, refusing to send buffer.\n",
2041 dev->name);
2042 netif_stop_queue(dev);
2043 return 1;
2046 vp->tx_skbuff[entry] = skb;
2048 vp->tx_ring[entry].next = 0;
2049 #if DO_ZEROCOPY
2050 if (skb->ip_summed != CHECKSUM_HW)
2051 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
2052 else
2053 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);
2055 if (!skb_shinfo(skb)->nr_frags) {
2056 vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->data,
2057 skb->len, PCI_DMA_TODEVICE));
2058 vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
2059 } else {
2060 int i;
2062 vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->data,
2063 skb->len-skb->data_len, PCI_DMA_TODEVICE));
2064 vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len-skb->data_len);
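			/* Map each page fragment separately; the descriptor carries a fragment
			 * list and the final entry is tagged with LAST_FRAG. */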
2066 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
2067 skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
2069 vp->tx_ring[entry].frag[i+1].addr =
2070 cpu_to_le32(pci_map_single(vp->pdev,
2071 (void*)page_address(frag->page) + frag->page_offset,
2072 frag->size, PCI_DMA_TODEVICE));
2074 if (i == skb_shinfo(skb)->nr_frags-1)
2075 vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size|LAST_FRAG);
2076 else
2077 vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size);
2080 #else
2081 vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->data, skb->len, PCI_DMA_TODEVICE));
2082 vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
2083 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
2084 #endif
2086 spin_lock_irqsave(&vp->lock, flags);
2087 /* Wait for the stall to complete. */
2088 issue_and_wait(dev, DownStall);
2089 prev_entry->next = cpu_to_le32(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc));
2090 if (inl(ioaddr + DownListPtr) == 0) {
2091 outl(vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc), ioaddr + DownListPtr);
2092 vp->queued_packet++;
2095 vp->cur_tx++;
2096 if (vp->cur_tx - vp->dirty_tx > TX_RING_SIZE - 1) {
2097 netif_stop_queue (dev);
2098 } else { /* Clear previous interrupt enable. */
2099 #if defined(tx_interrupt_mitigation)
2100 		/* Dubious. If in boomerang_interrupt "faster" cyclone ifdef
2101 		 * were selected, this would corrupt DN_COMPLETE. No?
2102 		 */
2103 prev_entry->status &= cpu_to_le32(~TxIntrUploaded);
2104 #endif
2106 outw(DownUnstall, ioaddr + EL3_CMD);
2107 spin_unlock_irqrestore(&vp->lock, flags);
2108 dev->trans_start = jiffies;
2109 return 0;
2112 /* The interrupt handler does all of the Rx thread work and cleans up
2113 after the Tx thread. */
2115 /*
2116  * This is the ISR for the vortex series chips.
2117  * full_bus_master_tx == 0 && full_bus_master_rx == 0
2118  */
2120 static void vortex_interrupt(int irq, void *dev_id, struct pt_regs *regs)
2122 struct net_device *dev = dev_id;
2123 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2124 long ioaddr;
2125 int status;
2126 int work_done = max_interrupt_work;
2128 ioaddr = dev->base_addr;
2129 spin_lock(&vp->lock);
2131 status = inw(ioaddr + EL3_STATUS);
2133 if (vortex_debug > 6)
2134 		printk(KERN_DEBUG "vortex_interrupt(). status=0x%4x\n", status);
2136 if ((status & IntLatch) == 0)
2137 goto handler_exit; /* No interrupt: shared IRQs cause this */
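	/* An IntReq here is the timer's FakeIntr: merge back any sources that were
	 * deferred when a previous pass hit the work limit. */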
2139 if (status & IntReq) {
2140 status |= vp->deferred;
2141 vp->deferred = 0;
2144 if (status == 0xffff) /* h/w no longer present (hotplug)? */
2145 goto handler_exit;
2147 if (vortex_debug > 4)
2148 printk(KERN_DEBUG "%s: interrupt, status %4.4x, latency %d ticks.\n",
2149 dev->name, status, inb(ioaddr + Timer));
2151 do {
2152 if (vortex_debug > 5)
2153 printk(KERN_DEBUG "%s: In interrupt loop, status %4.4x.\n",
2154 dev->name, status);
2155 if (status & RxComplete)
2156 vortex_rx(dev);
2158 if (status & TxAvailable) {
2159 if (vortex_debug > 5)
2160 printk(KERN_DEBUG " TX room bit was handled.\n");
2161 /* There's room in the FIFO for a full-sized packet. */
2162 outw(AckIntr | TxAvailable, ioaddr + EL3_CMD);
2163 netif_wake_queue (dev);
2166 if (status & DMADone) {
2167 if (inw(ioaddr + Wn7_MasterStatus) & 0x1000) {
2168 outw(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */
2169 pci_unmap_single(vp->pdev, vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, PCI_DMA_TODEVICE);
2170 dev_kfree_skb_irq(vp->tx_skb); /* Release the transferred buffer */
2171 if (inw(ioaddr + TxFree) > 1536) {
2172 				/*
2173 				 * AKPM: FIXME: I don't think we need this. If the queue was stopped due to
2174 				 * insufficient FIFO room, the TxAvailable test will succeed and call
2175 				 * netif_wake_queue()
2176 				 */
2177 netif_wake_queue(dev);
2178 } else { /* Interrupt when FIFO has room for max-sized packet. */
2179 outw(SetTxThreshold + (1536>>2), ioaddr + EL3_CMD);
2180 netif_stop_queue(dev);
2184 /* Check for all uncommon interrupts at once. */
2185 if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq)) {
2186 if (status == 0xffff)
2187 break;
2188 vortex_error(dev, status);
2191 if (--work_done < 0) {
2192 printk(KERN_WARNING "%s: Too much work in interrupt, status "
2193 "%4.4x.\n", dev->name, status);
2194 /* Disable all pending interrupts. */
2195 do {
2196 vp->deferred |= status;
2197 outw(SetStatusEnb | (~vp->deferred & vp->status_enable),
2198 ioaddr + EL3_CMD);
2199 outw(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
2200 } while ((status = inw(ioaddr + EL3_CMD)) & IntLatch);
2201 /* The timer will reenable interrupts. */
2202 mod_timer(&vp->timer, jiffies + 1*HZ);
2203 break;
2205 /* Acknowledge the IRQ. */
2206 outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
2207 } while ((status = inw(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));
2209 if (vortex_debug > 4)
2210 printk(KERN_DEBUG "%s: exiting interrupt, status %4.4x.\n",
2211 dev->name, status);
2212 handler_exit:
2213 spin_unlock(&vp->lock);
2216 /*
2217  * This is the ISR for the boomerang series chips.
2218  * full_bus_master_tx == 1 && full_bus_master_rx == 1
2219  */
2221 static void boomerang_interrupt(int irq, void *dev_id, struct pt_regs *regs)
2223 struct net_device *dev = dev_id;
2224 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2225 long ioaddr;
2226 int status;
2227 int work_done = max_interrupt_work;
2229 ioaddr = dev->base_addr;
2231 	/*
2232 	 * It seems dopey to put the spinlock this early, but we could race against vortex_tx_timeout
2233 	 * and boomerang_start_xmit
2234 	 */
2235 spin_lock(&vp->lock);
2237 status = inw(ioaddr + EL3_STATUS);
2239 if (vortex_debug > 6)
2240 printk(KERN_DEBUG "boomerang_interrupt. status=0x%4x\n", status);
2242 if ((status & IntLatch) == 0)
2243 goto handler_exit; /* No interrupt: shared IRQs can cause this */
2245 if (status == 0xffff) { /* h/w no longer present (hotplug)? */
2246 if (vortex_debug > 1)
2247 printk(KERN_DEBUG "boomerang_interrupt(1): status = 0xffff\n");
2248 goto handler_exit;
2251 if (status & IntReq) {
2252 status |= vp->deferred;
2253 vp->deferred = 0;
2256 if (vortex_debug > 4)
2257 printk(KERN_DEBUG "%s: interrupt, status %4.4x, latency %d ticks.\n",
2258 dev->name, status, inb(ioaddr + Timer));
2259 do {
2260 if (vortex_debug > 5)
2261 printk(KERN_DEBUG "%s: In interrupt loop, status %4.4x.\n",
2262 dev->name, status);
2263 if (status & UpComplete) {
2264 outw(AckIntr | UpComplete, ioaddr + EL3_CMD);
2265 if (vortex_debug > 5)
2266 printk(KERN_DEBUG "boomerang_interrupt->boomerang_rx\n");
2267 boomerang_rx(dev);
2270 if (status & DownComplete) {
2271 unsigned int dirty_tx = vp->dirty_tx;
2273 outw(AckIntr | DownComplete, ioaddr + EL3_CMD);
2274 while (vp->cur_tx - dirty_tx > 0) {
2275 int entry = dirty_tx % TX_RING_SIZE;
2276 #if 1 /* AKPM: the latter is faster, but cyclone-only */
2277 if (inl(ioaddr + DownListPtr) ==
2278 vp->tx_ring_dma + entry * sizeof(struct boom_tx_desc))
2279 break; /* It still hasn't been processed. */
2280 #else
2281 if ((vp->tx_ring[entry].status & DN_COMPLETE) == 0)
2282 break; /* It still hasn't been processed. */
2283 #endif
2285 if (vp->tx_skbuff[entry]) {
2286 struct sk_buff *skb = vp->tx_skbuff[entry];
2287 #if DO_ZEROCOPY
2288 int i;
2289 for (i=0; i<=skb_shinfo(skb)->nr_frags; i++)
2290 pci_unmap_single(vp->pdev,
2291 le32_to_cpu(vp->tx_ring[entry].frag[i].addr),
2292 le32_to_cpu(vp->tx_ring[entry].frag[i].length)&0xFFF,
2293 PCI_DMA_TODEVICE);
2294 #else
2295 pci_unmap_single(vp->pdev,
2296 le32_to_cpu(vp->tx_ring[entry].addr), skb->len, PCI_DMA_TODEVICE);
2297 #endif
2298 dev_kfree_skb_irq(skb);
2299 vp->tx_skbuff[entry] = 0;
2300 } else {
2301 printk(KERN_DEBUG "boomerang_interrupt: no skb!\n");
2303 /* vp->stats.tx_packets++; Counted below. */
2304 dirty_tx++;
2306 vp->dirty_tx = dirty_tx;
2307 if (vp->cur_tx - dirty_tx <= TX_RING_SIZE - 1) {
2308 if (vortex_debug > 6)
2309 printk(KERN_DEBUG "boomerang_interrupt: wake queue\n");
2310 netif_wake_queue (dev);
2314 /* Check for all uncommon interrupts at once. */
2315 if (status & (HostError | RxEarly | StatsFull | TxComplete | IntReq))
2316 vortex_error(dev, status);
2318 if (--work_done < 0) {
2319 printk(KERN_WARNING "%s: Too much work in interrupt, status "
2320 "%4.4x.\n", dev->name, status);
2321 /* Disable all pending interrupts. */
2322 do {
2323 vp->deferred |= status;
2324 outw(SetStatusEnb | (~vp->deferred & vp->status_enable),
2325 ioaddr + EL3_CMD);
2326 outw(AckIntr | (vp->deferred & 0x7ff), ioaddr + EL3_CMD);
2327 } while ((status = inw(ioaddr + EL3_CMD)) & IntLatch);
2328 /* The timer will reenable interrupts. */
2329 mod_timer(&vp->timer, jiffies + 1*HZ);
2330 break;
2332 /* Acknowledge the IRQ. */
2333 outw(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
2334 if (vp->cb_fn_base) /* The PCMCIA people are idiots. */
2335 writel(0x8000, vp->cb_fn_base + 4);
2337 } while ((status = inw(ioaddr + EL3_STATUS)) & IntLatch);
2339 if (vortex_debug > 4)
2340 printk(KERN_DEBUG "%s: exiting interrupt, status %4.4x.\n",
2341 dev->name, status);
2342 handler_exit:
2343 spin_unlock(&vp->lock);
2346 static int vortex_rx(struct net_device *dev)
2348 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2349 long ioaddr = dev->base_addr;
2350 int i;
2351 short rx_status;
2353 if (vortex_debug > 5)
2354 printk(KERN_DEBUG "vortex_rx(): status %4.4x, rx_status %4.4x.\n",
2355 inw(ioaddr+EL3_STATUS), inw(ioaddr+RxStatus));
2356 while ((rx_status = inw(ioaddr + RxStatus)) > 0) {
2357 if (rx_status & 0x4000) { /* Error, update stats. */
2358 unsigned char rx_error = inb(ioaddr + RxErrors);
2359 if (vortex_debug > 2)
2360 printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
2361 vp->stats.rx_errors++;
2362 if (rx_error & 0x01) vp->stats.rx_over_errors++;
2363 if (rx_error & 0x02) vp->stats.rx_length_errors++;
2364 if (rx_error & 0x04) vp->stats.rx_frame_errors++;
2365 if (rx_error & 0x08) vp->stats.rx_crc_errors++;
2366 if (rx_error & 0x10) vp->stats.rx_length_errors++;
2367 } else {
2368 			/* The packet length: up to 4.5K! */
2369 int pkt_len = rx_status & 0x1fff;
2370 struct sk_buff *skb;
2372 skb = dev_alloc_skb(pkt_len + 5);
2373 if (vortex_debug > 4)
2374 printk(KERN_DEBUG "Receiving packet size %d status %4.4x.\n",
2375 pkt_len, rx_status);
2376 if (skb != NULL) {
2377 skb->dev = dev;
2378 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
2379 /* 'skb_put()' points to the start of sk_buff data area. */
2380 if (vp->bus_master &&
2381 ! (inw(ioaddr + Wn7_MasterStatus) & 0x8000)) {
2382 dma_addr_t dma = pci_map_single(vp->pdev, skb_put(skb, pkt_len),
2383 pkt_len, PCI_DMA_FROMDEVICE);
2384 outl(dma, ioaddr + Wn7_MasterAddr);
2385 outw((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen);
2386 outw(StartDMAUp, ioaddr + EL3_CMD);
2387 while (inw(ioaddr + Wn7_MasterStatus) & 0x8000)
2389 pci_unmap_single(vp->pdev, dma, pkt_len, PCI_DMA_FROMDEVICE);
2390 } else {
2391 insl(ioaddr + RX_FIFO, skb_put(skb, pkt_len),
2392 (pkt_len + 3) >> 2);
2394 outw(RxDiscard, ioaddr + EL3_CMD); /* Pop top Rx packet. */
2395 skb->protocol = eth_type_trans(skb, dev);
2396 netif_rx(skb);
2397 dev->last_rx = jiffies;
2398 vp->stats.rx_packets++;
2399 /* Wait a limited time to go to next packet. */
2400 for (i = 200; i >= 0; i--)
2401 if ( ! (inw(ioaddr + EL3_STATUS) & CmdInProgress))
2402 break;
2403 continue;
2404 } else if (vortex_debug > 0)
2405 printk(KERN_NOTICE "%s: No memory to allocate a sk_buff of "
2406 "size %d.\n", dev->name, pkt_len);
2408 vp->stats.rx_dropped++;
2409 issue_and_wait(dev, RxDiscard);
2412 return 0;
2415 static int
2416 boomerang_rx(struct net_device *dev)
2418 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2419 int entry = vp->cur_rx % RX_RING_SIZE;
2420 long ioaddr = dev->base_addr;
2421 int rx_status;
2422 int rx_work_limit = vp->dirty_rx + RX_RING_SIZE - vp->cur_rx;
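	/* Bound this pass to the descriptors that are currently filled, so we never
	 * run off the end of an exhausted ring. */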
2424 if (vortex_debug > 5)
2425 printk(KERN_DEBUG "boomerang_rx(): status %4.4x\n", inw(ioaddr+EL3_STATUS));
2427 while ((rx_status = le32_to_cpu(vp->rx_ring[entry].status)) & RxDComplete){
2428 if (--rx_work_limit < 0)
2429 break;
2430 if (rx_status & RxDError) { /* Error, update stats. */
2431 unsigned char rx_error = rx_status >> 16;
2432 if (vortex_debug > 2)
2433 printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
2434 vp->stats.rx_errors++;
2435 if (rx_error & 0x01) vp->stats.rx_over_errors++;
2436 if (rx_error & 0x02) vp->stats.rx_length_errors++;
2437 if (rx_error & 0x04) vp->stats.rx_frame_errors++;
2438 if (rx_error & 0x08) vp->stats.rx_crc_errors++;
2439 if (rx_error & 0x10) vp->stats.rx_length_errors++;
2440 } else {
2441 			/* The packet length: up to 4.5K! */
2442 int pkt_len = rx_status & 0x1fff;
2443 struct sk_buff *skb;
2444 dma_addr_t dma = le32_to_cpu(vp->rx_ring[entry].addr);
2446 if (vortex_debug > 4)
2447 printk(KERN_DEBUG "Receiving packet size %d status %4.4x.\n",
2448 pkt_len, rx_status);
2450 /* Check if the packet is long enough to just accept without
2451 copying to a properly sized skbuff. */
2452 if (pkt_len < rx_copybreak && (skb = dev_alloc_skb(pkt_len + 2)) != 0) {
2453 skb->dev = dev;
2454 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
2455 pci_dma_sync_single(vp->pdev, dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
2456 /* 'skb_put()' points to the start of sk_buff data area. */
2457 memcpy(skb_put(skb, pkt_len),
2458 vp->rx_skbuff[entry]->tail,
2459 pkt_len);
2460 vp->rx_copy++;
2461 } else {
2462 /* Pass up the skbuff already on the Rx ring. */
2463 skb = vp->rx_skbuff[entry];
2464 vp->rx_skbuff[entry] = NULL;
2465 skb_put(skb, pkt_len);
2466 pci_unmap_single(vp->pdev, dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
2467 vp->rx_nocopy++;
2469 skb->protocol = eth_type_trans(skb, dev);
2470 { /* Use hardware checksum info. */
2471 int csum_bits = rx_status & 0xee000000;
2472 if (csum_bits &&
2473 (csum_bits == (IPChksumValid | TCPChksumValid) ||
2474 csum_bits == (IPChksumValid | UDPChksumValid))) {
2475 skb->ip_summed = CHECKSUM_UNNECESSARY;
2476 vp->rx_csumhits++;
2479 netif_rx(skb);
2480 dev->last_rx = jiffies;
2481 vp->stats.rx_packets++;
2483 entry = (++vp->cur_rx) % RX_RING_SIZE;
2485 /* Refill the Rx ring buffers. */
2486 for (; vp->cur_rx - vp->dirty_rx > 0; vp->dirty_rx++) {
2487 struct sk_buff *skb;
2488 entry = vp->dirty_rx % RX_RING_SIZE;
2489 if (vp->rx_skbuff[entry] == NULL) {
2490 skb = dev_alloc_skb(PKT_BUF_SZ);
2491 if (skb == NULL) {
2492 static unsigned long last_jif;
2493 if ((jiffies - last_jif) > 10 * HZ) {
2494 printk(KERN_WARNING "%s: memory shortage\n", dev->name);
2495 last_jif = jiffies;
2497 if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE)
2498 mod_timer(&vp->rx_oom_timer, RUN_AT(HZ * 1));
2499 break; /* Bad news! */
2501 skb->dev = dev; /* Mark as being used by this device. */
2502 skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */
2503 vp->rx_ring[entry].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->tail, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
2504 vp->rx_skbuff[entry] = skb;
2506 vp->rx_ring[entry].status = 0; /* Clear complete bit. */
2507 outw(UpUnstall, ioaddr + EL3_CMD);
2509 return 0;
2512 /*
2513  * If we've hit a total OOM refilling the Rx ring we poll once a second
2514  * for some memory. Otherwise there is no way to restart the rx process.
2515  */
2516 static void
2517 rx_oom_timer(unsigned long arg)
2519 struct net_device *dev = (struct net_device *)arg;
2520 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2522 spin_lock_irq(&vp->lock);
2523 if ((vp->cur_rx - vp->dirty_rx) == RX_RING_SIZE) /* This test is redundant, but makes me feel good */
2524 boomerang_rx(dev);
2525 if (vortex_debug > 1) {
2526 printk(KERN_DEBUG "%s: rx_oom_timer %s\n", dev->name,
2527 ((vp->cur_rx - vp->dirty_rx) != RX_RING_SIZE) ? "succeeded" : "retrying");
2529 spin_unlock_irq(&vp->lock);
2532 static void
2533 vortex_down(struct net_device *dev)
2535 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2536 long ioaddr = dev->base_addr;
2538 netif_stop_queue (dev);
2540 del_timer_sync(&vp->rx_oom_timer);
2541 del_timer_sync(&vp->timer);
2543 /* Turn off statistics ASAP. We update vp->stats below. */
2544 outw(StatsDisable, ioaddr + EL3_CMD);
2546 /* Disable the receiver and transmitter. */
2547 outw(RxDisable, ioaddr + EL3_CMD);
2548 outw(TxDisable, ioaddr + EL3_CMD);
2550 if (dev->if_port == XCVR_10base2)
2551 /* Turn off thinnet power. Green! */
2552 outw(StopCoax, ioaddr + EL3_CMD);
2554 outw(SetIntrEnb | 0x0000, ioaddr + EL3_CMD);
2556 update_stats(ioaddr, dev);
2557 if (vp->full_bus_master_rx)
2558 outl(0, ioaddr + UpListPtr);
2559 if (vp->full_bus_master_tx)
2560 outl(0, ioaddr + DownListPtr);
2562 if (vp->pdev && vp->enable_wol) {
2563 pci_save_state(vp->pdev, vp->power_state);
2564 acpi_set_WOL(dev);
2568 static int
2569 vortex_close(struct net_device *dev)
2571 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2572 long ioaddr = dev->base_addr;
2573 int i;
2575 if (netif_device_present(dev))
2576 vortex_down(dev);
2578 if (vortex_debug > 1) {
2579 printk(KERN_DEBUG"%s: vortex_close() status %4.4x, Tx status %2.2x.\n",
2580 dev->name, inw(ioaddr + EL3_STATUS), inb(ioaddr + TxStatus));
2581 printk(KERN_DEBUG "%s: vortex close stats: rx_nocopy %d rx_copy %d"
2582 " tx_queued %d Rx pre-checksummed %d.\n",
2583 dev->name, vp->rx_nocopy, vp->rx_copy, vp->queued_packet, vp->rx_csumhits);
2586 #if DO_ZEROCOPY
2587 if ( vp->rx_csumhits &&
2588 ((vp->drv_flags & HAS_HWCKSM) == 0) &&
2589 (hw_checksums[vp->card_idx] == -1)) {
2590 printk(KERN_WARNING "%s supports hardware checksums, and we're not using them!\n", dev->name);
2592 #endif
2594 free_irq(dev->irq, dev);
2596 if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */
2597 for (i = 0; i < RX_RING_SIZE; i++)
2598 if (vp->rx_skbuff[i]) {
2599 pci_unmap_single( vp->pdev, le32_to_cpu(vp->rx_ring[i].addr),
2600 PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
2601 dev_kfree_skb(vp->rx_skbuff[i]);
2602 vp->rx_skbuff[i] = 0;
2605 if (vp->full_bus_master_tx) { /* Free Boomerang bus master Tx buffers. */
2606 for (i = 0; i < TX_RING_SIZE; i++) {
2607 if (vp->tx_skbuff[i]) {
2608 struct sk_buff *skb = vp->tx_skbuff[i];
2609 #if DO_ZEROCOPY
2610 int k;
2612 for (k=0; k<=skb_shinfo(skb)->nr_frags; k++)
2613 pci_unmap_single(vp->pdev,
2614 le32_to_cpu(vp->tx_ring[i].frag[k].addr),
2615 le32_to_cpu(vp->tx_ring[i].frag[k].length)&0xFFF,
2616 PCI_DMA_TODEVICE);
2617 #else
2618 pci_unmap_single(vp->pdev, le32_to_cpu(vp->tx_ring[i].addr), skb->len, PCI_DMA_TODEVICE);
2619 #endif
2620 dev_kfree_skb(skb);
2621 vp->tx_skbuff[i] = 0;
2626 return 0;
2629 static void
2630 dump_tx_ring(struct net_device *dev)
2632 if (vortex_debug > 0) {
2633 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2634 long ioaddr = dev->base_addr;
2636 if (vp->full_bus_master_tx) {
2637 int i;
2638 			int stalled = inl(ioaddr + PktStatus) & 0x04;	/* Possibly racy, but it's only debug stuff */
2640 printk(KERN_ERR " Flags; bus-master %d, dirty %d(%d) current %d(%d)\n",
2641 vp->full_bus_master_tx,
2642 vp->dirty_tx, vp->dirty_tx % TX_RING_SIZE,
2643 vp->cur_tx, vp->cur_tx % TX_RING_SIZE);
2644 printk(KERN_ERR " Transmit list %8.8x vs. %p.\n",
2645 inl(ioaddr + DownListPtr),
2646 &vp->tx_ring[vp->dirty_tx % TX_RING_SIZE]);
2647 issue_and_wait(dev, DownStall);
2648 for (i = 0; i < TX_RING_SIZE; i++) {
2649 printk(KERN_ERR " %d: @%p length %8.8x status %8.8x\n", i,
2650 &vp->tx_ring[i],
2651 #if DO_ZEROCOPY
2652 le32_to_cpu(vp->tx_ring[i].frag[0].length),
2653 #else
2654 le32_to_cpu(vp->tx_ring[i].length),
2655 #endif
2656 le32_to_cpu(vp->tx_ring[i].status));
2658 if (!stalled)
2659 outw(DownUnstall, ioaddr + EL3_CMD);
2664 static struct net_device_stats *vortex_get_stats(struct net_device *dev)
2666 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2667 unsigned long flags;
2669 if (netif_device_present(dev)) { /* AKPM: Used to be netif_running */
2670 spin_lock_irqsave (&vp->lock, flags);
2671 update_stats(dev->base_addr, dev);
2672 spin_unlock_irqrestore (&vp->lock, flags);
2674 return &vp->stats;
2677 /* Update statistics.
2678 Unlike with the EL3 we need not worry about interrupts changing
2679 the window setting from underneath us, but we must still guard
2680 against a race condition with a StatsUpdate interrupt updating the
2681 table. This is done by checking that the ASM (!) code generated uses
2682    atomic updates with '+='.
2683    */
2684 static void update_stats(long ioaddr, struct net_device *dev)
2686 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2687 int old_window = inw(ioaddr + EL3_CMD);
2689 if (old_window == 0xffff) /* Chip suspended or ejected. */
2690 return;
2691 /* Unlike the 3c5x9 we need not turn off stats updates while reading. */
2692 /* Switch to the stats window, and read everything. */
2693 EL3WINDOW(6);
2694 vp->stats.tx_carrier_errors += inb(ioaddr + 0);
2695 vp->stats.tx_heartbeat_errors += inb(ioaddr + 1);
2696 /* Multiple collisions. */ inb(ioaddr + 2);
2697 vp->stats.collisions += inb(ioaddr + 3);
2698 vp->stats.tx_window_errors += inb(ioaddr + 4);
2699 vp->stats.rx_fifo_errors += inb(ioaddr + 5);
2700 vp->stats.tx_packets += inb(ioaddr + 6);
2701 vp->stats.tx_packets += (inb(ioaddr + 9)&0x30) << 4;
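	/* Bits 4-5 of register 9 apparently extend the Tx-packets counter above;
	 * only those upper bits are folded in here. */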
2702 /* Rx packets */ inb(ioaddr + 7); /* Must read to clear */
2703 /* Tx deferrals */ inb(ioaddr + 8);
2704 /* Don't bother with register 9, an extension of registers 6&7.
2705 If we do use the 6&7 values the atomic update assumption above
2706 is invalid. */
2707 vp->stats.rx_bytes += inw(ioaddr + 10);
2708 vp->stats.tx_bytes += inw(ioaddr + 12);
2709 /* New: On the Vortex we must also clear the BadSSD counter. */
2710 EL3WINDOW(4);
2711 inb(ioaddr + 12);
2714 u8 up = inb(ioaddr + 13);
2715 vp->stats.rx_bytes += (up & 0x0f) << 16;
2716 vp->stats.tx_bytes += (up & 0xf0) << 12;
2719 EL3WINDOW(old_window >> 13);
2720 return;
2724 static int netdev_ethtool_ioctl(struct net_device *dev, void *useraddr)
2726 struct vortex_private *vp = dev->priv;
2727 u32 ethcmd;
2729 if (copy_from_user(&ethcmd, useraddr, sizeof(ethcmd)))
2730 return -EFAULT;
2732 switch (ethcmd) {
2733 case ETHTOOL_GDRVINFO: {
2734 struct ethtool_drvinfo info = {ETHTOOL_GDRVINFO};
2735 strcpy(info.driver, DRV_NAME);
2736 strcpy(info.version, DRV_VERSION);
2737 if (vp->pdev)
2738 strcpy(info.bus_info, vp->pdev->slot_name);
2739 else
2740 sprintf(info.bus_info, "EISA 0x%lx %d",
2741 dev->base_addr, dev->irq);
2742 if (copy_to_user(useraddr, &info, sizeof(info)))
2743 return -EFAULT;
2744 return 0;
2749 return -EOPNOTSUPP;
2752 static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
2754 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2755 long ioaddr = dev->base_addr;
2756 struct mii_ioctl_data *data = (struct mii_ioctl_data *)&rq->ifr_data;
2757 int phy = vp->phys[0] & 0x1f;
2758 int retval;
2760 switch(cmd) {
2761 case SIOCETHTOOL:
2762 return netdev_ethtool_ioctl(dev, (void *) rq->ifr_data);
2764 case SIOCGMIIPHY: /* Get address of MII PHY in use. */
2765 data->phy_id = phy;
2767 case SIOCGMIIREG: /* Read MII PHY register. */
2768 EL3WINDOW(4);
2769 data->val_out = mdio_read(dev, data->phy_id & 0x1f, data->reg_num & 0x1f);
2770 retval = 0;
2771 break;
2773 case SIOCSMIIREG: /* Write MII PHY register. */
2774 if (!capable(CAP_NET_ADMIN)) {
2775 retval = -EPERM;
2776 } else {
2777 EL3WINDOW(4);
2778 mdio_write(dev, data->phy_id & 0x1f, data->reg_num & 0x1f, data->val_in);
2779 retval = 0;
2781 break;
2782 default:
2783 retval = -EOPNOTSUPP;
2784 break;
2787 return retval;
2790 /* Pre-Cyclone chips have no documented multicast filter, so the only
2791 multicast setting is to receive all multicast frames. At least
2792 the chip has a very clean way to set the mode, unlike many others. */
2793 static void set_rx_mode(struct net_device *dev)
2795 long ioaddr = dev->base_addr;
2796 int new_mode;
2798 if (dev->flags & IFF_PROMISC) {
2799 if (vortex_debug > 0)
2800 printk(KERN_NOTICE "%s: Setting promiscuous mode.\n", dev->name);
2801 new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast|RxProm;
2802 } else if ((dev->mc_list) || (dev->flags & IFF_ALLMULTI)) {
2803 new_mode = SetRxFilter|RxStation|RxMulticast|RxBroadcast;
2804 } else
2805 new_mode = SetRxFilter | RxStation | RxBroadcast;
2807 outw(new_mode, ioaddr + EL3_CMD);
2810 /* MII transceiver control section.
2811 Read and write the MII registers using software-generated serial
2812 MDIO protocol. See the MII specifications or DP83840A data sheet
2813 for details. */
2815 /* The maximum data clock rate is 2.5 MHz. The minimum timing is usually
2816 met by back-to-back PCI I/O cycles, but we insert a delay to avoid
2817 "overclocking" issues. */
2818 #define mdio_delay() inl(mdio_addr)
2820 #define MDIO_SHIFT_CLK 0x01
2821 #define MDIO_DIR_WRITE 0x04
2822 #define MDIO_DATA_WRITE0 (0x00 | MDIO_DIR_WRITE)
2823 #define MDIO_DATA_WRITE1 (0x02 | MDIO_DIR_WRITE)
2824 #define MDIO_DATA_READ 0x02
2825 #define MDIO_ENB_IN 0x00
2827 /* Generate the preamble required for initial synchronization and
2828 a few older transceivers. */
2829 static void mdio_sync(long ioaddr, int bits)
2831 long mdio_addr = ioaddr + Wn4_PhysicalMgmt;
2833 /* Establish sync by sending at least 32 logic ones. */
2834 while (-- bits >= 0) {
2835 outw(MDIO_DATA_WRITE1, mdio_addr);
2836 mdio_delay();
2837 outw(MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK, mdio_addr);
2838 mdio_delay();
2842 static int mdio_read(struct net_device *dev, int phy_id, int location)
2844 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2845 int i;
2846 long ioaddr = dev->base_addr;
2847 int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
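	/* read_cmd packs the MII start/read-opcode bits with the 5-bit PHY address and
	 * 5-bit register number; the loop below clocks it out MSB first. */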
2848 unsigned int retval = 0;
2849 long mdio_addr = ioaddr + Wn4_PhysicalMgmt;
2851 spin_lock_bh(&vp->mdio_lock);
2853 if (mii_preamble_required)
2854 mdio_sync(ioaddr, 32);
2856 /* Shift the read command bits out. */
2857 for (i = 14; i >= 0; i--) {
2858 int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
2859 outw(dataval, mdio_addr);
2860 mdio_delay();
2861 outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
2862 mdio_delay();
2864 /* Read the two transition, 16 data, and wire-idle bits. */
2865 for (i = 19; i > 0; i--) {
2866 outw(MDIO_ENB_IN, mdio_addr);
2867 mdio_delay();
2868 retval = (retval << 1) | ((inw(mdio_addr) & MDIO_DATA_READ) ? 1 : 0);
2869 outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
2870 mdio_delay();
2872 spin_unlock_bh(&vp->mdio_lock);
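	/* A high turnaround bit (0x20000) means no PHY drove the bus, so report all ones;
	 * otherwise drop the trailing idle bit and keep the 16 data bits. */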
2873 return retval & 0x20000 ? 0xffff : retval>>1 & 0xffff;
2876 static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
2878 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2879 long ioaddr = dev->base_addr;
2880 int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
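	/* 32-bit MII write frame: start (01), write opcode (01), PHY address,
	 * register number, turnaround (10), then the 16-bit data value. */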
2881 long mdio_addr = ioaddr + Wn4_PhysicalMgmt;
2882 int i;
2884 spin_lock_bh(&vp->mdio_lock);
2886 if (mii_preamble_required)
2887 mdio_sync(ioaddr, 32);
2889 /* Shift the command bits out. */
2890 for (i = 31; i >= 0; i--) {
2891 int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
2892 outw(dataval, mdio_addr);
2893 mdio_delay();
2894 outw(dataval | MDIO_SHIFT_CLK, mdio_addr);
2895 mdio_delay();
2897 /* Leave the interface idle. */
2898 for (i = 1; i >= 0; i--) {
2899 outw(MDIO_ENB_IN, mdio_addr);
2900 mdio_delay();
2901 outw(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
2902 mdio_delay();
2904 spin_unlock_bh(&vp->mdio_lock);
2905 return;
2908 /* ACPI: Advanced Configuration and Power Interface. */
2909 /* Set Wake-On-LAN mode and put the board into D3 (power-down) state. */
2910 static void acpi_set_WOL(struct net_device *dev)
2912 struct vortex_private *vp = (struct vortex_private *)dev->priv;
2913 long ioaddr = dev->base_addr;
2915 /* Power up on: 1==Downloaded Filter, 2==Magic Packets, 4==Link Status. */
2916 EL3WINDOW(7);
2917 outw(2, ioaddr + 0x0c);
2918 /* The RxFilter must accept the WOL frames. */
2919 outw(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
2920 outw(RxEnable, ioaddr + EL3_CMD);
2922 /* Change the power state to D3; RxEnable doesn't take effect. */
2923 pci_enable_wake(vp->pdev, 0, 1);
2924 pci_set_power_state(vp->pdev, 3);
2928 static void __devexit vortex_remove_one (struct pci_dev *pdev)
2930 struct net_device *dev = pci_get_drvdata(pdev);
2931 struct vortex_private *vp;
2933 if (!dev) {
2934 printk("vortex_remove_one called for EISA device!\n");
2935 BUG();
2938 vp = dev->priv;
2940 /* AKPM: FIXME: we should have
2941 * if (vp->cb_fn_base) iounmap(vp->cb_fn_base);
2942 	 * here
2943 	 */
2944 unregister_netdev(dev);
2945 /* Should really use issue_and_wait() here */
2946 outw(TotalReset|0x14, dev->base_addr + EL3_CMD);
2948 if (vp->pdev && vp->enable_wol) {
2949 pci_set_power_state(vp->pdev, 0); /* Go active */
2950 if (vp->pm_state_valid)
2951 pci_restore_state(vp->pdev, vp->power_state);
2954 pci_free_consistent(pdev,
2955 sizeof(struct boom_rx_desc) * RX_RING_SIZE
2956 + sizeof(struct boom_tx_desc) * TX_RING_SIZE,
2957 vp->rx_ring,
2958 vp->rx_ring_dma);
2959 if (vp->must_free_region)
2960 release_region(dev->base_addr, vp->io_size);
2961 kfree(dev);
2965 static struct pci_driver vortex_driver = {
2966 .name = "3c59x",
2967 .probe = vortex_init_one,
2968 .remove = __devexit_p(vortex_remove_one),
2969 .id_table = vortex_pci_tbl,
2970 #ifdef CONFIG_PM
2971 .suspend = vortex_suspend,
2972 .resume = vortex_resume,
2973 #endif
2977 static int vortex_have_pci;
2978 static int vortex_have_eisa;
2981 static int __init vortex_init (void)
2983 int pci_rc, eisa_rc;
2985 pci_rc = pci_module_init(&vortex_driver);
2986 eisa_rc = vortex_eisa_init();
2988 if (pci_rc == 0)
2989 vortex_have_pci = 1;
2990 if (eisa_rc > 0)
2991 vortex_have_eisa = 1;
2993 return (vortex_have_pci + vortex_have_eisa) ? 0 : -ENODEV;
2997 static void __exit vortex_eisa_cleanup (void)
2999 struct net_device *dev, *tmp;
3000 struct vortex_private *vp;
3001 long ioaddr;
3003 dev = root_vortex_eisa_dev;
3005 while (dev) {
3006 vp = dev->priv;
3007 ioaddr = dev->base_addr;
3009 unregister_netdev (dev);
3010 outw (TotalReset, ioaddr + EL3_CMD);
3011 release_region (ioaddr, VORTEX_TOTAL_SIZE);
3013 tmp = dev;
3014 dev = vp->next_module;
3016 kfree (tmp);
3021 static void __exit vortex_cleanup (void)
3023 if (vortex_have_pci)
3024 pci_unregister_driver (&vortex_driver);
3025 if (vortex_have_eisa)
3026 vortex_eisa_cleanup ();
3030 module_init(vortex_init);
3031 module_exit(vortex_cleanup);
3034 /*
3035  * Local variables:
3036  *  c-indent-level: 4
3037  *  c-basic-offset: 4
3038  *  tab-width: 4
3039  * End:
3040  */