drivers/net/eepro100.c: An Intel i82557-559 Ethernet driver for Linux

Written 1996-1999 by Donald Becker.
Modified 2000 by the Linux Kernel Team.

This software may be used and distributed according to the terms
of the GNU Public License, incorporated herein by reference.

This driver is for the Intel EtherExpress Pro100 (Speedo3) design.
It should work with all i82557/558/559 boards.

To use as a module, use the compile-command at the end of the file.

The author may be reached as becker@CESDIS.usra.edu, or C/O
Center of Excellence in Space Data and Information Sciences
Code 930.5, NASA Goddard Space Flight Center, Greenbelt MD 20771

http://cesdis.gsfc.nasa.gov/linux/drivers/eepro100.html
For installation instructions:
http://cesdis.gsfc.nasa.gov/linux/misc/modules.html
There is a Majordomo mailing list based at
linux-eepro100@cesdis.gsfc.nasa.gov
v1.09j+LK1.0 - Jeff Garzik <jgarzik@mandrakesoft.com>
	Convert to new PCI driver interface
I. Board Compatibility

This device driver is designed for the Intel i82557 "Speedo3" chip, Intel's
single-chip fast Ethernet controller for PCI, as used on the Intel
EtherExpress Pro 100 adapter.

II. Board-specific settings

PCI bus devices are configured by the system at boot time, so no jumpers
need to be set on the board.  The system BIOS should be set to assign the
PCI INTA signal to an otherwise unused system IRQ line.  While it's
possible to share PCI interrupt lines, it negatively impacts performance and
only recent kernels support it.
III. Driver operation

The Speedo3 is very similar to other Intel network chips, that is to say
"apparently designed on a different planet".  This chip retains the complex
Rx and Tx descriptors and multiple buffer pointers of previous chips, but
also has simplified Tx and Rx buffer modes.  This driver uses the "flexible"
Tx mode, but in a simplified lower-overhead manner: it associates only a
single buffer descriptor with each frame descriptor.
Despite the extra space overhead in each receive skbuff, the driver must use
the simplified Rx buffer mode to assure that only a single data buffer is
associated with each RxFD.  The driver implements this by reserving space
for the Rx descriptor at the head of each Rx skbuff.
The Speedo-3 has receive and command unit base addresses that are added to
almost all descriptor pointers.  The driver sets these to zero, so that all
pointer fields are absolute addresses.

The System Control Block (SCB) of some previous Intel chips exists on the
chip in both PCI I/O and memory space.  This driver uses the I/O space
registers, but might switch to memory-mapped mode to better support non-x86
processors.
IIIB. Transmit structure

The driver must use the complex Tx command+descriptor mode in order to
have an indirect pointer to the skbuff data section.  Each Tx command block
(TxCB) is associated with two immediately appended Tx Buffer Descriptors
(TxBDs).  A fixed ring of these TxCB+TxBD pairs is kept as part of the
speedo_private data structure for each adapter instance.

The newer i82558 explicitly supports this structure, and can read the two
TxBDs in the same PCI burst as the TxCB.
This ring structure is used for all normal transmit packets, but the
transmit packet descriptors aren't long enough for most non-Tx commands such
as CmdConfigure.  This is complicated by the possibility that the chip has
already loaded the link address in the previous descriptor.  So for these
commands we convert the next free descriptor on the ring to a NoOp, and point
that descriptor's link to the complex command.

An additional complexity of these non-transmit commands is that they may be
added asynchronously to the normal transmit queue, so we disable interrupts
whenever the Tx descriptor ring is manipulated.
A notable aspect of these special configure commands is that they do
work with the normal Tx ring entry scavenge method.  The Tx ring scavenge
is done at interrupt time using the 'dirty_tx' index, and checking for the
command-complete bit.  While the setup frames may have the NoOp command on the
Tx ring marked as complete without having completed the setup command, this
is not a problem.  The tx_ring entry can still be safely reused, as the
tx_skbuff[] entry is always empty for config_cmd and mc_setup frames.
Commands may have bits set, e.g. CmdSuspend in the command word, to either
suspend or stop the transmit/command unit.  This driver always flags the last
command with CmdSuspend, erases the CmdSuspend in the previous command, and
then issues a CU_RESUME.
Note: Watch out for the potential race condition here: imagine
	erasing the previous suspend
		the chip processes the previous command
		the chip processes the final command, and suspends
	doing the CU_RESUME
		the chip processes the next-yet-valid post-final-command.
So blindly sending a CU_RESUME is only safe if we do it immediately
after erasing the previous CmdSuspend, without the possibility of an
intervening delay.  Thus the resume command is always within the
interrupts-disabled region.  This is a timing dependence, but handling this
condition in a timing-independent way would considerably complicate the code.
Note: In previous generation Intel chips, restarting the command unit was a
notoriously slow process.  This is presumably no longer true.
IIIC. Receive structure

Because of the bus-master support on the Speedo3 this driver uses the new
SKBUFF_RX_COPYBREAK scheme, rather than a fixed intermediate receive buffer.
This scheme allocates full-sized skbuffs as receive buffers.  The value
SKBUFF_RX_COPYBREAK is used as the copying breakpoint: it is chosen to
trade off the memory wasted by passing the full-sized skbuff to the queue
layer for all frames vs. the copying cost of copying a frame to a
correctly-sized skbuff.

For small frames the copying cost is negligible (esp. considering that we
are pre-loading the cache with immediately useful header information), so we
allocate a new, minimally-sized skbuff.  For large frames the copying cost
is non-trivial, and the larger copy might flush the cache of useful data, so
we pass up the skbuff the packet was received into.
IIID. Synchronization
The driver runs as two independent, single-threaded flows of control.  One
is the send-packet routine, which enforces single-threaded use by the
dev->tbusy flag.  The other thread is the interrupt handler, which is single
threaded by the hardware and other software.

The send packet thread has partial control over the Tx ring and 'dev->tbusy'
flag.  It sets the tbusy flag whenever it's queuing a Tx packet.  If the next
queue slot is empty, it clears the tbusy flag when finished; otherwise it sets
the 'sp->tx_full' flag.

The interrupt handler has exclusive control over the Rx ring and records stats
from the Tx ring.  (The Tx-done interrupt can't be selectively turned off, so
we can't avoid the interrupt overhead by having the Tx routine reap the Tx
stats.)  After reaping the stats, it marks the queue entry as empty by setting
the 'base' to zero.  Iff the 'sp->tx_full' flag is set, it clears both the
tx_full and tbusy flags.
Thanks to Steve Williams of Intel for arranging the non-disclosure agreement
that stated that I could disclose the information.  But I still resent
having to sign an Intel NDA when I'm helping Intel sell their own product!
static const char *version =
"eepro100.c:v1.09j+LK1.0  Feb 13, 2000  Linux Kernel Team  http://cesdis.gsfc.nasa.gov/linux/drivers/eepro100.html\n";
/* A few user-configurable values that apply to all boards.
   First set is undocumented and spelled per Intel recommendations. */

static int congenb = 0;		/* Enable congestion control in the DP83840. */
static int txfifo = 8;		/* Tx FIFO threshold in 4 byte units, 0-15 */
static int rxfifo = 8;		/* Rx FIFO threshold, default 32 bytes. */
/* Tx/Rx DMA burst length, 0-127, 0 == no preemption, tx==128 -> disabled. */
static int txdmacount = 128;
static int rxdmacount = 0;

/* Set the copy breakpoint for the copy-only-tiny-buffer Rx method.
   Lower values use more memory, but are faster. */
static int rx_copybreak = 200;

/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
static int max_interrupt_work = 20;

/* Maximum number of multicast addresses to filter (vs. rx-all-multicast) */
static int multicast_filter_limit = 64;
/* 'options' is used to pass a transceiver override or full-duplex flag,
   e.g. "options=16" for FD, "options=32" for 100mbps-only. */
static int full_duplex[] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int options[] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int debug = -1;		/* The debug level */
/* A few values that may be tweaked. */
/* The ring sizes should be a power of two for efficiency. */
#define TX_RING_SIZE	32	/* Effectively 2 entries fewer. */
#define RX_RING_SIZE	32
/* Actual number of TX packets queued, must be <= TX_RING_SIZE-2. */
#define TX_QUEUE_LIMIT	12

/* Operational parameters that usually are not changed. */

/* Time in jiffies before concluding the transmitter is hung. */
#define TX_TIMEOUT	(2*HZ)
/* Size of a pre-allocated Rx buffer: <Ethernet MTU> + slack. */
#define PKT_BUF_SZ	1536
#if !defined(__OPTIMIZE__) || !defined(__KERNEL__)
#warning  You must compile this file with the correct options!
#warning  See the last lines of the source file.
#error You must compile this driver with "-O".
#endif
#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/malloc.h>
#include <linux/interrupt.h>
#include <linux/timer.h>
#include <linux/pci.h>
#include <linux/spinlock.h>
#include <linux/init.h>
#include <asm/bitops.h>

#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/delay.h>
MODULE_AUTHOR("Donald Becker <becker@cesdis.gsfc.nasa.gov>");
MODULE_DESCRIPTION("Intel i82557/i82558 PCI EtherExpressPro driver");
MODULE_PARM(debug, "i");
MODULE_PARM(options, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(full_duplex, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(congenb, "i");
MODULE_PARM(txfifo, "i");
MODULE_PARM(rxfifo, "i");
MODULE_PARM(txdmacount, "i");
MODULE_PARM(rxdmacount, "i");
MODULE_PARM(rx_copybreak, "i");
MODULE_PARM(max_interrupt_work, "i");
MODULE_PARM(multicast_filter_limit, "i");
#define EEPRO100_MODULE_NAME	"eepro100"
#define PFX			EEPRO100_MODULE_NAME ": "

#define RUN_AT(x) (jiffies + (x))
/* ACPI power states don't universally work (yet) */
#ifndef CONFIG_EEPRO100_PM
#undef pci_set_power_state
#define pci_set_power_state null_set_power_state
static inline int null_set_power_state(struct pci_dev *dev, int state)
{
	return 0;
}
#endif /* CONFIG_EEPRO100_PM */
/* compile-time switch to en/disable slow PIO */

int speedo_debug = 1;
	PCI_USES_IO=1, PCI_USES_MEM=2, PCI_USES_MASTER=4,
	PCI_ADDR0=0x10<<0, PCI_ADDR1=0x10<<1, PCI_ADDR2=0x10<<2, PCI_ADDR3=0x10<<3,
/* How to wait for the command unit to accept a command.
   Typically this takes 0 ticks. */
static inline void wait_for_cmd_done(long cmd_ioaddr)
{
	int wait = 20000;
	while (inb(cmd_ioaddr) && --wait >= 0);
}
/* Offsets to the various registers.
   All accesses need not be longword aligned. */
enum speedo_offsets {
	SCBStatus = 0, SCBCmd = 2,	/* Rx/Command Unit command and status. */
	SCBPointer = 4,			/* General purpose pointer. */
	SCBPort = 8,			/* Misc. commands and operands.  */
	SCBflash = 12, SCBeeprom = 14,	/* EEPROM and flash memory control. */
	SCBCtrlMDI = 16,		/* MDI interface control. */
	SCBEarlyRx = 20,		/* Early receive byte count. */
/* Commands that can be put in a command list entry. */
	CmdIASetup = 0x10000,
	CmdConfigure = 0x20000,
	CmdMulticastList = 0x30000,
	CmdDiagnose = 0x70000,
	CmdSuspend = 0x40000000,	/* Suspend after completion. */
	CmdIntr = 0x20000000,		/* Interrupt after completion. */
	CmdTxFlex = 0x00080000,		/* Use "Flexible mode" for CmdTx command. */
/* Do atomically if possible. */
#if defined(__i386__) || defined(__alpha__) || defined(__ia64__)
#define clear_suspend(cmd)	clear_bit(30, &(cmd)->cmd_status)
#elif defined(__powerpc__)
#define clear_suspend(cmd)	clear_bit(6, &(cmd)->cmd_status)
#else
# error You are probably in trouble: clear_suspend() MUST be atomic.
# define clear_suspend(cmd)	(cmd)->cmd_status &= cpu_to_le32(~CmdSuspend)
#endif
	SCBMaskCmdDone = 0x8000,
	SCBMaskRxDone = 0x4000,
	SCBMaskCmdIdle = 0x2000,
	SCBMaskRxSuspend = 0x1000,
	SCBMaskEarlyRx = 0x0800,
	SCBMaskFlowCtl = 0x0400,
	SCBTriggerIntr = 0x0200,
	/* The rest are Rx and Tx commands. */
	CUStatsAddr = 0x0040,
	CUShowStats = 0x0050,
	CUCmdBase = 0x0060,	/* CU Base address (set to zero). */
	CUDumpStats = 0x0070,	/* Dump then reset stats counters. */
	RxResumeNoResources = 0x0007,
	PortPartialReset = 2,
/* The Speedo3 Rx and Tx frame/buffer descriptors. */
struct descriptor {		/* A generic descriptor. */
	s32 cmd_status;		/* All command and status fields. */
	u32 link;		/* struct descriptor * */
	unsigned char params[0];
};
/* The Speedo3 Rx and Tx buffer descriptors. */
struct RxFD {			/* Receive frame descriptor. */
	u32 link;		/* struct RxFD * */
	u32 rx_buf_addr;	/* void * */
/* Selected elements of the Tx/RxFD.status word. */
	RxErrTooBig = 0x0200,
	RxErrSymbol = 0x0010,
	RxNoIAMatch = 0x0002,
	StatusComplete = 0x8000,
struct TxFD {			/* Transmit frame descriptor set. */
	u32 link;		/* void * */
	u32 tx_desc_addr;	/* Always points to the tx_buf_addr element. */
	s32 count;		/* # of TBD (=1), Tx start thresh., etc. */
	/* This constitutes two "TBD" entries -- we only use one. */
	u32 tx_buf_addr0;	/* void *, frame to be transmitted.  */
	s32 tx_buf_size0;	/* Length of Tx frame. */
	u32 tx_buf_addr1;	/* void *, frame to be transmitted.  */
	s32 tx_buf_size1;	/* Length of Tx frame. */
};
/* Elements of the dump_statistics block. This block must be lword aligned. */
struct speedo_stats {
	u32 rx_resource_errs;
/* Do not change the position (alignment) of the first few elements!
   The later elements are grouped for cache locality. */
struct speedo_private {
	struct TxFD *tx_ring;			/* Commands (usually CmdTxPacket). */
	struct RxFD *rx_ringp[RX_RING_SIZE];	/* Rx descriptor, used as ring. */
	/* The addresses of Tx/Rx-in-place packets/buffers. */
	struct sk_buff *tx_skbuff[TX_RING_SIZE];
	struct sk_buff *rx_skbuff[RX_RING_SIZE];
	dma_addr_t rx_ring_dma[RX_RING_SIZE];
	dma_addr_t tx_ring_dma;
	struct descriptor *last_cmd;		/* Last command sent. */
	unsigned int cur_tx, dirty_tx;		/* The ring entries to be free()ed. */
	spinlock_t lock;			/* Group with Tx control cache line. */
	u32 tx_threshold;			/* The value for txdesc.count. */
	struct RxFD *last_rxf;			/* Last command sent. */
	unsigned int cur_rx, dirty_rx;		/* The next free ring entry */
	long last_rx_time;			/* Last Rx, in jiffies, to handle Rx hang. */
	const char *product_name;
	struct enet_statistics stats;
	struct speedo_stats *lstats;
	unsigned char acpi_pwr;
	struct pci_dev *pdev;
	struct timer_list timer;		/* Media selection timer. */
	int mc_setup_frm_len;			/* The length of an allocated.. */
	struct descriptor *mc_setup_frm;	/* ..multicast setup frame. */
	int mc_setup_busy;			/* Avoid double-use of setup frame. */
	dma_addr_t mc_setup_dma;
	char rx_mode;				/* Current PROMISC/ALLMULTI setting. */
	unsigned int tx_full:1;			/* The Tx queue is full. */
	unsigned int full_duplex:1;		/* Full-duplex operation requested. */
	unsigned int flow_ctrl:1;		/* Use 802.3x flow control. */
	unsigned int rx_bug:1;			/* Work around receiver hang errata. */
	unsigned int rx_bug10:1;		/* Receiver might hang at 10mbps. */
	unsigned int rx_bug100:1;		/* Receiver might hang at 100mbps. */
	unsigned char default_port:8;		/* Last dev->if_port value. */
	unsigned short phy[2];			/* PHY media interfaces available. */
	unsigned short advertising;		/* Current PHY advertised caps. */
	unsigned short partner;			/* Link partner caps. */
};
/* The parameters for a CmdConfigure operation.
   There are so many options that it would be difficult to document each bit.
   We mostly use the default or recommended settings. */
const char i82557_config_cmd[22] = {
	22, 0x08, 0, 0,  0, 0, 0x32, 0x03,  1, /* 1=Use MII  0=Use AUI */
	0xf2, 0x48, 0, 0x40, 0xf2, 0x80,	/* 0x40=Force full-duplex */
const char i82558_config_cmd[22] = {
	22, 0x08, 0, 1,  0, 0, 0x22, 0x03,  1, /* 1=Use MII  0=Use AUI */
	0, 0x2E, 0,  0x60, 0x08, 0x88,
	0x68, 0, 0x40, 0xf2, 0xBD,		/* 0xBD->0xFD=Force full-duplex */
/* PHY media interface chips. */
static const char *phys[] = {
	"None", "i82553-A/B", "i82553-C", "i82503",
	"DP83840", "80c240", "80c24", "i82555",
	"unknown-8", "unknown-9", "DP83840A", "unknown-11",
	"unknown-12", "unknown-13", "unknown-14", "unknown-15", };
enum phy_chips { NonSuchPhy=0, I82553AB, I82553C, I82503, DP83840, S80C240,
		 S80C24, I82555, DP83840A=10, };
static const char is_mii[] = { 0, 1, 1, 0, 1, 1, 0, 1 };
#define EE_READ_CMD		(6)

static int do_eeprom_cmd(long ioaddr, int cmd, int cmd_len);
static int mdio_read(long ioaddr, int phy_id, int location);
static int mdio_write(long ioaddr, int phy_id, int location, int value);
static int speedo_open(struct net_device *dev);
static void speedo_resume(struct net_device *dev);
static void speedo_timer(unsigned long data);
static void speedo_init_rx_ring(struct net_device *dev);
static void speedo_tx_timeout(struct net_device *dev);
static int speedo_start_xmit(struct sk_buff *skb, struct net_device *dev);
static int speedo_rx(struct net_device *dev);
static void speedo_interrupt(int irq, void *dev_instance, struct pt_regs *regs);
static int speedo_close(struct net_device *dev);
static struct enet_statistics *speedo_get_stats(struct net_device *dev);
static int speedo_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static void set_rx_mode(struct net_device *dev);
#ifdef honor_default_port
/* Optional driver feature to allow forcing the transceiver setting. */
static int mii_ctrl[8] = { 0x3300, 0x3100, 0x0000, 0x0100,
			   0x2000, 0x2100, 0x0400, 0x3100};
static int __devinit eepro100_init_one (struct pci_dev *pdev,
					const struct pci_device_id *ent)
	struct net_device *dev;
	struct speedo_private *sp;
	unsigned char *tx_ring;
	dma_addr_t tx_ring_dma;
	int acpi_idle_state = 0, pm, irq;
	unsigned long ioaddr;
	static int card_idx = -1;
	static int did_version = 0;	/* Already printed version info. */
	if (speedo_debug > 0 && did_version++ == 0)
	ioaddr = pci_resource_start (pdev, 0);
	ioaddr = pci_resource_start (pdev, 1);
	if (!request_region (pci_resource_start (pdev, 1),
			     pci_resource_len (pdev, 1),
			     EEPRO100_MODULE_NAME)) {
		printk (KERN_ERR PFX "cannot reserve I/O ports\n");
	if (!request_mem_region (pci_resource_start (pdev, 0),
				 pci_resource_len (pdev, 0),
				 EEPRO100_MODULE_NAME)) {
		printk (KERN_ERR PFX "cannot reserve MMIO region\n");
		goto err_out_free_pio_region;
	ioaddr = (unsigned long) ioremap (pci_resource_start (pdev, 0),
					  pci_resource_len (pdev, 0));
		printk (KERN_ERR PFX "cannot remap MMIO region %lx @ %lx\n",
			pci_resource_len (pdev, 0),
			pci_resource_start (pdev, 0));
		goto err_out_free_mmio_region;
	tx_ring = pci_alloc_consistent(pdev, TX_RING_SIZE * sizeof(struct TxFD)
				       + sizeof(struct speedo_stats), &tx_ring_dma);
		printk(KERN_ERR PFX "Could not allocate DMA memory.\n");
		goto err_out_iounmap;

	dev = init_etherdev(NULL, sizeof(struct speedo_private));
		printk(KERN_ERR PFX "Could not allocate ethernet device.\n");
		goto err_out_free_tx_ring;

	if (dev->priv == NULL) {
		printk(KERN_ERR PFX "Could not allocate ethernet device private info.\n");
		goto err_out_free_netdev;
	if (dev->mem_start > 0)
		option = dev->mem_start;
	else if (card_idx >= 0 && options[card_idx] >= 0)
		option = options[card_idx];
	/* Save power state before pci_enable_device overwrites it. */
	pm = pci_find_capability(pdev, PCI_CAP_ID_PM);
	pci_read_config_word(pdev, pm + PCI_PM_CTRL, &pwr_command);
	acpi_idle_state = pwr_command & PCI_PM_CTRL_STATE_MASK;
	if (pci_enable_device (pdev)) {
		printk(KERN_ERR PFX "Could not enable PCI device\n");
		goto err_out_free_netdev;

	pci_set_master (pdev);
	/* Read the station address EEPROM before doing the reset.
	   Nominally this should even be done before accepting the device, but
	   then we wouldn't have a device name with which to report the error.
	   The size test is for 6 bit vs. 8 bit address serial EEPROMs. */
		int read_cmd, ee_size;

		if ((do_eeprom_cmd(ioaddr, EE_READ_CMD << 24, 27) & 0xffe0000)
			read_cmd = EE_READ_CMD << 24;
			read_cmd = EE_READ_CMD << 22;

		for (j = 0, i = 0; i < ee_size; i++) {
			u16 value = do_eeprom_cmd(ioaddr, read_cmd | (i << 16), 27);
			dev->dev_addr[j++] = value;
			dev->dev_addr[j++] = value >> 8;

			printk(KERN_WARNING "%s: Invalid EEPROM checksum %#4.4x, "
			       "check settings before activating this device!\n",
		/* Don't unregister_netdev(dev); as the EEPro may actually be
		   usable, especially if the MAC address is set later. */
	/* Reset the chip: stop Tx and Rx processes and clear counters.
	   This takes less than 10usec and will easily finish before the next
	   action. */
	outl(PortReset, ioaddr + SCBPort);

	if (eeprom[3] & 0x0100)
		product = "OEM i82557/i82558 10/100 Ethernet";
	else
		product = "Intel PCI EtherExpress Pro100";

	printk(KERN_INFO "%s: %s at %#3lx, ", dev->name, product, ioaddr);

	for (i = 0; i < 5; i++)
		printk("%2.2X:", dev->dev_addr[i]);
	printk("%2.2X, IRQ %d.\n", dev->dev_addr[i], irq);
	/* OK, this is pure kernel bloat.  I don't like it when other drivers
	   waste non-pageable kernel space to emit similar messages, but I need
	   them for bug reports. */
		const char *connectors[] = {" RJ45", " BNC", " AUI", " MII"};
		/* The self-test results must be paragraph aligned. */
		volatile s32 *self_test_results = (volatile s32 *)tx_ring;
		int boguscnt = 16000;	/* Timeout for self-test. */
		if (eeprom[3] & 0x03)
			printk(KERN_INFO "  Receiver lock-up bug exists -- enabling"
		printk(KERN_INFO "  Board assembly %4.4x%2.2x-%3.3d, Physical"
		       " connectors present:",
		       eeprom[8], eeprom[9]>>8, eeprom[9] & 0xff);
		for (i = 0; i < 4; i++)
			if (eeprom[5] & (1<<i))
				printk(connectors[i]);
		printk("\n" KERN_INFO "  Primary interface chip %s PHY #%d.\n",
		       phys[(eeprom[6]>>8)&15], eeprom[6] & 0x1f);
		if (eeprom[7] & 0x0700)
			printk(KERN_INFO "    Secondary interface chip %s.\n",
			       phys[(eeprom[7]>>8)&7]);
		if (((eeprom[6]>>8) & 0x3f) == DP83840
		    || ((eeprom[6]>>8) & 0x3f) == DP83840A) {
			int mdi_reg23 = mdio_read(ioaddr, eeprom[6] & 0x1f, 23) | 0x0422;
				printk(KERN_INFO "  DP83840 specific setup, setting register 23 to %4.4x.\n",
			mdio_write(ioaddr, eeprom[6] & 0x1f, 23, mdi_reg23);
		if ((option >= 0) && (option & 0x70)) {
			printk(KERN_INFO "  Forcing %dMbs %s-duplex operation.\n",
			       (option & 0x20 ? 100 : 10),
			       (option & 0x10 ? "full" : "half"));
			mdio_write(ioaddr, eeprom[6] & 0x1f, 0,
				   ((option & 0x20) ? 0x2000 : 0) |	/* 100mbps? */
				   ((option & 0x10) ? 0x0100 : 0));	/* Full duplex? */
	/* Perform a system self-test.  Use the tx_ring consistent DMA mapping for it. */
	self_test_results[0] = 0;
	self_test_results[1] = -1;
	outl(tx_ring_dma | PortSelfTest, ioaddr + SCBPort);
	} while (self_test_results[1] == -1 && --boguscnt >= 0);
	if (boguscnt < 0) {		/* Test optimized out. */
		printk(KERN_ERR "Self test failed, status %8.8x:\n"
		       KERN_ERR " Failure to initialize the i82557.\n"
		       KERN_ERR " Verify that the card is a bus-master"
		       self_test_results[1]);
		printk(KERN_INFO "  General self-test: %s.\n"
		       KERN_INFO "  Serial sub-system self-test: %s.\n"
		       KERN_INFO "  Internal registers self-test: %s.\n"
		       KERN_INFO "  ROM checksum self-test: %s (%#8.8x).\n",
		       self_test_results[1] & 0x1000 ? "failed" : "passed",
		       self_test_results[1] & 0x0020 ? "failed" : "passed",
		       self_test_results[1] & 0x0008 ? "failed" : "passed",
		       self_test_results[1] & 0x0004 ? "failed" : "passed",
		       self_test_results[0]);
#endif /* kernel_bloat */
	outl(PortReset, ioaddr + SCBPort);

	/* Return the chip to its original power state. */
	pci_set_power_state (pdev, acpi_idle_state);

	pdev->driver_data = dev;

	dev->base_addr = ioaddr;
	sp->acpi_pwr = acpi_idle_state;
	sp->tx_ring = (struct TxFD *)tx_ring;
	sp->tx_ring_dma = tx_ring_dma;
	sp->lstats = (struct speedo_stats *)(sp->tx_ring + TX_RING_SIZE);

	sp->full_duplex = option >= 0 && (option & 0x10) ? 1 : 0;
	if (full_duplex[card_idx] >= 0)
		sp->full_duplex = full_duplex[card_idx];

	sp->default_port = option >= 0 ? (option & 0x0f) : 0;

	sp->phy[0] = eeprom[6];
	sp->phy[1] = eeprom[7];
	sp->rx_bug = (eeprom[3] & 0x03) == 3 ? 0 : 1;
		printk(KERN_INFO "  Receiver lock-up workaround activated.\n");

	/* The Speedo-specific entries in the device structure. */
	dev->open = &speedo_open;
	dev->hard_start_xmit = &speedo_start_xmit;
	dev->tx_timeout = &speedo_tx_timeout;
	dev->watchdog_timeo = TX_TIMEOUT;
	dev->stop = &speedo_close;
	dev->get_stats = &speedo_get_stats;
	dev->set_multicast_list = &set_rx_mode;
	dev->do_ioctl = &speedo_ioctl;
	unregister_netdevice (dev);

err_out_free_tx_ring:
	pci_free_consistent(pdev, TX_RING_SIZE * sizeof(struct TxFD)
			    + sizeof(struct speedo_stats),
			    tx_ring, tx_ring_dma);
err_out_iounmap:
	iounmap ((void *)ioaddr);
err_out_free_mmio_region:
	release_mem_region (pci_resource_start (pdev, 0),
			    pci_resource_len (pdev, 0));
err_out_free_pio_region:
	release_region (pci_resource_start (pdev, 1),
			pci_resource_len (pdev, 1));
/* Serial EEPROM section.
   A "bit" grungy, but we work our way through bit-by-bit :->. */
/* EEPROM_Ctrl bits. */
#define EE_SHIFT_CLK	0x01	/* EEPROM shift clock. */
#define EE_CS		0x02	/* EEPROM chip select. */
#define EE_DATA_WRITE	0x04	/* EEPROM chip data in. */
#define EE_DATA_READ	0x08	/* EEPROM chip data out. */
#define EE_ENB		(0x4800 | EE_CS)
#define EE_WRITE_0	0x4802
#define EE_WRITE_1	0x4806
#define EE_OFFSET	SCBeeprom

/* Delay between EEPROM clock transitions.
   The code works with no delay on 33Mhz PCI. */
#define eeprom_delay()	inw(ee_addr)
static int do_eeprom_cmd(long ioaddr, int cmd, int cmd_len)
	long ee_addr = ioaddr + SCBeeprom;
	outw(EE_ENB | EE_SHIFT_CLK, ee_addr);

	/* Shift the command bits out. */
		short dataval = (cmd & (1 << cmd_len)) ? EE_WRITE_1 : EE_WRITE_0;
		outw(dataval, ee_addr);
		outw(dataval | EE_SHIFT_CLK, ee_addr);
		retval = (retval << 1) | ((inw(ee_addr) & EE_DATA_READ) ? 1 : 0);
	} while (--cmd_len >= 0);
	outw(EE_ENB, ee_addr);

	/* Terminate the EEPROM access. */
	outw(EE_ENB & ~EE_CS, ee_addr);
static int mdio_read(long ioaddr, int phy_id, int location)
	int val, boguscnt = 64*10;	/* <64 usec. to complete, typ 27 ticks */
	outl(0x08000000 | (location<<16) | (phy_id<<21), ioaddr + SCBCtrlMDI);
		val = inl(ioaddr + SCBCtrlMDI);
		if (--boguscnt < 0) {
			printk(KERN_ERR " mdio_read() timed out with val = %8.8x.\n", val);
	} while (! (val & 0x10000000));
static int mdio_write(long ioaddr, int phy_id, int location, int value)
	int val, boguscnt = 64*10;	/* <64 usec. to complete, typ 27 ticks */
	outl(0x04000000 | (location<<16) | (phy_id<<21) | value,
	     ioaddr + SCBCtrlMDI);
		val = inl(ioaddr + SCBCtrlMDI);
		if (--boguscnt < 0) {
			printk(KERN_ERR " mdio_write() timed out with val = %8.8x.\n", val);
	} while (! (val & 0x10000000));
static int
speedo_open(struct net_device *dev)
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;

	if (speedo_debug > 1)
		printk(KERN_DEBUG "%s: speedo_open() irq %d.\n", dev->name, dev->irq);

	pci_set_power_state(sp->pdev, 0);

	/* Set up the Tx queue early.. */

	spin_lock_init(&sp->lock);

	/* .. we can safely take handler calls during init. */
	if (request_irq(dev->irq, &speedo_interrupt, SA_SHIRQ, dev->name, dev)) {

	dev->if_port = sp->default_port;
	/* With some transceivers we must retrigger negotiation to reset
	   power-up errors. */
	if ((sp->phy[0] & 0x8000) == 0) {
		int phy_addr = sp->phy[0] & 0x1f;
		/* Use 0x3300 for restarting NWay, other values to force the xcvr. */
#ifdef honor_default_port
		mdio_write(ioaddr, phy_addr, 0, mii_ctrl[dev->default_port & 7]);
#else
		mdio_write(ioaddr, phy_addr, 0, 0x3300);
#endif
	}
	speedo_init_rx_ring(dev);

	/* Fire up the hardware. */

	netif_start_queue(dev);

	/* Setup the chip and configure the multicast list. */
	sp->mc_setup_frm = NULL;
	sp->mc_setup_frm_len = 0;
	sp->mc_setup_busy = 0;
	sp->rx_mode = -1;	/* Invalid -> always reset the mode. */
	sp->flow_ctrl = sp->partner = 0;
	if ((sp->phy[0] & 0x8000) == 0)
		sp->advertising = mdio_read(ioaddr, sp->phy[0] & 0x1f, 4);

	if (speedo_debug > 2) {
		printk(KERN_DEBUG "%s: Done speedo_open(), status %8.8x.\n",
		       dev->name, inw(ioaddr + SCBStatus));
	/* Set the timer.  The timer serves a dual purpose:
	   1) to monitor the media interface (e.g. link beat) and perhaps switch
	      to an alternate media type
	   2) to monitor Rx activity, and restart the Rx process if the receiver
	      hangs. */
	init_timer(&sp->timer);
	sp->timer.expires = RUN_AT((24*HZ)/10);	/* 2.4 sec. */
	sp->timer.data = (unsigned long)dev;
	sp->timer.function = &speedo_timer;	/* timer handler */
	add_timer(&sp->timer);

	/* No need to wait for the command unit to accept here. */
	if ((sp->phy[0] & 0x8000) == 0)
		mdio_read(ioaddr, sp->phy[0] & 0x1f, 0);
/* Start the chip hardware after a full reset. */
static void speedo_resume(struct net_device *dev)
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;

	outw(SCBMaskAll, ioaddr + SCBCmd);

	/* Start with a Tx threshold of 256 (0x..20.... 8 byte units). */
	sp->tx_threshold = 0x01208000;

	/* Set the segment registers to '0'. */
	wait_for_cmd_done(ioaddr + SCBCmd);
	outl(0, ioaddr + SCBPointer);
	outb(RxAddrLoad, ioaddr + SCBCmd);
	wait_for_cmd_done(ioaddr + SCBCmd);
	outb(CUCmdBase, ioaddr + SCBCmd);
	wait_for_cmd_done(ioaddr + SCBCmd);

	/* Load the statistics block and rx ring addresses. */
	outl(sp->tx_ring_dma + sizeof(struct TxFD) * TX_RING_SIZE, ioaddr + SCBPointer);
	outb(CUStatsAddr, ioaddr + SCBCmd);
	sp->lstats->done_marker = 0;
	wait_for_cmd_done(ioaddr + SCBCmd);

	outl(sp->rx_ring_dma[sp->cur_rx % RX_RING_SIZE],
	     ioaddr + SCBPointer);
	outb(RxStart, ioaddr + SCBCmd);
	wait_for_cmd_done(ioaddr + SCBCmd);

	outb(CUDumpStats, ioaddr + SCBCmd);

	/* Fill the first command with our physical address. */
	int entry = sp->cur_tx++ % TX_RING_SIZE;
	struct descriptor *cur_cmd = (struct descriptor *)&sp->tx_ring[entry];

	/* Avoid a bug(?!) here by marking the command already completed. */
	cur_cmd->cmd_status = cpu_to_le32((CmdSuspend | CmdIASetup) | 0xa000);
	cur_cmd->link =
		cpu_to_le32(sp->tx_ring_dma + (sp->cur_tx % TX_RING_SIZE)
			    * sizeof(struct TxFD));
	memcpy(cur_cmd->params, dev->dev_addr, 6);
	clear_suspend(sp->last_cmd);
	sp->last_cmd = cur_cmd;

	/* Start the chip's Tx process and unmask interrupts. */
	wait_for_cmd_done(ioaddr + SCBCmd);
	outl(sp->tx_ring_dma
	     + (sp->dirty_tx % TX_RING_SIZE) * sizeof(struct TxFD),
	     ioaddr + SCBPointer);
	outw(CUStart, ioaddr + SCBCmd);

	netif_start_queue (dev);
/* Media monitoring and control. */
static void speedo_timer(unsigned long data)
{
	struct net_device *dev = (struct net_device *)data;
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;
	int phy_num = sp->phy[0] & 0x1f;

	/* We have MII and lost link beat. */
	if ((sp->phy[0] & 0x8000) == 0) {
		int partner = mdio_read(ioaddr, phy_num, 5);
		if (partner != sp->partner) {
			int flow_ctrl = sp->advertising & partner & 0x0400 ? 1 : 0;
			sp->partner = partner;
			if (flow_ctrl != sp->flow_ctrl) {
				sp->flow_ctrl = flow_ctrl;
				sp->rx_mode = -1;	/* Trigger a reload. */
			}
			/* Clear sticky bit. */
			mdio_read(ioaddr, phy_num, 1);
			/* If link beat has returned... */
			if (mdio_read(ioaddr, phy_num, 1) & 0x0004)
				dev->flags |= IFF_RUNNING;
			else
				dev->flags &= ~IFF_RUNNING;
		}
	}

	if (speedo_debug > 3) {
		printk(KERN_DEBUG "%s: Media control tick, status %4.4x.\n",
			   dev->name, inw(ioaddr + SCBStatus));
	}
	if (sp->rx_mode < 0 ||
		(sp->rx_bug && jiffies - sp->last_rx_time > 2*HZ)) {
		/* We haven't received a packet in a Long Time.  We might have been
		   bitten by the receiver hang bug.  This can be cleared by sending
		   a set multicast list command. */
		set_rx_mode(dev);
	}
	/* We must continue to monitor the media. */
	sp->timer.expires = RUN_AT(2*HZ);			/* 2.0 sec. */
	add_timer(&sp->timer);
}
static void speedo_show_state(struct net_device *dev)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;
	int phy_num = sp->phy[0] & 0x1f;
	int i;

	/* Print a few items for debugging. */
	if (speedo_debug > 0) {
		printk(KERN_DEBUG "%s: Tx ring dump, Tx queue %d / %d:\n", dev->name,
			   sp->cur_tx, sp->dirty_tx);
		for (i = 0; i < TX_RING_SIZE; i++)
			printk(KERN_DEBUG "%s: %c%c%d %8.8x.\n", dev->name,
				   i == sp->dirty_tx % TX_RING_SIZE ? '*' : ' ',
				   i == sp->cur_tx % TX_RING_SIZE ? '=' : ' ',
				   i, sp->tx_ring[i].status);
	}

	printk(KERN_DEBUG "%s: Printing Rx ring (next to receive into %d).\n",
		   dev->name, sp->cur_rx);
	for (i = 0; i < RX_RING_SIZE; i++)
		printk(KERN_DEBUG "  Rx ring entry %d  %8.8x.\n",
			   i, (int)sp->rx_ringp[i]->status);

	for (i = 0; i < 16; i++) {
		printk(KERN_DEBUG "  PHY index %d register %d is %4.4x.\n",
			   phy_num, i, mdio_read(ioaddr, phy_num, i));
	}
}
/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
static void
speedo_init_rx_ring(struct net_device *dev)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	struct RxFD *rxf, *last_rxf = NULL;
	int i;

	sp->cur_rx = 0;

	for (i = 0; i < RX_RING_SIZE; i++) {
		struct sk_buff *skb;
		skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD));
		sp->rx_skbuff[i] = skb;
		if (skb == NULL)
			break;			/* OK.  Just initially short of Rx bufs. */
		skb->dev = dev;			/* Mark as being used by this device. */
		rxf = (struct RxFD *)skb->tail;
		sp->rx_ringp[i] = rxf;
		sp->rx_ring_dma[i] =
			pci_map_single(sp->pdev, rxf, PKT_BUF_SZ + sizeof(struct RxFD),
						   PCI_DMA_FROMDEVICE);
		skb_reserve(skb, sizeof(struct RxFD));
		if (last_rxf)
			last_rxf->link = cpu_to_le32(sp->rx_ring_dma[i]);
		last_rxf = rxf;
		rxf->status = cpu_to_le32(0x00000001);	/* '1' is flag value only. */
		rxf->link = 0;				/* None yet. */
		/* This field unused by i82557. */
		rxf->rx_buf_addr = 0xffffffff;
		rxf->count = cpu_to_le32(PKT_BUF_SZ << 16);
	}
	sp->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
	/* Mark the last entry as end-of-list. */
	last_rxf->status = cpu_to_le32(0xC0000002);	/* '2' is flag value only. */
	sp->last_rxf = last_rxf;
}
static void speedo_tx_timeout(struct net_device *dev)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;
	int status = inw(ioaddr + SCBStatus);

	/* Trigger a stats dump to give time before the reset. */
	speedo_get_stats(dev);

	printk(KERN_WARNING "%s: Transmit timed out: status %4.4x "
		   " %4.4x at %d/%d command %8.8x.\n",
		   dev->name, status, inw(ioaddr + SCBCmd),
		   sp->dirty_tx, sp->cur_tx,
		   sp->tx_ring[sp->dirty_tx % TX_RING_SIZE].status);
	speedo_show_state(dev);
	if ((status & 0x00C0) != 0x0080
		&& (status & 0x003C) == 0x0010) {
		/* Only the command unit has stopped. */
		printk(KERN_WARNING "%s: Trying to restart the transmitter...\n",
			   dev->name);
		outl(sp->tx_ring_dma
			 + (sp->dirty_tx % TX_RING_SIZE) * sizeof(struct TxFD),
			 ioaddr + SCBPointer);
		outw(CUStart, ioaddr + SCBCmd);
	} else {
		/* Reset the Tx and Rx units. */
		outl(PortReset, ioaddr + SCBPort);
		if (speedo_debug > 0)
			speedo_show_state(dev);
		udelay(10);
		speedo_resume(dev);
	}
	/* Reset the MII transceiver, suggested by Fred Young @ scalable.com. */
	if ((sp->phy[0] & 0x8000) == 0) {
		int phy_addr = sp->phy[0] & 0x1f;
		mdio_write(ioaddr, phy_addr, 0, 0x0400);
		mdio_write(ioaddr, phy_addr, 1, 0x0000);
		mdio_write(ioaddr, phy_addr, 4, 0x0000);
		mdio_write(ioaddr, phy_addr, 0, 0x8000);
#ifdef honor_default_port
		mdio_write(ioaddr, phy_addr, 0, mii_ctrl[dev->default_port & 7]);
#endif
	}
	sp->stats.tx_errors++;
	dev->trans_start = jiffies;
	netif_start_queue (dev);
}
static int
speedo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;
	int entry;

	/* Caution: the write order is important here, set the base address
	   with the "ownership" bits last. */

	{	/* Prevent interrupts from changing the Tx ring from underneath us. */
		unsigned long flags;

		spin_lock_irqsave(&sp->lock, flags);
		/* Calculate the Tx descriptor entry. */
		entry = sp->cur_tx++ % TX_RING_SIZE;

		sp->tx_skbuff[entry] = skb;
		/* Todo: be a little more clever about setting the interrupt bit. */
		sp->tx_ring[entry].status =
			cpu_to_le32(CmdSuspend | CmdTx | CmdTxFlex);
		sp->tx_ring[entry].link =
			cpu_to_le32(sp->tx_ring_dma
						+ (sp->cur_tx % TX_RING_SIZE)
						* sizeof(struct TxFD));
		sp->tx_ring[entry].tx_desc_addr =
			cpu_to_le32(sp->tx_ring_dma
						+ ((long)&sp->tx_ring[entry].tx_buf_addr0
						   - (long)sp->tx_ring));
		/* The data region is always in one buffer descriptor. */
		sp->tx_ring[entry].count = cpu_to_le32(sp->tx_threshold);
		sp->tx_ring[entry].tx_buf_addr0 =
			cpu_to_le32(pci_map_single(sp->pdev, skb->data,
									   skb->len, PCI_DMA_TODEVICE));
		sp->tx_ring[entry].tx_buf_size0 = cpu_to_le32(skb->len);
		/* Todo: perhaps leave the interrupt bit set if the Tx queue is more
		   than half full.  Argument against: we should be receiving packets
		   and scavenging the queue.  Argument for: if so, it shouldn't
		   matter. */
		/* Trigger the command unit resume. */
		{
			struct descriptor *last_cmd = sp->last_cmd;
			sp->last_cmd = (struct descriptor *)&sp->tx_ring[entry];
			last_cmd->cmd_status &= cpu_to_le32(~(CmdSuspend | CmdIntr));
		}
		if (sp->cur_tx - sp->dirty_tx >= TX_QUEUE_LIMIT) {
			sp->tx_full = 1;
			netif_stop_queue (dev);
		}
		spin_unlock_irqrestore(&sp->lock, flags);
	}

	wait_for_cmd_done(ioaddr + SCBCmd);
	outw(CUResume, ioaddr + SCBCmd);
	dev->trans_start = jiffies;

	return 0;
}
/* The interrupt handler does all of the Rx thread work and cleans up
   after the Tx thread. */
static void speedo_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
{
	struct net_device *dev = (struct net_device *)dev_instance;
	struct speedo_private *sp;
	long ioaddr, boguscnt = max_interrupt_work;
	unsigned short status;

#ifndef final_version
	if (dev == NULL) {
		printk(KERN_ERR "speedo_interrupt(): irq %d for unknown device.\n", irq);
		return;
	}
#endif

	ioaddr = dev->base_addr;
	sp = (struct speedo_private *)dev->priv;

	spin_lock (&sp->lock);

	do {
		status = inw(ioaddr + SCBStatus);
		/* Acknowledge all of the current interrupt sources ASAP. */
		outw(status & 0xfc00, ioaddr + SCBStatus);

		if (speedo_debug > 4)
			printk(KERN_DEBUG "%s: interrupt  status=%#4.4x.\n",
				   dev->name, status);

		if ((status & 0xfc00) == 0)
			break;

		if (status & 0x4000)	 /* Packet received. */
			speedo_rx(dev);

		if (status & 0x1000) {
			if ((status & 0x003c) == 0x0028) /* No more Rx buffers. */
				outw(RxResumeNoResources, ioaddr + SCBCmd);
			else if ((status & 0x003c) == 0x0008) { /* No resources (why?!) */
				/* No idea of what went wrong.  Restart the receiver. */
				outl(sp->rx_ring_dma[sp->cur_rx % RX_RING_SIZE],
					 ioaddr + SCBPointer);
				outw(RxStart, ioaddr + SCBCmd);
			}
			sp->stats.rx_errors++;
		}

		/* User interrupt, Command/Tx unit interrupt or CU not active. */
		if (status & 0xA400) {
			unsigned int dirty_tx;

			dirty_tx = sp->dirty_tx;
			while (sp->cur_tx - dirty_tx > 0) {
				int entry = dirty_tx % TX_RING_SIZE;
				int status = le32_to_cpu(sp->tx_ring[entry].status);

				if (speedo_debug > 5)
					printk(KERN_DEBUG " scavenge candidate %d status %4.4x.\n",
						   entry, status);
				if ((status & StatusComplete) == 0)
					break;			/* It still hasn't been processed. */
				if (status & TxUnderrun)
					if (sp->tx_threshold < 0x01e08000)
						sp->tx_threshold += 0x00040000;
				/* Free the original skb. */
				if (sp->tx_skbuff[entry]) {
					sp->stats.tx_packets++;	/* Count only user packets. */
					sp->stats.tx_bytes += sp->tx_skbuff[entry]->len;
					pci_unmap_single(sp->pdev,
						le32_to_cpu(sp->tx_ring[entry].tx_buf_addr0),
						sp->tx_skbuff[entry]->len, PCI_DMA_TODEVICE);
					dev_kfree_skb_irq(sp->tx_skbuff[entry]);
					sp->tx_skbuff[entry] = 0;
				} else if ((status & 0x70000) == CmdNOp) {
					if (sp->mc_setup_busy)
						pci_unmap_single(sp->pdev,
							sp->mc_setup_dma,
							sp->mc_setup_frm_len,
							PCI_DMA_TODEVICE);
					sp->mc_setup_busy = 0;
				}
				dirty_tx++;
			}

#ifndef final_version
			if (sp->cur_tx - dirty_tx > TX_RING_SIZE) {
				printk(KERN_ERR "out-of-sync dirty pointer, %d vs. %d,"
					   " full=%d.\n",
					   dirty_tx, sp->cur_tx, sp->tx_full);
				dirty_tx += TX_RING_SIZE;
			}
#endif

			sp->dirty_tx = dirty_tx;
			if (sp->tx_full
				&& sp->cur_tx - dirty_tx < TX_QUEUE_LIMIT - 1) {
				/* The ring is no longer full, clear tbusy. */
				sp->tx_full = 0;
				netif_wake_queue (dev);
			}
		}

	} while (--boguscnt > 0);

	if (boguscnt <= 0) {
		printk(KERN_ERR "%s: Too much work at interrupt, status=0x%4.4x.\n",
			   dev->name, status);
		/* Clear all interrupt sources. */
		outl(0xfc00, ioaddr + SCBStatus);
	}

	if (speedo_debug > 3)
		printk(KERN_DEBUG "%s: exiting interrupt, status=%#4.4x.\n",
			   dev->name, inw(ioaddr + SCBStatus));

	spin_unlock (&sp->lock);
}
static int
speedo_rx(struct net_device *dev)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	int entry = sp->cur_rx % RX_RING_SIZE;
	int status;
	int rx_work_limit = sp->dirty_rx + RX_RING_SIZE - sp->cur_rx;

	if (speedo_debug > 4)
		printk(KERN_DEBUG " In speedo_rx().\n");
	/* If we own the next entry, it's a new packet. Send it up. */
	while (sp->rx_ringp[entry] != NULL &&
		   (status = le32_to_cpu(sp->rx_ringp[entry]->status)) & RxComplete) {
		int pkt_len = le32_to_cpu(sp->rx_ringp[entry]->count) & 0x3fff;

		if (--rx_work_limit < 0)
			break;
		if (speedo_debug > 4)
			printk(KERN_DEBUG "  speedo_rx() status %8.8x len %d.\n", status,
				   pkt_len);
		if ((status & (RxErrTooBig|RxOK|0x0f90)) != RxOK) {
			if (status & RxErrTooBig)
				printk(KERN_ERR "%s: Ethernet frame overran the Rx buffer, "
					   "status %8.8x!\n", dev->name, status);
			else if ( ! (status & RxOK)) {
				/* There was a fatal error.  This *should* be impossible. */
				sp->stats.rx_errors++;
				printk(KERN_ERR "%s: Anomalous event in speedo_rx(), "
					   "status %8.8x.\n", dev->name, status);
			}
		} else {
			struct sk_buff *skb;

			/* Check if the packet is long enough to just accept without
			   copying to a properly sized skbuff. */
			if (pkt_len < rx_copybreak
				&& (skb = dev_alloc_skb(pkt_len + 2)) != 0) {
				skb->dev = dev;
				skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
				/* 'skb_put()' points to the start of sk_buff data area. */
				pci_dma_sync_single(sp->pdev, sp->rx_ring_dma[entry],
					PKT_BUF_SZ + sizeof(struct RxFD), PCI_DMA_FROMDEVICE);
#if 1 || USE_IP_CSUM
				/* Packet is in one chunk -- we can copy + cksum. */
				eth_copy_and_sum(skb, sp->rx_skbuff[entry]->tail, pkt_len, 0);
				skb_put(skb, pkt_len);
#else
				memcpy(skb_put(skb, pkt_len), sp->rx_skbuff[entry]->tail,
					   pkt_len);
#endif
			} else {
				void *temp;
				/* Pass up the already-filled skbuff. */
				skb = sp->rx_skbuff[entry];
				if (skb == NULL) {
					printk(KERN_ERR "%s: Inconsistent Rx descriptor chain.\n",
						   dev->name);
					break;
				}
				sp->rx_skbuff[entry] = NULL;
				temp = skb_put(skb, pkt_len);
				sp->rx_ringp[entry] = NULL;
				pci_unmap_single(sp->pdev, sp->rx_ring_dma[entry],
					PKT_BUF_SZ + sizeof(struct RxFD), PCI_DMA_FROMDEVICE);
			}
			skb->protocol = eth_type_trans(skb, dev);
			netif_rx(skb);
			sp->stats.rx_packets++;
			sp->stats.rx_bytes += pkt_len;
		}
		entry = (++sp->cur_rx) % RX_RING_SIZE;
	}

	/* Refill the Rx ring buffers. */
	for (; sp->cur_rx - sp->dirty_rx > 0; sp->dirty_rx++) {
		struct RxFD *rxf;
		entry = sp->dirty_rx % RX_RING_SIZE;
		if (sp->rx_skbuff[entry] == NULL) {
			struct sk_buff *skb;
			/* Get a fresh skbuff to replace the consumed one. */
			skb = dev_alloc_skb(PKT_BUF_SZ + sizeof(struct RxFD));
			sp->rx_skbuff[entry] = skb;
			if (skb == NULL) {
				sp->rx_ringp[entry] = NULL;
				break;			/* Better luck next time!  */
			}
			rxf = sp->rx_ringp[entry] = (struct RxFD *)skb->tail;
			sp->rx_ring_dma[entry] =
				pci_map_single(sp->pdev, rxf, PKT_BUF_SZ
							   + sizeof(struct RxFD), PCI_DMA_FROMDEVICE);
			skb->dev = dev;
			skb_reserve(skb, sizeof(struct RxFD));
			rxf->rx_buf_addr = 0xffffffff;
		} else {
			rxf = sp->rx_ringp[entry];
		}
		rxf->status = cpu_to_le32(0xC0000001);	/* '1' for driver use only. */
		rxf->link = 0;			/* None yet. */
		rxf->count = cpu_to_le32(PKT_BUF_SZ << 16);
		sp->last_rxf->link = cpu_to_le32(sp->rx_ring_dma[entry]);
		sp->last_rxf->status &= cpu_to_le32(~0xC0000000);
		sp->last_rxf = rxf;
	}

	sp->last_rx_time = jiffies;
	return 0;
}
static int
speedo_close(struct net_device *dev)
{
	long ioaddr = dev->base_addr;
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	int i;

	netif_stop_queue(dev);

	if (speedo_debug > 1)
		printk(KERN_DEBUG "%s: Shutting down ethercard, status was %4.4x.\n",
			   dev->name, inw(ioaddr + SCBStatus));

	/* Shut off the media monitoring timer. */
	del_timer(&sp->timer);

	/* Disable interrupts, and stop the chip's Rx process. */
	outw(SCBMaskAll, ioaddr + SCBCmd);
	outw(SCBMaskAll | RxAbort, ioaddr + SCBCmd);

	free_irq(dev->irq, dev);

	/* Free all the skbuffs in the Rx and Tx queues. */
	for (i = 0; i < RX_RING_SIZE; i++) {
		struct sk_buff *skb = sp->rx_skbuff[i];
		sp->rx_skbuff[i] = 0;
		/* Clear the Rx descriptors. */
		if (skb) {
			pci_unmap_single(sp->pdev,
				sp->rx_ring_dma[i],
				PKT_BUF_SZ + sizeof(struct RxFD), PCI_DMA_FROMDEVICE);
			dev_kfree_skb(skb);
		}
	}

	for (i = 0; i < TX_RING_SIZE; i++) {
		struct sk_buff *skb = sp->tx_skbuff[i];
		sp->tx_skbuff[i] = 0;
		/* Clear the Tx descriptors. */
		if (skb) {
			pci_unmap_single(sp->pdev,
				le32_to_cpu(sp->tx_ring[i].tx_buf_addr0),
				skb->len, PCI_DMA_TODEVICE);
			dev_kfree_skb(skb);
		}
	}
	if (sp->mc_setup_frm) {
		kfree(sp->mc_setup_frm);
		sp->mc_setup_frm_len = 0;
	}

	/* Print a few items for debugging. */
	if (speedo_debug > 3)
		speedo_show_state(dev);

	/* Alt: acpi_set_pwr_state(pci_bus, pci_devfn, sp->acpi_pwr); */
	pci_set_power_state (sp->pdev, 2);

	return 0;
}
/* The Speedo-3 has an especially awkward and unusable method of getting
   statistics out of the chip.  It takes an unpredictable length of time
   for the dump-stats command to complete.  To avoid a busy-wait loop we
   update the stats with the previous dump results, and then trigger a
   new dump.

   These problems are mitigated by the current /proc implementation, which
   calls this routine first to judge the output length, and then to emit the
   stats.

   Oh, and incoming frames are dropped while executing dump-stats!
*/
static struct enet_statistics *
speedo_get_stats(struct net_device *dev)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;

	/* Update only if the previous dump finished. */
	if (sp->lstats->done_marker == le32_to_cpu(0xA007)) {
		sp->stats.tx_aborted_errors += le32_to_cpu(sp->lstats->tx_coll16_errs);
		sp->stats.tx_window_errors += le32_to_cpu(sp->lstats->tx_late_colls);
		sp->stats.tx_fifo_errors += le32_to_cpu(sp->lstats->tx_underruns);
		sp->stats.tx_fifo_errors += le32_to_cpu(sp->lstats->tx_lost_carrier);
		/*sp->stats.tx_deferred += le32_to_cpu(sp->lstats->tx_deferred);*/
		sp->stats.collisions += le32_to_cpu(sp->lstats->tx_total_colls);
		sp->stats.rx_crc_errors += le32_to_cpu(sp->lstats->rx_crc_errs);
		sp->stats.rx_frame_errors += le32_to_cpu(sp->lstats->rx_align_errs);
		sp->stats.rx_over_errors += le32_to_cpu(sp->lstats->rx_resource_errs);
		sp->stats.rx_fifo_errors += le32_to_cpu(sp->lstats->rx_overrun_errs);
		sp->stats.rx_length_errors += le32_to_cpu(sp->lstats->rx_runt_errs);
		sp->lstats->done_marker = 0x0000;
		if (netif_running(dev)) {
			wait_for_cmd_done(ioaddr + SCBCmd);
			outw(CUDumpStats, ioaddr + SCBCmd);
		}
	}
	return &sp->stats;
}
static int speedo_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;
	u16 *data = (u16 *)&rq->ifr_data;
	int phy = sp->phy[0] & 0x1f;
	int saved_acpi;

	switch(cmd) {
	case SIOCDEVPRIVATE:		/* Get the address of the PHY in use. */
		data[0] = phy;
	case SIOCDEVPRIVATE+1:		/* Read the specified MII register. */
		saved_acpi = pci_set_power_state (sp->pdev, 0);
		data[3] = mdio_read (ioaddr, data[0], data[1]);
		pci_set_power_state (sp->pdev, saved_acpi);
		return 0;
	case SIOCDEVPRIVATE+2:		/* Write the specified MII register */
		if (!capable(CAP_NET_ADMIN))
			return -EPERM;
		saved_acpi = pci_set_power_state(sp->pdev, 0);
		mdio_write(ioaddr, data[0], data[1], data[2]);
		pci_set_power_state(sp->pdev, saved_acpi);
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}
/* Set or clear the multicast filter for this adaptor.
   This is very ugly with Intel chips -- we usually have to execute an
   entire configuration command, plus process a multicast command.
   This is complicated.  We must put a large configuration command and
   an arbitrarily-sized multicast command in the transmit list.
   To minimize the disruption -- the previous command might have already
   loaded the link -- we convert the current command block, normally a Tx
   command, into a no-op and link it to the new command.
*/
static void set_rx_mode(struct net_device *dev)
{
	struct speedo_private *sp = (struct speedo_private *)dev->priv;
	long ioaddr = dev->base_addr;
	struct descriptor *last_cmd;
	char new_rx_mode;
	unsigned long flags;
	int entry, i;

	if (dev->flags & IFF_PROMISC) {			/* Set promiscuous. */
		new_rx_mode = 3;
	} else if ((dev->flags & IFF_ALLMULTI) ||
			   dev->mc_count > multicast_filter_limit) {
		new_rx_mode = 1;
	} else
		new_rx_mode = 0;

	if (sp->cur_tx - sp->dirty_tx >= TX_RING_SIZE - 1) {
		/* The Tx ring is full -- don't add anything!  Presumably the new mode
		   is in config_cmd_data and will be added anyway. */
		sp->rx_mode = -1;
		return;
	}

	if (new_rx_mode != sp->rx_mode) {
		u8 *config_cmd_data;

		spin_lock_irqsave(&sp->lock, flags);
		entry = sp->cur_tx++ % TX_RING_SIZE;
		last_cmd = sp->last_cmd;
		sp->last_cmd = (struct descriptor *)&sp->tx_ring[entry];

		sp->tx_skbuff[entry] = 0;			/* Redundant. */
		sp->tx_ring[entry].status = cpu_to_le32(CmdSuspend | CmdConfigure);
		sp->tx_ring[entry].link =
			cpu_to_le32(sp->tx_ring_dma + ((entry + 1) % TX_RING_SIZE)
						* sizeof(struct TxFD));
		config_cmd_data = (void *)&sp->tx_ring[entry].tx_desc_addr;
		/* Construct a full CmdConfig frame. */
		memcpy(config_cmd_data, i82558_config_cmd, sizeof(i82558_config_cmd));
		config_cmd_data[1] = (txfifo << 4) | rxfifo;
		config_cmd_data[4] = rxdmacount;
		config_cmd_data[5] = txdmacount + 0x80;
		config_cmd_data[15] |= (new_rx_mode & 2) ? 1 : 0;
		config_cmd_data[19] = sp->flow_ctrl ? 0xBD : 0x80;
		config_cmd_data[19] |= sp->full_duplex ? 0x40 : 0;
		config_cmd_data[21] = (new_rx_mode & 1) ? 0x0D : 0x05;
		if (sp->phy[0] & 0x8000) {			/* Use the AUI port instead. */
			config_cmd_data[15] |= 0x80;
			config_cmd_data[8] = 0;
		}
		/* Trigger the command unit resume. */
		wait_for_cmd_done(ioaddr + SCBCmd);
		clear_suspend(last_cmd);
		outw(CUResume, ioaddr + SCBCmd);
		spin_unlock_irqrestore(&sp->lock, flags);
	}

	if (new_rx_mode == 0 && dev->mc_count < 4) {
		/* The simple case of 0-3 multicast list entries occurs often, and
		   fits within one tx_ring[] entry. */
		struct dev_mc_list *mclist;
		u16 *setup_params, *eaddrs;

		spin_lock_irqsave(&sp->lock, flags);
		entry = sp->cur_tx++ % TX_RING_SIZE;
		last_cmd = sp->last_cmd;
		sp->last_cmd = (struct descriptor *)&sp->tx_ring[entry];

		sp->tx_skbuff[entry] = 0;
		sp->tx_ring[entry].status = cpu_to_le32(CmdSuspend | CmdMulticastList);
		sp->tx_ring[entry].link =
			cpu_to_le32(sp->tx_ring_dma + ((entry + 1) % TX_RING_SIZE)
						* sizeof(struct TxFD));
		sp->tx_ring[entry].tx_desc_addr = 0; /* Really MC list count. */
		setup_params = (u16 *)&sp->tx_ring[entry].tx_desc_addr;
		*setup_params++ = cpu_to_le16(dev->mc_count*6);
		/* Fill in the multicast addresses. */
		for (i = 0, mclist = dev->mc_list; i < dev->mc_count;
			 i++, mclist = mclist->next) {
			eaddrs = (u16 *)mclist->dmi_addr;
			*setup_params++ = *eaddrs++;
			*setup_params++ = *eaddrs++;
			*setup_params++ = *eaddrs++;
		}

		wait_for_cmd_done(ioaddr + SCBCmd);
		clear_suspend(last_cmd);
		/* Immediately trigger the command unit resume. */
		outw(CUResume, ioaddr + SCBCmd);
		spin_unlock_irqrestore(&sp->lock, flags);
	} else if (new_rx_mode == 0) {
		struct dev_mc_list *mclist;
		u16 *setup_params, *eaddrs;
		struct descriptor *mc_setup_frm = sp->mc_setup_frm;

		/* If we are busy, someone might be quickly adding to the MC list.
		   Try again later when the list updates stop. */
		if (sp->mc_setup_busy) {
			sp->rx_mode = -1;
			return;
		}
		if (sp->mc_setup_frm_len < 10 + dev->mc_count*6
			|| sp->mc_setup_frm == NULL) {
			/* Allocate a full setup frame, 10bytes + <max addrs>. */
			if (sp->mc_setup_frm)
				kfree(sp->mc_setup_frm);
			sp->mc_setup_frm_len = 10 + multicast_filter_limit*6;
			sp->mc_setup_frm = kmalloc(sp->mc_setup_frm_len, GFP_ATOMIC);
			if (sp->mc_setup_frm == NULL) {
				printk(KERN_ERR "%s: Failed to allocate a setup frame.\n",
					   dev->name);
				sp->rx_mode = -1; /* We failed, try again. */
				return;
			}
		}
		mc_setup_frm = sp->mc_setup_frm;
		/* Fill the setup frame. */
		if (speedo_debug > 1)
			printk(KERN_DEBUG "%s: Constructing a setup frame at %p, "
				   "%d bytes.\n",
				   dev->name, sp->mc_setup_frm, sp->mc_setup_frm_len);
		mc_setup_frm->cmd_status =
			cpu_to_le32(CmdSuspend | CmdIntr | CmdMulticastList);
		/* Link set below. */
		setup_params = (u16 *)&mc_setup_frm->params;
		*setup_params++ = cpu_to_le16(dev->mc_count*6);
		/* Fill in the multicast addresses. */
		for (i = 0, mclist = dev->mc_list; i < dev->mc_count;
			 i++, mclist = mclist->next) {
			eaddrs = (u16 *)mclist->dmi_addr;
			*setup_params++ = *eaddrs++;
			*setup_params++ = *eaddrs++;
			*setup_params++ = *eaddrs++;
		}

		/* Disable interrupts while playing with the Tx Cmd list. */
		spin_lock_irqsave(&sp->lock, flags);
		entry = sp->cur_tx++ % TX_RING_SIZE;
		last_cmd = sp->last_cmd;
		sp->last_cmd = mc_setup_frm;
		sp->mc_setup_busy = 1;

		/* Change the command to a NoOp, pointing to the CmdMulti command. */
		sp->tx_skbuff[entry] = 0;
		sp->tx_ring[entry].status = cpu_to_le32(CmdNOp);
		sp->mc_setup_dma = pci_map_single(sp->pdev, mc_setup_frm,
						  sp->mc_setup_frm_len, PCI_DMA_TODEVICE);
		sp->tx_ring[entry].link = cpu_to_le32(sp->mc_setup_dma);

		/* Set the link in the setup frame. */
		mc_setup_frm->link =
			cpu_to_le32(sp->tx_ring_dma + ((entry + 1) % TX_RING_SIZE)
						* sizeof(struct TxFD));

		wait_for_cmd_done(ioaddr + SCBCmd);
		clear_suspend(last_cmd);
		/* Immediately trigger the command unit resume. */
		outw(CUResume, ioaddr + SCBCmd);
		spin_unlock_irqrestore(&sp->lock, flags);
		if (speedo_debug > 5)
			printk(" CmdMCSetup frame length %d in entry %d.\n",
				   dev->mc_count, entry);
	}

	sp->rx_mode = new_rx_mode;
}
static void eepro100_suspend (struct pci_dev *pdev)
{
	struct net_device *dev = pdev->driver_data;
	long ioaddr = dev->base_addr;

	netif_device_detach(dev);
	outl(PortPartialReset, ioaddr + SCBPort);

	/* XXX call pci_set_power_state ()? */
}

static void eepro100_resume (struct pci_dev *pdev)
{
	struct net_device *dev = pdev->driver_data;
	struct speedo_private *np = (struct speedo_private *)dev->priv;

	netif_device_attach(dev);
	speedo_resume(dev);
	np->rx_mode = -1;
	np->flow_ctrl = np->partner = 0;
	set_rx_mode(dev);
}
static void __devexit eepro100_remove_one (struct pci_dev *pdev)
{
	struct net_device *dev = pdev->driver_data;
	struct speedo_private *sp = (struct speedo_private *)dev->priv;

	unregister_netdev (dev);

	release_region (pci_resource_start (pdev, 1),
			pci_resource_len (pdev, 1));
	release_mem_region (pci_resource_start (pdev, 0),
			    pci_resource_len (pdev, 0));

	iounmap ((char *) dev->base_addr);

	pci_free_consistent(pdev, TX_RING_SIZE * sizeof(struct TxFD)
				+ sizeof(struct speedo_stats),
			    sp->tx_ring, sp->tx_ring_dma);
}
static struct pci_device_id eepro100_pci_tbl[] __devinitdata = {
	{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82557,
	  PCI_ANY_ID, PCI_ANY_ID, },
	{ 0, }
};
MODULE_DEVICE_TABLE (pci, eepro100_pci_tbl);

static struct pci_driver eepro100_driver = {
	name:		EEPRO100_MODULE_NAME,
	id_table:	eepro100_pci_tbl,
	probe:		eepro100_init_one,
	remove:		eepro100_remove_one,
	suspend:	eepro100_suspend,
	resume:		eepro100_resume,
};

static int __init eepro100_init_module(void)
{
	speedo_debug = debug;
	return pci_module_init (&eepro100_driver);
}

static void __exit eepro100_cleanup_module(void)
{
	pci_unregister_driver (&eepro100_driver);
}

module_init(eepro100_init_module);
module_exit(eepro100_cleanup_module);
/*
 * Local variables:
 *  compile-command: "gcc -DMODULE -D__KERNEL__ -Wall -Wstrict-prototypes -O6 -c eepro100.c `[ -f /usr/include/linux/modversions.h ] && echo -DMODVERSIONS` `[ -f ./pci-netif.h ] && echo -DHAS_PCI_NETIF`"
 *  SMP-compile-command: "gcc -D__SMP__ -DMODULE -D__KERNEL__ -Wall -Wstrict-prototypes -O6 -c eepro100.c `[ -f /usr/include/linux/modversions.h ] && echo -DMODVERSIONS`"
 *  simple-compile-command: "gcc -DMODULE -D__KERNEL__ -O6 -c eepro100.c"
 * End:
 */